content
stringlengths
86
994k
meta
stringlengths
288
619
Understanding the X Algorithm Your definitive guide to understand how X algorithm works. Very detailed. Understanding the X Algorithm 500,000,000 - the number of posts made on X in ONE day. 1,500 - the number of posts that you are shown on the feed. Talk about picking a needle from a haystack; that’s a 0.0003% filtering rate. X, like all Social Media Intermediaries (SMIs), earns money from advertising. Advertisers, however, invest in X only when the company guarantees extended user engagement for their ads. And that is the algorithm’s job: To keep you engaged as long as it can. And for that, it has to show content ‘relevant’ to you. The ‘Following’ tab is simpler: a chronological arrangement of posts from accounts you follow. The ‘For you’ tab: The algorithm chooses posts that you’d probably like to see. And that probability is the crux of the entire algorithm. Objective: find 1500 tweets that the user might be interested in. A Quick 5-min Tour: Demystifying the Complex in a Few Bites Scenario: Alex is a user on X. He follows a handful of users and is, in turn, followed by a few other users on X. He interacts with content by liking the posts, reposting them, replying and bookmarking his favourites. Alex also comes across profiles & content that he doesn’t like or even finds annoying and misleading. He mutes them, unfollows some, clicks the “show fewer from” option, or even blocks them. Problem Statement: Alex goes to the “For you” tab and needs to be shown 1500 posts that he might be interested in. The Process: Now X has a fair understanding of Alex through his: 1. Likes, reports, quotes, followings, bookmarks, post clicks, time spent on each post, profile clicks, and more information like Alex’s geolocation, the trending topics in that location, etc. It also understands the content he does not like - collectively, this information is known as ‘Data’. 2. X assigns a different weight to each of these data. For instance, when Alex likes a post from one user (say Bella) and reposts a post from another user (say Charlie), X understands that Alex has more affinity towards Bella’s posts than Charlie’s. This happens on a large scale with Alex and every user and every post that he can be associated with. A graphical representation of the same is made. Such kind of a relationship is called ‘Feature Formation’. 3. X gathers 1,500 posts from the features so formed. In the next step, X tries assigning a rank to each one of them. The data comes into the picture again. Based on Alex’s interaction so far on X and based on the current popularity of each post in the 1,500 posts, X determines the probability of Alex - liking it, commenting on it, reposting it, etc. Here again, each probability is assigned a different weight. For instance, if Bella makes two posts and Alex is more likely (probability) to like one and comment on the other, the later post will be shown first. This process is known as ‘Ranking’. 4. Now that the posts are ranked, they undergo refinement. For instance, Alex will be bored if he sees 10 posts of Bella, one below the other, when he follows 100 other users. Alex should also not see content that is legally banned in his country. There should also be a good mix of posts from accounts that Alex follows and ones he doesn’t at present. This refinement happens in the Filtering, Heuristics, and Product Features stage. 5. The final step is to mix these posts with Ads (that’s obviously how X makes money) and follow-recomendations. This is the mixing stage. The key areas where much of the work happens is in stage 2 & stage 3, i.e., Feature Formation and Ranking Objective: How can you, as a user hack these two stages in order to maximize reach? For that, we’ll look into these 2 stages with more diligence considering two users viz., you, a post author who likes to maximize reach, and Alex, a random user on X. Feature Formation: Essentially, what happens here is: Based on Alex's {favorites, reposts, quotes, followings, bookmarks, clicks, unfollows, spam reports, blocks, etc., } ------> Alex's relationship with other [users] and [posts] on X is formed (as a graphical representation) Imagine you are a post’s author. Your intention here should be that both your profile and your posts have a strong relationship with Alex (in fact, not only Alex but with everyone on X) so that your posts are shown more often to him (and also to other users on X). In this pursuit, your posts & profile is given a preference in this order: [ >1 = advantage and <1 = disadvantage] 1. If you are Blue verified and are followed (by Alex), = 4x higher than a random unverified post author on X 2. If you are Blue verified and are not followed (by Alex) = still get a 2x higher than a random unverified post author on X. 3. If Alex favourites your post (Likes + Bookmarks) = 30x 4. If Alex reposts your post= 20x 5. If Alex follows you = 4x 6. If your posts carry an image & audio = 2x 7. If your post is in line with current trend = 1.1x 8. If you’ve used an unknown language/word in your post = (0.005) deboosted by 20x 9. If your post contains offensive words = (0.1) deboosted by 10x 10. If your post has multiple hashtags = (0.6) deboosted by 1.1x 11. If Alex has unfollowed you recently, muted, blocked, or reported you as spam = NO relationship is formed 12. If your post is NSFW, abusive, or toxic = NO relationship is formed For a detailed metric along with relevant code snippets, refer to the detailed version. From the relationship so formed, the top 1,500 posts are picked. In this, For each of the 1,500 posts, based on its current {favorites, reposts, quotes, bookmarks, clicks, time spent on them, links, image & video, and even correctness of spellings} ------> 10 probabilities (for eg: the probability that Alex will repost it; the probability that Alex will comment on it; the probability that the post author will reply to Alex's comment, etc) are formed with each having a value between 0 & 1. Notice here that the same “Data” is used to arrive at the probability values. However, X has not open-sourced how it arrives at these values depending on the data. But the twist here is that all the 10 probabilities are not treated equally. Each carries a different weight. [ +ve value = advantage and -ve value = disadvantage] 1. The probability that Alex will favourite your Post = 0.5 2. The probability that Alex will Retweet your Post = 1 3. The probability that Alex replies to your Post = 13.5 4. The probability that Alex opens your profile and Likes or Replies to a Post = 12 5. The probability (for a video Post) that Alex will watch at least half of the video = 0.005 6. The probability that Alex replies to the Post and this reply is engaged by you = 75 7. The probability that Alex will click into the conversation of this Post and reply or Like a Post = 11 8. The probability that Alex will click into the conversation of your Post and stay there for at least 2 minutes = 10 9. The probability that Alex will react negatively (requesting "show less often" on your Post or profile, block or mute you) = -74 10. The probability that Alex will click Report your Post = -369 Illustration: Let us assume you have made a post ‘ABC’ on X and the algorithm has assigned a probability value of 0.5 for the first 8 scenarios and a value of 0.001 for scenarios 9 and 10 based on {your, ABCs & Alex}’s Data. Remember that ABC’s Data and your Data is also used apart from Alex’s; so even if Alex had not reacted negatively to your profile or posts in the past, if your profile or ABC had been reported negatively by other users so far, it will negatively impact your chances of getting recommended to Alex as well. Now, the score for the post ‘ABC’ will be: (0.5 * 0.5) + (1 * 0.5) + (13.5 * 0.5) + (12 * 0.5) + (0.005 * 0.5) + (75 * 0.5) + (11 * 0.5) + (10 * 0.5) + (-74 * 0.001) + (-369 * 0.001) = 61.0595 For a detailed metric along with relevant code snippets, refer to the detailed version. Based on the Feature Formation and ranking, a sorted list of 1,500 posts is made. They pass the filtering & heuristics and mixing stage before being finally displayed on Alex’s “For you” timeline. Nerdy Nirvana: A Deep-Dive Into The Algorithm Tools that X uses: Problem Statement: Alex & Bella are two users. They might be friends, relatives, neighbours, or just two strangers from different continents. How will X determine what their relationship is? And what is the relationship between them and their tweets? Further, what is their relationship with other users and posts on X? Before diving into the algorithm, it is vital to have an understanding of the following: • Features: they act as inputs for all of X’s algorithm [they can be likes, reposts, replies, clicks, mutes, blocks, user data, etc.] • Packages: These bots/programs take features as input and create a graph → cluster them → pick top posts from it → rank them → mix them with ads and serve the For you feed. • Graphing: A relationship graph is created among various nodes (entities) within X. A sub-part of the process, graphs are the outcome of the machine-learning packages on X and it is based on this the 1,500 posts are picked. A) Features: For the purpose of the above question, X uses multiple features (categorized into 3 headers). The data inputs such as Alex’s & Bella’s: Tweet Engagement Social graph User Data Likes/Favourites Following Blocks Reposts Circles Unfollows Replies ㅤ Mutes Clicks ㅤ Spam Reports Profile Clicks ㅤ Abuse Reports Picture/Video ㅤ Geolocation And many more are taken into consideration. There are 1000s of features on X. For a detailed list refer: The nodes in the above example, Alex & Bella, are users. In reality, they can be anything - from users, posts, clusters, media, and other things - making up billions of nodes. B) Packages: X has a gamut of packages (programs/bots). The most essential ones worth knowing are: Package Name Purpose Source RealGraph captures the relationship between users RealGraph GraphJet captures the relationship between users and posts GraphJet TwHIN captures the relationship between users, posts, and other entities TwHIN SimClusters groups users & tweets into a ‘basket’ (similar clusters) and captures the relationship between clusters SimClusters HeavyRanker gives a final ranking for the posts HeavyRanker Trust-and-safety-models filters abusive and NSFW content Trust-and-safety-models Visibility-filters filters content based on legal compliance, blocked accounts, etc Visibility-filters Home-mixer mixes posts with ads and follow-recommendations to be displayed on the feed Home-mixer Bonus: these packages are also used by X for other functionalities such as returning search results, follow-recommendation-service, etc. c) Graphing: Four key packages viz., RealGraph, GraphJet, TwHIN, and SimClusters get into work here and do the following: 1- Creates a graph between users. Here, the users are nodes and the edges that connect them are the interactions (can be a like, a repost, etc.), and each edge is directed (A→B is different from B→A) and carries a weight (indicating relationship strength) The relationship strength of Alex reposting Bella’s posts is 0.56. Similarly, there can be many interactions, with each interaction carrying a weight. 2- Creates a graph between users and posts A, B, C, D, and E are users and Pa, Pb, Pc, Pd, and Pe are posts made them respectively. 3- Associates users and posts into one cluster and creates a graph of clusters X has 145,000 clusters updated every 3 weeks. Each cluster varies in size from a few 100K to several millions. Users/posts can belong to multiple clusters simultaneously - for instance, Elon Musk can be associated with memers, businessmen, billionaires, rockets, and electric vehicles. To give a picture of how the features are used to create a graph, let us look at the Representation-Scorer package. It provides a scoring system for SimClusters For the Representation-Scorer: {favourites, reposts, followings, shares, replies, posts, etc } = positive signals { blocks, mutes, reports, see-fewer } = negative signals The Process: Now that you've understood the core tech behind it, let’s put them to use. X breaks down the whole process into three sequential jobs. 1. Candidate Sourcing: 50% (or) 750 tweets come from users you follow, a.k.a. In-network.50% (or) 750 tweets come from the most relatable posts but you don’t follow the authors yet, a.k.a. Out-of-network Social Graph: 15% of posts come from what people you follow have liked/engaged [OR] what other posts that people with similar interests to yours have engaged with. For instance, Alex follows Bella and Bella liked Charlie’s post. Now Charlie’s shows up on Alex’s feed Alex and Bella (not-followers) both liked Charlie’s post. Bella also liked David’s post. Now David’s shows up on Alex’s feed Embedding Spaces: This is the most complex aspect of X’s machine learning. It picks posts by way of clustering. Alex, Bella, Charlie & David (non-followers) have no engagement in common (likes, followers). But they both display similar interests - say, pop-music, soccer, news, politics or even Hollywood. They are likely to be shown posts by top influences/personalities, say, Emily, and posts that are trending in these clusters. Trust-and-Safety works in parallel with all of the above. 2. Ranking: Now that X has fetched 1,500 posts to be displayed, here comes the next challenge - in what order they are to be displayed to the user. X’s blog reads: The ranking mechanism takes into account thousands of features and outputs ten labels to give each Tweet a score, where each label represents the probability of an engagement This is done by the HeavyRanker package and the ranking formula is: score = sum_i { (weight of engagement i) * (probability of engagement i) } There are 2 values here: weights and probability. The probability can be between 0 & 1 → It is arrived by studying 1000s of features (refer to Feature section) of 1. The post itself, and 2. The user that the post is being recommended to Based on the 1000s of features, the model outputs 10 discreet labels and gives a value between 0 and 1 for each of the labels. The labels are: 1. The probability the user will favorite the Post. 2. The probability the user will Retweet the Post. 3. The probability the user replies to the Post. 4. The probability the user opens the Post author profile and Likes or replies to a Post. 5. The probability (for a video Post) that the user will watch at least half of the video. 6. The probability the user replies to the Post and this reply is engaged by the Post author. 7. The probability the user will click into the conversation of this Post and reply or Like a Post. 8. The probability the user will click into the conversation of this Post and stay there for at least 2 minutes. 9. The probability the user will react negatively (requesting "show less often" on the Post or author, block or mute the Post author). 10. The probability the user will click Report Post. PS: The algorithm on how X computes this probability based on the features (i.e., features → probability) has not been open-sourced yet, and is a point of major criticism. • Weights: For each of the above labels, there is a pre-defined weight. This can be found in the Configuration File: The probability that the user will Sentiment Weight Like the post Positive 0.5 Retweet the post Positive 1 Reply to the post Positive 13.5 Open the post author’s profile and like or reply to a post Positive 12 [Video] will watch at least half of the video Positive 0.005 Reply to the post and the tweet author will engage with the reply Positive 75 Click into the conversation of the post and engage with a reply Positive 11 Click into the conversation of the post and stay there for ≥ 2 mins Positive 10 Request “show less often”/block/mute the post author Negative -74 Report the Tweet Negative -369 Note: The weights are used to compute probability -i.e., they are assigned before such actions are taken on a post based on likelihood. Based on the above formula [ Σ ( weight x probability ) ], a score is computed for the 1,500 tweets and ranked. 3. Filters, Heuristics & Product Features: This is done to enhance product quality. • Visibility Filtering: Hides posts from blocked or muted accounts. The visibility-filters package does this. It also filters out posts from accounts held back by legal agencies, suspended accounts, etc. • Author Diversity: Prevent seeing too many posts from the same person in a row. • Content Balance: Show a mix of posts from in-network and out-of-network • Feedback-based Fatigue: Reduces visibility of posts that have received negative feedback from the user. • Social Proof: Ensures out-of-network posts are shown only if some of the accounts followed by a user have interacted with them. • Conversations: Provide more context to a Reply by threading it together with the original post. • Edited Tweets: Keeps the feed fresh by replacing outdated tweets with their updated versions. 4. Mixing & Serving: Ranked Posts + Ads + Follow-recommendations = "For you" feed The HomeMixer package finally mixes the outcome of the above with Ads and Follower-recommendation-service (one that recommends whom to follow in the feed) placing them in-between tweets, and serves you an exclusively curated “For you” Understanding Reach: Based on the above process, there are four levels where you can take advantage as a post author to maximize reach. While we take a look at each one of them, assume the perspective of a post-author They are: 1. At the Candidate Sourcing Level We’ve seen that RealGraph, GraphJet, TwHIN, and SimClusters try to make a graph of users, posts, clusters, etc. as nodes and establish a relationship between each node. You & your posts are some of the many billion nodes on Twitter. Objective: make your nodes connect with as many nodes as possible and increase the relationship strength (edge weights - 0.56 in the graphing example). The code for determining this is: private def getLinearRankingParams: ThriftRankingParams = { `type` = Some(ThriftScoringFunctionType.Linear), minScore = -1.0e100, retweetCountParams = Some(ThriftLinearFeatureRankingParams(weight = 20.0)), replyCountParams = Some(ThriftLinearFeatureRankingParams(weight = 1.0)), reputationParams = Some(ThriftLinearFeatureRankingParams(weight = 0.2)), luceneScoreParams = Some(ThriftLinearFeatureRankingParams(weight = 2.0)), textScoreParams = Some(ThriftLinearFeatureRankingParams(weight = 0.18)), urlParams = Some(ThriftLinearFeatureRankingParams(weight = 2.0)), isReplyParams = Some(ThriftLinearFeatureRankingParams(weight = 1.0)), favCountParams = Some(ThriftLinearFeatureRankingParams(weight = 30.0)), langEnglishUIBoost = 0.5, langEnglishTweetBoost = 0.2, langDefaultBoost = 0.02, unknownLanguageBoost = 0.05, offensiveBoost = 0.1, inTrustedCircleBoost = 3.0, multipleHashtagsOrTrendsBoost = 0.6, inDirectFollowBoost = 4.0, tweetHasTrendBoost = 1.1, selfTweetBoost = 2.0, tweetHasImageUrlBoost = 2.0, tweetHasVideoUrlBoost = 2.0, useUserLanguageInfo = true, ageDecayParams = Some(ThriftAgeDecayRankingParams(slope = 0.005, base = 1.0)) Since the graphing algorithms make use of data under three categories (Tweet Engagement, Follower Graph, and User Data), let us break the code into three parts and look at each one of them: • Follower Graph: this is simpler, i.e., who follows you Your post is boosted by to direct followers 4x to trusted circle 3x NOTE: The TweepCred package impacted the credibility of users based on the followers-to-following ratio. It has since been deprecated. • Tweet Engagement: this is the data about your posts. If your post it is boosted gets a Favourite (Like + Boomark) 30x gets a Repost 20x has an image 2x has a video 2x in line with the current trend 1.1x gets a reply 1x There are also severe de-boosts: If your post it is boosted (deboosted) by has unknown words/language 0.05 (-20x) has offensive words 0.1 (-10x) has multiple hashtags 0.6 (-1.7x) • User Data: this is the data about you as a user. While it collects information from a plethora of features like geolocation to business partner data, the essential ones are: val allEdgeFeatures: SCollection[Edge] = Seq(blocks, mutes, abuseReports, spamReports, unfollows))) Getting blocked, muted, reported as abuse or spam, and being unfollowed hurts (for up to the next 90 days from any of them happening) Unfollows are not as heavily penalized as the other 4 Also, if you are a Verified (Blue subscribed) user, you get a boost. object BlueVerifiedAuthorInNetworkMultiplierParam extends FSBoundedParam [Double] ( name = “home_mixer_blue_verified_author_in_network_multiplier", default = 4.0, min = 0.6, max = 100.0 object BlueVerifiedAuthorOutOfNetworkMultiplierParam extends FSBoundedParam [Double] ( name = “home_mixer_blue_verified_author_out_of_network_multiplier", default = 2.0, min = 0.6, max = 100.0 Your posts get a minimum of 2x boost across X and for your followers, it gets a 4x boost. • Escaping Trust & Safety Filters: The exact keywords and topics in the filters are dynamic and keep changing over time. Until recently, posts on Ukraine were de-boosted. As former head of Trust & Safety Yael Roth has said: Mr. Musk empowered my team to move more aggressively to remove hate speech across the platform — censoring more content, not less. Source: In short, there are 4 filters and you want to avoid each one of them: pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content. pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics. pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service. pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior. 2. At the Ranking Level a- Hacking the feature weight table not yet open-sourced by X b- Hacking the probability weight table The 10 probability scores as discussed above: The probability that the user will Sentiment Weight Like the post Positive 0.5 Retweet the post Positive 1 Reply to the post Positive 13.5 Open the post author’s profile and like or reply to a post Positive 12 [Video] will watch at least half of the video Positive 0.005 Reply to the post and the tweet author will engage with the reply Positive 75 Click into the conversation of the post and engage with a reply Positive 11 Click into the conversation of the post and stay there for ≥ 2 mins Positive 10 Request “show less often”/block/mute the post author Negative -74 Report the Tweet Negative -369 This is the key and most definitive aspect of the entire algorithm scored_tweets_model_weight_fav: 0.5 scored_tweets_model_weight_retweet: 1.0 scored_tweets_model_weight_reply: 13.5 scored_tweets_model_weight_good_profile_click: 12.0 scored_tweets_model_weight_video_playback50: 0.005 scored_tweets_model_weight_reply_engaged_by_author: 75.0 scored_tweets_model_weight_good_click: 11.0 scored_tweets_model_weight_good_click_v2: 10.0 scored_tweets_model_weight_negative_feedback_v2: -74.0 scored_tweets_model_weight_report: -369.0 Note: The weights are used to compute probability -i.e., they are assigned before such actions are taken on a post based on likelihood. 3. At the Filtering, Heuristics, and Product Features Level a- Timing your Posts: Older posts are less relevant and are hence shown less often. Posts on X have a half-life of 360 minutes [6 hours] - This means that a post’s relevance will decrease by 50% every 6 hours struct ThriftAgeDecayRankingParams { // the rate in which the score of older tweets decreases 1: optional double slope = 6.003 // the age, in minutes, where the age score of a tweet is half of the latest tweet 2: optional double halflife = 360.0 // the minimal age decay score a tweet will have 3: optional double base = 0.6 So, the first few engagements (likes, replies, and reposts) are critical. This factor has no hard & fast rule, and hence, experiment and time your posts when your target audiences are awake and are likely to be active on X. b- Ensure that your posts do not fall in the extremities of the legal spectrum At the HomeMixer Level [Optional] Experiment with paid promotions and place your posts as Ads on X. Actionable Intelligence Getting Blue verified gives you an immediate boost of 1. 4x to your followers object BlueVerifiedAuthorInNetworkMultiplierParam extends FSBoundedParam [Double] ( name = “home_mixer_blue_verified_author_in_network_multiplier", default = 4.0, min = 0.6, max = 100.0 1. and 2x across X object BlueVerifiedAuthorOutOfNetworkMultiplierParam extends FSBoundedParam [Double] ( name = “home_mixer_blue_verified_author_out_of_network_multiplier", default = 2.0, min = 0.6, max = 100.0 Avoid using multiple hashtags. 1. You get immediately deboosted by 1.7x (i.e., by a factor of 0.6) multipleHashtagsOrTrendsBoost = 0.6, Avoid posting content that is NSFW (both media and text), abusive, or possibly misinformation (content that you are not sure of) pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content. pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics. pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service. pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior. Offensive words also get deboosted by 10x (i.e., by a factor of 0.1) offensiveBoost = 0.1, Avoid posting content on sensitive topics. 1. The exact keywords and topics in the filters are dynamic and keep changing over time. Until recently, posts on Ukraine were de-boosted. Check the GitHub page of the code to keep yourself updated on what X considers sensitive. Stick to posting on one/a few topics. This way you develop a strong relationship with a cluster 1. The SimCluster package ensures that you have a strong relationship among members in a cluster. Source: SimCuster 2. Your post also gets a 3x boost to users in your Trusted Circle inTrustedCircleBoost = 3.0, 1. Replying to content that is out-of-network also carries a penalty of 10. Unless you are sure that you can offset this by getting high engagement on such replies, try avoiding it // subtractive penalty applied after boosts for out-of-network replies. 120: optional double OutOfNetworkReplyPenalty = 10.0 1. Try to use captive and relevant images and videos in your posts. They boost your post by 2x. tweetHasImageUrlBoost = 2.0, tweetHasVideoUrlBoost = 2.0, 1. Avoid making spelling errors or posting in unknown languages (you can try representing them through images instead). They get deboosted by 20x (i.e., by a factor of 0.05) unknownLanguageBoost = 0.05, 1. Write long-form content or threads where the user might spend more than 2 minutes reading it "recap.engagement.is_good_clicked_convo_desc_v2": The probability the user will click into the conversation of this Tweet and stay there for at least 2 minutes. scored_tweets_model_weight_good_click_v2: 10.0 1. Reply to people who comment on your posts. "recap.engagement.is_replied_reply_engaged_by_author": The probability the user replies to the Tweet and this reply is engaged by the Tweet author. scored_tweets_model_weight_reply_engaged_by_author: 75.0 1. Sometimes, even having a captivating profile picture that might kindle a user to click your profile might help. If the user does so, ensure that they like your posts by pinning your top posts and by making sure your highlights tab is updated with your most interesting posts. "recap.engagement.is_profile_clicked_and_profile_engaged": The probability the user opens the Tweet author profile and Likes or replies to a Tweet. scored_tweets_model_weight_good_profile_click: 12.0 Challenging Choices 1. Writing good content: ultimately everything boils down to this. It is the central crux that holds all the other cards together. Every like, repost, comment, and bookmark that your posts get has a major boost (both at the Feature Formation stage and the Ranking stage). retweetCountParams = Some(ThriftLinearFeatureRankingParams(weight = 20.0)), replyCountParams = Some(ThriftLinearFeatureRankingParams(weight = 1.0)), isReplyParams = Some(ThriftLinearFeatureRankingParams(weight = 1.0)), favCountParams = Some(ThriftLinearFeatureRankingParams(weight = 30.0)), "recap.engagement.is_favorited": The probability the user will favorite the Tweet. "recap.engagement.is_favorited": 0.5 "recap.engagement.is_good_clicked_convo_desc_favorited_or_replied": The probability the user will click into the conversation of this Tweet and reply or Like a Tweet. "recap.engagement.is_good_clicked_convo_desc_favorited_or_replied": 11* "recap.engagement.is_replied": The probability the user replies to the Tweet. "recap.engagement.is_replied": 27 "recap.engagement.is_retweeted": The probability the user will ReTweet the Tweet. "recap.engagement.is_retweeted": 1 //parameters used by Representation-scorer 1: optional double fav1dLast10Max // max score from last 10 faves in the last 1 day 2: optional double fav1dLast10Avg // avg score from last 10 faves in the last 1 day 3: optional double fav7dLast10Max // max score from last 10 faves in the last 7 days 4: optional double fav7dLast10Avg // avg score from last 10 faves in the last 7 days 5: optional double retweet1dLast10Max // max score from last 10 retweets in the last 1 days 6: optional double retweet1dLast10Avg // avg score from last 10 retweets in the last 1 days 7: optional double retweet7dLast10Max // max score from last 10 retweets in the last 7 days 8: optional double retweet7dLast10Avg // avg score from last 10 retweets in the last 7 days 9: optional double follow7dLast10Max // max score from the last 10 follows in the last 7 days 10: optional double follow7dLast10Avg // avg score from the last 10 follows in the last 7 days 11: optional double follow30dLast10Max // max score from the last 10 follows in the last 30 days 12: optional double follow30dLast10Avg // avg score from the last 10 follows in the last 30 days 13: optional double share1dLast10Max // max score from last 10 shares in the last 1 day 14: optional double share1dLast10Avg // avg score from last 10 shares in the last 1 day 15: optional double share7dLast10Max // max score from last 10 shares in the last 7 days 16: optional double share7dLast10Avg // avg score from last 10 shares in the last 7 days 17: optional double reply1dLast10Max // max score from last 10 replies in the last 1 day 18: optional double reply1dLast10Avg // avg score from last 10 replies in the last 1 day 19: optional double reply7dLast10Max // max score from last 10 replies in the last 7 days 20: optional double reply7dLast10Avg // avg score from last 10 replies in the last 7 days 1. Write on topics that are in trend ( but limit the use of hashtags). Apart from the hashtags, also use keywords that are trending. Your content gets boosted by a factor of 1.1x tweetHasTrendBoost = 1.1 1. Tweet at a time when your target audience is likely to be active on X. The half-life of content on X is 5 hours. This means that every 6 hours, the chance of your post being recommended to others is reduced by half. Experiment and arrive at an optimum time through trial & error. struct ThriftAgeDecayRankingParams { // the rate in which the score of older tweets decreases 1: optional double slope = 6.003 // the age, in minutes, where the age score of a tweet is half of the latest tweet 2: optional double halflife = 360.0 // the minimal age decay score a tweet will have 3: optional double base = 0.6 Avoiding Pitfalls Creating posts with unconventional opinions, that run counter to public acceptance may attract significant attention in terms of clicks and replies. However, it can also result in numerous reports, mutes, and unfollows. Avoid getting muted, reported, unfollowed, and blocked. val allEdgeFeatures: SCollection[Edge] = Seq(blocks, mutes, abuseReports, spamReports, unfollows))) val allEdgeFeatures: SCollection[Edge] = getEdgeFeature(SCollection.unionAll(Seq(blocks, mutes, abuseReports, spamReports, unfollows))) val negativeFeatures: SCollection[KeyVal[Long, UserSession]] = .map { case (srcId, pqEdges) => val topKNeg = userId = Some(srcId), realGraphFeaturesTest = //parameters from representation scorer // 2001 - 3000 Negative Signals // Block Series 2001: optional double block1dLast10Avg 2002: optional double block1dLast10Max 2003: optional double block7dLast10Avg 2004: optional double block7dLast10Max 2005: optional double block30dLast10Avg 2006: optional double block30dLast10Max // Mute Series 2101: optional double mute1dLast10Avg 2102: optional double mute1dLast10Max 2103: optional double mute7dLast10Avg 2104: optional double mute7dLast10Max 2105: optional double mute30dLast10Avg 2106: optional double mute30dLast10Max // Report Series 2201: optional double report1dLast10Avg 2202: optional double report1dLast10Max 2203: optional double report7dLast10Avg 2204: optional double report7dLast10Max 2205: optional double report30dLast10Avg 2206: optional double report30dLast10Max // Dontlike 2301: optional double dontlike1dLast10Avg 2302: optional double dontlike1dLast10Max 2303: optional double dontlike7dLast10Avg 2304: optional double dontlike7dLast10Max 2305: optional double dontlike30dLast10Avg 2306: optional double dontlike30dLast10Max // SeeFewer 2401: optional double seeFewer1dLast10Avg 2402: optional double seeFewer1dLast10Max 2403: optional double seeFewer7dLast10Avg 2404: optional double seeFewer7dLast10Max 2405: optional double seeFewer30dLast10Avg 2406: optional double seeFewer30dLast10Max
{"url":"https://tweethunter.io/blog/understanding-the-x-algorithm","timestamp":"2024-11-02T14:32:17Z","content_type":"text/html","content_length":"1049492","record_id":"<urn:uuid:2fd00dcc-94a2-4480-8617-a4c7696f519e>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00312.warc.gz"}
Mail Archives: djgpp/1999/04/13/18:36:51 delorie.com/archives/browse.cgi search Mail Archives: djgpp/1999/04/13/18:36:51 From: no_mail_from AT use DOT net (Brian Chapman) Newsgroups: comp.os.msdos.djgpp Subject: Re: Qustion on the casting of float's to int's... Message-ID: <MPG.117d7c0c1fff4e9c989681@news.southwind.net> References: <199904131434 DOT QAA03789 AT acp3bf DOT physik DOT rwth-aachen DOT de> X-Newsreader: MicroPlanet Gravity v2.10 Lines: 35 NNTP-Posting-Host: 206.53.100.87 X-Trace: news15.ispnews.com 924041137 206.53.100.87 (Tue, 13 Apr 1999 18:05:37 EDT) NNTP-Posting-Date: Tue, 13 Apr 1999 18:05:37 EDT Date: Tue, 13 Apr 1999 17:07:17 -0500 To: djgpp AT delorie DOT com DJ-Gateway: from newsgroup comp.os.msdos.djgpp Reply-To: djgpp AT delorie DOT com Ahh! I see. I'll be sure to watch my step from now on. Thank you very much! :-) Previously, broeker AT physik DOT rwth-aachen DOT de says... > You've just encountered a pitfall that surprised many beginners, > before they learn how floating point numbers actually work, and what > their limitations are. The basic problem is that '0.2' is not exactly > representable by *any* floating point value, in a PC. Instead, you'll > get something like 0.1999999.... or 0.20000001... In the case at > hand, it's the first kind. > Adding up several of these values, you'll see that you won't ever > reach 1.0 exactly. Instead, you get 0.999999999. Because the FPU has > some hidden extra precision, the problem actually does not happen > around a sum of 1.0, but only when it's reached 5.0 or so. > This whole issue is summed up in a nice quote: > In computers, 10 times 0.1 is hardly ever 1.0 > > i'm still new to C, but what is the compiler (gcc 2.8.1) doing??? > > why does (int)2.0=2 yet (int)8.0=7??? I have always assumed in the > > back of my mind that typecasting floats to ints just rounded down to > > the nearest whole number (ie: truncated the fraction). > Typecasting does that. But *printing* doesn't truncate, it really > rounds. The real floating point value is 7.9999999.... which will be > printed as '8.0', but the truncated value is 7. > -- > Hans-Bernhard Broeker (broeker AT physik DOT rwth-aachen DOT de) > Even if all the snow were burnt, ashes would remain. webmaster delorie software privacy Copyright © 2019 by DJ Delorie Updated Jul 2019
{"url":"https://delorie.com/archives/browse.cgi?p=djgpp/1999/04/13/18:36:51","timestamp":"2024-11-13T14:09:21Z","content_type":"text/html","content_length":"6489","record_id":"<urn:uuid:ec9c8c6b-625d-4773-80ed-13dd7a3978c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00684.warc.gz"}
Essentials of Statistics: Exercises After reading the theory book about Statistics it is time to test your knowledge to make sure that you are well prepared for your exam. This exercise book follows the same structure as the theory book about Statistics. Answer questions about for example probability theory, random variables, expected value and the law of large numbers. All the exercises are followed by their solutions.
{"url":"https://bookboon.com/nb/statistics-exercise-book-ebook","timestamp":"2024-11-14T11:22:23Z","content_type":"text/html","content_length":"94306","record_id":"<urn:uuid:75aed0a4-b8a4-406b-8bf4-4f455849bdb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00673.warc.gz"}
Mathematics Rising Mental Magnitudes By Joselle, on April 10th, 2013 I am increasingly fascinated by the mathematics of fundamental cognitive processes – like creatures finding their way to and from significant locations, or foraging for food, or foraging with the eyes, or comprehending the duration of an event. I’m excited by the fact that there are cognitive neuroscientists that have become focused on the architecture of these processes in particular. Their work seems to always suggest that our formal mathematical systems are growing out of these very same processes. I read today Charles Gallistel’s contribution to Dehaene and Brannon’s Space, Time and Number in the Brain. Gallistel has a link to the pdf version on the Rutgers website. Gallistel is concerned with the abstractions of space, time, number, rate and probability that have been experimentally studied and found to be playing a fundamental role in the lives of nonverbal animals and preverbal humans. His premise is this: the brain’s ability to represent these foundational abstractions depends on a still more basic ability, the ability to store, retrieve and arithmetically manipulate signed magnitudes. He makes a point of distinguishing between magnitude and our symbolic numbers. Magnitudes are what he calls ‘computable numbers’ a quantity that “can be subjected to arithmetic manipulation in a physically realized system.” Being a bit pressed for time, I’ll just reproduce some of his observations. The representation of space, he says: requires summing successive displacements in an allocentric (other-centered) framework, a framework in which the coordinates of locations other than that of the animal do not change as the animal moves. By summing successive small displacements (small changes in its location), the animal maintains a representation of its location in the allocentric framework. This representation makes it possible to record locations of places and objects of interest as it encounters them, thereby constructing a cognitive map of its experienced environment. Computational considerations make it likely that this representation is Cartesian and allocentric. But in order to have a directive function, these representations of experienced locations must be vectors – ordered sets of magnitudes. And the organism accomplishes arithmetic with them. A fundamental operation in navigation is computing courses to be run… Assuming that the vectors are Cartesian, the range and bearing are the modulus and angle of the difference between the destination vector and the current-location vector. This difference vector is the element-by-element differences between the two vectors. Thus, the representation of spatial location depends on the arithmetic processing of magnitudes. Gallistel challenges the notion that time-interval experience is generated by an interval-timing mechanism, pointing out that There is, however, a conceptual problem with this supposition: The ability to record the first occurrence of an interesting temporal interval would seem to require the starting of an infinite number of timers for each of the very large number of experienced events that might turn out to be “the start of something interesting”–or not Instead, he proposes that temporal intervals are derived from the representation of temporal locations, just as displacements (directed spatial intervals) are derived from differences in spatial locations. This, in turn leads to arithmetic operations on temporal vectors (see Gallistel, 1990, for details). Rats represent rates (numbers of events divided by the durations of the intervals over which they have been experienced) and combine them multiplicatively with reward magnitudes [9]. Both mice and adult human subjects represent the uncertainty in their estimates of elapsing durations (a probability distribution defined over a continuous variable) and discrete probability (the proportion between the number of trials of one kind and the number of trials of a different kind) can combine these two representations multiplicatively to estimate an optimal target time [1]. I found one of the most interesting parts of this discussion to be the one on closure. Closure is an important constraint on the mechanism that implement arithmetic processing in the brain. Closure means that there are no inputs that crash the machine. Closure under subtraction requires that magnitudes have sign (direction), because otherwise entering a subtrahend greater than the minuend would crash the machine; it would not be able to produce a valid output. Rats learn directed (signed) temporal differences; they distinguish between whether the reward comes before or after the signal and they can integrate one directed difference with another [11]. I find this particularly interesting because it took us some time to find signed differences in our symbolic system of subtraction or even to recognize the significance of closure. I’ll end this with his brief conclusion. Some of the details of these studies can be found in the linked pdf. It seems likely that magnitudes (computable numbers) are used to represent the foundational abstractions of space, time, number, rate, and probability. The growing evidence for the arithmetic processing of the magnitudes in these different domains, together with the “unreasonable” efficacy of representations founded on arithmetic, suggests that there must be neural mechanisms that implement the arithmetic operations. Because the magnitudes in the different domains are interrelated–in for example, the representation of rate (numerosity divided by duration) or spatial density (numerosity divided by area)–it seems plausible to assume that the same mechanism is used to process the magnitudes underlying the representation of space, time and number. It should be possible to identify these neural mechanisms by their distinctive combinatorial signal processing in combination with the analytic constraint that numerosity 1 be represented by the multiplicative identity symbol is the system of symbols for representing magnitude. Recent Comments • John Golden on Other kinds of coding • Vincent Migliore on Infinity is not the problem • Joselle on Infinity is not the problem • Bruce E. Camber on Infinity is not the problem • Joselle on The monad, autopoiesis and Christmas Recent Comments • John Golden on Other kinds of coding • Vincent Migliore on Infinity is not the problem • Joselle on Infinity is not the problem • Bruce E. Camber on Infinity is not the problem • Joselle on The monad, autopoiesis and Christmas April 10th, 2013 | Tags: animal cognition, biology, brain, cognitive psychology, mathematics, neuroscience, number, number sense | Category: animal cognition, biology, cognitive science, mathematics, neuroscience | Comments are closed
{"url":"https://mathrising.com/?p=977","timestamp":"2024-11-10T13:51:42Z","content_type":"application/xhtml+xml","content_length":"133665","record_id":"<urn:uuid:363735a9-6080-4cc1-85e0-9ba3ec74abd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00699.warc.gz"}
Square Simple Decimals When we square a number that is less than 1, the answer is smaller than the original number. Work out (0.3)^2 We must calculate 0.3 x 0.3 We know that 3 x 3 = 9 But now we must divide each of the 3s by 10 and therefore we must divide the 9 by 100. 9 ÷ 100 = 0.09 So, 0.3^2 = 0.09 Similarly, 0.7^2 = 0.49 and 0.03^2 = 0.0009 The easiest way to calculate these tricky problems is to count decimal places. The rule is that you should always have the same number of decimal places in the answer as in the calculation. So, in 0.3^2 we have 0.3 x 0.3 which has a total of two decimal places. The answer of 0.09 also has two decimal places. In 0.03^2 we have 0.03 x 0.03 which has four decimal places. The answer of 0.0009 also has four decimal places. Let's try some questions now - but don't forget to count those decimal places!
{"url":"https://www.edplace.com/worksheet_info/maths/keystage3/year7/topic/951/2167/squaring-decimals","timestamp":"2024-11-03T13:53:04Z","content_type":"text/html","content_length":"81448","record_id":"<urn:uuid:5037f1e8-c69f-4771-b860-27306f86f408>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00129.warc.gz"}
On the deleted product criterion for embeddability of manifolds in $ \Bbb {R}^m $ | EMS Press On the deleted product criterion for embeddability of manifolds in • A. Skopenkov Moscow State University, Russian Federation For a space N let . Let act on Ñ and on by exchanging factors and antipodes, respectively. For an embedding define the map by .¶Theorem. Let and N be a closed PL n-manifold.¶a) If , N is d-connected and there exists an equivariant map , then N is PL-embeddable in .¶b) If , , N is (d + 1)-connected and are PL-embeddings such that are equivariantly homotopic, then f, g are PL-isotopic.¶Corollary. a) Every closed 6-manifold N such that H1(N) = 0 PL embeds in ;¶b) Every closed PL 2-connected 7-manifold PL embeds in ;¶c) There are exactly four PL embeddings up to PL isotopy (l > 0). Cite this article A. Skopenkov, On the deleted product criterion for embeddability of manifolds in . Comment. Math. Helv. 72 (1997), no. 4, pp. 543–555 DOI 10.1007/S000140050033
{"url":"https://ems.press/journals/cmh/articles/208","timestamp":"2024-11-11T00:57:10Z","content_type":"text/html","content_length":"104865","record_id":"<urn:uuid:ff37feef-2e1a-46e4-9167-19bdbf989101>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00693.warc.gz"}
Artificial Neural Networks for Beginners Deep Learning is a very hot topic these days especially in computer vision applications and you probably see it in the news and get curious. Now the question is, how do you get started with it? Today's guest blogger, Toshi Takeuchi, gives us a quick tutorial on artificial neural networks as a starting point for your study of deep learning. MNIST Dataset Many of us tend to learn better with a concrete example. Let me give you a quick step-by-step tutorial to get intuition using a popular MNIST handwritten digit dataset. Kaggle happens to use this very dataset in the Digit Recognizer tutorial competition. Let's use it in this example. You can download the competition dataset from "Get the Data" page: • train.csv - training data • test.csv - test data for submission Load the training and test data into MATLAB, which I assume was downloaded into the current folder. The test data is used to generate your submissions. tr = csvread('train.csv', 1, 0); % read train.csv sub = csvread('test.csv', 1, 0); % read test.csv The first column is the label that shows the correct digit for each sample in the dataset, and each row is a sample. In the remaining columns, a row represents a 28 x 28 image of a handwritten digit, but all pixels are placed in a single row, rather than in the original rectangular form. To visualize the digits, we need to reshape the rows into 28 x 28 matrices. You can use reshape for that, except that we need to transpose the data, because reshape operates by column-wise rather than row-wise. figure % plot images colormap(gray) % set to grayscale for i = 1:25 % preview first 25 samples subplot(5,5,i) % plot them in 6 x 6 grid digit = reshape(tr(i, 2:end), [28,28])'; % row = 28 x 28 image imagesc(digit) % show the image title(num2str(tr(i, 1))) % show the label Data Preparation You will be using the nprtool pattern recognition app from Deep Learning Toolbox. The app expects two sets of data: • inputs - a numeric matrix, each column representing the samples and rows the features. This is the scanned images of handwritten digits. • targets - a numeric matrix of 0 and 1 that maps to specific labels that images represent. This is also known as a dummy variable. Deep Learning Toolbox also expects labels stored in columns, rather than in rows. The labels range from 0 to 9, but we will use '10' to represent '0' because MATLAB is indexing is 1-based. 1 --> [1; 0; 0; 0; 0; 0; 0; 0; 0; 0] 2 --> [0; 1; 0; 0; 0; 0; 0; 0; 0; 0] 3 --> [0; 0; 1; 0; 0; 0; 0; 0; 0; 0] 0 --> [0; 0; 0; 0; 0; 0; 0; 0; 0; 1] The dataset stores samples in rows rather than in columns, so you need to transpose it. Then you will partition the data so that you hold out 1/3 of the data for model evaluation, and you will only use 2/3 for training our artificial neural network model. n = size(tr, 1); % number of samples in the dataset targets = tr(:,1); % 1st column is |label| targets(targets == 0) = 10; % use '10' to present '0' targetsd = dummyvar(targets); % convert label into a dummy variable inputs = tr(:,2:end); % the rest of columns are predictors inputs = inputs'; % transpose input targets = targets'; % transpose target targetsd = targetsd'; % transpose dummy variable rng(1); % for reproducibility c = cvpartition(n,'Holdout',n/3); % hold out 1/3 of the dataset Xtrain = inputs(:, training(c)); % 2/3 of the input for training Ytrain = targetsd(:, training(c)); % 2/3 of the target for training Xtest = inputs(:, test(c)); % 1/3 of the input for testing Ytest = targets(test(c)); % 1/3 of the target for testing Ytestd = targetsd(:, test(c)); % 1/3 of the dummy variable for testing Using the Deep Learning Toolbox GUI App 1. You can start the Neural Network Start GUI by typing the command nnstart. 2. You then click the Pattern Recognition Tool to open the Neural Network Pattern Recognition Tool. You can also usehe command nprtool to open it directly. 3. Click "Next" in the welcome screen and go to "Select Data". 4. For inputs, select Xtrain and for targets, select Ytrain. 5. Click "Next" and go to "Validation and Test Data". Accept the default settings and click "Next" again. This will split the data into 70-15-15 for the training, validation and testing sets. 6. In the "Network Architecture", change the value for the number of hidden neurons, 100, and click "Next" again. 7. In the "Train Network", click the "Train" button to start the training. When finished, click "Next". Skip "Evaluate Network" and click next. 8. In "Deploy Solution", select "MATLAB Matrix-Only Function" and save t the generated code. I save it as myNNfun.m. 9. If you click "Next" and go to "Save Results", you can also save the script as well as the model you just created. I saved the simple script as myNNscript.m Here is the diagram of this artificial neural network model you created with the Pattern Recognition Tool. It has 784 input neurons, 100 hidden layer neurons, and 10 output layer neurons. Your model learns through training the weights to produce the correct output. W in the diagram stands for weights and b for bias units, which are part of individual neurons. Individual neurons in the hidden layer look like this - 784 inputs and corresponding weights, 1 bias unit, and 10 activation outputs. Visualizing the Learned Weights If you look inside myNNfun.m, you see variables like IW1_1 and x1_step1_keep that represent the weights your artificial neural network model learned through training. Because we have 784 inputs and 100 neurons, the full layer 1 weights will be a 100 x 784 matrix. Let's visualize them. This is what our neurons are learning! load myWeights % load the learned weights W1 =zeros(100, 28*28); % pre-allocation W1(:, x1_step1_keep) = IW1_1; % reconstruct the full matrix figure % plot images colormap(gray) % set to grayscale for i = 1:25 % preview first 25 samples subplot(5,5,i) % plot them in 6 x 6 grid digit = reshape(W1(i,:), [28,28])'; % row = 28 x 28 image imagesc(digit) % show the image Computing the Categorization Accuracy Now you are ready to use myNNfun.m to predict labels for the heldout data in Xtest and compare them to the actual labels in Ytest. That gives you a realistic predictive performance against unseen data. This is also the metric Kaggle uses to score submissions. First, you see the actual output from the network, which shows the probability for each possible label. You simply choose the most probable label as your prediction and then compare it to the actual label. You should see 95% categorization accuracy. Ypred = myNNfun(Xtest); % predicts probability for each label Ypred(:, 1:5) % display the first 5 columns [~, Ypred] = max(Ypred); % find the indices of max probabilities sum(Ytest == Ypred) / length(Ytest) % compare the predicted vs. actual ans = 1.3988e-09 6.1336e-05 1.4421e-07 1.5035e-07 2.6808e-08 1.9521e-05 0.018117 3.5323e-09 2.9139e-06 0.0017353 2.2202e-07 0.00054599 0.012391 0.00049678 0.00024934 1.5338e-09 0.46156 0.00058973 4.5171e-07 0.00025153 4.5265e-08 0.11546 0.91769 2.1261e-05 0.00031076 1.1247e-08 0.25335 1.9205e-06 1.1014e-06 0.99325 2.1627e-08 0.0045572 1.733e-08 3.7744e-07 1.7282e-07 2.2329e-09 7.6692e-05 0.00011479 0.98698 1.7328e-06 1.9634e-05 0.0011708 0.069215 0.01249 0.00084255 0.99996 0.14511 1.0106e-07 2.9687e-06 0.0033565 ans = Network Architecture You probably noticed that the artificial neural network model generated from the Pattern Recognition Tool has only one hidden layer. You can build a custom model with more layers if you would like, but this simple architecture is sufficient for most common problems. The next question you may ask is how I picked 100 for the number of hidden neurons. The general rule of thumb is to pick a number between the number of input neurons, 784 and the number of output neurons, 10, and I just picked 100 arbitrarily. That means you might do better if you try other values. Let's do this programmatically this time. myNNscript.m will be handy for this - you can simply adapt the script to do a parameter sweep. sweep = [10,50:50:300]; % parameter values to test scores = zeros(length(sweep), 1); % pre-allocation models = cell(length(sweep), 1); % pre-allocation x = Xtrain; % inputs t = Ytrain; % targets trainFcn = 'trainscg'; % scaled conjugate gradient for i = 1:length(sweep) hiddenLayerSize = sweep(i); % number of hidden layer neurons net = patternnet(hiddenLayerSize); % pattern recognition network net.divideParam.trainRatio = 70/100;% 70% of data for training net.divideParam.valRatio = 15/100; % 15% of data for validation net.divideParam.testRatio = 15/100; % 15% of data for testing net = train(net, x, t); % train the network models{i} = net; % store the trained network p = net(Xtest); % predictions [~, p] = max(p); % predicted labels scores(i) = sum(Ytest == p) /... % categorization accuracy Let's now plot how the categorization accuracy changes versus number of neurons in the hidden layer. plot(sweep, scores, '.-') xlabel('number of hidden neurons') ylabel('categorization accuracy') title('Number of hidden neurons vs. accuracy') It looks like you get the best result around 250 neurons and the best score will be around 0.96 with this basic artificial neural network model. As you can see, you gain more accuracy if you increase the number of hidden neurons, but then the accuracy decreases at some point (your result may differ a bit due to random initialization of weights). As you increase the number of neurons, your model will be able to capture more features, but if you capture too many features, then you end up overfitting your model to the training data and it won't do well with unseen data. Let's examine the learned weights with 300 hidden neurons. You see more details, but you also see more noise. net = models{end}; % restore the last model W1 = zeros(sweep(end), 28*28); % pre-allocation W1(:, x1_step1_keep) = net.IW{1}; % reconstruct the full matrix figure % plot images colormap(gray) % set to grayscale for i = 1:25 % preview first 25 samples subplot(5,5,i) % plot them in 6 x 6 grid digit = reshape(W1(i,:), [28,28])'; % row = 28 x 28 image imagesc(digit) % show the image The Next Step - an Autoencoder Example You now have some intuition on artificial neural networks - a network automatically learns the relevant features from the inputs and generates a sparse representation that maps to the output labels. What if we use the inputs as the target values? That eliminates the need for training labels and turns this into an unsupervised learning algorithm. This is known as an autoencoder and this becomes a building block of a deep learning network. There is an excellent example of autoencoders on the Training a Deep Neural Network for Digit Classification page in the Deep Learning Toolbox documentation, which also uses MNIST dataset. For more details, Stanford provides an excellent UFLDL Tutorial that also uses the same dataset and MATLAB-based starter code. Sudoku Solver: a Real-time Processing Example Beyond understanding the algorithms, there is also a practical question of how to generate the input data in the first place. Someone spent a lot of time to prepare the MNIST dataset to ensure uniform sizing, scaling, contrast, etc. To use the model you built from this dataset in practical applications, you have to be able to repeat the same set of processing on new data. How do you do such preparation yourself? There is a fun video that shows you how you can solve Sudoku puzzles using a webcam that uses a different character recognition technique. Instead of static images, our colleague Teja Muppirala uses a live video feed in real time to do it and he walks you through the pre-processing steps one by one. You should definitely check it out: Solving a Sudoku Puzzle Using a Webcam. Submitting Your Entry to Kaggle You got 96% categorization accuracy rate by simply accepting the default settings except for the number of hidden neurons. Not bad for the first try. Since you are using a Kaggle dataset, you can now submit your result to Kaggle. n = size(sub, 1); % num of samples sub = sub'; % transpose [~, highest] = max(scores); % highest scoring model net = models{highest}; % restore the model Ypred = net(sub); % label probabilities [~, Label] = max(Ypred); % predicted labels Label = Label'; % transpose Label Label(Label == 10) = 0; % change '10' to '0' ImageId = 1:n; ImageId = ImageId'; % image ids writetable(table(ImageId, Label), 'submission.csv');% write to csv You can now submit the submission.csv on Kaggle's entry submission page. In this example we focused on getting a high level intuition on artificial neural network using a concrete example of handwritten digit recognition. We didn’t go into details such as how the inputs weights and bias units are combined, how activation works, how you train such a network, etc. But you now know enough to use Deep Learning Toolbox in MATLAB to participate in a Kaggle competition. To leave a comment, please click here to sign in to your MathWorks Account or create a new one.
{"url":"https://blogs.mathworks.com/loren/2015/08/04/artificial-neural-networks-for-beginners/?","timestamp":"2024-11-14T07:52:00Z","content_type":"text/html","content_length":"199585","record_id":"<urn:uuid:dd4e7e60-c9f0-4db4-99b6-bcb88b636b1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00647.warc.gz"}
Using Multiple Imputations to Accommodate Time-Outs in Online Interventions Original Paper Background: Accurately estimating the period of time that individuals are exposed to online intervention content is important for understanding program engagement. This can be calculated from time-stamped data reflecting navigation to and from individual webpages. Prolonged periods of inactivity are commonly handled with a time-out feature and assigned a prespecified exposure duration. Unfortunately, this practice can lead to biased results describing program exposure. Objective: The aim of the study was to describe how multiple imputations can be used to better account for the time spent viewing webpages that result in a prolonged period of inactivity or a Methods: To illustrate this method, we present data on time-outs collected from the Q^2 randomized smoking cessation trial. For this analysis, we evaluate the effects on intervention exposure of receiving content written in a prescriptive versus motivational tone. Using multiple imputations, we created five complete datasets in which the time spent viewing webpages that resulted in a time-out were replaced with values estimated with imputation models. We calculated standard errors using Rubin’s formulas to account for the variability due to the imputations. We also illustrate how current methods of accounting for time-outs (excluding timed-out page views or assigning an arbitrary viewing time) can influence conclusions about participant engagement. Results: A total of 63.00% (1175/1865) of participants accessed the online intervention in the Q^2 trial. Of the 6592 unique page views, 683 (10.36%, 683/6592) resulted in a time-out. The median time spent viewing webpages that did not result in a time-out was 1.07 minutes. Assuming participants did not spend any time viewing a webpage that resulted in a time-out, no difference between the two message tones was observed (ratio of mean time online: 0.87, 95% CI 0.75-1.02). Assigning 30 minutes of viewing time to all page views that resulted in a time-out concludes that participants who received content in a motivational tone spent less time viewing content (ratio of mean time online: 0.86, 95% CI 0.77-0.98) than those participants who received content in a prescriptive tone. Using multiple imputations to account for time-outs concludes that there is no difference in participant engagement between the two message tones (ratio of mean time online: 0.87; 95% CI 0.75-1.01). Conclusions: The analytic technique chosen can significantly affect conclusions about online intervention engagement. We propose a standardized methodology in which time spent viewing webpages that result in a time-out is treated as missing information and corrected with multiple imputations. Trial Registration: Clinicaltrials.gov NCT00992264; http://clinicaltrials.gov/ct2/show/NCT00992264 (Archived by WebCite at http://www.webcitation.org/6Kw5m8EkP). J Med Internet Res 2013;15(11):e252 Tracking Exposure Time to Content As Internet-based behavioral interventions become more prevalent, it is increasingly important that researchers understand how people interact with these programs, including the time participants spend viewing individual content pages and interacting with the program overall [-]. Exposure time is one of several important proxies of engagement and could be an important mediator of the programs’ intended effects on participants’ knowledge, attitudes, and behavior. Exposure time can be tracked by monitoring when each webpage is opened or exited or when the browser itself is closed. More sophisticated software can further assess activity on a particular webpage by tracking keystrokes or mouse clicks, but no software is able to distinguish when a user is actively reading or viewing a page versus engaged in other activities in their surroundings. Moreover, there are limitations on tracking activities such as viewing content in separate browsers or windows or even working concurrently in other open programs or applications. In all cases, the result will appear to be long periods of inactivity on the program webpage. A common strategy for dealing with these extended periods of inactivity has been to time out the program after a prespecified time (eg, 30 minutes) [-]. This strategy makes sense as a means for closing out the program, but it would be misleading to rely on the time-stamped data from these timed-out periods as an indicator of how long participants were actually exposed to the program content in the open webpage. Other researchers have allowed long page views with no time-out feature, but then truncate the assumed actual viewing time after the fact for analytic purposes [-,,]. As these two approaches are equivalent for the purpose of measuring time spent online, we treat them identically and refer to each as a “time-out.” Unfortunately, neither of the approaches above is ideal when trying to estimate the time participants were actively viewing online content. Each will likely either over- or underestimate the true viewing time. The actual length of time an individual spent engaged with the webpage is unknown, resulting in missing information. Consequently, excluding all page views that result in a time-out is the same as a complete case analysis and assigning an arbitrary length of time is the same as a single, uninformed imputation method. It is well-known that complete case analyses can result in bias and a reduction in power, as can single imputation [-]. As an alternative analytic approach, we recommend using standard missing data methods, in particular multiple imputations (MI), to accommodate long periods of inactivity or time-outs when analyzing time spent online. Multiple imputations is a flexible and straightforward approach to accommodating missing data, which uses available observed information to predict values for missing information. Standard software exists and simple formulas can be used to incorporate multiple imputations into an analysis. We outline how to implement multiple imputations methods, reviewing standard formulae, to accommodate page views that resulted in a time-out. As an example, we use data collected from a randomized trial of an online smoking cessation intervention called the “Questions about Quitting” (Q^2) trial [,]. Using data from this trial, we demonstrate how the method chosen for dealing with time-out data can significantly affect conclusions drawn about program exposure. The Questions About Quitting Trial The Q^2 trial was a collaboration between the Group Health Research Institute in Seattle, Washington, and the University of Michigan Center for Health Communications Research in Ann Arbor, Michigan. Detailed information about the study design and methods have been published elsewhere []. In brief, adult smokers were recruited from a large regional health plan population and invited to participate in a randomized clinical smoking cessation trial; however, participants did not have to have an interest in quitting smoking to enroll. The primary aim of this full factorial randomized trial [] was to assess the effects of contrasting levels of four specific design features or factors, on smokers’ abstinence and utilization of adjunct treatment (counseling and pharmacotherapy) available to them through their health insurance. The effects of the contrasting levels of each design factor on program engagement were also explored and have been published []. Participants in this trial were randomized to one of 16 different combinations of the levels of the four design factors, with half of the participants assigned to one of two contrasting levels of each factor. Randomization was stratified by a baseline measure of a participant’s readiness to quit smoking. The four factors and the two contrasting levels of each were message tone (prescriptive vs motivational); navigation autonomy (dictated vs not dictated); proactive email reminders (yes vs no); and availability of testimonials (yes vs no). Here, we focus on comparing the impact of the two contrasting levels of message tone on program engagement, as measured by total time spent viewing online intervention content assessed during the first two months after study enrollment. Half of the participants were randomized to receive intervention content written in a prescriptive message tone, and half were randomized to an intervention written in a motivational tone. Intervention content written in a prescriptive tone was didactic and directly advised smokers to quit smoking and specified how to achieve this goal. In contrast, motivational messaging was written in a tone consistent with the main principles of motivational interviewing (express empathy, develop discrepancy, roll with resistance, support autonomy, and self-efficacy) []. The Q^2 program collected automated tracking data each time participants visited the intervention website. This automated collection process recorded the date and time each participant visited the website and individual date/time stamps every time a content page was accessed or left by logging out of the intervention website, closing the browser, or moving to a different intervention webpage or an external webpage in the same browser window. The Q^2 online intervention included an automatic time-out feature that logged participants out of the program after 30 minutes of inactivity. Multiply Imputing Page View Times Missing information is often classified according to the assumed missing data generating process, that is, the determinants that affect the probability that a particular data element is missing or observed. There are three general missing data generating processes: missing completely at random (MCAR), missing at random (MAR), or not missing at random (NMAR) [,]. MCAR assumes the probability that a data element is missing is independent of both observed and unmeasured information. This is unlikely to occur in practice and is the only situation in which a complete case analysis is unbiased (a reduction in power always occurs). The less restrictive MAR generating process assumes that the probability of a data element being missing depends on observed information, while NMAR means that the probability of missingness is dependent on both observed and unmeasured information. Multiple imputations is a flexible and straightforward approach for accommodating missing data. Imputation methods estimate predictive models using observed information and replace missing data elements with samples from the estimated predictive models. Multiple imputations methods are preferred over single imputation [,,] and repeatedly utilize estimated predictive models to create several complete datasets. Each complete dataset is then analyzed as if all information was observed and information is combined across each of the completed datasets. There are two common approaches to estimate predictive models when multivariate imputation models are needed (ie, when more than one variable contains missing data or one longitudinal variable has missing information over time). One approach assumes a joint predictive distribution over all recorded variables [,], and the other method estimates separate conditional predictive models for each variable with missing information separately [-]. The second method is called multiple imputations by chained equations (MICE) or fully conditional specification and is growing in popularity due to its computational efficiency and flexibility. The MICE procedure can easily accommodate binary, categorical, and continuous variables as well as more complex data challenges such as bounded variable values and imputing information for subsets of individuals. For these reasons, we use MICE to impute missing page view times (ie, times for page views that timed out). MICE methods cycle through each variable with missing information estimating regression models for each variable. Missing values are then replaced with samples from these regression-based predictive distributions, which include the appropriate random error. There are several built-in and stand-alone software packages that implement the MICE procedure [-]. MICE algorithms begin by imputing all missing information with naive values (eg, median of observed values of variable); then, the first variable (variable 1) containing missing information is considered (usually the variable with the least amount of missing data). A regression-based predictive model is estimated using observed values of variable 1 and observed and naively imputed values of all other variables selected as predictors. Usually all other variables are used as predictors, unless the analyst chooses to restrict the set of predictors [,]. The naively imputed values from variable 1 are replaced with imputations drawn from this predictive model, and the procedure continues on to the second variable with missing information (variable 2). A predictive model is estimated using the observed values of variable 2, the observed and newly imputed values of variable 1, and the observed and naively imputed values of all other predictors. The naively imputed values of variable 2 are then replaced with imputations drawn from this newly estimated imputation model. The imputation process cycles through all the variables that contain missing information replacing the naively imputed missing values with draws from newly estimated imputation models. When the MICE algorithm has cycled through all of the variables with missing information, this is called one “iteration”. The cycle is then repeated, replacing the imputed values from the first iteration with imputations from newly estimated predictive models in the second iteration. Several iterations of MICE are used to ensure that the imputations have “stabilized”, such that the order in which the variables were cycled through no longer affects the imputation values [,]. The iterative nature of the MICE algorithm provides both strengths and weaknesses. While the MICE procedure has proven useful in practice, it does not have the solid theoretical justification of alternative imputation methods. For example, convergence (ie, imputation values that “stabilize”) is not guaranteed [,]. It is also possible that conditional imputation models will be estimated such that there exists no joint multivariate distribution that is consistent with all conditional distributions. While these drawbacks may give rise to valid theoretical concerns, it appears that they are generally not a concern in practice [,,,], and MICE is increasingly being used to accommodate missing data in analyses [,-]. Once M completed datasets have been created, each completed dataset is used to calculate the estimate of interest (see #1 in ), where the subscript m is used to denote that the estimate corresponds to the m-th completed dataset. The average of the M estimates (see #2 in ) is used as the estimate for the parameter of interest. Rubin developed a straightforward formula for estimating the standard errors of the multiple imputations estimators that accounts for the traditional sampling variability of the estimator and the added variability due to the imputation process [,,]. Rubin’s formula can be used to calculate the standard error for most standard estimators. It is a function of the M complete data standard errors (W[1,..M]) and the variability between the complete data estimates across the M imputations (B[M]). Let W[M] be the standard error of the complete data estimator in the m-th imputed dataset, then Rubin’s formula for the standard error of the imputation estimator appears as in #3 in [,]. In practice, analysts usually use 5-10 imputations as this has been shown to be sufficient to correctly capture the variability in the imputation estimator []. We generated five complete datasets with all missing page view times replaced with samples from estimated conditional imputation models. Imputation models were assumed to be normal distributions after log transforming the page view times with means and appropriate standard deviations estimated from linear regression models. We structured the data in a wide format with each person representing one row in the dataset and multiple webpage views represented by multiple columns. A new imputation model was estimated for each repeated page view. We used observed page view times for estimating imputation models and only imputed times for those page views that were observed but that resulted in an automatic time-out. Linear regression models were used to specify the mean of each of the conditional predictive distributions with the following predictors: baseline participant information (participant demographics, smoking history, beliefs about smoking, and readiness to quit), randomized arm, and the number of minutes spent on the first core content page viewed by the participant. Additionally, we used, as predictors, information about the type of webpage viewed, such as the content addressed in the webpage (getting ready to quit, quitting, and staying quit) and the type of page viewed (eg, introduction page, testimonial). Effect of Content Tone on Engagement We calculated the total number of intervention visits, individual page views, and total number of page views that resulted in a time-out. We summarized the distribution of the time in minutes that participants spent viewing intervention content excluding all timed-out page views. After imputing missing page view times, total time spent online was calculated for each participant by adding up the number of minutes spent on an intervention webpage. In order to evaluate the impact of assigning an arbitrary value for time spent viewing pages that resulted in a time-out, we varied the number of minutes assigned to page views that timed out from near zero to 30 minutes. We then compared the contrasting factor levels of message tone on the total time spent viewing intervention content using a zero-inflated Poisson (ZIP) model [,]. We used a ZIP model because the distribution of total time spent online had a larger proportion of zeros than expected from a Poisson distribution; study subjects who were never exposed to the intervention content all spent exactly zero total minutes online, causing a notable point mass in the distribution at zero. We included in the logistic portion of the ZIP, which models the “excess” zeros in the population, an intercept term. In the Poisson part of the ZIP model, we included the randomized factor level and the baseline readiness to quit measure that was used to stratify randomization. We report the estimates from the Poisson part of the ZIP model. Generally, estimates obtained from Poisson models are interpreted as incidence rate ratios, but when all subjects share a common period of exposure, as in the Q^2 trial, estimates can be interpreted as the ratio of mean event counts comparing the two contrasting factor levels. Thus, we report the ratio of the mean number of minutes spent online for individuals who received the content in a motivational tone to those who received the prescriptive message tone. We used Stata Version 12 for all analyses, including imputing missing page view times [,]. The Q^2 trial enrolled 1865 current smokers; 1175 (63.00%, 1175/1865) participants accessed the online intervention at least once. The intervention content was viewed on a total of 1691 separate visits, resulting in 6592 unique page views. A total of 683 (10.36%, 683/6592) of these page views automatically timed out after 30 minutes of inactivity, and 550 (46.81%, 550/1175) participants had at least one page view that resulted in a time-out. shows the distribution of the time spent on page views that did not result in a time-out; the median observed time spent on an intervention page was 1.07 minutes (interquartile range 0.47-2.27). This suggests that assigning 30 minutes to all page views that resulted in a time-out would overestimate the time participants spent viewing online intervention content. presents the estimated ratios of mean time spent online for those who received content in a prescriptive tone compared to those who received content in a motivational tone when the value assigned to the time spent viewing webpages that resulted in a time-out is varied from near zero to 30 minutes. While the ratio of means estimate was stable around 0.87, the width of the 95% confidence intervals (CI) around the estimate vary as the time assigned to time-outs changes. Assigning a value close to zero (0.00001 minutes) for time-outs resulted in an estimate of 0.87 with a 95% CI 0.75-1.02 that includes one (ie, fail to reject the null hypothesis that there are no differences in participant engagement between the two factor levels at a 0.05 significance level). Alternatively, assigning a value of 30 minutes to page views that automatically timed out resulted in an estimate of 0.86 with a 95% CI 0.77-0.97 that excludes one, leading to the conclusion that participants assigned to the prescriptive tone viewed content for significantly fewer minutes than those assigned to the motivational tone. Averaged across the five completed datasets (ie, time-outs replaced with imputed page view times), the average total time spent viewing intervention content was 12.3 minutes. The total number of minutes spent viewing the intervention ranged from less than 1 minute to greater than 180 minutes, with a median of 7.0 minutes. Comparing the mean cumulative number of minutes spent viewing intervention content among those who viewed content in a prescriptive tone versus a motivational tone resulted in a ratio of means of 0.87 (95% CI 0.75-1.01; P=.06). Thus, participants who had content presented in a prescriptive tone spent 13% less time viewing online intervention content, although this difference was not statistically significant at the .05 level. Figure 1. Distribution of minutes spent viewing an intervention page, excluding page views that resulted in an automatic time-out. View this figure Figure 2. Sensitivity of model results to assigning an arbitrary time spent online to page views that resulted in a time-out (estimate from the zero-inflated Poisson model for the ratio of the mean time spent online comparing individuals who received content in a prescriptive [RX] tone versus a motivational tone [MI]). View this figure Principal Findings The number of available Internet-based behavioral and educational intervention programs has exploded over the past decade. As researchers seek to understand how to optimize the design of these programs to be most effective, it is imperative that researchers examine to what extent participants are exposed to and engage with the programs and to what extent this interaction influences intervention outcomes. Even with the advent of more sophisticated means for tracking program interactivity, there will continue to be periods of time which, either by design or happenstance, involve no direct human-computer interactions resulting in extended periods of “inactivity”. As our case example illustrates, how these data are handled analytically can significantly alter the conclusions drawn about how much time participants actually spent viewing the content. In turn, this could affect analyses designed to explore whether or not program exposure mediated the observed treatment We propose a standard methodology whereby researchers utilize the MI processes outlined in this paper for managing extended periods of inactivity or time-out data. The decision to use this methodology should be made a priori, as one cannot know ahead of time how much of an impact assigning an arbitrary value to time-outs will have on study conclusions. Researchers are encouraged to employ multiple imputations when examining exposure to online intervention content in the future. This research was funded by the National Cancer Institute (R01 CA138598, J McClure, PI). We are grateful to the contributions of the many study team members at Group Health Research Institute and the University of Michigan. The intervention evaluated in this study was developed by researchers at the Group Health Research Institute and University of Michigan. Conflicts of Interest Dr Shortreed has received funding from research grants awarded to Group Health Research Institute by Bristol Meyers Squibb. Mr Bogart and Dr McClure have no conflicts of interest to declare. 1. Bennett GG, Glasgow RE. The delivery of public health interventions via the Internet: actualizing their potential. Annu Rev Public Health 2009;30:273-292. [CrossRef] [Medline] 2. Baker TB, Gustafson DH, Shaw B, Hawkins R, Pingree S, Roberts L, et al. Relevance of CONSORT reporting criteria for research on eHealth interventions. Patient Educ Couns 2010 Dec;81 Suppl:S77-S86 [FREE Full text] [CrossRef] [Medline] 3. Proudfoot J, Klein B, Barak A, Carlbring P, Cuijpers P, Lange A, et al. Establishing guidelines for executing and reporting Internet intervention research. Cognitive Behaviour Therapy 2011 Jun 4. Peterson ET. Web Site Measurement Hacks. Sebastopol, CA: O'Reilly Media; 2005. 5. Danaher BG, Boles SM, Akers L, Gordon JS, Severson HH. Defining participant exposure measures in Web-based health behavior change programs. J Med Internet Res 2006;8(3):e15 [FREE Full text] [ CrossRef] [Medline] 6. Danaher BG, Seeley JR. Methodological issues in research on web-based behavioral interventions. Ann Behav Med 2009 Aug;38(1):28-39. [CrossRef] [Medline] 7. Crenshaw K, Curry W, Salanitro AH, Safford MM, Houston TK, Allison JJ, et al. Is physician engagement with Web-based CME associated with patients' baseline hemoglobin A1c levels? The Rural Diabetes Online Care study. Acad Med 2010 Sep;85(9):1511-1517 [FREE Full text] [CrossRef] [Medline] 8. Crutzen R, Roosjen JL, Poelman J. Using Google Analytics as a process evaluation method for Internet-delivered interventions: an example on sexual health. Health Promot Int 2013 Mar;28(1):36-42. [CrossRef] [Medline] 9. Graham AL, Cha S, Papandonatos GD, Cobb NK, Mushro A, Fang Y, et al. Improving adherence to web-based cessation programs: a randomized controlled trial study protocol. Trials 2013;14:48 [FREE Full text] [CrossRef] [Medline] 10. McClure JB, Shortreed SM, Bogart A, Derry H, Riggs K, St John J, et al. The effect of program design on engagement with an internet-based smoking intervention: randomized factorial trial. J Med Internet Res 2013;15(3):e69 [FREE Full text] [CrossRef] [Medline] 11. Richardson A, Graham AL, Cobb N, Xiao H, Mushro A, Abrams D, et al. Engagement promotes abstinence in a web-based cessation intervention: cohort study. J Med Internet Res 2013;15(1):e14 [FREE Full text] [CrossRef] [Medline] 12. Glasgow RE, Christiansen SM, Kurz D, King DK, Woolley T, Faber AJ, et al. Engagement in a diabetes self-management website: usage patterns and generalizability of program use. J Med Internet Res 2011;13(1):e9 [FREE Full text] [CrossRef] [Medline] 13. Zbikowski SM, Jack LM, McClure JB, Deprey M, Javitz HS, McAfee TA, et al. Utilization of services in a randomized trial testing phone- and web-based interventions for smoking cessation. Nicotine Tob Res 2011 May;13(5):319-327 [FREE Full text] [CrossRef] [Medline] 14. Allison PD. Missing Data. Thousand Oaks, CA. USA: Sage; 2002. 15. Little RJA, Rubin DB. Statistical Analysis with Missing Data. Second ed. New York, NY. USA: J Wiley & Sons; 2002. 16. National Research Council. The Prevention and Treatment of Missing Data in Clinical Trials. Panel on Handling Missing Data in Clinical Trials, Committee on National Statistics, Division of Behavioral and Social Sciences and Education, editors. Washington, DC: The National Academies Press; 2010. 17. McClure JB, Derry H, Riggs KR, Westbrook EW, St John J, Shortreed SM, et al. Questions about quitting (Q2): design and methods of a Multiphase Optimization Strategy (MOST) randomized screening experiment for an online, motivational smoking cessation intervention. Contemp Clin Trials 2012 Sep;33(5):1094-1102. [CrossRef] [Medline] 18. Wu CFJ, Hamada M. Experiments: Planning, Analysis, and Parameter Design Optimization. New York: Wiley and Sons, Inc; 2000. 19. Miller WR, Rollnick S. Motivational Interviewing: Preparing People for Change. 2nd ed. New York: The Guilford Press; 2002. 20. Little RJ, D'Agostino R, Cohen ML, Dickersin K, Emerson SS, Farrar JT, et al. The prevention and treatment of missing data in clinical trials. N Engl J Med 2012 Oct 4;367(14):1355-1360. [CrossRef ] [Medline] 21. Blankers M, Koeter MW, Schippers GM. Missing data approaches in eHealth research: simulation study and a tutorial for nonmathematically inclined researchers. J Med Internet Res 2010;12(5):e54 [ FREE Full text] [CrossRef] [Medline] 22. Schafer JL. Analysis of Incomplete Multivariate Data. London, UK: Chapman & Hall; 1997. 23. Schafer JL. Imputation of missing covariates under a multivariate linear mixed model. Technical report, Dept. of Statistics, Penn State University 1997:1-24. 24. Raghunathan TE, Lepkowski JM, Van Hoewyk J, Solenberger P. A multivariate technique for multiply imputing missing values using a sequence of regression models. Survey Methodology 2001;27 25. Van Buuren S, Brand JPL, Groothuis-Oudshoorn CGM, Rubin DB. Fully conditional specification in multivariate imputation. Journal of Statistical Computation and Simulation 2006;76(12):1049-1064. 26. van Buuren S. Multiple imputations of discrete and continuous data by fully conditional specification. Stat Methods Med Res 2007 Jun;16(3):219-242. [CrossRef] [Medline] 27. Raghunathan TE, Solenberger P, Van Hoewyk J. IVEware: Imputation and Variance Estimation software user guide. Survey Research Center, Institute for Social Research, University of Michigan 28. Horton NJ, Kleinman KP. Much ado about nothing: A comparison of missing data methods and software to fit incomplete data regression models. Am Stat 2007 Feb;61(1):79-90 [FREE Full text] [CrossRef ] [Medline] 29. van Buuren S, Groothuis-Oudshoorn K. Mice: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software 2011;45(3):1-67. 30. Royston P, White IR. Multiple Imputations by Chained Equations (mice): Implementation in Stata. Journal of Statistical Software 2011;45(1):1-20. 31. Stuart EA, Azur M, Frangakis C, Leaf P. Multiple imputations with large data sets: a case study of the Children's Mental Health Initiative. Am J Epidemiol 2009 May 1;169(9):1133-1139 [FREE Full text] [CrossRef] [Medline] 32. Shortreed SM, Laber EB, Pineau J, Murphy SA. Imputations methods for the clinical antipsychotic trials of intervention and effectiveness study. School of Computer Science, McGill University 33. Schafer JL, Graham JW. Missing data: our view of the state of the art. Psychol Methods 2002 Jun;7(2):147-177. [Medline] 34. Lee KJ, Carlin JB. Multiple imputations for missing data: fully conditional specification versus multivariate normal imputation. Am J Epidemiol 2010 Mar 1;171(5):624-632 [FREE Full text] [ CrossRef] [Medline] 35. Raghunathan TE, Siscovick DS. A multiple imputation analysis of a case-control study of the risk of primary cardiac arrest among pharmacologically treated hypertensives. Applied Statistics 1996; 36. Barnard J, Meng XL. Applications of multiple imputation in medical studies: from AIDS to NHANES. Stat Methods Med Res 1999 Mar;8(1):17-36. [Medline] 37. Nevalainen J, Kenward MG, Virtanen SM. Missing values in longitudinal dietary data: a multiple imputation approach based on a fully conditional specification. Stat Med 2009 Dec 20;28 (29):3657-3669. [CrossRef] [Medline] 38. Rubin DB, Schenker N. Multiple imputation for interval estimation from simple random samples with ignorable nonresponse. J Am Stat Assoc 1986;81(394):366-374. 39. Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: J Wiley & Sons; 1987. 40. Böhning D, Dietz E, Schlattmann P, Mendonca L, Kirchner U. The zero-inflated poisson model the decayed, MissingFilled teeth index in dental epidemiology. J R Statist Soc A 1999;162:195-209. 41. Preisser JS, Stamm JW, Long DL, Kincade ME. Review and recommendations for zero-inflated count regression modeling of dental caries indices in epidemiological studies. Caries Res 2012;46 (4):413-423 [FREE Full text] [CrossRef] [Medline] 42. StataPress. Stata Statistical Software: Release 12. College Station, TX: Stata Corp; 2011. MAR: missing at random MCAR: missing completely at random MI: multiple imputations MICE: multiple imputations by chained equations NMAR: not missing at random ZIP: zero-inflated Poisson model Edited by G Eysenbach; submitted 18.06.13; peer-reviewed by M Blankers, C Morrison; comments to author 19.08.13; revised version received 05.09.13; accepted 13.10.13; published 21.11.13 ©Susan M Shortreed, Andy Bogart, Jennifer B McClure. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 21.11.2013. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
{"url":"https://www.jmir.org/2013/11/e252","timestamp":"2024-11-01T20:55:02Z","content_type":"text/html","content_length":"386015","record_id":"<urn:uuid:430cf23a-00cb-4b07-a3d8-59adca79b12f>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00104.warc.gz"}
Find the value of a that makes the statement true What is the value of a that makes the statement true? Given the following equation: To determine the value of a that makes the statement true, we would apply the law of indices: What are the laws of indices? In Mathematics, laws of indices can be defined as the standard principles or rules that are used for simplifying an equation or expression that involves powers of the same base. Note: The common base is 3. Applying the division law of indices, we have: 3^(-1) ÷ 3^4 = 3^a 3^(-1-4) = 3^a -1-4 = a a = -5 Read more on powers here: URL link here a = -5 next = 1/243
{"url":"https://tutdenver.com/sat/find-the-value-of-a-that-makes-the-statement-true.html","timestamp":"2024-11-08T20:54:27Z","content_type":"text/html","content_length":"20491","record_id":"<urn:uuid:3e29b26c-c5d6-4136-bbc5-947acc7ce46f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00365.warc.gz"}
Understanding Mathematical Functions: At The Point Shown On The Functi Understanding Mathematical Functions Mathematical functions play a crucial role in various fields such as engineering, economics, and computer science. In this blog post, we will explore the importance of mathematical functions, overview common types of functions, and set the stage for understanding specific points on a function's graph. A Definition and Importance of Mathematical Functions Definition: A mathematical function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. Functions are often denoted by f(x) and are used to model various real-world phenomena. Importance: Mathematical functions are fundamental to problem-solving and decision-making in fields such as engineering, economics, and computer science. They provide a framework for analyzing and predicting outcomes based on input variables, making them essential in areas such as optimization, modeling, and simulation. Overview of Common Types of Functions There are several common types of mathematical functions, each with its unique characteristics and applications. Some of the most prevalent types include: • Linear Functions: These functions have a constant slope and form a straight line when graphed. They are widely used to model proportional relationships and are expressed in the form f(x) = mx + • Quadratic Functions: Quadratic functions have a squared term and form a parabola when graphed. They are used to model a wide range of phenomena, including projectile motion and economic behavior. • Polynomial Functions: These functions consist of terms with non-negative integer exponents and are used to model a variety of natural phenomena, from population growth to the spread of diseases. • Exponential Functions: Exponential functions grow or decay at a constant percentage rate. They are frequently used to model processes such as population growth, radioactive decay, and compound • Trigonometric Functions: Trigonometric functions such as sine, cosine, and tangent are essential in the study of periodic phenomena and waveforms. They are commonly used in fields such as physics, engineering, and signal processing. Setting the Stage for Understanding Specific Points on a Function's Graph Understanding specific points on a function's graph is crucial for interpreting and analyzing the behavior of the function. When examining a function's graph, several key factors come into play: • Intercepts: The x and y-intercepts represent the points at which the graph of the function crosses the x-axis and y-axis, respectively. • Maximum and Minimum Points: These points indicate the highest and lowest values of the function within a given interval and are crucial for optimization and decision-making. • Inflection Points: Inflection points represent the locations where the concavity of the function changes, signaling a shift in the rate of increase or decrease. • Critical Points: Critical points occur where the derivative of the function is either zero or undefined and are essential for identifying maximum, minimum, or saddle points. Key Takeaways • Understanding the point on a mathematical function • Identifying the characteristics of the point • Applying the knowledge to solve problems Fundamental Concepts of Functions Understanding mathematical functions is essential for various fields such as engineering, physics, economics, and computer science. Functions are a fundamental concept in mathematics that describes the relationship between a set of inputs and a set of permissible outputs. Let's delve into the key concepts related to functions. Explanation of domain, range, and the idea of a function as a mapping from inputs to outputs Domain: The domain of a function is the set of all possible input values (often denoted as x) for which the function is defined. It represents the independent variable in a function and determines the valid inputs that can be processed. Range: The range of a function is the set of all possible output values (often denoted as y) that the function can produce based on the given inputs. It represents the dependent variable and defines the permissible outputs resulting from the inputs. Idea of a function as a mapping: A function can be conceptualized as a mapping from the domain to the range, where each input value is associated with exactly one output value. This mapping ensures that for every input, there is a unique corresponding output, and no input is left unmapped. The role of the independent variable and dependent variable In a mathematical function, the independent variable (x) is the input value that is chosen or given, and the dependent variable (y) is the output value that is determined by the function based on the input. The independent variable represents the quantity that is being manipulated or controlled, while the dependent variable represents the quantity that is being observed or measured as a result of the changes in the independent variable. How to interpret the graph of a function Graphs are a visual representation of functions and provide valuable insights into their behavior. When interpreting the graph of a function, it's important to understand the following: • Shape: The shape of the graph can reveal information about the nature of the function, such as whether it is linear, quadratic, exponential, or trigonometric. • Intercepts: The x-intercepts represent the points where the graph intersects the x-axis, indicating the roots or solutions of the function. The y-intercept is the point where the graph intersects the y-axis, representing the value of the function when x=0. • Slope: The slope of the graph at a specific point indicates the rate of change of the function at that point. It provides insights into the direction and steepness of the function. • Behavior: Observing the behavior of the graph towards positive and negative infinity can reveal the end behavior of the function and its overall trend. Types of Points on a Function's Graph When analyzing a mathematical function, it is important to understand the different types of points that can appear on its graph. These points provide valuable information about the behavior of the function and can help in determining its properties and characteristics. A Critical Points: maximum, minimum, and saddle points Critical points are the points on the graph of a function where the derivative is either zero or undefined. These points can be classified into three categories: maximum points, minimum points, and saddle points. • Maximum points: These are the points where the function reaches a local maximum, meaning that the function has a higher value at that point compared to its neighboring points. • Minimum points: Conversely, minimum points are the points where the function reaches a local minimum, with a lower value compared to its neighboring points. • Saddle points: Saddle points are the points where the function has a critical point but does not reach a maximum or minimum value. At these points, the function changes direction along different B Intercepts: where the function crosses the x-axis and y-axis Intercepts are the points where the graph of a function crosses the x-axis or the y-axis. These points provide information about the behavior of the function at specific input values. • X-intercepts: These are the points where the graph crosses the x-axis, indicating the values of x for which the function equals zero. • Y-intercepts: Y-intercepts are the points where the graph crosses the y-axis, representing the value of the function when x is zero. C Discontinuities and cusps: points where the function is not defined or has abrupt changes Discontinuities and cusps are points on the graph where the function is not defined or exhibits abrupt changes in its behavior. These points can provide insights into the overall continuity and smoothness of the function. Discontinuities can be classified into different types, such as jump discontinuities, infinite discontinuities, and removable discontinuities, each indicating a specific type of behavior at that Cusps, on the other hand, are points where the function exhibits a sharp change in direction, often resembling a sharp corner on the graph. These points can indicate sudden changes in the rate of change of the function. Understanding the different types of points on a function's graph is essential for analyzing its behavior and properties. By identifying critical points, intercepts, discontinuities, and cusps, mathematicians and scientists can gain valuable insights into the nature of the function and its relationship to the input and output values. At the Point of Interest: Analyzing the Function When analyzing a mathematical function at a specific point, several key concepts come into play. Understanding the significance of slope and tangent lines, as well as concavity and the second derivative test, is essential for gaining insights into the behavior of the function at that point. Additionally, the Mean Value Theorem provides valuable implications for understanding the function's behavior at specific points. A The significance of slope and tangent lines at a point At a given point on a function, the slope of the tangent line represents the rate of change of the function at that point. The slope indicates whether the function is increasing, decreasing, or remaining constant at that specific point. By calculating the derivative of the function at that point, we can determine the slope of the tangent line and gain insights into the behavior of the Example: If the slope of the tangent line is positive, it indicates that the function is increasing at that point. Conversely, a negative slope suggests a decreasing function, while a zero slope indicates a point of inflection or a horizontal tangent line. B Understanding concavity and the second derivative test Concavity refers to the curvature of the function at a specific point. By analyzing the concavity, we can determine whether the function is concave up (opening upwards) or concave down (opening downwards) at that point. The second derivative test is a method used to determine the concavity of a function at a given point. Example: If the second derivative of the function is positive at a specific point, it indicates that the function is concave up at that point. Conversely, a negative second derivative suggests concavity down, while a second derivative of zero may indicate a point of inflection. C The Mean Value Theorem and its implications at specific points The Mean Value Theorem states that if a function is continuous on a closed interval and differentiable on the open interval, then there exists at least one point in the open interval where the instantaneous rate of change (the derivative) is equal to the average rate of change over the closed interval. This theorem has important implications for understanding the behavior of a function at specific points. Example: By applying the Mean Value Theorem, we can determine the existence of points where the instantaneous rate of change of the function is equal to the average rate of change over a given interval. This provides valuable insights into the behavior of the function at those specific points. Applying Knowledge to Determine Truth Statements When analyzing a mathematical function at a specific point, there are several methods to determine the truth statements about the function's behavior. By applying knowledge of derivatives, examining the function's behavior, and exploring practical examples, we can gain a deeper understanding of the function's characteristics. A. Using derivatives to analyze the rate of change at the point Derivatives play a crucial role in understanding the behavior of a function at a given point. By calculating the derivative of the function at the specific point, we can determine the rate of change of the function. If the derivative is positive, it indicates that the function is increasing at that point. Conversely, if the derivative is negative, the function is decreasing at that point. This information helps us determine the direction of the function's behavior at the given point. B. Examining the function's behavior near the point for increasing or decreasing trends Another approach to understanding the behavior of a function at a specific point is to examine its behavior in the vicinity of that point. By analyzing the function's behavior for increasing or decreasing trends, we can determine whether the function is reaching a maximum, minimum, or inflection point at the given location. This analysis provides valuable insights into the overall behavior of the function and helps us make accurate truth statements about its characteristics. C. Practical examples: analyzing points on revenue functions for a business or acceleration in physics Practical examples offer real-world applications of understanding mathematical functions at specific points. For instance, in business, analyzing points on revenue functions helps determine the maximum revenue or the break-even point for a product or service. This analysis guides business decisions and strategic planning. Similarly, in physics, analyzing points on acceleration functions provides insights into the motion of objects and helps predict their behavior in various scenarios. These practical examples demonstrate the significance of understanding mathematical functions at specific points in different fields. Troubleshooting Common Misconceptions and Errors When dealing with mathematical functions, it's important to be aware of common misconceptions and errors that can arise when analyzing a function at a specific point. By understanding and addressing these issues, you can ensure a more accurate interpretation of the function's behavior. A. Mistaking local extrema for global ones One common mistake when analyzing a function at a specific point is mistaking a local extremum for a global extremum. It's important to remember that a local extremum only represents the highest or lowest point within a specific interval, while a global extremum is the highest or lowest point across the entire domain of the function. Example: At a point on the function, if the function reaches a high point, it may appear to be a global maximum. However, upon closer examination, it could be a local maximum within a smaller interval, and the actual global maximum may lie elsewhere within the function's domain. B. Ignoring the domain restrictions that affect the function's behavior at a point Another common error is ignoring the domain restrictions that can affect the function's behavior at a specific point. The domain of a function defines the set of all possible input values, and any restrictions within the domain can significantly impact the function's behavior at a given point. Example: If a function has a domain restriction that excludes certain values, it's crucial to consider how this restriction affects the behavior of the function at a specific point. Ignoring domain restrictions can lead to misinterpretations of the function's characteristics at that point. C. Misinterpreting points of inflection with no change in concavity Points of inflection are often misunderstood, particularly when there is no change in concavity at the point. A point of inflection occurs when the concavity of the function changes, transitioning from concave up to concave down, or vice versa. However, it's important to note that not all points where the second derivative is zero represent points of inflection. Example: If the second derivative of a function is zero at a specific point, it does not necessarily indicate a point of inflection. It's essential to analyze the behavior of the function around that point to determine if there is a change in concavity, as this is the defining characteristic of a point of inflection. By addressing these common misconceptions and errors, you can enhance your understanding of mathematical functions and make more accurate interpretations when analyzing a function at a specific Conclusion & Best Practices in Understanding Mathematical Functions A Recap of the importance of correctly analyzing points on a function’s graph Understanding mathematical functions is crucial for various fields such as engineering, physics, economics, and computer science. Correctly analyzing points on a function’s graph is essential for making accurate predictions and decisions based on the data. It allows us to understand the behavior of the function and its relationship with the variables involved. Best practices: continuous practice with different functions, verifying results with multiple methods (algebraically and graphically), and peer discussions for diverse perspectives • Continuous Practice: To gain a deep understanding of mathematical functions, continuous practice with different types of functions is essential. This helps in recognizing patterns and understanding the behavior of various functions. • Verifying Results: It is important to verify results using multiple methods such as algebraic manipulation and graphical analysis. This not only ensures the accuracy of the analysis but also provides a comprehensive understanding of the function. • Peer Discussions: Engaging in discussions with peers who have diverse perspectives can provide valuable insights into different approaches to analyzing mathematical functions. It encourages critical thinking and broadens the understanding of the subject. Encouragement to explore real-world applications of mathematical functions to cement understanding Real-world applications of mathematical functions can help in cementing the understanding of their significance. From predicting stock market trends to modeling the spread of diseases, mathematical functions play a crucial role in various real-world scenarios. Exploring these applications not only reinforces the understanding of functions but also highlights their practical relevance.
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-point-shown-function-true","timestamp":"2024-11-11T17:23:09Z","content_type":"text/html","content_length":"231125","record_id":"<urn:uuid:b9137fd1-3315-4e25-a370-67c4deb2694b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00455.warc.gz"}
Navigator Suite - Catalog - View Catalog BS Degree in Mathematics {22-23} B.S. Degree in Mathematics A B.S. Degree in Mathematics with the Computational emphasis includes courses from several areas of mathematics, including mathematical analysis, statistics, and computer modeling and simulation. Many of the courses will involve heavy use of computers. Computational mathematics is focused on the skills needed to solve real-world problems. The program includes 41 credits of required Math courses, starting with the Calculus sequence, as well as two other Math courses chosen from specific lists. At least one of those two additional Math courses must be chosen to complete a two-semester upper level sequence. Typical choices for that upper level sequence for Computational majors would be Math 335/435, Math 311/411 or Math 366/466, although other options are available. Students should see their advisor for additional discussion. In addition, 15 credits of CSIS courses are required for this program. Students pursuing this degree have enough free electives to pursue a minor or to explore other academic interests. To receive the B.S. Degree in Mathematics, the student must meet the minimum university requirements and specific requirements for the program. Completion of 120 credits is required for this degree which includes the Liberal Arts and Sciences Curriculum. Student Learning Outcomes • Demonstrate an understanding of the theory and applications of Calculus. • Apply technology to solving problems. • Demonstrate the ability to write and analyze proof and/or use models to make real world predictions. • Demonstrate an ability to precisely communicate ideas orally and in writing. • Demonstrate an understanding of the breadth of mathematics and its deep interconnecting principles. • Apply critical thinking skills to solve multi-step problems and perform complex tasks. • Demonstrate the mathematical skills and knowledge to facilitate a life of ongoing and independent learning. Program Delivery Mode Land plus: face-to-face where some online courses may be available or required Core Requirements ( 23 credits ) All majors must complete the ETS Major Field Test in Mathematics. MATH 260 Computer Calculus (1) MATH 261 Calculus I (4) MATH 262 Calculus II (4) MATH 311 Introduction to Proof and Abstract Mathematics (3) MATH 323 Multi-Variable and Vector Calculus (4) MATH 327 Introduction to Linear Algebra (3) MATH 335 Intermediate Probability and Statistics I (4) Designated Writing Intensive Course for Major MATH 491 Mathematical Writing (3) Emphasis in Computational Math Program Requirements ( 18 credits ) MATH 291 LaTex (1) MATH 355 Mathematical Modeling (3) MATH 461 Intermediate Analysis I (4) or MATH 435 Mathematical Statistics I (4) MATH 366 Differential Equations (3) MATH 450 Numerical Analysis I (4) MATH 491 Mathematical Writing (3) Students completing a BS in Mathematics with a Computational emphasis must take an upper level sequence chosen from the following list. MATH 335 AND MATH 435, OR MATH 366 AND MATH 466, OR MATH 311 AND MATH 411, OR MATH 327 AND MATH 427 The first course in each of these sequences is a required course for the emphasis, and the second course is an allowed option or upper level Math elective in the program. Related Requirements ( 15 credits ) Students must take fifteen credits of approved Computer Science and Information Systems courses. These must include the following courses: CSIS 152 Introduction to Computers and Programming Ia (3) and CSIS 153 Introduction to Computers and Programming Ib (3) and CSIS 252 Introduction to Computers and Programming II (3) plus any two of the following CSIS courses: CSIS 304 Databases (3) CSIS 335 Graphical User Interface Programming (3) CSIS 336 C#.Net Programming (3) CSIS 349 Networks and Data Communications (3) CSIS 352 Advanced Concepts in Programming (3) CSIS 360 Linux Programming and Development Tools (3) CSIS 446 Decision Support Systems (3) CSIS 450 Programming Languages (3) Restricted Electives ( 3 credits ) Students must take three credits in mathematics at the level of Math 300 or higher and may not include Math 302, 303, 304, 316, 386, 402, 406, 416, or 486.
{"url":"https://navigator.mnstate.edu/Catalog/ViewCatalog.aspx?pageid=viewcatalog&catalogid=39&chapterid=581&topicgroupid=5284&loaduseredits=False","timestamp":"2024-11-05T18:57:11Z","content_type":"application/xhtml+xml","content_length":"39951","record_id":"<urn:uuid:b3cdcb3a-5a53-4d9e-926a-00cc71ee5e9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00829.warc.gz"}
Greek Letters in R Plot Label and Title - R FAQS - Learn R Greek Letters in R Plot Label and Title Introduction to Greek Letters in R Plot The post is about writing Greek letters in R plot, their labels, and the title of the plots. There are two main ways to include Greek letters in your R plot labels (axis labels, title, legend): 1. Using the expression Function This is the recommended approach as it provides more flexibility and control over the formatting of the Greek letters and mathematical expressions. 2. Using raw Greek letter Codes This method is less common and requires memorizing the character codes for each Greek letter. Table of Contents Question: How one can include Greek letters (symbols) in R plot labels? Answer: Greek letters or symbols can be included in titles and labels of a graph using the expression command. Following are some examples Note that in these examples random data is generated from a normal distribution. You can use your own data set to produce graphs that have symbols or Greek letters in their labels or titles. Greek Letters in R Plot The following are a few examples of writing Greek letters in R plot. Example 1: Draw Histogram mycoef <- rnorm (1000) hist(mycoef, main = expression(beta) ) where beta in expression is the Greek letter (symbol) of $latex \beta$. A histogram similar to the following will be produced. Example 2: sample <- rnorm(mean=5, sd=1, n=100) hist(sample, main=expression( paste("sampled values, ", mu, "=5, ", sigma, "=1" ))) where mu and sigma are symbols of $latex \mu$ and $latex \sigma$ respectively. The histogram will look like Example 3: curve(dnorm, from= -3, to=3, n=1000, main="Normal Probability Density Function") will produce a curve of Normal probability density function ranging from $latex -3$ to $latex 3$. Normal Density Function To add a normal density function formula, we need to use the text and paste command, that is text(-2, 0.3, expression(f(x) == paste(frac(1, sqrt(2*pi* sigma^2 ) ), " ", e^{frac(-(x-mu)^2, 2*sigma^2)})), cex=1.2) Now the updated curve of the Normal probability density function will be Example 4: x <- dnorm( seq(-3, 3, 0.001)) plot(seq(-3, 3, 0.001), cumsum(x)/sum(x), type="l", col="blue", xlab="x", main="Normal Cumulative Distribution Function") The Normal Cumulative Distribution function will look like, To add the formula, use the text and paste command, that is text(-1.5, 0.7, expression(phi(x) == paste(frac(1, sqrt(2*pi)), " ", integral(e^(-t^2/2)*dt, -infinity, x))), cex = 1.2) The curve of Normal Cumulative Distribution Function The Curve of the Normal Cumulative Distribution Function and its formula in the plot will look like this,
{"url":"https://rfaqs.com/r-graphics/plot-function/greek-letters-in-r-plot-label/","timestamp":"2024-11-09T04:36:55Z","content_type":"text/html","content_length":"195213","record_id":"<urn:uuid:8b48543f-3b84-4f5d-a33a-3158fb7fbf03>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00388.warc.gz"}
Harbin Miles, Ruth Ruth Harbin Miles Mary Baldwin College, VA Ruth Harbin Miles coaches rural, suburban, and inner-city school mathematics teachers. Her professional experiences include coordinating the K-12 Mathematics Teaching and Learning Program for the Olathe, Kansas, Public Schools for more than 25 years; teaching mathematics methods courses at Virginia’s Mary Baldwin College; and serving on the Board of Directors for the National Council of Teachers of Mathematics, the National Council of Supervisors of Mathematic, and both the Virginia Council of Teachers of Mathematics and the Kansas Association of Teachers of Mathematics. Ruth is a co-author of five Corwin books including A Guide to Mathematics Coaching, A Guide to Mathematics Leadership, Visible Thinking in the K-8 Mathematics Classroom, The Common Core Mathematics Standards, and Realizing Rigor in the Mathematics Classroom. As co-owner of Happy Mountain Learning, Ruth specializes in developing teachers’ content knowledge and strategies for engaging students to achieve high standards in mathematics.
{"url":"https://www.sagepub.com/hi/sam/author/ruth-harbin-miles","timestamp":"2024-11-05T13:30:37Z","content_type":"text/html","content_length":"150802","record_id":"<urn:uuid:5342edf3-66e1-4f1c-8930-e3bf181f29d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00041.warc.gz"}
Choices for axioms I am fairly used to debates among mathematicians, constructivists, intuitionists etc. over principles like the law of excluded middle or the axiom of choice. However I am somewhat at a loss when it comes to some of the axioms that arise in type theory, like function extensionality. I understand that when we assume an axiom we are fabricating an element of a type which cannot actually be constructed. So in the case of function extensionality, one is asserting that there's an inhabitant of the equality type. This leads to something like Definition p : f = g := (proof involving function extensionality) Definition t : nat := match p with | refl f => 0 where the match statement will not simplify to zero because p is not constructed with refl. Therefore we have some blocked computation. This destroys canonicity because now we have a term of natural number type which is not a numeral. Ok. However I don't really understand the practical repercussions of this. As a mathematician interested in formalizing mathematics can someone give me a reason why I might be concerned about this? What justification might someone have for trying to avoid this? I guess in the case above, the term t is propositionally equal to zero even if not judgementally equal. Short answer: For mathematicians proving things, it mostly doesn't matter. For people wanting to compute things, it can get in the way. (The first group overlaps with the second). It is possible to come up with a model of type theory where function extensionality doesn't hold, but I don't know if that is of much interest to somebody formalizing mathematics unrelated to logic. Reminds me of a related question: contrary to some axioms like excluded middle, it's easy to provide a computation rule for FunExt which just returns refl, making it possible to compute with it. I think having such a computation rule in the empty context should be safe, but has anyone worked out the implications of having such a reduction rule in other (potentially inconsistent) contexts? what does "safe" mean? refl does not have type (fun x : bool => x) = (fun x : bool => if x then true else false) so it would break subject reduction Ali Caglayan said: Short answer: For mathematicians proving things, it mostly doesn't matter. For people wanting to compute things, it can get in the way. (The first group overlaps with the second). Ok, this is helpful. This problem is intangible to me. Can you think of any good examples of computations we would want to carry out which would be hindered by assuming function extensionality? I am not sure whether the kinds of computations I'm interested in would be hindered by assuming this axiom. If I only use it for proving correctness results about algorithms rather than defining the algorithms themselves it should be fine. I would guess that's basically it. If you only use it "at the end", for proving equalities, but not for defining programs, then you should be mostly fine. agree with Théo — the exception is ver e.g. if you use Definition some_function. ... rewrite foo. ... Defined., and foo is an equality proof that does not reduce to eq_refl (or is not even propositionally equal to it) because it is stuck on functional extensionality. However, such definitions are questionable even without axioms, because they'll often be stuck on Qed (https://gmalecha.github.io/reflections/2017/qed-considered-harmful has examples of the problems, but I would NOT recommend those solutions). (also OTOH, equalities on types with decidable equalities can be okay, since those equalities are provably proof irrelevant anyway) @Paolo Giarrusso Hm, ok so just to be sure I understand you correctly, if f, g are not themselves judgementally equivalent then we can never have that some p : f =g is even propositionally equivalent to refl, right? Because f =f and f=g are different types. Or do you mean up to transport along p or something? I am not sure if your point about decidable equality is meant to be directly related to the functional extensionality question. I probably spoke too quickly; the decidable equality point is about equalities which are "easier" to mix with data (the example I use most often is { x : T | bool_pred T = true }) and it remains true that if you don't mix proofs and computations, you won't get stuck on funext in computation (at least IIANM) Last updated: Oct 13 2024 at 01:02 UTC
{"url":"https://coq.gitlab.io/zulip-archive/stream/237977-Coq-users/topic/Choices.20for.20axioms.html","timestamp":"2024-11-13T21:48:58Z","content_type":"text/html","content_length":"13738","record_id":"<urn:uuid:cf0e3110-330c-4cd5-9482-34f25fe8933b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00806.warc.gz"}
Recursion Vs. Loops: A Simple Introduction to Elegant Javascript One of the key features of functional programming is recursion. A recursive function is one that calls itself. Wait - what? Yes - a function that loops over itself to do its thing. Why? Because we're adventurous like that! But also, because understanding recursion can open up a new perspective on problem-solving, and lead us to write elegant, self-explanatory, and surprisingly versatile code. When Iteration Feels a Bit Loopy... Let's begin with our familiar friend, the for loop. It's been with us since we started learning to code, helping us count sheep when we can't sleep, and even graciously assisting us to traverse arrays and manipulate data. But we've also had our fair share of messy break-ups (pun intended). It's not uncommon to find ourselves tangled in complex loop constructs, with several exit conditions and nested iterations, which could turn our code into a labyrinth of hard-to-navigate and debug logic. Enter Recursion: The Self-Reflective Solution In simplest terms, recursion is when a function calls itself until it doesn't. This technique is a fundamental concept in many programming languages and can sometimes offer a more elegant solution to problems that typically require loops. function countDownFrom(n) { if (n > 0) { countDownFrom(n - 1); countDownFrom(5); // Outputs: 5 4 3 2 1 With recursion, our code becomes a straightforward translation of the problem's logic: start from number n, print it, and then do the same for n - 1. The base case (when n is not greater than 0) stops the recursion. Recursion Advantages Recursion can make your code cleaner and easier to understand, especially when dealing with problems with a hierarchical or nested structure, such as traversing a file directory, manipulating nested arrays or objects, and solving complex mathematical problems. It can be a potent tool when used appropriately in your code: 1. Clarity and Readability: Recursive functions can often be written in a more straightforward and readable way than their iterative counterparts. This makes the code easier to understand and 2. Less Code: Recursive functions can frequently achieve the same result with less code than iterative solutions. This is particularly true when dealing with inherently recursive problems, like traversing tree-like data structures. 3. Elegant Problem Solving: Problems involving complex nested structures, backtracking, or divide-and-conquer techniques often have elegant and intuitive solutions when approached with recursion. 4. No State Management: With recursive functions, you don't need to keep track of the state with additional variables as you would in a loop. Each function call has its execution context, reducing the chance of bugs caused by mutable shared state. 5. Natural Fit for Certain Data Structures: Recursive algorithms are a natural fit for certain types of data structures, such as trees and graphs. Functions to traverse or search these structures are often simpler and more intuitive when written recursively. 6. Sequential Processing: In certain scenarios where you must ensure a sequence of operations happen one after another (for instance, when dealing with asynchronous tasks), recursion can help manage these sequential processes more intuitively than loops. 7. Solving Complex Problems: Recursion is often used in algorithmic problems such as sorting, searching, and traversing. Algorithms like Merge Sort, Quick Sort, Tree, and Graph traversals are easier to implement with recursion. Loops or Recursion? Before we get deeper, let's understand when NOT to use recursion. Loops and recursion both have their strengths and weaknesses, and deciding when to use one over the other largely depends on the specific problem at hand and the context. Loops are generally more efficient in terms of performance, and they should be your go-to choice for simple iterations and manipulations on flat data structures. Recursion shines in scenarios where the problem is recursive, such as traversing a DOM tree or a file directory. Here are some scenarios where using loops might be a more suitable choice: 1. Performance Concerns: Loops are generally more efficient than recursion regarding time and space complexity. Recursive calls can lead to increased memory usage because each function call is added to the stack, while a loop only requires a single memory allocation. Using a loop may be a better choice if you're dealing with a large data set or performance-critical application. 2. Language Limitations: Some languages, including JavaScript, have a maximum call stack size, which limits the depth of your recursion. You'll run into a stack overflow error if you exceed this limit. So a loop might be a safer option for problems where you expect deep recursive calls. 3. Simple Iteration: If you're working with a flat structure (like a simple array or list) and performing a straightforward operation that doesn't involve nested elements or dependencies between elements, a loop is a more straightforward and efficient choice. 4. Mutative Operations: If you're performing an operation that requires changing the state of a variable during each iteration, a loop is usually a clearer and more efficient solution. 5. Unpredictable Termination: If the termination condition is not straightforward or predictable, using a loop with a clearly defined exit condition might be better. 6. Understanding and Readability: If your team or collaborators are more comfortable with iterative constructs and could find recursive solutions confusing, it may be best to stick with loops. As you can see, performance is the key issue to keep in mid. We must be cautious when using recursion in JavaScript due to the language's call stack limit. Exceeding this limit will result in a "Maximum call stack size exceeded" error. This can be mitigated by using techniques like tail call optimization (in ECMAScript 2015). Remember, though, recursion isn't a magic bullet and doesn't come without its trade-offs. Excessive or inappropriate use of recursion can lead to issues like stack overflow errors and can be more computationally expensive than iterative solutions. Always weigh your options and use the right tool for the job. More Recursive Fun! Ok - so we have some ideas on when to use recursion, and when not to use it. Now, lets look at a few key examples to understand how this crazy recursion thing works. 1. Fibonacci Sequence: The Fibonacci sequence is a classic recursion example. Each number in the sequence is the sum of the two preceding ones. function fibonacci(n) { if (n <= 1) { return n; } else { return fibonacci(n - 1) + fibonacci(n - 2); console.log(fibonacci(10)); // Outputs: 55 2. Sum of an Array: Although this is more efficiently done with a loop, it is a good recursion example. function sumOfArray(array) { if (array.length === 0) { return 0; } else { return array[0] + sumOfArray(array.slice(1)); console.log(sumOfArray([1, 2, 3, 4, 5])); // Outputs: 15 3. Flatten a Nested Array: This function recursively flattens nested arrays into a single array. function flattenArray(array) { if (array.length === 0) { return []; if (Array.isArray(array[0])) { return flattenArray(array[0]).concat(flattenArray(array.slice(1))); } else { return [array[0]].concat(flattenArray(array.slice(1))); console.log(flattenArray([1, [2, [3, 4], 5]])); // Outputs: [1, 2, 3, 4, 5] 4. Find a key in a nested object: When you have a complex nested objects with a potentially unknown number of levels, this can be a great way to find a needle in a haystack. function findKey(obj, key) { if (key in obj) { return obj[key]; for (let i in obj) { if (typeof obj[i] === 'object') { let found = findKey(obj[i], key); if (found) return found; return null; What's the Big Deal with Recursion and Loops? When coding in JavaScript, you often encounter problems that need you to do something over and over, like going through a list of items. You can solve these problems using loops, like the for loop, which is pretty straightforward. But there's another cool way called recursion, where a function calls itself to solve a problem bit by bit. How Do I Choose Between Recursion and Loops? Choosing between recursion and loops depends on what you're trying to do. If your task involves a lot of straightforward, repeat actions on things like lists or arrays, loops are your best friend because they're simple and don't eat up too much memory. Recursion is awesome for more complex stuff, like when you're dealing with nested lists or structures (think of a folder within a folder), because it can make your code cleaner and easier to follow. What's This Tail Call Optimization Thing? Tail Call Optimization (TCO) is a fancy term for a way that some programming languages, including JavaScript, can make recursion more memory-efficient. It's supposed to prevent your program from crashing by using too much memory if you're doing a lot of recursions. However, not all browsers and JavaScript environments handle it the same way, so it's a bit hit or miss whether you can rely on it to save your recursive functions from causing problems. Recursion Sounds Cool, But How Do I Keep It From Getting Too Confusing? Recursion can get tricky, especially when trying to figure out why something's not working right. When you're debugging (that's coder talk for fixing bugs in your code) recursive functions, start by checking your base case (the condition that stops the recursion) to make sure it's set up correctly. Also, take it slow and maybe use some code tracing tools or good old-fashioned pen and paper to keep track of what your function is doing at each step. It's like solving a mystery, and every little clue helps! So, When Should I Really Use Recursion Over Loops? Recursion is great for problems where you're dealing with things that have a lot of layers, like a tree or a set of nested folders. It makes your code look cleaner and can be easier to understand once you get the hang of it. Loops, on the other hand, are better for when you're just going through a list or array and doing something with each item. They're also a bit easier on your computer's Recursion is a powerful tool in your JavaScript toolbox. It can make your code cleaner, more elegant, and easier to understand. It can be very helpful in breaking down complex problems into simpler, smaller problems, and these examples are just the tip of the iceberg. However, use it wisely! Always consider the problem you're solving and the structure of your data.
{"url":"https://community.appsmith.com/content/blog/recursion-vs-loops-simple-introduction-elegant-javascript","timestamp":"2024-11-08T02:34:01Z","content_type":"text/html","content_length":"59220","record_id":"<urn:uuid:99b8145f-7fad-4e36-924d-b2681bbf691f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00625.warc.gz"}
Algebra 2 Tutoring in Redondo Beach, CA | Hire the Best Tutors Now! I am a hardworking and passionate Mathematics and Physics tutor with experience tutoring in all levels (middle school through college) across various tutoring companies (ranging from bespoke, Academic Management to informal test prep). I bring a wealth of strategies intended to promote development of critical skills such as creative problem solving, analytical and independent thinking, time management, organization, self-discipline, self-advocacy, test preparation, and novel study skills. In addition to my professional tutoring responsibilities, I am a full-time ... See more
{"url":"https://heytutor.com/tutors/algebra-2/ca/redondo-beach/","timestamp":"2024-11-02T18:29:25Z","content_type":"text/html","content_length":"196195","record_id":"<urn:uuid:3919d117-c9d0-45f0-be55-80bac44df4f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00701.warc.gz"}
Geometric - Modelling written 5.2 years ago by modified 2.5 years ago by 1 Answer Geometric Modelling is the field that discusses the mathematical methods behind the modelling of realistic objects for computer graphics and computer aided design. The three principal classifications of the geometric modelling systems are: 1) Wireframe modelling 2) Surface modelling 3) Solid modelling Wireframe Modelling A wireframe model represents the shape of a solid object with its characteristic lines and points. The word “wireframe” is related to the fact that one may imagine a wire that is bent to follow the object edges to generate a model. In other words, a wire frame model is an edge or skeletal representation of a real-world 3D object using lines and curves. Model consists entirely of points, lines, arcs and circles, conics, and curves. In 3D wireframe model, an object is not recorded as a solid. Instead the vertices that define the boundary of the object or the intersections of the edges of the object boundary are recorded as a collection of points and their connectivity. One can use a wire frame model to 1) View the model from any vantage point 2) Generate standard orthographic and auxiliary views automatically 3) Generate exploded and perspective views easily 4) Analyse spatial relationships, including the shortest distance between corners and edges, and checking for interferences 5) Reduce the number of prototypes required • Simple to construct for 2D and simple and symmetric 3D objects. • Designer needs little training • System needs little memory • Take less manipulation time • Retrieving and editing can be done easy • Consumes less time • Best suitable for manipulations as orthographic isometric and perspective views. • Image causes confusion • Cannot get required information from this model • Hidden line removal features not available • Not possible for volume and mass calculation, NC programming cross sectioning etc. • Not suitable to represent complex solids Surface Modelling A surface model is a set of faces. A surface model consists of wireframe entities that form the basis to create surface entities the basis to create surface entities. A Surface modelling is a model with minimized ambiguous representation than the wireframe modelling but not as good as solid modelling. The construction of surface modelling is done with the use of geometric entities like surfaces and curves. Surface Modelling uses B - Splines and Bezier mathematical techniques for controlling curves. It is used to make technical surfaces (e.g. air plane wing) or aesthetic surfaces (e.g. car’s hood). It was developed for the aerospace and automotive industries in the late 70s. Overall on the basis of performance, the surface modelling stays in between wireframe modelling and solid modelling for representing a realism object. • It is less ambiguous. • Complex surfaces can be easily identified. • It removes hidden line and adds realism. • Difficult to construct. • Difficult to calculate mass property. • More time is required for creation. • Requires high storage space as compared to wire frame modelling. • Also requires more time for manipulation. Solid Modelling Solid Modelling is a modelling that provides a complete representation of an object than a wire frame modelling and surface modelling. In this model, the appearance of an object is displayed in solid design. A solid modelling is defining an object with geometric mass. Solid modelling programs usually create models by creating a base solid and adding or subtracting from it with subsequent features. It was originally developed for machine design, and is used heavily for engineering with large part assemblies, digital testing and rapid prototyping. • Complete modelling. • Unambiguous. • Best suitable for calculating mass properties. • Very much suitable for automated applications. • Fast creation. • Gives huge information. • Requires large memory. • Slow manipulation. • Some manipulations can be complex and require tedious procedure to add an answer.
{"url":"https://www.ques10.com/p/48698/geometric-modelling-1/","timestamp":"2024-11-09T16:14:19Z","content_type":"text/html","content_length":"29821","record_id":"<urn:uuid:1bb8ea3d-e275-4c13-a86f-38d8369bba3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00637.warc.gz"}
Expansion of covariant derivative General questions (367) At the end of the page Programming in Cadabra, there is a function expand_nabla, which expands the covariant derivative in terms of partial derivative and contractions with the connection. In that implementation the dummy index contracting the connection with the tensor is called p, i.e. is fixed. Since the dummy index is fixed, consecutive application of the covariant derivative cannot be expanded. For example, if we consider the code below, foo := \nabla_{a}{ \nabla_{b}{ h^{c}_{d} } }; an error is raised since the expansion of the inner covariant derivative introduces the index p, and then the expansion of the second covariant derivative attempts to introduce another p index (already in use). How could the definition of the expand_nabla function be generalised to modify automatically the name of the dummy index? Moreover, assuming that we have a set of symbols with the property Indices assigned, Is it possible to preserve the type of index? i.e. to introduce a new dummy index using a symbol of the list. The answer I'm posting (after two years) is based on an idea by @dominicprice (former student of Kasper, and developer of many aspects of cadabra). The code def expand_nabla(ex, *, index_name=None, index_set=None): if index_name is None: index_name = "expandnablaidx" Indices($expandnablaidx, expandnablaidx#$, $expandnablaindices$) index_counter = 1 for nabla in ex[r'\nabla']: dindex = nabla.indices().__next__() for arg in nabla.args(): for index in arg.free_indices(): t2:= @(arg); newidx = Ex(index_name + str(index_counter)) index_counter += 1 if index.parent_rel==sub: t1:= -\Gamma^{@(newidx)}_{@(dindex) @(index)}; t2[index]:= _{@(newidx)}; t1:= \Gamma^{@(index)}_{@(dindex) @(newidx)}; t2[index]:= ^{@(newidx)}; ret += Ex(str(nabla.multiplier)) * t1 * t2 nabla += ret distribute(ex, repeat=True) if index_set is not None: rename_dummies(ex, "expandnablaindices", index_set) return ex How does it work? The problem I found with the algorithm expand_nabla reported in the cadabra book https://cadabra.science/notebooks/ref_programming.html, was that when considering higher order derivatives, the name of the dummy index wouldn't change, resulting in a "repeated indices" error. It is desirable that the algorithm could use one of the declared indices! Therefore, we have to give a name to the set of indices: {a, b, c, d, e, f, g, h, i, j, k ,l}::Indices(space, position=fixed). {\mu,\nu,\lambda,\rho}::Indices(spacetime, position=fixed). NOTE: there are two types of indices: space and spacetime. Now, we declare an expression and expand the nabla operator (covariant derivative). foo := \nabla_{c}{ h^{a}_{b} }; expand_nabla(_, index_set="space"); Since we have asked to expand with dummy indices of type space, the result is $$\partial{c}{h^{a},{b}}+\Gamma^{a},{c d} h^{d},{b}-\Gamma^{d},{c b} h^{a},{d}$$ However, if we expand indices of type spacetime, we get $$\partial{c}{h^{a}\,{b}}+\Gamma^{a}\,{c \mu} h^{\mu}\,{b}-\Gamma^{\mu}\,{c b} h^{a}\,{\mu}$$ Higher orders Higher orders (I was able to calculate up to third order) can be computed with ease, for example: foo := \nabla_{e}{\nabla_{d}{\nabla_{c}{ h^{a}_{b} }}}; #expand_nabla(_, index_set="vector"); expand_nabla(_, index_name=r'\lambda'); The index_name=r'\lambda' ask the algorithm to use the name \lambda to expand the dummy indices. I faced the same problem in my time, so I ended up writing a function like this. def select_index(used_indices): indeces = r'w v u s r q o n m l k j i h g f e d c b a'.split() for uind in indeces: found = False for qind in used_indices: if qind == uind: found = True if not found: index = uind return Ex(index), used_indices def one_nabla(ex, used_indices): t3, used_indices = select_index(used_indices) free = dict() free['sub'] = set() free['up'] = set() for nabla in ex[r'\nabla']: dindex = nabla.indices().__next__() for arg in nabla.args(): for index in arg.free_indices(): if index.parent_rel==sub: for key in free.keys(): for index in free[key]: ind = Ex(index) if key == 'sub': t1:= -\Gamma^{@[t3]}_{@(dindex) @[ind]}; t1:= \Gamma^{@[ind]}_{@(dindex) @[t3]}; t2:= @[arg]; for term_index in arg.free_indices(): if str(term_index.ex()) == index: if term_index.parent_rel==sub: t2[term_index]:= _{@[t3]}; t2[term_index]:= ^{@[t3]}; ret += Ex(str(nabla.multiplier)) * t1 * t2 nabla += ret # return ex, used_indices def nabla_calculation(ex, used_indices, count): ex = ex.ex() for element in ex.top().terms(): local_count = 0 for nabla in element[r'\nabla']: local_count += 1 if local_count == 0: elif local_count == 1: new, used_indices = (one_nabla(element, used_indices)) count -= 1 if element.ex().top().name == r'\prod': i = 0 while i < 2: for mult in element.ex().top().children(): local_count2 = 0 for nabla in mult[r'\nabla']: local_count2 += 1 if local_count2 == 0: new *= mult.ex() elif local_count2 == 1: for nabla in mult[r'\nabla']: nabla1, used_indices = one_nabla(nabla, used_indices) new *= nabla.ex() mult1, used_indices = nabla_calculation(mult, used_indices, local_count) new *= mult1 new *= Ex(str(element.multiplier)) for nabla in element[r'\nabla']: for arg1 in nabla.args(): arg2, used_indices = nabla_calculation(arg1, used_indices, count - 1) index = nabla.indices().__next__() t := \nabla_{@(index)}{@[arg2]}; new = t nabla1, used_indices = one_nabla(new, used_indices) new = Ex(str(nabla.multiplier)) * nabla1 return ex, used_indices def expand_nabla(ex): if ex.top().name == '\equals': for child in ex.top().children(): for element in child.ex().top().terms(): count = 0 used_indices = set() for nabla in element.ex()[r'\nabla']: count += 1 if count == 0: ret += element.ex() #new_ex += element.ex() for n in element.ex(): for index in n.indices(): #for nabla in element[r'\nabla']: element1, used_indices = nabla_calculation(element, used_indices, count) ret +=element1 for element in ex.top().terms(): count = 0 used_indices = set() for nabla in element.ex()[r'\nabla']: count += 1 if count == 0: for n in element.ex(): for index in n.indices(): element1, used_indices = nabla_calculation(element, used_indices, count) return ex To use it, you need to call the main function expand_nabla(ex). It is suitable for large expressions, does not duplicate already occupied indexes, and also knows how to work with equalities. OK, I went and fixed the example from the programming page and here is my suggestion. It actually takes the idea from @jsem to add a rename_dummies() after each operator expansion, and tries to do so Please do review! def expand_covariant_derivative(ex, operator=r'\nabla'): Expand the covariant derivative operator into partial coordinate derivatives and Christoffel symbols. :operator: which operator is used for the covariant derivative, default `\nabla` **Warning** whatever you choose for the `operator`, you must also add a definition of that operator as a derivative, otherwise indexes will not be handled correctly. For example: # \nabla{#}::Derivative. This function deals with the covariant derivative with respect to generalized coordinates, not with respect to flat coordinates as used in the Vierbein (Tetrad) def expand_one_appearance(ex): Finds the first appearance of `operator` and expands it with a single partial derivative and Christoffel symbols for oper in ex[operator]: dindex = oper.indices().__next__() for arg in oper.args(): for index in arg.free_indices(): t2:= @(arg). if index.parent_rel==sub: t1:= -\Gamma^{\gamma}_{@(dindex) @(index)}. t2[index]:= _{\gamma}. t1:= \Gamma^{@(index)}_{@(dindex) \gamma}. t2[index]:= ^{\gamma}. ret += Ex(str(oper.multiplier)) * t1 * t2 oper += ret return ex # loop that deals with the covariant-derivative operators, one at a time # First we expand one of them # Then we rename dummies, to avoid name conflicts while len(list(ex[operator])) > 0: ex_temp := @(ex). ex = expand_one_appearance(ex_temp) except RuntimeError as reason: return ex
{"url":"https://cadabra.science/qa/2041/expansion-of-covariant-derivative?show=2385","timestamp":"2024-11-07T13:40:33Z","content_type":"text/html","content_length":"65645","record_id":"<urn:uuid:50cb4376-0bac-4353-aa2e-2659fde48b90>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00590.warc.gz"}
Range of a Linear Map Range of a Linear Map Definition: If $T \in \mathcal L (V, W)$ then the Range of the linear transformation $T$ is the subset of $W$ defined as $\mathrm{range} (T) = \{ T(v) : v \in V \}$, that is, the set of all vectors $T(v) \in W$ to which are mapped to from vectors $v \in V$. Before we look at some examples of ranges of vector spaces, we will first establish that the range of a linear transformation can never be equal to the empty set. This should intuitively make sense. If $T : V \to W$ is a linear transformation, then since $V \neq \emptyset$ (since $V$ is a linear transformation) then there must contain at least one element in $V$, and so this element is mapped to some vector in $W$. We will verify this with the following lemma. Lemma 1: If $T \in \mathcal L (V, W)$ then the range of $T$ contains at least one element from $W$, that is $\mathrm{range} (T) \neq \emptyset$. • Proof: Since $V$ and $W$ are vector spaces, then we have that both $V$ and $W$ are nonempty sets. Furthermore, both $V$ and $W$ contain their respective additive identities $0_V$ and $0_W$. From the Null Space of a Linear Map page, we know that $T(0_V) = 0_W$ and so $0_W \in \mathrm{range} (T)$, so $\mathrm{range} (T) \neq \emptyset$. $\blacksquare$ Now the following lemma will tell us that the $\mathrm{range} (T)$ is a subspace of $W$. Lemma 2: If $T \in \mathcal L (V, W)$ then the subset $\mathrm{range} (T)$ is a subspace of $W$. • Proof: Since $\mathrm{range} (T) \subseteq W$, all we must do is verify that $\mathrm{range} (T)$ is closed under addition, closed under scalar multiplication, and contains the zero vector of $W$ • Let $w, y \in W$ and $a \in \mathbb{F}$. • Since $w, y \in W$ we have that there exists vectors $u, v \in V$ such that $w = T(u)$ and $y = T(v)$. Therefore $w + y = T(u) + T(v)$ and since $T$ is a linear transformation then $T(u) + T(v) = T(u + v)$. Therefore $w + y = T(u + v)$ and so $(w + y) \in \mathrm{range} (T)$ so $\mathrm{range} (T)$ is closed under addition. • Once again, since $w \in W$, there exists a vector $u \in V$ such that $w = T(u)$. So $aw = aT(u)$ and since $T$ is a linear tranformation then $aT(u) = T(au)$. Therefore $aw = T(au)$ and so $ (au) \in \mathrm{range} (T)$ so $\mathrm{range} (T)$ is closed under scalar multiplication. • From Lemma 1, we have that $0_W \in \mathrm{range} (T)$, and so $\mathrm{range} (T)$ contains the zero vector of $W$. Therefore $\mathrm{range} (T)$ is a subspace of $W$. $\blacksquare$ Notice that lemmas 1 and 2 above are analogous that of lemmas 1 and 2 from the Null Space of a Linear Map page. It is important to note that if $T : V \to W$ is a linear map from the vector spaces $V$ to $W$, then both the null space of $T$ and the range of $T$ are nonempty and the null space of $T$ is a subspace of $V$, while the range of $T$ is a subspace of $W$. We will now look at some examples of ranges of linear transformations. The Range of the Zero Map If $0 \in \mathcal L (V, W)$ represents the zero map, then $\mathrm{range} (T) = \{ 0 \}$ since every vector $v \in V$ is mapped to $0_W \in W$. The Range of the Identity Map If $I \in \mathcal L (V, V)$ represents the identity map, then $\mathrm{range} (T) = V$ since every vector $v \in V$ is mapped to itself, so the range contains all vectors from $V$. The Range of the Left Shift Operator If $T \in \mathcal L (\mathbb{F}^{\infty}, \mathbb{F}^{\infty})$ represents the left shift operator, then $\mathrm{range} (T) = \mathbb{F}^{\infty}$ since any sequence $(x_2, x_3, ...) \in \mathbb{F} ^{\infty}$ is mapped from the sequence $(x_1, x_2, ...) \in \mathbb{F}^{\infty}$. The Range of the Right Shift Operator If $T \in \mathcal L (\mathbb{F}^{\infty}, \mathbb{F}^{\infty})$ represents the right shift operator, then $\mathrm{range} (T) = \{ (0, x_1, x_2, ...) : x_1, x_2, ... \in \mathbb{F} \}$. We note that any sequence $(x_1, x_2, ...)$ where $x_1 \neq 0$ cannot be in the range of $T$ since the first term of any sequence under $T$ will be zero.
{"url":"http://mathonline.wikidot.com/range-of-a-linear-map","timestamp":"2024-11-09T07:44:05Z","content_type":"application/xhtml+xml","content_length":"20726","record_id":"<urn:uuid:cfb8b8d1-e2c0-48d7-a09a-018f0a992697>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00825.warc.gz"}
First Microgaming 243 Lines Video Slot Coming Up!! Great news Microgaming will launch soon their first 243 ways Video Slot!!! Burning Desire is the first Internet slot to make use of ways instead of paylines to calculate wins. Instead of a set configuration of paylines, the slot calculates wins on left-to-right combinations regardless of their position on each reel. A winning combination is any combination from left to right (starting on Reel 1) of winning symbols. In order to pay, a winning combination must begin on the first reel, and contain symbols on each subsequent reel in any position (top, center, or bottom). Burning Desire presents a new and more exciting way to play, where instead of being constrained by the number of pay-lines, the player can generate more winning opportunities - up to 243 of them . Casino Rewards Online Casinos: Grand Mondial Casino Players Palace Casino Quatro Casino Rich Reels Casino Villento Casino Last edited: This sounds like a very interesting concept..cannot wait to give it a try! I cant wait to try Burning Desire This sounds AWESOME!!! MsBoss those screenshots are:socool: Last edited by a moderator: wow can't wait to play :socool: JBet casino accept USA? :thank you: luvslots Last edited by a moderator: As far I know all micros shoud get this slot dear so good chance to play it in one of your exsiting accounts, sure pitty microgaming casinos dont accept new signups from the USA anymore. I'm keeping my fingers crossed, maybe soon all this banning and i do think it will happen, it's just a matter of time! :thank you: luvslots Well, i received an email saying that Villento's casino would be getting this slot soon, so just a matter of time before alomst all of them get it i'm sure. looks really nice and fun...can't wait to try out. Thanks for the post before it is available for us to check it out.:thank you: The new Slot is also at Villento Casino and Rich Reels Casino now as well! I played it for a few minutes and its pretty awesome! I'm Burning with Desire to Play Burning Desire! Kotsy got room in your corner for one more? Good 1 Vicki, good things come to those who wait! Last edited by a moderator: 2 more days D 2 MORE DAYS!!! Then WATCH OUT! OMG. 243 lines???? Sounds very expensive unless you have a huge bankroll to start with. (Which I don't!!!!) Thats the beauty of this game FearFacter. it starts you out with $.25 cent per spin but you can win 243 ways. Its like every symbol is a scatter in a way, where you get paid for hitting symbols anywhere on the reels. Also, i realized that say if you hit 5 10's and you have 3 wild symbols on there too, the wilds not substituting for the 10's but in addition of the 5 10's anywhere on the reels, then you would get paid 4 times for the 10's because you would have it 4 different ways. The beauty of the 243 ways of winning is that there is 243 possible lines or ways to win. its pretty addictive, because if you hit the free spins, you can really rack up with 15 free spins and a 3x multiplier. check out this hit i had while playing play money on it. Last edited by a moderator: 243 Lines. Interessting Idea. But this Slot is one of the latest in my Favorites List. Theres no Action at this Slot. Only 15 simple Freegames. One of my Favorite is the new TombRaider - The Secret Sword :socool: i am ssssooooooo~sssooooo~ in love with this game. My heart is burning by desire to win big! Lol I love that song during the freespins.:socool:
{"url":"https://bonusparadise.com/forum/threads/first-microgaming-243-lines-video-slot-coming-up.4413/","timestamp":"2024-11-05T03:30:15Z","content_type":"text/html","content_length":"208284","record_id":"<urn:uuid:5605750b-e425-4fdd-aea0-43be997f2496>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00344.warc.gz"}
Members: 3658 Articles: 2'599'751 Articles rated: 2609 05 November 2024 Article overview Average Entropy of a Subsystem Don N. Page ; Date: 7 May 1993 Journal: Phys.Rev.Lett. 71 (1993) 1291-1294 Subject: gr-qc hep-th Abstract: If a quantum system of Hilbert space dimension $mn$ is in a random pure state, the average entropy of a subsystem of dimension $mleq n$ is conjectured to be $S_{m,n}=sum_{k=n+1}^{mn}frac {1}{k}-frac{m-1}{2n}$ and is shown to be $simeq ln m - frac{m}{2n}$ for $1ll mleq n$. Thus there is less than one-half unit of information, on average, in the smaller subsystem of a total system in a random pure state. Source: arXiv, gr-qc/9305007 Other [GID 524630] pmid10055503 Services: Forum | Review | PDF | Favorites No review found. Note: answers to reviews or questions about the article must be posted in the forum section. Authors are not allowed to review their own article. They can use the forum section.
{"url":"http://science-advisor.net/article/gr-qc/9305007","timestamp":"2024-11-05T19:51:36Z","content_type":"text/html","content_length":"21527","record_id":"<urn:uuid:e23d48e2-895c-46bd-8621-32c0a4d41d52>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00458.warc.gz"}
small ball mill theory WEBJan 13, 2015 · Ball bearings (that fits the axle) Belts (to drive the pulleys) Table (big and stable to fit all the above. Ball mill jar. Some photos of the constuction. The motor is 1/2hp 1380rpm. A 4in pulley that has been presed in the 20mm axle. A 2in pulley for the motor. The axle. WhatsApp: +86 18838072829 WEBDec 1, 2013 · The effect of ball size on the particle size reduction has been investigated first for varying rotation speed of the container. Percent passing and size distributions of the milled Al 2 O 3 powder are shown in Fig. 1, Fig. 2, respectively, as a function of particle size for varying ball average particle sizes (d 50) of the milled Al 2 O 3 powder are . WhatsApp: +86 18838072829 WEBPlanetary ball mills are well known and used for particle size reduction on laboratory and pilot scales for decades while during the last few years the appliion of planetary ball mills has extended to mechanochemical approaches. Processes inside planetary ball mills are complex and strongly depend on the processed material and synthesis and, thus, . WhatsApp: +86 18838072829 WEBJul 2, 2020 · In recent research done by AmanNejad and Barani [93] using DEM to investigate the effect of ball size distribution on ball milling, charging the mill speed with 40% small balls and 60% big balls ... WhatsApp: +86 18838072829 WEBJul 12, 2021 · Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. ... This sample is ground in a Bond ball mill for an arbitrary number of revolutions (N 1 = 50, 100, or 150 ... Bond, Third theory of comminution. Trans ... WhatsApp: +86 18838072829 WEBDec 22, 2015 · Hardinge Conical Ball Mill. Diameter: ft. Length: 16 in. Shell speed: 29 rpm. 1/2 in. steel plate sell. Wet grind ball mill. Access man hole. Comes with: 20 hp electric motor. 230/460V. 1760 rpm. 3 phase, 60 hz. Falk reducer. Ratio: :1. Gear and pinion ratio: 150/19. Hardinge Conical Ball Mill Liners WhatsApp: +86 18838072829 WEBNov 8, 2016 · Design engineers generally use Bond Work Index and Bond energy equation to estimate specific energy requirements for ball milling operations. Morrell has proposed different equations for the Work Index and specific energy, which are claimed to have a wider range of appliion. In this paper an attempt has been made to provide a . WhatsApp: +86 18838072829 WEBDec 1, 2021 · The effects of the balltopowder diameter ratio (BPDR) and the shape of the powder particles on EDEM simulation results and time in the planetary ball mill was investigated. WhatsApp: +86 18838072829 WEBOre Grinding Mill THEORY BALL AND TUBE MILLS Grinding Action INSIDE Mill. ... Efficiency must necessarily be sacrificed to some extent in small mills by capital requirements, and even greater reduction ratios are justified in a singlestage grinding unit. WhatsApp: +86 18838072829 WEBBall milling is a simple, fast, costeffective green technology with enormous potential. One of the most interesting appliions of this technology in the field of cellulose is the preparation and the chemical modifiion of cellulose nanocrystals and nanofibers. Although a number of studies have been repo Recent Review Articles Nanoscale . WhatsApp: +86 18838072829 WEBNov 17, 2021 · For the milling process, g of the asreceived Nb powder were loaded into two separate hardened steel containers of 125 ml volume with steel balls of mm diameter, in a balltopowder ... WhatsApp: +86 18838072829 WEBFeb 1, 2019 · 1. Introduction. Planetary ball mills provide high energy density due to the superimposed effect of two centrifugal fields produced by the rotation of the supporting disc and the rotation of the vials around its own axis in the opposite direction [1].During operation, the grinding balls execute motion paths that result in frictional and impact effects. WhatsApp: +86 18838072829 WEBJul 3, 2017 · Rods in place weigh approximately 400 pounds per cu. ft. and balls in place approximately 300 pounds per cu. ft.. Thus, quantitatively, less material can progress through the voids in the rod mill grinding media than in the ball mill, and the path of the material is more confined. This grinding action restricts the volume of feed which passes ... WhatsApp: +86 18838072829 WEBMill Type Overview. Three types of mill design are common. The Overflow Discharge mill is best suited for fine grinding to 75 – 106 microns.; The Diaphram or Grate Discharge mill keeps coarse particles within the mill for additional grinding and typically used for grinds to 150 – 250 microns.; The CenterPeriphery Discharge mill has feed reporting from both . WhatsApp: +86 18838072829 WEBMar 23, 2022 · A ball mill consists of a cylinder, which is filled with 30–35% of its volume by small steel balls and is rotated through motor. When the cylinder starts to rotate, the balls start to lift under centrifugal and frictional forces and fall back into the cylinder and onto the feed as gravitational pull exceeds those forces (Fig. ). WhatsApp: +86 18838072829 WEBRetsch offers mills with jar capacities from ml up to 150 l and balls are available from mm to 40 mm, see Figure 2. A third and very important characteristic of a ball mill, which also has a great influence on the result of a milling process, is the power of a mill. Depending on the appliion, jars should be moved either slowly for ... WhatsApp: +86 18838072829 WEBJul 4, 2013 · Operation aim of ball mill grinding process is to control grinding particle size and circulation load to ball mill into their objective limits respectively, while guaranteeing producing safely and stably. The grinding process is essentially a multiinput multioutput system (MIMO) with large inertia, strong coupling and uncertainty characteristics. . WhatsApp: +86 18838072829 WEBJun 19, 2015 · The basic parameters used in ball roller structure (power calculations), rod mill otherwise any tumbling mill sizing live; material to must ground, characteristics, Bond Work Index, bulk specific, specific density, desired mill tonnage capacity DTPH, operating % solids or pulp gas, feed size as F80 and maximum 'chunk size', product size as P80 . WhatsApp: +86 18838072829 WEBWe offer industrial grinding balls made of Kyocera alumina (DEGUSSIT AL23 or which are % pure alumina) and zirconia. Both of these technical ceramics are quite dense, which makes them ideal for grinding the vast majority of products. The density of alumina is close to 4 g/cm3, that of zirconia is g/cm3. WhatsApp: +86 18838072829 WEBJan 12, 2023 · In this study, silica nanoparticles (SiO2 NPs) were fabried using a handmade ball mill as a novel, simple, rapid, costeffective, and green approach. The sol–gel method was also used to produce these NPs as a comparative method. The SiO2 NPs produced by both methods were characterized using highresolution transmission . WhatsApp: +86 18838072829 WEBThese miniature ball end mills have a smaller diameter than standard ball end mills, which allows them to provide more precision and better control over material removal during milling tasks. They are suitable for detailed milling appliions such as dental milling or smallelectronics manufacturing. WhatsApp: +86 18838072829 WEBOct 1, 2020 · Fig. 1 a shows the oscillatory ball mill (Retsch® MM400) used in this study and a scheme (Fig. 1 b) representing one of its two 50 mL milling jars. Each jar is initially filled with a mass M of raw material and a single 25 mmdiameter steel ball. The jars vibrate horizontally at a frequency chosen between 3 and 30 Hz. The motion of the jar follows a . WhatsApp: +86 18838072829 WEBNov 1, 1999 · Eccentric vibratory mills — theory and practice. Author links open overlay panel Eberhard Gock a ... The microstructure demonstrated that small particles of starch was successfully coated on the surface of the SHP. ... attritor (stirring ball mill), pin mill, and rolling mill are used for mechanical activation appliions (Balaz, 2008 ... WhatsApp: +86 18838072829 WEBDec 1, 2023 · 1. Introduction. In the field of polymer mechanochemistry, mechanical forces are applied to polymers, leading to chemical transformations in their chains [1], [2], [3], [4].Various methods can be utilized to conduct polymer mechanochemistry, such as single molecule force spectroscopy [5], ultrasoniion [6], and ballmill grinding (BMG) . WhatsApp: +86 18838072829 WEBFloor Mounted Laboratory Grinding Mill. US 11,000. Small Vibratory Ball Mill. US 5,000. to 15 TPH Small Scale Miner's Ball Mill. US 30,000. Mini Ball Mill. US 50,000. Ceramic Ball Mill. WhatsApp: +86 18838072829 WEBFeb 15, 2001 · The present mathematical analysis of the milling dynamics aims at predicting the milling condition in terms of ωd and ωv, for the occurrence of the most effective impact between the ball and vial wall to achieve MA. In the present analysis, the values of rd, rv and ball radius ( rb) are taken as 132, 35 and 5 mm, respectively (typical . WhatsApp: +86 18838072829
{"url":"https://www.antennesplus.fr/2020-11-06/small-ball-mill-theory.html","timestamp":"2024-11-09T19:34:18Z","content_type":"application/xhtml+xml","content_length":"24688","record_id":"<urn:uuid:949a599a-2b80-46c1-9fc7-db65cd4647f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00800.warc.gz"}
Unlocking Quantum Computing's Potential: Unraveling the Enigma of Superposition and EntanglementUnlocking Quantum Computing's Potential: Unraveling the Enigma of Superposition and Entanglement Quantum mechanics, a realm of physics that explores the fundamental behavior of matter at the atomic and subatomic levels, has unveiled the enigmatic concepts of superposition and entanglement. These phenomena, defying classical intuition, hold profound implications for the future of computing and scientific discovery. Exploring Superposition: The Multidimensional Existence of Quantum States Superposition describes the peculiar ability of quantum particles to exist in multiple states simultaneously. Unlike classical objects, which can only occupy a single state at any given time, quantum particles remain in a superposition of states until they are measured or observed. This phenomenon challenges the traditional notion of a particle having a definitive position or property until it is Consider a coin in a superposition of heads and tails. Before it is flipped and observed, the coin does not exist solely as heads or tails; rather, it exists as both heads and tails simultaneously. This paradoxical state persists until the act of observation forces the coin to collapse into a single outcome. Unveiling Entanglement: Quantum Connections that Transcend Distance Entanglement, a profoundly interconnected relationship between quantum particles, establishes an immediate and instantaneous connection between them, regardless of the distance separating them. Changes to one particle instantly affect the state of the other, even if they are light-years apart. This non-local connection defies the speed of light and our conventional understanding of Imagine two entangled electrons, each possessing a property known as spin. When one electron's spin is measured, the spin of the other electron is instantaneously determined, even if the two electrons are separated by billions of miles. This inexplicable correlation between entangled particles has spurred a vibrant debate among physicists and philosophers alike. Harnessing Quantum Potential: Paving the Way for Revolutionary Discoveries and Applications The exploration of superposition and entanglement holds immense potential for transformative discoveries and technological advancements: • Quantum Computing: Superposition and entanglement enable the creation of quantum computers, which leverage the unique properties of quantum systems to perform complex calculations exponentially faster than classical computers. Quantum algorithms show promise for solving computationally intensive problems in fields such as cryptography, materials science, and drug discovery. • Quantum Information Processing: The development of quantum communication protocols that utilize entangled states allows for secure and unbreakable communication channels. Quantum cryptography, based on the principles of quantum mechanics, ensures the confidentiality and integrity of sensitive information transmission. • Precision Measurements: Quantum sensors harness superposition and entanglement to achieve unprecedented sensitivity and precision in measuring physical quantities such as time, magnetic fields, and gravitational waves. These sensors have applications in scientific research, medical imaging, and the exploration of extreme environments. Superposition and entanglement, once considered abstract concepts in the realm of theoretical physics, are now at the forefront of scientific research and technological development. Their profound implications for our understanding of the universe and their potential to revolutionize computing, communication, and scientific discovery are yet to be fully realized. As physicists delve deeper into the enigmatic realm of quantum mechanics, we stand on the cusp of an era where these phenomena will drive transformative advancements and shape the future of human knowledge and technological
{"url":"https://www.carsalerental.com/2024/10/unlocking-quantum-computings-potential.html","timestamp":"2024-11-14T11:25:34Z","content_type":"application/xhtml+xml","content_length":"132528","record_id":"<urn:uuid:e7088634-b9fe-435b-81e4-1084a15b51e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00539.warc.gz"}
The Astonishing Behavior of Recursive Sequences | Quanta Magazine Kristina Armitage/Quanta Magazine In mathematics, simple rules can unlock universes of complexity and beauty. Take the famous Fibonacci sequence, which is defined as follows: It begins with 1 and 1, and each subsequent number is the sum of the previous two. The first few numbers are: 1, 1, 2, 3, 5, 8, 13, 21, 34 … Simple, yes, but this unassuming recipe gives rise to a pattern of far-reaching significance, one that appears to be woven into the very fabric of the natural world. It’s seen in the whorls of nautilus shells, the bones in our fingers, and the arrangement of leaves on tree branches. Its mathematical reach extends to geometry, algebra and probability, among other areas. Eight centuries since the sequence was introduced to the West — Indian mathematicians studied it long before Fibonacci — the numbers continue to attract the interest of researchers, a testament to how much mathematical depth can underlie even the most elementary number sequence. In the Fibonacci sequence, every term builds on the ones that came before it. Such recursive sequences can exhibit a wide range of behaviors, some wonderfully counterintuitive. Take, for instance, a curious family of sequences first described in the 1980s by the American mathematician Michael Somos. Like the Fibonacci sequence, a Somos sequence starts with a series of ones. A Somos-k sequence starts with k of them. Each new term of a Somos-k sequence is defined by pairing off previous terms, multiplying each pair together, adding up the pairs, and then dividing by the term k positions back in the sequence. The sequences aren’t very interesting if k equals 1, 2 or 3 — they are just a series of repeating ones. But for k = 4, 5, 6 or 7 the sequences have a weird property. Even though there is a lot of division involved, fractions don’t appear. Merrill Sherman/Quanta Magazine “Normally we don’t have this kind of phenomenon,” Somos said. “It’s a deceptively simple recurrence, similar to Fibonacci. But there’s a lot behind that simplicity.” Other mathematicians continue to uncover startling connections between Somos sequences and seemingly unrelated areas of mathematics. One paper posted in July uses them to construct solutions to a system of differential equations used to model everything from predator-prey interactions to waves traveling in high-energy plasmas. They are also used to study the structure of mathematical objects called cluster algebras and are connected to elliptic curves — which were the key to cracking Fermat’s Last Theorem. Janice Malouf, a graduate student at the University of Illinois, published the first proof that Somos-4 and Somos-5 sequences are integral (meaning all of their terms are integers) in 1992. Other proofs of the same result by different mathematicians appeared around the same time, along with proofs that the Somos-6 and Somos-7 sequences are integral. This strange property of Somos sequences astounded mathematicians. “Somos sequences intrigued me as soon as I learned about them,” said James Propp, a professor of mathematics at the University of Massachusetts, Lowell. “The fact that Somos-4 through Somos-7 always give integers, no matter how far out you go, seemed like a miracle when you viewed things from a naïve perspective. So a different perspective was required.” Propp found a fresh perspective in the early 2000s, when he and his colleagues discovered that the numbers in the Somos-4 sequence are actually counting something. The terms in the sequence correspond to structures found in certain graphs. For some graphs, it’s possible to pair up vertices (dots) with edges (lines) so that every vertex is connected to exactly one other vertex — there are no unpaired vertices, and no vertex connected to more than one edge. The terms in the Somos-4 sequence count the number of different perfect matchings for a particular sequence of graphs. The discovery not only offered a new perspective on Somos sequences, but also introduced new ways to think about and analyze graph transformations. Propp and his students celebrated by having the result put on a T-shirt. “To me a big part of the allure of math is when you arrive at the same destination by different paths and it seems like something miraculous or deep is going on,” Propp said. “The cool thing about these sequences is there are various points of view that explain why you get integers. There are hidden depths there.” The story changes for higher-numbered Somos sequences. The first 18 terms of Somos-8 are integers, but the 19th term is a fraction. Every Somos sequence after that also contains fractional values. Another type of sequence, developed by the German mathematician Fritz Göbel in the 1970s, is an interesting counterpoint to the Somos sequences. The nth term of the Göbel sequence is defined as the sum of the squares of all the previous terms, plus 1, divided by n. Like the Somos sequences, the Göbel sequence involves division, so we might expect that terms won’t remain integers. But for a while — as the sequence grows enormous — they seem to be. The 10th term in the Göbel sequence is about 1.5 million, the 11th 267-some billion. The 43rd term is far too large to calculate — it has some 178 billion digits. But in 1975, the Dutch mathematician Hendrik Lenstra showed that unlike the first 42 terms, this 43rd term is not an integer. Merrill Sherman/Quanta Magazine Göbel sequences can be generalized by replacing the squares in the sum with cubes, fourth powers, or even higher exponents. (Under this convention, his original sequence is called a 2-Göbel sequence.) These sequences also display a surprising trend of starting with an extended stretch of integer terms. In 1988, Henry Ibstedt showed that the first 89 terms of the 3-Göbel sequence (which uses cubes instead of squares) are integers, but the 90th isn’t. Subsequent research on other Göbel sequences found even longer stretches. The 31-Göbel sequence, for instance, kicks off with a whopping 1,077 integer terms. In July, the Kyushu University mathematicians Rinnosuke Matsuhira, Toshiki Matsusaka and Koki Tsuchida shared a paper showing that for a k-Göbel sequence, no matter the choice of k, the first 19 terms of the sequence are always integers. They were inspired to look into the question by a Japanese manga called Seisū-tan, which translates to “The Tale of Integers.” A frame in the comic book asked readers to figure out the minimum possible value of N[k], the point at which a k-Göbel sequence ceases to produce integer terms. The three mathematicians set out to answer the question. “The unexpected persistence of integers for such an extended duration contradicts our intuition,” Matsusaka said.“When phenomena occur contrary to intuition, I believe there is always beauty present.” They found a pattern of repeating behavior as k increases. By focusing on a finite number of repeating cases, they made the calculation tractable, and they were able to complete the proof. A closer look at the sequence N[k] reveals another surprise: N[k] is prime far more often than you would expect if it were purely random. “With the k-Göbel sequence it’s not just remarkable that they’re integers,” said Richard Green, a mathematician at the University of Colorado. “What’s remarkable is that the prime numbers show up so often. That makes it look like something deeper might be going on.” Though the new paper presents a proof that N[k] is always at least 19, it’s not known if it is always finite, or if there exists a k for which the sequence contains integers indefinitely. “N[k] behaves mysteriously. … There is a fundamental desire to comprehend its underlying pattern,” Matsusaka said. “It might be akin to the joy I felt as a child when solving puzzles given by teachers. Even now, those sentiments from that time linger within me.”
{"url":"https://www.quantamagazine.org/the-astonishing-behavior-of-recursive-sequences-20231116/","timestamp":"2024-11-14T10:47:34Z","content_type":"text/html","content_length":"206087","record_id":"<urn:uuid:e0b70ba3-8420-4654-8816-6d14ab5612c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00534.warc.gz"}
What is a Plane? A plane, in geometry, prolongs infinitely in two dimensions. It has no width. We can see an example of a plane in coordinate geometry. The coordinates define the position of points in a plane. In Maths, a plane is a flat, two-dimensional surface that prolongs infinitely far. A plane is a two-dimensional analogue that could consist of a point, a line and three-dimensional space. Planes can appear as subspaces of a few higher-dimensional spaces, like the room’s walls extended exceptionally far away. These walls experience an independent existence on their own, as in the framework of Euclidean geometry. What is a point? A point is a location in a plane that has no size, i.e. no width, no length and no depth. What is a line? A line is a set of points that stretches infinitely in opposite directions. It has only one dimension, i.e., length. The points that lie on the same line are called collinear points. Plane in Algebra In algebra, the points are plotted in the coordinate plane, and this denotes an example of a geometric plane. The coordinate plane has a number line, extending left to right endlessly and another one extending up and down infinitely. It is impossible to view the complete coordinate plane. Arrows designate the truth that it extends infinitely along the x-axis and the y-axis on the number lines’ ends. These number lines are two-dimensional, where the plane extends endlessly. When we plot the graph in a plane, then the point or a line, plotted does not have any thickness. Also, learn: Plane Meaning A surface comprising all the straight lines that join any two points lying on it is called a plane in geometry. In other words, it is a flat or level surface. In a Euclidean space of any number of dimensions, a plane is defined through any of the following uniquely: • Using three non-collinear points • Using a point and a line not on that line • Using two distinct intersecting lines • Using two separate parallel lines Intersecting Planes Two planes can be related in three ways, in a three-dimensional space. • They can be parallel to each other • They can be identical • They can intersect each other The figure below depicts two intersecting planes. The method to get the equation of the line of intersection connecting two planes is to determine the set of points that satisfy both the planes’ equations. Since the equation of a plane comprises three variables and two equations (because of two planes), solving the simultaneous equations will give a relationship between the three variables, which is equal to the intersection line equation. Properties of a Plane • If there are two different planes than they are either parallel to each other or intersecting in a line, in a three-dimensional space • A line could be parallel to a plane, intersects the plane at a single point or is existing in the plane. • If there are two different lines, which are perpendicular to the same plane then they must be parallel to each other. • If there are two separate planes that are perpendicular to the same line then they must be parallel to each other. What is a Plane Figure? A plane figure is defined as a geometric figure that has no thickness. It lies entirely in one plane. It is possible to form a plane figure c with line segments, curves, or a combination of these two, i.e. line segments and curves. Let’s have a look at some examples of plane figures in geometry such as circle, rectangle, triangle, square and so on. These are given in the below figure.
{"url":"https://mathlake.com/What-is-a-Plane","timestamp":"2024-11-06T02:46:45Z","content_type":"text/html","content_length":"12048","record_id":"<urn:uuid:ad0ad66a-63b1-42a0-8cb8-68313b22edf3>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00369.warc.gz"}
Java Program To Sort A Linked List We will sort a linked list in optimal way here. We will use “LeetCode 148. Sort List” problem as an example & solve it using Java language. The problem statement is: Given the head of a linked list, return the list after sorting it in ascending order. Follow up: Can you sort the linked list in O(n logn) time and O(1) memory (i.e. constant space)? Input: head = [5, 6, 4, 2, 1, 3, 7] Output: [1, 2, 3, 4, 5, 6, 7] As you can see, the expectation here is to sort the linked list in O(n log n) time complexity & O(1) space complexity. Let’s evaluate Merge Sort & Quick Sort, and figure out if they can satisfy both time & space complexity bounds. Standard Merge Sort does sorting in O(n log n) worst time complexity. But its space complexity is O(n) as new array is used while merging the elements. Quick Sort space complexity is O(1) as it can do in-place sorting easily. Its average time complexity is O(n log n), but worst time complexity can go up to O(n ^ 2). So none of them completely satisfy expected time & space complexity bounds. We would need to tweak the approach of the standard sorting algorithm. We will be using modified Merge Sort algorithm which will do in-place sorting of the linked list. I have already explained standard Merge Sort algorithm previously. You can go through that post if you need reference. There I have used array as an example. There are some basic differences how we would follow the steps while using an array & a linked list. Space Complexity: In-place Merge Sort can be done for arrays also. But it might require shifting a lot of elements within the original array. It makes the logic a bit more complicated & also there is a performance hit due to shifting of elements within the array. So we prefer to use a new array & insert elements one by one there. In-place merging in linked list is simpler. We can do this by simply changing the next pointers of the original nodes. We can do that using few extra node variables as shown in the Java code at the bottom of the post. So space complexity becomes constant O(1). Time Complexity: Time complexity of merging at each recursion level will still be O(n). There is no change in that. You can refer to my earlier post on Merge Sort & the diagram there. But dividing step will have an additional traversal to find the middle element of the linked list. We can find middle element of an array without traversal when we know the array length. But in singly linked list there is no other way except doing a traversal. Recursion tree length is log n (again refer to my previous Merge Sort post). Dividing at each recursion level will require traversal of total n elements. So dividing phase time complexity is O(n log n). We already know that merging phase time complexity is O(n log n). So total time complexity is O(2n log n) which is O(n log n) if we remove the constant. So this modified version of Merge Sort is still working in O(n log n) time complexity. Here is the fully working Java code solution of the above LeetCode problem: * Definition for singly-linked list. * public class ListNode { * int val; * ListNode next; * ListNode() {} * ListNode(int val) { this.val = val; } * ListNode(int val, ListNode next) * { this.val = val; this.next = next; } * } class Solution { public ListNode sortList(ListNode head) { return mergeSort(head); public ListNode mergeSort(ListNode head){ if(head == null || head.next == null){ return head; ListNode middle = findMiddle(head); ListNode nextToMiddle = middle.next; middle.next = null; //sort left hand side array ListNode leftHead = mergeSort(head); //sort right hand side array ListNode rightHead = mergeSort(nextToMiddle); return merge(leftHead, rightHead); public ListNode merge(ListNode left, ListNode right){ ListNode dummyHead = new ListNode(0); ListNode tail = dummyHead; ListNode leftTemp = left; ListNode rightTemp = right; int count = 0; while(leftTemp != null && rightTemp != null){ if(leftTemp.val <= rightTemp.val){ tail.next = leftTemp; leftTemp = leftTemp.next; } else { tail.next = rightTemp; rightTemp = rightTemp.next; tail = tail.next; if(leftTemp == null){ tail.next = rightTemp; } else { tail.next = leftTemp; return dummyHead.next; public ListNode findMiddle(ListNode head){ ListNode slow = head; ListNode fast = head; while(fast.next != null && fast.next.next != null){ slow = slow.next; fast = fast.next.next; return slow; Leave a Comment
{"url":"https://www.w3spot.com/2021/03/java-program-to-sort-linked-list.html","timestamp":"2024-11-07T16:09:39Z","content_type":"text/html","content_length":"44696","record_id":"<urn:uuid:a659d4a7-6e36-47ae-a3c6-4676085c1700>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00103.warc.gz"}
Problem Set: Supply and Demand 3 Test your understanding of the learning outcomes in this module by working through the following problems. These problems aren’t graded, but they give you a chance to practice before taking the quiz. If you’d like to try a problem again, you can click the link that reads, “Try another version of these questions.” Use the information provided in the first question for all of the questions in this problem set.
{"url":"https://courses.lumenlearning.com/suny-microeconomics/chapter/problem-set-supply-and-demand-3/","timestamp":"2024-11-05T00:31:27Z","content_type":"text/html","content_length":"45828","record_id":"<urn:uuid:21d39ded-825f-4c59-a566-cf5c0c49de7b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00841.warc.gz"}
Alternative Elliptic Curve Representations Struik Security Consultancy This document specifies how to represent Montgomery curves and (twisted) Edwards curves as curves in short-Weierstrass form and illustrates how this can be used to carry out elliptic curve computations using existing implementations of, e.g., ECDSA and ECDH using NIST prime curves. We also provide extensive background material that may be useful for implementers of elliptic curve cryptography.
{"url":"https://datatracker.ietf.org/doc/bibxml3/draft-ietf-lwig-curve-representations-10.xml","timestamp":"2024-11-11T03:17:39Z","content_type":"application/xml","content_length":"1935","record_id":"<urn:uuid:109007a1-19dd-4065-ae61-152f03a85c79>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00492.warc.gz"}
Deep Learning to Accelerate Topology Optimization Topology Optimization Data Set for CNN Training Neural networks for topology optimization is an interesting paper I read on arXiv that illustrates how to speed up the topology optimization calculations by using a deep learning convolution neural network. The data sets for training the network are generate in , which is an Open Source topology optimization tool The approach the authors take is to run ToPy for some number of iterations to generate a partially converged solution, and then use this partially converged solution and its gradient as the input to the CNN. The CNN is trained on a data set generated from randomly generated ToPy problem definitions that are run to convergence. Here's their abstract, In this research, we propose a deep learning based approach for speeding up the topology optimization methods. The problem we seek to solve is the layout problem. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as the efficient pixel-wise image labeling technique to perform the topology optimization. We introduce convolutional encoder-decoder architecture and the overall approach of solving the above-described problem with high performance. The conducted experiments demonstrate the significant acceleration of the optimization process. The proposed approach has excellent generalization properties. We demonstrate the ability of the application of the proposed model to other problems. The successful results, as well as the drawbacks of the current method, are discussed. The deep learning network architecture from the paper is shown below. Each kernal is 3x3 pixels and the illustration shows how many kernals are in each layer. Architecture (Figure 3) from Neural Networks for Topology Optimization The data set that the authors used to train the deep learning network contained 10,000 randomly generated (with certain constraints, see the paper ) example problems. Each of those 10k "objects" in the data set included 100 iterations of the ToPy solver, so they are 40x40x100 tensors (40x40 is the domain size). The authors claim a 20x speed-up in particular cases, but the paper is a little light in actually showing / exploring / explaining timing results. The problem for the network to learn is to predict the final iteration from some intermediate state. This seems like it could be a generally applicable approach to speeding up convergence of PDE solves in computational fluid dynamics (CFD) or computational structural mechanics / finite element analysis. I haven't seen this sort of approach to speeding up solvers before. Have you? Please leave a comment if you know of any work applying similar methods to CFD or FEA for speed-up. 2 comments: 1. Machine learning for supper fast simulations has an interesting comment: "...While I find this an interesting approach, it seems to me to be really confusing to talk about 'acceleration' and 'speed-up' in the way you are, because you're doing a COMPLETELY different thing from what a standard solver is doing. " That's why it's surprising! Surprise is powerful... 2. Here's an approach to speeding up a lattice-Boltzmann solver: Deep Learning to Accelerate Computational Fluid Dynamics
{"url":"https://www.variousconsequences.com/2017/11/deep-learning-to-accelerate-topology-optimization.html","timestamp":"2024-11-14T01:58:55Z","content_type":"application/xhtml+xml","content_length":"131988","record_id":"<urn:uuid:00177ccd-79a0-4886-841d-7e800581a1be>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00842.warc.gz"}
On the formation of axial corner vortices during spin-up in a cylinder of square cross-section Munro, Richard J., Hewitt, R.E and Foster, M.R. (2015) On the formation of axial corner vortices during spin-up in a cylinder of square cross-section. Journal of Fluid Mechanics, 772 . pp. 246-271. ISSN 1469-7645 Full text not available from this repository. We present experimental and theoretical results for the adjustment of a fluid (homogeneous or linearly stratified), which is initially rotating as a solid body with angular frequency Ω−ΔΩ, to a nonlinear increase ΔΩ in the angular frequency of all bounding surfaces. The fluid is contained in a cylinder of square cross-section which is aligned centrally along the rotation axis, and we focus on the O(Ro−1Ω−1) time scale, where Ro=ΔΩ/Ω is the Rossby number. The flow development is shown to be dominated by unsteady separation of a viscous sidewall layer, leading to an eruption of vorticity that becomes trapped in the four vertical corners of the container. The longer-time evolution on the standard ‘spin-up’ time scale, E−1/2Ω−1 (where E is the associated Ekman number), has been described in detail for this geometry by Foster & Munro (J. Fluid Mech., vol. 712, 2012, pp. 7–40), but only for small changes in the container’s rotation rate (i.e. Ro≪1). In the linear case, for Ro≪E1/2≪1, there is no sidewall separation. In the present investigation we focus on the fully nonlinear problem, Ro=O(1), for which the sidewall viscous layers are Prandtl boundary layers and (somewhat unusually) periodic around the container’s circumference. Some care is required in the corners of the container, but we show that the sidewall boundary layer breaks down (separates) shortly after an impulsive change in rotation rate. These theoretical boundary-layer results are compared with two-dimensional Navier–Stokes results which capture the eruption of vorticity, and these are in turn compared to laboratory observations and data. The experiments show that when the Burger number, S=(N/Ω)2 (where N is the buoyancy frequency), is relatively large – corresponding to a strongly stratified fluid – the flow remains (horizontally) two-dimensional on the O(Ro−1Ω−1) time scale, and good quantitative predictions can be made by a two-dimensional theory. As S was reduced in the experiments, three-dimensional effects were observed to become important in the core of each corner vortex, on this time scale, but only after the breakdown of the sidewall layers. Actions (Archive Staff Only)
{"url":"http://eprints.nottingham.ac.uk/33544/","timestamp":"2024-11-14T07:25:18Z","content_type":"application/xhtml+xml","content_length":"32880","record_id":"<urn:uuid:3a485578-854b-44d8-9501-44f084f5e143>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00846.warc.gz"}
Bilinear Optimal Control of the Fokker-Planck Equation Title data Fleig, Arthur ; Guglielmi, Roberto: Bilinear Optimal Control of the Fokker-Planck Equation. In: IFAC-PapersOnLine. Vol. 49 (2016) Issue 8 . - pp. 254-259. ISSN 2405-8963 DOI: https://doi.org/10.1016/j.ifacol.2016.07.450 This is the latest version of this item. Project information Project title: Project's official title Project's id Analisi e controllo di equazioni a derivate parziali nonlineari No information Model Predictive Control for the Fokker-Planck Equation GR 1569/15-1 Project financing: Deutsche Forschungsgemeinschaft Istituto Nazionale di Alta Matematica (INdAM) Abstract in another language The optimal tracking problem of the probability density function of a stochastic process can be expressed in term of an optimal bilinear control problem for the Fokker-Planck equation, with the control in the coefficient of the divergence term. As a function of time and space, the control needs to belong to an appropriate Banach space. We give suitable conditions to establish existence of optimal controls and the associated first order necessary optimality conditions. Further data Available Versions of this Item
{"url":"https://eref.uni-bayreuth.de/id/eprint/35125/","timestamp":"2024-11-07T10:38:13Z","content_type":"application/xhtml+xml","content_length":"23987","record_id":"<urn:uuid:9346ef21-8ab9-470e-9c53-1b0907457b1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00013.warc.gz"}
The Stacks project Lemma 12.3.2. Let $\mathcal{A}$ be a preadditive category. Let $x$ be an object of $\mathcal{A}$. The following are equivalent 1. $x$ is an initial object, 2. $x$ is a final object, and 3. $\text{id}_ x = 0$ in $\mathop{\mathrm{Mor}}\nolimits _\mathcal {A}(x, x)$. Furthermore, if such an object $0$ exists, then a morphism $\alpha : x \to y$ factors through $0$ if and only if $\alpha = 0$. Comments (1) Comment #9833 by Miles Reid on The x in the addendum is not the same as the x in the main statement, Better to say morphism y -> z in A (for every y,z in Ob(A) Furthermore, if such an object 0 exists, then a morphism α: y→ z factors through 0 if and only if α=0. There are also: • 10 comment(s) on Section 12.3: Preadditive and additive categories Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 00ZZ. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 00ZZ, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/00ZZ","timestamp":"2024-11-10T08:08:48Z","content_type":"text/html","content_length":"16069","record_id":"<urn:uuid:c200194a-9c98-4720-a31e-4593c0a80778>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00431.warc.gz"}
Find The Number 1 to 100 1.7 APK Download - com.techarts.find.number.digit.alphabet.hidden.one.hundre.dlight Description of Find The Number 1 To 100 1.7 Apk Here we present a new Find The Number 1 To 100 1.7 Apk apk file for Android 5.0+ and up. Find The Number 1 To 100 1.7 app is among the category of Jeux de socit in app store. This is advanced and newest type of Find The Number 1 To 100 1.7 (com.techarts.find.number.digit.alphabet.hidden.one.hundre.dlight). The installation and downloading process is easy and quite simple. You can simply press on install button to install the app and remember to allow app installation from anonymous sources. We have also given direct download link with hight speed download. Be sure that we only provide the original, free and pure apk installer for Find The Number 1 To 100 1.7 APK free of mofifications. All the apps & games that we provide are for personal or domestic use. If any apk download violates any copyright, you can contact us. Find The Number 1 To 100 1.7 is the assets and trademark from the developer Resco Brands. For more help, TechArts Games you can know more about the company/developer who programmed this. All editions this app apk offered with us: You could also install apk of Find The Number 1 To 100 1.7 and execute it using trendy android emulators. More about app About Game Find the number from 1 to 100 is a addictive game for number and alphabet finder. All the number form 1 to 100 generate randomly. You need to find number as per instruction given on top middle. More than 50 different levels in the game. There are 6 different types of board in the game. Mode 1) Relaxed mode. 2) Timed mode. Levels 1)Number ————— Find the number from 1 to 10 Find the number from 11 to 25 Find the number from 26 to 45 Find the number from 55 to 46 Find the number from 75 to 56 Find the number from 100 to 76 Find the even number from 1 to 40 Find the odd number from 1 to 40 Find the even number from 100 to 50 Find the odd number from 100 to 50 Find the number from 1 to 60 which is divide by 3 Find the number from 60 to 1 which is divide by 3 Find the number from 1 to 80 which is divide by 4 Find the number from 80 to 1 which is divide by 4 Find the number from 1 to 100 which is divide by 5 Find the number from 100 to 1 which is divide by 5 Find the number from 100 to 1 which is divide by 4 Find the number from 100 to 1 which is divide by 3 Find the number from 1 to 50 Find the number from 51 to 100 Find the number from 100 to 1 2)Alphabets —————— Find the alphabet from A to M Find the alphabet from N to Z Find the alphabet from a to m Find the alphabet from n to z Find the alphabet from A to Z Find the alphabet from a to z Find the even alphabet from A a , B b , ..... , M m Find the even alphabet from N n , O o , ..... , Z z Find the alphabet from M to A Find the alphabet from Z to N Find the alphabet from m to a Find the alphabet from z to n Find the alphabet from Z to A Find the alphabet from z to a Find the even alphabet from M m , L l , ..... , A a Find the even alphabet from Z z , Y y , ..... , N n 3)Hidden Number —————————— You need to find the numbers which is in upper panel. 4)Find the Difference / Spot the Difference ———————————————————————— Find the 10 differences form both pictures. How to Play ? Tap on number/alphabet as per instruction. There is a hint functionality so if you can’t find number/alphabet use it. Who can play ? No age limit. Game Features Realistic graphics and ambient sound. Realistic stunning and amazing animations. Real-time particles & effects Smooth and simple controls. User friendly interface and interactive graphics. Important Point Please do not tap randomly. If you getting stuck you can use hint to find number/alphabet. Benefits Improve focus. Improve IQ. Challenge your self to beat your own records. Download now.
{"url":"https://designkug.com/app/com.techarts.find.number.digit.alphabet.hidden.one.hundre.dlight/","timestamp":"2024-11-15T03:52:01Z","content_type":"text/html","content_length":"37374","record_id":"<urn:uuid:3cdae366-5116-40bf-8014-7bcca260a31b>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00343.warc.gz"}
Introduction to the liquid state - educontenthub Introduction to the liquid state Introduction to the Liquid State Difference between solid, liquid and gas • Closely packed ordered arrangement of particles (atoms, molecules, ions) • Intermolecular forces of attraction are very strong. • Definite shape (because molecules in solid are strongly held and cannot move) • Definite Volume • Low K.E. i.e., only vibrate about their mean position. • Long range order (i.e., the definite and ordered arrangement of the constituents of solids extends over a large distance.) • Molecules in liquids are not far apart from each other i.e., molecules are in contact with each other. But no regular arrangement. • Fairly strong intermolecular forces but less than solids. • Molecules move past one another but this movement is restricted as compared to molecules in gases. Hence liquids have definite volume but no definite shape(takes shape of container). • Particles of liquid have High kinetic energy, i.e., they slide over each other. • Random motion or irregular motion of particles. • Liquids exhibit short range order. • Molecules are in constant random motion. • No regular pattern. • There are large spaces in between them. Therefore, very weak (negligible) intermolecular forces of attraction . • Molecules have random motion therefore it has neither definite shape(takes shape of container) nor definite volume. • Show no order at all. Some Important Orders:- Intermolecular Force➤ Solid> Liquid> Gas • The forces of attraction or repulsion which act between neighboring particles is called as Intermolecular force. compressibility➤ Solid < Liquid < Gas • Solids can not be compressed. • Liquids are slightly compressible. • While gases are highly compressible. Kinetic Energy ➤Solid< Liquid< Gas • More is the distance between the particles, less will be the inter particle force of attraction. Hence, particles would be free to move with higher kinetic energy. order of expansion on heating➤ Solid< Liquid< Gas • In solids, molecules are tightly packed as compared to liquids and gases. • similarly, molecules of a liquid are bound more as compared to gases. • Hence, on heating, solids expand less as compared to liquids and liquids expand less as compared to gases. • In case of gases, the molecules are not bound at all. Thus they expand maximum upon heating. The nature and magnitude of intermolecular forces is the key concept in describing the physical properties of liquids. Different types of Intermolecular forces • Intermolecular forces are the attractive or repulsive forces between the molecules. • Intermolecular forces exists between polar molecules as well as non-polar molecules. • Intermolecular forces as a whole are usually called as Van der Walls forces. • These are electrostatic in nature. • Intermolecular forces only exists in non-metals. There are different types of Intermolecular Forces/interactions :- 1. Dipole-Dipole forces 2. Dipole-Induced dipole forces 3. Induced dipole – Induced dipole forces 4. Hydrogen bonding 1. Dipole-Dipole forces • As polar molecule have permanent dipole moment. • A polar molecule exists as dipole having a positive pole and a negative pole. • The positive pole of one molecule is attracted by the negative pole of other molecule. • The van der Walls force due to electrical interaction between the dipoles of two molecules is known as dipole-dipole interaction or dipole-dipole force. • It exists only in polar covalent compounds e.g. NH[3], HCl, SO[2] etc. As all these gases have permanent dipoles which results in appreciable dipole-dipole interactions between the dipoles of these molecules. Some Important Points– • As a polarized molecule have two poles, partial positive and partial negative pole. • The electron cloud lies near more electronegative element which results in the dipole-dipole interactions. • It exists only in polar covalent compounds e.g. HCl, SO[2] etc. • It is not a chemical bond. It is an intermolecular force. • Dipole-Dipole forces doesn’t exist in non-polar covalent compounds like O[2], Cl[2] etc. because the electron cloud lies in between them and hence no interactions between the compounds. Magnitude of dipole-dipole interactions • The magnitude of dipole dipole forces depends upon the dipole moment of the polar molecule. • More is the value of dipole moment, more will be the dipole-dipole interaction.e.g.1 \(\implies\) the dipole-dipole forces are stronger in NH[3] \(\implies\) NH[3] is more easily liquefiable than HCl gas. e.g. 2 $$\mu_{{H}_2 O}=1.85\,D$$ $$\mu_{H_2 O}>\mu_{{NH}_3}$$ \(\implies\) intermolecular forces of attraction are stronger in case of \({H}_2 O\) than in \(NH_3\) \(\implies meling\,\, point➤ {H}_2 O > NH_3\) Average Potential Energy of Dipole-Dipole Forces Consider two polar molecules having dipole moments \(\mu_1\) & \(\mu_2\) respectively. The average interaction energy between them is given by the expression : Interaction energy,$$V=\frac{C}{r^6}$$ where \(C=\frac{-2}{3kT}×\mu_1^2\, \mu_2^2\) \(\therefore\) \(V=-\frac{2}{3kT}\mu_1^2\, \mu_2^2 ×\frac{1}{r^6} \) …….(1) where r= distance between polar molecules k= Boltzmann constant T= Absolute temperature The above equation (1) carries -ve sign because dipole-dipole interaction is an attractive force. Equation (1) shows that: • the force of attraction between dipoles depends on \(\frac{1}{r^6}\). • dipole-dipole attractions vary inversely with temperature. • As the intermolecular distance ‘r’ is very large under normal conditions of temperature and pressure; \(\implies\)dipole-dipole interactions among gas molecules are very small. • When P \(\uparrow\) or T \(\downarrow\) then the distance ‘r’ between the molecules decreases. • \(\implies\) magnitude of attractive forces increases and the gas changes into liquid or solid under these conditions. 2. Dipole-induced dipole forces • When a non-polar molecule lies in neighbourhood of a polar molecule then it may sometimes polarized by the polar molecule. • In this way, the polar molecule having dipole moment \(\mu_1\) can induce a dipole \(\mu_2\) in the polarisable molecule. Thus, the non-polar molecule behaves as an Induced dipole. This Induced dipole then interacts with the permanent dipole moment of polar molecule, and hence the two molecules are attracted together as shown in fig. The magnitude of this interaction depends upon the polarisability of the non-polar molecule and the dipole moment of the polar molecule . In 1920, Debye showed that a non-polar molecule is polarized by a polar molecule in its vicinity. The average kinetic energy of attraction of dipole-induced dipole interaction is given by: \(V=-\frac{C}{r^6}\) ; where \(C=2 \alpha \mu_1^2 \) \(\therefore V=-2 \alpha \mu_1^2 × \frac{1}{r^6}\) where \(\mu_1\) = permanent dipole moment of polar molecule \(\alpha\) = polarisability of non-polar(polarisable) molecule. r= distance between the molecules. Above equation shows that the dipole-induced dipole interaction energy depends upon \(\frac{1}{r^6}\) and is independent of temperature. Dipole-induced dipole interaction varies with distance in the same way as dipole-dipole interaction. But its magnitude is much smaller. 3. Induced dipole-induced dipole forces or London/dispersion Forces In 1930, the existance of forces of attraction between the non-polar molecules was explained by Fritz London. • Due to temporary distortion of electron cloud of the non-polar molecule, it produces a momentary dipole. • This momentary dipole induces again a momentary dipole in the neighbouring molecule. • These two dipoles attract each other and the force of attraction between these two dipoles ( induced dipole and original dipole) are known as London/dispersion forces. Some Important Facts:- • London forces arises due to motion of electrons, therefore, also exists in polar molecules. • Vanderwaals attraction in non-polar molecules is only due to London forces. • Van der Walls forces is a general term that includes the forces of attraction between polar as well as non-polar molecules. Hence, the London forces are reffered to as Van der Walls forces. Energy of Interaction of London Forces The energy of interaction of London forces is given by London formula as : Interaction Energy, \(V=-\frac{C}{r^6}\) where, \(C=\frac{3}{2} \alpha_1 \alpha_2 (\frac{I_1 I_2}{I_1 + I_2}) \) where \(\alpha_1 \, and \, \alpha_2\) = polarisabilities of molecule 1 and 2 respectively. \(I_1\) and \(I_2\) = ionisation energies of two molecules. \(\therefore\) \(V=-\frac{3}{2} \alpha_1 \alpha_2 \frac{I_1 I_2}{I_1 + I_2} × \frac{1}{r^6}\) Magnitude of London Forces Magnitude of London Forces \(\propto\) size of molecule Magnitude of London Forces \(\propto\) Surface area of the molecule. i.e., magnitude of London forces increase with increase in size and surface area of the molecule. Because the extent of polarisation increase with the increase in the surface area which results in increasing amount of attractive forces. For example : n-pentane & neo-pentane molecular formula of both are same = \(C_5 H_{14}\) n-pentane ➤ linear shape ➠ Large surface area neo-pentane➤nearly spherical shape➠ Less surface area \(\implies\) Intermolecular force of attraction is more in n-pentane. \(\implies\) Boiling point of n-pentane is more \({B.P.}_{n-pentane} = 36.4° \) \({B.P.}_{neo-pentane} = 9.7° \) The magnitude of London forces increase with increase in molecular mass. \(\Rightarrow\) The extent of polarisability increase with increase in molecular size. Due to which, the London forces of attraction also increase. 4. Hydrogen Bond It is a unique type of dipole-dipole interaction and exists in the molecules in which a hydrogen atom is covalently bonded to the highly electronegative atom (e.g. F, O, N) As electronegativity of hydrogen is very less than F, O and N. Due to this large electronegativity difference, the shared pair of electron between them lies far away from hydrogen atom, i.e., the shared pair of electron displaced towards more electronegative atom and more electronegative atom acquires partial negative charge (δ-). As a result, hydrogen atom becomes highly electropositive w.r.t. the other atom and acquires partial positive charge (δ+) Hence there is an electrostatic force between positively charged atom of one molecule and negatively charged atom of neighbouring molecule which results in the formation of hydrogen bond. In this way, we can define hydrogen bond as the attractive force which binds the hydrogen atom of one molecule with electronegative atom (F,O,N) of another molecule. • The hydrogen bond may be between two different molecules or within the same molecule. • If the hydrogen bonding happens between molecules having either same or other compounds then it is called as Intermolecular hydrogen bonding. • If the hydrogen bonding happens within the same molecule, then it is called as Intramolecular hydrogen bonding. For example: Two HF molecules are joined together due to dipole-dipole interaction viz., called as Intermolecular H-bonding as shown in fig. While the hydrogen bonding within o-hydroxybenzoic acid is Intramolecular hydrogen bonding as shown in fig. Repulsive Intermolecular Forces When two molecules come closer then the interactions occur between the nuclei and electrons of the molecules. At Large distances, the attractive forces operate between two molecules. While, at very small distances, the nuclei and electrons of the molecules repel each other. At this stage, the repulsive forces begin to dominate the attractive forces. Repulsive forces vary inversely as the 12th power of intermolecular distance i.e., \(\propto\) \(\frac{1}{r^{12}}\) Hence, the repulsive forces increase sharply with decrease in the distance between molecules. Mathematically, the magnitude of repulsive interaction energy is given by \(V_{repulsion} = \frac{B}{r^{12}}\) where B= constant (depending upon the nature of substance) Total Energy of Interaction between a Pair of Molecules or The ‘Lennard-Jones Potential’ The total energy of interaction between a pair of molecules is the sum of all attractive and repulsive forces. i.e., It is the sum of following interaction energies: Dipole-Dipole interaction energy \(V_1 =-\frac{2}{3kT}\mu_1^2\, \mu_2^2 ×\frac{1}{r^6}\) Dipole-Induced dipole interaction energy \(V_2 =-2 \alpha \mu_1^2 × \frac{1}{r^6}\) Induced dipole-Induced dipole interaction energy \(V_3 =-\frac{3}{2} \alpha_1 \alpha_2 \frac{I_1 I_2}{I_1 + I_2} × \frac{1}{r^6} \) Repulsive interaction energy \(V_{repulsion} = \frac{B}{r^{12}}\) Hence, Total Potential Energy = Total attractive forces + Total repulsive forces =\([ V_1 + V_2 + V_3 ] + V_r \) = \(-\frac{2}{3kT}\mu_1^2\, \mu_2^2 ×\frac{1}{r^6}\) + \([-2 \alpha \mu_1^2 × \frac{1}{r^6}]\) + \([-\frac{3}{2} \alpha_1 \alpha_2 \frac{I_1 I_2}{I_1 + I_2} × \frac{1}{r^6} \)] + \(\frac{B}{r^12}\) $$V= -[\frac{2}{3} \frac{\mu_1^2 \mu_2^2}{kT} + 2 \alpha \mu_1^2 +\frac{3}{2} \alpha_1 \alpha_2 [\frac{I_1 I_2}{I_1 + I_2}]] \frac{1}{r^6} + \frac{B}{r^{12}}$$ \(\implies V=-\frac{A}{r^6} + \frac{B}{r^{12}}\) where, A and B are constants. The potential energy of interaction is usually expressed in a standard form known as Lennard-Jones potential. Linnard-Jones Potential is the special case of Mie Potential energy with m=6 and n=12 Mie Potential Energy It is a function of the distance between two particles (r) and is written as \(V(r) = C ε [( \frac{r_0}{r})^{12} – (\frac{r_0}{r})^6] \) where, \(C=\frac{n}{n-m}(\frac{n}{m})^{\frac{m}{n-m}}\) ε= maximum energy of attraction ( depth of the well) r[0] = closest distance between two molecules at which V=0 At m=6 and n=12, \(V= 4ε [( \frac{r_0}{r})^{12} – (\frac{r_0}{r})^6]\) viz., the Lennard Jones potential. Linnard-Jones Potential is also reffered to as 6-12 Potential because VWL attractive forces fall of as 6th power of intermolecular distance and the repulsive forces vary inversely as the 12th power of the intermolecular distance. Linnard-Jones Potential Energy Curve:- The energy of Interaction is given by the derivative of V(r) w.r.t. r $$V= – \frac{ \partial V} {\partial \,r}$$ If \(\frac{ \partial V} {\partial \,r}\) = +ve , then the interaction energy = -ve \(\Rightarrow\) the molecules are attracted towards each other. If \(\frac{ \partial V} {\partial \,r}\) = -ve , then the interaction energy = +ve \(\Rightarrow\) the molecules repel each other. Leave a Comment
{"url":"https://educontenthub.com/introduction-to-the-liquid-state/","timestamp":"2024-11-08T18:55:16Z","content_type":"text/html","content_length":"130452","record_id":"<urn:uuid:f2796178-114b-4382-a51b-d36828912018>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00526.warc.gz"}
Beat the Tax Collector Why I love this problem: At first, students usually think it is impossible to beat the tax collector. After some trial and error, they figure out it is possible to beat the tax collector. By this time, they are highly motivated to try and find the best possible set of paychecks and to explore other extensions. Tip: Start by playing the role of the tax collector and let students choose the paychecks. Play up the idea that the tax collector never loses. They get a big thrill when they discover they can win. Grade Band: 4th – 6th Math Content: factors, multiples, prime, and composite numbers Math Standards: • 4.OA – Gain familiarity with factors and multiples. • 5.OA – Analyze patterns and relationships. • 6.NS – Compute fluently with multi-digit numbers and find common factors and multiples Standards of Mathematical Practice: • Make sense of problems & persevere in solving them • Reason abstractly & quantitatively • Construct viable arguments & critique the reasoning of others • Attend to precision • Look for & make use of structure Strategies to try: • Try this a few times. Keep track of your choices. • Write down all the factors of each paycheck and search for patterns. • Consider which is the best paycheck to take first. • Think carefully about the order in which you take the paychecks. Questions to explore: • What is the most money you can get? • Is there an optimal strategy? • Try a different set of paychecks and see if you can still win? What about paychecks from 1 to 100? • Are there any sets of paychecks where the tax collector always wins? Implementing online: You can make a copy of this Jamboard to use with your students and let them explore their own strategies. • One player chooses the paychecks and the other player collects the taxes. • If possible, split students into breakout rooms with 2 or 3 students in each one. (If you place three students in a breakout, the third student can take notes and help the team develop better • Direct each breakout room to a specific frame of the Jamboard so they can play without interfering with other students. Additional Information:
{"url":"https://drrajshah.com/beat-the-tax-collector/","timestamp":"2024-11-13T18:30:44Z","content_type":"text/html","content_length":"31386","record_id":"<urn:uuid:303026c0-ce24-44a6-9fc4-29c2638ef8dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00120.warc.gz"}
The research work in this dissertation presents a new perspective for obtaining solutions of initial value problems using Artificial Neural Networks (ANN). We discover that neural network based model for the solution of Ordinary Differential Equations (ODEs) provides a number of advantages over standard numerical methods. First, the neural network based solution is differentiable and is in closed analytic form. On the other hand most other techniques offer a discretized solution or a solution with limited differentiability. Second, the neural network based method for solving a differential equation provides a solution with very good generalization properties. In our novel approach, we consider first, second and third order homogeneous and nonhomogeneous linear ordinary differential equations, and first order nonlinear ODE. In the homogeneous case, we assume a solution in exponential form and compute a polynomial approximation using SPSS statistical package. From here we pick the unknown coefficients as the weights from input layer to hidden layer of the associated neural network trial solution. To get the weights from hidden layer to the output layer, we form algebraic equations incorporating the default sign of the differential equations. We then apply the Gaussian Radial Basis Function (GRBF) approximation model to achieve our objective. The weights obtained in this manner need not be adjusted. We proceed to develop a Neural Network algorithm using MathCAD 14 software, which enables us to slightly adjust the intrinsic biases. For first, second and third order non-homogeneous ODE, we use the forcing function with the GRBF model to compute the weights from hidden layer to the output layer. The operational neural network model is redefined to incorporate the nonlinearity seen in nonlinear differential equations. We compare exact results with the neural network results for our example ODE problems. We find the results to be in good agreement. Furthermore these compare favourably with the existing neural network methods of solution. The major advantage here is that our method reduces considerably the computational tasks involved in weight updating, while maintaining satisfactory accuracy. TABLE OF CONTENTS Title Page i Certification ii Declaration iii Dedication iv Acknowledgement v Table of Contents vi List of Tables viii List of Figures xi Abstract xiii CHAPTER 1: INTRODUCTION 1 1.1 Definition of a Neural Network 1 1.2 Statement of the Problem 3 1.3 Purpose of the Study 4 1.4 Aim and Objectives 4 1.5 Significance of the Study 5 1.6 Justification of the Study 6 1.7 Scope of the Study 6 1.8 Definition of Terms 6 1.9 Acronyms 7 CHAPTER 2: REVIEW OF RELATED LITERATURE 9 CHAPTER 3: MATERIALS AND METHODS 14 3.1 Artificial Neural Network 14 3.1.1 Architecture 14 3.1.2 Training feed forward neural network 15 3.2 Mathematical Model of Artificial Neural Network 15 3.3 Activation Function 16 3.3.1 Linear activation function 18 3.3.2 Sign activation function 18 3.3.3 Sigmoid activation function 19 3.3.4 Step activation function 19 3.4 Function Approximation 19 3.5 General Formulation for Differential Equations 21 3.6 Neural Network Training 22 3.7 Method of Solving First Order Ordinary Differential Equations 23 3.8 Computation of the Gradient 35 3.9 Regression Based Learning 49 3.9.1 Linear regression: A simple learning algorithm 50 3.9.2 A neural network view of linear regression 50 3.9.3 Least squares estimation of the parameters 51 CHAPTER 4: RESULTS AND DISCUSSION 53 4.1 First and Second Order Homogeneous Ordinary Differential Equation 53 4.2 First and Second Order Non-Homogeneous Ordinary Differential Equations 73 4.3 Third Order Homogeneous and Non-Homogeneous ODE 103 4.4 First and Second Order Linear ODE with Variable Coefficients: 112 4.5 Nonlinear Ordinary Differential Equations (The Riccati Form of ODE) 122 4.6 Solving Nth order linear ordinary differential equations 131 4.7 Simulation 132 4.8 Discussion 144 CHAPTER 5: SUMMARY, CONCLUSION AND Recommendation 145 5.1 Summary 145 5.2 Conclusion 146 5.3 Recommendations 146 5.4 Contribution to knowledge 146 References 147 LIST OF TABLES LIST OF FIGURES CHAPTER 1 1.1 DEFINITION OF A NEURAL NETWORK A neural network is fundamentally a mathematical model, and its structure consists of a series of processing elements which are inter-connected and their operation resemble that of the human neurons. These processing elements are also known as units or nodes. The ability of the network to process information is embedded in the connection strengths, simply called weights, which, when exposed to a set of training patterns, adapts to it. (Graupe 2007). The human brain consists of billions of nerve cells or neurons, as shown in Figure 1.1a. Neurons communicate through electrical signals which are short-lived impulses in the electromotive force of the cell wall. The neuron to neuron inter-connections are intermediated by electrochemical junctions called synapses, which are located on branches of the cell known as dendrite. Each neuron receives a good number of connections from other neurons, and there is constant incoming of multitude of signals, which each neuron receives and eventually gets to the cell body. Here, they are summed together in a way that if the resulting signal is greater than some threshold then the neuron will generate an impulse in response, coming from an electromotive force. This particular response is transmitted to other neurons through the axon which is a branching fibre. (Gurney, 1997). See figures 1.1a and 1.1bs. Neural network methods can solve both ordinary as well as partial differential equations. And it relies on the function approximation seen in feed- forward neural networks which results in a solution written in an analytic form. This form employs a feed forward neural network as a basic approximation element. (Principe et al., 1997). Training of the neural network can be done either by any optimization technique which in turn requires the computation of the derivative of the error with respect to the network parameters, by regression based model or by basis function approximation. In any of these methods, a neural network solution of the given differential equation is assumed and designated a trial solution which is written as a sum of two parts, proposed by Lagaris et al., (1997). The first part of the trial solution satisfies the conditions prescribed at the initial or boundary, and contains non-of the parameters that need adjustment. The other part contains some adjustable parameters that involves feed- forward neural network and is constructed in a way that does not affect the conditions. Through the construction, the trial solution, initial or boundary conditions are satisfied and the network is trained to satisfy the differential equation. Fig.1.1 a Biological Neuron (Carlos G., Online) Figure 1.1 b An Artificial Neuron (Yadav et al., 2015) It is this architecture in Figure 1.1b, and style of processing that we hope to incorporate in neural networks solution of differential equations. 1.2 STATEMENT OF THE PROBLEM In this research, we propose a new method of solving ordinary differential equations (ODEs) with initial conditions through Artificial Neural Network (ANN) based models. The conventional way of solving differential equations using artificial neural network involves updating of all the parameters, weights and biases, during the neural network training. This is caused by the inability of the neural network to predict a solution with an acceptable minimum error. In order to reduce the error, the error function is minimized. Minimizing the error function demands finding its gradient. This gradient involves the computation of multivariate partial derivatives of the error function with respect to all the parameters, weights and biases, and the independent variable. This is quite involving as we shall demonstrate later for first order differential equation. It is even more difficult when solving second or higher order ODE where you need to find the second or higher order derivative of the error function. This research work involves systematically computing the weights such that no updating is required, thereby eliminating the herculean task in finding the partial derivative of the error function. 1.3 PURPOSE OF THE STUDY The main purpose of embarking on this research is to explore an avenue or an approach to reducing the herculean task involved in weight updating in the process of neural network training of the parameters associated with the minimization of the error function, which in turn involves multivariate partial derivatives with respect to all the parameters and the independent variables. 1.4 AIM AND OBJECTIVES Aim: The aim of this work is to solve both linear and nonlinear ordinary differential equations using Artificial Neural Network (ANN) model, by implementing the new approach which this study proposes. We shall achieve the aim through the following objectives: Objectives: We shall systematically (i) compute the weights from input layer to hidden layer using regression based model (ii) compute the weights from hidden layer to output layer using Radial Basis Function (RBF) model (iii) slightly adjust the biases using Mathematical Computer Aided Design (MathCAD) 14 software algorithm to achieve the desired accuracy (iv) develop a neural network that will incorporate the nonlinearity found in such ODE as the Riccati type (v) Suggests a way of tackling n^th order ODE (vi) Compare our results with analytical results and some other neural network result. (vii) Simulate our results to show how they agree with other solutions 1.5 SIGNIFICANCE OF THE STUDY A neural network based model for solving differential equations provides the following advantages over the standard numerical methods: a. The neural network based solution of a differential equation is differentiable and is in closed analytic form that can be applied in any further calculation. On the other hand most other methods like Euler, Runge-Kutta, finite difference, etc, give a discrete solution or a solution that has limited differentiability. b. The neural network based method for solving a differential equation makes available a solution with fantastic generalization properties. c. Computational complexity does not increase rapidly in the neural network method when the number of points to be sampled is increased while in the other standard numerical methods computational complexity increases rapidly as we increase the number of sampling points in the interval. Most other approximation methods are observed to be iterative in nature, and the step size fixed before the beginning of the computation. ANN offers some reliefs to overcoming some of these repetition of iterations. Now, after the ANN has converged, we may use it as a black box to get numerical results at any randomly picked points in the domain. d. The method is general and can be applied to the systems defined on either orthogonal box boundaries or on irregular arbitrary shaped boundaries. e. Models based on neural network offers an opportunity to handle difficult differential equation problems arising in many sciences and engineering applications. f. The method can be implemented on parallel architectures. (Yadav et al. 2015) 1.6 JUSTIFICATION OF THE STUDY The new approach we are proposing in this research will eliminate the computation of partial derivative of the error function thereby reducing the task involved in using neural network to solve differential equations. 1.7 SCOPE OF THE STUDY This study covers first, second and third order linear and first order nonlinear ODE with constant and variable coefficients. It is also extended to the nth order linear ODE, all with initial conditions. It does not include ODE with boundary conditions, other nonlinear ODE with the product of the dependent variable and its derivative, and partial differential equations. 1.8 DEFINITION OF TERMS Nodes: are computational units which receive inputs, and process them into output. Synapses: are connections between neurons. They determine the information flow which exists between nodes. Weights: are the respective signaling strength. The ability of the network to process information is stored in connection strength, simply called weights. Neurons: are the primary signaling units of the central nervous system and each neuron is a distinct cell whose several processes arise from its cell body. A neuron is the basic processor or processing element in a neural network. Each neuron receives one or more input over its connections and produces only one output. Architecture: is the pattern of connections between the neurons which can be a multilayer feed forward neural network architecture. (Tawfiq & Oraibi, 2013). When a neural network is in layers, the neurons are arranged in the form of layers. There are a minimum of two layers: an input layer and an output layer. The layers between the input layer and the output layer, if they exist, are referred to as hidden layers, and their computation nodes are referred to as hidden neurons or hidden units. Extra neurons at the hidden layers raise the network’s ability to extract higher-order statistics from (input) data. (Alaa, 2010). Training: is the process of setting the weights and biases from the network, for the desired output. Regression: is a least-squares curve that fits a particular data. Goodness of Fit (R^2): is a terminology used in regression analysis to tell us how good a given data has fit the regression model Neural Network: is interconnection of processing elements, which resemble that of human neurons. Artificial Neural Network: is a simplified mathematical model of human brain, also known as information processing system. Activation function: is a threshold or transfer function (non-linear operator), which keeps the cell’s output between certain limits as is the case in the biological neuron. Axon: conducts electric signals down its length. Bias: is a parameter which helps to speed up convergence. The addition of biases increases the flexibility of the model to feed the given data. Bias determines if a neuron is activated. The performance of an activation function ought to be propagated forward through the network. The bias term in the network determines whether or not this will happen. The absence of bias hinders this forward propagation, leading to undesirable outcome. 1.9 ACRONYMS: ANN – Artificial Neural Network BVP – Boundary Value Problem CPROP - Constrained-Backpropagation FFNN – Feed Forward Neural Network GRBF - Gaussian Radial Basis function IVP – Initial Value Problem MathCAD – Mathematical Computer Aided Design MLP – Multi Layer Perceptron MSE – Mean Squared Error NN – Neural Network ODE – Ordinary Differential Equation PDE – Partial Differential Equation PDP - Parallel Distributed Processing PE - Processing Elements RBA – Regression Based Algorithm RBF – Radial Basis Function RBFNN – Radial Basis Function Neural Network SPSS – Statistical Package for Social Sciences Click “DOWNLOAD NOW” below to get the complete Projects FOR QUICK HELP CHAT WITH US NOW!
{"url":"https://projectshelve.com/item/a-new-perspective-to-the-solution-of-ordinary-differential-equations-using-artificial-neural-networks-qbn7355xml","timestamp":"2024-11-02T10:31:49Z","content_type":"text/html","content_length":"991494","record_id":"<urn:uuid:5f434c27-8430-407a-840a-1ed760a721cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00067.warc.gz"}
What do you call a foot that is about to step on to the ground? | Thinkmad.in Share with Facebook Comment You may also like.. A hunter met two shepherds, one of whom had three loaves and the other, five loaves. All the loaves were the same size. The three men agreed to share the eight loaves equally between them. After they had eaten, the hunter gave the shepherds eight bronze coins as payment for his meal. How should the two shepherds fairly divide this money? Answer: The shepherd who had three loaves should get one coin and the shepherd who had five loaves should get seven coins. If there were eight loaves and three men, each man ate two and two-thirds loaves. So the first shepherd gave the hunter one-third of a loaf and the second shepherd gave the hunter two and one-third loaves. The shepherd who gave one-third of a loaf should get one coin and the one who gave seven-thirds of a loaf should get seven coins. Show Answer The shepherd who had three loaves should get one coin and the shepherd who had five loaves should get seven coins. If there were eight loaves and three men, each man ate two and two-thirds loaves. So the first shepherd gave the hunter one-third of a loaf and the second shepherd gave the hunter two and one-third loaves. The shepherd who gave one-third of a loaf should get one coin and the one who gave seven-thirds of a loaf should get seven coins. Previous Next
{"url":"https://thinkmad.in/jokes-and-riddles/what-do-you-call-a-foot-that-is-about-to-step-on-to-the-ground/","timestamp":"2024-11-08T15:15:35Z","content_type":"text/html","content_length":"328755","record_id":"<urn:uuid:81f81a5a-0d71-49cf-8540-f6f472b20167>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00580.warc.gz"}
Autolev Parser¶ Autolev (now superseded by MotionGenesis) is a domain specific language used for symbolic multibody dynamics. The SymPy mechanics module now has enough power and functionality to be a fully featured symbolic dynamics module. This parser parses Autolev (version 4.1) code to SymPy code by making use of SymPy’s math libraries and the mechanics module. The parser has been built using the ANTLR framework and its main purpose is to help former users of Autolev to get familiarized with multibody dynamics in SymPy. The sections below shall discuss details of the parser like usage, gotchas, issues and future improvements. For a detailed comparison of Autolev and SymPy Mechanics you might want to look at the SymPy Mechanics for Autolev Users guide. We first start with an Autolev code file. Let us take this example (Comments % have been included to show the Autolev responses): % double_pendulum.al MOTIONVARIABLES' Q{2}', U{2}' SIMPROT(N, A, 3, Q1) % -> N_A = [COS(Q1), -SIN(Q1), 0; SIN(Q1), COS(Q1), 0; 0, 0, 1] SIMPROT(N, B, 3, Q2) % -> N_B = [COS(Q2), -SIN(Q2), 0; SIN(Q2), COS(Q2), 0; 0, 0, 1] % -> W_A_N> = U1*N3> % -> W_B_N> = U2*N3> P_O_P> = L*A1> % -> P_O_P> = L*A1> P_P_R> = L*B1> % -> P_P_R> = L*B1> V_O_N> = 0> % -> V_O_N> = 0> V2PTS(N, A, O, P) % -> V_P_N> = L*U1*A2> V2PTS(N, B, P, R) % -> V_R_N> = L*U1*A2> + L*U2*B2> MASS P=M, R=M Q1' = U1 Q2' = U2 % -> FORCE_P> = G*M*N1> % -> FORCE_R> = G*M*N1> ZERO = FR() + FRSTAR() % -> ZERO[1] = -L*M*(2*G*SIN(Q1)+L*(U2^2*SIN(Q1-Q2)+2*U1'+COS(Q1-Q2)*U2')) % -> ZERO[2] = -L*M*(G*SIN(Q2)-L*(U1^2*SIN(Q1-Q2)-U2'-COS(Q1-Q2)*U1')) INPUT M=1,G=9.81,L=1 INPUT Q1=.1,Q2=.2,U1=0,U2=0 INPUT TFINAL=10, INTEGSTP=.01 CODE DYNAMICS() some_filename.c The parser can be used as follows: >>> from sympy.parsing.autolev import parse_autolev >>> sympy_code = parse_autolev(open('double_pendulum.al'), include_numeric=True) # The include_pydy flag is False by default. Setting it to True will # enable PyDy simulation code to be outputted if applicable. >>> print(sympy_code) import sympy.physics.mechanics as me import sympy as sm import math as m import numpy as np q1, q2, u1, u2 = me.dynamicsymbols('q1 q2 u1 u2') q1d, q2d, u1d, u2d = me.dynamicsymbols('q1 q2 u1 u2', 1) l, m, g=sm.symbols('l m g', real=True) frame_a.orient(frame_n, 'Axis', [q1, frame_n.z]) # print(frame_n.dcm(frame_a)) frame_b.orient(frame_n, 'Axis', [q2, frame_n.z]) # print(frame_n.dcm(frame_b)) frame_a.set_ang_vel(frame_n, u1*frame_n.z) # print(frame_a.ang_vel_in(frame_n)) frame_b.set_ang_vel(frame_n, u2*frame_n.z) # print(frame_b.ang_vel_in(frame_n)) particle_p=me.Particle('p', me.Point('p_pt'), sm.Symbol('m')) particle_r=me.Particle('r', me.Point('r_pt'), sm.Symbol('m')) particle_p.point.set_pos(point_o, l*frame_a.x) # print(particle_p.point.pos_from(point_o)) particle_r.point.set_pos(particle_p.point, l*frame_b.x) # print(particle_p.point.pos_from(particle_r.point)) point_o.set_vel(frame_n, 0) # print(point_o.vel(frame_n)) # print(particle_p.point.vel(frame_n)) # print(particle_r.point.vel(frame_n)) particle_p.mass = m particle_r.mass = m force_p = particle_p.mass*(g*frame_n.x) # print(force_p) force_r = particle_r.mass*(g*frame_n.x) # print(force_r) kd_eqs = [q1d - u1, q2d - u2] forceList = [(particle_p.point,particle_p.mass*(g*frame_n.x)), (particle_r.point,particle_r.mass*(g*frame_n.x))] kane = me.KanesMethod(frame_n, q_ind=[q1,q2], u_ind=[u1, u2], kd_eqs = kd_eqs) fr, frstar = kane.kanes_equations([particle_p, particle_r], forceList) zero = fr+frstar # print(zero) #---------PyDy code for integration---------- from pydy.system import System sys = System(kane, constants = {l:1, m:1, g:9.81}, initial_conditions={q1:.1, q2:.2, u1:0, u2:0}, times = np.linspace(0.0, 10, 10/.01)) The commented code is not part of the output code. The print statements demonstrate how to get responses similar to the ones in the Autolev file. Note that we need to use SymPy functions like .ang_vel_in(), .dcm() etc in many cases unlike directly printing out the variables like zero. If you are completely new to SymPy mechanics, the SymPy Mechanics for Autolev Users guide guide should help. You might also have to use basic SymPy simplifications and manipulations like trigsimp(), expand(), evalf() etc for getting outputs similar to Autolev. Refer to the SymPy Tutorial to know more about these. • Don’t use variable names that conflict with Python’s reserved words. This is one example where this is violated: %Autolev Code LAMBDA = EIG(M) #SymPy Code lambda = sm.Matrix([i.evalf() for i in (m).eigenvals().keys()]) • Make sure that the names of vectors and scalars are different. Autolev treats these differently but these will get overwritten in Python. The parser currently allows the names of bodies and scalars/vectors to coincide but doesn’t do this between scalars and vectors. This should probably be changed in the future. %Autolev Code VARIABLES X,Y FRAMES A A> = X*A1> + Y*A2> A = X+Y #SymPy Code x, y = me.dynamicsymbols('x y') frame_a = me.ReferenceFrame('a') a = x*frame_a.x + y*frame_a.y a = x + y # Note how frame_a is named differently so it doesn't cause a problem. # On the other hand, 'a' gets rewritten from a scalar to a vector. # This should be changed in the future. • When dealing with Matrices returned by functions, one must check the order of the values as they may not be the same as in Autolev. This is especially the case for eigenvalues and eigenvectors. %Autolev Code EIG(M, E1, E2) % -> [5; 14; 13] E2ROW = ROWS(E2, 1) EIGVEC> = VECTOR(A, E2ROW) #SymPy Code e1 = sm.Matrix([i.evalf() for i in m.eigenvals().keys()]) # sm.Matrix([5;13;14]) different order e2 = sm.Matrix([i[2][0].evalf() for i in m.eigenvects()]).reshape(m.shape[0], m.shape[1]) e2row = e2.row(0) # This result depends on the order of the vectors in the eigenvecs. eigenvec = e2row[0]*a.x + e2row[1]*a.y + e2row[2]*a.y • When using EVALUATE, use something like 90*UNITS(deg,rad) for angle substitutions as radians are the default in SymPy. You could also add np.deg2rad() directly in the SymPy code. This need not be done for the output code (generated on parsing the CODE commands) as the parser takes care of this when deg units are given in the INPUT declarations. The DEGREES setting, on the other hand, works only in some cases like in SIMPROT where an angle is expected. %Autolev Code A> = Q1*A1> + Q2*A2> B> = EVALUATE(A>, Q1:30*UNITS(DEG,RAD)) #SymPy Code a = q1*a.frame_a.x + q2*frame_a.y b = a.subs({q1:30*0.0174533}) # b = a.subs({q1:np.deg2rad(30)} • Most of the Autolev settings have not been parsed and have no effect on the parser. The only ones that work somewhat are COMPLEX and DEGREES. It is advised to look into alternatives to these in SymPy and Python. • The REPRESENT command is not supported. Use the MATRIX, VECTOR or DYADIC commands instead. Autolev 4.1 suggests these over REPRESENT as well while still allowing it but the parser doesn’t parse • Do not use variables declarations of the type WO{3}RD{2,4}. The parser can only handle one variable name followed by one pair of curly braces and any number of ' s. You would have to declare all the cases manually if you want to achieve something like WO{3}RD{2,4}. • The parser can handle normal versions of most commands but it may not parse functions with Matrix arguments properly in most cases. Eg: This would compute the coefficients of U1, U2 and U3 in E1 and E2. It is preferable to manually construct a Matrix using the regular versions of these commands. %Autolev Code % COEF([E1;E2],[U1,U2,U3]) M = [COEF(E1,U1),COEF(E1,U2),COEF(E1,U3) & • MOTIONVARIABLE declarations must be used for the generalized coordinates and speeds and all other variables must be declared in regular VARIABLE declarations. The parser requires this to distinguish between them to pass the correct parameters to the Kane’s method object. It is also preferred to always declare the speeds corresponding to the coordinates and to pass in the kinematic differential equations. The parser is able to handle some cases where this isn’t the case by introducing some dummy variables of its own but SymPy on its own does require them. Also note that older Autolev declarations like VARIABLES U{3}' are not supported either. %Autolev Code MOTIONVARIABLES' Q{2}', U{2}' % ----- OTHER LINES ---- Q1' = U1 Q2' = U2 ----- OTHER LINES ---- ZERO = FR() + FRSTAR() #SymPy Code q1, q2, u1, u2 = me.dynamicsymbols('q1 q2 u1 u2') q1d, q2d, u1d, u2d = me.dynamicsymbols('q1 q2 u1 u2', 1) # ------- other lines ------- kd_eqs = [q1d - u1, q2d - u2] kane = me.KanesMethod(frame_n, q_ind=[q1,q2], u_ind=[u1, u2], kd_eqs = kd_eqs) fr, frstar = kane.kanes_equations([particle_p, particle_r], forceList) zero = fr+frstar • Need to change me.dynamicsymbols._t to me.dynamicsymbols('t') for all occurrences of it in the Kane’s equations. For example have a look at line 10 of this spring damper example. This equation is used in forming the Kane’s equations so we need to change me.dynamicsymbols._t to me.dynamicsymbols('t') in this case. The main reason that this needs to be done is because PyDy requires time dependent specifieds to be explicitly laid out while Autolev simply takes care of the stray time variables in the equations by itself. The problem is that PyDy’s System class does not accept dynamicsymbols._t as a specified. Refer to issue #396. This change is not actually ideal so a better solution should be figured out in the • The parser creates SymPy symbols and dynamicsymbols by parsing variable declarations in the Autolev Code. For intermediate expressions which are directly initialized the parser does not create SymPy symbols. It just assigns them to the expression. On the other hand, when a declared variable is assigned to an expression, the parser stores the expression against the variable in a dictionary so as to not reassign it to a completely different entity. This constraint is due to the inherent nature of Python and how it differs from a language like Autolev. Also, Autolev seems to be able to assume whether to use a variable or the rhs expression that variable has been assigned to in equations even without an explicit RHS() call in some cases. For the parser to work correctly however, it is better to use RHS() wherever a variable’s rhs expression is meant to be used. %Autolev Code VARIABLES X, Y E = X + Y X = 2*Y RHS_X = RHS(X) I1 = X I2 = Y I3 = X + Y INERTIA B,I1,I2,I3 % -> I_B_BO>> = I1*B1>*B1> + I2*B2>*B2> + I3*B3>*B3> #SymPy Code x,y = me.dynamicsymbols('x y') e = x + y # No symbol is made out of 'e' # an entry like {x:2*y} is stored in an rhs dictionary rhs_x = 2*y i1 = x # again these are not made into SymPy symbols i2 = y i3 = x + y body_b.inertia = (me.inertia(body_b_f, i1, i2, i3), b_cm) # This prints as: # x*b_f.x*b_f.x + y*b_f.y*b_f.y + (x+y)*b_f.z*b_f.z # while Autolev's output has I1,I2 and I3 in it. # Autolev however seems to know when to use the RHS of I1,I2 and I3 # based on the context. • This is how the SOLVE command is parsed: %Autolev Code A = RHS(X)*2 + RHS(Y) #SymPy Code # Behind the scenes the rhs of x # is set to sm.solve(zero,x,y)[x]. a = sm.solve(zero,x,y)[x]*2 + sm.solve(zero,x,y)[y] The indexing like [x] and [y] doesn’t always work so you might want to look at the underlying dictionary that solve returns and index it correctly. • Inertia declarations and Inertia functions work somewhat differently in the context of the parser. This might be hard to understand at first but this had to be done to bridge the gap due to the differences in SymPy and Autolev. Here are some points about them: 1. Inertia declarations (INERTIA B,I1,I2,I3) set the inertias of rigid bodies. 2. Inertia setters of the form I_C_D>> = expr however, set the inertias only when C is a body. If C is a particle then I_C_D>> = expr simply parses to i_c_d = expr and i_c_d acts like a regular 3. When it comes to inertia getters (I_C_D>> used in an expression or INERTIA commands), these MUST be used with the EXPRESS command to specify the frame as SymPy needs this information to compute the inertia dyadic. %Autolev Code INERTIA B,I1,I2,I3 I_B_BO>> = X*A1>*A1> + Y*A2>*A2> % Parser will set the inertia of B I_P_Q>> = X*A1>*A1> + Y^2*A2>*A2> % Parser just parses it as i_p_q = expr E1 = 2*EXPRESS(I_B_O>>,A) E2 = I_P_Q>> E3 = EXPRESS(I_P_O>>,A) E4 = EXPRESS(INERTIA(O),A) % In E1 we are using the EXPRESS command with I_B_O>> which makes % the parser and SymPy compute the inertia of Body B about point O. % In E2 we are just using the dyadic object I_P_Q>> (as I_P_Q>> = expr % doesn't act as a setter) defined above and not asking the parser % or SymPy to compute anything. % E3 asks the parser to compute the inertia of P about point O. % E4 asks the parser to compute the inertias of all bodies wrt about O. • In an inertia declaration of a body, if the inertia is being set about a point other than the center of mass, one needs to make sure that the position vector setter for that point and the center of mass appears before the inertia declaration as SymPy will throw an error otherwise. %Autolev Code P_SO_O> = X*A1> INERTIA S_(O) I1,I2,I3 • Note that all Autolev commands have not been implemented. The parser now covers the important ones in their basic forms. If you are doubtful whether a command is included or not, please have a look at this file in the source code. Search for “<command>” to verify this. Looking at the code for the specific command will also give an idea about what form it is expected to work in. Limitations and Issues¶ • A lot of the issues have already been discussed in the Gotchas section. Some of these are: □ Vector names coinciding with scalar names are overwritten in Python. □ Some convenient variable declarations aren’t parsed. □ Some convenient forms of functions to return matrices aren’t parsed. □ Settings aren’t parsed. □ symbols and rhs expressions work very differently in Python which might cause undesirable results. □ Dictionary indexing for the parsed code of the SOLVE command is not proper in many cases. □ Need to change dynamicsymbols._t to dynamicsymbols('t') for the PyDy simulation code to work properly. Here are some other ones: • Eigenvectors do not seem to work as expected. The values in Autolev and SymPy are not the same in many cases. • Block matrices aren’t parsed by the parser. It would actually be easier to make a change in SymPy to allow matrices to accept other matrices for arguments. • The SymPy equivalent of the TAYLOR command .series() does not work with dynamicsymbols(). • Only DEPENDENT constraints are currently parsed. Need to parse AUXILIARY constraints as well. This should be done soon as it isn’t very difficult. • None of the energy and momentum functions are parsed right now. It would be nice to get these working as well. Some changes should probably be made to SymPy. For instance, SymPy doesn’t have a function equivalent to NICHECK(). • The numerical integration parts work properly only in the case of the KANE command with no arguments. Things like KANE(F1,F2) do not currently work. • Also, the PyDy numerical simulation code works only for cases where the matrix say ZERO = FR() + FRSTAR() is solved for. It doesn’t work well when the matrix has some other equations plugged in as well. One hurdle faced in achieving this was that PyDy’s System class automatically takes in the forcing_full and mass_matrix_full and solves them without giving the user the flexibility to specify the equations. It would be nice to add this functionality to the System class. Future Improvements¶ 1. Completing Dynamics Online¶ The parser has been built by referring to and parsing codes from the Autolev Tutorial and the book Dynamics Online: Theory and Implementation Using Autolev. Basically, the process involved going through each of these codes, validating the parser results and improving the rules if required to make sure the codes parsed well. The parsed codes of these are available on GitLab here. The repo is private so access needs to be requested. As of now, most codes till Chapter 4 of Dynamics Online have been parsed. Completing all the remaining codes of the book (namely, 2-10, 2-11, rest of Ch4, Ch5 and Ch6 (less important) ) would make the parser more complete. 2. Fixing Issues¶ The second thing to do would be to go about fixing the problems described above in the Gotchas and Limitations and Issues sections in order of priority and ease. Many of these require changes in the parser code while some of these are better fixed by adding some functionality to SymPy. 3. Switching to an AST¶ The parser is currently built using a kind of Concrete Syntax Tree (CST) using the ANTLR framework. It would be ideal to switch from a CST to an Abstract Syntax Tree (AST). This way, the parser code will be independent of the ANTLR grammar which makes it a lot more flexible. It would also be easier to make changes to the grammar and the rules of the parser.
{"url":"https://docs.sympy.org/latest/modules/physics/mechanics/autolev_parser.html","timestamp":"2024-11-11T00:55:51Z","content_type":"text/html","content_length":"119608","record_id":"<urn:uuid:169a0499-f025-496b-85a8-e2870982f544>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00093.warc.gz"}
2.2 C) Linear Equations: Unknowns and Numbers on Both Sides – Linear Equations – AQA GCSE Maths Higher If we are given a question with unknowns on both sides, we must get all of the unknowns to one side and all of the numbers to the other side. We then divide both sides by the coefficient of the Example 1 Find the value of A. It is easier to work with positive values of the unknown, therefore we get all of the unknowns to the side with the greatest number of unknowns. This means that we are going to get all of the unknowns to the left side of the equation because 7A is greater than 3A. Therefore, we want to move the 3A from the right side to the left, which we are able to do by doing the opposite; we take 3A from both sides. The next step is to get all of the numbers on the right side, which means that we need to move the 4 that is currently on the left side to the right. We are able to do this by doing the opposite; we take 4 from both sides of the equation. We want to know what A is and not what 4A is. Therefore, we divide both sides by 4 (the coefficient of the unknown that we are looking for). This tells us that A is equal to -3. We can check that we have the correct value for A by subbing A in as -3 into the equation that we were given at the beginning (we should always use the equation at the beginning rather than any manipulated equation because we may have made a mistake whilst manipulating the equation and this would mean that we are checking an incorrect equation). This equation works, which means that we have the correct value for A; A is -3. Example 2 Find the value of b. We are going to answer this question by getting all of the unknowns to one side and all of the numbers to the other side. We get the unknowns to the side that has the greatest number of unknowns, which is the left side because -4b is greater than -6b. Therefore, we want to have all of the unknowns on the right and all of the numbers on the left. This means that we need to move the -6b from the left to the right. We do this by doing the opposite, which is to add 6b to both sides of the equation. We now need to move the 4 that is currently on the right side to the left side. We do this by taking 4 from both sides. We want to find the value of b and not 2b. Therefore, we divide both sides of the equation by 2. This tells us that b is 6. We are able to check that we have found the correct value for b by subbing in b as 6 into the first equation (the equation at the top). This equation works, which means that we have found the correct value for b; b is 6.
{"url":"https://www.elevise.co.uk/g-a-m-h-22-c.html","timestamp":"2024-11-02T02:38:52Z","content_type":"text/html","content_length":"99365","record_id":"<urn:uuid:5e8ff6b3-6de5-459d-b707-f2e7cf9e4d0b>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00603.warc.gz"}
Material Behavior at the Event Horizon • Thread starter MattRob • Start date In summary, the conversation discusses the concept of energy in relation to escape velocity and black holes. It is stated that the energy required to escape from the event horizon is not well-defined because the spacetime is not static at or inside the event horizon. Additionally, the mass that an object adds to a black hole is determined by its original rest mass, not its kinetic energy gained as it falls. The conversation also touches on the paradox of time-reversal and the lack of symmetry in a black hole formed by gravitational collapse. In classical mechanics, to raise from some height [itex]h_{0}[/itex] to infinity over a gravitational body, takes a certain amount of energy, the energy associated with escape velocity, let's just call it [itex]ε[/itex]. [itex]ε = \lim_{t\rightarrow +\infty} \int_{h_{0}}^t ƒ(h)dh[/itex] Likewise, it's time-reversible, so dropping something from stationary at an infinite distance, then when it reaches [itex]h_{0}[/itex], because of potential energy becoming kinetic, it will have that same amount of energy in kinetic. So, if the energy required to escape from the event horizon is infinite, then what keeps something falling into a black hole from achieving an infinite amount of energy as it reaches the event horizon, thus contributing an infinite amount of mass to the black hole? I guess time-reversal doesn't really apply in the same way here, since a time-reversal would mean reversing the direction of gravity as well (since its a curve in spacetime), creating a white hole. But I'm still wondering how it can be that something can require an infinite amount of energy to escape from a certain [itex]h_{0}[/itex], yet not achieve an infinite energy when dropping from higher up down to [itex]h_{0}[/itex]. So different equations would be used to describe something falling as opposed to something attempting to rise out of the gravity well? You are using energy arguments, which do not apply to GR. Sufficient to say, the event horizon is characterised by light not being able to increase the ##r## coordinate. MattRob said: if the energy required to escape from the event horizon is infinite That's not correct. The correct statement is that "the energy required to escape from the event horizon" is not well-defined, because the spacetime is not static at (or inside) the event horizon, and the "energy" you are talking about is only defined in the region where the spacetime is static. MattRob said: what keeps something falling into a black hole from achieving an infinite amount of energy as it reaches the event horizon, thus contributing an infinite amount of mass to the black hole? The mass that the object falling in from rest at infinity adds to the hole is not determined by the kinetic energy gained as it falls. It is determined by the object's original rest mass, when it was sitting at infinity. (This assumes that the object free-falls through the horizon.) The reason for this is similar to the reason why an object going very, very fast does not turn into a black hole: kinetic energy is frame-dependent, but whether or not an object is a black hole, or how much mass an infalling object adds to a black hole, is not frame-dependent; it's an invariant, the same for all Staff Emeritus Science Advisor Gold Member Orodruin said: You are using energy arguments, which do not apply to GR. I guess this is a funny case where the correctness of your statement depends on the punctuation. With the comma, it sounds like you're making a general statement that energy arguments don't apply to GR. That would be wrong. A test particle in a static spacetime does have a conserved energy. But without the comma, maybe you mean that the particular energy arguments used by the OP are wrong in GR, and then I would agree with you. Staff Emeritus Science Advisor Gold Member MattRob said: In classical mechanics, to raise from some height [itex]h_{0}[/itex] to infinity over a gravitational body, takes a certain amount of energy [...] [itex]ε = \lim_{t\rightarrow +\infty} \int_{h_{0}}^t ƒ(h)dh[/itex] [...]So, if the energy required to escape from the event horizon is infinite This description makes it sound as though a finite amount of energy would be enough to make the particle come out through the event horizon and rise to some height, but not to infinity. Actually, you can't get the particle to come out through the event horizon at all. MattRob said: Likewise, it's time-reversible What you've really stated is a paradox involving time-reversal, not energy. The energy part isn't needed in order to create the paradox, which is fundamentally just this: if we have trajectories for particles falling into a black hole, then by time-reversal, why don't we have trajectories for particles coming out? If you construct the maximal extension of the Schwarzschild spacetime, you get a spacetime that is time-reversal symmetric, and in which there is a white hole as well as a black hole. There are indeed outgoing trajectories for test particles, but they are emerging from the white hole. In a black hole that forms by gravitational collapse, the spacetime doesn't have this symmetry. This total lack of symmetry isn't so obvious when you just look at the expression for the Schwarzschild metric, which applies after the black hole has settled down. It's very obvious, though, if you look at the Penrose diagram. Keep in mind that inside the event horizon, it's the Schwarzschild r coordinate that's timelike, not t, so the transformation ##t\rightarrow -t## isn't a time-reversal. bcrowell said: I guess this is a funny case where the correctness of your statement depends on the punctuation. With the comma, it sounds like you're making a general statement that energy arguments don't apply to GR. That would be wrong. A test particle in a static spacetime does have a conserved energy. But without the comma, maybe you mean that the particular energy arguments used by the OP are wrong in GR, and then I would agree with you. I would agree with removing the comma, yes. Thank you for clarifying. PeterDonis said: That's not correct. The correct statement is that "the energy required to escape from the event horizon" is not well-defined, because the spacetime is not static at (or inside) the event horizon, and the "energy" you are talking about is only defined in the region where the spacetime is static. The mass that the object falling in from rest at infinity adds to the hole is not determined by the kinetic energy gained as it falls. It is determined by the object's original rest mass, when it was sitting at infinity. (This assumes that the object free-falls through the horizon.) The reason for this is similar to the reason why an object going very, very fast does not turn into a black hole: kinetic energy is frame-dependent, but whether or not an object is a black hole, or how much mass an infalling object adds to a black hole, is not frame-dependent; it's an invariant, the same for all observers. Okay; so I've been reading the book recommended in this thread: And I came across this; I think this is what you were referring to? [itex]m^{2} = E^{2} - ρ^{2}[/itex], where [itex]E[/itex] and [itex]ρ[/itex] will differ for different observers, but in such a way that [itex]m[/itex] remains constant? Wouldn't that kind of make this equation a metric, too, then? That's really fascinating, then. I can see how they derived this, but it's still somewhat baffling that you'd subtract the momentum from the energy like that to get mass. Fascinating! So mass really is a constant, then. Doesn't that mean that whole thing about "mass increase as your approach the speed of light" is kind of a misconception, then? I've seen that explanation used before - that if you watch a rocket (accelerating with a constant acceleration) accelerate indefinitely (ignoring fuel constraints for simplicity), then its approach to [itex]c[/itex] will be asymptotic because its mass increases by a factor of [itex]γ = \frac{1} {1-β^{2}}[/itex] as it gains velocity relative to the coordinate frame; thus will never reach [itex]c[/itex] in the coordinate frame (ie; mass in coordinate frame = [itex]m_{0}γ[/itex], with [itex]m_ {0}[/itex] being rest mass). But, while this would yield the same result for coordinate position as a function of coordinate time(?), this is a misconception that's inconsistent with other physics - that a more accurate approach to take is to describe the rocket as experiencing a constant acceleration in its proper frame, but because its "proper clock" appears to tick more and more slowly to the coordinate frame, its acceleration likewise decreases in the coordinate frame. When [itex]\frac{d\tau}{dt} = 1[/itex], then [itex]\frac{dv}{dt} = a_{0}[/itex] (where [itex]a_{0}[/itex] is its proper acceleration), but as it increases in velocity, [itex]\frac{d\tau}{dt} = i > 1[/itex], [itex]\frac{dv}{dt} = \frac{a_{0}}{i}[/itex], and as [itex]v→c[/itex], [itex]\frac{d\tau}{dt} → ∞[/itex], and thus [itex]\frac{dv}{dt} → \frac{a_{0}}{∝} = 0[/itex]. So in other words, as it approaches [itex]c[/itex], its acceleration in the coordinate frame approaches zero in such a way that its velocity approaches [itex]c[/itex] asymptotically, not because of an increase in mass, but due to time dilation slowing its acceleration. Thus, the idea of an object gaining mass due to relativistic velocities is simply incorrect - an artifact of time dilation. Is this understanding correct? But then what of mass-energy equivalence; where is that kinetic energy's "mass"? MattRob said: So mass really is a constant, then. Doesn't that mean that whole thing about "mass increase as your approach the speed of light" is kind of a misconception, then? I suggest you have a look at our relativity FAQ section: https://www.physicsforums.com/threads/what-is-relativistic-mass-and-why-it-is-not-used-much.796527/ Last edited by a moderator: MattRob said: I think this is what you were referring to? ##m^{2} = E^{2} - ρ^{2}## where ##E## and ##ρ## will differ for different observers, but in such a way that mm remains constant? Wouldn't that kind of make this equation a metric, too, then? No. It's just the relativistic energy-momentum relation. It has nothing to do with the metric; it's the same in any spacetime. MattRob said: a more accurate approach to take is to describe the rocket as experiencing a constant acceleration in its proper frame This is equivalent to saying it has constant proper acceleration, yes. But this has nothing to do with an object freely falling into a black hole; such an object has zero proper acceleration. There isn't any useful analogy between these two cases. PeterDonis said: No. It's just the relativistic energy-momentum relation. It has nothing to do with the metric; it's the same in any spacetime. This is equivalent to saying it has constant proper acceleration, yes. But this has nothing to do with an object freely falling into a black hole; such an object has zero proper acceleration. There isn't any useful analogy between these two cases. It's only tangentially related to something freely falling into a black hole, since as you said, something freely falling into a black hole experiences no proper acceleration. The point, though, is that the asymptotic nature of accelerating to [itex]c[/itex] can be explained without saying that objects increase in mass as they accelerate in any particular frame of reference; this effect can instead be attributed to time dilation. The relation to the original question, is that mass is not frame-dependent. [S:Which statement I find somewhat confusing, though, due to mass-energy equivalence and that energy is frame dependent. :S] Orodruin said: I suggest you have a look at our relativity FAQ section: https://www.physicsforums.com/threads/what-is-relativistic-mass-and-why-it-is-not-used-much.796527/ I'm still taking this in. But in the course of it, I found this: that cleared that up. I'll continue to mull these over. Last edited by a moderator: MattRob said: the asymptotic nature of accelerating to ##c## can be explained without saying that objects increase in mass as they accelerate in any particular frame of reference; this effect can instead be attributed to time dilation. Neither of these is an explanation; they are just different ways of looking at the side effects of proper acceleration. The explanation is the causal structure of spacetime: timelike vectors and null vectors are fundamentally different things, and an object's 4-momentum vector can only be of one type or the other; it can't change type. Proper acceleration changes the "direction in spacetime" of an object's 4-momentum, but it can't change a timelike 4-momentum into a null 4-momentum, which is what "accelerating to c" non-asymptotically would require. MattRob said: The relation to the original question, is that mass is not frame-dependent. This is just another way of saying that the norm of an object's 4-momentum (as with the norm of any 4-vector) is not frame-dependent. But that's not the same as saying the norm can't change from timelike to null. That's a different statement. FAQ: Material Behavior at the Event Horizon 1. What is the event horizon? The event horizon is the boundary surrounding a black hole, where the gravitational pull is so strong that not even light can escape. 2. How does material behave at the event horizon? At the event horizon, the intense gravitational force causes extreme distortion of space and time, which can greatly affect the behavior of matter. Objects approaching the event horizon will experience extreme tidal forces and may be torn apart by the intense gravity. 3. Can anything escape from the event horizon? No, once an object crosses the event horizon, it is trapped and cannot escape, even if it is travelling at the speed of light. 4. What happens to the material that falls into a black hole? The material that falls into a black hole is compressed and heated to extreme temperatures, emitting intense radiation as it spirals towards the center of the black hole. 5. How does the behavior of material at the event horizon impact our understanding of the universe? The behavior of material at the event horizon provides insight into the extreme conditions of space and time, and how gravity can affect the behavior of matter. It also helps us understand the formation and evolution of black holes, which play a significant role in shaping the structure of our universe.
{"url":"https://www.physicsforums.com/threads/material-behavior-at-the-event-horizon.821002/","timestamp":"2024-11-09T13:08:07Z","content_type":"text/html","content_length":"146624","record_id":"<urn:uuid:12c3f2b6-8e45-4dfd-a930-a2d02f13d192>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00301.warc.gz"}
Linear & Quadratic Functions Chapter 4 Overview This chapter further explores the properties of functions, their graphs, and their applications. The topics include: linear functions, demand, revenue, quadratic functions, vertices, axes of symmetry, intercepts, domain, range, extrema, and applications of functions. Supplemental Documentation The documents below accompany the lessons in this chapter. Before moving on to the video sections, please feel free to download the documents below.
{"url":"https://www.pantheralgebra.com/4/more-functions","timestamp":"2024-11-07T15:35:57Z","content_type":"text/html","content_length":"26874","record_id":"<urn:uuid:416f5f71-81c3-4ae8-ab8a-7e6a883e1418>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00421.warc.gz"}
Automatic Grammatical Evolution-Based Optimization of Matrix Factorization Algorithm Faculty of Electrical Engineering, University of Ljubljana, 1000 Ljubljana, Slovenia Author to whom correspondence should be addressed. Submission received: 28 February 2022 / Revised: 28 March 2022 / Accepted: 29 March 2022 / Published: 1 April 2022 Nowadays, recommender systems are vital in lessening the information overload by filtering out unnecessary information, thus increasing comfort and quality of life. Matrix factorization (MF) is a well-known recommender system algorithm that offers good results but requires a certain level of system knowledge and some effort on part of the user before use. In this article, we proposed an improvement using grammatical evolution (GE) to automatically initialize and optimize the algorithm and some of its settings. This enables the algorithm to produce optimal results without requiring any prior or in-depth knowledge, thus making it possible for an average user to use the system without going through a lengthy initialization phase. We tested the approach on several well-known datasets. We found our results to be comparable to those of others while requiring a lot less set-up. Finally, we also found out that our approach can detect the occurrence of over-saturation in large datasets. 1. Introduction Recommender systems (RS) are computerized services offered to the user that diminish information overload by filtering out unnecessary and annoying information, thus simplifying the process of finding interesting and/or relevant content which improves comfort and quality of life. The output of a typical RS is a list of recommendations produced by one of the several prediction generation algorithms (e.g., word vectors [ ], decision trees [ ], (naïve) Bayes classifiers [ ], k-nearest neighbors [ ], support vector machines [ ], etc.) built upon a specific user model (e.g., collaborative, content-based, or hybrid) [ ]. The first application of a recommender algorithm was recorded in the 1980s when Salton [ ] published an article about a word-vector-based algorithm for text document search. The algorithm was expanded to a wider range of content for applications ranging from a document search [ ] to e-mail filtering [ ] and personalized multimedia item retrieval [ ]. Nowadays, RSs are used in on-line shops such as Amazon to recommend additional articles to the user, in video streaming services such as Netflix to help users find something interesting to view, in advertising to limit the number of displayed advertisements to those that meet the interests of the target audience, and even in home appliances. The field of RSs is undergoing a new evolution in which researchers are tackling the topics of recommendation diversity [ ], contextualization [ ], and general optimization/automation of the recommendation process. It is important to note that these mentioned aspects often counteract each other so that, for example, increased diversity often leads to lower accuracy and vice versa [ Recommender systems include a large number of parameters that can (and should) be optimized, such as the number of nearest neighbors, the number of items to recommend, the number of latent features, and which context fields are used, just to name a few. We are therefore dealing with a multidimensional optimization problem which is also time dependent, as stated in [ ]. The optimization of these parameters requires a lot of manual work by a system administrator/developer and often cannot be performed in real time as it requires the system to go off-line until the new values are determined. For this article, we focused on the matrix factorization (MF) algorithm [ ], which is currently one of the most widespread collaborative RS algorithms and is implemented in several software packages and server frameworks. Despite this, the most often used approach of selecting the best values for the algorithm parameters is still by trial and error (see, for example, the Surprise [ ] and Scikit-Learn [ ] documentation, as well as articles such as [ ]). In addition, the MF approach is also highly sensitive to the learning rate, whose initial choice and adaptation strategy are crucial, as stated in [ Evolutionary computing is emerging as one of the automatic optimization approaches in recommender systems [ ], and there have been several attempts of using genetic algorithms on the matrix factorization (MF) algorithm [ ]. Balcar [ ] used the multiple island method in order to find a better way of calculating the latent factors using the stochastic gradient descent method. Navgaran et al. [ ] had a similar idea by using genetic algorithm to directly calculate latent factor matrices for user and items which worked but encountered issues with scalability when dealing with larger datasets. Razaei et al. [ ], on the other hand, focused on the initialization of optimal parameters and used a combination of multiple island and genetic algorithm to achieve this. Lara-Cabrera et al. [ ] went a step further and used Genetic Programming to evolve new latent factor calculation strategies. With the exception of [ ], the presented approaches focus on algorithm initialization or direct calculation of factor matrices (which introduces large chromosomes even for relatively small datasets). They do not, however, optimize the latent feature calculation procedure itself. In this article, we present a novel approach that uses grammatical evolution (GE) [ ] to automatically optimize the MF algorithm. Our aim is not only to use GE for optimization of the MF algorithm but also to do so in a black-box manner that is completely autonomous and does not rely on any domain knowledge. The reasoning behind this approach is that we want to create a tool for an average user who wants to use the MF algorithm but lacks any domain/specialized knowledge of the algorithm’s settings. Our approach would therefore enable a user to activate and correctly use modules featured on their server framework (such as Apache). Out of existing evolutionary approaches, we selected GE because we want to develop new equations—latent factor calculation methods. Alternatives such as genetic algorithm or particle swarm optimization focus on parameter optimization and cannot be easily used for the task of evolving new expressions. Genetic Programming [ ] would also be an option but is more restrictive in terms of its grammar rules. GE, on the other hand, allows the creation of specialized functions and incorporation of expert knowledge. In the first experiment, we used GE for automatic optimization of the parameter values of the MF algorithm to achieve the best possible performance. We then expanded our experiments to use GE for modification of the latent feature update equations of the MF algorithm to further optimize its performance. This is a classical problem of meta-optimization, where we tune the parameters of an optimization algorithm using another optimization algorithm. We evaluated our approach using four diverse datasets (CoMoDa, MovieLens, Jester, and Book Crossing) featuring different sizes, saturation, and content types. Section 2 , we outline the original MF algorithm and -fold cross-validation procedure for evaluating the algorithm’s performance that we used in our experiments. Section 3 summarizes some basic concepts behind GE with a focus on specific settings that we implemented in our work. In Section 4 , we describe both used datasets and the hardware used to run the experiments. Finally, we present and discuss the evolved equations and numerical results obtained using those equations in the MF algorithm in Section 5 2. Matrix Factorization Algorithm Matrix factorization (MF), as presented in [ ], is a collaborative filtering approach that builds a vector of latent factors for each user or item to describe its character. The higher the coherence between the user and item factor vectors, the more likely the item will be recommended to the user. One of the benefits of this approach is that it can be fairly easily modified to include additional options such as contextualization [ ], diversification, or any additional criteria. The algorithm, however, still has some weak points. Among them, we would like to mention the so-called cold start problem (i.e., adding new and unrated content items to the system or adding users who did not rate a single item yet), the dimensionality problem (the algorithm requires several passes over the whole dataset, which could become quite large during the lifespan of the service), and the optimization problem (the algorithm contains several parameters which must be tuned by hand before running the service). The MF algorithm used in this research is the original singular value decomposition (SVD) model which is one of the state-of-the-art methods in collaborative filtering [ ]. The MF algorithm itself is based on an approach similar to the principal component analysis because it decomposes the user-by-item sparse matrix into static biases and latent factors of each existing user and item . These factors are then used to calculate the missing values in the original matrix which, in turn, are used as predicting ratings. 2.1. An Overview of a Basic Matrix Factorization Approach In a matrix factorization model, users and items are represented as vectors of latent features in a joint -dimensional latent factor space. Specifically, item is represented by vector $q i ∈ R f$ and user is represented by vector $p u ∈ R f$ . Individual elements of these vectors express either how much of a specific factor the item possesses or how interested the user is in a specific factor. Although there is no clear explanation to these features (i.e., one cannot directly interpret them as genres, actors, or other metadata), it has been shown that these vectors can be used in MF to predict the user’s interest in items they have not yet rated. This is achieved by calculating the dot product of the selected user’s feature vector $p u$ and the feature vector of the potentially interesting item $q u$ , as shown in ( ). The result of this calculation is the predicted rating $r ^ u i$ which serves as a measure of whether or not this item should be presented to the user. The most intriguing challenge of the MF algorithm is the calculation of the factor vectors $q i$ $p u$ which is usually accomplished by using a regularized model to avoid overfitting [ ]. A system learns the factor vectors through the minimization of the regularized square error on the training set of known ratings: $min q , p ∑ u , i ∈ κ ( r u i − r ^ u i ) 2 + λ ( ‖ q i ‖ 2 + ‖ p u ‖ 2 ) ,$ representing the training set (i.e., the set of user/item pairs for which rating $r u i$ is known). Because the system uses the calculated latent factor values to predict future, unknown ratings, the system must avoid overfitting to the observed data. This is accomplished by regularizing the calculated factors using the constant , whose value is usually determined via cross-validation. 2.2. Biases In reality, Equation ( ) does not completely explain the observed variations in rating values. A great deal of these variations are contributed by effects independent of any user/item interaction. These contributions are because they model inclinations of some users to give better/worse ratings than others or tendencies of some items to receive higher/lower ratings than others. Put differently, biases measure how much a certain user or item deviates from the average. Apart from the user ( $b u$ ) and item ( $b i$ ) biases, the overall average rating ( ) is also considered as a part of a rating: $r ^ u i = μ + b i + b u + q i T p u .$ With the addition of biases and the average rating, the regularized square error ( ) expands into $min q , p , b ∑ u , i ∈ κ ( r u i − μ − b i − b u − q i T p u ) 2 + λ ( ‖ q i ‖ 2 + ‖ p u ‖ 2 + b u 2 + b i 2 ) .$ 2.3. The Algorithm The system calculates latent factor vectors by minimizing Equation ( ). For our research, we used the stochastic gradient descent algorithm as it is one of the more often used approaches for this task. The parameter values used for the baseline evaluation were the same as presented in [ ], and we summarized them in Table 1 . The initial value assigned to all the latent factors was 0.03 for the CoMoDa dataset [ ] and random values between 0.01 and 0.09 for the others. The minimization procedure is depicted in Algorithm 1 and can be summarized as follows. The algorithm begins by initializing latent factor vectors $p u$ $q i$ with default values (0.03) for all users and items. These values—together with the constant biases and the overall average rating—are then used to calculate the prediction error for each observed rating in the dataset: $e u i = r u i − μ − b i − b u − q i T p u .$ The crucial part of this procedure is the computation of new user/item latent factor values. We compute a new th latent factor value $p u k$ of user and a new th latent factor value $q i k$ of item as follows: $p u k ← p u k + γ p ( e u i q i k − λ p u k ) ,$ $q i k ← q i k + γ q ( e u i p u k − λ q i k ) ,$ $γ p$ $γ q$ determine learning rates for users and items, respectively, and controls the regularization. The computation of Equations ( ) is carried out for all observed ratings and is repeated until a certain stopping criterion is met, as outlined in Algorithm 1. Algorithm 1 Stochastic gradient descent Initialize user and item latent factor vectors $p u$ and $q i$ Calculate constant biases $b u$ and $b i$ and the overall average rating $μ$ for$k ← 1 to f$do for each observed rating $r u i$do Compute the prediction error $e u i$ for $i t e r ← 1 to N$do Compute new factor values $p u k$ ) and $q i k$ end for end for end for 2.4. Optimization Task As already mentioned, the original MF algorithm still suffers from a few weaknesses. In our research, we focused on the problem of algorithm initialization and optimization during its lifetime. As seen in ( ) and ( ), we needed to optimize several parameter values for the algorithm to work properly: learning rates $γ p$ $γ q$ , regularization factor , and initial latent feature values that are used during the first iteration. Apart from that, the optimal values of these parameters change as the dataset matures through time and the users provide more and more ratings. Despite the widespread use of the MF algorithm and its implementation in several frameworks, an efficient methodology to automatically determine the values of the algorithm’s parameters is still missing. During our initial work with the MF algorithm [ ], we spent a lot of time comparing RMSE plots and CSV values in order to determine the initial parameter values. The listed articles [ ] show that this is still the usual practice with this algorithm. The main goal of our research was to use GE to achieve the same (or possibly better) RMSE performance as the original (hand-tuned) MF algorithm without manually setting any of its parameters. This way, we would be able to build an automatic recommender service yielding the best algorithm performance without any need of human intervention. 2.5. Evaluation of Algorithm’s Performance The selection of the evaluation metric for MF depends on whether we look at the algorithm as a regression model or as a pure RS problem. With a regression model, we focus on fitting all the values and do not discern between “good” and “bad” values (i.e., ratings above and below a certain value). An RS model, on the other hand, is focused on classification accuracy which is interpreted as the number of correctly recommended items. Such an item has both ratings (predicted and true) above the selected threshold value, for example, above 4 in a system with rating on a scale from 1 to 5. Such a metric is therefore more forgiving because it does not care if the predicted rating is different from the actual rating by a large factor as long as both of them end up in the same category (above/ below the threshold value). Typical examples of regression metrics include mean-squared error, root-mean-squared error (RMSE), and R-squared, while classification metrics include precision, recall, f-measure, ROC-curve, and intra-list diversity. Because our focus was on optimizing the MF algorithm, we chose the regression aspect and wanted to match all the existing ratings as closely as possible. We used the RMSE measure in combination with -fold cross-validation as this combination is most often used in the RS research community [ ], providing us with a lot of benchmark values with which to compare. In addition, we also verified our findings using the Wilcoxon ranked sum test with significance level $α = 0.05$ . All of our experiments resulted in a -value that was lower than the selected significance level, which confirms that our experiments were statistically significant. A summary of our statistical testing is given in Table 2 2.5.1. RMSE RMSE is one of the established measures in regression models and is calculated using the following equation: $R M S E = ∑ u , i ∈ κ ( r u i − r ^ u i ) 2 | κ | ,$ $r u i$ is the actual rating of user for item $r ^ u i$ is the system’s predicted rating. $| κ |$ is the cardinality of the training set (i.e., the number of known user ratings). The RMSE values from experiments of other research groups on the datasets used in this article are summarized in Table 3 2.5.2. Overfitting and Cross-Validation Overfitting is one of the greatest nemeses of RS algorithms because it tends to fit the existing data perfectly instead of predicting missing (future) data, which is what RSs are meant for. All algorithms must therefore either include a special safety mechanism or be trained using additional techniques such as cross-validation. The original MF algorithm used in this research uses a safety mechanism in the form of regularization parameter $λ$ whose value had to be set manually. The value of the parameter was set in a way that achieved the best performance on the test set and thus reduced the amount of overfitting. It is important to note that the optimal value of the regularization parameter changes depending on the dataset as well as with time (i.e., with additional ratings added to the system). Regularization alone is however not enough to avoid overfitting, especially when its value is subject to automated optimization which could reduce it to zero, thus producing maximum fitting to the existing data. In order to avoid overfitting, we used cross-validation in our experiments. n-fold cross-validation is one of the gold standard tests for RS assessment. The test aims to reduce the chance of overfitting the training data by going over the dataset multiple times while rearranging the data before each pass. The dataset is split into n equal parts. During each pass over the dataset, one of the parts is used as the test set for evaluation while the remaining $n − 1$ parts represent the training set that is used for training (calculating) the system’s parameters. The final quality of the system is calculated as the average RMSE of n, thus performed tests. 3. Grammatical Evolution The first and the most important step when using GE is defining a suitable grammar by using the Backus–Naur form (BNF). This also defines the search space and the complexity of our task. Once the grammar is selected, the second step is to choose the evolution parameters such as the population size and the type and probability of crossover and mutation. 3.1. The Grammar Because we want to control the initialization process and avoid forming invalid individuals due to infinite mapping of the chromosome, we need three different sections of our grammar: recursive, non-recursive, and normal [ ]. The recursive grammar includes rules that never produce a non-terminal. Instead, they result in direct or indirect recursion. The non-recursive grammar, on the other hand, never leads to the same derivation rule and thus avoids recursion. We then control the depth of a derivation tree by alternating between non-recursive and recursive grammar. Using recursive grammar will result in tree growth, while switching to non-recursive stops this growth. The last of the grammars—normal—is the combination of the recursive and non-recursive grammar and can therefore produce trees of varying depth. The following derivation rules constitute our non-recursive grammar: <expr> :: = <const> | <var> <const> :: = <sign><n>.<n><n> <sign> :: = + | − <n> :: = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 <var> :: = $p u k$ | $p i k$ | $e u i$ This is the recursive grammar: <expr> :: = <expr> <binOper> <expr> | <unOper> <expr> <binOper> :: = + | − | ∗ | / <unOper> :: = log() | The normal grammar is the combination of the above two grammars where both derivation rules for an expression are merged into a single one: <expr> :: = <expr> <binOper> <expr> | <unOper> <expr> | <const> | <var> 3.2. Initialization In GE, the genotype consists of a sequence of codons which are represented by eight-bit integers. We used this codon sequence to synthesize the derivation tree (i.e., the phenotype) based on the selected start symbol. Each codon was then used to select the next derivation rule from the grammar. The start symbol determined the first rule, while the remaining rules were selected based on whatever symbols appear in the derivation tree until all the branches end in a terminal rule. Because the range of possible codon values is usually larger than the number of available rules for the current symbol, we used a modus operator to map the codon value to the range of the number of rules for that symbol. This also means that we seldom used the complete codon sequence in the genotype. Either we produced a complete tree before using all the codons or we ran out of codons and used wrapping to complete the tree (i.e., we simply carried on with mapping using the same string of codons again). In order to overcome the problem of starting genotype strings failing to map to finite phenotypes [ ], we used the sensible initialization technique proposed in [ ]. This technique enables the use of a monitored selection of rules from either a non-recursive or recursive grammar. This method mimics the original ramped half-and-half method described in [ ]. The ramped half-and-half method results in a population half of the trees having individual leafs of the same depth, while the other half have leafs with arbitrary depths no deeper than the specified maximum depth. Such simple procedures often result in improved performance because they allow us to potentially avoid the pitfall of having all individuals start close to a localized extreme. In such a case, the algorithm could become stuck there or spend a lot of time (generations) to find the true optimal value. In our previous experiments [ ], using the sensible initialization proved to be sufficient. Should this fail, however, one could also consider additional techniques such as Cluster-Based Population Initialization proposed by Poikolainen et al. [ 3.3. Generating Individuals from Chromosomes—An Example Each of the individuals begins as a single chromosome from which an update equation is derived and used in our GE enhanced MF algorithm (see Algorithm 2). In most of our experiments, the chromosome produces an equation that is used to calculate the values of user/item latent factors. To help understand our approach, let us look at an example and assume that we have the chromosome {14, 126, 200, 20, 75, 12, 215, 178, 48, 88, 78, 240, 137, 160, 190, 98, 247, 11} and use the start symbol from Section 5.2 (in fact, we will derive the two equations from Equation ( )). The first symbol in the start symbol is and, using our grammar from Section 3.1 , we perform the following modulo operation: (rule number) = (codon integer value) mod (number of rules for the current non-terminal) As our start symbol has four possible rules (we use the normal grammar), the codon value of 15 selects the third production rule (i.e., 14 mod $4 = 2$ ), hence choosing the constant ( ). The next four codons are then used to derive the constant value which, according to the grammar, consists of a sign and three digits. The second equation (i.e., the right equation in Equation ( )) is derived from the remaining codons. The whole mapping process is summarized in Table 4 . The first two columns list the codons and the symbols that are being used for the derivation, while the second two columns give the results of the modulo operations and selected terminals/ non-terminals using the corresponding production rule on non-terminals from the second row. The result of this procedure is Equation ( ) which was then used as part of our further experiments (see Section 5.3 for further details). 3.4. Optimizing MF—Evaluating Individuals For each of the experiments, the just described derivation procedure is used to compose code representing Equations ( ) and ( ) within Algorithm 2. Thus, the obtained algorithm represents an individual in the population whose fitness we evaluated according to standard 10-fold cross-validation: First, the algorithm was run on a train set to obtain user and item latent factor values. Second, RMSE is computed using those factor values on a test set. The procedure was repeated for all ten folds and the final fitness of an individual is the average of the obtained ten RMSEs. Algorithm 2 GE enhanced stochastic gradient descent Initialize user and item latent factor vectors $p u$ and $q i$ Calculate constant biases $b u$ and $b i$ and the overall average rating $μ$ for$k ← 1 to f$do for each observed rating $r u i$do Compute the prediction error $e u i$ for $i t e r ← 1 to N$do Compute new factor values using functions from the individual’s chromosome end for end for end for In cases where the individual’s chromosome would result in an unfeasible solution (e.g., division by zero or too large latent factor values), we immediately mark the individual for deletion by setting its fitness function to an extremely large value even before the evaluation of the individual starts. With this approach, we quickly trimmed the population of unwanted individuals and start searching within the feasible solution set after the first few generations. 3.5. Crossover and Mutation The original one-point crossover which was proposed by O’Neil et al. [ ] has a destructive effect on the information contained by the parents. This occurs because changing the position in the genotype string results in a completely different phenotype. For our experiments, we decided to use the LHS replacement crossover [ ] instead because it does not ruin the information contained in the parents’ phenotype. This is a two-point crossover where only the first crossover point is randomly selected. The crossover point in the second parent is then limited so that the selected codon expands the same type of non-terminal as the selected codon of the first parent. Both crossover points therefore feature codons that expand the expressions starting at the first crossover points. The LHS replacement crossover has many similarities with the crossover proposed in [ ] but also has some additional advantages. It is not limited by closure and maintains the advantages of using BNF grammar. Mutation can result in the same destructive effect when not properly controlled. Byrne [ ] called this a structural mutation . For our experiment, we created a mutation operator that works in a similar manner as the LHS replacement crossover. It mutates a randomly selected codon and reinterprets the remaining codons. As proposed by [ ], we set the probability of crossover and mutation to a relatively small value. This method is easy to implement and reduces the amount of situations where only a single terminal is mutated or two terminals are exchanged by a crossover operation. Although there is not much evidence for or against this practice, it has been demonstrated that changing node selection to favor larger subtrees can noticeably improve GE performance on basic standard problems [ 3.6. Evolution Settings Table 5 shows the parameters in our experiment. In order to control bloat, we limited the maximal tree depth [ ] and the count of nodes of individuals by penalizing those that overstepped either of the two limits. This penalty was performed by raising the fitness values of such individuals to infinity. Each following generation was created performing three steps. First, we created duplicates of a randomly selected 10% of individuals from the current population and mutated them. We then selected 20% of the current individuals to act as parents in crossover operations. This selection was performed using the tournament selection [ ] with the tournament size of two. Tournament selection chooses fitter individuals with higher probability and disregards how much better they are. This impacts fitness values to create constant pressure on the population. We determined the two above percentages based on previous experience and some preliminary experiments. In the last phase, we substituted the worst 30% of individuals in the population with the offspring obtained in the previous two steps (i.e., mutated copies (10%) and the crossover offspring (20%)). Using a maximum depth of 12 in our individually generated trees and assuming that a number is terminal, we can estimate the size of the search space to be in the range of 10 to the power of 18 (4 at depth level one, 361 at depth level two, 4272 at depth level three, and so on). With our number of generations (150), population size (50), and elite individual size (10%), we can calculate that we only need to evaluate 6755 individuals to find our solution. Considering the total search space size, this is a very low number of evaluations to perform. We can also assume that methods such as Exhaustive Search would have serious issues in such a large space. 4. Datasets and Hardware We used four different datasets to evaluate our approach. Each of the datasets featured different characteristics (different sparsity, item types, rating range, etc.), thus representing a different challenge. The different dataset sizes also demanded the use of different hardware sets to produce our results. 4.1. Dataset Characteristics The summary of main characteristics of the four datasets is given in Table 6 . We first used the LDOS CoMoDa dataset [ ], which we already used in our previous work [ ]. In order to obtain a more realistic picture of the merit of our approach, we also used three considerably larger and different (regarding the type of items and ratings) datasets—the MovieLens 100k dataset [ ], the Book-Crossing dataset [ ], and the Jester dataset [ 4.1.1. LDOS CoMoDa The collection of data for the LDOS CoMoDa began in 2009 as part of our work on contextualizing the original matrix factorization algorithm [ ]. The collection was performed via a simple web page which at first offered only a rating interface through which volunteers could provide feedback about any movies that they watched. The web page was later enriched with metadata from The Movie Database (TMDb) and recommendations generated by three different algorithms (MF, collaborative, and content-based). It should be noted that the specialty of this dataset lies in the fact that most of the ratings also contain context data about the rating—when and where the user watched the movie, with whom, what was their emotional state, and so on. 4.1.2. MovieLens The MovieLens dataset is a part of a web-based MovieLens RS, the successor of the EachMovie site which was closed in 1997. As with CoMoDa, the dataset contains ratings given to movies by users using the same rating scale (1 to 5) over a longer period of time. The dataset is offered in several forms (Small, Full, Synthetic), of which we chose the 100k MovieLens dataset. 4.1.3. Book-Crossing Dataset This dataset contains ratings related to books which share some similarity to movie ratings (i.e., genres, authors) but are at the same time distinct enough to warrant different RS settings. The ratings collected in this dataset present four weeks’ worth of ratings from the Book-Crossing community [ ]. The dataset is the largest dataset used in our research (more than one million ratings) and also features a different range of ratings (from 1 to 10). 4.1.4. Jester The Jester dataset [ ] consists of ratings given by users to jokes. The dataset represents several additional problems such as a wider rating range ( $− 10$ to 10), negative rating values, and smaller steps between ratings because users rate items by sliding a slider instead of selecting a discrete number of stars. Out of the three datasets offered, we selected the jester-data-3 dataset. 4.2. The Hardware We ran the experiments with the CoMoDa dataset on a personal computer with Intel Xeon 3.3 GHz processor, 16 Gb of RAM, 1 Tb hard disk, and Windows 10 Enterprise OS. The algorithm was developed in Python 2.7 using the Anaconda installation and Spyder IDE. For the experiments with the larger MovieLens dataset, we used a computer grid consisting of 3 2.66 GHz Core i5 (4 cores per CPU) machines running customized Debian Linux OS. In addition, we introduced several additional Python optimizations such as implementing parts of code in Cython which enables the compilation of GE created programs into C and their use as imported libraries. This approach was crucial with the Book-Crossing and Jester datasets as it enabled us to evaluate several generations of programs per hour despite the datasets huge size. 5. Results In this section, we present the results of our work using the four databases listed in Table 3 . The CoMoDa dataset is covered in Section 5.1 Section 5.2 . On this dataset, we first optimized only the learning rates and regularization factors (as real constants) from latent factors update equations (Equations ( ) and ( )). After that, in Section 5.2 , we also evolved the complete latent factors update equations. Armed with these preliminary results—and the convergence results presented in Section 5.3 —we then moved to the three much larger datasets (MovieLens, Book-Crossing, and Jester) in the last three subsections of this section. As the reader will learn, one of the most important findings of our work shows how GE detects when the static biases prevail over the information stored in latent factors, which is a phenomenon commonly observed in large datasets. 5.1. Automatic MF Algorithm Initialization In the first part of the experiment, we only optimized the real parameters used in the original MF algorithm outlined in Algorithm 2. Specifically, we optimized parameters $γ p$ $γ q$ representing the user and item learning rates and the regularization parameter . By doing this, we used GE as a tool that can automatically select the values that work best without requiring any input from the operator, thus avoiding problems arising from the selection of the wrong values at the start (as stated in [ ]). It can be argued that this approach does not use the full available power of GE, as it is “demoted” to the level of genetic algorithm (i.e., it only searches for the best possible set of constants in a given equation). Although this is partially true, we wanted to determine the best baseline value with which to compete in the following experiments and our GE method was flexible enough to be used in such a way. Otherwise, we would have used any other genetic algorithm which would most likely produce the same results but require additional complications in our existing framework. In order to constrain GE to act only on parameter values of the algorithm, we used the following start symbol in our grammar: $p u k ← p u k + < const > ( e u i q i k − < const > p u k ) ,$ $q i k ← q i k + < const > ( e u i p u k − < const > q i k ) ,$ Because we only wished to optimize certain parameters of the algorithm, 50 generations were enough for the evolution of an acceptable solution. Table 7 shows the results of 10 evolution runs ordered by the RMSE score of the best obtained programs. Note that there is a slight change from the original equation, because the evolution produced two different values for the regularization factor , denoted by $λ p$ $λ q$ for user and item regularization factor, respectively. As shown, all our programs exhibit better RMSE than the baseline algorithm using default settings from Table 1 , which shows a RMSE score of . The average RMSE of our ten genetically evolved programs is with a standard deviation of . In addition, a two-tailed -value of $5 × 10 − 6$ obtained by the Wilcoxon rank-sum test confirms that our results are significantly better than the result obtained from the baseline algorithm. A final remark can also be made by comparing the RMSE values when increasing the number of iterations, as seen in Figure 1 . We can see that the original baseline first drops rapidly but jumps back to a higher value after a maximum dip around the 100-th iteration. The optimized baseline, on the other hand, shows a constant reduction in the RMSE value without any jumps (except after every 100 iterations when we introduce an additional latent feature). This shows that even though we believed to have found the best possible settings in our previous work [ ], we set the learning rate too high which prevented us from finding the current minimum possible RMSE value. The new algorithm, on the other hand, detected this issue and adjusted the values accordingly. Although a 0.41% decrease in RMSE does not seem to be a very significant improvement, we should note that the GE enhanced MF algorithm achieved this level of performance without any manual tuning beyond the initial grammar settings. Compared to the amount of time we had to spend during our previous work for manual determination of the appropriate parameter values, this represents a significant reduction in the time which a practitioner has to spend tinkering with settings to obtain the first results. One should also note that an evaluation of 150 generations takes less time than setting parameter ranges manually. 5.2. Evolving New Latent Factors Update Equations After a successful application of GE to a simple real parameter optimization, we now wanted to evolve a complete replacement for Equations ( ) and ( ). For that purpose, we used the following start symbol for the grammar: $p u k$ = < expr > $q i k$ = < expr > Using this symbol, we can either evolve a complex equation (as seen in most of the item latent factor equations in Table 8 ) or a simple constant value. Note that during the evolution, we still used the values from Table 1 for evaluation of each single program in the population. This time, we let the evolution run for 150 generations, using the CoMoDa dataset again. Table 8 shows the ten best resulting equations for user and item latent factor calculation in addition to their RMSE score. Note that the actual equations produced by the evolution were quite hieroglyphic and the versions shown are mathematically equivalent equations simplified by hand. All 10 produced equations performed not only better than the hand-optimized baseline algorithm but also better than the GE optimized baseline version from the first part of the experiment. Compared to the average value of that we were able to achieve using only a parameter optimization, we now obtained an average RMSE value of with a standard deviation of . It was to be expected that the dispersion would now be greater as GE was given more freedom as to how to build the latent factors update expressions. A -value of $5 × 10 − 6$ signifies that the results are significantly better than those obtained from the optimized baseline RMSE in the first part of the experiment. What is even more important, this time, we managed to obtain a program (i.e., the best program in Table 8 ) whose performance is more than 10% better than that of a baseline algorithm. It is quite interesting to compare the evolved equations from Table 8 with those of the original algorithm (i.e., Equations ( ) and ( )). We noticed that the evolved equations fixated latent factor values of a user. There seems to be one exception (i.e., the fifth row of the table), but as the equation simply copies a factor value to the next iteration, this value retains its initial value and can therefore be considered constant as well. The right column of Table 8 contains equations for calculating latent factor values of an item. After a closer inspection of the equations, we can observe that they are all very similar to Equation ( ), only with quite a large learning rate $γ q$ and very small or even zero normalization constant . For example, in the first row of the table, we have $γ q = 500$ $λ = 0.002$ , and in the last row, we have $γ q = 140$ $λ = 0$ . In more than half of the equations, there is an additional constant factor whose interpretation is somehow ambiguous; it could be a kind of combination of a static bias and regularization. In summary, it is clear that GE diminished or sometimes even removed the regularization factor and greatly increased the learning rate in order to achieve the best possible result. Apart from that, it assigned the same constant value to all user latent factors. This could in turn signify that we are starting to experience the so-called bias-variance trade-off [ ], where static biases take over a major part of contribution to variations in rating values. 5.3. Convergence Analysis We have so far succeeded to evolve MF update equations that produced significantly better RMSE values than the original hand-optimized algorithm. During the second part of our experiment, we observed a notable increase in learning rate which made us believe that we do not actually need 100 iterations to reach the final values for the user and item factors. We generated the plot of the RMSE value as a function of iteration number to observe how the algorithm converges toward the final values. The blue line in Figure 1 shows how the RMSE value changed during a run of a baseline MF algorithm using the original parameter values from Table 1 , and the orange line shows the algorithm convergence using the optimized parameter values from the first row of Table 7 . The noticeable jumps in the curves that happen every 100 iterations are a consequence of adding an additional latent factor into the calculation every 100 iterations as shown in the baseline We can observe that the unoptimized algorithm results in quite curious behavior. The smallest RMSE value is reached after only 65 iterations, but then it rapidly increases to a value even greater than the initial guess. The RMSE value stays larger than the initial one, even after adding additional factors and letting the MF algorithm run for 100 iterations for each one of the added factors. Conversely, using the GE optimized version of the MF algorithm, we obtain a curve whose RMSE score fell steadily toward the final, much better RMSE score. Figure 2 shows how the RMSE value converges when we used the following equations as the update equations: $p u k = 0.05 q i k = q i k + 7 e u i$ $p u k = 0.05 q i k = 0.75 q i k + 2 e u i$ The equations are taken from the last and fourth row of Table 8 , respectively. The most obvious difference from Figure 1 is a much more rapid convergence, which was to be expected as the learning rates are several orders of magnitude larger (this can be seen by rewriting Equations ( ) and ( ) back into the forms of ( ) and ( ), respectively). It seems that a learning rate of 40 and a regularization factor of (Equation ( )) are quite appropriate values as seen in Figure 2 . However, the figure also shows how a learning rate of 140 (Equation ( )) already causes overshoots. At the same time, it seems that an absence of regularization produces overfitting which manifests itself as an increase in the RMSE values when the last two latent factors are added. Either way, the algorithm converged in just a few iterations each time a new latent factor was added, indicating that much less than 100 iterations are actually needed. Thus, we reran all of the evolved programs on the same dataset, this time only using 20 iterations for each of the latent factors. We obtained exactly the same RMSE scores as we did with 100 iterations, which means that the calculation is now at least five times faster. This speedup is very important, especially when dealing with huge datasets that usually reside behind recommender systems. The results obtained by ( ) and ( ) using only 20 iterations are shown in Figure 3 5.4. MovieLens Dataset In the next experiment, we wanted to evolve the update equations using a much larger dataset. We selected the MovieLens 100k dataset for the task while using the same starting symbol as in Section 5.2 Because we switched to a different dataset, we first had to rerun our original baseline algorithm in order to obtain a new baseline RMSE, which now has a value of . Then, we ran over 20 evolutions, each of 150 generations. Table 9 shows the five best (and unique, as some RMSE values were repeated over several runs) evolved equations and their corresponding RMSE values. In summary, we achieved an average RMSE value of and a standard deviation of (with a minimum value of and a maximum value of ). We again used the Wilcoxon ranked sum test to confirm (with a -value of ) that our numbers significantly differ from the baseline value. The most striking difference from the previous experiment is the fact that now all but one of the latent factors are in fact constants. Note that the third item latent factor in Table 9 is actually a constant because $p u k$ is a constant. This is a sign that a bias-variance trade-off as described in [ ] has occurred. Because the MovieLens dataset contains a lot more ratings than the CoMoDa dataset, the information contained in the static biases becomes more prominent. This in turn means that there is almost no reason to calculate the remaining variance, which was automatically detected by GE. Using fixed latent factors also means that the algorithm can be completed within a single iteration, which is a huge improvement in the case of a very large dataset. More likely, the occurrence of constant factors will be used as a detection of over-saturation—when the equations produced by GE become constants, the operator knows that they must either introduce a time-dependent calculation of biases [ ] or otherwise modify the algorithm (e.g., introduce new biases [ 5.5. Book-Crossing Dataset In this experiment, we used the Book-Crossing dataset which is larger than the MovieLens dataset by a factor of 10. In addition, it features a different range of ratings (from 1 to 10 instead of 1 to 5) and covers a different type of item, namely books. Again, we first calculated the baseline RMSE which was now . We ran 20 evolutions of 150 generations each. Table 10 shows the five best evolved equations and their corresponding RMSE values. In summary, we achieved an average RMSE value of and a standard deviation of (with a minimum value of and a maximum value of ). We again used the Wilcoxon ranked sum test to confirm (with a -value of ) that our numbers significantly differ from the baseline value. By observing the results, we can see a similar pattern as in the previous experiment—where one set of latent factors ($p u k$) is set to a static value, while the other changes its value during each iteration according to the current error value of $e u i$. Because the dataset bears some similarities to the previous (MovieLens) in terms of saturation and ratios (ratings per user and per items), this was somehow expected. 5.6. Jester Dataset In the last experiment, we used the Jester dataset. Again, we first calculated the baseline RMSE which was now . One should note that the number differs from those presented in Table 3 due to the fact that both cited articles ([ ]) used a slightly different version of the dataset (either an expanded version or a filtered version). Although this means that we cannot directly compare our results, we can still see that we are in the same value range. We ran 20 evolutions of 150 generations each. Table 11 shows the five best evolved equations and their corresponding RMSE values. In summary, we achieved an average RMSE value of and a standard deviation of (with a minimum value of and a maximum value of ). We again used the Wilcoxon ranked sum test to confirm (with a -value of ) that our numbers significantly differ from the baseline value. Observing this last set of results, we find some similarities with the previous two experiments. One set of latent factors is again set to a static value. An interesting twist, however, is the fact that in this experiment the static value is assigned to the item latent factors ($q i k$) instead of the user latent factors. Upon reviewing the dataset characteristics, we can see that this further confirms the bias-variance trade-off. This is due to the fact that in this dataset the ratio of ratings per item is a hundred times larger than the ratio of ratings per user. The item’s biases therefore carry a lot more weight and thus reduce the importance of its latent factors. Once again, the GE approach detected this and adjusted the equations accordingly. 5.7. Result Summary Table 12 shows a condensed version of our results. It can be seen that we managed to match or improve the performance of the MF algorithm on all four datasets. All the results were also tested using the Wilcoxon rank-sum test and, for each case, the test statistic was lower than the selected significance value of $α = 0.05$ which confirms that our results were significantly different (better) than those of the baseline approach. 6. Conclusions We used GE to successfully optimize the latent factors update equations of the MF algorithm on four diverse datasets. The approach works in an autonomous way which requires only the information about the range of ratings (e.g., 1 to 5, −10 to 10, etc.) and no additional domain knowledge. Such an approach is therefore friendly to any non-expert user who wants to use the MF algorithm on their database and does not have the knowledge and resources for a lengthy optimization study. In the first part of the research, we limited the optimization process only to the parameter values, which already produced statistically significantly better RMSE values using 10-fold validation. After using the GE’s full potential to produce latent factor update equations, we observed an even better increase in the algorithm’s accuracy. Apart from even better RMSE values, this modification accelerated the basic MF algorithm for more than five times. We then switched to three larger datasets that contained different item types and repeated our experiments. The results showed that GE almost exclusively assigned constant values to either user or item latent factors. It is outstanding how GE is able to gradually change—depending on the dataset size—the nature of the update equations from the classical stochastic gradient descent form, through a modified form where user latent factors are constants, to a form where both latent factors are constants. This way, GE adapts to the degree of a bias-variance trade-off present in a dataset and is able to dynamically warn about over-saturation. A great potential of the usage of GE to support a MF-factorization-based recommender system lies in the fact that GE can be used as a black box to aid or even replace an administrator in initial as well as on-the-fly adjustments of a recommender system. Apart from that, GE can be used to generate warnings when the system becomes over-saturated and requires an administrator’s intervention. We believe that our results make a valuable contribution to the emerging field of employing evolutionary computing techniques in the development of recommender systems [ The presented approach therefore offers a nice quality-of-life upgrade to the existing MF algorithm but still has a few kinks that need to be ironed out. Although we are able to consistently optimize the algorithm and fit it to the selected dataset, the approach does require a lot of computational resources and time. This is not necessarily a drawback because we do not require real-time optimization but only need to run the algorithm once in a while to find the optimal settings. For our future applications, we therefore plan to improve the algorithm’s performance by introducing parallel processing and to export parts of the code to C using the Cython library. In addition, we will also experiment with expanding the scope of our algorithm to optimizing the MF algorithm parameters as well (the number of iterations and latent factor, for example). We believe that this could lead to further improvements and potentially even speed up the MF method itself (by realizing that we need fewer iterations/factors, for example). It would also be interesting to test if we can apply our algorithm to other versions of the MF algorithm (graph-based, non-negative, etc.) and if we can apply our strategy to a completely different recommender system as well. Author Contributions Conceptualization, I.F.; Methodology, M.K. and Á.B.; Software, M.K.; Supervision, Á.B.; Writing—original draft, M.K.; Writing—review & editing, I.F. All authors have read and agreed to the published version of the manuscript. This research received no external funding. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement Not applicable. The authors acknowledge the financial support from the Slovenian Research Agency (research core funding No. P2-0246 ICT4QoL—Information and Communications Technologies for Quality of Life). Conflicts of Interest The authors declare no conflict of interest. 1. Ahanger, G.; Little, T.D.C. Data semantics for improving retrieval performance of digital news video systems. IEEE Trans. Knowl. Data Eng. 2001, 13, 352–360. [Google Scholar] [CrossRef] 2. Uchyigit, G.; Clark, K. An Agent Based Electronic Program Guide. In Proceedings of the 2nd Workshop on Personalization in Future TV, Malaga, Spain, 28 May 2002; pp. 52–61. [Google Scholar] 3. Kurapati, K.; Gutta, S.; Schaffer, D.; Martino, J.; Zimmerman, J. A multi-agent TV recommender. In Proceedings of the UM 2001 workshop Personalization in Future TV, Sonthofen, Germany, 13–17 July 2001. [Google Scholar] 4. Bezerra, B.; de Carvalho, F.; Ramalho, G.; Zucker, J. Speeding up recommender systems with meta-prototypes. In Brazilian Symposium on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2002; pp. 521–528. [Google Scholar] 5. Yuan, J.L.; Yu, Y.; Xiao, X.; Li, X.Y. SVM Based Classification Mapping for User Navigation. Int. J. Distrib. Sens. Netw. 2009, 5, 32. [Google Scholar] [CrossRef] 6. Pogačnik, M. Uporabniku Prilagojeno Iskanje Multimedijskih Vsebin. Ph.D. Thesis, University of Ljubljana, Ljubljana, Slovenia, 2004. [Google Scholar] 7. Hand, D.J.; Mannila, H.; Smyth, P. Principles of Data Mining; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar] 8. Salton, G.; McGill, M.J. Introduction to Modern Information Retrieval; McGraw-Hill, Inc.: New York, NY, USA, 1986. [Google Scholar] 9. Barry Crabtree, I.; Soltysiak, S.J. Identifying and tracking changing interests. Int. J. Digit. Libr. 1998, 2, 38–53. [Google Scholar] [CrossRef] 10. Mirkovic, J.; Cvetkovic, D.; Tomca, N.; Cveticanin, S.; Slijepcevic, S.; Obradovic, V.; Mrkic, M.; Cakulev, I.; Kraus, L.; Milutinovic, V. Genetic Algorithms for Intelligent Internet search: A Survey and a Package for Experimenting with Various Locality Types. IEEE TCCA Newsl. 1999, pp. 118–119. Available online: https://scholar.google.co.jp/scholar?q= Genetic+algorithms+for+intelligent+internet+search:+A+survey+and+a+++package+for+experimenting+with+various+locality+types&hl=zh-CN&as_sdt=0&as_vis=1&oi=scholart (accessed on 1 February 2022). 11. Mladenic, D. Text-learning and related intelligent agents: A survey. IEEE Intell. Syst. 1999, 14, 44–54. [Google Scholar] [CrossRef] [Green Version] 12. Malone, T.; Grant, K.; Turbak, F.; Brobst, S.; Cohen, M. Intelligent Information Sharing Systems. Commun. ACM 1987, 30, 390–402. [Google Scholar] [CrossRef] [Green Version] 13. Buczak, A.L.; Zimmerman, J.; Kurapati, K. Personalization: Improving Ease-of-Use, trust and Accuracy of a TV Show Recommender. In Proceedings of the 2nd Workshop on Personalization in Future TV, Malaga, Spain, 28 May 2002. [Google Scholar] 14. Difino, A.; Negro, B.; Chiarotto, A. A Multi-Agent System for a Personalized Electronic Programme Guide. In Proceedings of the 2nd Workshop on Personalization in Future TV, Malaga, Spain, 28 May 2002. [Google Scholar] 15. Guna, J.; Stojmenova, E.; Kos, A.; Pogačnik, M. The TV-WEB project—Combining internet and television—Lessons learnt from the user experience studies. Multimed. Tools Appl. 2017, 76, 20377–20408. [Google Scholar] [CrossRef] 16. Kunaver, M.; Požrl, T. Diversity in Recommender Systems A Survey. Knowl. Based Syst. 2017, 123, 154–162. [Google Scholar] [CrossRef] 17. Odic, A.; Tkalcic, M.; Tasic, J.F.; Kosir, A. Predicting and Detecting the Relevant Contextual Information in a Movie-Recommender System. Interact. Comput. 2013, 25, 74–90. [Google Scholar] [ CrossRef] [Green Version] 18. Rodriguez, M.; Posse, C.; Zhang, E. Multiple Objective Optimization in Recommender Systems. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys’12, Dublin, Ireland, 9–13 September 2021; ACM: New York, NY, USA, 2012; pp. 11–18. [Google Scholar] [CrossRef] 19. Koren, Y. Collaborative Filtering with Temporal Dynamics. Commun. ACM 2010, 53, 89–97. [Google Scholar] [CrossRef] 20. Koren, Y.; Bell, R.; Volinsky, C. Matrix Factorization Techniques for Recommender Systems. Computer 2009, 42, 30–37. [Google Scholar] [CrossRef] 21. Hug, N. Surprise, a Python Library for Recommender Systems. 2017. Available online: http://surpriselib.com (accessed on 1 March 2022). 22. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar] 23. Mnih, A.; Salakhutdinov, R.R. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2008; pp. 1257–1264. [Google Scholar] 24. Rosenthal, E. Explicit Matrix Factorization: ALS, SGD, and All That Jazz. 2017. Available online: https://blog.insightdatascience.com/ explicit-matrix-factorization-als-sgd-and-all-that-jazz-b00e4d9b21ea (accessed on 19 March 2018). 25. Yu, H.F.; Hsieh, C.J.; Si, S.; Dhillon, I.S. Parallel matrix factorization for recommender systems. Knowl. Inf. Syst. 2014, 41, 793–819. [Google Scholar] [CrossRef] 26. Horváth, T.; Carvalho, A.C. Evolutionary Computing in Recommender Systems: A Review of Recent Research. Nat. Comput. 2017, 16, 441–462. [Google Scholar] [CrossRef] 27. Salehi, M.; Kmalabadi, I.N. A Hybrid Attribute-based Recommender System for E-learning Material Recommendation. IERI Procedia 2012, 2, 565–570. [Google Scholar] [CrossRef] [Green Version] 28. Zandi Navgaran, D.; Moradi, P.; Akhlaghian, F. Evolutionary based matrix factorization method for collaborative filtering systems. In Proceedings of the 2013 21st Iranian Conference on Electrical Engineering (ICEE), Mashhad, Iran, 14–16 May 2013; pp. 1–5. [Google Scholar] 29. Hu, L.; Cao, J.; Xu, G.; Cao, L.; Gu, Z.; Zhu, C. Personalized Recommendation via Cross-domain Triadic Factorization. In Proceedings of the 22nd International Conference on World Wide Web, WWW’13, Rio de Janeiro, Brazil, 13–17 May 2013; ACM: New York, NY, USA, 2013; pp. 595–606. [Google Scholar] [CrossRef] 30. Balcar, S. Preference Learning by Matrix Factorization on Island Models. In Proceedings of the 18th Conference Information Technologies—Applications and Theory (ITAT 2018), Hotel Plejsy, Slovakia, 21–25 September 2018; Volume 2203, pp. 146–151. [Google Scholar] 31. Rezaei, M.; Boostani, R. Using the genetic algorithm to enhance nonnegative matrix factorization initialization. Expert Syst. 2014, 31, 213–219. [Google Scholar] [CrossRef] 32. Lara-Cabrera, R.; Gonzalez-Prieto, Á.; Ortega, F.; Bobadilla, J. Evolving Matrix-Factorization-Based Collaborative Filtering Using Genetic Programming. Appl. Sci. 2020, 10, 675. [Google Scholar] [CrossRef] [Green Version] 33. O’Neil, M.; Ryan, C. Grammatical Evolution. In Grammatical Evolution: Evolutionary Automatic Programming in an Arbitrary Language; Springer: Boston, MA, USA, 2003; pp. 33–47. [Google Scholar] [ 34. Bokde, D.K.; Girase, S.; Mukhopadhyay, D. An Approach to a University Recommendation by Multi-criteria Collaborative Filtering and Dimensionality Reduction Techniques. In Proceedings of the 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Indore, India, 21–23 December 2015; pp. 231–236. [Google Scholar] [CrossRef] 35. Košir, A.; Odić, A.; Kunaver, M.; Tkalčič, M.; Tasič, J.F. Database for contextual personalization. Elektroteh. Vestn. 2011, 78, 270–274. [Google Scholar] 36. Breese, J.S.; Heckerman, D.; Kadie, C. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, Madison Wisconsin, 24–26 July 1998; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1998; pp. 43–52. [Google Scholar] 37. Herlocker, J.L.; Konstan, J.A.; Borchers, A.; Riedl, J. An Algorithmic Framework for Performing Collaborative Filtering. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’99, Berkeley, CA, USA, 15–19 August 1999; ACM: New York, NY, USA, 1999; pp. 230–237. [Google Scholar] [CrossRef] 38. Shardanand, U.; Maes, P. Social information filtering: Algorithms for automating ‘word of mouth’. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 7–11 May 1995; ACM Press/Addison-Wesley Publishing Co., Ltd.: Boston, MA, USA, 1995; pp. 210–217. [Google Scholar] 39. Bao, Z.; Xia, H. Movie Rating Estimation and Recommendation, CS229 Project; Stanford University: Stanford, CA, USA, 2012; pp. 1–4. [Google Scholar] 40. Chandrashekhar, H.; Bhasker, B. Personalized recommender system using entropy based collaborative filtering technique. J. Electron. Commer. Res. 2011, 12, 214. [Google Scholar] 41. Ranjbar, M.; Moradi, P.; Azami, M.; Jalili, M. An imputation-based matrix factorization method for improving accuracy of collaborative filtering systems. Eng. Appl. Artif. Intell. 2015, 46, 58–66. [Google Scholar] [CrossRef] 42. Kunaver, M.; Fajfar, I. Grammatical Evolution in a Matrix Factorization Recommender System. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 12–16 June 2016; Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9692, pp. 392–400. [ Google Scholar] [CrossRef] 43. Chen, H.H. Weighted-SVD: Matrix Factorization with Weights on the Latent Factors. arXiv 2017, arXiv:1710.00482. [Google Scholar] 44. Yu, T.; Mengshoel, O.J.; Jude, A.; Feller, E.; Forgeat, J.; Radia, N. Incremental learning for matrix factorization in recommender systems. In Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 5–8 December 2016; pp. 1056–1063. [Google Scholar] 45. Tashkandi, A.; Wiese, L.; Baum, M. Comparative Evaluation for Recommender Systems for Book Recommendations. In BTW (Workshops); Mitschang, B., Ritter, N., Schwarz, H., Klettke, M., Thor, A., Kopp, O., Wieland, M., Eds.; Hair Styling & Makeup Servi: Hong Kong, China, 2017; Volume P-266, pp. 291–300. Available online: https://www.broadwayteachinggroup.com/about-btw (accessed on 1 February 2021). 46. Ryan, C.; Azad, R.M.A. Sensible Initialisation in Chorus. In Proceedings of the European Conference on Genetic Programming, Essex, UK, 14–16 April 2003; Ryan, C., Soule, T., Keijzer, M., Tsang, E.P.K., Poli, R., Costa, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2610, pp. 394–403. [Google Scholar] [CrossRef] 47. Koza, J. Genetic Programming: On the Programming of Computers by Means of Natural Selection; The MIT Press: Cambridge, MA, USA, 1992. [Google Scholar] 48. Kunaver, M.; Žic, M.; Fajfar, I.; Tuma, T.; Bűrmen, Á.; Subotić, V.; Rojec, Ž. Synthesizing Electrically Equivalent Circuits for Use in Electrochemical Impedance Spectroscopy through Grammatical Evolution. Processes 2021, 9, 1859. [Google Scholar] [CrossRef] 49. Kunaver, M. Grammatical evolution-based analog circuit synthesis. Inf. MIDEM 2019, 49, 229–240. [Google Scholar] 50. Poikolainen, I.; Neri, F.; Caraffini, F. Cluster-Based Population Initialization for differential evolution frameworks. Inf. Sci. 2015, 297, 216–235. [Google Scholar] [CrossRef] [Green Version] 51. Harper, R.; Blair, A. A Structure Preserving Crossover In Grammatical Evolution. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 3, pp. 2537–2544. [Google Scholar] 52. Byrne, J.; O’Neill, M.; Brabazon, A. Structural and Nodal Mutation in Grammatical Evolution. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, GECCO’09, Montreal, QC, Canada, 8–12 July 2009; ACM: New York, NY, USA, 2009; pp. 1881–1882. [Google Scholar] [CrossRef] 53. Helmuth, T.; Spector, L.; Martin, B. Size-based Tournaments for Node Selection. In Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation GECCO’11, Dublin, Ireland, 12–16 July 2011; pp. 799–802. [Google Scholar] 54. Luke, S.; Panait, L. A Comparison of Bloat Control Methods for Genetic Programming. Evol. Comput. 2006, 14, 309–344. [Google Scholar] [CrossRef] 55. Poli, R.; Langdon, W.; McPhee, N. A Field Guide to Genetic Programming; Lulu Enterprises UK Ltd.: Cardiff, Glamorgan, UK, 2008. [Google Scholar] 56. GroupLens. MovieLens. 2017. Available online: https://grouplens.org/blog/2017/07/ (accessed on 19 March 2018). 57. Ziegler, C.N.; McNee, S.M.; Konstan, J.A.; Lausen, G. Improving recommendation lists through topic diversification. In Proceedings of the 14th International Conference on World Wide Web (WWW), Seoul, Korea, 7–11 April 2005. [Google Scholar] 58. Goldberg, K.; Roeder, T.; Gupta, D.; Perkins, C. Eigentaste: A Constant Time Collaborative Filtering Algorithm. Inf. Retr. 2001, 4, 133–151. [Google Scholar] [CrossRef] 59. Aggarwal, C.C. Recommender Systems—The Textbook; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1–498. [Google Scholar] Figure 1. A comparison of the convergence of the baseline MF algorithm on the CoMoDa dataset using the original parameters from [ ] (blue) and GE optimized parameters (orange). Figure 2. Convergence of the RMSE value using Equations ( ) (orange) and ( ) (green) compared to the results obtained with optimized parameters (blue) using the CoMoDa dataset. Figure 3. Convergence of the RMSE value using the same equations as in Figure 2 but only 20 iterations. Parameter Value N (number of iterations) 100 f (number of latent factors) 7 $γ p$ $0.03$ $γ q$ $0.03$ $λ$ $0.3$ $( p u k ) initial$ $0.03$ or random $( q i k ) initial$ $0.03$ or random Dataset p-Value CoMoDa $5 × 10 − 6$ MovieLens 100k $0.004$ Jester $0.002$ Book-Crossing $0.005$ Dataset RMSE Reference CoMoDa 1.27 [42] MovieLens 100k $0.98$ [39] $1.00$ [40] $1.00$ [30] $1.20$ [43] $0.93$ [32] $0.96$ [41] Jester $5.30$ [44] $4.50$ [41] Book-Crossing $1.94$ [45] $1.95$ [45] $1.92$ [45] Table 4. Using the chromosome ${ 14 , 126 , 200 , 20 , 75 , 12 , 215 , 178 , 48 , 88 , 78 , 240 , 137 , 160 , 190 , 98 , 247 , 11 }$ to create Equation ( Codon Non-Terminal Number of Rules/ Selected Resulting Rule 14 <expr> 4/2 <const> 126 <sign> 2/0 + 200 <n> 10/0 0 20 <n> 10/0 0 75 <n> 10/5 5 12 <expr> 4/0 <expr><binOper><expr> 215 <expr> 4/3 <var> 178 <var> 3/1 $q i k$ 48 <binOper> 4/0 + 88 <expr> 4/0 <expr><binOper><expr> 78 <expr> 4/2 <const> 240 <sign> 2/0 + 137 <n> 10/7 7 160 <n> 10/0 0 190 <n> 10/0 0 98 <binOper> 4/2 * 247 <expr> 4/3 <var> 11 <var> 3/2 $e u i$ Objective Find update Equations (6) and (7) to be used in Algorithm 2 to obtain the minimum RMSE$a$ Initial chromosome length 300 Grammar primitives $e u i$, $p u k$ , $q i k$, +, −, *, /, , log() Grammar See Section 3.1 Initial population Ramped half-and-half as presented in [47]. Population size 50 Fitness An average RMSE value obtained from 10-fold cross-validation Crossover probability 20% Mutation probability 10% Probability of mutation/crossover occurring at a terminal 10 % Derivation tree depth limit 12 Max num of nodes 280 Termination After 50 or 150 generations$a$ Value Dataset CoMoDa MovieLens Book-Crossing Jester Users 232 5627 278,858 24,983 Items 3141 3084 271,379 100 Ratings 5639 100,000 1,149,780 641,850 Ratings type 1–5 1–5 1–10 $− 10$–10 Average rating $3.8$ $3.6$ $7.6$ $1.2$ Ratings/user 2 16 6 26 Ratings/item 23 9 2 6419 RMSE Program Parameters $λ p$ $γ p$ $λ q$ $γ q$ $1.271$ $0.01$ $0.02$ $0.07$ $0.01$ $1.271$ $0.01$ $0.02$ $0.02$ $0.01$ $1.271$ $0.01$ $0.02$ $0.01$ $0.01$ $1.271$ $0.01$ $0.01$ $0.07$ $0.01$ $1.271$ $0.01$ $0.01$ $0.07$ $0.01$ $1.273$ $0.03$ $0.02$ $0.04$ $0.01$ $1.274$ $0.04$ $0.01$ $0.02$ $0.01$ $1.274$ $0.01$ $0.03$ $0.03$ $0.01$ $1.274$ $0.02$ $0.01$ $0.04$ $0.02$ $1.273$ $0.02$ $0.01$ $0.07$ $0.02$ RMSE User Latent Factor Equation Item Latent Factor Equation $1.148$ $p u k = 0.02$ $q i k = 10 e u i$ $1.176$ $p u k = 0.05$ $q i k = q i k + 2 e u i − 0.17$ $1.176$ $p u k = 0.05$ $q i k = q i k + 9 e u i − 0.04$ $1.178$ $p u k = 0.05$ $q i k = 0.75 q i k + 2 e u i$ $1.199$ $p u k = p u k$ $q i k = e u i / 0.08 − 0.09$ $1.218$ $p u k = 0.07$ $q i k = 3 e u i + 0.05$ $1.221$ $p u k = 0.07$ $q i k = 2 e u i + 0.05$ $1.221$ $p u k = 0.07$ $q i k = 2 e u i + 0.25$ $1.236$ $p u k = 0.08$ $q i k = e u i$ $1.257$ $p u k = 0.05$ $q i k = q i k + 7 e u i$ RMSE User Latent Factor Equation Item Latent Factor Equation $1.029$ $p u k = 0.05$ $q i k = 0.14$ $1.031$ $p u k = 0.04$ $q i k = 0.12$ $1.032$ $p u k = 0.02$ $q i k = 0.0006 / p u k$ $1.032$ $p u k = e u i − 0.1$ $q i k = 0.02$ $1.032$ $p u k = 0.04$ $q i k = 0.08$ RMSE User Latent Factor Equation Item Latent Factor Equation $1.952$ $p u k = 0.03$ $q i k = e u i$ $1.964$ $p u k = 0.06 − 0.08 * p u k$ $q i k = e u i$ $1.965$ $p u k = 0.07$ $q i k = 0.07 + e u i$ $1.965$ $p u k = 0.02$ $q i k = e u i$ $1.967$ $p u k = 0.05$ $q i k = e u i$ RMSE User Latent Factor Equation Item Latent Factor Equation $5.801$ $p u k = e u i$ $q i k = 0.03$ $5.800$ $p u k = 2 * e u i − 7.28$ $q i k = 0.03$ $5.800$ $p u k = 2 * e u i + 0.48 * p u k + 5.309$ $q i k = 0.04$ $5.801$ $p u k = ( 5 * p u k − 6.97 ) * q u k$$+ ( 7 * p u k − 6.35 ) * e u i$ $q i k = 0.02$ $5.801$ $p u k = e u i − 1.045$ $q i k = 0.05$ Value Dataset CoMoDa MovieLens Book-Crossing Jester best RMSE $1.148$ $1.029$ $1.95$ $5.8$ average RMSE $1.204$ $1.031$ $1.97$ $5.8$ st. dev. of RMSE $0.031$ $0.001$ $0.007$ $0.0002$ p-value (for $α = 0.05$) $5 × 10 − 6$ $0.004$ $0.002$ $0.005$ Comparable RMSE $1.27$ [42] $0.98$ [39] $1.94$ [45] $5.3$ [44] $1.00$ [30] $1.95$ [45] $4.5$ [41] Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Kunaver, M.; Bűrmen, Á.; Fajfar, I. Automatic Grammatical Evolution-Based Optimization of Matrix Factorization Algorithm. Mathematics 2022, 10, 1139. https://doi.org/10.3390/math10071139 AMA Style Kunaver M, Bűrmen Á, Fajfar I. Automatic Grammatical Evolution-Based Optimization of Matrix Factorization Algorithm. Mathematics. 2022; 10(7):1139. https://doi.org/10.3390/math10071139 Chicago/Turabian Style Kunaver, Matevž, Árpád Bűrmen, and Iztok Fajfar. 2022. "Automatic Grammatical Evolution-Based Optimization of Matrix Factorization Algorithm" Mathematics 10, no. 7: 1139. https://doi.org/10.3390/ Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2227-7390/10/7/1139","timestamp":"2024-11-08T12:37:44Z","content_type":"text/html","content_length":"579362","record_id":"<urn:uuid:cd499796-f2ae-4985-8193-42b107ab07c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00017.warc.gz"}
Statistics[Sample] - generate random sample Calling Sequence Sample(X, n, opts) Sample(X, m, opts) Sample(X, rng, opts) Sample(X, out, opts) Sample(X, opts) X - algebraic; random variable or distribution n - nonnegative integer; sample size m - list of two nonnegative integers; Matrix dimensions rng - integer range or list of integer ranges; Array dimensions out - float rtable; to be filled with data options - (optional) equations of the form option = value, where option is method, possibly indexed by a name; specify options for sample generation • The Sample command generates a random sample drawn from the distribution given by X. • The first parameter, X, can be a distribution (see Statistics[Distribution]), a random variable, or an algebraic expression involving random variables (see Statistics[RandomVariable]). • In the first calling sequence, the second parameter, n, is the sample size. This calling sequence will return a newly created Vector of length n, filled with the sample values. This calling sequence, or one of the next two, is recommended for all cases where there are no great performance concerns. • In the second calling sequence, the second parameter, m, is a list of two nonnegative integers. This calling sequence will return a newly created Matrix with the specified dimensions, filled with the sample values. • In the third calling sequence, the second parameter, rng, is a range or a list of ranges determining the dimensions of an Array. This Array will be created, filled with the sample values, and • In the fourth calling sequence, the second parameter, out, is an rtable (such as a Vector) that was created beforehand. Upon successful return of the Sample command, out will have been filled with the sample values. out needs to have rectangular storage and the float data type that is consistent with the current settings of Digits and UseHardwareFloats. That is, if either UseHardwareFloats = true, or UseHardwareFloats = deduced and Digits <= evalhf(Digits) (which is the default), then out needs to have datatype = float[8]; in the other case, that is, if either UseHardwareFloats = false, or UseHardwareFloats = deduced and Digits > evalhf(Digits), then out needs to have datatype = sfloat. This can easily be achieved by supplying the option datatype = float to the rtable creation function; this will automatically select the correct data type for the current settings. • In the fourth calling sequence, Sample returns a procedure p, which can subsequently be called to generate samples of X repeatedly. The procedure p accepts a single argument, which can be n, m, rng, or out, and then behaves as if one of the first three calling sequences were called. p does not accept options; any options should be given in the call to Sample itself. • method = name or method = list -- This option can be used to select a method of generating the sample. There are four main choices: method = default, method = custom, method = discrete, and method = envelope. One can supply method-specific options by instead specifying a list, the first element of which is one of the names default, custom, discrete, and envelope, and the other elements are equations; for example, method = [envelope, updates=20, range=0..100]. These method-specific options will be explained below. – method = envelope uses an implementation of acceptance/rejection generation with an adaptive piecewise linear envelope, applicable to continuous distributions. This implementation will only work for distributions where on its support, the PDF is twice differentiable, has a continuous first derivative, and has only finitely many inflection points. There are three valid method-specific options: range, basepoints, and updates. range: The (finite) range over which the piecewise linear envelope is to be defined, and consequently where the samples are to be found. If range = deduce (the default), then Maple takes the range given by the and Quantiles of the distribution, for some small positive value of that depends on the value of Digits. Otherwise, range should be a range of two real numbers, such as range = 0 .. 1. basepoints: The base points are the boundaries between the segments of the piecewise linear envelope, which should include all inflection points of the PDF of the distribution. If basepoints = deduce (the default), then Maple attempts to find all inflection points itself. Otherwise, basepoints should be a list of floating point real numbers which includes all inflection points. updates: The envelope is automatically refined as more numbers are generated; the maximal number of segments is given by this option, which should be a positive integer. The default value is . – method = discrete uses an implementation of the alias method by Walker (see references below), applicable to discrete distributions. Because this method computes and stores the individual probabilities for all possible outcomes within the range (see below), it may be inefficient for distributions with very heavy tails. There is one method-specific option: range. range: The (finite) range of integers for which the probabilities are computed. If the distribution uses the DiscreteValueMap feature (this is the case if the distribution can attain non-integer values), then this describes the range of source values; the map is applied to these integers to obtain the resulting values. – method = custom uses a distribution-specific method. Almost all predefined distributions have a highly efficient custom implementation in external C code. Method-specific options are all ignored. – method = default (which is the default) selects one of the other three methods. For most built-in distributions, it selects method = custom. For other distributions, such as custom-defined ones, the system falls back to either using method = envelope (for continuous distributions) or using method = discrete (for discrete distributions). The method-specific options accepted are the same as for the applicable fallback method, and they are only used in case the system falls back to that generator. • If X is an algebraic expression involving multiple random variables, say and , then one can specify different sample generation methods for and by using options and , where and are sample generation methods that could be validly specified as . If a random variable-specific sample generation method is given only for some of the random variables, the others will use the method given by the option, or default if no such option is present. • When implementing an algorithm that uses a large number of random samples, it can be worthwhile to think about efficiency of the random sample generation. In most cases, the best efficiency is achieved when all samples are generated at once in a preprocessing phase and stored in a Vector (using the first calling sequence, above), and the values are then used one by one in the algorithm. In some cases, however, this is not possible. For example, this might take too much memory (if a very large number of samples is needed), it might be difficult or impossible to predict the number of samples needed, or the parameters of the random variable might change during the algorithm. In the first two cases, the recommended strategy is to use the fourth calling sequence to create a procedure p, then use p to create a Vector that can hold a large number of samples (using, say, ), using the elements of v one by one, and calling to refill v when the samples run out. If the parameters of the random variable keep changing, then one can define the random variable with parameters that are unassigned initially, use the fourth calling sequence to create a procedure p, then assign values to the parameters afterwards. An example is provided below. • For some of the discrete distributions, the method selected by default is not method = custom but method = discrete. For these distributions, this method is faster when generating more than about 1000 random numbers. If you need to generate fewer random numbers, you can select method = custom by including that option explicitly. • The rng and out parameters were introduced in Maple 15. • The method option was introduced in Maple 15. • For more information on Maple 15 changes, see Updates in Maple 15. • The m parameter was introduced in Maple 16. • The method option was updated in Maple 16. • For more information on Maple 16 changes, see Updates in Maple 16. Straightforward sampling of a distribution. We can also sample an expression involving two random variables. Sampling of a custom-defined distribution. If we supply a list of ranges instead of a number, we get an Array. With a list of two numbers, we get a Matrix. We can use envelope rejection sampling to restrict and to a certain range. Or to restrict only to a certain range. We can refill with different samples as follows. Another option is to use a procedure. Sampling of a custom-defined discrete distribution with non-integer values. This distribution attains the value with probability for positive . Finally, here is a somewhat longer example, where we want to generate exponentially distributed numbers; the rate parameter starts as being , but for each subsequent value it is the square root of the previous sample value. In order to be able to use a procedure (important for efficiency), we need to make sure that is not defined when we create the procedure, otherwise it will only generate samples for the value that had at the time of definition. (If has a value, it can be undefined by executing lambda := 'lambda';, but since we have not used yet, that should not be necessary in this If we now compute a sample of , then Maple will complain, because is unassigned: Error, (in p) unable to evaluate lambda to floating-point Instead, we assign to and start an iteration. We now create a point plot where pairs of subsequent samples are the horizontal and vertical coordinates. See Also Statistics, Statistics[Computation], Statistics[Distributions], Statistics[RandomVariables] Stuart, Alan, and Ord, Keith. Kendall's Advanced Theory of Statistics. 6th ed. London: Edward Arnold, 1998. Vol. 1: Distribution Theory. Walker, Alastair J. New Fast Method for Generating Discrete Random Numbers with Arbitrary Frequency Distributions, Electronic Letters, 10, 127-128. Walker, Alastair J. An Efficient Method for Generating Discrete Random Variables with General Distributions, ACM Trans. Math. Software, 3, 253-256. Was this information helpful? Please add your Comment (Optional) E-mail Address (Optional) What is This question helps us to combat spam
{"url":"https://www.maplesoft.com/support/helpjp/addons/view.aspx?path=Statistics%5BSample%5D&L=E","timestamp":"2024-11-09T03:44:10Z","content_type":"application/xhtml+xml","content_length":"320981","record_id":"<urn:uuid:fe8e3ce2-9270-498b-aa71-86c1bf335f72>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00381.warc.gz"}
Latest recommendations Id Title * ▲ Authors * Abstract * Picture Thematic fields Recommender Reviewers Submission * * date 13 Aug 2024 Disclosing effects of Boolean network reduction on dynamical properties and control strategies Recommended by Claudine Chaouiya based on reviews by Tomas Gedeon and David Safranek Boolean networks stem from seminal work by M. Sugita [1], S. Kauffman [2] and R. Thomas [3] over half a century ago. Since then, a very active field of research has been developed, leading to theoretical advances accompanied by a wealth of work on modelling genetic and signalling networks involved in a wide range of cellular processes. Boolean networks provide a successful formalism for the mathematical modelling of biological processes, with a qualitative abstraction particularly well adapted to handle the modelling of processes for which precise, quantitative data is barely available. Nevertheless, these abstract models reveal fundamental dynamical properties, such as the existence and reachability of attractors, which embody stable cellular responses (e.g. differentiated states). Analysing these properties still faces serious computational complexity. Reduction of model size was proposed as a mean to cope with this issue. Furthermore, to enhance the capacity of Boolean networks to produce relevant predictions, formal methods have been developed to systematically identify control strategies enforcing desired In their paper, E. Tonello and L. Paulevé [4] assess the most popular reduction that consists in eliminating a model component. Considering three typical update schemes (synchronous, asynchronous and general asynchronous updates), they thoroughly study the effects of the reduction on attractors, minimal trap spaces (minimal subspaces from which the model dynamics cannot leave), and on phenotype controls (interventions which guarantee that the dynamics ends in a phenotype <p>We investigate how defined by specific component values). Because they embody potential elimination of variables can behaviours of the biological process under study, these are all Phenotype affect the asymptotic dynamics properties of great interest for a modeller. control and and phenotype control of Dynamical elimination of Elisa Tonello, Boolean networks. In systems, Claudine 2024-06-05 View The authors show that eliminating a component can significantly affect variables in Loïc Paulevé particular, we look at the Systems biology Chaouiya 10:12:39 some dynamical properties and may turn a control strategy ineffective. Boolean impact on minimal trap spaces, The different update schemes, targets of phenotype control and control networks and identify a structural strategies are carefully handled with useful supporting examples. condition that guarantees their pre... Whether the component eliminated does not share any of its regulators with its targets is shown to impact the preservation of minimal trap space. Since, in practice, model reduction amounts to eliminating several components, it would have been interesting to further explore this type of structural constraints, e.g. members of acyclical pathways or of circuits. Overall, E. Tonello and L. Paulevé’s contribution underlines the need for caution when defining a regulatory network and characterises the consequences on critical model properties when discarding a component [1] Motoyosi Sugita (1963) Functional analysis of chemical systems in vivo using a logical circuit equivalent. II. The idea of a molecular automation. Journal of Theoretical Biology, 4, 179–92. https://doi.org [2] Stuart Kauffman (1969) Metabolic stability and epigenesis in randomly constructed genetic nets. Journal of Theoretical Biology, 22, 437–67. https://doi.org/10.1016/0022-5193(69)90015-0 [3] René Thomas (1973) Boolean formalization of genetic control circuits. Journal of Theoretical Biology, 42, 563–85. https://doi.org/ [4] Elisa Tonello, Loïc Paulevé (2024) Phenotype control and elimination of variables in Boolean networks. arXiv, ver.2 peer-reviewed and recommended by PCI Math Comp Biol https://arxiv.org/ 02 May 2023 Estimates of Effective Population Size in Subdivided Populations Recommended by Alan Rogers based on reviews by 2 anonymous reviewers We often use genetic data from a single site, or even a single individual, to estimate the history of effective population size, Ne, over time scales in excess of a million years. Mazet and Noûs [2] emphasize that such estimates may not mean what they seem to mean. The ups and downs of Ne may reflect changes in gene flow or selection, rather than changes in census population size. In fact, gene flow may cause Ne to decline even if the rate of gene flow has remained Consider for example the estimates of archaic population size in Fig. 1, which show an apparent decline in population size between roughly 700 kya and 300 kya. It is tempting to interpret this as evidence of a declining number of individuals, but that is not the only plausible Each of these estimates is based on the genome of a single diploid individual. As we trace the ancestry of that individual backwards into the past, the ancestors are likely to remain in the same locale for at least a generation or two. Being neighbors, there’s a chance they will mate. This implies that in the recent past, the ancestors of a sampled individual lived in a population of small effective size. As we continue backwards into the past, there is more and more time for the ancestors to move around on the landscape. The farther back we go, the less likely they are to be neighbors, and the less likely they are to mate. In this more remote past, the ancestors of our sample lived in a population of larger effective size, even if neither the number of individuals nor the rate of gene flow has changed. For awhile then, Ne should increase as we move backwards into the past. This process does not continue forever, because eventually the ancestors will be randomly distributed across the population as a whole. We therefore expect Ne to increase towards an asymptote, which represents the effective size of the entire population. This simple story gets more complex if there is change in either the census size or the rate of gene flow. Mazet and Noûs [2] have shown that one can mimic real estimates of population history using models in which the rate of gene flow varies, but census size does not. This implies that the curves in Fig. 1 are ambiguous. The observed changes in Ne could reflect changes in census size, gene flow, or both. <p style="text-align: justify; Population ">We propose in this article a For this reason, Mazet and Noûs [2] would like to replace the term genetics: brief description of the work, Genetics and “effective population size” with an alternative, the “inverse coalescence Olivier Mazet, over almost a decade, population Joseph 2022-07-11 instantaneous coalescent rate,” or IIRC. I don’t share this rate and Camille Noûs resulting from a collaboration Genetics, Alan Rogers Lachance, 14:03:04 View preference, because the same critique could be made of all definitions demographic between mathematicians and Probability and Anonymous of Ne. For example, Wright [3, p. 108] showed in 1931 that Ne varies parameters biologists from four different statistics in response to the sex ratio, and this implies that changes in Ne need inference research laboratories, not involve any change in census size. This is also true when identifiable as the c... populations are geographically structured, as Mazet and Noûs [2] have emphasized, but this does not seem to require a new vocabulary. Figure 1: PSMC estimates of the history of population size based on three archaic genomes: two Neanderthals and a Denisovan [1]. Mazet and Noûs [2] also show that estimates of Ne can vary in response to selection. It is not hard to see why such an effect might exist. In genomic regions affected by directional or purifying selection, heterozygosity is low, and common ancestors tend to be recent. Such regions may contribute to small estimates of recent Ne. In regions under balancing selection, heterozygosity is high, and common ancestors tend to be ancient. Such regions may contribute to large estimates of ancient Ne. The magnitude of this effect presumably depends on the fraction of the genome under selection and the rate of In summary, this article describes several processes that can affect estimates of the history of effective population size. This makes existing estimates ambiguous. For example, should we interpret Fig. 1 as evidence of a declining number of archaic individuals, or in terms of gene flow among archaic subpopulations? But these questions also present research opportunities. If the observed decline reflects gene flow, what does this imply about the geographic structure of archaic populations? Can we resolve the ambiguity by integrating samples from different locales, or using archaeological estimates of population density or interregional trade? [1] Fabrizio Mafessoni et al. “A high-coverage Neandertal genome from Chagyrskaya Cave”. Proceedings of the National Academy of Sciences, USA 117.26 (2020), pp. 15132–15136. https://doi.org/10.1073/ [2] Olivier Mazet and Camille Noûs. “Population genetics: coalescence rate and demographic parameters inference”. arXiv, ver. 2 peer-reviewed and recommended by Peer Community In Mathematical and Computational Biology (2023). https://doi.org/10.48550/ [3] Sewall Wright. “Evolution in mendelian populations”. Genetics 16 (1931), pp. 97–159. https://doi.org/10.48550/ARXIV.2207.0211110.1093/ 10 Apr 2024 Faster method for estimating the openness of species Recommended by Leo van Iersel based on reviews by Guillaume Marçais, Abiola Akinnubi and 1 anonymous reviewer When sequencing more and more genomes of a species (or a group of closely related species), a natural question to ask is how quickly the total number of distinct sequences grows as a function of the total number of sequenced genomes. A similar question can be asked about the number of distinct genes or the number of distinct k-mers (length-k The paper “Revisiting pangenome openness with k-mers” [1] describes a general mathematical framework that can be applied to each of these versions. A genome is abstractly seen as a set of “items” and a species as a set of genomes. The question then is how fast the function f_tot, the average size of the union of m genomes of the species, grows as a function of m. Basically, the faster the growth the more “open” the species is. More precisely, the function f_tot can be described by a power law plus a constant and the openness $\alpha$ refers to one minus the exponent $\gamma$ of the power law. With these definitions one can make a distinction between “open” genomes ($\alpha < 1$) where the total size f_tot tends to infinity and “closed” genomes ($\alpha > 1$) where the total size f_tot tends to a constant. However, performing this classification is difficult in practice and the relevance of such a disjunction is debatable. Hence, the authors of the current paper focus on estimating the openness <p style="text-align: justify; parameter $\alpha$. ">Pangenomics is the study of Revisiting related genomes collectively, Guillaume The definition of openness given in the paper was suggested by one of pangenome Luca Parmigiani, usually from the same species Combinatorics, Leo van Marçais, 2022-11-22 the reviewers and fixes a problem with a previous definition (in which openness with Roland Wittler, or closely related taxa. Genomics and Iersel Yadong 14:48:18 View it was mathematically impossible for a pangenome to be closed). k-mers Jens Stoye Originally, pangenomes were Transcriptomics Zhang defined for bacterial species. While the framework is very general, the authors apply it by using k After the concept was extended -mers to estimate pangenome openness. This is an innovative approach to eukaryoti... because, even though k-mers are used frequently in pangenomics, they had not been used before to estimate openness. One major advantage of using k-mers is that it can be applied directly to data consisting of sequencing reads, without the need for preprocessing. In addition, k -mers also cover non-coding regions of the genomes which is in particular relevant when studying openness of eukaryotic species. The method is evaluated on 12 bacterial pangenomes with impressive results. The estimated openness is very close to the results of several gene-based tools (Roary, Pantools and BPGA) but the running time is much better: it is one to three orders of magnitude faster than the other methods. Another appealing aspect of the method is that it computes the function f_tot exactly using a method that was known in the ecology literature but had not been noticed in the pangenomics field. The openness is then estimated by fitting a power law function. Finally, the paper [1] offers a clear presentation of the problem, the approach and the results, with nice examples using real data. [1] Parmigiani L., Wittler, R. and Stoye, J. (2024) "Revisiting pangenome openness with k-mers". bioRxiv, ver. 4 peer-reviewed and recommended by Peer Community In Mathematical and Computational Biology. https://doi.org/10.1101/2022.11.15.516472 07 Dec 2021 The emergence of a birth-dependent mutation rate in asexuals: causes and consequences Florian Patout, Raphaël Forien, Matthieu Alfaro, Julien Papaïx, Lionel Roques https://doi.org/10.1101/2021.06.11.448026 A new perspective in modeling mutation rate for phenotypically structured populations Recommended by Yuan Lou based on reviews by Hirohisa Kishino and 1 anonymous reviewer In standard mutation-selection models for describing the dynamics of phenotypically structured populations, it is often assumed that the mutation rate is constant across the phenotypes. In particular, this assumption leads to a constant diffusion coefficient for diffusion approximation models (Perthame, 2007 and references therein). Patout et al (2021) study the dependence of the mutation rate on the birth rate, by introducing some diffusion approximations at the population level, derived from the large population limit of a stochastic, individual-based model. The reaction-diffusion model in this article is of the “cross-diffusion” type: The form of “cross-diffusion” also appeared in ecological literature as a type of biased movement behaviors for organisms (Shigesada et al., 1979). The key underlying assumption for “cross-diffusion” is that the transition probability at the individual level depends solely upon the condition at the departure point. Patout et al (2021) envision that a higher birth rate yields more mutations per unit of time. One of their motivations is that during cancer development, the mutation rates of <p style="text-align: justify; cancer cells at the population level could be correlated with The emergence ">In unicellular organisms Dynamical reproduction success. of a Florian Patout, such as bacteria and in most systems, birth-dependent Raphaël Forien, viruses, mutations mainly Evolutionary Anonymous, The reaction-diffusion approximation model derived in this article mutation rate Matthieu Alfaro, occur during reproduction. Biology, Yuan Lou Hirohisa 2021-06-12 View illustrates several interesting phenomena: For the time evolution in asexuals: Julien Papaïx, Thus, genotypes with a high Probability and Kishino 13:59:45 situation, their model predicts different solution trajectories under causes and Lionel Roques birth rate should have a statistics, various assumptions on the fitness function, e.g. the trajectory could consequences higher mutation rate. However, Stochastic initially move towards the birth optimum but eventually end up at the standard models of asexu... dynamics survival optimum. Their model also predicts that the mean fitness could be flat for some period of time, which might provide another alternative to explain observed data. At the steady-state level, their model suggests that the populations are more concentrated around the survival optimum, which agrees with the evolution of the time-dependent solution trajectories. Perhaps one of the most interesting contributions of the study of Patout et al (2021) is to give us a new perspective to model the mutation rate in phenotypically structured populations and subsequently, and to help us better understand the connection between mutation and selection. More broadly, this article offers some new insights into the evolutionary dynamics of phenotypically structured populations, along with potential implications in empirical studies. Perthame B (2007) Transport Equations in Biology Frontiers in Mathematics. Birkhäuser, Basel. https://doi.org/10.1007/ Patout F, Forien R, Alfaro M, Papaïx J, Roques L (2021) The emergence of a birth-dependent mutation rate in asexuals: causes and consequences. bioRxiv, 2021.06.11.448026, ver. 3 peer-reviewed and recommended by Peer Community in Mathematical and Computational Biology. https://doi.org/10.1101/2021.06.11.448026 Shigesada N, Kawasaki K, Teramoto E (1979) Spatial segregation of interacting species. Journal of Theoretical Biology, 79, 83–99. https: 07 Sep 2021 How mammals adapt their breath to body activity – and how this depends on body size Recommended by Wolfram Liebermeister based on reviews by Elad Noor, Oliver Ebenhöh, Stefan Schuster and Megumi Inoue How fast and how deep do animals breathe, and how does this depend on how active they are? To answer this question, one needs to dig deeply into how breathing works and what biophysical processes it involves. And one needs to think about body size. It is impressive how nature adapts the same body plan – e.g. the skeletal structure of mammals – to various shapes and sizes. From mice to whales, also the functioning of most organs remains the same; they are just differently scaled. Scaling does not just mean “making bigger or smaller”. As already noted by Galilei, body shapes change as they are adapted to body dimensions, and the same holds for physiological variables. Many such variables, for instance, heartbeat rates, follow scaling laws of the form y~x^a, where x denotes body mass and the exponent a is typically a multiple of ¼ [1]. These unusual exponents – instead of multiples of ⅓, which would be expected from simple geometrical scaling – are why these laws are called “allometric”. Kleiber’s law for metabolic rates, with a scaling exponent of ¾, is a classic example [2]. As shown by G. West, allometric laws can be explained through a few simple steps [1]. In his models, he focused on network-like organs such as the vascular system and assumed that these systems show a self-similar structure, with a fixed minimal unit (for instance, capillaries) but varying numbers of hierarchy levels depending on body size. To determine the flow through such networks, he employed biophysical models and optimality principles (for instance, assuming that oxygen must be transported at a minimal mechanical effort), and showed that the solutions – and the physiological variables – respect the known scaling relations. The paper “The origin of the allometric scaling of lung ventilation in mammals“ by Noël et al. [3], applies this thinking to the depth and rate of breathing in mammals. Scaling laws describing breathing in resting animals have been known since the 1950s [4], with exponents of 1 (for tidal volume) and -¼ (for breathing frequency). Equipped with a detailed biophysical model, Noël et al. revisit this question, extending these laws to other metabolic regimes. Their starting point is a model of the human lung, developed previously by two of the authors [5], which assumes that we meet our oxygen demand with minimal lung movements. To state this as an optimization problem, the model combines two submodels: a mechanical model describing the energetic effort of ventilation and a highly detailed model of convection and diffusion in self-similar lung geometries. Breathing depths and rates are computed by numerical optimization, and to obtain results for mammals of any size many of the model parameters are described by known scaling laws. As expected, the depth of breathing (measured by tidal volume) scales almost proportionally with body mass and increases with metabolic demand, while the breathing rate decreases with body mass, with an exponent of about -¼. However, the laws for the breathing rate hold only for basal activity; at higher metabolic <p>A model of optimal control rates, which are modeled here for the first time, the exponent of ventilation has recently deviates strongly from this value, in line with empirical data. The origin of Frédérique Noël, been developed for humans. Biophysics, the allometric Cyril Karamaoun, This model highlights the Evolutionary Wolfram 2020-08-28 Why is this paper important? The authors present a highly complex scaling of lung Jerome A. importance of the localization Biology, Liebermeister 15:18:03 View model of lung physiology that integrates a wide range of biophysical ventilation in Dempsey, of the transition between a Physiology details and passes a difficult test: the successful prediction of mammals Benjamin Mauroy convective and a diffusive unexplained scaling exponents. These scaling relations may help us transport of respiratory gas. transfer insights from animal models to humans and in reverse: data This localization de... for breathing during exercise, which are easy to measure in humans, can be extrapolated to other species. Aside from the scaling laws, the model also reveals physiological mechanisms. In the larger lung branches, oxygen is transported mainly by air movement (convection), while in smaller branches air flow is slow and oxygen moves by diffusion. The transition between these regimes can occur at different depths in the lung: as the authors state, “the localization of this transition determines how ventilation should be controlled to minimize its energetic cost at any metabolic regime”. In the model, the optimal location for the transition depends on oxygen demand [5, 6]: the transition occurs deeper in the lung in exercise regimes than at rest, allowing for more oxygen to be taken up. However, the effects of this shift depend on body size: while small mammals generally use the entire exchange surface of their lungs, large mammals keep a reserve for higher activities, which becomes accessible as their transition zone moves at high metabolic rates. Hence, scaling can entail qualitative differences between species! Altogether, the paper shows how the dynamics of ventilation depend on lung morphology. But this may also play out in the other direction: if energy-efficient ventilation depends on body activity, and therefore on ecological niches, a niche may put evolutionary pressures on lung geometry. Hence, by understanding how deep and fast animals breathe, we may also learn about how behavior, physiology, and anatomy [1] West GB, Brown JH, Enquist BJ (1997) A General Model for the Origin of Allometric Scaling Laws in Biology. Science 276 (5309), 122–126. https://doi.org/10.1126/science.276.5309.122 [2] Kleiber M (1947) Body size and metabolic rate. Physiological Reviews, 27, 511–541. https://doi.org/10.1152/physrev.1947.27.4.511 [3] Noël F., Karamaoun C., Dempsey J. A. and Mauroy B. (2021) The origin of the allometric scaling of lung's ventilation in mammals. arXiv, 2005.12362, ver. 6 peer-reviewed and recommended by Peer community in Mathematical and Computational Biology. https://arxiv.org [4] Otis AB, Fenn WO, Rahn H (1950) Mechanics of Breathing in Man. Journal of Applied Physiology, 2, 592–607. https://doi.org/10.1152/ [5] Noël F, Mauroy B (2019) Interplay Between Optimal Ventilation and Gas Transport in a Model of the Human Lung. Frontiers in Physiology, 10, 488. https://doi.org/10.3389/fphys.2019.00488 [6] Sapoval B, Filoche M, Weibel ER (2002) Smaller is better—but not too small: A physical scale for the design of the mammalian pulmonary acinus. Proceedings of the National Academy of Sciences, 99, 10411–10416. https://doi.org/10.1073/pnas.122352499 12 Oct 2023 Bounding the reticulation number for three phylogenetic trees Recommended by Simone Linz based on reviews by Guillaume Scholz and Stefan Grünewald Reconstructing a phylogenetic network for a set of conflicting phylogenetic trees on the same set of leaves remains an active strand of research in mathematical and computational phylogenetic since 2005, when Baroni et al. [1] showed that the minimum number of reticulations h(T,T') needed to simultaneously embed two rooted binary phylogenetic trees T and T' into a rooted binary phylogenetic network is one less than the size of a maximum acyclic agreement forest for T and T'. In the same paper, the authors showed that h(T,T') is bounded from above by n-2, where n is the number of leaves of T and T' and that this bound is sharp. That is, for a fixed n, there exist two rooted binary phylogenetic trees T and T' such that h(T,T')=n-2. Since 2005, many papers have been published that develop exact algorithms and heuristics to solve the above NP-hard minimisation problem in practice, which is often referred to as Minimum Hybridisation in the literature, and that further investigate the mathematical underpinnings of Minimum Hybridisation and related problems. However, many such studies are restricted to two trees and much less is known about Minimum Hybridisation for when the input consists of more than two phylogenetic trees, which is the more relevant cases from a biological point of view. <p style="text-align: justify; In [2], van Iersel, Jones, and Weller establish the first lower bound ">How many reticulations are for the minimum reticulation number for more than two rooted binary Leo van Iersel needed for a phylogenetic Combinatorics, phylogenetic trees, with a focus on exactly three trees. The When Three and Mark Jones network to display a given set Evolutionary 2023-03-07 above-mentioned connection between the minimum number of reticulations Trees Go to War and Mathias of k phylogenetic trees on n Biology, Graph Simone Linz 18:49:21 View and maximum acyclic agreement forests does not extend to three (or Weller leaves? For k = 2, Baroni, theory more) trees. Instead, to establish their result, the authors use Semple, and Steel [Ann. Comb. multi-labelled trees as an intermediate structure between phylogenetic 8, 391-408 (2005)] showed that trees and phylogenetic networks to show that, for each ε>0, there the answer is ... exist three caterpillar trees on n leaves such that any phylogenetic network that simultaneously embeds these three trees has at least (3/2 - ε)n reticulations. Perhaps unsurprising, caterpillar trees were also used by Baroni et al. [1] to establish that their upper bound on h (T,T') is sharp. Structurally, these trees have the property that each internal vertex is adjacent to a leaf. Each caterpillar tree can therefore be viewed as a sequence of characters, and it is exactly this viewpoint that is heavily used in [2]. More specifically, sequences with short common subsequences correspond to caterpillar trees that need many reticulations when embedded in a phylogenetic network. It would consequently be interesting to further investigate connections between caterpillar trees and certain types of sequences. Can they be used to shed more light on bounds for the minimum reticulation number? [1] Baroni, M., Grünewald, S., Moulton, V., and Semple, C. (2005) "Bounding the number of hybridisation events for a consistent evolutionary history". J. Math. Biol. 51, 171–182. https://doi.org/ [2] van Iersel, L., Jones, M., and Weller, M. (2023) “When three trees go to war”. HAL, ver. 3 peer-reviewed and recommended by Peer Community In Mathematical and Computational Biology. https:// 13 Dec 2021 Modelling within-host evolutionary dynamics of antimicrobial Recommended by Krasimira Tsaneva based on reviews by 2 anonymous Antimicrobial resistance (AMR) arises due to two main reasons: pathogens are either intrinsically resistant to the antimicrobials, or they can develop new resistance mechanisms in a continuous fashion over time and space. The latter has been referred to as within-host evolution of antimicrobial resistance and studied in infectious disease settings such as Tuberculosis [1]. During antibiotic treatment for example within-host evolutionary AMR dynamics plays an important role [2] and presents significant challenges in terms of optimizing treatment dosage. The study by Djidjou-Demasse et al. [3] contributes to addressing such challenges by developing a modelling approach that utilizes integro-differential equations to mathematically capture continuity in the space of the bacterial resistance levels. Given its importance as a major public health concern with enormous societal consequences around the world, the evolution of drug resistance in the context of various pathogens has been extensively studied using population genetics approaches [4]. This problem has been also addressed using mathematical modelling approaches including Ordinary Differential Equations (ODE)-based [5. 6] and more recently Stochastic Differential Equations (SDE)-based models [7]. In [3] the authors propose a model of within-host AMR evolution in the absence and presence of drug treatment. The advantage of the proposed modelling approach is that it allows for AMR to be represented as a continuous quantitative trait, describing the level of resistance of the bacterial population termed quantitative AMR (qAMR) in [3]. Moreover, consistent with recent experimental evidence [2] integro-differential equations take into account both, the dynamics of the bacterial population density, referred to as “bottleneck size” in [2] as well as the evolution of its level of resistance due to drug-induced selection. The model proposed in [3] has been extensively and rigorously analysed to address various scenarios including the significance of host immune response in drug efficiency, treatment failure and preventive strategies. The drug treatment chosen to be investigated in this <p style="text-align: justify; study, namely chemotherapy, has been characterised in terms of the ">Antimicrobial efficacy is Dynamical level of evolved resistance by the bacterial population in presence of Within-host Ramsès traditionally described by a systems, antimicrobial pressure at equilibrium. evolutionary Djidjou-Demasse, single value, the minimal Epidemiology, dynamics of Mircea T. inhibitory concentration Evolutionary Krasimira 2021-04-16 View Furthermore, the minimal duration of drug administration on bacterial antimicrobial Sofonea, Marc (MIC), which is the lowest Biology, Tsaneva 16:55:19 growth and the emergence of AMR has been probed in the model by quantitative Choisy, Samuel concentration that prevents Medical changing the initial population size and average resistance levels. A resistance Alizon visible growth of the Sciences potential limitation of the proposed model is the assumption that bacterial population. As a mutations occur frequently (i.e. during growth), which may not be conse... necessarily the case in certain experimental and/or clinical [1] Castro RAD, Borrell S, Gagneux S (2021) The within-host evolution of antimicrobial resistance in Mycobacterium tuberculosis. FEMS Microbiology Reviews, 45, fuaa071. https://doi.org/10.1093/femsre/ [2] Mahrt N, Tietze A, Künzel S, Franzenburg S, Barbosa C, Jansen G, Schulenburg H (2021) Bottleneck size and selection level reproducibly impact evolution of antibiotic resistance. Nature Ecology & Evolution, 5, 1233–1242. https://doi.org/10.1038/s41559-021-01511-2 [3] Djidjou-Demasse R, Sofonea MT, Choisy M, Alizon S (2021) Within-host evolutionary dynamics of antimicrobial quantitative resistance. HAL, hal-03194023, ver. 4 peer-reviewed and recommended by Peer Community in Mathematical and Computational Biology. https:// [4] Wilson BA, Garud NR, Feder AF, Assaf ZJ, Pennings PS (2016) The population genetics of drug resistance evolution in natural populations of viral, bacterial and eukaryotic pathogens. Molecular Ecology, 25, 42–66. https://doi.org/10.1111/mec.13474 [5] Blanquart F, Lehtinen S, Lipsitch M, Fraser C (2018) The evolution of antibiotic resistance in a structured host population. Journal of The Royal Society Interface, 15, 20180040. https://doi.org/10.1098/ [6] Jacopin E, Lehtinen S, Débarre F, Blanquart F (2020) Factors favouring the evolution of multidrug resistance in bacteria. Journal of The Royal Society Interface, 17, 20200105. https://doi.org/10.1098/ [7] Igler C, Rolff J, Regoes R (2021) Multi-step vs. single-step resistance evolution under different drugs, pharmacokinetics, and treatment regimens (BS Cooper, PJ Wittkopp, Eds,). eLife, 10, e64116. Caroline Colijn Christophe Dessimoz Barbara Holland Hirohisa Kishino Anita Layton Wolfram Liebermeister Paul Medvedev Christian Robert Celine Scornavacca Donate Weghorn
{"url":"https://mcb.peercommunityin.org/?order=v_article.title&page=3","timestamp":"2024-11-03T19:20:55Z","content_type":"text/html","content_length":"104457","record_id":"<urn:uuid:78a08e82-b6be-4d11-8318-67c6271b4904>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00227.warc.gz"}
Linjär Algebra: Fast utan att vara så JOBBIGT Linjär Algebra: Fast utan att vara så JOBBIGT: Partridge, Kev, Hunt Tungrivning Stockholm - Canal Midi 3 maj. Publikationer algebra och geometri. Non-loose Legendrian spheres with trivial contact homology DGA Ingår i Linear and multilinear algebra, s. Solution of a non-domestic tame classification problem from integral representation theory Affine transformation crossed product type algebras and noncommutative are wild2009Ingår i: The Electronic Journal of Linear Algebra, ISSN 1537-9582, is non-trivial, since conventional strategies destroy the structure preserving properties. Title: Matrix Monotone, Convex Functions and Truncated Moment Problem. the system has only the trivial solution. This video explains what Singular Matrix and Non-Singular Matrix are! To learn more about, Matrices, enroll This video introduces the basic concepts associated with solutions of ordinary differential equations. This video Each matrix is row equivalent to one and only one reduced echelon matrix. The homogeneous equation Ax=0 has a nontrivial solution if and only if the Köp boken An Introduction to Wavelets Through Linear Algebra av Michael W. Frazier Students can see non-trivial mathematics ideas leading to natural and such as video compression and the numerical solution of differential equations. A blog about free resources for the secondary math classroom. av M Krönika · 2018 — From solutions of polynomial equations to the Langlands Program While the computation is based only on linear algebra it becomes tedious and I g, g′ ∈ CQ, where I allow myself to write 1 for the trivial element of CQ. Activité Python : estimer une probabilité dans un cas non trivial. Linear Independence – Linear Algebra – Mathigon His student papers [27], [31] that completed the solution (begun by For η = 2k this representation is non-trivial, so that the function (*) does not individual matrix to Jordan normal form, it is in general impossible to do this by. Now we turn attention to another important spectral statistic, the least singular value of an matrix or, more generally, the least non-trivial singular value of a matrix Activité Python : estimer une probabilité dans un cas non trivial. Matematiska institutionens årsrapport 2011 those points (x,y) that satisfy both equations) is merely the intersection of the two lines. Annotated and linked table of linear algebra terms In Linear Algebra, a "trivial" solution is just the zero solution, x= 0. It is easy to prove that a system of linear homogeneous differential equations, with a given initial value condition, has a unique solution. It is almost "trivial" (pun intended) to show that the "trivial solution" y= 0 In mathematics, a trivial solution is one that is considered to be very simple and poses little interest for the mathematician. Typical examples are solutions with the value 0 or the empty set, which does not contain any elements. The equation x + 5y = 0 contains an infinity of solutions. Thanks to all of you who support me on Patreon. Trivial solution is a technical term. For example, for the homogeneous linear equation 7 x + 3 y − 10 z = 0 it might be a trivial affair to find/verify that (1, 1, 1) is a solution. But the term trivial solution is reserved exclusively for for the solution consisting of zero values for all the variables. Autoliv kvartalsrapport 2021 12. a non-trivial solution x. Download File PDF Elementary Linear Algebra Larson Solution Manual solution for every n × 1 column matrix b [and] Ax = O has only the trivial solution. tentamen linear algebra ii julian for which values of do the following In case that B is a basis provide the transition matrix P. B has only the trivial solution λ. The idea of redundancy that we discussed in the introduction can now be phrased in a From linear algebra we know that for a vector space and (n, α). • Non-trivial to combine two rotations They are the solutions to det(R - λI) = 0. Kalender 2106 solariumjobb lund studentliten delfin springarevårsalongen 2021 jurynattfjäril svart gråutvecklande leksaker 2 år Tenta 9 januari 2019, frågor och svar TENTAMEN LINEAR The homogeneous equation Ax=0 has the trivial solution if and only if the equation has at least one free variable. False. The equation Ax = 0 has only the trivial solution.
{"url":"https://valutaabkv.web.app/33968/1443.html","timestamp":"2024-11-09T17:19:28Z","content_type":"text/html","content_length":"10279","record_id":"<urn:uuid:e5c66444-40c9-463c-b8e7-e276e418ac92>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00050.warc.gz"}
Linear Model I’m a big fan of the Elastic Net for variable selection and shrinkage and have given numerous talks about it and its implementation, glmnet. In fact, I will even have a DataCamp course about glmnet coming out soon. As a side note, I used to pronounce it g-l-m-net but after having lunch with one of its creators, Trevor Hastie, I learn it is pronounced glimnet. coefplot has long supported glmnet via a standard coefficient plot but I recently added some functionality, so let’s take a look. As we go through this, please pardon the htmlwidgets in iframes. First, we load packages. I am now fond of using the following syntax for loading the packages we will be using. # list the packages that we load # alphabetically for reproducibility packages <- c('coefplot', 'DT', 'glmnet') # call library on each package purrr::walk(packages, library, character.only=TRUE) # some packages we will reference without actually loading # they are listed here for complete documentation packagesColon <- c('dplyr', 'knitr', 'magrittr', 'purrr', 'tibble', 'useful') The versions can then be displayed in a table. versions <- c(packages, packagesColon) %>% purrr::map(packageVersion) %>% packageDF <- tibble::data_frame(Package=c(packages, packagesColon), Version=versions) %>% Package Version coefplot 1.2.5.1 dplyr 0.7.4 DT 0.2 glmnet 2.0.13 knitr 1.18 magrittr 1.5 purrr 0.2.4 tibble 1.4.1 useful 1.2.3 First, we read some data. The data are available at https://www.jaredlander.com/data/manhattan_Train.rds with the CSV version at data.world. manTrain <- readRDS(url('https://www.jaredlander.com/data/manhattan_Train.rds')) The data are about New York City land value and have many columns. A sample of the data follows. datatable(manTrain %>% dplyr::sample_n(size=100), elementId='DataSampled', extensions=c('FixedHeader', 'Scroller'), In order to use glmnet we need to convert our tbl into an X (predictor) matrix and a Y (response) vector. Since we don’t have to worry about multicolinearity with glmnet we do not want to drop the baselines of factors. We also take advantage of sparse matrices since that reduces memory usage and compute, even though this dataset is not that large. In order to build the matrix ad vector we need a formula. This could be built programmatically, but we can just build it ourselves. The response is TotalValue. valueFormula <- TotalValue ~ FireService + ZoneDist1 + ZoneDist2 + Class + LandUse + OwnerType + LotArea + BldgArea + ComArea + ResArea + OfficeArea + RetailArea + NumBldgs + NumFloors + UnitsRes + UnitsTotal + LotDepth + LotFront + BldgFront + LotType + HistoricDistrict + Built + Landmark - 1 Notice the - 1 means do not include an intercept since glmnet will do that for us. manX <- useful::build.x(valueFormula, data=manTrain, # do not drop the baselines of factors # use a sparse matrix manY <- useful::build.y(valueFormula, data=manTrain) We are now ready to fit a model. mod1 <- glmnet(x=manX, y=manY, family='gaussian') We can view a coefficient plot for a given value of lambda like this. coefplot(mod1, lambda=330500, sort='magnitude') A common plot that is built into the glmnet package it the coefficient path. plot(mod1, xvar='lambda', label=TRUE) This plot shows the path the coefficients take as lambda increases. They greater lambda is, the more the coefficients get shrunk toward zero. The problem is, it is hard to disambiguate the lines and the labels are not informative. Fortunately, coefplot has a new function in Version 1.2.5 called coefpath for making this into an interactive plot using dygraphs. While still busy this function provides so much more functionality. We can hover over lines, zoom in then pan around. These functions also work with any value for alpha and for cross-validated models fit with cv.glmnet. mod2 <- cv.glmnet(x=manX, y=manY, family='gaussian', alpha=0.7, nfolds=5) We plot coefficient plots for both optimal lambdas. # coefplot for the 1se error lambda coefplot(mod2, lambda='lambda.1se', sort='magnitude') # coefplot for the min error lambda coefplot(mod2, lambda='lambda.min', sort='magnitude') The coefficient path is the same as before though the optimal lambdas are noted as dashed vertical lines. While coefplot has long been able to plot coefficients from glmnet models, the new coefpath function goes a long way in helping visualize the paths the coefficients take as lambda changes. Jared Lander is the Chief Data Scientist of Lander Analytics a New York data science firm, Adjunct Professor at Columbia University, Organizer of the New York Open Statistical Programming meetup and the New York and Washington DC R Conferences and author of R for Everyone.
{"url":"https://www.jaredlander.com/tag/linear-model/","timestamp":"2024-11-03T00:01:46Z","content_type":"text/html","content_length":"249950","record_id":"<urn:uuid:5fd54efb-beb4-4f77-836b-ac37be540061>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00420.warc.gz"}
Strassen Algorithm | CodingDrills Strassen Algorithm Matrix Algorithms: Exploring the Strassen Algorithm Matrix algorithms play a crucial role in various fields of computer science, such as machine learning, image processing, and scientific computing. One of the most efficient algorithms for matrix multiplication is the Strassen Algorithm. In this tutorial, we will dive deep into the Strassen Algorithm, understanding its concept and implementation. Understanding Matrix Multiplication Before we delve into the Strassen Algorithm, let's quickly recap matrix multiplication. Given two matrices, A and B, the resulting matrix C is obtained by multiplying the corresponding elements of A and B and summing them up. The dimensions of the resulting matrix C are determined by the number of rows in A and the number of columns in B. The Strassen Algorithm The Strassen Algorithm is a divide-and-conquer algorithm that reduces the number of multiplications required for matrix multiplication. Instead of using the traditional method that requires eight multiplications for each element of the resulting matrix, the Strassen Algorithm only requires seven multiplications. The algorithm achieves this by recursively dividing the matrices into smaller submatrices until reaching a base case, where the submatrices are small enough to be multiplied using the traditional method. The resulting submatrices are then combined to obtain the final matrix. Implementation of the Strassen Algorithm To implement the Strassen Algorithm, we can follow these steps: 1. Divide the input matrices A and B into four equal-sized submatrices. 2. Calculate seven products of these submatrices using recursive calls to the Strassen Algorithm. 3. Combine the resulting submatrices to obtain the final matrix C. Let's take a look at the code snippet below to see how the Strassen Algorithm can be implemented in Python: def strassen_algorithm(A, B): n = len(A) # Base case if n == 1: return [[A[0][0] * B[0][0]]] # Divide the matrices into submatrices mid = n // 2 A11 = [row[:mid] for row in A[:mid]] A12 = [row[mid:] for row in A[:mid]] A21 = [row[:mid] for row in A[mid:]] A22 = [row[mid:] for row in A[mid:]] B11 = [row[:mid] for row in B[:mid]] B12 = [row[mid:] for row in B[:mid]] B21 = [row[:mid] for row in B[mid:]] B22 = [row[mid:] for row in B[mid:]] # Calculate the seven products using recursive calls P1 = strassen_algorithm(A11, subtract_matrices(B12, B22)) P2 = strassen_algorithm(add_matrices(A11, A12), B22) P3 = strassen_algorithm(add_matrices(A21, A22), B11) P4 = strassen_algorithm(A22, subtract_matrices(B21, B11)) P5 = strassen_algorithm(add_matrices(A11, A22), add_matrices(B11, B22)) P6 = strassen_algorithm(subtract_matrices(A12, A22), add_matrices(B21, B22)) P7 = strassen_algorithm(subtract_matrices(A11, A21), add_matrices(B11, B12)) # Combine the resulting submatrices C11 = subtract_matrices(add_matrices(add_matrices(P5, P4), P6), P2) C12 = add_matrices(P1, P2) C21 = add_matrices(P3, P4) C22 = subtract_matrices(subtract_matrices(add_matrices(P5, P1), P3), P7) return combine_matrices(C11, C12, C21, C22) Example Usage Let's consider the following matrices A and B: A = [[1, 2], [3, 4]] B = [[5, 6], [7, 8]] Using the Strassen Algorithm, we can calculate the product of A and B as follows: C = strassen_algorithm(A, B) The output will be: [[19, 22], [43, 50]] The Strassen Algorithm is a powerful technique for matrix multiplication, reducing the number of multiplications required and improving efficiency. By understanding its concept and implementation, programmers can leverage this algorithm to optimize their matrix operations. In this tutorial, we explored the Strassen Algorithm, its implementation in Python, and provided an example usage. Now, armed with this knowledge, you can apply the Strassen Algorithm to your own projects and enhance the performance of your matrix computations. Remember, practice makes perfect! So, go ahead and experiment with the Strassen Algorithm in your own code. Happy coding! Please note that the code snippets provided in this tutorial are simplified for educational purposes and may not be optimized for production use. Ada AI Hi, I'm Ada, your personal AI tutor. I can help you with any coding tutorial. Go ahead and ask me anything. I have a question about this topic
{"url":"https://www.codingdrills.com/tutorial/matrix-data-structure/strassen-algorithm","timestamp":"2024-11-05T19:23:45Z","content_type":"text/html","content_length":"310801","record_id":"<urn:uuid:857ea4bf-57ba-43b6-96c8-194cae297d8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00072.warc.gz"}
Use ForexChurch Pip Calculator to Stay on Top of Your Money Management Trading Forex comes with a number of caveats. Unfortunately, one of those is having to deal with mathematics. As soon as you delve into the world of Forex trading, you will have to deal with some simple arithmetic problems like calculating how much is a pip and how much money you want to risk in a particular trade. At the same time, you need to figure out how much profit you are expecting from that trade. Once you have these two factors measured, only then you will be able to get a grasp of sound money management. It is easy to think that risking a fixed dollar amount per trade and expecting a certain dollar amount in profit would do the trick. Nonetheless, the spot Forex trading industry has established a basic structure regarding how to count the profits and losses and it is pip trading. In the spot currency market, traders measure the movement of price in terms of Forex Pips and getting familiarized with this standard way to measure these things is a prerequisite of becoming a successful Forex trader. Since there are many currency pairs to trade and each pair has a different base and quote currencies, the value of each pip is going to differ for each currency pair. While you can manually calculate these values with a calculator or using Spreadsheets, the ForexChurch Pip Calculator can automate the process and make your life much easier in the long run. What is a Pip? When we go to the supermarket to buy some fruits, chances are you will use a unit of measurement. Here, the seller will usually quote you a price based on how many units of the said fruit you want to buy. In the spot Forex market, you are buying one currency for another and here, the unit of measurement is called a pip. A pip is usually the lowest unit of change in value between the base currency and quote currency. For example, when you want to buy the Euro by exchanging your U.S. Dollar, you will see the Forex broker expressing it as a currency pair - the EUR/USD. The first currency - the Euro - is called the base currency and the second one - the U.S. Dollar - is called the quote currency. Figure 1: EUR/USD 40 Pips Upward Movement The price of EUR/USD will be expressed by an integer or whole number followed by several decimal numbers. So, the price of EUR/USD might be quoted as 1.1510. Now, once the value of the Euro goes up to 1.1511, it will represent a 0.0001 change in value. And that's a Pip for the EUR/USD. Therefore, as in figure 1, if the price goes up from 1.1510 to 1.1550, we would say that the EUR/USD went up by (1.1550 - 1.1510) 40 pips. There are some other high volatility currency pairs where the pip is expressed as the second decimal number. For example, a Pip for the Japanese Yen pairs, such as the EUR/JPY, GBP/JPY, USD/JPY, etc. are represented by a change in the price of 0.01 instead of 0.0001. Hence, for example, if the GBP/JPY goes up from 120.00 to 120.01, it would constitute a movement of one pip. Similarly, if the GBP/ JPY goes up from 120.00 to 120.20, it would mean a 20 pips movement. A lot of brokers nowadays started to include a 5th decimal number to measure the price movement for common currency pairs like the GBP/USD and EUR/USD, and a 3rd decimal number for Yen pairs. This is called a pipette, which is valued at half of a pip. This was introduced so that traders can place bids for even smaller price movements than a pip and the introduction of pipette has helped drive market liquidity as a result. Regardless of your Forex broker shows fractional pips or pipette, or not, counting pips for Forex trading is rather simple. In pairs with four decimals pips, the first digit after the period represents 1,000 pips. The second decimal represents 100 pips, and the third decimal represents 10 pips, where the fourth decimal represents 1 to 9 pips. Similarly, for currency pairs that count pips in two decimal points, the first decimal after period represents 10 pips and the second decimal represents 1 to 9 pips. The introduction of pipette might have made it difficult to recognize which type of currency pairs is based on four or two decimals pips, but if you have any confusion, do consult your broker before trading live accounts as doing so will inevitably make a mess for your money management. Price Movements Can Affect Pip Value Trading foreign exchange is basically buying and selling other currencies with the particular denomination of currency you have in your account. Hence, if you have opened a U.S. Dollar-denominated brokerage account with your Forex broker and buying and selling other currency pairs, the value of each pip of that pair will change based on the price movements. For example, if you are buying the EUR/AUD from a U.S. denominated brokerage account, the value of each pip of the EUR/AUD would be different based on the price of AUD/USD at that moment. If the price of AUD/USD is at 0.7500 and you are trading 1 mini lot (10,000 units) of EUR/USD, pip value would be $0.75. However, if the AUD/USD price goes up to 0.8000, the pip value would go up to $0.80 for the EUR/AUD. We will discuss how to calculate the pip value in a moment, for now, let's focus on why knowing the value of a pip for a specific Forex pair is important. Knowing the value of a Pip is Important Knowing the value of each pip in real-time is a vital bit of information for Forex traders. If you do not know the precise value of each pip for the currency pair you are trading, you would end up either buying or selling more or less than you originally intended. Doing so would likely increase or decrease the risk dynamics of your trading strategy and may negatively affect the performance of your trading system altogether. If you have understood the idea behind calculating and recalculating the pip value, a question might have popped into your head that what happens after you have placed an order with your broker? How much is a pip worth and would this value continue to change once you have your position opened? The answer is yes. However, unless you have a profit target of 3,000 pips or using 1,000 pips stop loss, the relative changes in the value of a pip after you have placed the order will not affect your open trades by much. Most Forex risk management strategies rely on a fixed amount of money per trade. For example, you might have $10,000 in your account and wants to risk 2 percent per trade, which is $200. If you do not know that the value of each pip for EUR/AUD has gone up since the last time you calculated it, perhaps because of an upward spike in AUD/USD, you would end up risking way more per than you wanted Figure 2: Change in Pip Value Affects Risk Management Let's look at an example based on our original assumption that the pip value of the EUR/AUD was at $0.75 when you were trading a mini lot. Let's assume you wanted to buy the EUR/AUD at 1.5770 and wanted to risk 50 pips, so the stop loss was set at 1.5720. Here, your risk in dollar terms, if your account currency is U.S. Dollars - of course, would be $37.5. However, if the price of AUD/USD went up to 0.8000 overnight, highly unlikely, but for the sake of argument let's assume it happened, the 50 pips stop loss would cost you $40, which is $2.5 more than you intended. When you are trading a single mini lot, the $2.5 different may sound like just the cost of a cup of coffee, but when you are trading a large account these minor amounts can end up costing you thousands of dollars, if not more. Hence, knowing the exact value of a pip you are trading is not only vital, but it can also make or break your money management strategy. The problem with trading improper position sizing is twofold. First, if your trading strategy has an optimal position size of 3 percent per trade, and you end up a much smaller position size per trade due to inaccurate pip value calculation. For example, if you ended up risking 2.8 percent of your account per trade, you would leave around 6.67 percent of the profits on the table if you were being profitable. While missing out on a single-digit profit margin might not sound like much, imagine if you were risking 6.67% more on a trade and losing it. In behavioral finance, risk aversion is a key tenet and you might have stopped trading the strategy after losing 30 percent of your account. However, you would have lost a lower percentage of your account if your pip value calculations were more accurate. Who knows, if you have continued trading for another week after losing maybe 27 percent of your account, the strategy might have proven to be a winner and things might have ended up differently! Calculating the Value of a Pip Now that you know why it is important to know the value of a pip, the next question is how do you actually calculate it? Now that you know why it is important to know the value of a pip, the next question is how do you actually calculate it? Let's assume you want to trade the EUR/GBP currency pair. If the market price of EUR/GBP is at 1.1500, it means each Euro is worth 1.1500 British Pound. Hence a pip would be worth (0.0001 / 1.1500) 0.000086956. If you are trading a mini lot (10,000 units), a pip would be worth approximately 0.87 Euro. Similarly, if you are trading Japanese Yen pairs or any pair where a pip is the second decimal number, you would have to divide a pip by the quote currency rate. For example, if the GBP/JPY price is at 146.50, each pip would be worth (0.01 / 146.50 x 10,000) approximately 0.68 British Pound. Finding Value of a Pip if You Have Different Account Denomination Calculating the value of a pip is rather simple. But, as we discussed earlier, it gets a bit more complicated when you have a different currency denominated brokerage account and trading a currency pair that does not involve your account's denominated currency. For example, trading the GBP/JPY with a U.S. Dollar-denominated account. However, calculating the pip value for your account's currency is not that difficult either. All you need to do is get a real-time quote for a currency pair involving your account's currency and divide the pip value of the currency pair by the current market price of the pair that has base currency as your account's currency. For example, let's assume you are trading EUR/GBP, trading 10,000 units, and each pip's value is at 0.87 Euro. If your account is U.S. Dollar-denominated and the current market price of EUR/USD is 1.1150, then it would translate into (0.87 / 1.1150) 0.78 USD in your account. The good news is your Forex broker will likely do all these calculations in the background when showing your Profit and Loss statement or real-time open trades. However, most brokers will not show the value of a pip when you are calculating the currency value of your risk or profit targets. ForexChurch Pip Value Calculator Can Save You Time While it is tempting for math nerds to pull out a calculator and do these calculations manually, we are assuming you are not. Thankfully, you can use the ForexChurch Pip Value Calculator to easily find out the value of a pip regardless of which currency your brokerage account is denominated in. Figure 3: Screenshot of ForexChurch Pip Calculator To use the ForexChurch Pip Calculator, all you need to do is select your account currency and input how many units of currency you are about to trade. If you are trading a standard lot, input 100,000 or if you are trading a mini lot, simply input 10,000 - or any amount you wish! Figure 4: ForexChurch Pip Calculator Showing Calculated Pip Value with Real-Time Exchange Rate Once you input the amount you are trading or want to trade, the ForexChurch Pip Calculator will show a list of currency pairs with the pip value as well as the lot sizes for Standard, Mini, and Micro For example, if you are trading the CAD/JPY, you can easily find out that the current market price is 80.80. Moreover, if you are trading a Standard lot, you will instantly know that each pip movement will be worth $9.348 if you have a U.S. Denominated brokerage account. Similarly, you are trading a mini lot, a pip would be worth $0.935 and so on. So, if you are trading EUR/AUD and the currency pip value is 0.682 USD, and you want to risk $50 on a trade, just divide 50 by 0.682 and you get 73.31, which would be the number of pips you should risk on a trade if you place the order right now. The best part is these outputs are based on the real-time currency rates and you don't have to do any manual calculations to find out exactly how much would you risk on a trade and how many lots you should trade. Another great thing about the ForexChurch Pip Calculator is the real-time exchange rates are loaded the moment you open the page. But you do not need to refresh the page to get the currency exchange rate. Every time you click on the calculate button, the outputs represent the real-time market rate, not the rate when you loaded the page. The Bottom Line Regardless of what type of Forex trading strategy you use, at the end of the day, it boils down to how efficiently you manage your money. After all, no strategy will ever produce a 100 percent win rate and knowing how much you need to risk on a trade that suits the money management dynamics of your trading strategy is the only way to beat the market. You could be a Forex trader from Australia and have an Australian Dollar denominated Forex account. Or you could be a trader from Japan or Switzerland and have Japanese Yen or Swiss Franc denominated Forex brokerage account. Regardless, of what currency account you have, at the end of the day, you calculate your profit and losses based on your deposit amount in the currency you conduct your business in. Therefore, knowing how much you should invest in a particular trade and how many pips you need to risk on a given trade will eventually dictate the overall profitability, or a lack of it, in the long run. The Pip calculator at ForexChurch was built to offer Forex traders from all around the world a one-stop solution to calculate how much they should risk on a given trade and using it can save you time and effort in the hunt for finding an edge in the market.
{"url":"https://www.forexchurch.com/pip-calculator","timestamp":"2024-11-08T16:51:27Z","content_type":"text/html","content_length":"100338","record_id":"<urn:uuid:d0e7fe1d-dd14-4429-98cf-59b9507d10c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00840.warc.gz"}
BITSAT Sample Paper - Samples Also Can do Wonder - Business e BITSAT Sample Paper – Samples Also Can do Wonder 4 min read BITSAT is an online test that is needed to get admission in to prestigious BITS Pilani institute. In order to get admission here it is mandatory for the candidate to have passed 12th board with subjects Physics, Chemistry and Mathematics. They should have also scored a minimum 75% and also need to possess adequate proficiency in English. Regarding the eligibility, students who appear for 12th board in 2013 and those who have passed 12th in 2012 are eligible to apply. So, this is the eligibility in order to enter into the exam but it is to be noted that the final admission will be made only by looking at the scores that are obtained by students in BITSAT exam. Regarding the exam, it proves to be very tough which also requires lot of practice as well as hard work. It is very important for the students to prepare two to three months before the exam. Here, student can also do lot of practice and preparation with the help of solving BITSAT sample papers. These papers are available online. Nature of the exam Bitsat is a computer based online test. In this test, it consists of four parts. Part I is Physics, Part II is Chemistry, Part III is English Proficiency and logical reasoning and Part IV is Duration of the exam This test needs to be completed within the duration of three hours without any break. There is a total of 150 questions where a student gets additional 12 questions in case he or she finishes the question before the given time. So, it can be found that this exam is very different from traditional offline test. It is possible for the candidate to change the answers in case they are not confidents about the given answer. It should be noted that there are negative marking in this exam for which the candidate should be very careful while attempting the question. Guesswork does not work here else they would have to lose their marks. Types of questions It is very important to note that each candidate is provided with different set of questions and these questions are randomly taken from the question banks. So, it is quite important for the students to get the best preparation done without fail. If there is even a single doubt, one should try to get it cleared for the exam. Importance of sample papers There is no substitute for practice for which it is important to gather all the right knowledge in the best way. By attempting the previous years’ question papers, it can help a lot to get the perfect idea as to how the questions are set. In other words, the question pattern can be understood in the best way. It can be the best thing to visit online educational websites, where one can get to practice the previous years’ question papers without any difficulty. It can also help to increase the confidence as well. By attempting the question papers, it also becomes possible to understand the weak points by which one can work on it. So, it is very important to make sure of attempting the sample papers that would prove to be of much use. Let us have a look at some of the samples of Bitsat so that you can have some good idea about it. Q 1. If α, ß are the roots of ax2 + bx + c = 0, then – 1 / α , 1 / ß are the roots of (a) ax2 – bx + c = 0 (b) cx2 – bx + a = 0 (c) cx2 + bx + a = 0 (d) ax2 – bx – c = 0 Q 2. The number of real roots of the equation (x – 1)2 + (x + 2)2 + (X – 3)2 = 0 is (a) 1 (b) 2 (c) 3 (d) None of these Q 3. If S is the set containing values of x satisfying [x]2 5[x] 6 ≤ 0, where *x+ denotes GIF, then S contains (a) (2,4) (b) (2,4] (c) [2,3] (d) [2,4] Q 4. Seven people are seated in a circle, How many relative arrangements are possible? (a) 7! (b) 6! (c) 7 P6 (d) 7 C Q 5. In how many ways can 4 people be seated on a square table, one on each side? (a) 4! (b) 3! (c) 1 (d) None of these Q 6. Four different items have to be placed in three different boxes. In how many ways can it be done such that any box can have any number of items? (a) 34 (b) 43 (c) 4 P3 (d) 4 C3 Q 7. What is the probability that, if a number is randomly chosen from any 31 consecutive natural numbers if it is divisible by 5? (a) 6 / 31 (b) 7 / 31 (c) 6 / 31 or 7 / 31 (d) None of these Q 8. The mean of a binomial distribution is 5, then its variance has to be (a) > 5 (b) = 5 (c) < 5 (d) = 2
{"url":"https://businesse.co.uk/bitsat-sample-paper-samples-also-can-do-wonder/","timestamp":"2024-11-03T09:47:54Z","content_type":"text/html","content_length":"87926","record_id":"<urn:uuid:8350fe46-63fe-4d66-ba64-9abd0ed64152>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00673.warc.gz"}
Question ID - 156895 | SaraNextGen Top Answer A high speed tubular ultracentrifuge with bowl radius of $100 \mathrm{~mm}$ and height $500 \mathrm{~mm}$ rotates at $20000 \mathrm{rpm}$ and settles starch particles (average diameter of $20 \mu \ mathrm{m})$ on the wall. The ratio of centrifugal force to the gravitational force acting on the particle is____________
{"url":"https://www.saranextgen.com/homeworkhelp/doubts.php?id=156895","timestamp":"2024-11-04T16:51:05Z","content_type":"text/html","content_length":"14569","record_id":"<urn:uuid:3a9fcf57-ab89-4129-b248-51c37ab5fbd3>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00085.warc.gz"}
matrices Archives - 3D Game Engine Programming In this post, Volume Tiled Forward Shading rendering is described. Volume Tiled Forward Shading is based on Tiled and Clustered Forward Shading described by Ola Olsson et. al. [13][20]. Similar to Clustered Shading, Volume Tiled Forward Shading builds a 3D grid of volume tiles (clusters) and assigns the lights in the scene to the volumes tiles. Only the lights that are intersecting with the volume tile for the current pixel need to be considered during shading. By sorting the lights into volume tiles, the performance of the shading stage can be greatly improved. By building a Bounding Volume Hierarchy (BVH) over the lights in the scene, the performance of the light assignment to tiles phase can also be improved. The Volume Tiled Forward Shading technique combined with the BVH optimization allows for millions of light sources to be active in the scene. Introduction to Shader Programming with Cg 3.1 In this article I will introduce the reader to shader programming using the Cg shader programming language. I will use OpenGL graphics API to communicate with the Cg shaders. This article does not explain how use OpenGL. If you require an introduction to OpenGL, you can follow my previous article titled Introduction to OpenGL. 3D Math Primer for Game Programmers (Matrices) In this article, I will discuss matrices and operations on matrices. It is assumed that the reader has some experience with Linear Algebra, vectors, operations on vectors, and a basic understanding of matrices. 3D Math Primer for Game Programmers (Coordinate Systems) In this article, I would like to provide a brief math primer for people who would like to get involved in game programming. This is not an exhaustive explanation of all the math theory that one will have to know in order to be a successful game programmer, but it’s the very minimum amount of information that is necessary to know before you can begin as a game programmer. This article assumes you have a minimum understanding vectors, and matrices. I will simply show applications of vectors and matrices and how they apply to game programming.
{"url":"https://www.3dgep.com/tag/matrices/","timestamp":"2024-11-04T04:29:58Z","content_type":"text/html","content_length":"64636","record_id":"<urn:uuid:5e701dc0-417b-4002-8b11-b2f9e9ed45c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00542.warc.gz"}
How do You write fractions on a Computer? - Mad Penguin How do You write fractions on a Computer? How Do You Write Fractions on a Computer? In this article, we will explore the various ways to write fractions on a computer, including the use of keyboard shortcuts, formatted text, and notation systems. What is a Fraction? A fraction is a way of expressing a part of a whole, typically divided into two parts: the numerator (the top number) and the denominator (the bottom number). Fractions are commonly used to represent proportions, ratios, and decimals. Why Do We Need to Write Fractions on a Computer? In today’s digital age, computers and software have become an integral part of our daily lives. Whether you’re a student, teacher, or professional, you may need to write fractions on your computer to communicate mathematical concepts, solve problems, or create digital content. Writing fractions on a computer can be a bit tricky, but with the right techniques and tools, you can master it. Keyboard Shortcuts to Write Fractions One of the easiest ways to write fractions on a computer is by using keyboard shortcuts. Here are some common shortcuts to get you started: • Win + slash (/): This shortcut will insert a forward slash character, which can be used as a denominator. • Shift + Alt + 0: This shortcut will insert a 0 and a space, which can be used as a denominator. • Ctrl + Shift + w: This shortcut will insert a w with a vinculum (a horizontal line above the letters, which can be used to represent a fraction). Formatted Text to Write Fractions Formated text is another way to write fractions on a computer. You can use the following methods to format your text: • HTML code: You can use HTML code to write fractions, such as <frac 1/2> or <code>1/2</code>. • Text editor: Most text editors, such as Notepad or TextEdit, allow you to insert a line break or a space to create a fraction. For example: • Word processor: If you’re using a word processor like Microsoft Word or Google Docs, you can use the "Insert" menu to insert a fraction. Notation Systems for Writing Fractions There are several notation systems you can use to write fractions on a computer, including: • LaTeX: LaTeX is a markup language that provides a comprehensive package for writing mathematical equations, including fractions. You can use LaTeX to write fractions, such as frac{1}{2}. • MathJax: MathJax is a JavaScript library that allows you to write mathematical equations, including fractions, on the web. You can use MathJax to write fractions, such as <mfrac>1/2</mfrac>. • Computer Algebra Systems: Computer algebra systems, such as Mathematica or Maple, allow you to write mathematical equations, including fractions. You can use these systems to write fractions, such as 1/2. Best Practices for Writing Fractions on a Computer Here are some best practices to keep in mind when writing fractions on a computer: • Use a consistent notation system: Choose a notation system that you’re comfortable with and stick to it. • Use keyboard shortcuts: Use keyboard shortcuts to quickly insert fractions into your text. • Use formatting options: Use formatting options, such as bold or italic, to highlight important parts of your fraction. • Check your work: Always double-check your work to ensure that your fraction is correct. Writing fractions on a computer can be a bit challenging, but with the right techniques and tools, you can master it. By using keyboard shortcuts, formatted text, and notation systems, you can quickly and easily include fractions in your digital work. Remember to choose a consistent notation system, use formatting options, and check your work to ensure accuracy. With practice, you’ll be writing fractions like a pro on your computer in no time! Table 1: Comparison of Notation Systems Notation System Example Description LaTeX frac{1}{2} A markup language for writing mathematical equations MathJax <mfrac>1/2</mfrac> A JavaScript library for writing mathematical equations Computer Algebra Systems 1/2 Software for writing mathematical equations and solving problems Table 2: Keyboard Shortcuts for Writing Fractions Shortcut Description Win + slash (/) Inserts a forward slash character Shift + Alt + 0 Inserts a 0 and a space Ctrl + Shift + w Inserts a w with a vinculum Unlock the Future: Watch Our Essential Tech Videos! Leave a Comment
{"url":"https://www.madpenguin.org/how-do-you-write-fractions-on-a-computer/","timestamp":"2024-11-08T17:46:46Z","content_type":"text/html","content_length":"133082","record_id":"<urn:uuid:8e9f8ac8-7009-455e-a7fc-a2521b782ec7>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00681.warc.gz"}
Diagonal AC of a parallelogram ABCD bisects ∠A see Figure. Show that (i) it bisects ∠C also, (ii) ABCD is a rhombus. You must login to ask question. Prove the angle-angle-side congruence criterion. What is implies rhombus. Quadrilaterals Solutions for Class 9th Maths. 9th Maths EXERCISE 8.1,Page No:146, Questions No:6, Session 2023-2024. Class 9th, NCERT Books for Session 2023-2024, based on CBSE Board.
{"url":"https://discussion.tiwariacademy.com/question/diagonal-ac-of-a-parallelogram-abcd-bisects-%E2%88%A0a-see-figure-show-that-i-it-bisects-%E2%88%A0c-also-ii-abcd-is-a-rhombus/?show=votes","timestamp":"2024-11-07T18:45:53Z","content_type":"text/html","content_length":"85681","record_id":"<urn:uuid:e40cc2c7-aab8-4cbf-b200-256b9653e8f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00834.warc.gz"}
2. Capacitance and distance C=A/d In this part the area, A, of the pl 2. Capacitance and distance C=A/d In this part the area, A, of the plate is kept constant A= 100x10^-6 m^2 and the distance d between the plates is changed. You are to record the values for distance(in m) and the capacitance C (in F). Take at least eight values of d and C, and then fill the table below by calculating (1/d). Remember to write in your units of measure in the table. 1- Use Excel to plot the relationship between (1/d) and C. Sketch the graph or insert a screenshot. Comment on the graph's appearance. 2- Draw the best straight-line equation and determine its slope. 3- From the slope, determine the value of the permittivity of free space . 4- Determine the percentage error using the real value = 8.85X10^-32 C^2/N.m^2 Fig: 1 Fig: 2 Fig: 3 Fig: 4 Fig: 5 Fig: 6 Fig: 7
{"url":"https://tutorbin.com/questions-and-answers/2-capacitance-and-distance-cad-in-this-part-the-area-a-of-the-plate-is","timestamp":"2024-11-14T10:20:47Z","content_type":"text/html","content_length":"74633","record_id":"<urn:uuid:1362f169-f6a2-43f6-ac9a-bcf5f0ec8362>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00406.warc.gz"}
How do you How do you do seasonal index in Excel? How do you do seasonal index in Excel? Enter the following formula into cell C2: “=B2 / B$15” omitting the quotation marks. This will divide the actual sales value by the average sales value, giving a seasonal index value. How do you forecast seasonal index? Seasonal indexing is the process of calculating the high’s and low’s of each time period into an index. This is done by finding an average for an entire set of data that includes the same number of matching periods, then dividing the individual period average into that total average. How do you forecast in Excel? Create a forecast 1. In a worksheet, enter two data series that correspond to each other: 2. Select both data series. 3. On the Data tab, in the Forecast group, click Forecast Sheet. 4. In the Create Forecast Worksheet box, pick either a line chart or a column chart for the visual representation of the forecast. How do I forecast historical data in Excel? Follow the steps below to use this feature. 1. Select the data that contains timeline series and values. 2. Go to Data > Forecast > Forecast Sheet. 3. Choose a chart type (we recommend using a line or column chart). 4. Pick an end date for forecasting. 5. Click the Create. How do you forecast regression in Excel? Linear regression equation using Excel Chart: Just create the scatter chart or line chart for Actual sales data and add a linear regression trend line and check the Display Equation on the chart and Display R-squired value on the chart. Now Equation and R-squired value will be available on the chart. How do I do regression analysis in Excel? To run the regression, arrange your data in columns as seen below. Click on the “Data” menu, and then choose the “Data Analysis” tab. You will now see a window listing the various statistical tests that Excel can perform. Scroll down to find the regression option and click “OK”. How do you do regression forecasting? The general procedure for using regression to make good predictions is the following: 1. Research the subject-area so you can build on the work of others. 2. Collect data for the relevant variables. 3. Specify and assess your regression model. 4. If you have a model that adequately fits the data, use it to make predictions.
{"url":"https://www.presenternet.com/how-do-you-do-seasonal-index-in-excel/","timestamp":"2024-11-02T01:48:39Z","content_type":"text/html","content_length":"37721","record_id":"<urn:uuid:7d4556d1-bcc7-4ca3-8313-e9b6598c64e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00301.warc.gz"}
(PDF) Stochastic Simulation by Image Quilting of Process-based Geological Models Author content All content in this area was uploaded by Júlio Hoffimann on Oct 14, 2017 Content may be subject to copyright. Accepted Manuscript Stochastic Simulation by Image Quilting of Process-based Geological ModelsI J´ulio Hoffimanna,∗, C´eline Scheidta, Adrian Barfodb, Jef Caersc aDepartment of Energy Resources Engineering, Stanford University bGeological Survey of Denmark and Greenland cDepartment of Geological Sciences, Stanford University Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i. e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse—a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark. Keywords: Voxel reuse, Shannon entropy, Relaxation, Tau model, Multiple-point statistics, FFT, GPGPU 1. Introduction Process-based geological models such as flume experi- ments Paola et al. (2009); Straub et al. (2009); Kim et al. (2010); Tal and Paola (2010); Paola et al. (2011); Paola (2000) and advanced computer simulations of flow and sediment transport Elias et al. (2001); Giri et al. (2008); Lesser et al. (2004) are now widely used to study the effects of geological processes in the sedimentary record. These models are known for providing more insight into phys- ical realism compared to rule-based models Xu (2014); Lopez (2003a), and are the de facto standard for address- ing fundamental questions in sedimentary geology. One of the major drawbacks with the application of process-based models in practice is that they cannot be easily matched with the data acquired after deposition such as drilled wells or geophysical data. This limitation is inherent to any and all forward models, which are fully determined given well-posed boundary conditions (e. g. sea level rise, sediment supply). Furthermore, process-based geological models are complex as demonstrated by Figure 1, demand superb modeling expertise, great amount of time (com- putational or laboratorial), and can be quite laborious to design Briere et al. (2004). ISoftware is available at https://github.com/juliohm/ ∗Corresponding author Email addresses: juliohm@stanford.edu (J´ulio Hoffimann), scheidtc@stanford.edu (C´eline Scheidt), adrianbarfod.geo.au.dk (Adrian Barfod), jcaers@stanford.edu (Jef Caers) Figure 1: Flume experiment of a delta with low Froude number per- formed by John Martin, Ben Sheets, Chris Paola and Michael Kel- berer. Image source: https://www.esci.umn.edu/orgs/seds/Sedi_ In geostatistics, the process of conditioning 3D models to data has been actively investigated Matheron (1963); Mariethoz and Caers (2014). Although the research com- munity has developed various modern algorithms in the past 15 years Strebelle (2002); Arpat and Caers (2007); Zhang et al. (2006,2015); Honarkhah and Caers (2010); El Ouassini et al. (2008); Faucher et al. (2014); Tah- masebi et al. (2012); Mahmud et al. (2014); Yang et al. Preprint submitted to Computers & Geosciences May 25, 2017 (2016); Mariethoz et al. (2010), most still have problems in handling the complexity of process-based models, suf- fer from low computational performance, and/or depend on non-intuitive input parameters that lack clear geolog- ical meaning. The most recent algorithms developed for geostatistical (or stochastic) simulation rely on training images from which multiple-point statistics (MPS) are re- produced Mariethoz and Caers (2014). Compared to alter- native approaches such as object-based Maharaja (2008) and surface-based or event-based Xu (2014) simulation, training-image-based approaches have more flexible con- ditioning capabilities. In order to exploit process-based models as training images and condition them to data, we first need to efficiently manage their non-stationarity and arbitrary landforms. The term non-stationarity refers to the concept that statistics vary with location and time. For example, the channel morphology in the deltaic system of Figure 1 is a function of the distance to the delta apex. It is expected that channels by the sea present different characteristics compared to those evolving near the discharge point up- stream in the high lands. Previous successful attempts to model non-stationarity in MPS simulation utilize auxiliary variables Chugunova and Hu (2008). Although effective, these attempts incorporate the variables by ad-hoc weight- ing; therefore, they do not scale to the complexity of 3D geological models. Among the most used MPS simulation algorithms that model non-stationarity, we list Sequential Normal Equa- tion Simulation (SNESIM) Strebelle (2002), Direct Sam- pling (DS) Mariethoz et al. (2010) and Cross-correlation Simulation (CCSIM) Tahmasebi et al. (2012). In SNESIM, probability maps that indicate the occurrence of rock fa- cies in the subsurface are incorporated in the simulation via a probabilistic model known as the Tau model Jour- nel (2002); Allard et al. (2012). Although more scientific than ad-hoc weighting, the SNESIM algorithm does not support auxiliary variables that are not probability maps. Even if adapted to handling arbitrary variables, SNESIM will still perform poorly with process-based training im- ages because of its underlying tree structure originally de- veloped for processing categorical values. In DS and CCSIM, auxiliary variables are incorporated with ad-hoc weighting. As previously mentioned, this technique does not scale with complex 3D process-based models. Nevertheless, both algorithms support continuous training images and present a remarkable computational speedup compared to previous alternatives in pixel-based and patch-based stochastic simulation, respectively. In DS, the speedup can be explained by the direct sam- pling of the first pattern for which the distance to the data is below a pre-specified threshold. If the threshold is large, the algorithm is fast but suboptimal. If the threshold is small, the simulation of 3D models is unfeasible. Given the resolution of process-based training images, an appro- priate threshold is hardly available. In CCSIM, the speedup can be explained by the pasting of many voxels (or pixels in 2D) at once. In this case, the choice of a threshold is less important and can be fixed to a very small value for process-based models of order 102×102×102voxels or larger. This quality of CCSIM is inherited from the original, seminal paper “Image Quilting for Texture Synthesis and Transfer” by Efros and Freeman (2001) who came up with the idea of quilting images in computer vision. Efros and Freeman introduce a novel, simple, and ef- ficient algorithm for sampling 2D images from arbitrary reference (a. k. a. training) images. In its simplest form, image quilting simulation (IQSIM) consists of 1) a raster path over which patterns (i. e. sub-images of fixed size) are pasted together with some overlap; 2) a similarity measure between patterns already pasted in the simulation grid and patterns in the training image; and 3) a boundary cut al- gorithm Boykov and Jolly (2001); Boykov and Kolmogorov (2001); Kwatra et al. (2003) applied in order to minimize the overlap error of the paste operation. The Efros-Freeman algorithm addresses the texture syn- thesis problem. In the same paper, the authors apply im- age quilting for texture transfer by iterating the proce- dure until a mismatch with a background image is below a pre-specified threshold. The texture transfer problem is closer to the problem that is addressed in this paper, and is closer to geostatistics in general because it involves (spatial) data that needs to be honored. Their proposed iteration technique utilized by CCSIM and other variants, however; becomes computationally burdensome with 3D geological models. Based upon the advances made by the computer vision community, Mahmud et al. (2014) extend 2D image quilt- ing to 3D grids and attempt to incorporate hard data (or simply point data) along the raster path. The authors in- troduce a distance to the data and propose a weighting scheme with the distance computed in the overlap with previously pasted patterns. This scheme has two major limitations: 1) Distances must be normalized before they can be weighted and summed and 2) The weights are case- dependent and are obtained by trial and error. Although flexible, the weighting scheme proposed by Mahmud et al, and the template splitting procedure described therein, are unfeasible in real 3D applications. In a similar attempt, Faucher et al. (2014) formulate a patch-based stochastic simulation as an unconstrained op- timization where the objective function has penalty terms for hard data and local-mean histograms. In this formula- tion, the weights appear directly in the objective function and are chosen under a set of simplifying assumptions. Despite the very good analysis, Faucher et al assump- tions may be considered too strong for arbitrary process- based training images and field data. Furthermore, there is no theoretical result that proves the existence of global weights for conditioning arbitrary random fields. Conditioning image quilting to hard data is particularly challenging as demonstrated by all previously published attempts. The raster path is suboptimal for this task as it does not sense the data ahead in the simulation domain. In the extreme case, the data is clustered near the end of the path and is invisible to the algorithm until the very last iteration. Tahmasebi et al. (2014) alleviate the raster path issue by incorporating data ahead of the path. The proposed solution comes with an extra unknown parame- ter, there called the “co-template”, that is not trivial to set, and yet determines the data conditioning performance. Co-templates add an unnecessary layer of complexity to grids with arbitrary landforms, and as it will be discussed in the next sections, there exists a much simpler and more effective solution. Besides the unknown weights for combining different variables and data defined over the domain, MPS simu- lation algorithms usually depend on a non-trivial list of input parameters that do not convey geological nor physi- cal understanding. In particular, the Efros-Freeman image quilting algorithm requires a window (or template) size for scanning the training image. The choice of this window can greatly affect the quality of the realizations and there is still no good criterion for its design. In this paper, we propose a systematic probabilistic pro- cedure for data aggregation in the original Efros-Freeman algorithm. Our proposed algorithm is faster than any other MPS simulation algorithm previously published, by- passes the ad-hoc weighting limitation, and produces vi- sually realistic images conditioned to data. The paper is organized as follows. In Section 2, we introduce a new method for data aggregation and other minor modifica- tions to the original Efros-Freeman algorithm to accom- modate hard data (e. g. wells). In Section 3, we apply the proposed algorithm to 2D process-based and 3D process- mimicking models with real-field complexity. In Section 4, we discuss the choice of the template size in image quilt- ing and introduce a novel criterion for template design. In Section 5, we conclude the work pointing to future research 2. Data aggregation in image quilting In this section, we introduce a new method for data aggregation in image quilting as an alternative to ad-hoc weighting. This method is introduced with auxiliary vari- ables and is extended later to conditioning with hard data. 2.1. Efros-Freeman algorithm The original Efros-Freeman image quilting for uncondi- tional simulation is illustrated in Figure 2. In iteration 1, a pattern “A” is randomly selected from the training image and placed in the top left corner of the simulation domain. In iteration 2, the sliding window leaves an over- lap region highlighted in red. This region is compared to all regions of equal size in the training image using an Eu- clidean distance as measure of similarity; the next pattern “B” is drawn at random from a uniform distribution over a set of candidates colored in red (e. g. the most similar pat- terns). The two patterns are stitched together by means of a cut that maximizes continuity Boykov and Jolly (2001); Boykov and Kolmogorov (2001); Kwatra et al. (2003). Af- ter the first row is filled, the second row is simulated sim- ilarly except that there are two overlap regions instead of one. Tile by tile the puzzle is solved. Resulting images and all the cuts performed along the path are shown in Figure 3. Figure 2: Efros-Freeman algorithm. Patches are extracted from the training image and pasted in the simulation domain in raster path order. A cut is performed in the overlap with the previously pasted patch to maximize continuity. Black pixels are copied from pattern A whereas white pixels are copied from pattern B. Figure 3: Image quilting realizations of two training images and their corresponding cut masks. Texture is reproduced in both examples. Template size for binary training image is 62 ×62 ×1 and template size for continuous training image is 48 ×48 ×1 in the example. 2.2. Incorporation of auxiliary variables Consider the setup of the problem in Figure 4 with the introduction of an auxiliary variable. A training image T I , an auxiliary variable AUX D defined over the simula- tion domain, and a forward operator G∗:T I →AUX T I are given. The goal is to generate multiple realizations that honor the relationship established by the auxiliary variables AUX D and AU X T I. The operator G∗approxi- mates the mapping Gused to generate the auxiliary vari- able AUXD.Gmay be a simple mathematical expression G=G(i, j, k) in terms of the spatial indices of the grid or may consist of a series of elaborated engineering workflows that produce a property cube over the domain of interest. Figure 4: Problem setup. Training image in the upper left is used to simulate the domain in the bottom left. An auxiliary variable AUXD is provided over the domain as well as a proxy G∗of the forward operator Gused to create AUXD. Our method for data aggregation is illustrated in 2D for clarity. We start by placing a small window in the simulation domain along any overlapping path (e. g. raster path). As illustrated in Figure 5, this placement defines a local variable AUXD(it, jt) for every location (it, jt) in the path. At a current location (it, jt), the local variable AUXD(it, jt) is compared to all local variables AU XT I (ip, jp) in the auxiliary training image. The subscript tin (it, jt) refers to the few tile locations in the simulation domain whereas the subscript pin (ip, jp) refers to the many pixel locations in the training image. Although there are as many variables AU XT I (ip, jp) as there are pixels (or voxels in 3D), these local comparisons are simple Euclidean distance calculations that can be implemented very efficiently with Fast Fourier Transforms (FFTs) and Graphics Processing Units (GPUs). Therefore, the auxiliary distances =kAUXD(it, jt)−AU X T I(ip, jp)k2 are computed with a convolution pass on the auxiliary training image, similar to the procedure introduced in the Figure 5: Proposed method (part I). Euclidean distance with “FFT trick” between current tile location (it, jt) in the domain and all pixel locations (ip, jp) in the training image. Pattern AUX D(it, jt) is compared to all patterns AU XT I (ip, jp) in a single pass. original Efros-Freeman algorithm for computing overlap =kDomain(it, jt)−T I (ip, jp)k2 at the location (it, jt). While Daux(p) is a distance be- tween -shaped (i. e. rectangular-shaped) auxiliary vari- ables, Dov(p) is a distance between L-shaped overlap re- In order to address unit and scaling issues, the dis- tances Dov(p) and Daux(p) are converted into ranks. For a training image with Npat patterns, ranks are permu- tations of the integers (1,2, . . . , Npat). A permutation (p1, p2, . . . , pNpat ) is a valid rank for the distance D(p) if D(pi)≤D(pj) for all 1 ≤i≤j≤Npat. Two such permu- tations exist, one for Dov(p) and another for Daux(p). In order to guarantee a smooth transition from the previous pattern simulated in the domain and the pattern being pasted, we introduce a tolerance for the overlap distance and use it to define an initial subset of Nbest best candi- date patterns according to the overlap information. Such tolerance is not a sensitive parameter of the algorithm and can be made arbitrarily small. In Figure 6 we illustrate the two ranks on the training image and the reduced set of Nbest Npat best candidate patterns based on the overlap Next, we introduce a relaxation technique whereby a subset of the Nbest best candidate patterns is selected. This subset S contains patterns that are in agreement with both the overlap information and the auxiliary variable defined at the location (it, jt). We define a chain of sets A1⊆A2⊆ · · · ⊆ Akwith Aifor i= 1,2, . . . , k containing the first Nibest candidate patterns according to the aux- iliary variable, N16= 0 and Nk=Npat. By denoting Othe Figure 6: Proposed method (part II). Ranking of patterns based on overlap and auxiliary distances followed by successive relaxation of auxiliary information. Given a tolerance, the best patterns are selected according to the overlap (e.g. 2,3,7,1) and the set is in- tersected with a growing set of patterns (e. g. 8,1,3,...) until the intersection is non-empty. set of Nbest best candidate patterns according to the over- lap, the relaxation technique consists of iterating ifrom 1 to kuntil the intersection Si=O∩Aiis non-empty. Let Sbe the first non-empty intersection. The patterns in Shave two ranks, one associated to Dov(p) and another associated to Daux(p). In order to draw a pattern at random we convert the ranks into prob- abilities with a simple linear transformation. The condi- tional probability of a pattern in Sgiven its overlap rank rov is given by P rob (pattern |rov )=(|S| − rov + 1)/kov (3) with |S|the cardinality of Sand kov a normalization con- stant. kov is the sum of |S| − rov +1 over all patterns in S. Similarly, the conditional probability of the same pattern given the auxiliary rank raux is given by P rob (pattern |raux )=(|S| − raux + 1)/kaux (4) These two probabilities are combined into P rob (pattern |rov , raux ) with the Tau model assum- ing no information redundancy (i. e. τ= 1). In Figure 7, all the patterns in Sare assigned a color representing their probability (e. g. |S|= 985). After a pattern is drawn, the entire procedure is repeated for the next location in the overlapping path. The relaxation technique can be applied to multiple aux- iliary variables. In this case, multiple chains A(c) · · · ⊆ A(c) kfor c= 1,2, . . . , Ncare run in parallel instead of one. The intersection Si=O∩A(1) i∩ · · · ∩ A(Nc) is guaranteed to be non-empty for some index iand the subset Sis defined as before. Taking intersections of large sets is a CPU demanding operation in general, however; we exploit the fact that the maximum rank possible for Figure 7: Proposed method (part III). Conditional probability of pasting a pattern given both overlap and auxiliary information com- puted from the Tau model over all patterns in the non-empty set obtained from relaxation. a pattern is Npat and implement a fast intersection al- gorithm for bounded sets with O(Npat) time complexity. In fact, the algorithm is a simple element-wise logical & (AND) comparison between two vectors of size Npat. In Figure 8, we compare the traditional weighting scheme with the proposed relaxation technique. Our method pro- duces realizations that honor the auxiliary variable with- out the specification of weights. Figure 8: Comparison of ad-hoc weighting and proposed method. Different weight configurations A, B and C leading to different con- ditioning results. Our method shown at the bottom left does not require specification of weights and produces the most likely out- comes given the data. Training image size: 400 ×400 ×1, Domain size: 300 ×260 ×1, Template size: 27 ×27 ×1. 2.3. Incorporation of hard data We apply the same relaxation technique to conditioning with hard data HD(it, jt). Besides the distance to the overlap and to the auxiliary variables, we define a distance =kHD(it, jt)−WT I (ip, jp)k2 to the point data that may exist at the current location (it, jt) in the simulation domain. In Equation 5, the matrix (or tensor in 3D) Wis a mask that is only active at the pixels with datum in HD(it, jt), and is the element-wise multiplication. The ranking induced by the hard data is combined with the other rankings through the same Tau model used for incorporating auxiliary variables. We introduce two additional modifications to the Efros- Freeman algorithm to increase the quality of the hard data match. The first modification is the replacement of the raster path by a data-first path illustrated in Figure 9. In this path, locations that have data are visited first and the rest of the simulation domain is filled outwards from the data using successive morphological dilations, a well known operation in image processing. We stress that this path is not related to the data-driven path described by Abdollahifard (2016), that was originally introduced by Criminisi et al. (2003). Figure 9: Data-first path. Tiles are first pasted where hard data exists and outwards until the entire domain is filled. The data-first path when applied together with the re- laxation technique leads to perfect match in most data configurations. There are still two scenarios in which data is not honored: 1) the data configuration is not present in the training image and 2) the configuration is present in the training image but not in Sdue to conflicting ranks. We propose a simple restoration of the data (i. e. we enforce values at hard data locations) at the end of the simula- tion in a post-processing step. Although this construction may introduce local discontinuities under very complex settings, it is effective with many realistic process-based training images. 3. Image quilting of deterministic process-based geological models In this section, we apply the proposed method with 2D process-based and 3D process-mimicking models. Four ap- plications of varying complexity are presented: 1) Stochas- tic simulation of meandering rivers constrained to thick- ness maps, 2) Spatial variability analysis with flume ex- periments as proposed by Scheidt et al. (2016,2015), 3) Subsurface modeling with moderately dense well configu- rations, and 4) Completion of buried valley models with SkyTEM and partial interpretation. Applications 1) and 2) serve to illustrate the efficiency of the relaxation technique on large 3D grids and with complex process-based training images, respectively. Ap- plication 3) highlights a known limitation of the method in the case where hard data is moderately dense. Finally, application 4) illustrates a real project in Denmark where both hard data and auxiliary variables are available. 3.1. Stochastic simulation of meandering rivers In this application, models of a meandering river gen- erated with the FLUMY software Lopez S., Cojan I., Rivoirard J. (2008); Lopez (2003b) are used as training images. Our goal is to assess the performance of the re- laxation technique with the Tau model on large 3D grids. We focus on a single training image with 200 ×300 ×45 cells and utilize the thickness of the basin as an auxiliary variable. This variable is introduced to minimize the ap- pearance of channels in areas of low sediment transport. In our method, the quality of the realizations is still a function of the template size, and because the choice of this parameter is complex, we discuss it in details in Section 4 where we propose a novel criterion for template design. By using this criterion, we select a template size of 49×49×14 and run IQSIM to obtain 50 realizations. In Figure 10, we observe that the thickness map constrains the placement of channels to the center of the basin as intended. However, we also observe illegitimate patterns near the boundary of the realizations caused by the arbitrary landform of the model. Artifacts like these can be easily pruned with a post-processing step for a specific geometry, but the prob- lem is still unsolved for arbitrarily shaped training images and simulation domains. Figure 10: Image quilting realizations of a meandering river. Real- izations conditioned to thickness map have channels in the center. Artifacts observed near the boundary of the basin. Training image size: 200 ×300 ×45, Domain size: 200 ×300 ×45, Template size: 49 ×49 ×14. A conditional simulation of the model is generated in 6 minutes on an integrated Intel R HD Graphics Skylake ULT GT2 GPU of a Dell XPS 13 laptop. Our algorithm and implementation are orders of magnitude faster than most (and probably all) other MPS simulation software in the literature. Besides the FFT on the GPU, we exploit the shape of the basin to save computation. For reference, alternative methods like SNESIM require many hours to handle grids of this size. 3.2. Spatial variability analysis with flume experiments In the flume experiment provided by the St. An- thony Falls Laboratory (http://www.safl.umn.edu), we are given 136 overhead shots of a delta. Our goal is to com- pare the spatial variability of the given snapshots with that of image quilting realizations. We rely on the definition of a distance between these 2D models in order to quantify variability. In this work, the modified Hausdorff distance Dubuisson and a.K. Jain (1994); Huttenlocher et al. (1993) is investigated that only takes into account the shape of geobodies deposited in the delta. We select a template size of 26 ×26 ×1 via the crite- rion discussed in Section 4 and run IQSIM with overhead shots constrained to two auxiliary variables as illustrated in Figure 11. Figure 11: Image quilting realizations of an overhead shot from the flume experiment with two auxiliary variables incorporated by pro- posed method. Training image size: 300 ×260 ×1, Domain size: 300 ×260 ×1, Template size: 26 ×26 ×1. The simulation is performed with 13 such snapshots (or training images) previously selected by clustering points in a multidimensional scaling projection Scheidt et al. (2015, 2016); Borg and Groenen (2005). For performance rea- sons, the modified Hausdorff distance is computed between point sets that represent the edges of the corresponding geobodies as illustrated in Figure 12. Because distances are ultimately computed between black & white images, we further run DS with the 13 intermediate binary images of the delta in order to compare the proposed algorithm with an existing software that requires fine parameter tun- Figure 12: Distance calculation between images. First, images are threshold to wet/dry binary images. Second, an edge filter is applied to produce a reduced set of points. Finally, the Modified Hausdorff distance is computed between the resulting point clouds. In Figure 13, we show the Q-Q plot between the dis- tribution of distances originated from the experiment and the distribution of distances artificially created with geo- statistics. Although the comparison of spatial variability with the modified Hausdorff distance is limited, we observe that both image quilting and direct sampling approximate the natural variability in the delta reasonably well. Out- lier images exist particularly in the upper tail, and most importantly, we observe that spatial variability is usually underestimated by geostatistical simulation. This under- estimation is caused by the many auxiliary variables and constraints imposed during simulation, and is depicted by the reduced interquartile range in the kernel density esti- mation plot in Figure 13. Figure 13: Comparison of natural variability present in the flume experiment with variability created by means of geostatistical simu- lation. Presence of outliers in the upper tail of the distribution. Un- derestimation of spatial variability depicted by reduced interquartile 3.3. Stochastic simulation with dense well configurations In this example, we assess the performance of the pro- posed method with moderately dense well configurations. The training image consists of channels generated with the Fluvsim software Deutsch and Tran (2002), and 9 vertical wells are placed with equal spacing in a domain of the same size as illustrated in Figure 14. Figure 14: Image quilting realizations of fluvial river channels con- ditioned to 9 vertical wells. Placement of channels illustrated on horizontal slices. Training image size: 250 ×250 ×100, Domain size: 250 ×250 ×100, Template size: 25 ×25 ×20. After selecting a template size of 25 ×25 ×20 via the criterion discussed in Section 4, we run image quilting and obtain 50 realizations. Three of these realizations are il- lustrated in Figure 14. We observe that channels are cor- rectly placed at the wells, but we also notice discontinuity in the generated patterns. This discontinuity is caused by the combination of the data-first path and the chosen tem- plate size, and can be quantified with various metrics as discussed in Renard and Allard (2013). We use the number and size of geobodies as metrics in Figure 15 to illustrate the difference in connectivity between the training image and the IQSIM realizations for this well configuration. Figure 15: Cummulative distribution of geobody size for a mod- erately dense well configuration. Positive skewed distributions for image quilting realizations indicate pattern discontinuity compared to the training image. Reducing the template size to accommodate the wells is a valid strategy, but it increases the computational time and can diminish the performance of the simulation to that of alternative methods. In Figure 16, we illustrate the ensemble average and variance of the 50 realizations. High average and low vari- ance at the well locations are guaranteed by design. Figure 16: Ensemble average and variance over 50 realizations. Channels placed where indicated in the wells and corresponding low 3.4. Completion of buried valleys with SkyTEM and par- tial interpretation A collection of buried valleys interpreted from SkyTEM measurements Sørense and Auken (2004) in Denmark is used to illustrate the application of our method in a case with real field complexity. In Figure 17, we show a sin- gle 3D model with 229 ×133 ×39 voxels interpreted by hydrologists that are working on mapping groundwater in the country Thomsen et al. (2004); Høyer et al. (2015). Figure 17: Single interpretation of buried valleys from SkyTEM mea- surements. Resulting model has three categories: 0) sand & gravel— Quaternary meltwater sand and sand till, Miocene sand, and Qua- ternary buried valleys infilled with sand, 1) coarse clay—Quaternary clay till, meltwater clay and buried valleys infilled with clay and clay till, and 2) hemipelagic clay—Hemipelagic, fine grained Paleogene and Oligocene clays. To test our method in this real field case, we propose an experiment in which we assume that half of the in- terpretation is unavailable. In the first case, we use the patterns in the left half of the model to simulate the right half “L→R”. In the second case, we revert the setup “R→L” as illustrated in Figure 18. In this experiment, we have hard data conditioning— the known half of the interpretation—and the SkyTEM measurements as an auxiliary variable. For each case, we generate 50 realizations with a template size of 49×49×18. Figure 18: Experiment setup. Half of the interpretation is discarded and then simulated with image quilting. The known half is used as hard data and the SkyTEM measurements are incorporated as an auxiliary variable. Realizations of the valleys are shown in Figure 19 for the setup “L→R”. Figure 19: Image quilting realizations of buried valleys conditioned to SkyTEM measurements and known half of the basin. Training image size: 229 ×133 ×39, Domain size: 229 ×133 ×39, Template size: 49 ×49 ×18. In Figure 20, we show the average of indicator variables (a probability) defined for the first two categories of the training image—sand & gravel and coarse clay. The third category corresponding to the background red color— hemipelagic clay—is omitted. We observe that many geo- bodies are correctly recovered from the SkyTEM data, but that a limited number of patterns in the training image can only approximate the other half of the most likely inter- For the case “L→R”, we run SNESIM with a set of tuned parameters. Similar to the comparison of IQSIM and DS in the 2D flume experiment, we want to empha- size that our method does not require fine parameter tun- ing for producing decent results. In Figure 21, we illustrate the distribution of modified Hausdorff distances per cat- egory computed between each of the 50 realizations and the most likely interpretation from SkyTEM. The distri- bution obtained with the two methods is compared on a per-category basis. Image quilting realizations present lower distances in Figure 20: Ensemble average of indicator variables for categories 1 and 2. Single 3D model interpreted from SkyTEM illustrated in the first column for reference. Figure 21: Distance-per-category between geostatistical realizations and single 3D model interpreted from SkyTEM. Image quilting (IQSIM) presents lower distances in distribution than single normal equation simulation (SNESIM). distribution and better reproduce the texture of the train- ing image. For this specific setup, a single realization is generated in 3 minutes with IQSIM on an Intel R Graphics Skylake ULT GT2 GPU versus 30 minutes with SNESIM on an Intel R CoreTM i7-6500U CPU. For com- pleteness, another realization is generated in 5 minutes with IQSIM on the same CPU. 4. Criterion for template design In this section, we introduce a novel criterion for choos- ing template configurations in image quilting. We start by motivating the criterion with a simple example in 2D where we compare image quilting realizations of two dif- ferent training images. Next, we state the proposed cri- terion as an optimization problem and derive an efficient approximation that is solved in low CPU time. Finally, we compare the criterion with the traditional entropy plot and assess its robustness with basic checks and well-known training images. In Figure 22 and Figure 23, we illustrate a few image quilting realizations of 2D training images with different template configurations. In this example, template config- urations are squares of the form (T, T, 1) with Tthe tem- plate size in pixels. We observe that different template sizes lead to different texture in the realizations. For the channelized training image, increasing the template size from T= 12 to T= 63 improves the results, whereas for the Gaussian training image, the improvement is obtained by decreasing from T= 82 to T= 32. Figure 22: Image quilting realizations of Strebelle training image. Texture reproduction improves by increasing template size. Figure 23: Image quilting realizations of Gaussian training image. Texture reproduction improves by decreasing template size. The interesting observation is that a measure for tem- plate selection based on a monotonically increasing mea- sure (e. g. entropy Tahmasebi and Sahimi (2012); Journel and Deutsch (1993); Honarkhah and Caers (2010)) is sub- optimal. We propose a function inspired by the principle of minimum energy from thermodynamics. This princi- ple can be rephrased in the context of image quilting as A good image quilting simulation pastes patterns sequentially without overwriting what was already pasted in previous iterations. The motivation for this principle is better understood by considering the boundary cuts in Figure 24. According to the principle of minimum energy (or overwrite), the quilt- ing algorithm should be designed to maximize the number of black pixels in the overlap region, which is only invaded by white pixels when there is misalignment of the pattern coming from the training image and the patterns already pasted along the overlapping path. Figure 24: Zoom in 2D boundary cut mask. Voxel reuse defined as the number of black pixels divided by overlap area. Definition (voxel reuse).The voxel (or pixel in 2D) reuse V∈[0,1] of an image quilting realization is the number of black voxels in the boundary cut divided by the total number of voxels in the overlap region. For a fixed template size to overlap ratio (e.g. 6 ÷1), the voxel reuse is a function of the template size V(T). We seek its maximum, or alternatively, the minimum over- write defined as the complement 1 − V(T). Because the function is stochastic we formally state the optimization in terms of mean voxel reuse: T∗= arg max E[V(T)] (6) We argue that, given a set of image quilting realizations generated with template size Tand their corresponding boundary cuts, the number E[V(T)] ∈[0,1] is a measure of texture reproduction. Consequently, the multiple op- tima T∗are also the solution to the template design prob- lem. In Figure 25, we illustrate the mean voxel reuse as a function of the template size for a few training images in our library. We observe that the mean voxel reuse gener- alizes the Shannon entropy to continuous training images. The plots in Figure 25 were generated by brute force: for each template size Twe generated 10 unconditional im- age quilting realizations with the same size of the training image and averaged the voxel reuse. However, an estimate of mean voxel reuse does not require full simulation, only a few boundary cuts performed with the training image. We derive a fast approximation with the notion of elementary overlapping paths as follows. Given any 3D template configuration (Tx, Ty, Tz), the most simple path that exhibits all overlap combinations has 2 ×2×2 tiles (or blocks), it is shown in Figure 26. For the vast majority of the lookups in the training im- age that consider the overlaps x,yand zseparately, there exists a perfect pattern match. We can assume no over- write E[Vx ] = E[Vy ] = E[Vz ] = 1 and conclude that these boundary cuts are irrelevant to the estimate of the mean voxel reuse. On the other hand, the combinations xy,xz, yz and xyz, at which misalignment is likely to happen, contain valuable information (e. g. E[Vxy ] is a function of the texture). Figure 25: Mean voxel reuse (solid line) and standard deviation (colored area) for a few training images in our library. Generalization of Shannon entropy (dashed line) to continuous training images. Figure 26: Elementary overlapping path. 2 ×2×2 tiles stitched We consider the average over a few Nelementary over- lapping paths (i. e. 2×2×2 tiles) in Equation 7 and discuss the implications of using this average instead of averaging full image quilting realizations. V, k (7) The voxel reuse of an elementary overlapping path can be decomposed into its different overlap combinations: +· · · +fxyz Vxyz where fcis the fraction of the overlap volume associated to the combination c∈C={x, y, z, xy, xz, yz, xyz}. Denote (Tx, Ty, Tz) the template size and (ox, oy, oz) the overlap. There are (2Tx−ox)×(2Ty−oy)×(2Tz−oz) voxels in the path or nx×ny×nzfor short. We can write fractions of the overlap volume Vov in terms of these geometrical parameters, for example: Thus, the terms in the expansion V=Pc∈CfcVc troduced in Equation 8 are a product of geometric factors fctimes texture terms Vc . The mean voxel reuse is given E[V] = X c∈{xy,xz,yz,xyz } We first consider the 2D case where we have E[V] = fx+fy+fxy E[Vxy ]. If instead of 2×2 tiles we had mx×my tiles in the path, the derived expression would be with the variable ilooping over all tiles for which both cuts in xand yare performed. Equation 11 can be further simplified to E[V]=(mx−1)fx+(my−1)fy+(mx−1)(my−1)fxy E[Vxy if we assume that the texture is the same everywhere in the training image (i. e. 1st-order stationary random process assumption). Notice that the fractions fcare a function of the number of tiles mx×myin the realization, but are not a function of the template size (Tx, Ty). Equation 12 can be rewritten in a simpler form E[V] = a0+a1E[Vxy with a0and a1functions of the overlapping path size. The effects of a0and a1in the mean voxel reuse plot are vertical shift and scaling, respectively. These oper- ations do not affect the locations of the maxima T∗= arg max TE[V(T)] and this proves that the use of ele- mentary overlapping paths for template design of 2D sta- tionary random processes is error-free. Although we do not prove the result for non-stationary random processes where boundary cuts are also a function of space, we ex- pect the error to be very low in practice. This approximation with elementary overlapping paths cannot be extended to 3D random processes without errors in general. By following a similar derivation we can write E[V] = a0+a1E[Vxy ]+ a2E[Vxz ]+ · · · +a4E[Vxyz ] (13) which is the equation of a hyperplane defined by the nor- mal vector (a1, a2, a3, a4)∈R4 +. This vector is a function of (mx, my, mz) and there are counter-examples where the maxima T∗is altered by the overlapping path size. If besides stationarity we assume that the training image is isotropic (i. e. statistics do not vary with direction), we have E[Vxy ] = E[Vxz ] = E[Vyz ] = E[Vxyz ] = E[V= ] and the approximation E[V] = a0+ (a1+a2+a3+a4)E[V= is error-free again. We emphasize that the mean voxel reuse criterion is a function of both the training image and the quilting algo- rithm itself. To our knowledge, there is no other criterion with such property in the literature. In order to assess the robustness of the criterion, we perform a few basic checks with overhead shots of the flume experiment. The first check consists of plotting the mean voxel reuse for different times of the experiment. In Figure 27, we observe that the function is preserved across time with very small fluctuations. This result matches our expecta- tion given that this is an autogenic deltaic system without external forcing that could alter the texture. Figure 27: Mean voxel reuse for different overhead shots of the flume experiment. All curves match except for small fluctuations. The second and last check consists of choosing a few template sizes Thand Tlfor which the mean voxel reuse is high and low, respectively. The criterion states that Thleads to good texture reproduction in image quilting, whereas Tldoes not. In Figure 28, we illustrate the mean voxel reuse and optimum template ranges for the Strebelle and Gaussian training images. Figure 22 was generated with Th= 63 and Tl= 12, and Figure 23 was generated with Th= 32 and Tl= 82. Figure 28: Mean voxel reuse for Strebelle and Gaussian training im- ages with ascending and descending trends, respectively. Optimum range for template size depicted in horizontal axis. 5. Conclusions In this work, we proposed a systematic probabilistic pro- cedure for data aggregation in MPS simulation. We imple- mented the procedure within image quilting and tested it on 2D process-based and 3D process-mimicking geological models. Our results show that the procedure is fast, dis- penses fine parameter tuning, and produces realistically- looking realizations conditioned to auxiliary variables and hard data. We introduced a novel criterion for template design that generalizes the Shannon entropy to continuous training im- ages. The criterion is based on the concept of voxel reuse and is the first in the literature that is quilting-aware. We proposed an efficient approximation of the mean voxel reuse and proved that it is error-free under stationary as- sumptions. We recognized artifacts in the image quilting realizations caused by complex landforms in 3D. These artifacts call for a better representation of incomplete pat- terns in the training image and should be seen as a current defect of the algorithm. Another limitation that deserves attention is that of suboptimal texture reproduction with dense hard data configurations. Our method can work with dense configurations, but may lead to suboptimal texture reproduction if speed is to be maintained. Future developments should be concentrated on these two fronts. Another important issue that is not addressed in this work is that of data uncertainty. We assumed that both hard and soft data are free of errors. For applications where measurement errors are large, the proposed algo- rithm, like most other stochastic simulation algorithms mentioned in the paper, is not appropriate. The accompanying software was made available as a Ju- lia package. Documentation can be found online including examples of use and instructions for fast simulation with GPUs: https://github.com/juliohm/ImageQuilting. We thank CAPES and SCRF at Stanford University for funding this research. We also thank Anjali Fernandes and Chris Paola for providing data and insight on flume experiments, Marco Pontiggia and Andrea Da Pra for giv- ing feedback on the software. Abdollahifard, M.J., 2016. Fast multiple-point simulation using a data-driven path and an efficient gradient-based search. Comput- ers & Geosciences 86, 64–74. URL: http://dx.doi.org/10.1016/ j.cageo.2015.10.010, doi:10.1016/j.cageo.2015.10.010. Allard, D., Comunian, A., Renard, P., 2012. Probability Aggregation Methods in Geoscience. doi:10.1007/s11004-012-9396- 3. Arpat, G.B., Caers, J., 2007. Conditional simulation with pat- terns. Mathematical Geology 39, 177–203. doi:10.1007/ s11004-006- 9075-3. Borg, I., Groenen, P.J., 2005. Modern Multidimen- sional Scaling. Learning 40, 637. URL: http: and+methods/book/978-0- 387-98134- 5?cm{_}mmc= AD-{_}- Enews-{_}- ECS12245{_}V1-{_}-978- 0-387-98134- 5, doi:10.1007/0-387- 28981-X,arXiv:arXiv:1011.1669v3. Boykov, Y., Kolmogorov, V., 2001. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vi- sion, in: Lecture Notes in Computer Science (including sub- series Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 359–374. doi:10.1007/3- 540-44745- 8_24, Boykov, Y.Y., Jolly, M.P., 2001. Interactive graph cuts for optimal boundary & region segmentation of objects in ND images, in: Pro- ceedings Eighth IEEE International Conference on Computer Vi- sion. ICCV 2001, pp. 105—-112. doi:10.1109/ICCV.2001.937505. Briere, C., Giardino, A., Werf, J.J.V.D., 2004. Morphological Mod- elling Of Bar Dynamics With Delft3d: The Quest For Optimal Free Parameter Settings Using An Automatic Calibration Tech- nique. Coastal Engineering 2010 , 1–12doi:10.9753/icce.v32. Chugunova, T.L., Hu, L.Y., 2008. Multiple-Point simulations con- strained by continuous auxiliary data. Mathematical Geosciences 40, 133–146. doi:10.1007/s11004- 007-9142-4. Criminisi, A., Perez, P., Toyama, K., 2003. Object re- moval by exemplar-based inpainting, in: Proc. IEEE Com- puter Vision and Pattern Recognition (CVPR). URL: object-removal- by-exemplar- based-inpainting/. Deutsch, C.V., Tran, T.T., 2002. FLUVSIM: A program for object- based stochastic modeling of fluvial depositional systems. Com- puters and Geosciences 28, 525–535. doi:10.1016/S0098- 3004(01) Dubuisson, M.P., a.K. Jain, 1994. A modified Hausdorff distance for object matching. Proceedings of 12th International Conference on Pattern Recognition 1, 566–568. doi:10.1109/ICPR.1994.576361. Efros, A., Freeman, W., 2001. Image Quilting for Texture Synthesis and Transfer. Proceedings of the 28th annual conference on Com- puter graphics and interactive techniques , 1–6URL: http://dl. acm.org/citation.cfm?id=383296, doi:10.1145/383259.383296. El Ouassini, A., Saucier, A., Marcotte, D., Favis, B.D., 2008. A patchwork approach to stochastic simulation: A route towards the analysis of morphology in multiphase systems. Chaos, Solitons and Fractals 36, 418–436. doi:10.1016/j.chaos.2006.06.100. Elias, E.P.L., Walstra, D.J.R., Roelvink, J.a., Stive, M.J.F., Klein, M.D., 2001. Hydrodynamic Validation of Delft3D with Field Measurements at Egmond. Coastal Engineering 2000 40549, 2714–2727. URL: http://ascelibrary.org/doi/abs/10.1061/ 40549(276)212, doi:10.1061/40549(276)212. Faucher, C., Saucier, A., Marcotte, D., 2014. Corrective pattern-matching simulation with controlled local-mean his- togram. Stochastic Environmental Research and Risk Assessment 28, 2027–2050. doi:10.1007/s00477- 014-0864-9. Giri, S., Vuren, S.V., Ottevanger, W., Sloff, K., 2008. A preliminary analysis of bedform evolution in the Waal during 2002- 2003 flood event using Delft3D. Marine and River Dune Dynamics , 141–148. Honarkhah, M., Caers, J., 2010. Stochastic simulation of patterns using distance-based pattern modeling. Mathematical Geosciences 42, 487–517. doi:10.1007/s11004- 010-9276-7. Høyer, A.S., Jørgensen, F., Sandersen, P.B.E., Viezzoli, A., Møller, I., 2015. 3D geological modelling of a complex buried-valley net- work delineated from borehole and AEM data. Journal of Applied Geophysics 122, 94–102. doi:10.1016/j.jappgeo.2015.09.004. Huttenlocher, D.P., Klanderman, G.A., Rucklidge, W.J., 1993. Com- paring Images Using the Hausdorff Distance. IEEE Transac- tions on Pattern Analysis and Machine Intelligence 15, 850–863. Journel, A.G., 2002. Combining knowledge from diverse sources: An alternative to traditional data independence hypotheses. Mathe- matical Geology 34, 573–596. doi:10.1023/A:1016047012594. Journel, A.G., Deutsch, C.V., 1993. Entropy and spatial disorder. Mathematical Geology 25, 329–355. doi:10.1007/BF00901422. Kim, W., Sheets, B.A., Paola, C., 2010. Steering of experimental channels by lateral basin tilting. Basin Research 22, 286–301. Kwatra, V., Schodl, A., Essa, I., Turk, G., Bobick, A., 2003. Graphcut textures: Image and video synthesis using graph cuts. ACM Transactions on Graphics 22, 277–286. doi:10.1145/882262. Lesser, G.R., Roelvink, J.A., van Kester, J.A.T.M., Stelling, G.S., 2004. Development and validation of a three-dimensional morpho- logical model. Coastal Engineering 51, 883–915. doi:10.1016/j. Lopez, S., 2003a. Channelized Reservoir Modeling: a Stochastic Process-based Approach. Theses. {´ E}cole Nationale Sup{´e}rieure des Mines de Paris. URL: https://pastel.archives-ouvertes. Lopez, S., 2003b. Mod´elisation de r´eservoirs chenalis´es m´eandriformes : une approche g´en´etique et stochastique. Ph.D. thesis. Centre de Geostatistique. Lopez S., Cojan I., Rivoirard J., G.A., 2008. Process-based stochastic modelling: meandering channelized reservoirs. Spec. Publ. Int. Assoc. Sedimentol. 40, 139:144. Maharaja, A., 2008. TiGenerator: Object-based training image gen- erator. Computers and Geosciences 34, 1753–1761. doi:10.1016/ Mahmud, K., Mariethoz, G., Caers, J., Tahmasebi, P., Baker, A., 2014. Simulation of Earth textures by conditional image quilt- ing. Water Resources Research 50, 3088–3107. doi:10.1002/ Mariethoz, G., Caers, J., 2014. Multiple-point Geostatis- tics: Stochastic Modeling with Training Images. doi:10.1002/ Mariethoz, G., Renard, P., Straubhaar, J., 2010. The direct sampling method to perform multiple-point geostatistical simulations. Wa- ter Resources Research 46. doi:10.1029/2008WR007621. Matheron, G., 1963. Principles of geostatistics. Economic Geology 58, 1246–1266. doi:10.2113/gsecongeo.58.8.1246. Paola, C., 2000. Quantitative models of sedimentary basin filling. Paola, C., Straub, K., Mohrig, D., Reinhardt, L., 2009. The ”unreasonable effectiveness” of stratigraphic and geomorphic ex- periments. Earth-Science Reviews 97, 1–43. URL: http:// dx.doi.org/10.1016/j.earscirev.2009.05.003, doi:10.1016/j. Paola, C., Twilley, R.R., Edmonds, D.A., Kim, W., Mohrig, D., Parker, G., Viparelli, E., Voller, V.R., 2011. Natural processes in delta restoration: application to the Mississippi Delta. Ann Rev Mar Sci 3, 67–91. URL: http://www.ncbi.nlm.nih.gov/pubmed/ 21329199, doi:10.1146/annurev-marine- 120709-142856. Renard, P., Allard, D., 2013. Connectivity metrics for subsurface flow and transport. Advances in Water Resources 51, 168–196. URL: http://dx.doi.org/10.1016/j.advwatres.2011.12.001, doi:10. Scheidt, C., Fernandes, A.M., Paola, C., Caers, J., 2015. Can Geo- statistical Models Represent Nature’s Variability? An Analysis Using Flume Experiments. AGU Fall Meeting Abstracts . Scheidt, C., Fernandes, A.M., Paola, C., Caers, J., 2016. Quanti- fying natural delta variability using a multiple-pointgeostatistics prior uncertainty model. Journal of Geophysical Research: Earth Surface , 1–19doi:10.1002/2016JF003922.Received. Sørense, K., Auken, E., 2004. SkyTEM ? a new high-resolution helicopter transient electromagnetic system. Exploration Geo- physics 35, 194. URL: https://doi.org/10.1071%2Feg04194, Straub, K.M., Paola, C., Mohrig, D., Wolinsky, M.a., George, T., 2009. Compensational Stacking of Channelized Sedimen- tary Deposits. Journal of Sedimentary Research 79, 673–688. Strebelle, S., 2002. Conditional simulation of complex geological structures using multiple-point statistics. Mathematical Geology 34, 1–21. doi:10.1023/A:1014009426274. Tahmasebi, P., Hezarkhani, A., Sahimi, M., 2012. Multiple- point geostatistical modeling based on the cross-correlation func- tions. Computational Geosciences 16, 779–797. doi:10.1007/ s10596-012- 9287-1. Tahmasebi, P., Sahimi, M., 2012. Reconstruction of three- dimensional porous media using a single thin section. Physical Review E 85, 1–13. doi:10.1103/PhysRevE.85.066709. Tahmasebi, P., Sahimi, M., Caers, J., 2014. MS-CCSIM: Accelerat- ing pattern-based geostatistical simulation of categorical variables using a multi-scale search in fourier space. Computers and Geo- sciences 67, 75–88. doi:10.1016/j.cageo.2014.03.009. Tal, M., Paola, C., 2010. Effects of vegetation on channel morphody- namics: Results and insights from laboratory experiments. Earth Surface Processes and Landforms 35, 1014–1028. doi:10.1002/ Thomsen, R., Søndergaard, V.H., Sørensen, K.I., 2004. Hydrogeolog- ical mapping as a basis for establishing site-specific groundwater protection zones in Denmark. Hydrogeology Journal 12, 550–562. doi:10.1007/s10040-004- 0345-1. Xu, S., 2014. Integration of Geomorphic Experiment Data in Surface- Based Modeling : From Characterization To Simulation . Yang, L., Hou, W., Cui, C., Cui, J., 2016. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimiza- tion. Computers and Geosciences 89, 57–70. doi:10.1016/j. Zhang, T., Du, Y., Huang, T., Yang, J., Li, X., 2015. Stochastic simulation of patterns using ISOMAP for dimensionality reduc- tion of training images. Computers and Geosciences 79, 82–93. Zhang, T., Switzer, P., Journel, A., 2006. Filter-based classification of training image patterns for spatial simulation. Mathematical Geology 38, 63–80. doi:10.1007/s11004- 005-9004-x.
{"url":"https://www.researchgate.net/publication/317151543_Stochastic_Simulation_by_Image_Quilting_of_Process-based_Geological_Models","timestamp":"2024-11-14T12:24:33Z","content_type":"text/html","content_length":"870893","record_id":"<urn:uuid:3b1cb22c-4cd5-4e7f-a319-b20c2d238ea2>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00508.warc.gz"}
Rate Of Change (Level 2 Word Problems) Quiz Online (7.G.B.5) : 7th grade Math #8 of 8: Medium Rate of change (Level 2 word problems) <p>A hiker is at an elevation of `-800` feet at the lowest point of their hike. After hiking up the mountain for `2` hours, their final elevation is `5800` feet. </p> <p>What was the average change in elevation every hour?</p> #7 of 8: Medium Rate of change (Level 2 word problems) <p>The average temperature in a town was `8`°C at `1` pm. After `5` hours, the average temperature is `-2`°C. </p> <p>What was the average rate at which the temperature dropped every hour?</p> #6 of 8: Medium Rate of change (Level 2 word problems) <p>At 8 AM, the temperature in a city was `-2`°C. Due to a cold front, the temperature in the city dropped to `-10`°C after `4` hours. </p> <p>What was the rate at which the temperature dropped per #5 of 8: Medium Rate of change (Level 2 word problems) <p>The account balance of Ashley is `-$250` on 1st January 2021. Her bank's final balance was `-$600` after spending for `6` months. <p> </p>At what rate was Ashley spending per month? </p> #4 of 8: Mild Rate of change (Level 2 word problems) <p>While cliff jumping, Harris jumped from a height of `15` feet. After `5` seconds, he reached an elevation of `-10` feet. </p> <p>What was the average rate of change in Harris' elevation per #3 of 8: Mild Rate of change (Level 2 word problems) <p>Shawn took off in his helicopter from an elevation of `-75` meters and ascended `255` meters in half an hour. </p> <p>What was the average rate of change in the helicopter's elevation per hour?</ #2 of 8: Spicy Rate of change (Level 2 word problems) <p>A scuba diver's lowest elevation is `-143` feet. After ascending for `2.3` minutes, the final elevation was `-28` feet. </p> <p>What was the average change in the scuba diver's elevation every #1 of 8: Spicy Rate of change (Level 2 word problems) <p>A submarine was at an elevation of `-394` feet. Its elevation becomes `-634` feet after descending for `40` seconds. </p> <p>What was the average rate at which the submarine was moving every We can divide the rate of change as the total amount of change of one item divided by the total amount of change of the other item. In simple terms, we can say that the rate of change is the rate at which one quantity changes with respect to the change in the other quantity. The concepts like the distance travelled by a car in an hour can be calculated using the rate of change. Students can easily solve rate of change (Level 2 word problems) using this quiz. Share this amazing rate of change quiz with your students so that they can learn in detail about the rate of ... Show all
{"url":"https://www.bytelearn.com/math-grade-7/quiz/rate-of-change-level-2-word-problems","timestamp":"2024-11-12T09:32:15Z","content_type":"text/html","content_length":"220041","record_id":"<urn:uuid:2c5a3fc5-8952-453c-85e5-be57a6d3f3e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00509.warc.gz"}
The modeling of areas susceptible to soil loss due to hydro-erosive processes consists of methods that simplify reality to predict future behavior based on the observation and interaction of a set of geoenvironmental factors. Thus, the objective of the current analysis is to predict susceptibility to soil loss and map areas with the potential risk of erosion using the principles of Binary Logistic Regression (BLR) and Artificial Neural Networks (ANN). The hydrographic sub-basin of the Sete Voltas River (330 km2), Rondônia, Brazil, was defined as the experimental area. Models were obtained using 100 sample units and 14 predictor parameters. Susceptibility was mapped based on five reference classes: very low, low, moderate, high, and very high. ANN obtained an area under the curve (AUC) of 0.808 and global precision of 79.2%, and the BLR model showed an AUC of 0.888 and global precision of 77%. Potentially susceptible areas represent 57.71% and 54.80% of the area for BLR and ANN models, respectively. The greatest potential risks are verified in places with no vegetation cover associated with agricultural practices. The technique proved to be effective, with adequate precision and the advantage of being less time-consuming and expensive than other methods. Binary Logistic Regression; Artificial Neural Network; Erosion Susceptibility
{"url":"https://www.scielo.br/j/mercator/a/FVgtk44wfxvWdSfZw75sRzt/abstract/?lang=en","timestamp":"2024-11-05T16:47:42Z","content_type":"text/html","content_length":"107627","record_id":"<urn:uuid:2107a298-0920-4b1c-bb97-212d6ebe8469>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00636.warc.gz"}
Convexity by Barry Simon To order from Cambridge University Press Table of Contents 1. Convex functions and sets 2. Orlicz spaces 3. Gauges and locally convex spaces 4. Separation theorems 5. Duality: dual topologies, bipolar sets, and Legendre transforms 6. Monotone and convex matrix functions 7. Loewner's theorem: a first proof 8. Extreme points and the Krein–Milman theorem 9. The strong Krein–Milman theorem 10. Choquet theory: existence 11. Choquet theory: uniqueness 12. Complex interpolation 13. The Brunn–Minkowski inequalities and log concave functions 14. Rearrangement inequalities: a) Brascamp–Lieb–Luttinger inequalities 15. Rearrangement inequalities: b) Majorization 16. The relative entropy 17. Notes Author index Subject index Sample Chapter
{"url":"http://www.math.caltech.edu/Convexity.html","timestamp":"2024-11-14T13:09:43Z","content_type":"text/html","content_length":"3415","record_id":"<urn:uuid:8ffff257-0ba8-402c-a85e-1c6878b35071>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00199.warc.gz"}
USACO 2016 December Contest, Platinum Problem 2. Team Building Contest has ended. Log in to allow submissions in analysis mode Every year, Farmer John brings his $N$ cows to compete for "best in show" at the state fair. His arch-rival, Farmer Paul, brings his $M$ cows to compete as well ($1 \leq N \leq 1000, 1 \leq M \leq Each of the $N + M$ cows at the event receive an individual integer score. However, the final competition this year will be determined based on teams of $K$ cows ($1 \leq K \leq 10$), as follows: Farmer John and Farmer Paul both select teams of $K$ of their respective cows to compete. The cows on these two teams are then paired off: the highest-scoring cow on FJ's team is paired with the highest-scoring cow on FP's team, the second-highest-scoring cow on FJ's team is paired with the second-highest-scoring cow on FP's team, and so on. FJ wins if in each of these pairs, his cow has the higher score. Please help FJ count the number of different ways he and FP can choose their teams such that FJ will win the contest. That is, each distinct pair (set of $K$ cows for FJ, set of $K$ cows for FP) where FJ wins should be counted. Print your answer modulo 1,000,000,009. INPUT FORMAT (file team.in): The first line of input contains $N$, $M$, and $K$. The value of $K$ will be no larger than $N$ or $M$. The next line contains the $N$ scores of FJ's cows. The final line contains the $M$ scores of FP's cows. OUTPUT FORMAT (file team.out): Print the number of ways FJ and FP can pick teams such that FJ wins, modulo 1,000,000,009. Problem credits: Brian Dean and William Luo Contest has ended. No further submissions allowed.
{"url":"https://usaco.org/index.php?page=viewproblem2&cpid=673","timestamp":"2024-11-07T10:21:04Z","content_type":"text/html","content_length":"8679","record_id":"<urn:uuid:52a13e44-0451-472a-a791-5b5a767a7c66>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00346.warc.gz"}
Design and description of the MUSICA IASI full retrieval product Articles | Volume 14, issue 2 © Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License. Design and description of the MUSICA IASI full retrieval product IASI (Infrared Atmospheric Sounding Interferometer) is the core instrument of the currently three Metop (Meteorological operational) satellites of EUMETSAT (European Organization for the Exploitation of Meteorological Satellites). The MUSICA IASI processing has been developed in the framework of the European Research Council project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water). The processor performs an optimal estimation of the vertical distributions of water vapour (H[2]O), the ratio between two water vapour isotopologues (the $\mathrm{HDO}/{\mathrm{H}}_{\mathrm{2}}\mathrm{O}$ ratio), nitrous oxide (N[2]O), methane (CH[4]), and nitric acid (HNO[3]) and works with IASI radiances measured under cloud-free conditions in the spectral window between 1190 and 1400cm^−1. The retrieval of the trace gas profiles is performed on a logarithmic scale, which allows the constraint and the analytic treatment of ln[HDO]−ln[H [2]O] as a proxy for the $\mathrm{HDO}/{\mathrm{H}}_{\mathrm{2}}\mathrm{O}$ ratio. Currently, the MUSICA IASI processing has been applied to all IASI measurements available between October 2014 and June 2021 and about two billion individual retrievals have been performed. Here we describe the MUSICA IASI full retrieval product data set. The data set is made available in the form of netCDF data files that are compliant with version 1.7 of the CF (Climate and Forecast) metadata convention. For each individual retrieval these files contain information on the a priori usage and constraint, the retrieved atmospheric trace gas and temperature profiles, profiles of the leading error components, and information on vertical representativeness in the form of the averaging kernels as well as averaging kernel metrics, which are more handy than the full kernels. We discuss data filtering options and give examples of the high horizontal and continuous temporal coverage of the MUSICA IASI data products. For each orbit an individual standard output data file is provided with comprehensive information for each individual retrieval, resulting in a rather large data volume (about 40TB for the almost 7 years of data with global daily coverage). This, at a first glance, apparent drawback of large data files and data volume is counterbalanced by multiple possibilities of data reuse, which are briefly discussed. Examples of standard data output files and a README .pdf file informing users about access to the total data set are provided via https://doi.org/10.35097/408 (Schneider et al., 2021b). In addition, an extended output data file is made available via https://doi.org/10.35097/412 (Schneider et al., 2021a). It contains the same variables as the standard output files together with Jacobians (and spectral responses) for many different uncertainty sources and gain matrices (due to this additional variables it is called the extended output). We use these additional data for assessing the typical impact of different uncertainty sources – like surface emissivity or spectroscopic parameters – and different cloud types on the retrieval results. The extended output file is limited to 74 example observations (over a polar, mid-latitudinal, and tropical site); its data volume is only 73MB, and it is thus recommended to users for having a quick look at the data. Received: 02 Mar 2021 – Discussion started: 12 Apr 2021 – Revised: 29 Nov 2021 – Accepted: 01 Dec 2021 – Published: 18 Feb 2022 The IASI (Infrared Atmospheric Sounding Interferometer, a thermal nadir sensor; Blumstein et al., 2004) instrument aboard the Metop (Meteorological Operational) satellites presents possibilities for measuring a large variety of different atmospheric trace gases (e.g. Clerbaux et al., 2009) with daily global coverage. Because each Metop is an operational EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) satellite, IASI measurements offer excellent global daily coverage and a sustained long-term perspective (measurements of IASI and IASI successor instruments are guaranteed between 2006 and the 2040s). This provides unique opportunities for consistent long-term observations and climate research. In addition to humidity and temperature profiles (which are the meteorological core products; August et al., 2012), IASI can detect, for instance, atmospheric ozone (O[3]; e.g. Keim et al., 2009; Boynard et al., 2018), carbon monoxide (CO; e.g. De Wachter et al., 2012), nitric acid (HNO[3]; Ronsmans et al., 2016), nitrous oxide and methane (N[2]O and CH[4]; De Wachter et al., 2017; Siddans et al., 2017; García et al., 2018), the ratio between different water vapour isotopologues (Schneider and Hase, 2011; Lacour et al., 2012), and different volatile organic compounds (Franco et al., These diverse opportunities of IASI together with the good horizontal and daily coverage result in a large number of IASI products generated in the context of often computationally expensive retrievals. In order to ensure the ultimate benefit from these efforts, the generated data should be FAIR (e.g. Wilkinson et al., 2016): findable, accessible, interoperable, and reusable. During the European Research Council project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water, from 2011 to 2016) we developed at the Karlsruhe Institute of Technology a processor for the analysis of the thermal nadir spectra of IASI. Here we present the MUSICA IASI trace gas processing output, which encompasses vertical profiles of H[2]O, δ D ($\mathit{\delta }\mathrm{D}=\mathrm{1000}\left(\frac{\mathrm{HDO}/{\mathrm{H}}_{\mathrm{2}}\mathrm{O}}{\mathrm{VSMOW}}-\mathrm{1}\right)$, with Vienna Standard Mean Ocean Water, VSMOW, of $\mathrm {3.1152}×{\mathrm{10}}^{-\mathrm{4}}$), N[2]O, CH[4], and HNO[3]. In addition to the retrieved trace gas profiles, the processing output consists of a comprehensive set of variables describing the retrieval settings and product characteristics for each individual retrieval. Figure 1 shows a schematic of the MUSICA IASI processing chain and data reusage possibilities. In this work we focus on the main processing chain, which is indicated by a red frame in Fig. 1. In a preprocessing step EUMETSAT IASI spectra (L1c) and EUMETSAT IASI retrieval products (L2) are merged and observations made under cloudy conditions are filtered out. The EUMETSAT data and data from other sources (e.g. model data for the generation of the a priori information, emissivity and topography databases, spectroscopic parameters) serve then as input for the retrieval code PROFFIT-nadir. In the output generation stage the PROFFIT-nadir output is converted into netCDF data files following a well-known metadata standard. The data are easily findable via digital object identifiers (DOIs) and are freely available for download at http://www.imk-asf.kit.edu/english/musica-data.php (last access: 25 January 2022). The integrated supply of comprehensive information on retrieval input and retrieval settings (measured spectra, used a priori states, and constraints) and the retrieval output and characteristics (retrieved state vectors, averaging kernels, and error covariances) makes the data processing fully reproducible and strongly facilitates data interoperability and data reusage. Some examples are indicated at the bottom of the schematics of Fig. 1. The paper is organised as follows: Sect. 2 briefly presents the satellite experiment on which the retrieval product relies. Section 3 describes the structure of the data files, the data volume, and the nomenclature of the data variables. In Sect. 4 we discuss the details of the MUSICA IASI retrieval setup. There we describe the cloud filtering and the comprehensive information that is provided about the a priori state vectors and the generation of the applied constraints. This information is essential for being able to perform a posteriori processing according to Diekmann et al. (2021) or to optimally combine the data with other remote sensing data products (e.g. Schneider et al., 2021c). Section 4 can be skipped by readers that do not plan such complex data reuse. In Sect. 5 the data variables and the variables describing the quality of the data are explained. This is of general importance for correctly using the data (understanding uncertainties, representativeness, application in the context of model comparisons and data assimilation systems, application for inter-comparison studies, etc.). In Sect. 6 the options for filtering data according to their quality and characteristics are discussed. This enables the user to develop their own tailored data filtering. Section 7 visualises the data volume in the form of two examples. A first example shows the continuous data availability over several years and a second example the good global daily data coverage. Section 8 discusses the potential of the data set in regard to data interoperability and data reuse, which is achieved by providing the retrieved state vectors together with comprehensive information on the a priori state vectors, the constraint matrices, the averaging kernels matrices, and the error covariance matrices. A summary and an outlook are provided in Sect. 10. For readers that are no experts in the field of remote sensing retrievals, Appendix A provides a short compilation with the theoretical basics and the most important equations to which we refer throughout this paper. Appendix B reveals that for the MUSICA IASI retrieval product we can assume moderate non-linearity (according to chap. 5 of Rodgers, 2000), which is important for many data reuse options. Appendix C explains how the data can be used in the form of a total or partial column product. 2The IASI instruments on Metop satellites IASI is a Fourier-transform spectrometer and measures in the infrared part of the electromagnetic spectrum between 645 and 2760cm^−1 (15.5 and 3.63µm). After apodisation (L1c spectra) the spectral resolution is 0.5cm^−1 (full width at half maximum, FWHM). The main purpose of IASI is the support of numerical weather prediction. However, due to its high signal-to-noise ratio and the high spectral resolution, the IASI measurements offer very interesting possibilities for atmospheric trace gas observations (e.g. Clerbaux et al., 2009). The IASI instruments are carried by the Metop satellites, which are Europe's first polar-orbiting satellites dedicated to operational meteorology. The Metop programme has been planned as a series of three satellites to be launched sequentially over an observational period of 14 years. Metop-A was launched on 19 October 2006, Metop-B on 17 September 2012, and Metop-C on 7 November 2018. IASI is the main payload instrument and operates in the nadir viewing geometry with a horizontal resolution of 12km (pixel diameter at nadir viewing geometry) over a swath width of about 2200km. With 14 orbits in a sun-synchronous mid-morning orbit (09:30 local solar time, LT, descending node), each IASI on a Metop satellite provides observations twice a day at middle and low latitudes (at about 09:30 and 21:30LT) and several times a day at high latitudes. Metop-A, Metop-B, and Metop-C overflight times generally take place within about 45min. Table 1 gives an overview of the major specifications of the Metop–IASI mission. The number of individual observations made by the three currently orbiting IASI instruments is tremendous. During a single orbit 91800 observations are made. In 24h the three satellites conclude in total about 42 orbits, which means more than 3.85 million individual IASI spectra per day and more than 1.4 billion per year. IASI-like observations are guaranteed for several decades. The first observations were made in 2006. In the context of the Metop – Second Generation (Metop-SG) satellite programme, IASI–Next Generation (IASI-NG) instruments will perform measurements until the 2040s. In this context the IASI programme offers unique possibilities for studying the long-term evolution of the atmospheric 3MUSICA IASI data format In this section we discuss the format of the MUSICA IASI full product data files and the nomenclature of the data variables. 3.1Data files The MUSICA IASI full product data are provided as netCDF files compliant with version 1.7 of the CF (Climate and Forecast) metadata convention (https://cfconventions.org, last access: 25 January 2022). The data files contain all information needed for reproducing the retrievals and for optimally reusing the data. Because the MUSICA IASI retrieval builds upon the EUMETSAT L2 cloud filter and uses the EUMETSAT L2 atmospheric temperature as the a priori atmospheric temperature, the output files contain some EUMETSAT retrieval data as well as the MUSICA retrieval data. In addition, they contain the EUMETSAT L1C spectral radiances (and the simulated radiances) as well as auxiliary data needed for the retrieval (like surface emissivity from other sources; Masuda et al. , 1988; Seemann et al., 2008; Baldridge et al., 2009). We provide standard output files comprising all processed IASI observations and one extended output file with detailed calculations of Jacobians (and spectral responses for surface emissivity, spectroscopic parameters, and cloud coverage) and gain matrices for a few selected observations. The standard output is provided in the files “IASI[S]_MUSICA_[V]_L2_AllTargetProducts_[D]_[O].nc” and in one file per orbit and instrument. The symbols within the square brackets indicate placeholders: “[S]” for the sensor (A, B, or C, for IASI instruments on the satellites Metop-A, Metop-B, or Metop-C, respectively), “[V]” for the MUSICA IASI retrieval processor version used, “[D]” for the starting date and time of the observation (format YYYYMMDDhhmmss), and “[O]” for the number of the orbit. In our database these files are provided in daily .tar files, with all orbits of all IASI instruments archived into a single .tar file, with the name “IASI[multipleS]_MUSICA_[V]_L2_AllTargetProducts_ [DAY].tar”. The placeholders are as follows: “[multipleS]” for the considered sensors, e.g. AB if IASI sensors on Metop-A and Metop-B are considered; “[V]” for the MUSICA IASI retrieval processor version used; and “[DAY]” for the date of observations (universal time, format YYYYMMDD). The typical size of a .tar file with the orbit-wise netCDF files of a single day is 15GB. This number is for the typically 28 orbits per day of two satellites (for three satellites there are typically 42 orbits per day). The standard output data files are linked to a DOI (Schneider et al., 2021b). The extended file represents 74 observations over polar, mid-latitudinal, and tropical GRUAN stations (GRUAN stands for Global Climate Observing System Reference Upper-Air Network, https:// www.gruan.org, last access: 25 January 2022). More details on the time periods and locations represented by these retrievals are given in Borger et al. (2018). The file provides the same output as the standard files and in addition detailed information on Jacobians (and spectral responses) and gain matrices. The Jacobian matrices collect the derivatives of the radiances as measured by the satellite sensors with respect to a parameter (e.g. atmospheric temperature, instrumental conditions). The spectral response matrices give information about the change in the radiances due to changes in the surface emissivities, the spectroscopic parameters, and the cloud coverage. The gain matrices are the derivatives of the retrieved atmospheric state with respect to the radiances. The name of this extended output file is “IASIAB_MUSICA_030201_L2_AllTargetProductsExtended_examples.nc”; its size is 70MB, and it is linked to an extra DOI (Schneider et al., 2021a). Here we report on the MUSICA IASI processing version 3.2.1 (applied for IASI observations until the end of June 2019). For observations from July 2019 onward processing versions 3.3.0 and 3.3.1 are applied (the difference of version 3.3.1 with respect to 3.3.0 is updated a priori data for the retrievals of observations from January 2021 onward). Version 3.2.1 and the versions 3.3.x use the same retrieval setting, and the output files contain the same variables. The difference between version 3.2.1 and the versions 3.3.x is that for the former some minor corrections were necessary after the retrieval process due to some very minor inconsistencies in version 3.2.1 with regard to the following: the vertical gridding; the a priori of δD; and the constraint for N[2]O, CH[4], and HNO[3]. This difference between the versions is actually not noticeable by the user, and the report provided here on version 3.2.1 data is also valid for data of versions 3.3.x, which will soon be made available for the public in the same format as the version 3.2.1 data. There are three different categories of variables. The first category consists of variables that contain information resulting from the EUMETSAT L2 PPF (product processing facility) retrieval. They can be identified by the prefix eumetsat_ in their names. A second category consists of variables that contain information from the MUSICA IASI retrieval. Here the prefix in the name is musica_. The third category encompasses all other variables, and their names have no specific prefix. The EUMETSAT L2 retrieval variables are flags (mainly for cloud coverage – see Sect. 4.1, surface conditions, and EUMETSAT retrieval quality) and the EUMETSAT L2 retrieval output of H[2]O. The variables belonging to the third category are supporting data and inform about the sensors' viewing geometry, observation time, measured radiances, climatological tropopause altitude, and surface emissivity. Although our MUSICA IASI retrieval uses the EUMETSAT L2 PPF version 6 land surface emissivity, the emissivity variables are assigned to the category of supporting data, because for older observations where no L2 PPF version 6 is available we use the surface emissivity climatology from IREMIS (Seemann et al., 2008) and over water we always use the values reported by Masuda et al. ( 1988). The large majority of variables are MUSICA IASI variables. These variables document the MUSICA IASI retrieval settings (like the a priori states and constraints; see Sect. 4.4 to 4.6), provide the MUSICA IASI retrieval products (retrieved trace gas profiles, Sect. 5.1), and characterise these products (averaging kernels, estimated errors, Sect. 5.2). For variables that refer to a specific retrieval product, a corresponding syllable is embedded into the respective variable names: _wv_ and _wvp_ stand for water vapour isotopologues and water vapour isotopologue proxies, respectively; _ghg_ for the greenhouse gases N[2]O and CH[4], _hno3_ for HNO[3]; and _at_ for the atmospheric temperature. The water vapour and greenhouse gas variables (_wv_, _wvp_, and _ghg_) contain information on two species, which can be identified by the value of the dimension musica_species_id. For _wv_ these are the species H[2]O and HDO, for _wvp_ the water vapour proxy species (see Sect. 4.4.2), and for _ghg_ the species N[2]O and CH[4]. 4MUSICA IASI retrieval setup In this section the principle setup of the MUSICA IASI retrieval is presented. We discuss our filtering before processing, the retrieval algorithm used, the measurement state (spectral region), the atmospheric state that is retrieved in an optimal estimation sense, and the a priori information used and the applied constraints. A detailed explanation of these settings ensures the full reproducibility of the data and is also important in the context of data reusage (see examples given in Sect. 8). 4.1Data selection prior to processing We focus on the processing of IASI data for which EUMETSAT L2 data files of PPF version 6.0 or later are available. For former data versions not all of the subsequently discussed L2 PPF variables are available. Furthermore, we found that there are several modifications made within versions 4 and 5 that significantly affect the stability of our MUSICA IASI retrieval output (see discussion in García et al., 2018). EUMETSAT L2 PPF version 6 data are available from October 2014 onward, so we focus our processing on IASI observations made from October 2014 onward. In addition, the MUSICA IASI retrievals are currently restricted to cloud-free scenarios. The selection of cloud-free conditions is made by means of the EUMETSAT L2 PPF cloudiness assessment summary flag variable (called flag_cldnes in the EUMETSAT L2 netCDF data files). We only process IASI observations with this flag having the value 1 (the IASI instrumental field of view, IFOV, is clear) or 2 (the IASI IFOV is processed as cloud-free, but small cloud contamination is possible). This requirement for cloud-free scenarios removes more than two-thirds of all available IASI observations. Furthermore, we require EUMETSAT L2 PPF temperature profiles to be generated by the EUMETSAT L2 PPF optimal estimation retrieval scheme. For this purpose we use the EUMETSAT L2 PPF variable flag_itconv. We only process data with this flag having value 3 (the minimisation did not converge, sounding accepted) or 5 (the minimisation converged, sounding accepted). Figure 2 gives a climatological overview of the number of IASI data that remain after the aforementioned preselection. The maps largely reflect the cloud cover conditions. A very large number of IASI data passed our selection criteria in the subtropical regions, where cloud-free conditions generally prevail. In the North Atlantic storm track region, the southern American and southern African tropics, and the southern polar oceans the sky is generally cloudy in February, leading to a low number of IASI observations that passed our selection criteria. In August we can clearly identify the Asian and West African monsoon region as an area with increased cloud coverage and consequently fewer MUSICA IASI processed data. Figure 3 is similar to Fig. 2, but instead of showing the total number of observations that fall within a $\mathrm{1}{}^{\circ }×\mathrm{1}{}^{\circ }$ box, it depicts the probability of having at least one observation per overpass in a $\mathrm{1}{}^{\circ }×\mathrm{1}{}^{\circ }$ box. In both figures we observe very similar structures. 4.2The retrieval algorithm We use the thermal nadir retrieval algorithm PROFFIT-nadir (Schneider and Hase, 2011; Wiegele et al., 2014). It is an extension of the PROFFIT algorithm (PROFile FIT; Hase et al., 2004) that has been used for many years by the ground-based infrared remote sensing community (Kohlhepp et al., 2012; Schneider et al., 2012). This extension has been made in support of the IASI retrieval development during the project MUSICA. The algorithm consists of the line-by-line radiative transfer code PROFFWD (Hase et al., 2004; Schneider and Hase, 2009) and can consider Voigt as well as non-Voigt line shapes (Gamache et al., 2014) and the water continuum signatures according to the model MT_CKD v2.5.2 (Delamere et al., 2010; Payne et al., 2011; Mlawer et al., 2012). For the MUSICA IASI processing we use the water continuum model MT_CKD v2.5.2 and for all trace gases a Voigt line shape model and the spectroscopic line parameters according to the HITRAN2016 molecular spectroscopic database ( Gordon et al., 2017). However, we increase the line intensity parameter for all HDO lines by +10% in order to correct for the bias observed between MUSICA IASI δD retrievals and respective aircraft-based in situ profile data (Schneider et al., 2015). For the inversion calculations PROFFIT-nadir offers options that are essential for water vapour isotopologue retrievals. These are the options for logarithmic-scale retrievals and for setting up a cross constraint between different atmospheric species (see also Sect. 4.4.2). The theoretical basics for atmospheric trace gas retrievals are provided in Appendix A. 4.3The analysed spectral region The retrieval works with the radiances measured in the spectral region between 1190 and 1400cm^−1. The respective radiance values are the elements of the MUSICA IASI measurement state vector referred to as y in Appendix A. Figure 4 depicts measured and simulated radiances as well as a large variety of different spectral responses (Jacobians multiplied by parameter changes) for a typical mid-latitudinal summer observation over land. Please note the different radiance scales for measurement and simulation, on the one hand, and residuals and spectral responses, on the other hand. We show trace gas spectral responses for a uniform increase in the trace gases throughout the whole atmosphere: 100% for H[2]O and HDO, 10% for N[2]O and CH[4], and 50% for HNO[3]. The respective values are reasonable approximations of the typical atmospheric variabilities of these trace gases. We see that the measured radiances are most strongly affected by the water isotopologues. The variations in N[2]O and CH[4] are also recognisable (larger than the spectral residuals, i.e. the difference between measured and simulated radiances). The spectral responses of HNO[3] are very close to the noise level. The atmospheric temperature spectral responses are depicted for a uniform 2K temperature increase over three different layers: surface–2, 2–6, and 6–12kma.s.l. (a.s.l. means above sea level). In the analysed spectral region (1190–1400cm^−1), the atmospheric temperature variations close to the surface affect mainly the radiances below 1300cm^−1 and variations at higher altitudes mainly the radiances above 1300cm^−1. In Fig. 4 we depict the spectral responses for 2K because this is a reasonable approximation of the uncertainty in the EUMETSAT L2 PPF temperatures (August et al., 2012). The spectral responses for surface emissivity and temperature reveal that surface properties hardly affect the radiances above 1250cm^−1 but have a strong impact below 1250cm^−1. We calculate the emissivity spectral responses for a −2% change in the emissivity independently above and below 1270^−1, which is a typical uncertainty in emissivity judging from its dependency on the viewing angle and wind speed over ocean (Masuda et al., 1988) and small-scale inhomogeneities; however, this uncertainty might be significantly higher over arid areas (Seemann et al., 2008). Concerning spectroscopy, the spectral responses calculated for the typical uncertainty range of spectroscopic parameters are relatively small. In Fig. 4 we show the spectral responses for consistent +5% changes in the line intensity and pressure-broadening parameters of all water vapour isotopologues, which are in reasonable agreement with the uncertainty values given by HITRAN (Gordon et al., 2017). Concerning the water continuum, the spectral responses are for a water continuum that is 10% larger than the continuum according to the model MT_CKD v2.5.2 (Delamere et al., 2010; Payne et al., 2011; Mlawer et al., 2012). The bottom panel of Fig. 4 depicts the impact of clouds on the radiances. The thermal nadir radiance when observing over an opaque cloud can be calculated by defining the cloud top instead of the surface as the thermal background. Cirrus and mineral dust clouds are not opaque, and we have to consider partial attenuation by the cloud particles. We calculate the attenuated radiances using forward model calculations from KOPRA (Karlsruhe Optimized and Precise Radiative transfer Algorithm; Stiller, 2000) and consider single scattering. The frequency dependency of the extinction cross sections, the single-scattering albedo, and the scattering phase functions of the clouds are calculated from OPAC v4.0b (Optical Properties of Aerosol and Clouds; Hess et al., 1998; Koepke et al., 2015). For cirrus clouds we assume the particle composition as given by OPAC's “Cirrus 3” ice cloud example (see Table 1b in Hess et al., 1998) and for mineral dust clouds a particle composition according to OPAC's “Desert” aerosol composition example (see Table 4 in Hess et al., 1998). The spectral responses shown are for 10% cumulus cloud coverage with the cloud top at 3km, a homogeneous dust cloud between 2 and 4km, and 25% cirrus cloud coverage between 10 and 11km. These are relatively weak clouds, and we assume that they might occasionally not correctly be identified by the EUMETSAT L2 cloud screening algorithm. Because the respective spectral responses are significantly above the noise level, these unrecognised clouds can have an important impact on the retrieval. A comprehensive set of different spectral responses is provided with the extended output data file for the 74 exemplary observations at an Arctic, mid-latitudinal, and tropical site. 4.4The state vector In this section we discuss the MUSICA IASI state vector, which is referred to as x in Appendix A. 4.4.1Components of the state vector We retrieve vertical profiles of the trace gases H[2]O, HDO, N[2]O, CH[4], and HNO[3] and of atmospheric temperature. For all these profile retrievals we use constraints (for more details see Sect. 4.6). In addition we fit the surface skin temperature and the spectral frequency scale without any constraint. We discretise the profiles on atmospheric levels between the surface and the top of the atmosphere (which we set at 56km). The grid is relatively fine in the lower troposphere (≈400m) and increases in the stratosphere to above 5km. The number of atmospheric levels (nal) depends on the surface altitude. For instance, for a surface altitude at sea level (0ma.s.l.) nal=28 and for a surface altitude of 4000ma.s.l. nal=21. Consequently, the state vector for an observation with surface altitude at sea level has a length of $\mathrm{6}×\mathrm{28}+\mathrm{2}=\mathrm{170}$. 4.4.2Water vapour isotopologue proxies The water vapour isotopologues H[2]O and HDO vary largely in parallel. The information that HDO actually adds to H[2]O lies in the value of the $\mathrm{HDO}/{\mathrm{H}}_{\mathrm{2}}\mathrm{O}$ ratio. This ratio is typically expressed as $\mathit{\delta }\mathrm{D}=\frac{{\mathrm{H}}_{\mathrm{2}}\mathrm{O}/\mathrm{HDO}}{\mathrm{VSMOW}}-\mathrm{1}$, with VSMOW being $\mathrm{3.1152}×{\mathrm {10}}^{-\mathrm{4}}$ (VSMOW – Vienna Standard Mean Ocean Water). In Schneider et al. (2006b) the logarithmic-scale difference between H[2]O and HDO was introduced as a good proxy for δD, and Schneider et al. (2012) showed that by a transformation between the state {H[2]O,HDO} – needed for the radiative transfer calculations – and the proxy state {$\frac{\mathrm{1}}{\mathrm{2}}\left(\ mathrm{H}}_{\mathrm{2}}\mathrm{O}\right]\right)$} – where we can formulate the correct constraints – the climatologically expected variability in the atmospheric state can be described correctly. First we have to transfer the associated mixing ratio entries in the state vector to a logarithmic scale. This means that all the derivatives provided by the radiative transfer calculations have to be transferred from the linear scale to the logarithmic scale by using $\partial x=x\partial \mathrm{ln}\left[x\right]$. For highly variable trace gases logarithmic-scale retrievals are advantageous because they allow the consideration of the correct a priori statistics (log-normal instead of normal distributions; Hase et al., 2004; Schneider et al., 2006a). For trace gases with weak variability but still detectable spectral signatures, the statistics in logarithmic and linear scale become very similar, so logarithmic-scale retrievals have no apparent disadvantage with respect to linear-scale retrievals; instead they offer unique possibilities as outlined in the following. In the logarithmic scale the water vapour isotopologue state can be expressed in the basis of {ln[H[2] O],ln[HDO]} or in the basis of the proxy state $\mathit{\left\{}\frac{\mathrm{1}}{\mathrm{2}}\left(\mathrm{ln}\left[{\mathrm{H}}_{\mathrm{2}}\mathrm{O}\right]+\mathrm{ln}\left[\mathrm{HDO}\right]\ right),\left(\mathrm{ln}\left[\mathrm{HDO}\right]-\mathrm{ln}\left[{\mathrm{H}}_{\mathrm{2}}\mathrm{O}\right]\right)\mathit{\right\}}$. Both expressions are equally valid. Each basis has the dimension (2×nal). In the following the full water vapour isotopologue state vector expressed in the {ln[H[2]O],ln[HDO]} basis and the $\mathit{\left\{}\frac{\mathrm{1}}{\mathrm{2}}\left(\mathrm \right)\mathit{\right\}}$ proxy basis will be referred to as x and x^′, respectively. The basis transformation can be achieved by operator P: $\begin{array}{}\text{(1)}& \mathbf{P}=\left(\begin{array}{cc}\frac{\mathrm{1}}{\mathrm{2}}\mathbf{I}& \frac{\mathrm{1}}{\mathrm{2}}\mathbf{I}\\ -\mathbf{I}& \mathbf{I}\end{array}\right).\end{array}$ Here the four matrix blocks have the dimension (nal×nal), I stands for an identity matrix, and the state vectors x and x^′ are related by $\begin{array}{}\text{(2)}& {\mathbit{x}}^{\prime }=\mathbf{P}\mathbit{x}.\end{array}$ Similarly logarithmic-scale covariance matrices can be expressed in the two basis systems, and the respective matrices S and S^′ are related by $\begin{array}{}\text{(3)}& {\mathbf{S}}^{\prime }={\mathbf{PSP}}^{\mathrm{T}},\end{array}$ and respective averaging kernel matrices A and A^′ are related by $\begin{array}{}\text{(4)}& {\mathbf{A}}^{\prime }={\mathbf{PAP}}^{-\mathrm{1}}.\end{array}$ In contrast to H[2]O and HDO, H[2]O and δD vary to a large extent independently, and we can easily set up the constraint matrix R^′ for the proxy basis $\mathit{\left\{}\frac{\mathrm{1}}{\mathrm{2}}\ $\begin{array}{}\text{(5)}& {\mathbf{R}}^{\prime }=\left(\begin{array}{cc}{\mathbf{R}}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{0}}& \mathrm{0}\\ \mathrm{0}& {\mathbf{R}}_{\mathit{\delta }\mathrm{D}}\end Back transformation to the {ln[H[2]O],ln[HDO]} basis reveals automatically the strong cross constraints between H[2]O and HDO: $\begin{array}{}\text{(6)}& \begin{array}{rl}\mathbf{R}& ={\mathbf{P}}^{-\mathrm{1}}{\mathbf{R}}^{\prime }{\mathbf{P}}^{-\mathrm{T}}=\\ & \left(\begin{array}{cc}\frac{\mathrm{1}}{\mathrm{2}}{\mathbf {R}}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{0}}+\frac{\mathrm{1}}{\mathrm{2}}{\mathbf{R}}_{\mathit{\delta }\mathrm{D}}& \frac{\mathrm{1}}{\mathrm{2}}{\mathbf{R}}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{0}}-\ frac{\mathrm{1}}{\mathrm{2}}{\mathbf{R}}_{\mathit{\delta }\mathrm{D}}\\ \frac{\mathrm{1}}{\mathrm{2}}{\mathbf{R}}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{0}}-\frac{\mathrm{1}}{\mathrm{2}}{\mathbf{R}}_{\ mathit{\delta }\mathrm{D}}& \frac{\mathrm{1}}{\mathrm{2}}{\mathbf{R}}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{0}}+\frac{\mathrm{1}}{\mathrm{2}}{\mathbf{R}}_{\mathit{\delta }\mathrm{D}}\end{array}\right).\ For more details on the utility of the water vapour isotopologue proxy state please refer to Schneider et al. (2012) and Barthlott et al. (2017). The atmospheric state variables that are independently constrained during the MUSICA IASI processing are the vertical profiles of the water vapour isotopologue proxies H[2]O and δD and the vertical profiles of N[2]O, CH[4], HNO[3], and atmospheric temperature. For all the trace gases (not only for the water vapour isotopologues) the retrieval works with the state variables in a logarithmic scale. For atmospheric temperature a linear scale is used. Surface skin temperature and the spectral frequency shift are also components of the state vector; however, they are not constrained during the retrieval procedure. The reason for this is that surface skin temperature and spectral frequency shift can be identified very clearly in the spectra. There is no need to impose a priori information and thereby constrain these retrieved quantities. Also without such constraint the retrieval converges in a very stable manner. The variables musica_wv_apriori and musica_wv provide the a priori assumed and the retrieved values of H[2]O and HDO, respectively (see also Sects. 4.5 and 5.1). The output is given in parts per million as a volume fraction (ppmv) and normalised with respect to the naturally occurring isotopologue abundance. In this context, δD is calculated from the content of these variables as $\mathit{\ delta }\mathrm{D}=\mathrm{1000}\left(\frac{\mathrm{HDO}}{{\mathrm{H}}_{\mathrm{2}}\mathrm{O}}-\mathrm{1}\right)$. Information about H[2]O and δD related to differentials (constraints, averaging kernels, kernel metrics, or uncertainties) is generally provided in the proxy states (variables with the term _wvp_). 4.5A priori states The reference for the a priori data used for the MUSICA IASI trace gas retrievals is the CESM1–WACCM (Community Earth System Model version 1 – Whole Atmosphere Community Climate Model) monthly output of the 1979–2014 time period. The CESM1–WACCM is a coupled chemistry climate model from the Earth's surface to the lower thermosphere (Marsh et al., 2013). The horizontal resolution is 1.9^∘ latitude×2.5^∘ longitude. The vertical resolution in the lower stratosphere ranges from 1.2km near the tropopause to about 2km near the stratopause; in the mesosphere and thermosphere the vertical resolution is about 3km. Simulations used for generating the MUSICA IASI a priori data are based on the International Global Atmospheric Chemistry–Stratosphere-troposphere Processes And their Role in Climate (IGAC–SPARC) Chemistry Climate Model Initiative (CCMI; Morgenstern et al., 2017). From the surface to 50km the meteorological fields are “nudged” towards meteorological analysis taken from the National Aeronautics and Space Administration (NASA) Global Modeling and Assimilation Office (GMAO) Modern-Era Retrospective Analysis for Research and Applications (MERRA; Rienecker et al., 2011), and above 60km the model meteorological fields are fully interactive, with a linear transition in between (details about the nudging approach are described in Kunz et al., For the MUSICA IASI a priori profiles of H[2]O, N[2]O, CH[4], and HNO[3], we consider a mean latitudinal dependence, seasonal cycles, and long-term evolution. Therefore, the a priori data are constructed by means of a low dimensional multi-regression fit on the CESM1–WACCM data independently for each vertical grid level. We fit an annual cycle with the two frequencies 1 per year and 2 per year, and for the long-term baseline we fit a second-order polynomial. The fits are performed individually for 15 equidistant latitudinal bands between 90^∘S and 90^∘N. In order to capture the yearly anomalies in N[2]O and CH[4] a priori data, we use the Mauna Loa Global Atmospheric Watch yearly mean data records for a correction of the WACCM parameterised time series (for more details on this correction procedure see Barthlott et al., 2015). We also use the temperature lapse rate tropopause – according to the definition of the World Meteorological Organization – from WACCM and construct a latitudinally dependent tropopause altitude by fitting a seasonal cycle and a constant baseline (no long-term dependency) and assume a transition zone between the troposphere and stratosphere with a vertical extension of 12.5km. The MUSICA IASI δD a priori profiles between the ground and the tropopause altitude are constructed from the H[2]O a priori profiles by using a single global relation between tropospheric H[2]O concentration and δD values. This relation has been determined from simultaneous H[2]O and δD measurements made by high-precision in situ instruments at different ground stations located in the mid-latitudes and the subtropics and between 100m and 3650ma.s.l. (González et al., 2016; Christner et al., 2018) and by aircraft-based in situ measurements made between the sea surface and about 7000ma.s.l. (Dyroff et al., 2015). Above the troposphere (where δD is close to −600‰) we smoothly connect the tropospheric δD values with the typical stratospheric δD value of −350‰. Figure 5 depicts the MUSICA IASI a priori data derived from WACCM. It shows latitudinal cross sections for a northern hemispheric winter and summer day as well as the temporal evolution between 2014 and 2020 at a mid-latitudinal site. The H[2]O and δD a priori data have strong latitudinal gradients and also a marked seasonal cycle. For δD the lowest values are in the neighbourhood of the tropopause altitude (depicted as a thick violet line). The a priori values of N[2]O and CH[4] have a strong latitudinal and seasonal variability in the tropopause region. CH[4] has a strong tropospheric latitudinal gradient and seasonal cycle in the troposphere, whereas the tropospheric N[2]O variability is rather small. The HNO[3] a priori has a maximum in the lower stratosphere (20–25km) with the highest values at higher latitudes. The a priori trace gas profiles are provided in the variables musica_wv_apriori (H[2]O and HDO with species index 1 and 2, respectively), musica_ghg_apriori (N[2]O and CH[4] with species index 1 and 2, respectively), and musica_hno3_apriori (HNO[3]). The unit is ppmv. As a priori for the atmospheric and the surface temperatures we use the EUMETSAT L2 PPF atmospheric temperature output. These data are provided in the unit kelvin and in the variables musica_at_apriori and musica_st_apriori for atmospheric temperature and surface temperature, respectively. 4.6A priori covariances and constraints We set up simplified a priori covariance matrices by means of two parameters. The first parameter is the altitude-dependent amplitudes of the variability (v[amp,i], with i indexing the ith altitude level). For the trace gases we work with the relative variability, i.e. with the variability on the logarithmic scale. For atmospheric temperatures the variability is given in the unit kelvin. The second parameter is the altitude-dependent vertical correlation lengths (σ[cl,i], for considering correlated variations between different altitudes). The elements of the a priori covariance matrix (S [a]) are then calculated as $\begin{array}{}\text{(7)}& {S}_{{\mathrm{a}}_{i,j}}={v}_{\mathrm{amp},i}{v}_{\mathrm{amp},j}\mathrm{exp}-\frac{\left({z}_{i}-{z}_{j}{\right)}^{\mathrm{2}}}{\mathrm{2}{\mathit{\sigma }}_{\mathrm {cl},i}{\mathit{\sigma }}_{\mathrm{cl},j}},\end{array}$ with z[i] being the altitude at the ith altitude level. The values v[amp,i] and σ[cl,i] are oriented to the typical covariances of in situ observations made from the ground (e.g. González et al., 2016; Gomez-Pelaez et al., 2019), aircraft (e.g. Wofsy, 2011; Dyroff et al., 2015), or balloons (e.g. Karion et al., 2010; Dirksen et al., 2014) and also aligned to the vertical dependency of the monthly mean covariances we obtain from the WACCM simulations. For the v[amp,i] of δD we use in addition the isotopologue-enabled version of the Laboratoire de Météorologie Dynamique (LMD) general circulation model as a reference (Risi et al., 2010; Lacour et al., 2012). For atmospheric temperature we use the uncertainty in the EUMETSAT L2 atmospheric temperature as reference (August et al., 2012). Generally, we classify three different altitude regions with specific vertical dependencies in the values of v[amp,i] and σ[cl,i]: the troposphere (below the climatological tropopause altitude as depicted in Fig. 5), the stratosphere (starting 12.5km above the climatological tropopause altitude), and the transition region between the troposphere and stratosphere. The values of v[amp,i] are specific for each trace gas and for the atmospheric temperature, and they are provided in the MUSICA IASI standard output files in the variables having the suffix As a simplification we use the same values of σ[cl,i] for all trace gases and for the atmospheric temperature. These values are provided in the MUSICA IASI output files as the variable As the constraint of the retrieval we use an approximation of the inverse of the covariance matrix. For this purpose the constraint matrix R is constructed as a sum of a diagonal constraint, and first- and second-order Tikhonov-type regularisation matrices (Tikhonov, 1963): $\begin{array}{}\text{(8)}& \mathbf{R}=\left({\mathbit{\alpha }}_{\mathrm{0}}{\mathbf{L}}_{\mathrm{0}}{\right)}^{\mathrm{T}}{\mathbit{\alpha }}_{\mathrm{0}}{\mathbf{L}}_{\mathrm{0}}+\left({\mathbit{\ alpha }}_{\mathrm{1}}{\mathbf{L}}_{\mathrm{1}}{\right)}^{\mathrm{T}}{\mathbit{\alpha }}_{\mathrm{1}}{\mathbf{L}}_{\mathrm{1}}+\left({\mathbit{\alpha }}_{\mathrm{2}}{\mathbf{L}}_{\mathrm{2}}{\right)}^ {\mathrm{T}}{\mathbit{\alpha }}_{\mathrm{2}}{\mathbf{L}}_{\mathrm{2}},\end{array}$ $\begin{array}{}\text{(9)}& {\mathbf{L}}_{\mathrm{0}}=\left(\begin{array}{ccccc}\mathrm{1}& \mathrm{0}& \mathrm{0}& \mathrm{\cdots }& \mathrm{0}\\ \mathrm{0}& \mathrm{1}& \mathrm{0}& \mathrm{\cdots } & \mathrm{0}\\ \mathrm{0}& \mathrm{0}& \mathrm{1}& \mathrm{\cdots }& \mathrm{0}\\ \mathrm{⋮}& \mathrm{⋮}& \mathrm{⋮}& \mathrm{\ddots }& \mathrm{⋮}\\ \mathrm{0}& \mathrm{0}& \mathrm{0}& \mathrm{\cdots }& \mathrm{1}\end{array}\right),\text{(10)}& {\mathbf{L}}_{\mathrm{1}}=\left(\begin{array}{ccccc}\mathrm{1}& -\mathrm{1}& \mathrm{0}& \mathrm{\cdots }& \mathrm{0}\\ \mathrm{0}& \mathrm{1}& -\mathrm {1}& \mathrm{\cdots }& \mathrm{0}\\ \mathrm{⋮}& \mathrm{⋮}& \mathrm{\ddots }& \mathrm{\ddots }& \mathrm{⋮}\\ \mathrm{0}& \mathrm{\cdots }& \mathrm{0}& \mathrm{1}& -\mathrm{1}\end{array}\right),\end $\begin{array}{}\text{(11)}& {\mathbf{L}}_{\mathrm{2}}=\left(\begin{array}{ccccc}\mathrm{1}& -\mathrm{2}& \mathrm{1}& \mathrm{\cdots }& \mathrm{0}\\ \mathrm{⋮}& \mathrm{\ddots }& \mathrm{\ddots }& \ mathrm{\ddots }& \mathrm{⋮}\\ \mathrm{0}& \mathrm{\cdots }& \mathrm{1}& -\mathrm{2}& \mathrm{1}\end{array}\right).\end{array}$ The diagonal elements of the diagonal matrices α[0], α[1], and α[2] are the inverse of the absolute variabilities and the variabilities of the first and the second vertical derivatives of the profiles. These values can be calculated from the elements of the a priori matrix (S[a]) as follows: $\begin{array}{}\text{(12)}& {\mathit{\alpha }}_{{\mathrm{0}}_{i,i}}=\left(\sqrt{{S}_{{\mathrm{a}}_{i,i}}}{\right)}^{-\mathrm{1}},\text{(13)}& {\mathit{\alpha }}_{{\mathrm{1}}_{i,i}}=\left(\sqrt{{S}_ {{\mathrm{a}}_{i,i}}+{S}_{{\mathrm{a}}_{i+\mathrm{1},i+\mathrm{1}}}-\mathrm{2}{S}_{{\mathrm{a}}_{i,i+\mathrm{1}}}}{\right)}^{-\mathrm{1}},\text{(14)}& {\mathit{\alpha }}_{{\mathrm{2}}_{i,i}}={\left(\ sqrt{\begin{array}{c}{S}_{{\mathrm{a}}_{i,i}}+\mathrm{4}{S}_{{\mathrm{a}}_{i+\mathrm{1},i+\mathrm{1}}}+{S}_{{\mathrm{a}}_{i+\mathrm{2},i+\mathrm{2}}}\\ -\left(\mathrm{4}{S}_{{\mathrm{a}}_{i,i+\mathrm Starting the retrievals with the constraint matrix $\mathbf{R}\approx {\mathbf{S}}_{\mathrm{a}}^{-\mathrm{1}}$ optimises the computational efficiency of the retrieval processes because according to Eqs. (A4) and (A5) the retrieval calculations work with ${\mathbf{S}}_{\mathrm{a}}^{-\mathrm{1}}$. Furthermore, calculating the inversion of S[a] approximatively as the sum of diagonal constraint and first- and second-order Tikhonov-type regularisation matrices offers the possibility of tuning the constraint according to specific user requirements with respect to smoothness or absolute deviations (e.g. Steck, 2002; Diekmann et al., 2021). For the greenhouse gases (N[2]O and CH[4]) and HNO[3] we constrain with respect to the absolute values of the profiles and the first derivative of the profile; i.e. we do not consider the term (α[2]L [2])^Tα[2]L[2] of Eq. (8). In the case of the water vapour isotopologue proxies and the atmospheric temperature, we additionally constrain with respect to the second derivative of the profile; i.e. we consider all terms of Eq. (8). Please note that for the trace gases the constraints work on the logarithmic scale and for the atmospheric temperature on the linear scale. Because HNO[3] has only very weak spectroscopic signatures in the analysed spectral region (see Fig. 4), we loosen the absolute constraint and at the same time strengthen the constraint with respect to the first vertical derivate: α[0] and α[1] are calculated from an S[a] constructed with the values of v[amp,i] increased by a factor of 1.5 and with the values of σ[cl,i] increased by a factor of 2. Similarly and in order to avoid a negative impact of an underconstrained retrieval of the temperature profile on the trace gas products (e.g. artificial oscillatory features), we strengthen the atmospheric temperature constraint: α[0], α[1], and α[2] are calculated from an S[a] constructed with the values of v[amp,i] decreased by a factor of 0.5. The diagonal entries of the diagonal matrices α[0], α[1], and α[2] contain all information about the actual constraints used by the retrieval. They are provided in the MUSICA output files for each individual retrieval and for the different trace gases and the atmospheric temperature as variables with the suffix _reg. For the trace gases these vector elements are depicted in Fig. 6 for a northern hemispheric summer in the tropics, mid-latitudes, and polar regions. The dotted lines indicate the climatological tropopause and the altitude 12.5km above this tropopause (transition zone between the troposphere and stratosphere). 5MUSICA IASI retrieval output In this sections we describe the variables that give information about the retrieval target products (vertical trace gas profiles) and the characteristics of these products (averaging kernels and errors). A detailed explanation of these data supports their interoperability and is also important in the context of data reusage (see examples given in Sect. 8). 5.1Trace gas profiles and temperatures The retrieved trace gas profiles are provided in the variables musica_wv (H[2]O and HDO with species index 1 and 2, respectively), musica_ghg (N[2]O and CH[4] with species index 1 and 2, respectively), and musica_hno3 (HNO[3]). The unit is ppmv. The retrieved atmospheric temperature is provided in the variable musica_at and the retrieved surface temperature in the variable musica_st. The unit is kelvin. In order to provide a brief insight into the data diversity, Fig. 7 gives examples with a priori and retrieved trace gas profiles for an observation on 30 August 2008 over Lindenberg (53^∘N). The profile data represent 28 altitude levels and are provided with detailed information on their sensitivity, vertical representativeness, and errors (see following subsections). 5.2Characteristics of retrieved products For a limited number of retrievals we provide an extended netCDF output file (see Sect. 3.1). The extended output file contains the same variables as the standard output files and in addition the full averaging kernels and a large set of Jacobians (and spectral responses for surface emissivity, spectroscopic parameters, and cloud coverage) together with gain matrices. The latter allows the calculation of full error covariances for a large variety of different uncertainty sources. In the standard output files we do not provide the full averaging kernels (which would consider all the cross-correlations between the different retrieval products) or the full error covariances. The reason for this is that providing the full kernels and/or the full error covariances would strongly increase the storage needs for the data output (Weber, 2019). Figure 8 explains the matrix blocks that are made available in the extended output file and in all standard output files. The extended file contains the full gain matrices, the Jacobian matrices for all state vector components, and Jacobians for parameters that are not retrieved but that affect the retrieval (spectroscopy, different cloud types, and surface emissivity). Using the gain matrices and the Jacobians, the full averaging kernels and the full error covariances can be calculated as indicated by Fig. 8. The full averaging kernel for the trace gas products is marked at the right side by the thick black frame (an example for these kernels is plotted in Fig. 9). The full error covariances are indicated by the yellow frame (examples of the root-mean-square values of the diagonals of these error covariances are plotted in Fig. 12). The parts of this full matrix that are provided by the standard output files for all individual retrievals are indicated as the matrix blocks filled by green and red colour. Green represents the individual averaging kernels of the water vapour isotopologues, the greenhouse gases, HNO[3], and the atmospheric temperature. Red marks the cross kernels of the trace gas products with respect to atmospheric temperature (i.e. they indicate how errors in the EUMETSAT L2 PPF atmospheric temperatures – used as MUSICA IASI a priori temperatures – affect the retrieved trace gas products). These temperature cross kernels allow the calculation of the full error covariances for the temperature uncertainty for each individual observation of the standard output file. In addition, for all individual observations the standard output files contain square root values of the diagonal of the error covariance matrix for the most important uncertainty sources (noise and temperature uncertainty). We always provide differential or derivatives (covariances, averaging kernels, gain matrices, and Jacobian matrices) related to the trace gas products in the logarithmic scale. Logarithmic-scale kernels are the same as the fractional kernels used in Keppens et al. (2015). Furthermore, we strongly recommend the use of the logarithmic-scale kernels for analytic calculation. Because the MUSICA IASI trace gas retrievals are made on the logarithmic scale, the assumption of a moderately non-linear case according to Rodgers (2000) can be made on the logarithmic scale (i.e. requires the use of logarithmic-scale kernels) but has limited validity on the linear scale. More details on the valid assumption of moderately non-linear problems are given in Appendix B. 5.2.1Averaging kernels Figure 9 depicts the averaging kernels for the full atmospheric composition state (water vapour proxy state, N[2]O, CH[4], and HNO[3]) for a typical summertime observation over a mid-latitudinal land location. Shown are all the matrix blocks marked by the thick black frame in the right part of the schematic of Fig. 8. In the diagonal we see the trace gas specific kernels and in the outer diagonal blocks the cross kernels. For the H[2]O proxy (see Sect. 4.4.2) we achieve very high values of about 5.3 for DOFS (the degree of freedom of signal, which is calculated as the trace of the respective matrix block). Also for the δD proxy, N[2]O, and CH[4], the DOFS values are clearly larger than 1.0, indicating the capability of the retrieval to provide some information on the trace gases' vertical distribution. The cross kernel representing the impact of atmospheric δD on the retrieved H[2]O (${\mathbf{A}}_{\mathrm{12}}^{\prime }$ in Fig. 9) has the largest entries of all cross kernels; however, because variations in δD are smaller by an order of magnitude than variations in H[2]O, in reality this impact will be of secondary importance only. For consistency with the other data products we provide these kernels in the {ln[H[2]O],ln[HDO]} basis (not in the $\mathit{\left\{}\frac{\mathrm{1}}{\mathrm{2}}\left(\mathrm{ln}\left[{\mathrm{H}}_{\mathrm{2}}\mathrm{O}\right]+\mathrm{ln}\left[\mathrm {HDO}\right]\right),\left(\mathrm{ln}\left[\mathrm{HDO}\right]-\mathrm{ln}\left[{\mathrm{H}}_{\mathrm{2}}\mathrm{O}\right]\right)\mathit{\right\}}$ proxy basis used in Fig. 9). In the {ln[H[2]O],ln [HDO]} basis the cross kernels have very large and important entries, and we provide in all standard files all four blocks of the water vapour isotopologue kernels (the diagonal kernels and the cross Similarly we also provide in all standard files all four block kernels describing the greenhouse gases (kernels A[33], A[34], A[43], and A[44] in Fig. 9). Although the respective cross kernel values are rather small, their availability supports the precise characterisation of a combined CH[4]–N[2]O product, which has a higher precision than the individual N[2]O and CH[4] products (see discussion in García et al., 2018). Because HNO[3] has only weak spectroscopic signatures in the analysed spectral window, the respective kernel (A[55] in Fig. 9) reveals a pronounced maximum, which is limited to the lower/middle stratosphere. By tuning the constraint (see discussion at the end of Sect. 4.6), we obtain DOFS values of generally close to 1.0. We also provide atmospheric temperature profile kernels (not shown in Fig. 9), for which we typically obtain a DOFS value of about 2.0. Because we want to provide averaging kernels for each individual observation, we developed a compression procedure, which is necessary for keeping the size of the data files in an acceptable range. Section 5.2.4 describes the compression method, the format, and the variables in which the averaging kernels are provided. 5.2.2Metrics for sensitivity and resolution Table 2 gives an overview of metrics that can be calculated from the averaging kernel elements. In the previous section the DOFS metric has been introduced as the trace of the averaging kernel matrix. Figure 10 depicts the typical geographical distribution of the DOFS values for the different trace gas products. The largest values are generally achieved at low latitudes, except for HNO[3], where we obtain the largest values at middle and high latitudes. The high values for H[2]O indicate that we can detect H[2]O profiles everywhere around the globe but in particular at low latitudes. For δD and CH[4] we can also detect two independent altitude layers in the tropics and summer hemispheric subtropics. There is limited profiling capability for N[2]O and almost no profiling capability for HNO[3]. For the latter we occasionally find DOFS values of below 0.8 over the tropics, arid subtropical areas, and the central Antarctic. The DOFS values are provided in the variables with the suffix _dofs. For $\mathrm{1}<i<\mathrm{nal}$: $\mathrm{\Delta }{z}_{i}=\frac{{z}_{i+\mathrm{1}}-{z}_{i}}{\mathrm{2}}-\frac{{z}_{i}-{z}_{i-\mathrm{1}}}{\mathrm{2}}$; $\mathrm{\Delta }{z}_{\mathrm{1}}=\frac{{z}_{\ mathrm{2}}-{z}_{\mathrm{1}}}{\mathrm{2}}-{z}_{\mathrm{1}}$; $\mathrm{\Delta }{z}_{\mathrm{nal}}={z}_{\mathrm{nal}}-\frac{{z}_{\mathrm{nal}}-{z}_{\mathrm{nal}-\mathrm{1}}}{\mathrm{2}}$. Figure 11 shows vertical profiles of the averaging kernel metric measurement response (MR), layer width per DOFS (LWpD), information displacement (difference between the centre altitude, C, and nominal altitude, Alt), and resolving length (RL). The depicted profiles are for the averaging kernels of Fig. 9. The metrics are vectors, and each element of the vector represents a certain altitude. The equations for calculating the elements of these vectors are given in Table 2. The measurement response (MR) is the sum along the row of the averaging kernel matrix (Eriksson, 2000; Baron et al., 2002). It is provided in the variables with the suffix _response. If a retrieval provides a smoothed version of the truth, without systematically pushing results towards greater or smaller values, the sum of the elements over each row of the averaging kernel should be unity. Any deviation of the row sums from unity thus hints at an influence of the constraint that is beyond pure smoothing (von Clarmann et al., 2020). Depending on the trace gas we observe different altitudes with MR values close to unity (1±0.2): tropospheric altitudes for H[2]O and δD, altitudes between the free troposphere and the lower stratosphere for N[2]O and CH[4], and lower stratospheric altitudes for HNO[3]. Layer width per DOFS is calculated as the local grid width divided by the respective diagonal value of the averaging kernel matrix (Purser and Huang, 1993; Keppens et al., 2015). It is a reasonable measure for vertical resolution. For our example observation we see a very good vertical resolution for H[2]O almost throughout the troposphere. For δD the resolution is reasonable in the lower and middle troposphere, for N[2]O and CH[4] in the middle troposphere and upper troposphere–lower stratosphere, and for HNO[3] only in a very limited altitude region in the stratosphere. Maximum values in a row of the kernel matrix away from the diagonal means that the nominal altitude and the altitude of the maximum kernel values are different. For these altitudes LWpD values strongly increase, even if the MR value is still in a reasonable range (e.g. for CH[4] at about 15km). The centre altitude (C) indicates the atmospheric altitude region by which the retrieved values are mostly affected. In an optimal case this altitude region should correspond to the nominal altitude of the retrieval. A difference between the centre altitude and the nominal altitude (C−Alt) reveals a vertical information displacement; i.e. the signals reported by the retrieval for the nominal altitude are real atmospheric signals of a systematically different altitude. We observe very low information displacements for tropospheric H[2]O and middle tropospheric δD. For N[2]O and CH [4] the values are reasonable between the middle/upper troposphere and the lowermost stratosphere. For HNO[3] the centre value is almost the same for all altitudes; i.e. the signals retrieved at different altitudes reflect all the signals of the same real atmospheric altitude region. The resolving length (RL) indicates the vertical resolution at the centre altitude, i.e. the breadth of the atmospheric altitude layer by which the retrieved value is significantly affected. As briefly discussed in Rodgers (2000) resolving length is not a satisfactory definition of resolution for slowly decaying averaging kernels or for averaging kernels that have strong side lobes, for instance the MUSICA IASI kernels for H[2]O (see top left panel of Fig. 9). Resolving length and the centre altitude are calculated according to Eqs. (7) and (8) of Keppens et al. (2015). These parameters were originally introduced by Backus and Gilbert (1970) and are also discussed in chap. 3 of Rodgers (2000). The variables with the suffix _resolution provide the vertical information displacement and resolution metrics for each individual observation. As parameters 1 and 2 these variables provide the centre altitude (C) and the resolving length (RL), respectively, and as parameter 3 the layer width per DOFS value (LWpD). For the 74 observations provided in the extended output file (see Sect. 3.1) calculations of a large variety of Jacobians (and spectral responses for surface emissivity, spectroscopic parameters, and cloud coverage) and full gain matrices are available for a polar, mid-latitudinal, and tropical site (Borger et al., 2018). Figure 12 presents the errors calculated for a mid-latitudinal summer observation using the gain matrices and Jacobians (or spectral responses) according to Eqs. (A10) and (A11). The uncertainty assumption Δb and S[b] used for these calculations are summarised in Table 3. The measurement noise error is calculated according to Eq. (A12) with S[y,noise] being a diagonal matrix with diagonal values set to the mean-square value calculated from the spectral residuals (measured−simulated spectra). Hess et al., 1998Hess et al., 1998 We organise the errors in three categories: random errors (measurement noise, uncertainties in emissivity and atmospheric temperature, and interferences from atmospheric humidity and δD variations), spectroscopic errors (uncertainties in the water continuum modelling and uncertainties in the intensity and pressure-broadening parameters of all target trace gases), and errors due to unrecognised Concerning random errors, we find that atmospheric temperature uncertainties dominate the error budget for all retrieval products except for δD (because temperature uncertainties have similar impacts on H[2]O and HDO, they cancel out in their ratio). Measurement noise is the second most important error contributor (and the dominating error source for δD). Estimations of the dominating temperature error (assuming atmospheric temperature uncertainty covariances in line with August et al., 2012) and the measurement noise error are provided in standard files in the variables with the suffix _error, for all trace gas products (for the water vapour isotopologue in the proxy state basis) and for atmospheric temperature. By providing the cross averaging kernels with respect to atmospheric temperature (see matrix blocks filled by red colour at the right side of the schematics of Fig. 8) we can calculate the propagation of any assumed temperature profile uncertainty ΔT individually for all observations in the standard files, according to Eq. (A10): $\begin{array}{}\text{(15)}& \mathbf{\Delta }\stackrel{\mathrm{^}}{\mathbit{x}}=-{\mathbf{GK}}_{\mathrm{T}}\mathrm{\Delta }\mathbit{T}=-{\mathbf{A}}_{\mathrm{T}}\mathrm{\Delta }\mathbit{T},\end with K[T] being the Jacobians for atmospheric temperature and A[T] being the temperature cross kernel provided for all observations in the standard data file. For all observations we can also reconstruct the full error covariance matrix ${\mathbf{S}}_{\stackrel{\mathrm{^}}{x},\mathrm{noise}}$ due to the spectral noise used for constraining the solution state. For the MUSICA IASI processing we use a diagonal matrix with the mean-square values of the spectral residual (difference between the simulated and measured spectrum) as the spectral noise covariance S[y,noise]. According to Eqs. (A5) to (A8) and (A12) we can write $\begin{array}{}\text{(16)}& {\mathbf{S}}_{\stackrel{\mathrm{^}}{x},\mathrm{noise}}=\mathbf{A}\left(\mathbf{I}-\mathbf{A}\right){\mathbf{R}}^{-\mathrm{1}}.\end{array}$ H[2]O interferences from atmospheric δD and δD interferences from atmospheric H[2]O are also significant (blue and cyan lines in the random-error plots of Fig. 12). For this reason we provide in the standard file the four blocks of the water vapour isotopologues averaging kernels, which enables us to estimate these interferences for each individual observation. The error covariance due to interference of δD on H[2]O can be calculated by $\begin{array}{}\text{(17)}& {\mathbf{S}}_{\stackrel{\mathrm{^}}{x},\mathit{\delta }\mathrm{D},\mathrm{if}}={\mathbf{A}}_{\mathrm{12}}^{\prime }{\mathbf{S}}_{\mathrm{a},\mathit{\delta }\mathrm{D}}{\ mathbf{A}}_{\mathrm{12}}^{\prime }{}^{\mathrm{T}},\end{array}$ and the error due to interference of H[2]O on δD by $\begin{array}{}\text{(18)}& {\mathbf{S}}_{\stackrel{\mathrm{^}}{x},{\mathrm{H}}_{\mathrm{2}}\mathrm{O},\mathrm{if}}={\mathbf{A}}_{\mathrm{21}}^{\prime }{\mathbf{S}}_{\mathrm{a},{\mathrm{H}}_{\mathrm {2}}\mathrm{O}}{\mathbf{A}}_{\mathrm{21}}^{\prime }{}^{\mathrm{T}}\end{array}$ Here S[a,δD] and ${\mathbf{S}}_{\mathrm{a},{\mathrm{H}}_{\mathrm{2}}\mathrm{O}}$ are covariances of the δD and H[2]O proxy states, respectively, and ${\mathbf{A}}_{\mathrm{12}}^{\prime }$ and ${\ mathbf{A}}_{\mathrm{12}}^{\prime }$ are the cross kernels of the proxy states. Please note that the water vapour isotopologue kernels provided in the standard files are for the {ln[H[2]O],ln[HDO])} basis and not for the {$\frac{\mathrm{1}}{\mathrm{2}}\left(\mathrm{ln}\left[{\mathrm{H}}_{\mathrm{2}}\mathrm{O}\right]+\mathrm{ln}\left[\mathrm{HDO}\right]\right)$,(ln[HDO]−ln[H[2]O])} proxy state basis; i.e. to be used according to Eqs. (17) and (18) the provided kernels have to be transformed according to Eq. (4). Spectroscopic uncertainties cause mainly systematic errors. The assumed uncertainties in line intensity ΔS and pressure-broadening Δγ (see Table 3) are in reasonable agreement with the values reported in Gordon et al. (2017). Respective error estimations can be performed for the 74 exemplary observations provided in the extended data file over a polar, mid-latitudinal, and tropical site. As shown in Fig. 12 they are typically within 5%, except for HNO[3], where we estimate errors in the lower stratosphere due to spectroscopic uncertainties of up to 12% (mainly reflecting the larger uncertainty budget allowed for the band intensity). The uncertainties in the spectroscopic parameters of line intensity and pressure broadening mainly affect the retrieval of the trace gas, for which the parameters are assumed to be uncertain. Cross impacts are largest for uncertainties in water vapour parameters and there mostly for the water continuum (to a lesser extent for line intensity and pressure broadening). For this reason we plot the effect of the water continuum uncertainty for all trace gases, whereas we only show the effects of the line intensity and pressure-broadening parameters of the trace gas that is examined. MUSICA IASI retrievals are only executed when the EUMETSAT L2 PPF flag flag_cldnes is set to 1 (the IASI instrumental field of view, IFOV, is clear) or 2 (the IASI IFOV is processed as cloud-free, but small cloud contamination is possible). This means that in particular for MUSICA IASI retrievals made with a cloud flag value of 2, clouds can have an impact, which should be examined. For this reason we calculated a variety of different cloud spectral responses for our 74 exemplary observations over polar, mid-latitudinal, and tropical sites and provide them in the extended data files. Examples of the obtained errors are depicted on the right of Fig. 12. We find that clouds with the properties as described in Table 3 have a significant effect on the retrievals. The impact of a cirrus cloud is particularly strong, and the H[2]O and HNO[3] data products seem to be the most affected. However, in this context we also have to consider the natural variability in the different trace gas products. Because the natural variability in δD, N[2]O, and CH[4] is very small, uncertainties due to clouds of 1% can already be a large problem. In summary this estimation of errors due to unrecognised clouds indicates that we should be careful when using MUSICA IASI data products corresponding to an EUMETSAT L2 PPF cloud flag value of 2 (see also discussion in Sects. 6 and 7). 5.2.4Matrix compression In order to reduce the storage needs of the output files, we compress the averaging kernel matrices. For this compression we perform a singular value decomposition of the original averaging kernel $\begin{array}{}\text{(19)}& \mathbf{A}={\mathbf{UDV}}^{\mathrm{T}}\end{array}$ and a subsequent filtering for the leading eigenvalues. We only keep the most important eigenvalues and eigenvectors; i.e. we only keep a small part of the matrices U, D, and V. The variables that store this leading information on the averaging kernels have specific suffixes in their names. The variable with suffix _avk_rank stores the number (r) of the leading eigenvalues and eigenvectors that are kept. Suffix _avk_val identifies the variable containing the eigenvalues. The variables with suffixes _avk_lvec and _avk_rvec store the leading left and right eigenvectors. The reconstruction of the averaging kernel is made according to Eq. (19), whereby we setup the r×r diagonal matrix D consisting of the leading eigenvalues and the n×r matrices U and V consisting of the leading left and right eigenvectors. Here n is the numbers of elements in the considered state vector. When reconstructing all four blocks of the water vapour or greenhouse gas averaging kernels, $n= \mathrm{2}×\mathrm{nal}$. For the reconstruction of the HNO[3] or atmospheric temperature averaging kernels n=nal. For more details on the effectiveness of this compression method please refer to Weber (2019). The suffixes _xavkat_rank, _xavkat_val, _xavkat_lvec, and _xavkat_rvec identify the respective variables needed for the reconstruction of the temperature cross averaging kernels. In this case the right eigenvectors have the length of the atmospheric temperature state vector, which is different from the length of the atmospheric state vector in the case of the water vapour isotopologue and the greenhouse gas product (i.e. for the water vapour isotopologue and the greenhouse gas temperature cross averaging kernels, the left and right eigenvectors have different sizes). The MUSICA IASI retrieval data are provided with detailed information on the retrieval quality, the retrieval products' characteristics, and errors, as well as variables summarising cloud conditions and the main aspects of sensitivity, vertical resolution, and errors. In this section we discuss the variables providing this information and recommend possibilities for data filtering. The EUMETSAT L2 PPF flag flag_cldnes is written in the MUSICA IASI variable eumetsat_cloud_summary_flag. As discussed in Sect. 5.2.3 there is some risk that the MUSICA IASI product retrieved for eumetsat_cloud_summary_flag set to 2 has significant errors due to clouds. In order to exclude this risk we can filter out these data; i.e. we can use a very stringent cloud filtering criterion by using only observations where the variable eumetsat_cloud_summary_flag is set to 1. Another and less stringent option is to use in addition the EUMETSAT L2 fractional cloud cover, which is written in the MUSICA IASI variable eumetsat_cloud_area_fraction. If eumetsat_cloud_summary_flag is set to 2 we require in addition that the determination of a cloud area fraction has not been successful; i.e. we require that eumetsat_cloud_area_fraction is set to NaN. No clear determination of a value for fractional cloud cover means that the cloud signals are rather weak (the contrast between cloud and surface signals is smaller than the instrument noise). 6.2Quality of the spectral fit The spectral noise level considered in the cost function Eq. (A2) during the MUSICA IASI processing is the root-mean square (rms) of the spectral fit residual (difference between the simulated and measured spectrum). By this retrieval setting we use the degree to which the spectra can be understood by the forward model as the spectral noise level. The so-defined spectral noise level is generally larger than the pure instrumental noise level because it is a sum of the instrumental noise and the signatures that are not understood by the forward model. In the MUSICA IASI retrieval this rms value is treated as white noise; i.e. for S[y,noise] of the cost function Eq. (A2) we use a diagonal matrix filled by the mean-square values according to the spectral residuals. As long as the residual is close to white noise, this kind of processing ensures a correct weighting of the measured spectra, on the one hand, and the a priori information, on the other hand. However, occasionally the measured spectra are very poorly simulated by the forward model and the residuals cannot be described as white noise; instead the residuals show systematic signatures. This happens, for instance, if incorrect surface emissivities are used or if the retrieval is made for an observation that is affected by a cloud. In order to identify the systematic part of the residuals we smooth the residuals using a ±2cm^−1 running mean. The smoothed residuals are the systematic residuals, and the difference between the original residuals and the smoothed residuals can then be interpreted as the random (or white noise) residuals. Residuals, systematic residuals, and random residuals are provided in the standard files for each observation in the variable musica_fit_quality. In order to facilitate the filtering of data corresponding to a poor spectral fit quality, we set up a flag (provided as variable musica_fit_quality_flag) that works with the rms values of the systematic residuals and the random residuals. The flag is set to 0 (poor quality) if the systematic residuals have an rms value of larger than 40nW/(cm^2srcm^−1). For all other observations we analyse the ratio between the rms of the systematic residuals and the rms of the random residuals. If this ratio is larger than 1.0, the flag is set to 1 (restricted quality); if it is between 0.5 and 1.0, the flag is set to 2 (fair quality); and if it is smaller than or equal to 0.5, the flag is set to 3 (good quality). Figure 13 depicts residuals corresponding to different values of this fit quality flag. All observations are made during the same orbit, at close-by locations (northern Africa), and for very similar surface temperatures. It is very likely that the poor spectral fit quality is due to incorrect surface emissivity values used for the respective retrievals (over arid areas like northern Africa, surface emissivity data have an increased uncertainty; Seemann et al., 2008). Our recommendation is to use data that belong to the quality groups fair and good. For all observations and all trace gas products, the standard files provide estimations of the errors dominating the random-error budget: errors due to noise in the spectra and errors due to uncertainties in the atmospheric temperature a priori data (the EUMETSAT L2 PPF temperatures). The noise error and estimations of atmospheric temperature error are given in the error variable (variable with suffix _error; see Sect. 5.2.3) for all trace gas products and can be used for filtering out data with anomalously high errors. Incorrect spectroscopic parameters (line intensity, pressure-broadening coefficients, or water continuum modelling) can be responsible for large errors. Although these uncertainty sources are systematic, the errors they cause depend on the sensitivity of the remote sensing system, which in turn is affected by the geometry of the observation. In the first order the optical path of the measured radiances depends on the platform zenith angle (PZA, provided as the variable platform_zenith_angle). In order to avoid that systematic uncertainties in the spectroscopic parameters cause artificial signals, we can set threshold values for the PZA and limit the PZA to angles close to nadir (e.g. by requiring $\mathrm{PZA}\le \mathrm{30}{}^{\circ }$). 6.4Sensitivity and resolution The standard files provide the averaging kernels in a compressed format for all observations (see Sect. 5.2.4) as well as metrics that capture the most important aspects of the sensitivity and vertical resolution (see Sect. 5.2.2). These metrics are provided in the variables with the suffixes _response and _resolution and allow analyses of the sensitivity and vertical resolution for each individual observation without the need for reconstructing the averaging kernels. We can use the metrics for filtering out data where the response to the real atmospheric variability is low or where the vertical representativeness is irregular. In order to ensure a good sensitivity (retrieval product being mainly affected by the real atmosphere and not by the a priori assumption), the measurement response (MR) should be close to unity. Layer width per DOFS (LWpD), centre altitude displacement (C−Alt), and resolving length (RL) can be used to filter out data that do not fulfil the requirements in terms of the vertical representativeness needed for a dedicated study. Respective filter threshold values depend on the objective of the scientific study. If processes within vertically well confined layers shall be examined, rather small vertical displacement and very good vertical resolution are required, and thus very stringent thresholds should be set. In addition to filtering according to absolute values of LWpD, C−Alt, or RL, the respective metrics can also be used for the identification of groups of data that have a similar vertical representativeness. For instance, we can robustly analyse time series of data that have a stable vertical information displacement and a stable vertical resolution. For data where these conditions are not fulfilled, time series signals might be significantly affected by the time-variant data characteristics. The same is true when analysing horizontal patterns, which might partly be due to the pattern in the data characteristics and not a real atmospheric pattern. Each Metop satellite accomplishes about 14 orbits per day, which makes about 5100 orbits per year. For our MUSICA IASI retrieval period there are two or even three orbiting IASI instruments making operational measurements. Until the end of October 2019 there were IASI-A and IASI-B, and since November 2019 there has been in addition IASI-C. So we have more than 10000 Metop–IASI orbits and in consequence MUSICA IASI netCDF output files per year with useful measurements (see Sect. 3.1 for information on output data file nomenclature and format). As an average about 30% of all measurements are made for cloud-free conditions (EUMETSAT L2 PPF cloudiness assessment summary flag set to 1 or 2; see also Sect. 4.1). This makes about 25000 individual retrievals per orbit/output file. In the following we present examples of this large number of data. We select example altitudes where the respective products have generally a good sensitivity and reasonable vertical representativeness. According to Figs. 9 and 11 a good altitude choice is 4.2km for H[2]O and δD and 10.9km for N[2]O and CH[4]. For HNO[3] the MUSICA IASI processor does not provide profile information; instead the kernels for all altitudes show a similar vertical dependence and reveal retrieval sensitivity for a broad lower stratospheric layer. For this reason we aggregate the HNO[3] data in the form of partial column-averaged mixing ratios for the layer between 10 and 35km. Details on this resampling are given in Appendix C. We filter the data according to the settings and threshold values of Table 4. For all data we require “fair” and “good” for the MUSICA IASI spectral fit quality (flag variable musica_fit_quality_flag is required to be set to 2 or 3), and we filter the data using the EUMETSAT L2 PPF cloudiness assessment flag (provided as the variable eumetsat_cloud_summary_flag). For the N[2]O and CH[4] data we apply a more stringent cloud filter and further inspect data where the EUMETSAT L2 PPF cloudiness assessment summary flag indicates a possibility of small cloud contamination. For respective data we require that the EUMETSAT L2 processing cannot clearly attribute a value for fractional cloud cover, which means that the cloud signals are rather weak (see Sect. 6.1). We use this more stringent cloud filtering for N[2]O and CH[4] because both species have relatively weak atmospheric variabilities that are very similar to the errors estimated for a small cloud coverage (10% coverage with opaque cumulus clouds or 25% coverage with cirrus clouds). ^a Only if variable eumetsat_cloud_area_fraction is set to NaN. ^b For dry-air mixing ratios averaged for partial column 10–35kma.s.l. ^c Here the bottom thresholds are set to be below the lowest actually occurring positive value. Furthermore, we filter according to the retrieval fit noise and estimated atmospheric temperature errors. The respective errors are provided for each observation in the MUSICA IASI standard file output variables with the suffix _error. For HNO[3] we calculate the retrieval fit noise and the estimated temperature errors for the 10–35km partial column-averaged mixing ratios according to Eq. ( C7), whereby we reconstruct the noise covariance matrix for HNO[3] according to Eq. (16) and generate the atmospheric temperature covariance according to Eq. (7) using the MUSICA IASI standard file output variables musica_at_apriori_amp and musica_apriori_cl for setting up the values of v[amp,i] and σ[cl,i], respectively. In order to ensure that the time series signals or horizontal patterns are not significantly affected by varying sensitivity and vertical resolution, we filter H[2]O, δD, N[2]O, and CH[4] data according to the averaging kernel metrics MR, LWpD, and C−Alt. This filters out data with anomalous vertical sensitivities. For HNO[3] we calculate the 10–35km partial column-averaged mixing ratio averaging kernels according to Eq. (C6) and filter for good sensitivity by requiring a diagonal entry close to unity. The filter threshold values of LWpD and C−Alt are defined relative to the a priori assumed vertical correlation length (provided in the variable musica_apriori_cl). We use threshold values for these ratios that are constant for all altitudes, which allows for increased LWpD and C−Alt values in the case of an increased vertical correlation length (for altitudes with a larger correlation length, higher values of LWpD andC−Alt can be accepted). 7.2Continuous time series In this section we give an example of the temporal continuity of the data. Figure 14 depicts a time series at the mid-latitudinal site of Karlsruhe, Germany, between October 2014 and June 2021 of MUSICA IASI trace gas retrieval products. For all trace gases, except for δD, we have good temporal coverage with no significant data gaps caused by the comprehensive data filtering. Concerning δD, there is a reduced data volume in winter mainly due to filtering out data with reduced sensitivity (measurement response below 0.8). It is worth noting that for the {H[2]O, δD} pair optimal estimation product – generated a posteriori according to Diekmann et al. (2021) – we achieve a significantly better measurement response. We observe typical seasonal cycles for all species. The seasonal cycles of H[2]O and δD follow the seasonal cycle of temperature. In winter H[2]O concentrations can be as low as 100ppm and δD values can be below −350‰. In summer the maximum values are about 8000ppm and −150‰. The concentrations of N[2]O and CH[4] at 10.9kma.s.l. are lowest in winter–spring and highest in summer–autumn. This cycle is linked to the vertical shift of the tropopause altitude: in winter–spring the 10.9km altitude is much more strongly affected by the stratosphere (where N[2]O and CH[4] are decreasing with height) than in summer–autumn. Concerning HNO[3] we observe the highest values in winter–spring, which might indicate the detection of air masses with an Arctic stratospheric history (Arctic winter stratospheric HNO[3] mixing ratios are particularly large). 7.3Daily global maps In this section we give an example of the good daily global coverage achieved by high-quality MUSICA IASI products. Figure 15 depicts the data retained during 24h when using the filter setting listed in Table 4. For our example we choose 1 February and 1 August 2018 and plot the data for the same altitudes as in Fig. 14. For all data products, except for δD, we have very dense global coverage. Areas with missing data are mostly linked to the cloud filtering. The reduced data coverage for δD in the middle and high latitudinal winter hemispheres is due to δD measurement response values lying below 0.8 (we achieve a significantly better measurement response and thus horizontal coverage for the optimal estimation {H[2]O,δD} pair product generated according to Diekmann et al., The highest H[2]O concentrations at 4.2km are observed at low latitudes where temperatures are generally highest. However, there are also low latitudinal areas with rather low H[2]O concentrations, for instance in the eastern Pacific on 1 August 2018, which indicates a region where large-scale subsidence is prevailing. The δD values at 4.2km are also highest at low latitudes but with a stronger zonal variability. For high tropical H[2]O concentrations, δD values can be relatively high (for instance on 1 February 2018 in the tropical Atlantic) or relatively low (for instance on 1 February 2018 in the tropical Indian Ocean). This indicates that δD data contain information that is complementary to the H[2]O data. For N[2]O and CH[4] at 10.9km we observe maximum concentrations at low latitudes and rather low values in the polar regions. The reason for this is that at high latitudes the 10.9km altitude is strongly influenced by low stratospheric concentrations, whereas in the tropics the 10.9km altitude is representative of the upper troposphere, where concentrations are higher. This means that the concentrations observed at 10.9km reflect to a large extent the altitude of the tropopause. On 1 August 2018 we observe for both trace gases a clear gradient between the Northern Hemisphere and Southern Hemisphere, whereas there is no significant gradient on 1 February 2018. This is caused by higher tropospheric concentrations of both trace gases in the Northern Hemisphere. On 1 August 2018 the stratosphere affects the 10.9km altitude more strongly in the Southern Hemisphere than in the Northern Hemisphere and we observe particularly strong gradients. On 1 February 2018 it is the other way round and the tropospheric concentration gradients are counterbalanced by the tropopause altitude effect. The global maps of the HNO[3] 10–35km partial column-averaged mixing ratios show very low values in the tropics and the highest values in polar regions. However, in the Antarctic low values are also found in winter because at very low temperatures (<195K) polar stratospheric clouds (PSCs) are formed on which HNO[3] condensates. In the Arctic winter temperatures are generally not that low and PSCs and consequently low HNO[3] values are mainly restricted to areas with a local mountain lee wave occurrence. 8Interoperability and data reusage For each individual observation, the MUSICA IASI full retrieval product provides detailed information on retrieval settings (a priori and constraints) and retrieval characteristics (error covariances and averaging kernels). This comprehensive set of information ensures ultimate interoperability and offers the possibility of a variety of data reuse applications, in particular, because the MUSICA IASI inversion problem is a moderately non-linear problem (see Appendix B). In the following we briefly list some data reusage possibilities. For interoperability (the common use of different data sets or their inter-comparison) the impact of different a priori data should be assessed or eliminated. Assuming that the MUSICA IASI data (generated using a priori state x[a]) should be commonly used with (or inter-compared to) another remote sensing data set whose retrieval processor used the a priori state x[a,m], then we can calculate the MUSICA IASI retrieval state that would result from an x[a,m] a priori usage according to Eq. (B1). For these calculations we need, from the MUSICA IASI data, the originally retrieved state, the a priori state, and the averaging kernels, which are all provided by the MUSICA IASI full retrieval product. For comparisons to atmospheric model simulation or for data assimilation applications, a remote sensing product has to be made available together with full information about its error covariances and measurement operator. This is the case for the MUSICA IASI full retrieval product data set. For each individual observation the averaging kernels are made available and the full a posteriori covariances and the error covariances due to the fit residuals can be reconstructed from the provided constraint and the averaging kernel matrices according to Eqs. (A7) and (16), respectively. As shown in Sect. 7 and Appendix C the MUSICA IASI trace gas profiles can be easily resampled according to user-specific needs in the form of partial column-averaged mixing ratios with corresponding averaging kernels and error covariances. This is possible because the data set provides full information on pressure profiles, constraints (for reconstructing the error covariances due to the corresponding fit residuals, see Eq. 16), temperature cross kernels A[T] (in order to calculate the error covariances due to atmospheric temperature uncertainties, see Eq. 15), and averaging kernels. Worden et al. (2012) and García et al. (2018) discussed the advantages of a ${\mathrm{CH}}_{\mathrm{4}}/{\mathrm{N}}_{\mathrm{2}}\mathrm{O}$ ratio product. García et al. (2018) showed that this ratio product has a theoretically higher precision than the individual N[2]O and CH[4] products. Because N[2]O is chemically more stable than CH[4] in the troposphere, it is also more homogeneously distributed than CH[4]. García et al. (2018) argued that by combining ${\mathrm{CH}}_{\mathrm{4}}/{\mathrm{N}}_{\mathrm{2}}\mathrm{O}$ ratio observations with a model of the N[2]O climatology, it should be possible to determine tropospheric CH[4] concentration with relatively high precision. The MUSICA IASI full retrieval product provides information on constraints and the averaging kernels (including the cross averaging kernels between N[2]O and CH[4]); thus it offers all that is needed for calculating the ${\mathrm{CH}}_{\mathrm{4}}/{\mathrm{N}}_{\mathrm{2}}\mathrm{O}$ ratio product as well as the corresponding averaging kernels and error covariances. Another interesting data reuse possibility is that the retrievals' a priori data or the retrievals' constraints can be modified a posteriori in accordance to particular user requirements. According to Eq. (18) of Rodgers and Connor (2003), we can calculate the retrieval result (${\stackrel{\mathrm{^}}{\mathbit{x}}}_{\mathrm{m}}$) for a modified constraint (R[m]) by $\begin{array}{}\text{(20)}& {\stackrel{\mathrm{^}}{\mathbit{x}}}_{\mathrm{m}}={\mathbit{x}}_{\mathrm{a}}+{\mathbf{R}}_{\mathrm{m}}^{-\mathrm{1}}{\mathbf{A}}^{\mathrm{T}}\left({\mathbf{AR}}_{\mathrm Here x[a], A, ${\mathbf{S}}_{\stackrel{\mathrm{^}}{x},\mathrm{noise}}$, and $\stackrel{\mathrm{^}}{\mathbit{x}}$ are the a priori state, the averaging kernel, the error covariance due to retrieval fit noise, and the originally retrieved state, respectively. All this information is made available in (or can be reconstructed from the information provided by) the MUSICA IASI full retrieval product. Diekmann et al. (2021) present an optimal estimation {H[2]O,δD} pair product, which among others makes use of such a posteriori constraint modification. Schneider et al. (2021c) present another possibility for MUSICA IASI data reuse. They apply the extensive information provided in the MUSICA IASI full retrieval product for optimally combining MUSICA IASI CH[4] data with the total column XCH[4] retrieval products of the sensor TROPOMI (TROPOspheric Monitoring Instrument) aboard the satellite Sentinel-5P (Lorente et al., 2021) without the need for running new retrievals. This a posteriori product combination can be achieved by Kalman filter calculations (Kalman, 1960; Rodgers, 2000), which have large similarities to Eq. (20). The method optimally combines the MUSICA IASI retrieval state (vector $\stackrel{\mathrm{^}}{\mathbit{x}}$) with the information provided by the TROPOMI XCH[4] product (the scalar ${\stackrel{\mathrm{^}}{x}}_{\ mathrm{n}}$; we use index n for new observation): $\begin{array}{}\text{(21)}& {\stackrel{\mathrm{^}}{\mathbit{x}}}_{\mathrm{c}}=\stackrel{\mathrm{^}}{\mathbit{x}}+\stackrel{\mathrm{^}}{\mathbf{S}}{\mathbit{a}}_{\mathrm{n}}\left({\mathbit{a}}_{\ Here the vector ${\stackrel{\mathrm{^}}{\mathbit{x}}}_{\mathrm{c}}$ is the optimally combined state, the row vector ${\mathbit{a}}_{\mathrm{n}}^{\mathrm{T}}$ is the column averaging kernel of the TROPOMI XCH[4] observation, the scalar x[a] is the a priori XCH[4] data, and the vector x[a] is the a priori CH[4] profile. $\stackrel{\mathrm{^}}{\mathbf{S}}$ is the a posteriori covariance of the MUSICA IASI data, which can be reconstructed with averaging kernel and constraint matrices being available according to Eq. (A7). The scalar ${S}_{{\stackrel{\mathrm{^}}{x}}_{\mathrm{n},\mathrm {noise}}}$ is the measurement noise error variance of the TROPOMI XCH[4] product. Optimal means here that the uncertainties and sensitivities of the MUSICA IASI CH[4] product and the TROPOMI XCH[4] product are correctly taken into account. The MUSICA IASI data can be freely downloaded at http://www.imk-asf.kit.edu/english/musica-data.php (last access: 25 January 2022). We offer two data packages with DOIs. The first data package has a data volume of about 17.5GB and is linked to via https://doi.org/10.35097/408 (Schneider et al., 2021b). It contains example standard output data files for all MUSICA IASI retrievals made for a single day (more than 0.6 million) and a description of how to access the total data set (2014–2019, data volume 25TB) or parts of it. This data package is for users interested in the typical global daily data coverage and in information about how to download the large data volumes of global daily data for longer periods. The second data package contains the extended output data file, is only about 73MB, and is linked to via https://doi.org/10.35097/412 (Schneider et al., 2021a). It contains retrieval products for only 74 observations made at a polar, mid-latitudinal, and tropical location. It provides the same variables as the standard output files and in addition the variables with the prefixes musica_jac_ and musica_gain_, which are Jacobians (or spectral responses) for many different uncertainty sources and gain matrices (due to these additional variables it is called the extended output file). Because this data package is rather small, it is recommended to potential reviewers and to users for having a quick look at the data. MUSICA IASI data processing is ongoing. For IASI observations after June 2019 the MUSICA IASI processing versions 3.3.0 and 3.3.1 instead of 3.2.1 are used (the differences between the versions 3.2.1 and 3.3.x are of a technical nature and not noticeable by the data user). Data representing observations after 2019 will soon be made available to the public in the same format as the data presented here (such data are already depicted in Fig. 14). Measurements of the IASI instruments on the three satellites Metop-A, Metop-B, and Metop-C have been processed by the MUSICA IASI processor. The processing has been made globally for all measurements that are declared as likely cloud-free by the EUMETSAT L2 PPF cloud detection procedure. Here we report on the full retrieval product of the MUSICA IASI processing version 3.2.1 used for the observation time period between October 2014 and June 2019. This report is equally valid for version 3.3.x data (in use for observations after June 2019.) The full retrieval product is the comprehensive output of the main MUSICA IASI processing chain. It contains the simulated and the residual radiances (the difference between measured and simulated radiances), some flags and retrieval outputs provided by the EUMETSAT L2 PPF processing, full information on the MUSICA IASI retrieval settings, and the full MUSICA IASI retrieval output. For each observation we provide information on the MUSICA IASI a priori settings and constraints so that the data are very easily reproducible. The retrieval outputs are the trace gas profiles of H[2]O, HDO, N[2]O, CH[4], and HNO[3] as well as the atmospheric temperature profiles. Concerning H[2]O and HDO the retrieval is optimised for H[2]O and the ratio of $\mathrm{HDO}/{\mathrm{H}}_{\mathrm{2}}\mathrm {O}$. All products are provided with a very extensive characterisation. For each individual retrieval the leading errors are made available together with the averaging kernels. In order to reduce the data volume, the kernels are provided in a compressed data format and can be reconstructed by simple matrix calculations. In addition we provide variables with averaging kernels metrics that capture the most important characteristics of the vertical representativeness (sensitivity and vertical resolution). These variables can be used for identifying data with an acceptable vertical representativeness without the need for reconstructing the averaging kernels. We give some suggestions on how to use different flags, error information, and averaging kernel metrics for data filtering recommendable for the study of global distribution maps or time series. The output of a priori states and averaging kernels for each individual observation guarantees ultimate interoperability (the common use of different data sets or their inter-comparison). Furthermore, the additional supply of constraint matrices for each individual observation together with the averaging kernels enables us to reconstruct the a posteriori covariances and the retrieval fit noise error covariance. Having all this information available offers excellent data reuse possibilities. We can a posteriori adjust the a priori or the constraints to specific user needs or optimally combine the MUSICA IASI products with other remote sensing products without the need for running new retrievals. MUSICA IASI data processing is ongoing. For IASI observations starting after June 2019 the MUSICA IASI processing versions 3.3.x instead of 3.2.1 are used. In version 3.2.1 there are some very minor inconsistencies in setting up the vertical gridding and in setting the a priori of δD and the constraints for N[2]O, CH[4], and HNO[3], which are accounted for during the postprocessing step. In versions 3.3.x these inconsistencies have already been addressed before running the retrievals. This is the only difference between the processing versions, and it is actually not noticeable by the data user. The report provided here on version 3.2.1 data is equally valid for versions 3.3.x data. MUSICA IASI data for observations after June 2019 (processed using versions 3.3.x) will soon be made available for the public in the same format as the data presented here. Appendix A:Basics of retrieval theory and notations This appendix gives an overview of the theoretical basics and notations of optimal estimation remote sensing retrieval methods. It is meant as a compilation of the most important equations that are related to the discussions provided in this paper. Although it is similar to Sect. 2.1 of Borger et al. (2018), we think it is a very helpful support here for readers that are no experts in the field. Further details on remote sensing retrievals can be found in Rodgers (2000). Atmospheric remote sensing means that the atmospheric state is retrieved from the radiation measured after having interacted with the atmosphere. This interaction of radiation with the atmosphere is modelled by a radiative transfer model (also called the forward model, F), which enables relating the measurement vector and the atmospheric state vector by $\begin{array}{}\text{(A1)}& \mathbit{y}=\mathbit{F}\left(\mathbit{x},\mathbit{b}\right).\end{array}$ We measure y (the measurement vector, e.g. a thermal nadir spectrum in the case of IASI) and are interested in x (the atmospheric state vector). Vector b represents auxiliary parameters (like surface emissivity) or instrumental characteristics (like the instrumental line shape), which are not part of the retrieval state vector. However, a direct inversion of Eq. (A1) is generally not possible because there are many atmospheric states x that can explain one and the same measurement y. For solving this ill-posed problem a cost function J is set up that combines the information provided by the measurement with a priori known characteristics of the atmospheric state: $\begin{array}{}\text{(A2)}& \begin{array}{rl}J& =\left[\mathbit{y}-\mathbit{F}\left(\mathbit{x},\mathbit{b}\right){\right]}^{\mathrm{T}}{\mathbf{S}}_{y,\mathrm{noise}}^{-\mathrm{1}}\left[\mathbit{y} -\mathbit{F}\left(\mathbit{x},\mathbit{b}\right)\right]\\ & +\left[\mathbit{x}-{\mathbit{x}}_{\mathrm{a}}{\right]}^{\mathrm{T}}\mathbf{R}\left[\mathbit{x}-{\mathbit{x}}_{\mathrm{a}}\right].\end Here, the first term is a measure of the difference between the measured spectrum (represented by y) and the spectrum simulated for a given atmospheric state (represented by x) while taking into account the actual measurement noise (S[y,noise] is the measurement noise covariance matrix). The second term of the cost function Eq. (A2) constrains the atmospheric solution state (x) towards an a priori most likely state (x[a]), whereby the kind and strength of the constraint are defined by the constraint matrix R, for which we use an approximate inversion of the a priori covariance matrix S [a] (for more details see Sect. 4.6): $\begin{array}{}\text{(A3)}& \mathbf{R}\approx {\mathbf{S}}_{\mathrm{a}}^{-\mathrm{1}}.\end{array}$ The constrained solution is reached at the minimum of the cost function Eq. (A2). Due to the non-linear behaviour of F(x,b), the minimisation is generally achieved iteratively. For the (i+1)th iteration it is $\begin{array}{}\text{(A4)}& {\mathbit{x}}_{i+\mathrm{1}}={\mathbit{x}}_{\mathrm{a}}+{\mathbf{G}}_{i}\left[\mathbit{y}-\mathbit{F}\left({\mathbit{x}}_{i},\mathbit{b}\right)+{\mathbf{K}}_{i}\left({\ K is the Jacobian matrix (derivatives that capture how the measurement vector will change for changes in the atmospheric state x). G is the gain matrix (derivatives that capture how the retrieved state vector will change for changes in the measurement vector y). G can be calculated from K, S[y,noise], and R as $\begin{array}{}\text{(A5)}& \mathbf{G}=\left({\mathbf{K}}^{\mathrm{T}}{\mathbf{S}}_{y,\mathrm{noise}}^{-\mathrm{1}}\mathbf{K}+\mathbf{R}{\right)}^{-\mathrm{1}}{\mathbf{K}}^{\mathrm{T}}{\mathbf{S}}_ with the a posteriori covariance matrix ($\stackrel{\mathrm{^}}{\mathbf{S}}$): $\begin{array}{}\text{(A6)}& \stackrel{\mathrm{^}}{\mathbf{S}}=\left({\mathbf{K}}^{\mathrm{T}}{\mathbf{S}}_{y,\mathrm{noise}}^{-\mathrm{1}}\mathbf{K}+\mathbf{R}{\right)}^{-\mathrm{1}},\end{array}$ which can also be written as $\begin{array}{}\text{(A7)}& \stackrel{\mathrm{^}}{\mathbf{S}}=\left(\mathbf{I}-\mathbf{A}\right){\mathbf{R}}^{-\mathrm{1}},\end{array}$ where I is the identity operator and A the averaging kernel matrix. The averaging kernel is an important component of a remote sensing retrieval, and it is calculated as $\begin{array}{}\text{(A8)}& \mathbf{A}=\mathbf{GK}.\end{array}$ The averaging kernel A reveals how a small change in the real atmospheric state vector x affects the retrieved atmospheric state vector $\stackrel{\mathrm{^}}{\mathbit{x}}$: $\begin{array}{}\text{(A9)}& \stackrel{\mathrm{^}}{\mathbit{x}}-{\mathbit{x}}_{\mathrm{a}}=\mathbf{A}\left(\mathbit{x}-{\mathbit{x}}_{\mathrm{a}}\right).\end{array}$ The propagation of errors due to parameter uncertainties Δb can be estimated analytically with the help of the parameter Jacobian matrix K[b] (derivatives that capture how the measurement vector will change for changes in the parameter b). According to Eq. (A4), using the parameter b+Δb (instead of the correct parameter b) for the forward model calculations will result in an error in the atmospheric state vector of $\begin{array}{}\text{(A10)}& \mathbf{\Delta }\stackrel{\mathrm{^}}{\mathbit{x}}=-{\mathbf{GK}}_{b}\mathrm{\Delta }b.\end{array}$ The respective error covariance matrix ${\mathbf{S}}_{\stackrel{\mathrm{^}}{x},b}$ is $\begin{array}{}\text{(A11)}& {\mathbf{S}}_{\stackrel{\mathrm{^}}{x},b}={\mathbf{GK}}_{b}{\mathbf{S}}_{b}{\mathbf{K}}_{b}^{\mathrm{T}}{\mathbf{G}}^{\mathrm{T}},\end{array}$ where S[b] is the covariance matrix of the uncertainties Δb. Noise in the measured radiances also affects the retrievals. The error covariance matrix for noise can be analytically calculated as $\begin{array}{}\text{(A12)}& {\mathbf{S}}_{\stackrel{\mathrm{^}}{x},\mathrm{noise}}={\mathbf{GS}}_{y,\mathrm{noise}}{\mathbf{G}}^{\mathrm{T}},\end{array}$ where S[y,noise] is the covariance matrix for noise on the measured radiances y. Note that Eqs. (A5) to (A12) are only valid for a moderately non-linear inversion problem (see chap. 5 of Rodgers, 2000). In Appendix B we show that our inversion problem is of such a kind. As outlined in Sect. 4 the MUSICA IASI processor uses a logarithmic scale for constraining the trace gas retrievals. We strongly recommend working on the logarithmic scale for the analytic treatment of the trace gas states. This is very obvious in the context of the water vapour isotopologue proxy introduced in Sect. 4.4.2 (a transformation to the proxy state is only possible on the logarithmic scale). In addition, the analytic treatment of the states is important for characterising the data in the context of Eqs. (A9)–(A12) or Eqs. (15)–(18), and it can also be used for modifying the retrieval settings without the need for performing new computationally expensive retrieval calculations (see chap. 10 of Rodgers, 2000). However, a requirement for the analytic treatment is that the problem is moderately non-linear (linearisation is adequate for the analytic treatment but not for finding the solution; see chap. 5 of Rodgers, 2000). In this Appendix we demonstrate that our problem is indeed moderately non-linear as long as we perform the calculations on the logarithmic scale. B1Setup of the linearity test We test the validity of assuming linearity for the analytic treatment by performing retrievals with different a priori settings. The standard setting is described in Sect. 4.5. It has a dependence on latitude as well as on seasonal and interannual timescales. For the test we perform additional retrievals with a priori data that have no latitudinal dependence; i.e. for all latitudes we use a latitudinal mean a priori profile. The additional retrievals are made for the Metop-A orbit no. 51267, whose footprints are depicted on the left of Fig. B1. We choose this orbit because it has a good global representativeness: the first part consists of observations over land and covers many different latitudes (western Asia to South Africa) and the second part of observation over sea from pole to pole (Pacific Ocean). The right panels of Fig. B1 show the differences between the modified latitudinal mean a priori profile and the a priori profiles used for the standard retrieval (a priori from Sect. 4.5). We investigate here retrievals of H[2]O and CH[4]. For H[2]O the standard a priori profiles have a large latitudinal dependence, and the difference from the latitudinal mean a priori profile is occasionally even outside ±200%. For CH[4] there is also a clear latitudinal dependence in the standard a priori profiles, which is, however, much smaller than for H[2]O: below the stratosphere the difference with respect to the latitudinal mean CH[4] profile is within ±10%. According to Eq. (A9) we can also simulate the retrieval for the modified a priori by $\begin{array}{}\text{(B1)}& {\stackrel{\mathrm{^}}{\mathbit{x}}}_{\mathrm{m}}=\stackrel{\mathrm{^}}{\mathbit{x}}+\left(\mathbf{I}-\mathbf{A}\right)\left({\mathbit{x}}_{\mathrm{a},\mathrm{m}}-{\ Here ${\stackrel{\mathrm{^}}{\mathbit{x}}}_{\mathrm{m}}$ is the retrieval results that would be obtained using the modified a priori, $\stackrel{\mathrm{^}}{\mathbit{x}}$ is the original retrieval result, I is the identity matrix, A is the averaging kernel matrix, x[a,m] is the modified a priori, and x[a] the original a priori. The linearity test consists in comparing the results obtained by the full retrieval using the modified a priori data and the results obtained by using the analytic treatment according to Eq. (B1). B2Test results for logarithmic and linear scale The results of the linearity test are shown in Fig. B2. We demonstrate the impact of the modified a priori by calculating the differences between the original retrieval and the additional retrieval using the modified a priori profiles. We make a latitudinally dependent characterisation of these differences by calculating root-mean-square (rms) values of the differences within 5^∘ latitude bands. Latitudinal cross sections of these rms differences are depicted on the left of Fig. B2 and reveal that the impact of the modified a priori on the retrieval is largest at the winter polar regions (high southern latitudes). This is where we find large differences between the original and the modified a priori (see Fig. B1) and where at the same time the retrieval sensitivity is relatively low (see DOFS maps in Fig. 10). The centre and right columns of Fig. B2 show the 5^∘ latitude band rms values for differences between the additional retrieval using the modified a priori profiles and the modification according to the analytic calculations of Eq. (B1). The centre column shows the results when performing the calculations of Eq. (B1) on the logarithmic scale. We observe that with the analytic calculations we can almost achieve the same results as with the full retrieval calculations. This indicates that the assumption of linearity for such analytic calculation is indeed valid. The right column shows the results when performing the calculations of Eq. (B1) on the linear scale; i.e. state vectors as well as derivatives (here the averaging kernel entries) are used on the linear scale ($\partial x=x\ partial \mathrm{ln}\left[x\right]$). The linearity assumption is not valid when performing the analytic calculation for H[2]O on the linear scale. We see very large differences between the full retrieval results and the results obtained by Eq. (B1). For CH[4] the linear-scale analytic calculations have worse agreement with the full retrievals than the logarithmic-scale calculations; however, they are not that pronounced as in the case of H[2]O. The reason for this is the setup of the linearity test (see Fig. B1 and the corresponding discussion): for the test the modification of the CH[4] a priori is weak, but for H[2]O the a priori modification is rather strong. In summary, the test shows that the assumption of linearity needed for an analytic treatment of the MUSICA IASI trace gas data is valid. Nevertheless, we have to be careful. Because the retrievals are performed in the logarithmic scale, the analytic calculation that uses the averaging kernels, gain matrices, or constraint matrices should also be performed on the logarithmic scale. On this scale the linearity assumption is valid, which is contrary to the linear scale, where the linearity assumption is not valid, meaning that an analytic treatment on the linear scale can lead to large Appendix C:Partial column-averaged mixing ratios For converting mixing ratio profiles into amount profiles we set up a pressure weighting operator Z as a diagonal matrix with the following entries: $\begin{array}{}\text{(C1)}& {Z}_{i,i}=\frac{\mathrm{\Delta }{p}_{i}}{{g}_{i}{m}_{\mathrm{air}}\left(\mathrm{1}+\frac{{m}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{O}}}{{m}_{\mathrm{air}}}{\stackrel{\mathrm Using the pressure p[i] at atmospheric grid level i we use $\mathrm{\Delta }{p}_{\mathrm{1}}=\frac{{p}_{\mathrm{2}}-{p}_{\mathrm{1}}}{\mathrm{2}}-{p}_{\mathrm{1}}$, $\mathrm{\Delta }{p}_{\mathrm {nal}}={p}_{\mathrm{nal}}-\frac{{p}_{\mathrm{nal}}-{p}_{\mathrm{nal}-\mathrm{1}}}{\mathrm{2}}$, and $\mathrm{\Delta }{p}_{i}=\frac{{p}_{i+\mathrm{1}}-{p}_{i}}{\mathrm{2}}-\frac{{p}_{i}-{p}_{i-\mathrm {1}}}{\mathrm{2}}$ for $\mathrm{1}<i<\mathrm{nal}$. Furthermore, g[i] is the gravitational acceleration at level i; m[air] and ${m}_{{\mathrm{H}}_{\mathrm{2}}\mathrm{O}}$ the molecular mass of dry air and water vapour, respectively; and ${\stackrel{\mathrm{^}}{x}}_{i}^{{\mathrm{H}}_{\mathrm{2}}\mathrm{O}}$ the retrieved water vapour mixing ratio at level i. We define an operator W^T for resampling fine gridded atmospheric amount profiles into coarse gridded atmospheric partial column amount profiles. It has the dimension c×nal, where c is the number of the resampled coarse atmospheric grid levels and nal the number of atmospheric levels of the original fine atmospheric grid. Each line of the operator has the value 1 for the levels that are resampled and 0 for all other levels: $\begin{array}{}\text{(C2)}& {\mathbf{W}}^{\mathrm{T}}=\left(\begin{array}{ccccccccc}\mathrm{1}& \mathrm{\cdots }& \mathrm{1}& \mathrm{0}& \mathrm{\cdots }& \mathrm{\cdots }& \mathrm{\cdots }& \ mathrm{\cdots }& \mathrm{0}\\ \mathrm{0}& \mathrm{\cdots }& \mathrm{0}& \mathrm{1}& \mathrm{\cdots }& \mathrm{1}& \mathrm{0}& \mathrm{\cdots }& \mathrm{0}\\ \mathrm{0}& \mathrm{\cdots }& \mathrm{\ cdots }& \mathrm{\cdots }& \mathrm{\cdots }& \mathrm{0}& \mathrm{1}& \mathrm{\cdots }& \mathrm{1}\end{array}\right).\end{array}$ We can combine the operators Z and W^T and calculate a pressure-weighted resampling operator by $\begin{array}{}\text{(C3)}& {{\mathbf{W}}^{*}}^{\mathrm{T}}=\left({\mathbf{W}}^{\mathrm{T}}\mathbf{ZW}{\right)}^{-\mathrm{1}}{\mathbf{W}}^{\mathrm{T}}\mathbf{Z}.\end{array}$ This operator resamples linear-scale mixing ratio profiles into linear-scale partial column-averaged mixing ratio profiles. With operator ${{\mathbf{W}}^{*}}^{\mathrm{T}}$ we can calculate a coarse gridded partial column-averaged state ${\stackrel{\mathrm{^}}{\mathbit{x}}}^{*}$ from the fine gridded linear mixing ratio state $\stackrel{\mathrm{^}}{\mathbit{x}}$ by $\begin{array}{}\text{(C4)}& {\stackrel{\mathrm{^}}{\mathbit{x}}}^{*}={{\mathbf{W}}^{*}}^{\mathrm{T}}\stackrel{\mathrm{^}}{\mathbit{x}}.\end{array}$ Furthermore, we introduce an operator L for transferring the differentials from the logarithmic mixing ratio scale to differentials in linear mixing ratio scale. It is a diagonal matrix having the elements of the linear-scale atmospheric mixing ratios' state as the diagonal elements: $\begin{array}{}\text{(C5)}& {L}_{i,i}={\stackrel{\mathrm{^}}{x}}_{i}\phantom{\rule{0.25em}{0ex}}.\end{array}$ The averaging kernel of the partial column-averaged state can be calculated from the averaging kernel of the fine gridded logarithmic scale (A) by $\begin{array}{}\text{(C6)}& {\mathbf{A}}^{*}\approx {{\mathbf{W}}^{*}}^{\mathrm{T}}{\mathbf{LAL}}^{-\mathrm{1}}\mathbf{W}.\end{array}$ This kernel describes how a change in the partial column-averaged mixing ratios affects the retrieved partial column-averaged mixing ratios. This is an approximation because on the right side the diagonal values of L should be the actual mixing ratios instead of those retrieved. The matrix W is an interpolation matrix that resamples the coarse gridded partial column-averaged mixing ratio profiles as a fine gridded mixing ratio profile without modifying the partial columns. It is ${{\mathbf{W}}^{*}}^{\mathrm{T}}\mathbf{W}=\mathbf{I}$, which can be easily seen by using ${{\mathbf{W}}^ {*}}^{\mathrm{T}}$ from Eq. (C3). The covariances of the partial column-averaged mixing ratio state can be calculated from the corresponding covariance matrices of the fine gridded logarithmic scale (S) by $\begin{array}{}\text{(C7)}& {\mathbf{S}}^{*}\approx {{\mathbf{W}}^{*}}^{\mathrm{T}}{\mathbf{LSLW}}^{*}.\end{array}$ Here the approximation is because Δx≈xΔlnx. MS set up the MUSICA IASI retrieval, designed the netCDF CF conform MUSICA IASI output files, made the calculations in the context of the extended output file, developed and performed the compression of the averaging kernel output, and wrote this paper. BE developed the efficient MUSICA IASI processing chain and ran the processing at the supercomputer ForHLR. CJD supported the paper with several graphics. FK, CJD, and BE helped in preparing the MUSICA IASI output files with the compressed averaging kernels. AW developed the software tool for compressing the averaging kernels. FH developed the PROFFIT-nadir retrieval code. MH provided the code use for the MT_CKD water continuum calculations and helped with the scattering calculation needed for the cloud spectral responses. OEG and ES provided the MUSICA IASI processed data product generated at the Teide supercomputer. DK provided the CESM1–WACCM data used for generating the MUSICA IASI trace gas a priori data. All authors contributed with corrections and comments to the final version of the manuscript. The contact author has declared that neither they nor their co-authors have any competing interests. Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This work has strongly benefited from the project MUSICA (funded by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013), ERC grant agreement number 256961), from financial support in the context of the projects MOTIV and TEDDY (funded by the Deutsche Forschungsgemeinschaft under project IDs [Geschäftszeichen] 290612604/GZ:SCHN1126/2-1 and 416767181/GZ:SCHN1126/5-1, respectively), and from INMENSE (funded by the Ministerio de Economía y Competividad from Spain, CGL2016-80688-P). Retrieval calculations for this work were performed on the supercomputer ForHLR funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the German Federal Ministry of Education and Research. Furthermore, we acknowledge the contribution of Teide High Performance Computing (TeideHPC) facilities. TeideHPC facilities are provided by the Instituto Tecnológico y de Energías Renovables (ITER), S.A (https://teidehpc.iter.es, last access: 25 January 2022). This research has been supported by the European Research Council, FP7 Ideas (MUSICA (grant no. 256961)); the Deutsche Forschungsgemeinschaft (grant nos. 290612604, project MOTIV, and 416767181, project TEDDY); the Ministerio de Economía y Competitividad (grant no. CGL2016-80688-P, project INMENSE); the Bundesministerium für Bildung und Forschung (ForHLR supercomputer); and the Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg (ForHLR supercomputer). This paper was edited by Nellie Elguindi and reviewed by Leonid Yurganov and one anonymous referee. August, T., Klaes, D., Schluessel, P., Hultberg, T., Crapeau, M., Arriaga, A., O'Carroll, A., Coppens, D., Munro, R., and Calbet, X.: IASI on Metop-A: Operational Level 2 retrievals after five years in orbit, J. Quant. Spectrosc. Ra., 113, 1340–1371, https://doi.org/10.1016/j.jqsrt.2012.02.028, 2012.a, b, c, d Backus, G. E. and Gilbert, F.: Uniqueness in the Inversion of inaccurate Gross Earth Data, Philos. T. R. Soc. A, 266, 123–192, 1970.a Baldridge, A., Hook, S., Grove, C., and Rivera, G.: The ASTER spectral library version 2.0, Remote Sens. Environ., 113, 711–715, https://doi.org/10.1016/j.rse.2008.11.007, 2009.a Baron, P., Ricaud, P., de la Noë, J., Eriksson, J. E., Merino, F., Ridal, M., and Murtagh, D. P.: Studies for the Odin sub-millimetre radiometer. II. Retrieval methodology, Can. J. Phys., 80, 341–356, https://doi.org/10.1139/p01-150, 2002.a Barthlott, S., Schneider, M., Hase, F., Wiegele, A., Christner, E., González, Y., Blumenstock, T., Dohe, S., García, O. E., Sepúlveda, E., Strong, K., Mendonca, J., Weaver, D., Palm, M., Deutscher, N. M., Warneke, T., Notholt, J., Lejeune, B., Mahieu, E., Jones, N., Griffith, D. W. T., Velazco, V. A., Smale, D., Robinson, J., Kivi, R., Heikkinen, P., and Raffalski, U.: Using XCO[2] retrievals for assessing the long-term consistency of NDACC/FTIR data sets, Atmos. Meas. Tech., 8, 1555–1573, https://doi.org/10.5194/amt-8-1555-2015, 2015.a Barthlott, S., Schneider, M., Hase, F., Blumenstock, T., Kiel, M., Dubravica, D., García, O. E., Sepúlveda, E., Mengistu Tsidu, G., Takele Kenea, S., Grutter, M., Plaza-Medina, E. F., Stremme, W., Strong, K., Weaver, D., Palm, M., Warneke, T., Notholt, J., Mahieu, E., Servais, C., Jones, N., Griffith, D. W. T., Smale, D., and Robinson, J.: Tropospheric water vapour isotopologue data (${{\ mathrm{H}}_{\mathrm{2}}}^{\mathrm{16}}\mathrm{O}$, ${{\mathrm{H}}_{\mathrm{2}}}^{\mathrm{18}}\mathrm{O}$, and HD^16O) as obtained from NDACC/FTIR solar absorption spectra, Earth Syst. Sci. Data, 9, 15–29, https://doi.org/10.5194/essd-9-15-2017, 2017.a Blumstein, D., Chalon, G., Carlier, T., Buil, C., Hebert, P., Maciaszek, T., Ponce, G., Phulpin, T., Tournier, B., Simeoni, D., Astruc, P., Clauss, A., Kayal, G., and Jegou, R.: IASI instrument: technical overview and measured performances, in: Infrared Spaceborne Remote Sensing XII, edited by: Strojnik, M., vol. 5543, International Society for Optics and Photonics, SPIE, 196–207, https:// doi.org/10.1117/12.560907, 2004.a Borger, C., Schneider, M., Ertl, B., Hase, F., García, O. E., Sommer, M., Höpfner, M., Tjemkes, S. A., and Calbet, X.: Evaluation of MUSICA IASI tropospheric water vapour profiles using theoretical error assessments and comparisons to GRUAN Vaisala RS92 measurements, Atmos. Meas. Tech., 11, 4981–5006, https://doi.org/10.5194/amt-11-4981-2018, 2018.a, b, c Boynard, A., Hurtmans, D., Garane, K., Goutail, F., Hadji-Lazaro, J., Koukouli, M. E., Wespes, C., Vigouroux, C., Keppens, A., Pommereau, J.-P., Pazmino, A., Balis, D., Loyola, D., Valks, P., Sussmann, R., Smale, D., Coheur, P.-F., and Clerbaux, C.: Validation of the IASI FORLI/EUMETSAT ozone products using satellite (GOME-2), ground-based (Brewer–Dobson, SAOZ, FTIR) and ozonesonde measurements, Atmos. Meas. Tech., 11, 5125–5152, https://doi.org/10.5194/amt-11-5125-2018, 2018.a Christner, E., Aemisegger, F., Pfahl, S., Werner, M., Cauquoin, A., Schneider, M., Hase, F., Barthlott, S., and Schädler, G.: The Climatological Impacts of Continental Surface Evaporation, Rainout, and Subcloud Processes on δD of Water Vapor and Precipitation in Europe, J. Geophys. Res.-Atmos., 123, 4390–4409, https://doi.org/10.1002/2017JD027260, 2018.a Clerbaux, C., Boynard, A., Clarisse, L., George, M., Hadji-Lazaro, J., Herbin, H., Hurtmans, D., Pommier, M., Razavi, A., Turquety, S., Wespes, C., and Coheur, P.-F.: Monitoring of atmospheric composition using the thermal infrared IASI/MetOp sounder, Atmos. Chem. Phys., 9, 6041–6054, https://doi.org/10.5194/acp-9-6041-2009, 2009.a, b Delamere, J. S., Clough, S. A., Payne, V. H., Mlawer, E. J., Turner, D. D., and Gamache, R. R.: A far-infrared radiative closure study in the Arctic: Application to water vapor, J. Geophys. Res.-Atmos., 115, D17106, https://doi.org/10.1029/2009JD012968, 2010.a, b De Wachter, E., Barret, B., Le Flochmoën, E., Pavelin, E., Matricardi, M., Clerbaux, C., Hadji-Lazaro, J., George, M., Hurtmans, D., Coheur, P.-F., Nedelec, P., and Cammas, J. P.: Retrieval of MetOp-A/IASI CO profiles and validation with MOZAIC data, Atmos. Meas. Tech., 5, 2843–2857, https://doi.org/10.5194/amt-5-2843-2012, 2012.a De Wachter, E., Kumps, N., Vandaele, A. C., Langerock, B., and De Mazière, M.: Retrieval and validation of MetOp/IASI methane, Atmos. Meas. Tech., 10, 4623–4638, https://doi.org/10.5194/ amt-10-4623-2017, 2017.a Diekmann, C. J., Schneider, M., Ertl, B., Hase, F., García, O., Khosrawi, F., Sepúlveda, E., Knippertz, P., and Braesicke, P.: The global and multi-annual MUSICA IASI {H[2]O, δD} pair dataset, Earth Syst. Sci. Data, 13, 5273–5292, https://doi.org/10.5194/essd-13-5273-2021, 2021.a, b, c, d, e Dirksen, R. J., Sommer, M., Immler, F. J., Hurst, D. F., Kivi, R., and Vömel, H.: Reference quality upper-air measurements: GRUAN data processing for the Vaisala RS92 radiosonde, Atmos. Meas. Tech., 7, 4463–4490, https://doi.org/10.5194/amt-7-4463-2014, 2014.a Dyroff, C., Sanati, S., Christner, E., Zahn, A., Balzer, M., Bouquet, H., McManus, J. B., González-Ramos, Y., and Schneider, M.: Airborne in situ vertical profiling of HDO$/$${{\mathrm{H}}_{\mathrm {2}}}^{\mathrm{16}}\mathrm{O}$ in the subtropical troposphere during the MUSICA remote sensing validation campaign, Atmos. Meas. Tech., 8, 2037–2049, https://doi.org/10.5194/amt-8-2037-2015, 2015.a, Eriksson, P.: Analysis and comparison of two linear regularization methods for passive atmospheric observations, J. Geophys. Res.-Atmos., 105, 18157–18167, https://doi.org/10.1029/2000JD900172, Franco, B., Clarisse, L., Stavrakou, T., Müller, J.-F., Van Damme, M., Whitburn, S., Hadji-Lazaro, J., Hurtmans, D., Taraborrelli, D., Clerbaux, C., and Coheur, P.-F.: A General Framework for Global Retrievals of Trace Gases From IASI: Application to Methanol, Formic Acid, and PAN, J. Geophys. Res.-Atmos., 123, 13963–13984, https://doi.org/10.1029/2018JD029633, 2018.a Gamache, R. R., Lamouroux, J., Blot-Lafon, V., and Lopes, E.: An intercomparison of measured pressure-broadening, pressure shifting parameters of carbon dioxide and their temperature dependence, J. Quant. Spectrosc. Ra., 135, 30–43, https://doi.org/10.1016/j.jqsrt.2013.11.001, 2014.a García, O. E., Schneider, M., Ertl, B., Sepúlveda, E., Borger, C., Diekmann, C., Wiegele, A., Hase, F., Barthlott, S., Blumenstock, T., Raffalski, U., Gómez-Peláez, A., Steinbacher, M., Ries, L., and de Frutos, A. M.: The MUSICA IASI CH[4] and N[2]O products and their comparison to HIPPO, GAW and NDACC FTIR references, Atmos. Meas. Tech., 11, 4171–4215, https://doi.org/10.5194/amt-11-4171-2018, 2018.a, b, c, d, e, f Gomez-Pelaez, A. J., Ramos, R., Cuevas, E., Gomez-Trueba, V., and Reyes, E.: Atmospheric CO2, CH4, and CO with the CRDS technique at the Izaña Global GAW station: instrumental tests, developments, and first measurement results, Atmos. Meas. Tech., 12, 2043–2066, https://doi.org/10.5194/amt-12-2043-2019, 2019.a González, Y., Schneider, M., Dyroff, C., Rodríguez, S., Christner, E., García, O. E., Cuevas, E., Bustos, J. J., Ramos, R., Guirado-Fuentes, C., Barthlott, S., Wiegele, A., and Sepúlveda, E.: Detecting moisture transport pathways to the subtropical North Atlantic free troposphere using paired H[2]O-δD in situ measurements, Atmos. Chem. Phys., 16, 4251–4269, https://doi.org/10.5194/ acp-16-4251-2016, 2016.a, b Gordon, I., Rothman, L., Hill, C., Kochanov, R., Tan, Y., Bernath, P., Birk, M., Boudon, V., Campargue, A., Chance, K., Drouin, B., Flaud, J.-M., Gamache, R., Hodges, J., Jacquemart, D., Perevalov, V., Perrin, A., Shine, K., Smith, M.-A., Tennyson, J., Toon, G., Tran, H., Tyuterev, V., Barbe, A., Császár, A., Devi, V., Furtenbacher, T., Harrison, J., Hartmann, J.-M., Jolly, A., Johnson, T., Karman, T., Kleiner, I., Kyuberis, A., Loos, J., Lyulin, O., Massie, S., Mikhailenko, S., Moazzen-Ahmadi, N., Müller, H., Naumenko, O., Nikitin, A., Polyansky, O., Rey, M., Rotger, M., Sharpe, S., Sung, K., Starikova, E., Tashkun, S., Auwera, J. V., Wagner, G., Wilzewski, J., Wcisło, P., Yu, S., and Zak, E.: The HITRAN2016 molecular spectroscopic database, J. Quant. Spectrosc. Ra., 203, 3–69, https://doi.org/10.1016/j.jqsrt.2017.06.038, 2017.a, b, c Hase, F., Hannigan, J. W., Coffey, M. T., Goldman, A., Höpfner, M., Jones, N. B., Rinsland, C. P., and Wood, S.: Intercomparison of retrieval codes used for the analysis of high-resolution, J. Quant. Spectrosc. Ra., 87, 25–52, 2004.a, b, c Hess, M., Koepke, P., and Schult, I.: Optical Properties of Aerosols and Clouds: The Software Package OPAC, B. Am. Meteorol. Soc., 79, 831–844, https://doi.org/10.1175/1520-0477(1998)079<0831:OPOAAC> 2.0.CO;2, 1998.a, b, c, d, e Kalman, R. E.: A new approach to linear filtering and prediction problems, J. Basic Eng., 82, 35–45, https://doi.org/10.1115/1.3662552, 1960.a Karion, A., Sweeney, C., Tans, P., and Newberger, T.: AirCore: An Innovative Atmospheric Sampling System, J. Atmos. Ocean. Tech., 27, 1839–1853, https://doi.org/10.1175/2010JTECHA1448.1, 2010.a Keim, C., Eremenko, M., Orphal, J., Dufour, G., Flaud, J.-M., Höpfner, M., Boynard, A., Clerbaux, C., Payan, S., Coheur, P.-F., Hurtmans, D., Claude, H., Dier, H., Johnson, B., Kelder, H., Kivi, R., Koide, T., López Bartolomé, M., Lambkin, K., Moore, D., Schmidlin, F. J., and Stübi, R.: Tropospheric ozone from IASI: comparison of different inversion algorithms and validation with ozone sondes in the northern middle latitudes, Atmos. Chem. Phys., 9, 9329–9347, https://doi.org/10.5194/acp-9-9329-2009, 2009.a Keppens, A., Lambert, J.-C., Granville, J., Miles, G., Siddans, R., van Peet, J. C. A., van der A, R. J., Hubert, D., Verhoelst, T., Delcloo, A., Godin-Beekmann, S., Kivi, R., Stübi, R., and Zehner, C.: Round-robin evaluation of nadir ozone profile retrievals: methodology and application to MetOp-A GOME-2, Atmos. Meas. Tech., 8, 2093–2120, https://doi.org/10.5194/amt-8-2093-2015, 2015.a, b, c Koepke, P., Gasteiger, J., and Hess, M.: Technical Note: Optical properties of desert aerosol with non-spherical mineral particles: data incorporated to OPAC, Atmos. Chem. Phys., 15, 5947–5956, https://doi.org/10.5194/acp-15-5947-2015, 2015.a Kohlhepp, R., Ruhnke, R., Chipperfield, M. P., De Mazière, M., Notholt, J., Barthlott, S., Batchelor, R. L., Blatherwick, R. D., Blumenstock, Th., Coffey, M. T., Demoulin, P., Fast, H., Feng, W., Goldman, A., Griffith, D. W. T., Hamann, K., Hannigan, J. W., Hase, F., Jones, N. B., Kagawa, A., Kaiser, I., Kasai, Y., Kirner, O., Kouker, W., Lindenmaier, R., Mahieu, E., Mittermeier, R. L., Monge-Sanz, B., Morino, I., Murata, I., Nakajima, H., Palm, M., Paton-Walsh, C., Raffalski, U., Reddmann, Th., Rettinger, M., Rinsland, C. P., Rozanov, E., Schneider, M., Senten, C., Servais, C., Sinnhuber, B.-M., Smale, D., Strong, K., Sussmann, R., Taylor, J. R., Vanhaelewyn, G., Warneke, T., Whaley, C., Wiehle, M., and Wood, S. W.: Observed and simulated time evolution of HCl, ClONO[2], and HF total column abundances, Atmos. Chem. Phys., 12, 3527–3556, https://doi.org/10.5194/acp-12-3527-2012, 2012.a Kunz, A., Pan, L. L., Konopka, P., Kinnison, D. E., and Tilmes, S.: Chemical and dynamical discontinuity at the extratropical tropopause based on START08 and WACCM analyses, J. Geophys. Res.-Atmos., 116, D24302, https://doi.org/10.1029/2011JD016686, 2011.a Lacour, J.-L., Risi, C., Clarisse, L., Bony, S., Hurtmans, D., Clerbaux, C., and Coheur, P.-F.: Mid-tropospheric δD observations from IASI/MetOp at high spatial and temporal resolution, Atmos. Chem. Phys., 12, 10817–10832, https://doi.org/10.5194/acp-12-10817-2012, 2012.a, b Lorente, A., Borsdorff, T., Butz, A., Hasekamp, O., aan de Brugh, J., Schneider, A., Wu, L., Hase, F., Kivi, R., Wunch, D., Pollard, D. F., Shiomi, K., Deutscher, N. M., Velazco, V. A., Roehl, C. M., Wennberg, P. O., Warneke, T., and Landgraf, J.: Methane retrieved from TROPOMI: improvement of the data product and validation of the first 2 years of measurements, Atmos. Meas. Tech., 14, 665–684, https://doi.org/10.5194/amt-14-665-2021, 2021.a Marsh, D. R., Mills, M. J., Kinnison, D. E., Lamarque, J.-F., Calvo, N., and Polvani, L. M.: Climate Change from 1850 to 2005 Simulated in CESM1(WACCM), J. Climate, 26, 7372–7391, https://doi.org/ 10.1175/JCLI-D-12-00558.1, 2013.a Masuda, K., Takashima, T., and Takayama, Y.: Emissivity of pure and sea waters for the model sea surface in the infrared window regions, Remote Sens. Environ., 24, 313–329, https://doi.org/10.1016/ 0034-4257(88)90032-6, 1988.a, b, c Mlawer, E. J., Payne, V. H., Moncet, J.-L., Delamere, J. S., Alvarado, M. J., and Tobin, D. C.: Development and recent evaluation of the MT_CKD model of continuum absorption, Philos. T. R. Soc. A, 370, 2520–2556, https://doi.org/10.1098/rsta.2011.0295, 2012.a, b Morgenstern, O., Hegglin, M. I., Rozanov, E., O'Connor, F. M., Abraham, N. L., Akiyoshi, H., Archibald, A. T., Bekki, S., Butchart, N., Chipperfield, M. P., Deushi, M., Dhomse, S. S., Garcia, R. R., Hardiman, S. C., Horowitz, L. W., Jöckel, P., Josse, B., Kinnison, D., Lin, M., Mancini, E., Manyin, M. E., Marchand, M., Marécal, V., Michou, M., Oman, L. D., Pitari, G., Plummer, D. A., Revell, L. E., Saint-Martin, D., Schofield, R., Stenke, A., Stone, K., Sudo, K., Tanaka, T. Y., Tilmes, S., Yamashita, Y., Yoshida, K., and Zeng, G.: Review of the global models used within phase 1 of the Chemistry–Climate Model Initiative (CCMI), Geosci. Model Dev., 10, 639–671, https://doi.org/10.5194/gmd-10-639-2017, 2017.a Payne, V. H., Mlawer, E. J., Cady-Pereira, K. E., and Moncet, J. L.: Water Vapor Continuum Absorption in the Microwave, IEEE T. Geosci. Remote, 49, 2194–2208, https://doi.org/10.1109/ TGRS.2010.2091416, 2011.a, b Purser, R. J. and Huang, H.-L.: Estimating Effective Data Density in a Satellite Retrieval or an Objective Analysis, J. Appl. Meteorol., 32, 1092–1107, https://doi.org/10.1175/1520-0450(1993)032 <1092:EEDDIA>2.0.CO;2, 1993.a Rienecker, M. M., Suarez, M. J., Gelaro, R., Todling, R., Bacmeister, J., Liu, E., Bosilovich, M. G., Schubert, S. D., Takacs, L., Kim, G.-K., Bloom, S., Chen, J., Collins, D., Conaty, A., da Silva, A., Gu, W., Joiner, J., Koster, R. D., Lucchesi, R., Molod, A., Owens, T., Pawson, S., Pegion, P., Redder, C. R., Reichle, R., Robertson, F. R., Ruddick, A. G., Sienkiewicz, M., and Woollen, J.: MERRA: NASA’s Modern-Era Retrospective Analysis for Research and Applications, J. Climate, 24, 3624–3648, https://doi.org/10.1175/JCLI-D-11-00015.1, 2011.a Risi, C., Bony, S., Vimeux, F., and Jouzel, J.: Water-stable isotopes in the LMDZ4 general circulation model: Model evaluation for present-day and past climates and applications to climatic interpretations of tropical isotopic records, Journal of Geophysical Research: Atmospheres, 115, https://doi.org/10.1029/2009JD013255, 2010.a Rodgers, C.: Inverse Methods for Atmospheric Sounding: Theory and Praxis, Series on Atmospheric, Oceanic and Planetary Physics – Vol. 2, edited by: Taylor, F. W. (University of Oxford), World Scientific Publishing Co., Singapore, ISBN 981-02-2740-X, 2000.a, b, c, d, e, f, g, h, i Rodgers, C. and Connor, B.: Intercomparison of remote sounding instruments, J. Geophys. Res., 108, 4116–4129, https://doi.org/10.1029/2002JD002299, 2003.a Ronsmans, G., Langerock, B., Wespes, C., Hannigan, J. W., Hase, F., Kerzenmacher, T., Mahieu, E., Schneider, M., Smale, D., Hurtmans, D., De Mazière, M., Clerbaux, C., and Coheur, P.-F.: First characterization and validation of FORLI-HNO[3] vertical profiles retrieved from IASI/Metop, Atmos. Meas. Tech., 9, 4783–4801, https://doi.org/10.5194/amt-9-4783-2016, 2016.a Schneider, M. and Hase, F.: Improving spectroscopic line parameters by means of atmospheric spectra: Theory and example for water vapour and solar absorption spectra, J. Quant. Spectrosc. Ra., 110, 1825–1839, https://doi.org/10.1016/j.jqsrt.2009.04.011, 2009.a Schneider, M. and Hase, F.: Optimal estimation of tropospheric H[2]O and δD with IASI/METOP, Atmos. Chem. Phys., 11, 11207–11220, https://doi.org/10.5194/acp-11-11207-2011, 2011.a, b Schneider, M., Hase, F., and Blumenstock, T.: Water vapour profiles by ground-based FTIR spectroscopy: study for an optimised retrieval and its validation, Atmos. Chem. Phys., 6, 811–830, https:// doi.org/10.5194/acp-6-811-2006, 2006a.a Schneider, M., Hase, F., and Blumenstock, T.: Ground-based remote sensing of $\mathrm{HDO}/{\mathrm{H}}_{\mathrm{2}}\mathrm{O}$ ratio profiles: introduction and validation of an innovative retrieval approach, Atmos. Chem. Phys., 6, 4705–4722, https://doi.org/10.5194/acp-6-4705-2006, 2006b.a Schneider, M., Barthlott, S., Hase, F., González, Y., Yoshimura, K., García, O. E., Sepúlveda, E., Gomez-Pelaez, A., Gisi, M., Kohlhepp, R., Dohe, S., Blumenstock, T., Wiegele, A., Christner, E., Strong, K., Weaver, D., Palm, M., Deutscher, N. M., Warneke, T., Notholt, J., Lejeune, B., Demoulin, P., Jones, N., Griffith, D. W. T., Smale, D., and Robinson, J.: Ground-based remote sensing of tropospheric water vapour isotopologues within the project MUSICA, Atmos. Meas. Tech., 5, 3007–3027, https://doi.org/10.5194/amt-5-3007-2012, 2012.a, b, c Schneider, M., González, Y., Dyroff, C., Christner, E., Wiegele, A., Barthlott, S., García, O. E., Sepúlveda, E., Hase, F., Andrey, J., Blumenstock, T., Guirado, C., Ramos, R., and Rodríguez, S.: Empirical validation and proof of added value of MUSICA's tropospheric δD remote sensing products, Atmos. Meas. Tech., 8, 483–503, https://doi.org/10.5194/amt-8-483-2015, 2015.a Schneider, M., Ertl, B., and Diekmann, C.: MUSICA IASI full retrieval product extended output (processing version 3.2.1), KIT [data set], https://doi.org/10.35097/412, 2021a.a, b, c Schneider, M., Ertl, B., and Diekmann, C.: MUSICA IASI full retrieval product standard output (processing version 3.2.1), KIT [data set], https://doi.org/10.35097/408, 2021b.a, b, c Schneider, M., Ertl, B., Diekmann, C. J., Khosrawi, F., Röhling, A. N., Hase, F., Dubravica, D., García, O. E., Sepúlveda, E., Borsdorff, T., Landgraf, J., Lorente, A., Chen, H., Kivi, R., Laemmel, T., Ramonet, M., Crevoisier, C., Pernin, J., Steinbacher, M., Meinhardt, F., Deutscher, N. M., Griffith, D. W. T., Velazco, V. A., and Pollard, D. F.: Synergetic use of IASI and TROPOMI space borne sensors for generating a tropospheric methane profile product, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2021-31, in review, 2021c.a, b Seemann, S. W., Borbas, E. E., Knuteson, R. O., Stephenson, G. R., and Huang, H.-L.: Development of a Global Infrared Land Surface Emissivity Database for Application to Clear Sky Sounding Retrievals from Multispectral Satellite Radiance Measurements, J. Appl. Meteorol. Clim., 47, 108–123, https://doi.org/10.1175/2007JAMC1590.1, 2008.a, b, c, d Siddans, R., Knappett, D., Kerridge, B., Waterfall, A., Hurley, J., Latter, B., Boesch, H., and Parker, R.: Global height-resolved methane retrievals from the Infrared Atmospheric Sounding Interferometer (IASI) on MetOp, Atmos. Meas. Tech., 10, 4135–4164, https://doi.org/10.5194/amt-10-4135-2017, 2017. a Steck, T.: Methods for determining regularization for atmospheric retrieval problems, Appl. Opt., 41, 1788–1797, https://doi.org/10.1364/AO.41.001788, 2002.a Stiller, G. P. (Ed.): The Karlsruhe Optimized and Precise Radiative transfer Algorithm (KOPRA), Vol. FZKA 6487 of Wissenschaftliche Berichte, Forschungszentrum Karlsruhe, 2000.a Tikhonov, A. N.: Solution of Incorrectly Formulated Problems and the Regularization Method, Soviet Mathematics Doklady, 4, 1035–1038, 1963.a von Clarmann, T., Degenstein, D. A., Livesey, N. J., Bender, S., Braverman, A., Butz, A., Compernolle, S., Damadeo, R., Dueck, S., Eriksson, P., Funke, B., Johnson, M. C., Kasai, Y., Keppens, A., Kleinert, A., Kramarova, N. A., Laeng, A., Langerock, B., Payne, V. H., Rozanov, A., Sato, T. O., Schneider, M., Sheese, P., Sofieva, V., Stiller, G. P., von Savigny, C., and Zawada, D.: Overview: Estimating and reporting uncertainties in remotely sensed atmospheric composition and temperature, Atmos. Meas. Tech., 13, 4393–4436, https://doi.org/10.5194/amt-13-4393-2020, 2020.a Weber, A.: Storage-Efficient Analysis of Spatio-Temporal Data with Application to Climate Research, Master Thesis, Karlsruhe Institute of Technology, https://doi.org/10.5281/ZENODO.3360021, 2019.a, Wiegele, A., Schneider, M., Hase, F., Barthlott, S., García, O. E., Sepúlveda, E., González, Y., Blumenstock, T., Raffalski, U., Gisi, M., and Kohlhepp, R.: The MUSICA MetOp/IASI H[2]O and δD products: characterisation and long-term comparison to NDACC/FTIR data, Atmos. Meas. Tech., 7, 2719–2732, https://doi.org/10.5194/amt-7-2719-2014, 2014.a Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., Gonzalez-Beltran, A., Gray, A. J., Groth, P., Goble, C., Grethe, J. S., Heringa, J., 't Hoen, P. A., Hooft, R., Kuhn, T., Kok, R., Kok, J., Lusher, S. J., Martone, M. E., Mons, A., Packer, A. L., Persson, B., Rocca-Serra, P., Roos, M., van Schaik, R., Sansone, S.-A., Schultes, E., Sengstag, T., Slater, T., Strawn, G., Swertz, M. A., Thompson, M., van der Lei, J., van Mulligen, E., Velterop, J., Waagmeester, A., Wittenburg, P., Wolstencroft, K., Zhao, J., and Mons, B.: The FAIR Guiding Principles for scientific data management and stewardship, Sci Data, 3, 2052–4463, https://doi.org/10.1038/sdata.2016.18, 2016.a Wofsy, S. C.: HIAPER Pole-to-Pole Observations (HIPPO): fine-grained, global-scale measurements of climatically important atmospheric gases and aerosols, Philos. T. R. Soc. A, 369, 2073–2086, https:/ /doi.org/10.1098/rsta.2010.0313, 2011.a Worden, J., Kulawik, S., Frankenberg, C., Payne, V., Bowman, K., Cady-Peirara, K., Wecht, K., Lee, J.-E., and Noone, D.: Profiles of CH[4], HDO, H[2]O, and N[2]O with improved lower tropospheric vertical resolution from Aura TES radiances, Atmos. Meas. Tech., 5, 397–411, https://doi.org/10.5194/amt-5-397-2012, 2012.a
{"url":"https://essd.copernicus.org/articles/14/709/2022/","timestamp":"2024-11-08T15:51:02Z","content_type":"text/html","content_length":"632433","record_id":"<urn:uuid:c0d5e21e-3120-43f8-abb4-bbb6b8f4796e>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00715.warc.gz"}
Which function has a graph with a horizontal asymptote at y = --Turito Are you sure you want to logout? Which function has a graph with a horizontal asymptote at y = -1. The correct answer is: y = 1/1 = 1 • If both the polynomials have the same degree, divide the coefficients of the leading terms. This is your asymptote. • If the degree of the numerator is less than the denominator, then the asymptote is located at y = 0 (which is the x-axis). If the degree of the numerator is greater than the denominator, then there is no horizontal asymptote a) Horizontal asymptote is given by As both the polynomials have the same degree, divide the coefficients of the leading terms. y = Get an Expert Advice From Turito.
{"url":"https://www.turito.com/ask-a-doubt/which-function-has-a-graph-with-a-horizontal-asymptote-at-y-1-a-f-x-x-5-x-3-q584a8d09","timestamp":"2024-11-09T04:33:57Z","content_type":"application/xhtml+xml","content_length":"1052465","record_id":"<urn:uuid:aeef4df4-ce59-4081-b7c5-3b63756fac7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00555.warc.gz"}
Automata Theory | Curious Toons Table of Contents Welcome to Advanced Automata Theory, where we delve into the fundamental principles that underpin the vast landscape of computational theory. As you embark on this intellectual journey, prepare to explore the intricate world of formal languages, state machines, and computational limits. This course will equip you with rigorous analytical skills, promoting a deep understanding of the abstract machines that define computation. From the classic foundations laid by Turing and Church to the cutting-edge applications in modern computing, Automata Theory is not just a theoretical pursuit; it’s a core component enriching your problem-solving toolbox. Why is Automata Theory crucial? At its essence, this course teaches you how to conceptualize and construct models of computation, shedding light on what problems can be solved algorithmically and which cannot—a topic that’s invaluable in advancing both theoretical knowledge and practical innovation. We will explore deterministic and nondeterministic models, pushdown automata, context-free grammars, and the subtleties of the Chomsky hierarchy. Additionally, we’ll unlock the mysteries of decidability, the P vs. NP question, and complexity classes, challenging you to think critically about efficiency and feasibility in computation. Imagine understanding the algorithms behind Google’s search engine or the protocols securing your online transactions. With Automata Theory, you gain insights into how these technologies are built and how they can evolve. As AI and machine learning continue to reshape our world, the foundational knowledge you acquire in Automata Theory will be crucial in advancing your career and contributing to groundbreaking developments. Join us as we unravel the complexities of computation and transform challenges into opportunities for innovation. This course will not only expand your academic boundaries but also enhance your capacity to pioneer new technologies that address global challenges. Embrace the challenge, and let’s redefine what’s possible in the ever-evolving field of computer science. Introduction to Automata Theory Historical Background As we embark on our journey through Automata Theory, it’s essential to first explore its historical background, which provides the foundation upon which modern computational theory is built. The roots of Automata Theory trace back to the mid-20th century, a period marked by groundbreaking work in mathematics and logic by pioneers such as Alan Turing and Alonzo Church. Turing’s introduction of the Turing machine in 1936 was a seminal moment, offering a formalization of computation and mechanical processes. This theoretical construct gave rise to the concept of algorithmic processes and laid the groundwork for what we now understand as computational theory. Concurrently, Alonzo Church developed the Lambda calculus, contributing significantly to the formalization of functions and recursive functions, which further enriched the theoretical underpinnings of computation. The synergy between these two foundational works catalyzed a rich exploration of formal languages and automata—abstract machines that recognize patterns and process strings according to a set of rules—paving the way for the digital revolution and computer science as a discipline. Automata Theory gained further momentum with the contributions of John von Neumann, who conceptualized self-replicating automata, thus broadening the scope of computational models. During the latter half of the 20th century, contributions by Noam Chomsky in formal language theory interconnected linguistics and computer science, categorizing languages based on generative grammar models and thus enriching our understanding of syntax and structure. Consequently, Automata Theory became a cornerstone of theoretical computer science, influencing advancements in artificial intelligence, compiler design, and complex problem-solving algorithms. By understanding this historical context, we appreciate how Automata Theory has shaped contemporary computing, underscoring its relevance in progressive technology and innovation today. Importance in Computer Science The importance of Automata Theory in computer science cannot be overstated, as it forms the foundational backbone for understanding computational processes and the development of efficient algorithms. Automata Theory delves into abstract machines and the problems they can solve, providing vital insights into the capabilities and limitations of computers. This theoretical framework is crucial for developing compilers and interpreters, as it defines grammars and syntax that facilitate language processing. Consequently, software engineers and computer scientists draw extensively on Automata Theory when designing complex software systems and optimizing code execution. Moreover, it plays an instrumental role in the fields of artificial intelligence and machine learning, where understanding state machines can lead to advancements in decision-making processes and predictive models. As the digital world increasingly relies on automation and sophisticated computing, Automata Theory remains a critical component of computer science education, equipping practitioners with the tools they need to innovate and push the boundaries of what is computationally possible. For students and professionals alike, mastering Automata Theory is a gateway to exploring new computing paradigms, such as quantum computing and blockchain technology, which demand a robust understanding of how theoretical machines process information. Its relevance is underscored by its applications in natural language processing, cybersecurity, and even genetic algorithms, showcasing its versatility across various domains. As a result, understanding Automata Theory is not just an academic exercise but a practical necessity for those aiming to excel in the ever-evolving landscape of computer science. By combining mathematical rigor with practical application, Automata Theory serves as a pivotal subject for anyone striving to comprehend and harness the full potential of computational science, ensuring that they remain at the forefront of technological innovation. Deterministic Finite Automata (DFA) Definition and Components Deterministic Finite Automata (DFA) form a foundational concept in automata theory, an essential area in computer science. A DFA is a theoretical machine used to model computation and solve problems related to language recognition. The definition of a DFA involves a 5-tuple: (Q, Σ, δ, q₀, F). Here, Q represents a finite set of states, and Σ denotes a finite set of input symbols known as the alphabet. The transition function, δ: Q × Σ → Q, describes how the automaton transitions from one state to another based on input symbols. The start state, q₀ ∈ Q, is where the computation begins, while F ⊆ Q is the set of accept states, which determine if a string is accepted by the automaton. Each component plays a crucial role—Q and Σ define the structure, while δ provides dynamic behavior, making the DFA deterministic; for each state and input symbol, there is precisely one state transition. This deterministic nature underpins the predictability and reliability of DFAs in computational tasks. DFAs are crucial for parsing and lexical analysis in compiler design, where they efficiently handle regular languages. Their simplicity and precision make them ideal for defining pattern-matching engines and text search algorithms. Understanding DFAs provides a gateway into more complex automata theory concepts like nondeterministic finite automata (NFA) and Turing machines. Exploring this concept enriches one’s comprehension of language processing and algorithm development fundamentals, making DFAs an indispensable tool for computer scientists and software engineers. Engaging with DFAs not only deepens theoretical understanding but also enhances practical skills in designing efficient algorithms and systems. This exploration into deterministic finite automata offers invaluable insights into the intersection of theory and practical application in computer science. State Transition Diagrams State Transition Diagrams are fundamental tools in understanding Deterministic Finite Automata (DFA), playing a crucial role in automata theory and computational sciences. These diagrams provide a visual representation of how a DFA processes input strings, making abstract concepts more tangible and accessible for learners. In essence, a State Transition Diagram consists of states, represented as nodes or circles, and transitions, depicted as directed edges or arrows between these states. Each transition is labeled with an input symbol, illustrating how the machine moves from one state to another upon processing specific input. A crucial feature of these diagrams is the unique determination of state transitions: for every state and input symbol in a DFA, there’s precisely one defined transition to a subsequent state. This determinism ensures predictability and precision in automata behavior. The starting state, often indicated by an arrow pointing towards it, marks where the input processing begins, while accept states, depicted with double circles, signify where the DFA can successfully terminate with acceptance of the input string. These visualization tools are indispensable for designing and analyzing algorithms, particularly in fields requiring computation models like linguistic pattern recognition, compiler design, and digital circuit verification. Understanding how to interpret and construct state transition diagrams enhances one’s ability to conceptualize complex computational processes, bridging theoretical computer science and practical application. Furthermore, mastering these diagrams strengthens analytical thinking, contributing profoundly to fields like machine learning and artificial intelligence. This synergy between theoretical concepts and practical implications underscores the significance of state transition diagrams in exploring deterministic finite automata, fostering deeper learning and innovation in computational theory. In this way, state transition diagrams not only elucidate how DFAs operate but also empower students and professionals in advancing their computational proficiency and Nondeterministic Finite Automata (NFA) Fundamental Differences from DFA In the realm of automata theory, understanding the fundamental differences between Nondeterministic Finite Automata (NFA) and Deterministic Finite Automata (DFA) is crucial for advanced learners. Both NFAs and DFAs are pivotal in the study of computational theory and formal languages, yet they operate distinctly. The primary distinction lies in their transition mechanisms. In a DFA, each state has exactly one transition for each input symbol, ensuring a single, unambiguous path through the automaton. Conversely, an NFA can have multiple transitions for a single input symbol, including epsilon (ε) transitions that allow state changes without consuming input. This nondeterminism enables NFAs to explore multiple computational paths simultaneously, akin to parallel processing in modern computing architectures. While this might suggest that NFAs are more powerful, in actuality, they are equivalent to DFAs in terms of the languages they can recognize. However, NFAs often afford simpler and more intuitive design for complex patterns, as their construction can be more straightforward and less restrictive. This intrinsic nondeterminism results in compact state representations and facilitates easier expressions of regular concepts. From a computational perspective, converting an NFA to its DFA equivalent is possible through the powerset construction method, but this may exponentially increase the number of states, highlighting efficiency trade-offs. These fundamental differences underscore why NFA versus DFA remains a cornerstone topic in Automata Theory, essential for understanding deeper constructs like regular expressions and language recognition algorithms. For students delving into computational models, grasping these distinctions enhances their ability to leverage each automaton type effectively, optimizing use cases in pattern matching, compiler design, and more. Mastery of both NFAs and DFAs equips learners with foundational knowledge pivotal for further exploration in computer science fields. Construction of NFA In the construction of Nondeterministic Finite Automata (NFA), we focus on creating a model that can efficiently process a wide range of input strings, leveraging its inherent nondeterminism. An NFA is defined by a quintuple (Q, Σ, δ, q0, F), where Q represents a finite set of states, Σ is the finite input alphabet, δ is the transition function, q0 is the initial state, and F is the set of accepting states. The key feature of an NFA is its ability to transition to multiple states for a given input symbol, or even move without consuming any input (ε-transitions). This flexibility allows the NFA to explore different computation paths simultaneously, making it a powerful tool for tackling complex string patterns. To construct an NFA, start by identifying the language conditions you wish to recognize. Create states representing each significant condition or checkpoint in your language and define the transition function to reflect the allowed moves based on your input symbols. Each transition can lead to multiple subsequent states, capturing the nondeterministic behavior inherent in NFAs. Additionally, set up the initial state from where your computation begins and determine the set of accepting states that signify successful input recognition. Using various design methodologies, such as the state elimination method or subset construction, you can refine your NFA to optimize performance. Remember, NFAs can be converted into equivalent Deterministic Finite Automata (DFA) through the subset construction algorithm, ensuring versatility in automaton design. With this foundational understanding of NFA construction, you are equipped to explore more complex scenarios in automata theory, delving deeper into the nuances of computation, efficiency, and language recognition. Regular Languages and Regular Expressions Definition and Examples In the realm of computational theory, understanding regular languages and regular expressions is crucial. Regular languages are a key concept in automata theory, representing the simplest class of languages recognized by finite automata, specifically deterministic finite automata (DFA) and non-deterministic finite automata (NFA). Their significance stems from their ability to model a wide array of search patterns and string manipulations, making them foundational in computer science and indispensable for tasks like lexical analysis in compiler design. Regular expressions, a highly compact syntax for defining regular languages, offer a powerful and flexible method for string pattern matching. For instance, the regular expression [a-z]* defines a regular language consisting of all strings composed solely of lowercase letters from the English alphabet. Another example, the regex (ab)+, describes strings formed by one or more repetitions of the sequence “ab”. These constructs can be boiled down to primitive operations: union, concatenation, and Kleene star, which allow for the construction of complex patterns from simpler ones. Regular languages boast closure properties under these operations, enabling the creation of new languages. Exploring the synergy between regular languages and expressions provides insights into the design of efficient algorithms and their implementation in programming languages. Mastering these concepts empowers computer scientists to tackle problems involving pattern recognition, text processing, and even network security. This harmonious blend of theoretical richness and practical applicability positions regular languages and regular expressions at the heart of innovations in computing, offering powerful tools for problem-solving and creativity in technological domains. For more extensive resources and in-depth analysis of regular languages and regular expressions, check authoritative lectures and tutorials available through Harvard University’s computer science program, ensuring a comprehensive grasp of these foundational elements. Closure Properties In the fascinating realm of automata theory, understanding the closure properties of regular languages is crucial for any advanced computer science student. Regular languages, which are pivotal in formal language theory, exhibit remarkable closure properties under various operations such as union, concatenation, and Kleene star. These closure properties ensure that when two regular languages undergo these operations, the resulting language is also regular. This guarantees that regular languages maintain their structural integrity, making them both predictable and robust. For instance, if (L1) and (L2) are regular languages, their union (L1 \cup L2) remains regular, underscoring the language’s resilience. Similarly, concatenation (L1L2) and repetition in the form of the Kleene star (L_1^*) also yield regular languages. Additionally, regular languages are closed under intersection and complementation, providing tremendous flexibility in automata design and analysis. These properties are not just theoretical constructs; they have practical applications in programming languages, lexical analysis, and text processing. By mastering the closure properties, students can design efficient algorithms and automated systems, recognizing the power of regular expressions in pattern matching and search functionalities. Understanding these properties enhances computational efficiency, algorithmic design, and ultimately, the development of resilient software. This deep dive into the closure properties of regular languages bridges theoretical concepts with real-world applications, preparing students to tackle complex computational problems with confidence. By integrating these concepts into your advanced automata theory toolkit, you gain invaluable insights into language processing and automata design, pivotal for advancing in computer science research and development. Pushdown Automata and Context-Free Languages Introduction to Pushdown Automata In the realm of Automata Theory, “Pushdown Automata” (PDA) serve as a crucial bridge between finite automata and more powerful computational models, seamlessly integrating concepts from both theoretical and practical perspectives. Designed to recognize context-free languages, PDAs extend the capabilities of finite automata by incorporating a stack—a dynamic, last-in-first-out (LIFO) storage system. This additional resource allows PDAs to handle nested structures and recursion, essential for interpreting complex programming languages and grammars. A PDA transitions between states not only based on input symbols but also contingent on the stack’s top element, enabling it to process an extensive range of language patterns that finite automata cannot. Understanding PDAs offers significant insights into compilers’ design and the parsing of programming languages, as they empower computers to grasp syntactical nuances with precision. Central to their operation is the stack operations—push, pop, and no-operation—each manipulating the stack to maintain crucial intermediate information, while instantaneous descriptions assist in modeling every computational step. These fundamental concepts help unfold the sophisticated nature of context-free languages which underlie many real-world applications. Additionally, deterministic and non-deterministic PDAs encapsulate varied expressive powers, which are central to many advanced computational theories. For students and professionals delving into computer science, mastering the intricacies of Pushdown Automata reveals an advanced understanding of language hierarchies and computation, forming a foundation for further exploration into computational complexity and algorithm design. As you engage with “Pushdown Automata,” you’ll unlock a pivotal avenue for analyzing and constructing algorithms that reflect the core principles of automata theory, integral to modern computing. This exploration not only enriches your theoretical computer science knowledge but also enhances your proficiency in applying these principles practically, ensuring a deep-seated comprehension of one of the critical components of computational theory. Relation to Context-Free Grammars In the study of formal languages, the relationship between Pushdown Automata (PDA) and Context-Free Grammars (CFG) is both profound and pivotal. Pushdown Automata are computational models that extend finite automata with an additional stack-based memory, allowing them to recognize a broader class of languages known as context-free languages. On the other hand, Context-Free Grammars are a set of production rules that define how strings in a language can be generated. Each context-free language can be generated by a corresponding CFG and equivalently recognized by a PDA, forming a cornerstone of automata theory. This relationship is established through the Chomsky Hierarchy, which categorizes languages into different classes. Notably, for every context-free grammar, there exists a pushdown automaton that accepts the same language, demonstrating that PDAs and CFGs are two sides of the same coin in formal language theory. Additionally, the conversion processes between the two, such as transforming a CFG into a PDA and vice versa, highlight the interconnectedness of these concepts. Understanding this relationship is essential for areas such as compiler design, programming language theory, and artificial intelligence, where context-free languages commonly model syntactic structures. By mastering the intricacies of PDAs and their corresponding context-free grammars, learners can deepen their comprehension of computational theory and enhance their ability to implement efficient algorithms in software development. In summary, the interplay between Pushdown Automata and Context-Free Grammars is fundamental in computer science, providing critical insights into language recognition and processing. As we conclude this advanced course in Automata Theory, we find ourselves at the confluence of abstraction and concrete application, where the elegance of theoretical constructs meets the ingenuity of real-world problem-solving. Automata Theory, at its core, is a journey through the formal worlds of finite automata, context-free grammars, Turing machines, and beyond. Throughout this course, we have delved deep into these fascinating realms, exploring the foundational aspects that underpin modern computing systems and stretch the boundaries of what is computationally feasible. Reflecting on the topics covered, it’s crucial to appreciate the breadth and depth of Automata Theory. We began with finite automata and regular languages, unraveling their simplicity and power in recognizing patterns and establishing basic computational boundaries. This set the stage for an exploration of context-free grammars and pushdown automata, expanding our ability to model languages, including programming languages, with hierarchical structures. The course then propelled us into the realm of Turing machines—an abstract yet potent representation of computation that offers profound insights into the limits of what can be computed. One of the recurring themes that permeate Automata Theory is the concept of computational equivalence and complexity. The P vs NP problem, for instance, stands as one of the most intriguing and unsolved questions in computer science, tantalizing us with its potential implications across cryptography, algorithm design, and artificial intelligence. Engaging with such profound questions not only sharpens our analytical skills but also encourages innovation in solving real-world problems, keeping the spirit of inquiry alive. Moreover, Automata Theory finds its way into numerous applications, from compiler design and text processing to artificial intelligence and bioinformatics. Each tool, each theorem, forms a crucial piece of the larger puzzle, part of a burgeoning field that continues to evolve. By understanding the underlying theories, students are equipped to approach these challenges creatively and As the course draws to a close, I hope you recognize not only the intricacies of the models and theories we’ve studied but also the vast ocean of unanswered questions and uncharted territories that await exploration. The toolkit you’ve acquired here is just the beginning. Whether you pursue further academic research, delve into advanced applications, or innovate in the tech industry, your understanding of Automata Theory will be a steadfast companion. Let this course be more than just an academic checkpoint. May it ignite a lifelong curiosity and passion for discovery. I encourage you not to be content with what you know but to strive to push the boundaries of what is possible. Participate in projects that challenge your understanding, contribute to open-source initiatives, or conduct research that explores the intersections of automata with other scientific disciplines. In conclusion, Automata Theory serves as a gateway to a deeper comprehension of computation’s capabilities and limits—a reminder of the elegance and intricacy that computing offers. As you step forward, take with you the spirit of exploration and the courage to delve into the unknown. I look forward to hearing about the innovative paths you will undoubtedly forge and the remarkable contributions you will make to the field of computer science. Thank you for embarking on this intellectual journey, and I wish you every success in your future endeavors.
{"url":"https://curioustoons.in/automata-theory/","timestamp":"2024-11-09T20:25:25Z","content_type":"text/html","content_length":"121880","record_id":"<urn:uuid:d2acbbf7-2f7b-4467-9299-d86f0eb9f099>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00070.warc.gz"}
ComplexityMeasures.jl · ComplexityMeasures.jl ComplexityMeasures.jl is a Julia-based software for calculating 1000s of various kinds of probabilities, entropies, and other so-called complexity measures from a single-variable input datasets. For relational measures across many input datasets see its extension CausalityTools.jl. If you are a user of other programming languages (Python, R, MATLAB, ...), you can still use ComplexityMeasures.jl due to Julia's interoperability. For example, for Python use juliacall. A careful comparison with alternative widely used software shows that ComplexityMeasures.jl outclasses the alternatives in several objective aspects of comparison, such as computational performance, overall amount of measures, reliability, and extendability. See the associated publication for more details. The key features that it provides can be summarized as: • A rigorous framework for extracting probabilities from data, based on the mathematical formulation of probability spaces. • Several (12+) outcome spaces, i.e., ways to discretize data into probabilities. • Several estimators for estimating probabilities given an outcome space, which correct theoretically known estimation biases. • Several definitions of information measures, such as various flavours of entropies (Shannon, Tsallis, Curado...), extropies, and other complexity measures, that are used in the context of nonlinear dynamics, nonlinear timeseries analysis, and complex systems. • Several discrete and continuous (differential) estimators for entropies, which correct theoretically known estimation biases. • An extendable interface and well thought out API accompanied by dedicated developer documentation. This makes it trivial to define new outcome spaces, or new estimators for probabilities, information measures, or complexity measures and integrate them with everything else in the software without boilerplate code. ComplexityMeasures.jl can be used as a standalone package, or as part of other projects in the JuliaDynamics organization, such as DynamicalSystems.jl or CausalityTools.jl. To install it, run import Pkg; Pkg.add("ComplexityMeasures"). All further information is provided in the documentation, which you can either find online or build locally by running the docs/make.jl file. Previously, this package was called Entropies.jl. ComplexityMeasures.jl has been updated to v3! The software has been massively improved and its core principles were redesigned to be extendable, accessible, and more closely based on the rigorous mathematics of probabilities and entropies. For more details of this new release, please see our announcement post on discourse or the central Tutorial of the v3 documentation. In this v3 many concepts were renamed, but there is no formally breaking change. Everything that changed has been deprecated and is backwards compatible. You can see the CHANGELOG.md for more The input data type typically depend on the outcome space chosen. In general though, the standard DynamicalSystems.jl approach is taken and as such we have three types of input data: • Timeseries, which are AbstractVector{<:Real}, used in e.g. with WaveletOverlap. • Multi-variate timeseries, or datasets, or state space sets, which are StateSpaceSets, used e.g. with NaiveKernel. The short syntax SSSet may be used instead of StateSpaceSet. • Spatial data, which are higher dimensional standard Arrays, used e.g. with SpatialOrdinalPatterns. StateSpaceSet{D, T, V} <: AbstractVector{V} A dedicated interface for sets in a state space. It is an ordered container of equally-sized points of length D, with element type T, represented by a vector of type V. Typically V is SVector{D,T} or Vector{T} and the data are always stored internally as Vector{V}. The underlying Vector{V} can be obtained by vec(ssset), although this is almost never necessary because StateSpaceSet subtypes AbstractVector and extends its interface. StateSpaceSet also supports almost all sensible vector operations like append!, push!, hcat, eachrow, among others. When iterated over, it iterates over its contained points. Constructing a StateSpaceSet is done in three ways: 1. By giving in each individual columns of the state space set as Vector{<:Real}: StateSpaceSet(x, y, z, ...). 2. By giving in a matrix whose rows are the state space points: StateSpaceSet(m). 3. By giving in directly a vector of vectors (state space points): StateSpaceSet(v_of_v). All constructors allow for the keyword container which sets the type of V (the type of inner vectors). At the moment options are only SVector, MVector, or Vector, and by default SVector is used. Description of indexing When indexed with 1 index, StateSpaceSet behaves exactly like its encapsulated vector. i.e., a vector of vectors (state space points). When indexed with 2 indices it behaves like a matrix where each row is a point. In the following let i, j be integers, typeof(X) <: AbstractStateSpaceSet and v1, v2 be <: AbstractVector{Int} (v1, v2 could also be ranges, and for performance benefits make v2 an SVector{Int}). • X[i] == X[i, :] gives the ith point (returns an SVector) • X[v1] == X[v1, :], returns a StateSpaceSet with the points in those indices. • X[:, j] gives the jth variable timeseries (or collection), as Vector • X[v1, v2], X[:, v2] returns a StateSpaceSet with the appropriate entries (first indices being "time"/point index, while second being variables) • X[i, j] value of the jth variable, at the ith timepoint Use Matrix(ssset) or StateSpaceSet(matrix) to convert. It is assumed that each column of the matrix is one variable. If you have various timeseries vectors x, y, z, ... pass them like StateSpaceSet (x, y, z, ...). You can use columns(dataset) to obtain the reverse, i.e. all columns of the dataset in a tuple. ComplexityMeasures.jl offers thousands of measures computable right out of the box. To see an exact number of how many, see this calculation page.
{"url":"https://juliadynamics.github.io/DynamicalSystemsDocs.jl/complexitymeasures/stable/","timestamp":"2024-11-03T21:38:24Z","content_type":"text/html","content_length":"24230","record_id":"<urn:uuid:589bd90a-7744-41ae-bfaf-454b5345ba33>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00891.warc.gz"}
Air resistance force created at May 22, 2021 Air resistance force When body moves through a viscous fluid (air, water, etc.), the fluid acts on the body with some resistance force trying to stop the body. This is also called a drag force. Of coarse, the direction of force vector is the opposite to the body velocity vector. Despite the fluid flow is a process quite difficult to understand and calculate, the drag force equation is simple: $F_{d}=C_{d}\cdot A\cdot \frac{\rho \cdot V^{2}}{2}$ • Cd - the drag coefficient, which depends on the shape of the body • A - the cross sectional area • ρ - air density • V - velocity of the body (relative to the fluid) The drag coefficient C[d] depends on the shape of the body, it is dimensionless and is typically obtained through the experiments. You can find the values in the handbooks for the different shapes. Examples are: Sphere 0.5 Cube 0.8 Squared flat plate (90°) 1.2 flat plate along turbulent flow 0.005 typical saloon car 0.3 Bicycle 0.9 Airplane wing at normal position 0.05 In general, the drag coefficient C[d] is not a constant, and depends on Reynolds number. For some very low velocities as well as very high supersonic velocities it will be different, but for the most practical cases (for subsonic flow) it may be assumed as a constant. The cross sectional area A is typically the maximum area frontal to the motion direction. In other words, if you project your body shape to the plane, which is orthogonal to the motion direction - the are of the shape you will obtain is the one you need. For the sphere it is π*D^2/4. There are some exclusions though. For the airplane wing you should use the wing's area in the base plane. The air is considered as example for this case, but you can calculate the drag force for any other fluid, just put appropriate value for density ρ. To calculate the drag force in the water, use ρ = 1000 kg/m^3.
{"url":"https://noskovtools.com/en/simulation_library/mechanics/drag_force","timestamp":"2024-11-02T11:25:20Z","content_type":"text/html","content_length":"23470","record_id":"<urn:uuid:4492c70e-9159-450d-b28c-c7e9317a8ecb>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00739.warc.gz"}
UPSC Optional Subjects - UPSC Optional Syllabus - CP Animal Husbandry and Veterinary Science Civil Engineering Commerce and Accountancy Electrical Engineering Mechanical Engineering Medical Science Political Science and International Relations(PSIR) Public Administration Literature of Assamese Literature of Bengali Literature of Bodo Literature of Dogri Literature of Gujarati Literature of Hindi Literature of Kannada Literature of Kashmiri Literature of Konkani Literature of Maithili Literature of Malayalam Literature of Manipuri Literature of Marathi Literature of Nepali Literature of Oriya Literature of Punjabi Literature of Sanskrit Literature of Santhali Literature of Sindhi Literature of Tamil Literature of Telugu Literature of Urdu Literature of English syllabus of physics Syllabus of Physics Optional Subject PAPER-I1. (a) Mechanics of Particles :Laws of motion; conservation of energy and momentum, applications to rotating frames, centripetal and Coriolis accelerations; Motion under a central force; Conservation of angular momentum, Kepler’s laws; Fields and potentials; Gravitational field and potential due to spherical bodies, Gauss and Poisson equations, gravitational self-energy; Two-body problem; Reduced mass; Rutherford scattering; Centre of mass and laboratory reference frames.(b) Mechanics of Rigid Bodies :System of particles; Centre of mass, angular momentum, equations of motion; Conservation theorems for energy, momentum and angular momentum; Elastic and inelastic collisions; Rigid Body; Degrees of freedom, Euler’s theorem, angular velocity, angular momentum, moments of inertia, theorems of parallel and perpendicular axes, equation of motion for rotation; Molecular rotations (as rigid bodies); Di and tri-atomic molecules; Precessional motion; top, gyroscope.(c) Mechanics of Continuous Media :Elasticity, Hooke’s law and elastic constants of isotropic solids and their inter-relation; Streamline (Laminar) flow, viscosity, Poiseuille’s equation, Bernoulli’s equation, Stokes’ law and applications.(d) Special Relativity :Michelson-Morely experiment and its implications; Lorentz transformations length contraction, time dilation, addition of relativistic velocities, aberration and Doppler effect, mass-energy relation, simple applications to a decay process. Four dimensional momentum vector; Covariance of equations of physics.2. Waves and Optics :(a) Waves :Simple harmonic motion, damped oscillation, forced oscillation and resonance; Beats; Stationary waves in a string; Pulses and wave packets; Phase and group velocities; Reflection and refraction from Huygens’ principle.(b) Geometrial Optics :Laws of reflection and refraction from Fermat’s principle; Matrix method in paraxial optic-thin lens formula, nodal planes, system of two thin lenses, chromatic and spherical aberrations.(c) Interference :Interference of light -Young’s experiment, Newton’s rings, interference by thin films, Michelson interferometer; Multiple beam interference and Fabry Perot interferometer.(d) Diffraction :Fraunhofer diffraction - single slit, double slit, diffraction grating, resolving power; Diffraction by a circular aperture and the Airy pattern; Fresnel diffraction: half-period zones and zone plates, circular aperture.(e) Polarisation and Modern Optics :Production and detection of linearly and circularly polarized light; Double refraction, quarter wave plate; Optical activity; Principles of fibre optics, attenuation; Pulse dispersion in step index and parabolic index fibres; Material dispersion, single mode fibers; Lasers-Einstein A and B coefficients. Ruby and He-Ne. Characteristics of laser light-spatial and temporal coherence; Focusing of laser beams. Three-level scheme for laser operation; Holography and simple applications.3. Electricity and Magnetism :(a) Electrostatics and Magnetostatics :Laplace and Poisson equations in electrostatics and their applications; Energy of a system of charges, multipole expansion of scalar potential; Method of images and its applications. Potential and field due to a dipole, force and torque on a dipole in an external field; Dielectrics, polarisation. Solutions to boundaryvalue problems-conducting and dielectric spheres in a uniform electric field; Magnetic shell, uniformly magnetised sphere; Ferromagnetic materials, hysteresis, energy loss.(b) Current Electricity :Kirchhoff's laws and their applications. Biot-Savart law, Ampere’s law, Faraday’s law, Lenz’ law. Selfand mutual- inductances; Mean and rms values in AC circuits; DC and AC circuits with R, L and C components; Series and parallel resonance; Quality factor; Principle of transformer.4. Electromagnetic Waves and Blackbody Radiation :Displacement current and Maxwell’s equations; Wave equations in vacuum, Poynting theorem; Vector and scalar potentials; Electromagnetic field tensor, covariance of Maxwell’s equations; Wave equations in isotropic dielectrics, reflection and refraction at the boundary of two dielectrics; Fresnel’s relations; Total internal reflection; Normal and anomalous dispersion; Rayleigh scattering; Blackbody radiation and Planck ’s radiation law- Stefan-Boltzmann law, Wien’s displacement law and Rayleigh-Jeans law.5. Thermal and Statistical Physics :(a) Thermodynamics :Laws of thermodynamics, reversible and irreversible processes, entropy; Isothermal, adiabatic, isobaric, isochoric processes and entropy changes; Otto and Diesel engines, Gibbs’ phase rule and chemical potential; Van der Waals equation of state of a real gas, critical constants; Maxwell-Boltzmann distribution of molecular velocities, transport phenomena, equipartition and virial theorems; Dulong-Petit, Einstein, and Debye’s theories of specific heat of solids; Maxwell relations and application; ClausiusClapeyron equation. Adiabatic demagnetisation, Joule-Kelvin effect and liquefaction of gases.(b) Statistical Physics :Macro and micro states, statistical distributions, Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac Distributions, applications to specific heat of gases and blackbody radiation; Concept of negative temperatures.PAPER-II1. Quantum Mechanics :Wave-particle duality; Schroedinger equation and expectation values; Uncertainty principle; Solutions of the one-dimensional Schroedinger equation for free particle (Gaussian wave-packet), particle in a box, particle in a finite well, linear harmonic oscillator; Reflection and transmission by a step potential and by a rectangular barrier; Particle in a three dimensional box, density of states, free electron theory of metals; Angular momentum; Hydrogen atom; Spin half particles, properties of Pauli spin matrices.2. Atomic and Molecular Physics :Stern-Gerlach experiment, electron spin, fine structure of hydrozen atom; L-S coupling, J-J coupling; Spectroscopic notation of atomic states; Zeeman effect; Franck-Condon principle and applications; Elementary theory of rotational, vibrational and electronic spectra of diatomic molecules; Raman effect Mand molecular structure; Laser Raman spectroscopy; Importance of neutral hydrogen atom, molecular hydrogen and molecular hydrogen ion in astronomy. Fluorescence and Phosphorescence; Elementary theory and applications of NMR and EPR; Elementary ideas about Lamb shift and its significance.3. Nuclear and Particle Physics :Basic nuclear properties-size, binding energy, angular momentum, parity, magnetic moment; Semi-empirical mass formula and applications. Mass parabolas; Ground state of a deuteron, magnetic moment and non-central forces; Meson theory of nuclear forces; Salient features of nuclear forces; Shell model of the nucleus - success and limitations; Violation of parity in beta decay; Gamma decay and internal conversion; Elementary ideas about Mossbauer spectroscopy; Q-value of nuclear reactions; Nuclear fission and fusion, energy production in stars. Nuclear reactors. Classification of elementary particles and their interactions; Conservation laws; Quark structure of hadrons : Field quanta of electroweak and strong interactions; Elementary ideas about unification of forces; Physics of neutrinos.4. Solid State Physics, Devices and Electronics :Crystalline and amorphous structure of matter; Different crystal systems, space groups; Methods of determination of crystal structure; X-ray diffraction, scanning and transmission electron microscopies; Band theory of solids—conductors, insulators and semi-conductors; Thermal properties of solids, specific heat, Debye theory; Magnetism: die, para and ferromagnetism; Elements of super-conductivity, Meissner effect, Josephson junctions and applications; Elementary ideas about high temperature superconductivity. Intrinsic and extrinsic semi-conductors- p-n-p and n-p-n transistors; Amplifiers and oscillators. Op-amps; FET, JFET and MOSFET; Digital Electronics-Boolean identities, De Morgan’s laws, Logic gates and truth tables. Simple logic circuits; Thermistors, solar cells; Fundamentals of microprocessors and digital computers.
{"url":"https://competitionpedia.in/upsc/optional-subjects?filter=syllabus-of-physics","timestamp":"2024-11-07T17:23:27Z","content_type":"text/html","content_length":"91844","record_id":"<urn:uuid:edc02bf2-39f2-4ca9-94fa-63a32410e51d>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00151.warc.gz"}
Math Quizzes, Math Tests, and Multiplication Worksheets Need to quickly print out a math exercise sheet to practice arithmetic problems? Here, Knowledge Mouse provides a simple tool to generate random math problems. Addition, subtraction, multiplication, and division - you can choose what operations are used. How It Works Step 1: Pick a name First, choose a name. Almost anything will do. This is mainly so you can go back later and retrieve the same set of questions that you are generating here. Step 2: Choose number range The second step is to pick your number range. For example, if you want multiplication problems using the numbers from 1 through 12, you would specify 1 as the minimum and 12 as the maximum. For all single- and double-digit numbers, choose 0 as the minimum and 99 as the maximum. For only three-digit addition problems, you could choose 100 as the minimum and 999 as the maximum. You can even test negative numbers by specifying a negative number as the minimum. Step 3: Choose the number of questions Next, choose how many questions should be generated. The default is 16, and in general 16 questions should print out on a single page. Step 4: Choose the operations There are four operations supported: addition, multiplication, subtraction, and division. Check the boxes for whichever one(s) you wish to be used. You can use all four, or just one. Obviously, you must select at least one operation. Step 5: Other options If you click the "Show more options" link, you can see a few more settings. You can specify instructions, which will be printed out at the top of each sheet. For example, "Answer as many questions as you can in 60 seconds." You can also print out Name and Date lines at the top of each page. If you have a Knowledge Mouse account, you can also choose to make the quiz "private" so that no one else will be able to see it and prevent others from modifying it. Step 6: Done! When you are satisfied with the settings you've chosen, press the "Create Math Quiz!" button, and the worksheet should be generated. Each question will be generated using random numbers and a random choice for the operation, if more than one was selected. At this point, you can click "Make print-friendly" to remove the header and other parts, which should make it suitable for printing. The Print dialog should pop up automatically as well. If you do not wish to print right now, simply Close/Cancel the dialog. You may be able to use the Print Preview feature of your browser to view what it would look like when printed. When done, you can use the Back button on your browser to go back to the page where you created the math quiz. You can generate a different set of random arithmetic questions using the same options here if you wish. Having a strong foundation of arithmetic knowledge is one important factor for success for more advanced mathematical topics, including algebra and pre-calculus. This Knowledge Mouse math quiz creator can help pratice these concepts and achieve proficiency in these important areas. Use it to test students' arithmetic skills and practice four-digit multiplcation problems. Or use it to help practice memorization of multiplication tables. Other Activities Knowledge Mouse also offers an online math game. You can set up in a similar way, choosing the number range and the operations. Then the game begins. The player must answer the questions that appear and click on the correct answer. The total of correct and incorrect answers will be tallied. The game finishes when they've answered all then questions correctly. Questions may be repeated, and answering a question incorrectly makes it more likely for that question to be repeated, making it useful for practice and memorization. This game is written in HTML and Javascript, so Adobe Flash Player is not required, and it will work on iPhones, iPads, Android tablets, and any other device with a modern browser. Note that if you do not yet have a PRO account, the game will be in free-trial mode with a limited choice for the number range and operations. You can click here for more information on PRO accounts. Also, you can view our brand new animated math page, demonstrating the concepts of the commutative, associative, and distributive properties using animation showing falling cupcakes and moving And in addition to math, Knowledge Mouse offers other activites, such as a printable flash card creator, printable and online quizzes, and a brand new foreign language learning section.
{"url":"https://knowledgemouse.com/math_quizzes/new","timestamp":"2024-11-14T17:34:24Z","content_type":"text/html","content_length":"25059","record_id":"<urn:uuid:b70a0bb6-e24d-4bdb-af26-aceef4722874>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00890.warc.gz"}
How do you calculate total ordering cost in EOQ? How do you calculate total ordering cost in EOQ? How to calculate EOQ with the ordering cost formula 1. Determine your annual demand. To apply the ordering cost formula, find the annual demand value for the product your company needs to order. 2. Find the cost per order. 3. Calculate the carrying cost per unit. 4. Complete the ordering cost formula. How do you find the total cost per order? To calculate cost per order, you first need to add up all of your order expenses — everything you spend to acquire, fulfill, package, and ship orders — for a set period of time. Then, you divide your order expenses by the total number of orders you received during the same timeframe. How do you calculate total variable cost in EOQ? To determine the total variable cost the company will spend to produce 100 units of product, the following formula is used: Total output quantity x variable cost of each output unit = total variable How do you solve for Q in EOQ? We can calculate the order quantity as follows: Multiply total units by the fixed ordering costs (3,500 × $15) and get 52,500; multiply that number by 2 and get 105,000. Divide that number by the holding cost ($3) and get 35,000. Take the square root of that and get 187. That number is then Q. How do you calculate total cost example? Total Cost = Total Fixed Cost + Average Variable Cost Per Unit * Quantity of Units Produced 1. Total Cost = $10,000 + $5 * $2,000. 2. Total Cost = $20,000. How many orders will be placed per year using the EOQ? The number of orders in a year = Expected annual demand/EOQ. Total annual holding cost = Average inventory (EOQ/2) x holding cost per unit of inventory. Total annual ordering cost = Number of orders x cost of placing an order. What is carrying cost and ordering cost? Ordering costs are costs incurred on placing and receiving a new shipment of inventories. These include communication costs, transportation costs, transit insurance costs, inspection costs, accounting costs, etc. Carrying costs represent costs incurred on holding inventory in hand. What is order cost? Ordering costs are the expenses incurred to create and process an order to a supplier. These costs are included in the determination of the economic order quantity for an inventory item. Examples of ordering costs are as follows: Cost to prepare a purchase requisition. Cost to prepare a purchase order. What is the total cost equation? The total cost formula is used to combine the variable and fixed costs of providing goods to determine a total. The formula is: Total cost = (Average fixed cost x average variable cost) x Number of units produced. What is the total cost of a product? Total product costs can be determined by adding together the total direct materials and labor costs as well as the total manufacturing overhead costs. 1 Data like the cost of production per unit can help a business set an appropriate sales price for the finished item. How do you calculate the number of orders placed in a year? Which of the following is an example of ordering costs? Examples of order costs include the costs of preparing a requisition, a purchase order, and a receiving ticket, stocking the items when they arrive, processing the supplier’s invoice, and remitting the payment to the supplier. Are annual ordering and carrying costs always equal at the EOQ? c) Except for rounding, annual ordering and carrying costs are ALWAYS equal at the EOQ. What is total cost example? Total Costs For example, suppose a company leases office space for $10,000 per month, rents machinery for $5,000 per month, and has a $1,000 monthly utility bill. In this case, the company’s total fixed costs would be $16,000. What is the example of ordering cost? What does ordering costs include? What is the definition of ordering costs? Typically, ordering costs include expenses for a purchase order, labor costs for the inspection of goods received, labor costs for placing the goods received in stock, labor costs for issuing a supplier’s invoice and labor costs for issuing a supplier payment. What is the EOQ – economic order quantity formula for holding cost? So, the calculation of EOQ – Economic Order Quantity Formula for holding cost is = (200/2) * 1. The below table shows the calculation of the combined ordering and holding cost at economic order How do you calculate EOQ for inventory? However, as the size of inventory grows, the cost of holding the inventory rises. EOQ is the exact point that minimizes both these inversely related costs. EOQ Formula. The Economic Order Quantity formula is calculated by minimizing the total cost per order by setting the first order derivative to zero. What does EOQ mean in accounting? ECONOMIC ORDER QUANTITY (EOQ) MODEL. The economic order quantity (EOQ) is the order quantity that minimizes total holding and ordering costs for the year. Even if all the assumptions don’t hold exactly, the EOQ gives us a good indication of whether or not current order quantities are reasonable. What are the components of the total cost per order? The components of the formula that make up the total cost per order are the cost of holding inventory and the cost of ordering that inventory. The key notations in understanding the EOQ formula are as follows: The number of orders that occur annually can be found by dividing the annual demand by the volume per order.
{"url":"https://www.wren-clothing.com/how-do-you-calculate-total-ordering-cost-in-eoq/","timestamp":"2024-11-06T20:21:27Z","content_type":"text/html","content_length":"64745","record_id":"<urn:uuid:7a28cd13-3eeb-47cd-86a0-262282375bda>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00008.warc.gz"}
Who do you want to replace Kirk? Figured this should get it's own thread. I've seen Norvel, Prime, Stoops, Caldwell, and Frost mentioned in the season thread. Some of y'all are retarded. Dec 3, 2007 Reaction score May 27, 2014 Reaction score Ferentz's replace should be someone who fucks your mom. So either your dad or me Feb 25, 2010 Reaction score Anyone who says Prime or Frost should be permabanned. Dec 5, 2007 Reaction score Frost was mentioned as a replacement for Brian, not Kirk. And it wasn’t a serious suggestion. Jul 25, 2020 Reaction score AI version of Dennis Green. Loved and Defended The Criminal Until the Bitter End Sep 21, 2009 Reaction score I want someone who has an offense that scores 40 ppg and shuts every team out on defense. Jul 14, 2009 Reaction score Dec 30, 2008 Reaction score Subscribes to Playboy for the articles Aug 29, 2007 Reaction score I want someone who has an offense that scores 40 ppg and shuts every team out on defense. Jan 25, 2008 Reaction score Aug 24, 2007 Reaction score Chris Klieman Mark Stoops Phil Parker PJ Fleck Pat Fitzgerald Matt Campbell Aug 23, 2007 Reaction score Tito’s started kicking in halfway through Wade’s post. Frost was mentioned as a replacement for Brian, not Kirk. And it wasn’t a serious suggestion. I understand. Aug 28, 2007 Reaction score You are not a serious people. Dec 5, 2007 Reaction score Oct 21, 2007 Reaction score May 1, 2016 Reaction score I would be cool with Leonard Chris Klieman Mark Stoops Phil Parker PJ Fleck Pat Fitzgerald Matt Campbell This is a bad troll post, and you should feel bad Depending on how they interview/fill out staff any of: UW OC Chubb (sp) Klieman if he is realistic Has Shoved A Live Shrimp Up His Ass Apr 10, 2010 Reaction score Whoever it is should hire Oregon States OC. Whoever it is should hire Oregon States OC. That seems like Jonathan Smith who also goes in the absolutely would if possible bucket. Dec 3, 2007 Reaction score Aug 23, 2007 Reaction score Dec 5, 2007 Reaction score Aug 24, 2007 Reaction score May 1, 2016 Reaction score Feb 14, 2010 Reaction score I don’t have a name, but someone with success as a HC who wins the job after an unbiased and thorough search where one of the criteria is not “has ties to Iowa” Fiercely Advocated for Unlimited Transfers Dec 1, 2008 Reaction score This is like when Rec suggested Marcus Newsome was a good pick to coach Iowa track. Feb 9, 2014 Reaction score When Fran was hired, part of the reason was his style of play. I can see that playing out again. Maybe someone in their early 50’s who brings their young, dynamic OC with them. If they want to remain similarish, maybe Mark Stoops or Dave Doren. Dec 30, 2008 Reaction score This is like when Rec suggested Marcus Newsome was a good pick to coach Iowa track. Well, he’s an Iowa State grad and former player, so zero chance of that happening. Are you suggesting that his 52-4 record (zero regular season losses) might be inflated by playing in an absolutely terrible football conference? Why don’t you respect the NAIA?! Jan 3, 2014 Reaction score Fiercely Advocated for Unlimited Transfers Dec 1, 2008 Reaction score Id take a hard look at Alex Golesch. Drinks liquor with a straw Oct 3, 2011 Reaction score Ferentz's replace should be someone who fucks your mom. So either your dad or me Jan 21, 2015 Reaction score Depending on how they interview/fill out staff any of: UW OC Chubb (sp) Klieman if he is realistic knows things he probably shouldn’t know Oct 3, 2015 Reaction score Dec 31, 2013 Reaction score The only logical options are Steve or James. Aug 24, 2007 Reaction score Lee Elia. Much better with fans than Kirk. Jan 11, 2012 Reaction score Sep 29, 2014 Reaction score IMO Kirk will have some influence on who is the next HC. I’d be surprised if it’s not Phil or Woods Jan 3, 2014 Reaction score Aug 24, 2007 Reaction score IMO Kirk will have some influence on who is the next HC. I’d be surprised if it’s not Phil or Woods So, Brian? Sep 4, 2007 Reaction score God, the Hawks are soooo fucked- lulz at Fort Kirk Mar 6, 2011 Reaction score Crack started kicking in at the beginning of Wade’s mom’s pregnancy. Dec 8, 2009 Reaction score
{"url":"https://www.hawkeyelounge.com/threads/who-do-you-want-to-replace-kirk.221692/","timestamp":"2024-11-13T21:49:19Z","content_type":"text/html","content_length":"242558","record_id":"<urn:uuid:ca28b4af-e9fd-408c-813a-c15ee92a22e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00622.warc.gz"}
How To Calculate Average Annual Growth Rate In Microsoft Excel | SpreadCheaters How to calculate average annual growth rate in Microsoft Excel In Microsoft Excel, the Annual Average Growth Rate refers to a metric used to determine the growth rate of an investment or business-related data, such as revenue, profit, or market share, on a year-over-year basis. This calculation provides an estimation of the average change rate during a specific time frame, based on the assumption that the growth is uniform throughout the period. In this tutorial, we will learn how to calculate the average annual growth rate in Microsoft Excel. In Excel calculating the average annual growth rate is a common task that can be achieved by the formula “[(Ending Value / Beginning Value)^(1/Number of Years)] – 1”. Also, we can utilize the AVERAGE function for calculating the average annual growth rate in Microsoft Excel. Method 1: Calculating the Average Annual Growth Rate by Utilizing the AVERAGE Function Step 1 – Choose an Empty Cell Step 2 – Utilize the AVERAGE Function • Utilize the AVERAGE function: • Where B3:B8 is the range containing the total revenue for each year and the range B4:B7 is the range excluding the initial and the final value. Step 3 – Hit the Enter Key Method 2: Calculating AAGR from Individual Growth Rate Per Year Step 1 – Calculate the Growth Rate for the Second Year • Calculate the growth rate for the second year. • For this utilize the formula: • Where B4 is the cell with the final value for that interval and B3 is the cell with the initial value. • Hit the Enter key. Step 2 – Utilize Autofill to Calculate the Growth Rate for Each Year • Utilize Autofill to calculate the growth rate for each year. Step 3 – Apply the Percentage Format to the Growth Rate of Each Year • Apply the percentage format to the growth rate of each year. Step 4 – Utilize the Average Function to Calculate the Average Annual Growth Rate • Utilize the Average function in a cell: • The range C4:C8 holds the growth rate for each year in the percentage format. • Hit the Enter key to get the average annual growth rate.
{"url":"https://spreadcheaters.com/how-to-calculate-average-annual-growth-rate-in-microsoft-excel/","timestamp":"2024-11-14T18:32:27Z","content_type":"text/html","content_length":"60873","record_id":"<urn:uuid:482dec77-8496-4ea5-a1f0-69e4abacac53>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00560.warc.gz"}
Understanding the Shell Method in Calculus: A Comprehensive Guide Understanding the Shell Method in Calculus: A Comprehensive Guide Cylindrical shell diagram The shell method calculator is an important concept in integral calculus used to calculate the volume of solid three-dimensional shapes with known cross sections. This comprehensive guide will provide an in-depth understanding of the shell method, its applications, and step-by-step solutions to sample problems. An Introduction to the Shell Method The shell method, also known as the cylindrical shells method, is a technique used to calculate the volume of a solid by integrating the area of circular discs or cylindrical shells. Here are some key things to know about this method: What is the Shell Method? • A technique to find the volume of a three-dimensional solid by adding up the volumes of infinitely thin cylindrical shells • Cylindrical shells are formed by rotating a region bounded by two curves around an axis • The radius of each shell is determined by the distance of the curves from the axis of rotation When is it Used? • To calculate volumes of solids that have known cross sections perpendicular to an axis • Used as an alternative to disc/washer method when one bound forms the axis of revolution • Common on AP and IB Calculus exams Key Elements: • Axis of rotation • Bounds – two curves that define the region to be revolved • Radius of each cylindrical shell – distance of curves from axis • Height of each shell – length parallel to axis of rotation • Infinitesimally thin shells Understanding these foundational elements is key before applying the shell method to solve specific problems. Determining When to Use the Shell Method Choosing the best method to find the volume – disc/washer, cylindrical shells, or other – is crucial for accurately solving calculus problems on exams and in real-world applications. Consider using the shell method when: • One bound forms or lies along the axis of rotation • When it is difficult or impossible to solve using washers/discs • When the solid has known cross sections perpendicular to an axis The shell method may be preferable: • When one bound is simpler than the other • When one bound is given as a function and the other geometrically • To avoid negative signs and absolute value calculations For example, calculate the volume of a solid bounded by: • $y = x^2$ and $y = 9 – x^2$ revolved about the x-axis Here the x-axis forms one bound, making the shell method a good choice. Evaluating the merits and limitations of techniques is an key skill in calculus. Understanding when the shell method is optimal over other strategies is imperative for success. Using Cylindrical Shells to Calculate Volume The central premise of the shell method is using cylindrical shells to approximate the volume iteratively. Here is a step-by-step process: Step 1) Identify the axis of revolution • This will form the central axis of the cylindrical shells Step 2) Determine the bounds • The curves that define the region to be revolved (f(x) and g(x) typically) Step 3) Find the radius R of each shell • The distance between the axis and the outer bound curve Step 4) Find the height h of each shell • The difference between the upper and lower bound curves Step 5) Set up the integral $\int_{a}^{b} 2\pi Rh \;dx$ • a and b are the endpoints where the bounds cross the axis • Integrate between limits to sum all the shells Step 6) Evaluate the integral to calculate volume This methodology breaks down a complex 3D shape into simple cylindrical shells. By integrating along the axis and summing the shells’ volumes, we can accurately calculate the overall volume. Shell Method Formula The general formula used in the shell method for determining the volume V of a solid by integrating cylindrical shells is: $$V = \int_{a}^{b} 2\pi R h \,dx$$ • a and b are the endpoints along the axis of revolution x • $R$ is the radius of each cylindrical shell • $h$ is the height of each cylindrical shell parallel to the axis • $2\pi Rh$ gives the volume of each infinitesimally thin shell By integrating along the axis between the endpoints, we sum the volumes of all the shells to calculate the overall volume of the 3D solid. Sample Problem 1 Let’s walk through an example to demonstrate using cylindrical shells to calculate volume. Find the volume of the solid generated when the region bounded by $y = \sqrt{x}$ and $y = x$, $0 \leq x \leq 1$ is revolved about the x-axis. Step 1) Identify the axis of revolution The region is revolved about the x-axis. This will be the axis of the cylinders. Step 2) Determine the bounds Upper bound: $y = \sqrt{x}$ Lower bound: $y = x$ The curves intersect at $x = 1$. Step 3) Find R, the radius of each shell The shells are centered on the x-axis. The outer edge of each shell follows $y = \sqrt{x}$. So the radius $R = \sqrt{x}$ Step 4) Find h, the height of each shell Upper bound (outer radius): $y = \sqrt{x}$ Lower bound (inner radius): $y = x$ Height: $h = R{outer} – R{inner} = \sqrt{x} – x$ Step 5) Set up the integral Axis: x-axis from $x = 0$ to $x = 1$ Radius: $R = \sqrt{x}$ Height: $h = \sqrt{x} – x$ $ \int_{0}^{1} 2\pi(\sqrt{x})\left(\sqrt{x} – x\right)dx$ Step 6) Evaluate the integral $ V = \int{0}^{1} 2\pi x – 2\pi x^{3/2} \; dx$ $V = \left[\pi x^2 – \frac{2}{3}\pi x^{3/2}\right]{0}^{1}$ $V = \frac{\pi}{6}$ Therefore, the volume is $\frac{\pi}{6}$ cubic units when this region is revolved about the x-axis. Sample Problem 2 Let’s look at one more example applying the shell method. Find the volume generated when revolving the region bounded by $y = 2x – x^2$ and $y = 0$, about the line $x = 2$. Step 1) Identify the axis of revolution The region is revolved about the line $x = 2$. This forms the axis. Step 2) Determine the bounds Outer bound: $y = 2x – x^2$ Inner bound: $y = 0$ These curves intersect at $x = 0$ and $x = 2$. Step 3) Find R, the radius of each shell The outer edge of the shells follow $y = 2x – x^2$. The axis is at $x = 2$. So the radius $R = 2x – x^2$ Step 4) Find h, the height of each shell Outer radius: $R = 2x – x^2$ Inner radius (axis): $x = 2$ Height: $h = R{outer} – R{inner} = (2x – x^2) – 2$ Step 5) Set up the integral Axis: Line $x = 2$ from $x=0$ to $x=2$ Radius: $R = 2x – x^2$ Height: $h = (2x – x^2) – 2$ $\int_{0}^{2} 2\pi(2x – x^2)((2x – x^2) – 2)dx$ Step 6) Evaluate the integral After evaluating, the volume is $\frac{32\pi}{3}$ cubic units These examples demonstrate the six central steps for applying the shell method to calculate volumes. With some practice, this technique can be broadly utilized to find volumes efficiently. Choosing the Best Method When facing a volume problem with solid revolution, how do you decide whether cylindrical shells, disc/washer, or another method is best? Here is a quick guide: • If one bound is the axis of revolution, use shells. • If centering washers/discs on the axis is very simple, use washers/discs. • If one function is given parametrically or implicitly, avoid shells/discs. • For a region between two curves and two horizontal/vertical lines, use geometry. • For rectangular solids or slices, use geometry. With experience identifying the optimal technique, applying them becomes straightforward. Evaluating multiple methods also builds mathematical maturity. Pros and Cons of the Shell Method Advantages of the shell method: • Allows calculation of volume when one bound is on the axis of revolution • Avoids abs() calculations with bounds increasing/decreasing • Simpler with one bound a geometric shape • Often avoids negative signs in calculations • More visualizing required compared to disc/washer • Setting up integral can be tricky • Multiple steps can lead to errors • Choice between shells vs washers not always obvious The shell method excels when: • One bound is the axis of revolution • One functions is simpler than the other • The region is concave down or up Understanding the strengths and limitations helps apply it judiciously. With practice visualizing shapes as nested cylinders, the setup gets easier. Common Curve Functions The shell method can be applied to solids generated by many types of curve functions. Here are some of the most common types used: • Polynomials: $y = x^n$ • Radicals: $y = \sqrt[n]{x}$ • Trigonometric: $y = \sin(x)$, $y =\cos(x)$ • Exponential/Logarithmic: $y=b^x$, $y=\ln(x)$ • Absolute: $y=|x|$ To apply the shell method: • Identify the axis of revolution • Determine if one bound forms this axis • Set up integral with bounds, radius, and height • Recognize curve types to evaluate integral With repeated practice, applying the shell method to these curve types becomes second nature. Relationship to Cross Sections An intuitive way to understand the shell method is visualizing how circular cross sections stack to form a solid. Some key connections: • The bounds describe the curve functions forming the edges of the cross sections • The radius R of shells is determined by the outer edge of the cross sections • The height h of shells is found from the difference of the bounds • Stacking integrals sum the volumes of infinitesimally thin cross sections For example, revolving the region between $y = x$ and $y=x^2$ from $x=0$ to $x=1$ around the y-axis stacks circular cross sections with height $x^2 – x$ and radius $x$. Building the solid mentally from such cross sections provides an intuitive basis for setting up the shell integral. Strategies for Solving Problems Here are some helpful strategies for setting up and efficiently solving shell method problems: 1. Visualize the Solid Sketch the bounds, orientation with axis, cross sections. Helps choose discs/washers vs shells. 2. Identify the Axis of Revolution This will align with the axis of the cylindrical shells. 3. Determine the Bounds The upper and lower curves that define the region to revolve. 4. Decide the Order of Integration Integrate with respect to the axis of revolution. 5. Simplify the Bounds If Possible Rewrite parametrically, via trig identities when helpful. 6. Draw Sample Cylindrical Shells Determine radius R and height h for representative shells. 7. Set up the Integral Put all the pieces together to integrate. 8. Evaluate and Solve Symbolically first then numerically to determine volume. This clear step-by-step process minimizes mistakes. With practice, pattern recognition speeds solving. Real-World Applications While exams feature abstract shapes, the shell method can calculate real volumes critical across fields like: • Determine metal required to form cylindrical containers like tanks or pipes Food Production • Calculate ingredients to make cylindrical meats and cheeses • Identify materials needed for cylindrical support beams • Determine the volume and thus mass and density of oddly shaped rigid bodies Understanding the shell method and how to apply it to diverse solids provides a practical advantage in many STEM domains dealing with real-world objects. Common Exam and Test Questions Being able to accurately solve shell method problems swiftly is key for exam success. Here are some common question types: AP Calculus BC: • Use integration techniques to calculate volumes of revolution and surface areas • Combine multiple calculus methods including shells in one question • Conceptual questions on determining when shells are optimal Multivariable Calculus: • Setting up triple integrals using shells for volume • Shell method for vector valued functions • Spherical and cylindrical coordinate conversions IB Math HL: • Identifying shell method from diagram • Volume optimization problems with 2 variables using shells • Comparing washers/shells for unusual solids Practicing such exam-style questions builds intuition and speeds solving. Finding volumes with shells relies more on visualizing and geometric recognition than algebraic manipulation – a skill developed gradually by comprehensive practice across diverse cases. Topic Key Points – Technique to find volume by integrating cross sectional area of cylindrical shells What is the Shell Method? – Cylindrical shells formed by rotating a region about an axis – Radius is distance of bounds from axis – Height is difference in bound distances – One bound forms the axis of revolution When To Apply – Alternate to disc method when appropriate – Solids with known perpendicular cross sections – Identify axis of revolution Using Cylindrical Shells – Determine upper and lower bound functions – Find radius R and height h – Set up and evaluate integral summing shells $V = \int_{a}^{b} 2\pi R h \;dx$ Formula – a and b are endpoints along axis – R is radius of cylindrical shell – h is height of cylindrical shell – Integrate with respect to axis – Shells if one bound is axis Choosing the Best Method – Washers if placing discs on axis is simple – Avoid shells and washers if one function implicit/parametric – Use geometry for regions between lines and curves – Allows volumes where one bound is axis – Avoids abs() calculations Pros and Cons – Simpler with one geometric boundCons: – More visualizing – Setting up integral tricky – Multiple steps can induce errors – Choice between methods unclear – Polynomials – Radicals Common Curve Functions – Trig functions – Exponential/Logarithmic – Absolute value – Bounds describe edges of cross sections Relationship to Cross Sections – Shell radius from outer edge – Shell height from bound difference – Stacking cross sections sums volumes 1) Visualize 2) Identify Axis 3) Determine Bounds Problem Solving Strategies 4) Decide Order of Integration 5) Simplify Bounds 6) Draw Sample Shells 7) Set up Integral 8) Evaluate – Manufacturing: metal required for cylindrical containers Real-World Applications – Food production: ingredients for cylindrical meats/cheeses – Architecture: materials for support beams – Physics: volume and density of rigid solids – AP Calculus BC: volumes, surface areas, conceptual Common Exam Questions – Multivariable Calculus: triple integrals, vectors – IB Math HL: identifying method from diagram, volume optimization with shells In summary, fully comprehending the shell method theory, real-world applications, exam problem nuances, and solution strategies sets up students for success in calculus and beyond. This guide covers all the key facets in detail. With robust understanding and comprehensive practice, mastery of this volume calculation technique is within reach.
{"url":"https://shellmethodcalculator.info/understanding-the-shell-method-in-calculus-a-comprehensive-guide/","timestamp":"2024-11-02T11:39:23Z","content_type":"text/html","content_length":"168767","record_id":"<urn:uuid:01094048-015c-4fad-9234-7b73be73d89c>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00859.warc.gz"}
Algebraically coherent categories Alan S. Cigoli, James R. A. Gray and Tim Van der Linden We call a finitely complete category algebraically coherent if the change-of-base functors of its fibration of points are coherent, which means that they preserve finite limits and jointly strongly epimorphic pairs of arrows. We give examples of categories satisfying this condition; for instance, coherent categories and categories of interest in the sense of Orzech. We study equivalent conditions in the context of semi-abelian categories, as well as some of its consequences: including amongst others, strong protomodularity, and normality of Higgins commutators for normal subobjects, and in the varietal case, fibre-wise algebraic cartesian closedness. Keywords: Coherent functor; Smith, Huq, Higgins commutator; semi-abelian, locally algebraically cartesian closed category; category of interest 2010 MSC: 20F12, 08C05, 17A99, 18B25, 18G50} Theory and Applications of Categories, Vol. 30, 2015, No. 54, pp 1864-1905. Published 2015-12-08. Revised 2018-12-17. Original version at http://www.tac.mta.ca/tac/volumes/30/54/30-54a.pdf TAC Home
{"url":"http://www.tac.mta.ca/tac/volumes/30/54/30-54abs.html","timestamp":"2024-11-07T03:29:13Z","content_type":"text/html","content_length":"2135","record_id":"<urn:uuid:4c2879fa-3d3a-44a5-b200-ded7c539483c>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00656.warc.gz"}
How To Find Absolute Deviation [Master Data Analysis Now] » EML Are you searching for a clear guide on how to calculate absolute deviation? We’ve got you covered. We understand the frustration that can come with trying to grasp this concept, but don’t worry – we’re here to simplify it for you. Feeling lost and overstimulated by the complexities of absolute deviation? It’s not only you. Many struggle with this mathematical concept, don’t worry, as we’re here to alleviate your confusion and provide you with a straightforward solution. With years of experience in mathematics and data analysis, we’ve mastered the art of finding absolute deviation. Trust us to break it down in a way that echoes you, making it easier to understand and apply in your own calculations. Let’s immerse hand-in-hand and unpack the secret of absolute deviation. Key Takeaways • Absolute deviation measures the deviation of each data point from the mean in a data set, helping us understand variability. • To calculate absolute deviation, find the absolute value of the not the same between each data point and the mean, then average these values. • Understanding absolute deviation is critical in statistics and data analysis for assessing data spread and making smart decisionss. • The formula for calculating absolute deviation involves finding the mean of the data set and determining the average absolute changes from it. • Absolute deviation is important for identifying outliers, forms the basis for advanced statistical analyses like standard deviation, and has practical applications in finance, quality control, economics, and education. Understanding Absolute Deviation When it comes to absolute deviation, it’s super important to assimilate the concept thoroughly. Absolute deviation measures the deviation of each data point in a set from the mean of the data set. By calculating this, we can better understand the variability in our data. To calculate absolute deviation, we must find the absolute value of the not the same between each data point and the mean. Once we have these deviations, we can sum them up and divide by the total number of data points to get the average absolute deviation. After all, absolute deviation is critical in statistics and data analysis as it helps us assess the dispersion or spread of data points. It provides us with useful ideas into how consistent or varying our data is, aiding in making smart decisionss and drawing accurate endings. For a more in-depth understanding of absolute deviation, you can refer to this insightful article on Statistics How To. It investigates the concept further, giving clear examples and explanations to improve your knowledge. Formula for Calculating Absolute Deviation When it comes to finding absolute deviation, the process involves computing the absolute value of the changes between individual data points and the mean. This calculation allows us to measure the distance of each data point from the average, providing ideas into the variability of the data set. The formula for calculating absolute deviation can be summarized as follows: • Step 1: Find the mean of the data set. • Step 2: Subtract the mean from each data point. • Step 3: Take the absolute value of each not the same. • Step 4: Find the average of these absolute changes. By following these steps, we can determine the average absolute deviation, which helps us evaluate the dispersion of data points and make smart decisionss based on the variability observed. For a more detailed explanation and examples of calculating absolute deviation, you can refer to the article on Statistics How To. Stay tuned as we investigate more into the application of absolute deviation in statistical analysis. Example Calculations To better grasp the concept of absolute deviation, let’s walk through a couple of Example Calculations. • Example 1: Consider a dataset of 5 numbers – 10, 15, 20, 25, and 30. • Step 1: Find the mean: (10 + 15 + 20 + 25 + 30) / 5 = 20. • Step 2: Calculate the deviations from the mean: |10-20| = 10, |15-20| = 5, |20-20| = 0, |25-20| = 5, |30-20| = 10. • Step 3: Find the average of these absolute deviations: (10 + 5 + 0 + 5 + 10) / 5 = 6. • Example 2: Let’s work with a different set of numbers – 3, 7, 9, 12, and 18. • Step 1: Determine the mean: (3 + 7 + 9 + 12 + 18) / 5 = 9.8. • Step 2: Compute the absolute deviations from the mean: |3-9.8| = 6.8, |7-9.8| = 2.8, |9-9.8| = 0.8, |12-9.8| = 2.2, |18-9.8| = 8.2. • Step 3: Find the mean of these absolute deviations: (6.8 + 2.8 + 0.8 + 2.2 + 8.2) / 5 = 4.96. By following these steps, we can effectively calculate absolute deviation for a given dataset. For a detailed guide on statistical calculations, you may refer to the Statistics How To resource. Importance of Absolute Deviation in Data Analysis When exploring data analysis, understanding absolute deviation is critical for accurate interpretation. It helps us grasp the variability present in a dataset, giving insight into how spread out the data points are from the mean. This statistical measure enables us to assess dispersion effectively, important for making smart decisionss in various fields. In data analysis, outliers can significantly impact our endings. Absolute deviation allows us to identify these outliers by highlighting data points that deviate notably from the average. By calculating the absolute not the same between each data point and the mean, we can pinpoint these influential values with precision. Also, absolute deviation serves as a key building block for more advanced statistical analyses. Techniques such as standard deviation and variance rely on the concept of absolute deviation to provide more ideas into the dataset’s characteristics. Mastering absolute deviation sets a strong foundation for tackling more complex statistical tough difficulties efficiently. For a full guide on statistical calculations, including absolute deviation, we recommend exploring the resources available at Statistics How To. Their detailed explanations and examples can further improve our understanding and proficiency in data analysis. Practical Applications of Absolute Deviation When it comes to Practical Applications of Absolute Deviation, we can see its significance in various fields. Here are a few key areas where understanding and calculating absolute deviation can be highly beneficial: • Financial Analysis: In finance, absolute deviation is used to measure the dispersion of data points in a dataset, aiding in risk assessment and investment decision-making. • Quality Control: Industries often use absolute deviation to monitor the consistency and precision of manufacturing processes, ensuring products meet quality standards. • Economics: Economists rely on absolute deviation to evaluate market volatility and trends, allowing for more accurate forecasting and policy decisions. • Educational Assessment: In education, absolute deviation helps educators evaluate student performance and identify areas needing improvement, leading to adjusted interventions. Mastering absolute deviation opens doors to a more understanding of data variability and enables us to make smart decisionss across explorerse fields. To investigate further into Practical Applications of Absolute Deviation and improve your statistical skills, check out resources like Statistics How To For full guidance on statistical calculations. After all, absolute deviation serves as a key tool in data analysis that paves the way for more advanced statistical measures. Latest posts by Stewart Kaplan (see all)
{"url":"https://enjoymachinelearning.com/blog/how-do-you-find-absolute-deviation/","timestamp":"2024-11-05T13:36:49Z","content_type":"text/html","content_length":"118786","record_id":"<urn:uuid:c0b09291-11f5-4b47-b809-bf8af0166aba>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00384.warc.gz"}
How do you determine the y-intercepts of quadratic functions? | Socratic How do you determine the y-intercepts of quadratic functions? 1 Answer In order to find the y-intercept $b$ of any function $f \left(x\right)$ is $f \left(0\right)$. So, the y-intercept of $f \left(x\right) = a {x}^{2} + b x + c$ is $f \left(0\right) = a {\left(0\right)}^{2} + b \left(0\right) + c = c$. The constant term c of a quadratic function is always its y-intercept. I hope that this was helpful. Impact of this question 6707 views around the world
{"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-determine-the-y-intercepts-of-quadratic-functions","timestamp":"2024-11-06T02:05:38Z","content_type":"text/html","content_length":"32829","record_id":"<urn:uuid:a962c973-b379-434c-bf7d-cac9ccf16c63>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00365.warc.gz"}
Course Overview Get a basic understanding of the learning outcomes of the course. We'll cover the following Why use R? With rapid progress in statistical computing, proficiency in using statistical software has become almost a universal requirement in statistical methods courses, albeit to varying degrees. Popular software choices include SAS, SPSS, Stata, and R. There are three advantages of using R compared with commercial packages like SAS, SPSS, and Stata. • R is a well-thought-out, coherent system that comes with a suite of software facilities for data management, visualization, and analysis. • A large community of R users constantly develops new open-source add-on packages. There are already over 10,000 of these packages. • Finally, perhaps the greatest perk of the software is that it’s free. There are many reasons why R is preferred to other statistical software packages in higher education. But R’s greatest shortcoming to its widespread use in the social sciences is its steep learning What’s this course about? This course seeks to teach learners in the field of social sciences. It covers how to use R to manage, visualize, and analyze data to answer substantive research questions and reproduce the statistical analysis in published journal articles. What’s different in this course? This course distinguishes itself from other introductory R or statistics courses in three important ways. • First, it intends to serve as an introductory text on using R for data analysis projects, targeting an audience rarely exposed to statistical programming. • A second unique feature of this course is its emphasis on meeting the practical needs of students using R to conduct statistical analysis for research projects driven by substantive questions in social sciences. • A third unique feature of this course is its emphasis on teaching students how to replicate statistical analyses in published journal articles. This course primarily explains one continuous outcome variable and relevant statistical techniques, such as mean, a difference of means, covariance, correlation, and cross-sectional regression. So, comprehensiveness in both programming and statistics is purposefully sacrificed for greater accessibility, clarity, and depth. The goal is to make this course accessible and useful for novices in both programming and data analysis. Learning outcomes In sum, this course integrates R programming, the logic and steps of statistical inference, and the process of empirical social science research in a highly accessible and structured fashion. The course will guide us on how to do the following: • Use R to import data, inspect data, identify dataset attributes, and manage observations, variables, and datasets. • Use R to graph simple histograms, box plots, scatter plots, and research findings. • Use R for summarizing data, conducting a one-sample t-test, testing the difference of means between groups, computing covariance and correlation, estimating and interpreting ordinary least square (OLS) regression, and diagnosing and correcting regression assumption violations. • Replicate research findings in published journal articles.
{"url":"https://www.educative.io/courses/using-r-data-analysis-social-sciences/course-overview","timestamp":"2024-11-06T22:06:07Z","content_type":"text/html","content_length":"774902","record_id":"<urn:uuid:04f6d7a2-90fd-4b4c-b215-01577cc35d02>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00850.warc.gz"}
What Is the Sum of the First 100 Even Numbers? The sum of the first 100 even numbers is 10,100. This is calculated by taking the sum of the first 100 numbers, which is 5,050, and multiplying by 2. To find the total of the first 100 numbers, multiply 50 by 101. The calculation for the first 100 numbers is based on a legend of mathematician Carl Friedrich Gauss. According to the story, as a child his teacher asked the class to find the sum of the first 100 numbers to keep them busy. Gauss was able to determine it was 5,050 in a few seconds. He realized that the sum of the first and last number was 101, the second number and the second to last number was 101, and so on, resulting in 50 pairs of 101. He then multiplied 50 by 101 to get 5,050. Since the sum of the first 100 even numbers is double that of the first 100, a person can multiply 5,050 by 2 to get 10,100.
{"url":"https://www.reference.com/world-view/sum-first-100-even-numbers-92d4ce8658f0228a","timestamp":"2024-11-08T18:03:54Z","content_type":"text/html","content_length":"61508","record_id":"<urn:uuid:fdaa6daf-064e-4cf6-9631-bb319d0cdd14>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00671.warc.gz"}
Surface Areas and Volumes Class 10 - NCERT Solutions (with Video) 2024 Click on any of the links below to start learning from Teachoo... Get Solutions of all NCERT Exercise Questions and Examples of Chapter 12 Class 10 Surface Area and Volumes. All questions are solved in an easy way, with video explanation of each question. We have studied Surface Area and Volumes in Class 9, where we looked at the formulas of Area and Volume of Different Figures Click on an exercise link below to start doing the chapter from the NCERT. Or you can do the chapter from Concept Wise. In concept wise, each chapter is divided into some concepts. First concept is explained, and then questions of the concept is solved.
{"url":"https://www.teachoo.com/subjects/cbse-maths/class-10th/ch13-10th-surface-areas-and-volumes/","timestamp":"2024-11-03T23:31:27Z","content_type":"text/html","content_length":"108548","record_id":"<urn:uuid:21c1e08e-62d7-4742-ba1f-d2ddbcdae827>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00016.warc.gz"}
PID Control Use this calculator to compute the control output for a PID controller in a servomotor system. Enter the PID gains (Kp, Ki, Kd), the current error, and the time step, then click "Calculate Control Output" to see the result. The calculation uses the PID control law: u(t) = Kp * e(t) + Ki * ∫e(t)dt + Kd * de(t)/dt, where: • u(t) is the control output (e.g., motor voltage) • e(t) is the error (desired position - actual position) • Kp is the proportional gain • Ki is the integral gain • Kd is the derivative gain Note: This calculator provides a simplified single-step calculation. In a real system, the PID controller would run continuously, updating the control output at each time step. Understanding PID Controllers PID controllers are essential in control systems, offering a way to regulate processes through feedback loops. They adjust control inputs based on the error between desired and actual outputs. The proportional term addresses present errors, the integral term corrects accumulated past errors, and the derivative term predicts future errors, enabling precise control. Importance in Servo Systems In servomotor applications, PID controllers ensure precise movement and positioning, which is crucial for industrial automation. By continuously adjusting the control signal (e.g., motor voltage), they maintain the desired position despite external disturbances or changes in system dynamics. Use this calculator to compute the control output for a PID controller in a servomotor system. Enter the PID gains (Kp, Ki, Kd), the current error, and the time step, then click "Calculate Control Output" to see the result. The calculation uses the PID control law: u(t) = Kp * e(t) + Ki * ∫e(t)dt + Kd * de(t)/dt, where: • u(t) is the control output (e.g., motor voltage) • e(t) is the error (desired position - actual position) • Kp is the proportional gain • Ki is the integral gain • Kd is the derivative gain Note: This calculator provides a simplified single-step calculation. In a real system, the PID controller would run continuously, updating the control output at each time step. Understanding PID Controllers PID controllers are essential in control systems, offering a way to regulate processes through feedback loops. They adjust control inputs based on the error between desired and actual outputs. The proportional term addresses present errors, the integral term corrects accumulated past errors, and the derivative term predicts future errors, enabling precise control. Importance in Servo Systems In servomotor applications, PID controllers ensure precise movement and positioning, which is crucial for industrial automation. By continuously adjusting the control signal (e.g., motor voltage), they maintain the desired position despite external disturbances or changes in system dynamics.
{"url":"https://www.wakeindustrial.com/tools/pid-control-calculator","timestamp":"2024-11-02T03:08:45Z","content_type":"text/html","content_length":"62253","record_id":"<urn:uuid:cc19edb2-b3fd-4144-867a-5129ccecd7af>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00540.warc.gz"}
The CP of 25 articles is equal to SP of 20 articles. Find the loss or gain percent. The CP of 25 articles is equal to SP of 20 articles. Find the loss or gain percent. Solution 1 Step 1: Understand the problem The cost price (CP) of 25 articles is the same as the selling price (SP) of 20 articles. This means that the seller is selling 20 articles for the price he bought 25 Step 2: Set up the equation Let's assume the cost price of 1 article is $1. Therefore, the Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem. Knowee AI Upgrade your grade with Knowee Get personalized homework help. Review tough concepts in more detail, or go deeper into your topic by exploring other relevant questions.
{"url":"https://knowee.ai/questions/39891158-the-cp-of-articles-is-equal-to-sp-of-articles-find-the-loss-or-gain","timestamp":"2024-11-10T04:27:43Z","content_type":"text/html","content_length":"364143","record_id":"<urn:uuid:35f54d84-00be-4f3d-85b4-6c1c2751cbce>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00080.warc.gz"}
(graduate Studies In Mathematics 120) Qing Han-a Basic Course In Partial Differential Equations-american Mathematical Society (2011).pdf [k0pvwrd29101] A Basic Course in Partial Differential 'Equations Qing Han Graduate -Stijd'iaes, in Math ° ati cs. Volume 120 mer can'Mathematical SOciey A Basic Course in Partial Differential Equations A Basic Course in Partial Differential Equations Qing Han Graduate Studies in Mathematics Volume 120 American Mathematical Society Providence, Rhode Island EDITORIAL COMMITTEE David Cox (Chair) Rafe Mazzeo Martin Scharlemann Gigliola Stafl'ilani 2000 Mathematics Subject Classification. Primary 35-01. For additional information and updates on this book, visit www.ams.org/bookpages/gsm- 120 Library of Congress Cataloging-in-Publication Data Han, Qing. A basic course in partial differential equations / Qing Han. p. cm. - (Graduate studies in mathematics ; v. 120) Includes bibliographical references and index. ISBN 978-0-8218-5255-2 (alk. paper) 1. Differential equations, Partial. I. Title. QA377.H31819 515'. 353-dc22 2010043189 Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy a chapter for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given. Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Requests for such permission should be addressed to the Acquisitions Department, American Mathematical Society, 201 Charles Street, Providence, Rhode Island 02904-2294 USA. Requests can also be made by e-mail to reprint-permission@ams . org. © 2011 by the author. The American Mathematical Society retains all rights except those granted to the United States Government. Printed in the United States of America. 0 The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability. Visit the AMS home page at http: //www. ams . org/ To Yansu, Raymond and Tommy Chapter 1. Introduction §1.1. Notation §1.2. Well-Posed Problems § 1.3. Chapter 2. First-Order Differential Equations §2.1. Nonchaxacteristic Hypersurfaces §2.2. The Method of Characteristics §2.3. A Priori Estimates §2.4. Chapter 3. An Overview of Second-Order PDEs §3.1. Energy Estimates §3.3. Separation of Variables §3.2. §3.4. Chapter 4. Laplace Equations §4.1. Fundamental Solutions §4.2. Mean-Value Properties §4.3. The Maximum Principle §4.4. Poisson Equations §4.5. Chapter 5. Heat Equations §5.1. Fourier Transforms §5.2. Fundamental Solutions §5.3. The Maximum Principle §5.4. Chapter 6. Wave Equations §6.1. One-Dimensional Wave Equations §6.2. Higher-Dimensional Wave Equations §6.3. Energy Estimates §6.4. Chapter 7. First-Order Differential Systems §7.1. Noncharacteristic Hypersurfaces §7.2. Analytic Solutions §7.3. Nonexistence of Smooth Solutions §7.4. Chapter 8. Epilogue §8.1. Basic Linear Differential Equations §8.2. Examples of Nonlinear Differential Equations Is it really necessary to classify partial differential equations (PDEs) and to employ different methods to discuss different types of equations? Why is it important to derive a priori estimates of solutions before even proving the existence of solutions? These are only a few questions any students who just start studying PDEs might ask. Students may find answers to these questions only at the end of a one-semester course in basic PDEs, sometimes after they have already lost interest in the subject. In this book, we attempt to address these issues at the beginning. There are several notable features in this book. First, the importance of a priori estimates is addressed at the beginning and emphasized throughout this book. This is well illustrated by the chapter on first-order PDEs. Although first-order linear PDEs can be solved by the method of characteristics, we provide a detailed analysis of a priori estimates of solutions in sup-norms and in integral norms. To emphasize the importance of these estimates, we demonstrate how to prove the existence of weak solutions with the help of basic results from functional analysis. The setting here is easy, since L2-spaces are needed only. Meanwhile, all important ideas are in full display. In this book, we do attempt to derive explicit expressions for solutions whenever possible. However, these explicit expressions of solutions of special equations usually serve mostly to suggest the correct form of estimates for solutions of general equations. The second feature is the illustration of the necessity to classify secondorder PDEs at the beginning. In the chapter on general second-order linear PDEs, immediately after classifying second-order PDEs into elliptic, parabolic and hyperbolic type, we discuss various boundary-value problems and initial/boundary-value problems for the Laplace equation, the heat equation and the wave equation. We discuss energy methods for proving uniqueness and find solutions in the plane by separation of variables. The explicit expressions of solutions demonstrate different properties of solutions of different types of PDEs. Such differences clearly indicate that there is unlikely to be a unified approach to studying PDEs. Third, we focus on simple models of PDEs and study these equations in detail. We have chapters devoted to the Laplace equation, the heat equation and the wave equation, and use several methods to study each equation. For example, for the Laplace equation, we use three different methods to study its solutions: the fundamental solution, the mean-value property and the maximum principle. For each method, we indicate its advantages and its shortcomings. General equations are not forgotten. We also discuss maximum principles for general elliptic and parabolic equations and energy estimates for general hyperbolic equations. The book is designed for a one-semester course at the graduate level. Attempts have been made to give a balanced coverage of different classes of partial differential equations. The choice of topics is influenced by the personal tastes of the author. Some topics may not be viewed as basic by others. Among those not found in PDE textbooks at a comparable level are estimates in L°°-norms and L2-norms of solutions of the initial-value problem for the first-order linear differential equations, interior gradient estimates and differential Harnack inequality for the Laplace equation and the heat equation by the maximum principle, and decay estimates for solutions of the wave equation. Inclusions of these topics reflect the emphasis on estimates in this book. This book is based on one-semester courses the author taught at the University of Notre Dame in the falls of 2007, 2008 and 2009. During the writing of the book, the author benefitted greatly from comments and suggestions of many of his friends, colleagues and students in his classes. Tiancong Chen, Yen-Chang Huang, Gang Li, Yuanwei Qi and Wei Zhu read the manuscript at various stages. Minchun Hong, Marcus Khuri, Ronghua Pan, Xiaodong Wang and Xiao Zhang helped the author write part of Chapter 8. Hairong Liu did a wonderful job of typing an early version of the manuscript. Special thanks go to Charles Stanton for reading the entire manuscript carefully and for many suggested improvements. I am grateful to Natalya Pluzhnikov, my editor at the American Mathematical Society, for reading the manuscript and guiding the effort to turn it into a book. Last but not least, I thank Edward Dunne at the AMS for his help in bringing the book to press. Qing Han Chapter 1 This chapter serves as an introduction of the entire book. In Section 1.1, we first list several notations we will use throughout this book. Then, we introduce the concept of partial differential equations. In Section 1.2, we discuss briefly well-posed problems for partial differential equations. We also introduce several function spaces whose associated norms are used frequently in this book. In Section 1.3, we present an overview of this book. 1.1. Notation In general, we denote by x points in IRn and write x = (Xi,'. , x7) in terms of its coordinates. For any x e IRn, we denote by lxi the standard Euclidean norm, unless otherwise stated. Namely, for any x = (Xi,'' , x7 ), we have 1 Sometimes, we need to distinguish one particular direction as the time direction and write points in IRn+1 as (x, t) for x e IRn and t e JR. In this case, we call x = (xi,'' , x7) E IRn the space variable and t e IR the time variable. In R2, we also denote points by (x, y). Let 1 be a domain in W, that is, an open and connected subset in IRn. We denote by C(1) the collection of all continuous functions in 1, by C(1) the collection of all functions with continuous derivatives up to order m, for any integer m > 1, and by C°°(1) the collection of all functions with continuous derivatives of arbitrary order. For any u e Cm(), we denote by 1 1. Introduction Vmu the collection of all partial derivatives of u of order m. For m = 1 and m = 2, we usually write Vmu in special forms. For first-order derivatives, we write Vu as a vector of the form Vu=(u51, ,u5). This is the gradient vector of u. For second-order derivatives, we write V2u in the matrix form V2u = ux1x1 ux2x1 This is a symmetric matrix, called the Hessian matrix of u. For derivatives of order higher than two, we need to use multi-indices. A multi-index a E 7L+ is given by a = (al,.'. , a7) with nonnegative integers ai, , a7. We write n IaI=a. 2=1 For any vector = (1, ,fin ) C W, we denote - tai The partial derivative &u is defined by as u = aSi . .. axn u, and its order is I al. For any positive integer m, we define 1 2) \IaI=m In particular, 1 and 1 IV2ul = uxix i, j=1 A hypersurface in W is a surface of dimension n - 1. Locally, a Cnhypersurface can be expressed by {co = 0} for a Cn-function co with Vco Alternatively, by a rotation, we may take co(x) = xn -'b(x1, , x_1) for a Ctm-function b of n - 1 variables. A domain 11 C W is Cn if its boundary Oh is a Cm-hypersurface. 1.2. Well-Posed Problems A partial differential equation (henceforth abbreviated as PDE) in a domain SZ C Rn is a relation of independent variables x e fZ, an unknown function u defined in fZ, and a finite number of its partial derivatives. To solve a PDE is to find this unknown function. The order of a PDE is the order of the highest derivative in the relation. Hence for a positive integer m, the general form of an mth-order PDE in a domain 1 C Rn is given by , V'nu(x)) = 0 F(x, u, Vu(x), V2u(x), for x E fZ. Here F is a function which is continuous in all its arguments, and u is a Ctm-function in fZ. A Cn-solution u satisfying the above equation in the pointwise sense in SZ is often called a classical solution. Sometimes, we need to relax regularity requirements for solutions when classical solutions are not known to exist. Instead of going into details, we only mention that it is an important method to establish first the existence of weak solutions, functions with less regularity than Ctm and satisfying the equation in some weak sense, and then to prove that these weak solutions actually possess the required regularity to be classical solutions. A PDE is linear if it is linear in the unknown functions and their derivatives, with coefficients depending on independent variables x. A general mth-order linear PDE in SZ is given by aa(x)aau = f (x) for x E fZ. IoI<m Here as is the coefficient of 0&u and f is the nonhomogeneous term of the equation. A PDE of order m is quasilinear if it is linear in the derivatives of solutions of order m, with coefficients depending on independent variables x and the derivatives of solutions of order <m. In general, an mth-order quasilinear PDE in SZ is given by a« (x, u, ... , Qm- l u) a«u = f(x, u, ... , Qm- l u) for x e c. I&I =m Several PDEs involving one or more unknown functions and their deriva- tives form a partial differential system. We define linear and quasilinear partial differential systems accordingly. In this book, we will focus on first-order and second-order linear PDEs and first-order linear differential systems. On a few occasions, we will diverge to nonlinear PDEs. 1.2. Well-Posed Problems What is the meaning of solving partial differential equations? Ideally, we obtain explicit solutions in terms of elementary functions. In practice this is only possible for very simple PDEs or very simple solutions of more general 1. Introduction PDEs. In general, it is impossible to find explicit expressions of all solutions of all PDEs. In the absence of explicit solutions, we need to seek methods to prove existence of solutions of PDEs and discuss properties of these solutions. In many PDE problems, this is all we need to do. A given PDE may not have solutions at all or may have many solutions. When it has many solutions, we intend to assign extra conditions to pick up the most relevant solutions. Those extra conditions usually are in the form of boundary values or initial values. For example, when we consider a PDE in a domain, we can require that solutions, when restricted to the boundary, have prescribed values. This is the so-called boundary-value problems. When one variable is identified as the time and a part of the boundary is identified as an initial hypersurface, values prescribed there are called initial values. We use data to refer to boundary values or initial values and certain known functions in the equation, such as the nonhomogeneous term if the equation is linear. Hadamard introduced the notion of well-posed problems. A given problem for a partial differential equation is well-posed if (i) there is a solution; (ii) this solution is unique; (iii) the solution depends continuously in some suitable sense on the data given in the problem, i.e., the solution changes by a small amount if the data change by a small amount. We usually refer to (i), (ii) and (iii) as the existence, uniqueness and continuous dependence, respectively. We need to emphasize that the wellposedness goes beyond the existence and uniqueness of solutions. The continuous dependence is particularly important when PDEs are used to model phenomena in the natural world. This is because measurements are always associated with errors. The model can make useful predictions only if solutions depend on data in a controllable way. In practice, both the uniqueness and the continuous dependence are proved by a priori estimates. Namely, we assume solutions already exist and then derive certain norms of solutions in terms of data in the problem. It is important to note that establishing a priori estimates is in fact the first step in proving the existence of solutions. A closely related issue here is the regularity of solutions such as continuity and differentiability. Solutions of a particular PDE can only be obtained if the right kind of regularity, or the right kind of norms, are employed. Two classes of norms are used often, sup-norms and L2-norms. 1.3. Overview Let 1 be a domain in R. For any bounded function u in 1, we define the sup-norm of u in 1 by kLIL00() = sup IuI For a bounded continuous function u in St, we may also write IuIc(c) instead of IuILoo(c). Let m be a positive integer. For any function u in St with bounded derivatives up to order m, we define the C"''-norm of u in SZ by IaI<m If 1 is a bounded Cm-domain in Ian, then Cm (11), the collection of functions which are Ctm in 1, is a Banach space equipped with the Cn-norm. Next, for any Lebesgue measurable function u in 1, we define the L2norm of u in 1 by 2 (f where integration is in the Lebesgue sense. The L2-space in 1 is the collecIIUIIL2() - u2 dx tion of all Lebesgue measurable functions in 1 with finite L2-norms and is denoted by L2(1). We learned from real analysis that L2 (1) is a Banach space equipped with the L2-norm. Other norms will also be used. We will introduce them as needed. The basic formula for integration is the formula of integration by parts. Let 1 be a piecewise C1-domain in Ian and v = (v1,... , vn) be the unit exterior normal vector to D1. Then for any u, v e C1(1) f1 C(1), f uxZv dx = - uvx2 dx + uvv2 dS, , n. Such a formula is the basis for L2-estimates. for i = 1, In deriving a priori estimates, we follow a common practice and use the "variable constant" convention. The same letter C is used to denote constants which may change from line to line, as long as it is clear from the context on what quantities the constants depend. In most cases, we are not interested in the value of the constant, but only in its existence. 1.3. Overview There are eight chapters in this book. The main topic in Chapter 2 is first-order PDEs. In Section 2.1, we introduce the basic notion of noncharacteristic hypersurfaces for initial-value problems for first-order PDEs. We discuss first-order linear PDEs, quasilinear PDEs and general nonlinear PDEs. In Section 2.2, we solve initial-value 1. Introduction problems by the method of characteristics if initial values are prescribed on noncharacteristic hypersurfaces. We demonstrate that solutions of a system of ordinary differential equations (ODES) yield solutions of the initial-value problems for first-order PDEs. In Section 2.3, we derive estimates of solutions of initial-value problems for first-order linear PDEs. The L°°-norms and the L2-norms of solutions are estimated in terms of those of initial values and nonhomogeneous terms. In doing so, we only assume the existence of solutions and do not use any explicit expressions of solutions. These estimates provide quantitative properties of solutions. Chapter 3 should be considered as an introduction to the theory of second-order linear PDEs. In Section 3.1, we introduce the Laplace equa- tion, the heat equation and the wave equation. We also introduce their general forms, elliptic equations, parabolic equations and hyperbolic equations, which will be studied in detail in subsequent chapters. In Section 3.2, we derive energy estimates of solutions of certain boundary-value problems. Consequences of such energy estimates are the uniqueness of solutions and the continuous dependence of solutions on boundary values and nonhomogeneous terms. In Section 3.3, we solve these boundary-value problems in the plane by separation of variables. Our main focus is to demonstrate different regularity patterns for solutions of different differential equations, the Laplace equation, the heat equation and the wave equation. In Chapter 4, we discuss the Laplace equation and the Poisson equation. The Laplace equation is probably the most important PDE with the widest range of applications. In the first three sections, we study harmonic functions (i.e., solutions of the Laplace equation), by three different methods: the fundamental solution, the mean-value property and the maximum principle. These three sections are relatively independent of each other. In Section 4.1, we solve the Dirichlet problem for the Laplace equation in balls and derive Poisson integral formula. Then we discuss regularity of harmonic functions using the fundamental solution. In Section 4.2, we study the mean-value property of harmonic functions and its consequences. In Section 4.3, we discuss the maximum principle for harmonic functions and its applications. In particular, we use the maximum principle to derive interior gradient estimates for harmonic functions and the Harnack inequality for positive harmonic functions. We also solve the Dirichlet problem for the Laplace equation in a large class of bounded domains by Perron's method. Last in Section 4.4, we briefly discuss classical solutions and weak solutions of the Poisson equation. In Chapter 5, we study the heat equation, which describes the temperature of a body conducting heat, when the density is constant. In Section 5.1, we introduce Fourier transforms briefly and derive formally an explicit 1.3. Overview expression for solutions of the initial-value problem for the heat equation. In Section 5.2, we prove that such an expression indeed yields a classical solution under appropriate assumptions on initial values. We also discuss regularity of arbitrary solutions of the heat equation by the fundamental solution. In Section 5.3, we discuss the maximum principle for the heat equation and its applications. In particular, we use the maximum principle to derive interior gradient estimates for solutions of the heat equation and the Harnack inequality for positive solutions of the heat equation. In Chapter 6, we study the n-dimensional wave equation, which represents vibrations of strings or propagation of sound waves in tubes for n = 1, waves on the surface of shallow water for n = 2, and acoustic or light waves for n = 3. In Section 6.1, we discuss initial-value problems and various initial/boundary-value problems for the one-dimensional wave equation. In Section 6.2, we study initial-value problems for the wave equation in higher-dimensional spaces. We derive explicit expressions of solutions in odd dimensions by the method of spherical average and in even dimensions by the method of descent. We also discuss global behaviors of solutions. Then in Section 6.3, we derive energy estimates for solutions of initial-value problems. Chapter 6 is relatively independent of Chapter 4 and Chapter 5 and can be taught after Chapter 3. In Chapter 7, we discuss partial differential systems of first order and focus on existence of local solutions. In Section 7.1, we introduce noncharacteristic hypersurfaces for partial differential equations and systems of arbitrary order. We demonstrate that partial differential systems of arbitrary order can always be changed to those of first order. In Section 7.2, we discuss the Cauchy-Kovalevskaya theorem, which asserts the existence of analytic solutions of noncharacteristic initial-value problems for differential systems if all data are analytic. In Section 7.3, we construct a first-order linear differential system in R3 which does not admit smooth solutions in any subsets of ]R3. In this system, coefficient matrices are analytic and the nonhomogeneous term is a suitably chosen smooth function. In Chapter 8, we discuss several differential equations we expect to study in more advanced PDE courses. Discussions in this chapter will be brief. In Section 8.1, we discuss basic second-order linear differential equations, including elliptic, parabolic and hyperbolic equations, and first-order linear symmetric hyperbolic differential systems. We will introduce appropriate boundary-value problems and initial-value problems and introduce appropriate function spaces to study these problems. In Section 8.2, we introduce several important nonlinear equations and focus on their background. This chapter is designed to be introductory. 1. Introduction Each chapter, except this introduction and the final chapter, ends with exercises. Level of difficulty varies considerably. Some exercises, at the most difficult level, may require long lasting Chapter 2 First-Order Differential Equations In this chapter, we discuss initial-value problems for first-order PDEs. Main topics include noncharacteristic conditions, methods of characteristics and a priori estimates in L°°-norms and in L2-norms. In Section 2.1, we introduce the basic notion of noncharacteristic hypersurfaces for initial-value problems. In an attempt to solve initial-value problems, we illustrate that we are able to compute all derivatives of solutions on initial hypersurfaces if initial values are prescribed on noncharacteristic initial hypersurfaces. For first-order linear PDEs, the noncharacteristic condition is determined by equations and initial hypersurfaces, independent of initial values. However, for first-order nonlinear equations, initial values also play a role. Noncharacteristic conditions will also be introduced for second-order linear PDEs in Section 3.1 and for linear PDEs of arbitrary order in Section 7.1, where multi-indices will be needed. In Section 2.2, we solve initial-value problems by the method of characteristics if initial values are prescribed on noncharacteristic hypersurfaces. For first-order homogeneous linear PDEs, special curves are introduced along which solutions are constant. These curves are given by solutions of a system of ordinary differential equations (ODEs), the so-called characteristic ODEs. For nonlinear PDEs, characteristic ODEs also include additional equations for solutions of PDEs and their derivatives. Solutions of the characteristic ODEs yield solutions of the initial-value problems for first-order PDEs. In Section 2.3, we derive estimates of solutions of initial-value problems for first-order linear PDEs. The L°°-norms and the L2-norms of solutions 2. First-Order Differential Equations are estimated in terms of those of initial values and nonhomogeneous terms. In doing so, we only assume the existence of solutions and do not use any explicit expressions of solutions. These estimates provide quantitative properties of solutions. In the final part of this section, we discuss briefly the existence of weak solutions as a consequence of the L2-estimates. The method is from functional analysis and the Riesz representation theorem plays an essential role. 2.1. Noncharacteristic Hypersurfaces Let St be a domain in ][8n and F = F(x, u, p) be a smooth function of (x, u, p) E 12 x ]I8 x W. A first-order PDE in St is given by F(x, u, Vu)= 0 for x E St. Solving (2.1.1) in the classical sense means finding a smooth function u satisfying (2.1.1) in St. We first examine a simple example. Example 2.1.1. We consider in ][82 = {(x, t)} the equation u + ut = 0. This is probably the simplest first-order PDE. Obviously, u(x, t) = x - t is a solution. In general, u(x, t) = uo(x - t) is also a solution for any C'function up. Such a solution has a physical interpretation. We note that u(x, t) = uo (x - t) is constant along straight lines x - t = xo. By interpreting x as location and t as time, we can visualize such a solution as a wave propagating to the right with velocity 1 without changing shape. When interpreted in this way, the solution u at later time (t > 0) is determined uniquely by its value at the initial time (t = 0), which is given by uo(x). The function uo is called an initial value. U u(',ti) u(',t2) Figure 2.1.1. Graphs of u at different times t2 > t1. 2.1. Noncharacteristic Hypersurfaces In light of Example 2.1.1, we will introduce initial values for (2.1.1) and discuss whether initial values determine solutions. Let E be a smooth hypersurface in I[8Th with St fl E Ql. We intend to prescribe u on E to find a solution of (2.1.1). To be specific, let uo be a given smooth function on E. We will find a solution u of (2.1.1) also satisfying u = uo on. We usually call E the initial hypersurface and uo the initial vale or Cauchy vale. The problem of solving (2.1.1) together with (2.1.2) is called the initial-value problem or Cauchy problem. Our main focus is to solve such an initial-value problem under appropriate conditions. We start with the following question. Given an initial value (2.1.2) for equation (p.1.1), can we compute all derivatives of u at each point of the initial hypersurface E? This should be easier than solving the initial-value problem (2.1.1)-(2.1.2). To illustrate the main ideas, we first consider linear PDEs. Let S2 be a domain in I[8n containing the origin and ai, b and f be smooth functions in , n. We consider St, for any i = 1, n ai(x)u-I- b(x)u = f(x) in ft (2.1.3) i=1 Here, ai and b are coefficients of uxi and u, respectively. The function f is called the nonhomogeneous term. If f - 0, (2.1.3) is called a homogeneous equation. We first consider a special case where the initial hypersurface is given by the hyperplane {xn = 0}. For x E Rn, we write x = (x', xn) for x' = x_1) E Rn-1. Let uo be a given smooth function in a neighborhood of the origin in Rn-1. The initial condition (2.1.2) has the form (x1, u(x', 0) = for any x' E Rn-1 sufficiently small. Let u be a smooth solution of (2.1.3) and (2.1.4). In the following, we will investigate whether we can compute all derivatives of u at the origin in terms of the equation and the initial value. It is obvious that we can find all x'-derivatives of u at the origin in terms of those of uo. In particular, we have, for i = 1, , n - 1, u(0) = To find u(0), we need to use the equation. We note that an is the coefficient of uin (2.1.3). If we assume (2.1.5) an(0) L 0, 2. First-Order Differential Equations then by (2.1.3) n-1 u(0) _ - (ai(0)uxi(0) + b(o)u(o) - 1(0)). Z-1 Hence, we can compute all first-order derivatives of u at 0 in terms of the coefficients and the nonhomogeneous term in (2.1.3) and the initial value uo in (2.1.4). In fact, we can compute all derivatives of u of any order at the origin by using uo and differentiating (2.1.3). We illustrate this by finding all the second-order derivatives. We first note that ux2x (0) = uo,x2x (0), for i, j = 1, ,n-i. To find uxkxn for k = 1, with respect to xk to get , n, we differentiate (2.1.3) aiuxix i=1 ai,xk ux2 + buxk + bxk u = fXk' For k = 1, , n - 1, the only unknown expression at the origin is uxk xn , whose coefficient is an. If (2.1.5) holds, we can find uxkxn (0) for k _ 1, , n-1, ,n-i. Fork = n, with uxixn (0) already determined for i = 1, we can find uxnxn (0) similarly. This process can be repeated for derivatives of arbitrary order. In summary, we can find all derivatives of u of any order at the origin under the condition (2.1.5), which will be defined as the noncharacteristic condition later on. More generally, consider a hypersurface given by {co = 0} for a smooth function So in a neighborhood of the origin with V p 0. Assume that is normal to passes through the origin, i.e., p(0) = 0. We note that at each point of . Without loss of generality, we assume 7'xn (0) 0. Then by the implicit function theorem, we can solve So = 0 around x = 0 for xn = b (x 1, , x7_). Consider a change of variables x H y = (xl,... , xn_ , SP(x)) . This is a well-defined transform in a neighborhood of the origin. Its Jacobian matrix J is given by ... ,y) _ a(,... J = a(xl, ... , xn) Hence det J(0) = cpXTh (0) 2.1. Noncharacteristic Hypersurfaces In the following, we denote by L the first-order linear differential operator defined by the left-hand side of (2.1.3), i.e., ai(x)u-I- b(x)u. Lu = By the chain rule, >yk,xuyk. lc=1 We write the operator L in the y-coordinates as Lu = uyk + b(x(y))u. In the y-coordinates, the initial hypersurface E is given by {yn = 0}. With yam, = cp, the coefficient of uy, is given by az Hence, for the initial-value problem (2.1.3) and (2.1.2), we can find all deriva- tives ofuatOEEif 0. a2 Co) z Co) 2=1 is normal to E _ {gyp = 0}. When We recall that Dip = _ {x= 0} or (x) = x7, then Dip = (0,... , 0,1) and a(x). 2=1 This reduces to the special case we discussed earlier. Definition 2.1.2. Let L be a first-order linear differential operator as in (2.1.6) in a neighborhood of xo E RT and E be a smooth hypersurface containing xo. Then is noncharacteristic at xo if where v = (Vi,... , v7) is normal to E at xo. Otherwise, E is characteristic at xo. A hypersurface is noncharacteristic if it is noncharacteristic at every point. Strictly speaking, a hypersurface is characteristic if it is not noncharacteristic, i.e., if it is characteristic at some point. In this book, we will abuse this terminology. When we say a hypersurface is characteristic, we mean it is characteristic everywhere. This should cause few confusions. In 2. First-Order Differential Equations I[82, hypersurfaces are curves, so we shall speak of characteristic curves and noncharacteristic curves. The noncharacteristic condition has a simple geometric interpretation. If we view a = (al,.. , an) as a vector in 1[8n, then condition (2.1.7) holds if and only if a(xo) is not a tangent vector to E at xo. This condition assures that we can compute all derivatives of solutions at xp. It is straightforward to check that (2.1.7) is maintained under C'-changes of coordinates. The discussion leading to Definition 2.1.2 can be easily generalized to first-order quasilinear equations. Let SZ be a domain in W containing the origin as before and a2 and f be smooth functions in SZ x R, for any i = 1, , n. We consider n f(x,u) in ft ai (x, 2=1 Again, we first consider a special case where the initial hypersurface E is given by the hyperplane {xn = 0} and an initial value is given by (2.1.4) for a given smooth function uo in a neighborhood of the origin in I[8n-1. Let u be a smooth solution of (2.1.8) and (2.1.4). Then u( O) = for i = 1, uo,xi (O) , , n - 1, and n-1 - f(Ouo(O))) u(O) = -__________ a(O,uo(O)) an (O,uo(0)) L0. Similar to (2.1.5), this is the noncharacteristic condition for (2.1.8) at the origin if the initial hypersurface E is given by {xn = 0}. In general, let xO be a point in ][8n and E be a smooth hypersurface containing xo. Let uo be a prescribed smooth function on E and ai and f be smooth functions in a neighborhood of (xO, u(xp)) E IISn X IIB, for i = 1, , n. Then for quasilinear PDE (2.1.8), E is noncharacteristic at xO with respect to uo if n az (xo,uo(xo))vz 2=1 where v = (vi,... , vn) is normal to > at xo. 2.1. Noncharacteristic Hypersurfaces There is a significant difference between (2.1.7) for linear PDEs and (2.1.9) for quasilinear PDEs. For linear PDEs, the noncharacteristic condition depends on initial hypersurfaces and equations, specifically, the coefficients of first-order derivatives. For quasilinear PDEs, it also depends on initial values. Next, we turn to general nonlinear partial differential equations as in (2.1.1). Let SZ be a domain in W containing the origin as before and let F be a smooth function in SZ x It x R. Consider F(x,u,Vu)=O inft We ask the same question as for linear equations. Given an initial hypersurface Econtaining the origin and an initial value uo on E, can we compute all derivatives of solutions at the origin? Again, we first consider a special case where the initial hypersurface E is given by the hyperplane {xn = 0} and an initial value is given by (2.1.4) for a given smooth function uo in a neighborhood of the origin in Il8n-1 Example 2.1.3. Consider n 1 i=1 and u(x', 0) = u0(x'). It is obvious that u = x2 is a solution for u0 (x') = x2, i = 1, However, if I V ' u0 (x') l 2 > 1, there are no solutions for such an initial value. In light of Example 2.1.3, we first assume that there exists a smooth function v in a neighborhood of the origin having the given initial value u0 and satisfying F = 0 at the origin, i.e., F(0, v(0), Vv(0)) = 0. Now we can proceed as in the discussion of linear PDEs and ask whether we can find urn at the origin. By the implicit function theorem, this is possible if F(0,v(0),Vv(0)) L 0. This is the noncharacteristic condition for F = 0 at the origin. Now we return to Example 2.1.3. We set F(x, u, p) _ 1p12 - 1 for any p E I[8n. We claim that the noncharacteristic condition holds at 0 with respect to uo if 1. 2. First-Order Differential Equations In fact, let v = uo + cx, for a constant c to be determined. Then IVv(O)12 By choosing = IV'uo(O)I2 + c2. 1- Ivx'uo(o)12 o, v satisfies the equation at x = 0. For such two choices of v, we have F.(0, v(0), Vv(0)) = 2v(0) = 2c This proves the claim. In general, let F = 0 be a first-order nonlinear PDE as in (2.1.1) in a neighborhood of xo E R. Let E be a hypersurface containing xo and uo be a prescribed function on E. Then E is noncharacteristic at xo E E with respect to uo if there exists a function v such that v = uo on E, F(xo, v(xo), Vv(xo)) = 0 and n 0, i=1 where v = (ii1,... , vn) is normal to at xo. 2.2. The Method of Characteristics In this section, we solve initial-value problems for first-order PDEs by the method of characteristics. We demonstrate that solutions of any first-order PDEs with initial values prescribed on noncharacteristic hypersurfaces can be obtained by solving systems of ordinary differential equations (ODEs). Let S 1 C Rn be a domain and F a smooth function in S 1 x R x Ian. The general form of first-order PDEs in S1 is given by F(x,u,Vu)=0 foranyxES1. Let be a smooth hypersurface in Rn with f1 SZ ; 0 and uo be a smooth function on . Then we prescribe an initial value on by u = uo f1 S 1. If S 1 is a domain containing the origin and is noncharacteristic at the origin with respect to uo, then we are able to compute derivatives of u of arbitrary order at the origin by discussions in the previous section. Next, we investigate whether we can solve the initial-value problem at least in a neighborhood of the origin. Throughout this section, we always assume that S 1 is a domain the origin and that the initial hypersurface is given by the hyperplane { xn = 0}. Obviously, { xn = 0} has (0,... , 0,1) as a normal vector field. If 2.2. The Method of Characteristics x E W, we write x = (x', xn), where x' E R-1. Our goal is to solve the initial-value problem F(x, u, emu) = 0, u(x',O) = UO(X). 2.2.1. Linear Homogeneous Equations. We start with first-order linear homogeneous equations. Let ai be smooth in a neighborhood of 0 E I[8n, i = 1, , n, and uo be smooth in a neighborhood of 0 E I [8n-1. Consider (2.2.1) 0, 2=1 u(x', 0) = uo(x'). By introducing a = (ai,''' , an), we simply write the equation in (2.2.1) as a(x) . Vu = 0. Here a(x) is regarded as a vector field in a neighborhood of 0 E R. Then a(x) O is a directional derivative along a(x) at x. In the following, we assume that the hyperplane {xn = 0} is noncharacteristic at the origin, i.e., an(0) L0. Here we assume that a solution u of (2.2.1) exists. Our strategy is as follows. For any x E I[8n close to the origin, we construct a special curve along which u is constant. If such a curve starts from x and intersects ][8n-1 x {0} at (y, 0) for asmall y E I[8n-i, then u(x) = uo(y). To find such acurve x = x(s), we consider the restriction of u to it and obtain aone-variable function u(x(s)). Now we calculate the s-derivative of this function and obtain In order to have a constant value of u along this curve, we require ds (u(x(s))) =0. A simple comparison with the equation in (2.2.1) yields dx2 ds = ai(x) for i = 1, This naturally leads to the following definition. Definition 2.2.1. Let a = a(x) : St - ][8n be a smooth vector field in SZ and x = x(s) be a smooth curve in S2. Then x = x(s) is an integral curve of a if dx = a(x). (2.2.2) ds 2. First-Order Differential Equations The calculation preceding Definition 2.2.1 shows that the solution u of (2.2.1) is constant along integral curves of the coefficient vector field. This yields the following method of solving (2.2.1). For any x E ][near the origin, we find an integral curve of the coefficient vector field through x by solving dx = a(x), ds x(0) = x. If it intersects the hyperplane {x= 0} at (y, 0) for some y sufficiently small, then we let u(x) = uo(y). Since (2.2.3) is an autonomous system (i.e., the independent variable s does not appear explicitly), we may start integral curves from initial hyperplanes. Instead of (2.2.3), we consider the system dx ds = x(0) = (y,O). In (2.2.4), integral curves start from (y, 0). By allowing y E ][8n-1 to vary in a neighborhood of the origin, we expect integral curves x(y, s) to reach any x E IIBn in a neighborhood of the origin for small s. This is confirmed by the following result. Lemma 2.2.2. Let a be a smooth vector field in a neighborhood of the origin with a(0) 74 0. Then for any sufficiently small y E Il8n-1 and any sufficiently small s, the solution x = x(y, s) of (2.2.4) defines a diffeomorphism in a neighborhood of the origin in IIBn. Proof. This follows easily from the implicit function theorem. By standard results in ordinary differential equations, (2.2.4) admits a smooth solution x = x(y, s) for any sufficiently small (y, s) E IIBn-1 X IIg. We treat it as a map (y, s) i- x and calculate its Jacobian matrix J at (y, s) _ (0, 0). By x(y, 0) _ (y, 0), we have ai(0) .Tfnl - ate,-i (0) (y,s)=(0,0) 0 Hence det J(0) = a(0) 74 0. Therefore, for any sufficiently small x, we can solve x(y, s) = x 2.2. The Method of Characteristics uniquely for small y and s. Then u(x) = uo(y) yields a solution of (2.2.1). Note that s is not present in the expression of solutions. Hence the value Figure 2.2.1. Solutions by integral curves. of the solution u(x) depends only on the initial value uo at (y, 0) and, meanwhile, the initial value uo at (y, 0) influences the solution u along the integral curve starting from (y, 0). Therefore, we say the domain of dependence of the solution u(x) on the initial value is represented by the single point (y, 0) and the range of influence of the initial value at a point (y, 0) on solutions consists of the integral curve starting from (y, 0). For n = 2, integral curves are exactly characteristic curves. This can be seen easily by (2.2.2) and Definition 2.1.2. Hence the ODE (2.2.2) is referred to as the characteristic ODE. This term is adopted for arbitrary dimensions. We have demonstrated how to solve homogeneous first-order linear PDEs by using characteristic ODEs. Such a method is called the method of characteristics. Later on, we will develop a similar method to solve general first-order PDEs. We need to emphasize that solutions constructed by the method of characteristics are only local. In other words, they exist only in a neighborhood of the origin. A natural question here is whether there exists a global solution for globally defined a and uo. There are several reasons that local solutions cannot be extended globally. First, u(x) cannot be evaluated at x E I[8n if x is not on an integral curve from the initial hypersurface, or equivalently, the integral curve from x does not intersect the initial hypersurface. Second, u(x) cannot be evaluated at x E Il8n if the integral curve starting from x intersects the initial hypersurface more than once. In this case, we cannot prescribe initial values arbitrarily. They must satisfy a compatibility condition. Example 2.2.3. We discuss the initial-value problem for the equation in Example 2.1.1. We denote by (x, t) points in IE82 and let uo be a smooth 2. First-Order Differential Equations function in 1I8. We consider ut + uX= 0 in Il8 x (0, oo), onTR. It is easy to verify that {t = 0} is noncharacteristic. The characteristic ODE and corresponding initial values are given by dt dx ds t(0) = 0. Here, both x and t are treated as functions of s. Hence x(0) = XO, x=s+xp, t=s. By eliminating s, we have x - t=xo. This is a straight line containing (xo, 0) and with a slope 1. Along this straight line, u is constant. Hence u(x,t) = uo(x - t). We interpreted the fact that u is constant along the straight line x - t = xo in Example 2.1.1. With t as time, the graph of the solution represents a wave propagating to the right with velocity 1 without changing shape. It is clear that u exists globally in ll82. 2.2.2. Quasilinear Equations. Next, we discuss initial-value problems for first-order quasilinear PDEs. Let 1 C Jn be a domain containing the origin and a2 and f be smooth functions in SZ x J. For a given smooth function uo in a neighborhood of 0 E JR 1, we consider a2 (x, u)u= f(x,u), z=1 u(x', 0) = uo(x'). Assume the hyperplane {xn = 0} is noncharacteristic at the origin with respect to uo, i.e., an(0,uo(0)) 0. Suppose (2.2.5) admits a smooth solution u. We first examine integral curves dx = a (x, u), ds x(0) _ (y, 0), Contrary to the case of homogenous linear equations where y E we studied earlier, we are unable to solve this ODE since u, the unknown 2.2. The Method of Characteristics function we intend to find, is present. However, viewing u as a function of s along these curves, we can calculate how u changes. A similar calculation as before yields d `uCx('S))) = ds dx2 2Gxi a2(x, u)uxi = f(x,u). Z=1 Then du = f(x,u), u(0) = uo(y) Hence we have an ordinary differential system for x and u. This leads to the following method for quasilinear PDEs. Consider the ordinary differential system dx = a(x, u), ds du = f(x,u), ds with initial values x(0) _ (y, 0), u(0) = uo(y), where y e I[8"-1. In formulating this system, we treat both x and u as functions of s. This system consists of n + 1 equations for n + 1 functions and is the characteristic ODE of the first-order quasilinear PDE (2.2.5). By solving the characteristic ODE, we have a solution given by x = x(y,s), u = (y,s). As in the proof of Lemma 2.2.2, we can prove that the map (y, s) H x is a diffeomorphism. Hence, for any x e ]E8sufficiently small, there exist unique y e IEBn-1 and s e ]E8 sufficiently small such that Then the solution u at x is given by u(x) = (V' S) We now consider an initial-value problem for a nonhomogeneous linear equation. Example 2.2.4. We denote by (x, t) points in I[82 and let f be a smooth function in ]E8 x (0, oo) and uo be a smooth function in I[8. We consider ut - u = f u(.,0)=uo in I[8 x (0, oo), 2. First-Order Differential Equations It is easy to verify that {t = 0} is noncharacteristic. The characteristic ODE and corresponding initial values are given by dt du dx 1' as as as ds and t(O) = 0, x(O) = XO, u(0) = UO(XO). Here, x, t and u are all treated as functions of s. By solving for x and t first, we have x=xp - s, t =s. Then the equation for u can be written as du f(xo - s, s). ds A simple integration yields - T, T) 2G = 2Gp(xp) + f By substituting xo and s by x and t, we obtain ft u(x,t) = 2Gp(x + t)'+ f(x+t - T,T)CLT. Next, we consider an initial-value problem for a quasilinear equation. Example 2.2.5. We denote by (x, t) points in I182 and let uo be a smooth function in I1t Consider the initial-value problem for Burgers' equation ut -I- uu = 0 0) = uo in It x (0, oo), on I1t It is easy to check that {t = 0} is noncharacteristic with respect to any u0. The characteristic ODE and corresponding initial values are given by dx = u, dt = 1, du = 0, t(0) = 0, u(0) = UO(XO). Here, x, t and u are all treated as functions of s. By solving for t and u first and then for x, we obtain x(O) = XO, X = UO(XO)5 + XO, t = s, u = uo(xo). By eliminating s from the expressions of x and t, we have (2.2.6) x = uo(xo)t -I- xo. 2.2. The Method of Characteristics By the implicit function theorem, we can solve for xo in terms of (x, t) in a neighborhood of the origin in 1E82. If we denote such a function by xo = xo(x,t), then we have a solution u = up(xp(x,t)), for any (x, t) sufficiently small. By eliminating xo and s from the expressions of x, t and u, we may also write the solution u implicitly by u = UO(x - Ut). It is interesting to ask whether such a solution can be extended to ][82. Let coo be the characteristic curve given by (2.2.6). It is a straight line in l[82 with a slope 1/uo(xo), along which u is the constant uo(xo). For xo < xl with uo(xo) > uo(xl), two characteristic curves coo and cal intersect at (X, T) with xo - x1 T=- uo(xo) - Hence, u cannot be extended as a smooth solution up to (X, T), even as a continuous function. Such a positive T always exists unless uo is nondecreasing. In a special case where up is strictly decreasing, any two characteristic curves intersect. xO I Figure 2.2.2. Intersecting characteristic curves. Now we examine a simple case. Let uo (x) = -x. Obviously, this is strictly decreasing. In this case, coo in (2.2.6) is given by x=xo- xot, and the solution on this line is given by u = -xo. We note that each coo contains the point (x, t) _ (0, 1) and hence any two characteristic curves 2. First-Order Differential Equations intersect at (0, 1). Then, u cannot be extended up to (0, 1) as a smooth solution. In fact, we can solve for xo easily to get x xO u(x, t) = t x 1 - t for any (x,t) E ][8 x (0, 1). Clearly, u is not defined at t = 1. In general, smooth solutions of first-order nonlinear PDEs may not exist globally. When two characteristic curves intersect at a positive time T, solutions develop a singularity and the method of characteristics breaks down. A natural question arises whether we can define solutions beyond the time T. We expect that less regular functions, if interpreted appropriately, may serve as solutions. For an illustration, we return to Burgers' equation and employ its divergence structure. We note that Burgers' equation can be written as 2 ut+ (2) =0. This is an example of a scalar conservation law, that is, it is a first-order quasilinear PDE of the form (2.2.7) where F : ut -I-F(u)r = 0 in ll8 x (0, oo), ][8 is a given smooth function. By taking a C1-function cp of compact support in I[8 x (0, oo) and integrating by parts the product of cp and the equation in (2.2.7), we obtain (2.2.8) JRx (o,oo) (ucpt + dxdt = 0. The integration by parts is justified since cp is zero outside a compact set in I[8 x (0, oo). By comparing (2.2.7) and (2.2.8), we note that derivatives are transferred from u in (2.2.7) to cp in (2.2.8). Hence, functions u with no derivatives are allowed in (2.2.8). A locally bounded function u is called an integral solution of (2.2.7) if it satisfies (2.2.8) for any C1-function cp of compact support in ][8 x (0, oo). The function cp in (2.2.8) is often referred to as a test function. In this formulation, discontinuous functions are admitted to be integral solutions. Even for continuous initial values, a discontinuity along a curve, called a shock, may develop for integral solutions. Conservation laws and shock waves are an important subject in PDEs. The brief discussion here serves only as an introduction to this field. It is beyond the scope of this book to give a presentation of conservation laws and shock waves. 2.2. The Method of Characteristics Now we return to our study of initial-value problems of general firstorder PDEs. So far in our discussion, initial values are prescribed on noncharacteristic hypersurfaces. In general, solutions are not expected to exist if initial values are prescribed on characteristic hypersurfaces. We illustrate this by the initial-value problem (2.2.5) for quasilinear equations. Suppose the initial hyperplane {xn = 0} is characteristic at the origin with respect to the initial value uo. Then an(O,uo(O)) = 0. Hence u(0) is absent from the equation in (2.2.5) when evaluated at x = 0. Therefore, (2.2.5) implies n-1 (0,u(0(0) = f(O,uo(O)). (2.2.9) Z=1 This is the compatibility condition for the initial value uo. Even if the origin is the only point where {xn = 0} is characteristic, solutions may not exist in any neighborhood of the origin for initial values satisfying the compatibility condition (2.2.9). Refer to Exercise 2.5. 2.2.3. General Nonlinear Equations. Next, we discuss general firstorder nonlinear PDEs. Let St C Il8n be a domain containing the origin and F = F(x, u, p) be a smooth function in (x, u, p) E St x I[8 x W. Consider (2.2.10) F(x, u, Du) = 0, for any x e St, and prescribe an initial value on {xn = 0} by (2.2.11) u(x', 0) = wo(x ), for any x' with (x', 0) E S2. Assume there is a scalar ao such that F(0,uo(0),V'uo(0),ao) = 0. The noncharacteristic condition with respect to uo and ao is given by (2.2.12) Fp(0, uo(0), ao) # 0. By (2.2.12) and the implicit function theorem, there exists a smooth function such that a(0) = ao and a(x') in a neighborhood of the origin in (2.2.13) F(x', 0, uo(x'), a(x')) = 0, for any x' E Il8n-1 sufficiently small. In the following, we will seek a solution of (2.2.10)-(2.2.11) and urn (x', O) = for any x' small. 2. First-Order Differential Equations We start with a formal consideration. Suppose we have a smooth solution u. Set (2.2.14) p2 = uxi for i = 1, , n. Then (2.2.15) , xn, u, p1, ... , pn) = 0. Differentiating (2.2.15) with respect to x2, we have n Fpj pj,xi + Fxi + Fuuxi = 0 for i = 1,... , n. j=1 we obtain n F p2 xj = -Fx i - Faux i for i = 1, , n. We view (2.2.16) as a first-order quasilinear equation for p2, for each fixed i = 1, , n. An important feature here is that the coefficient for p3 is Fpj , which is independent of i. For each fixed i = 1, , n, the characteristic ODE associated with (2.2.16) is given by ds dp2 - Fup2 - Fxi . We also have du ds n ux j j=1 dx j pj F3. j=1 Now we collect ordinary differential equations for x j , u and p2 . The characteristic ODE for the first-order nonlinear PDE (2.2.10) is the ordinary differential system dxj ds dp2 (2.2.17) ds du ds - Fpj (x, u, p) for j = 1, , n, = - Fu (x, u, p)p2 - Fxi (x, u, p) n pj Fpj (x, u, p) j=1 for i = 1, , n, 2.2. The Method of Characteristics With initial values at s = 0, x(0) _ (y, 0), u(0) = uo(y), u ( 0 p(O) = where y e ]R-1 is the initial value as in (2.2.11) and a is the function chosen to satisfy (2.2.13). This is an ordinary differential system of 2n + 1 equations for the 2n + 1 functions x, u and p. Here we view x, u and p as functions of s. Compare this with a similar ordinary differential system of n + 1 equations for n -I-1 functions x and u for first-order quasilinear PDEs. Solving (2.2.17) with (2.2.18) near (y,s) _ (0, 0), we have x = x(y, S), u_ P(y,S), p = p(y,S), for any y and s sufficiently small. We will prove that the map (y, s) F- x is a difFeomorphism near the origin in W. Hence for any given x near the origin, there exist unique y e Rn-1 and s E IE8 such that x = x(). Then we define u by mi(x) _ 2.2.6. The function u defined above is a solution of (2.2.10)(2.2.11). We should note that this solution u depends on the choice of the scalar ao and the function a(xe). Proof. The proof consists of several steps. Step 1. The map (y, s) H x is a diffeomorphism near (0, 0). This is proved as in the proof of Lemma 2.2.2. In fact, the Jacobian matrix of the map (y, s) H x at (0,0) is given by ax D(y,s) y=0,s=0 Where 0) = F(0, u(0), V'uo(0), ao). 0 by the noncharacteristic condition (2.2.12). By the Hence det J(0) implicit function theorem, for any x e 1[8n sufficiently small, we can solve 2. First-Order Differential Equations x = x(y, s) uniquely for y E I[8"-1 and s E ll8 sufficiently small. Then define u(x) = o(y, s). We will prove that this is the desired solution and pz(y, s) _ Uxi(x(y, s)) for i = , n. Step 2. We claim that F'(x(y,S),(P(y,S),p(y,S)) = 0, for any y and s sufficiently small. Denote by f(s) the function in the lefthand side. Then by (2.2.18) f(0) = F(y,O,uo(y),Vxiuo(y),a(y)) = 0. Next, we have by (2.2.17) df (S) ds du dpj + Fu ds + :i: ds ._1 Fps ds dx2 Fx2 Fp j (- Fupj - F'x j) = 0. pj Fp j + Fx2 Fpi + Fu Hence f(s) .= 0. Step 3. We claim that pz (y, s) = ux2 (x (y, s)) fori = for any y and s sufficiently small. Let 1,... ,rt. for i = w2 (s) = ux2 (x(y, s)) - pz (y, s) We will prove that w2 (s) = 0 for any s small and i = 1, , n. We first evaluate w2 at s = 0. By initial values (2.2.18), we have w2(0) = 0 for i = 1, , n - 1. Next, we note that, by (2.2.17), du (2.2.19) 0 = ds or pjFp, = j=1 ux j=1 ' ds - pj Fp Fps (ux j - pj) , j=1 n O. This implies w(0) = 0 since wi(0) = 0 for i = 1, by the noncharacteristic condition (2.2.12). , n -1, and Fp, =0 2.2. The Method of Characteristics Next, we claim that d is a linear combination of wj, j = 1, dwi , n, i.e., for i = 1, , n, for some functions a3, i, j = 1, , n. Then basic ODE theory implies wi - 0 for i = 1, , n. To prove the claim, we first note that, by (2.2.17), dwi ds uxixj .-1 dx j uxixj Fpj + Fupi + Fxz To eliminate the second-order derivatives of u, we differentiate (2.2.19) with respect to xi and get n pjxi) + Fpj (uxixj (Fpj )xiwj = 0. j=1 A simple substitution implies dwi ds p xZ p7 x2 u z By Step 2, F(x,u(x),pi(x),... ,pn(X)) =0. Differentiating with respect to xi, we have n Fxi + Fuuxi + Fpj pj,xi = 0. j=1 Hence dwi _ - Fx2 - FuuxZ - (Fpj )xz w j + Fupi + Fxi _ - Fuwi - (Fpj )xiwj j=1 or dwi ds (FuSZ + (F))w. j=1 This ends the proof of Step 3. Step 2 and Step 3 imply that u is the desired solution. To end this section, we briefly compare methods we used to solve firstorder linear or quasi-linear PDEs and general first-order nonlinear PDEs. In solving a first-order quasi-linear PDE, we formulate an ordinary differential 2. First-Order Differential Equations system of n + 1 equations for n + 1 functions x and u. For a general firstorder nonlinear PDE, the corresponding ordinary differential system consists of 2n + 1 equations for 2n +1 functions x, u and Du. Here, we need to take into account the gradient of u by adding n more equations for Du. In other words, we regard our first-order nonlinear PDE as a relation for (u, p) with a constraint p = Du. We should emphasize that this is a unique feature for single first-order PDEs. For PDEs of higher order or for first-order partial differential systems, nonlinear equations are dramatically different from linear equations. In the rest of the book, we concentrate only on linear equations. 2.3. A Priori Estimates A priori estimates play a fundamental role in PDEs. Usually, they are the starting point for establishing existence and regularity of solutions. To derive a priori estimates, we first assume that solutions already exist and then estimate certain norms of solutions by those of known functions in equations, for example, nonhomogeneous terms, coefficients and initial values. Two frequently used norms are L°°-norms and L2-norms. The importance of L2norm estimates lies in the Hilbert space structure of the L2-space. Once L2estimates of solutions and their derivatives have been derived, we can employ standard results about Hilbert spaces, for example, the Riesz representation theorem, to establish the existence of solutions. In this section, we will use first-order linear PDEs to demonstrate how to derive a priori estimates in L°°-norms and L2-norms. We first examine briefly first-order linear ordinary differential equations. Let /3 be a constant and f = f(t) be a continuous function. Consider du A simple calculation shows that u(t) = eatu(0) + et-3> f (s) ds. For any fixed T > 0, we have I< et (1uo1 + T sup If I for any t E (0, T). Here, we estimate the sup-norm of u in [0, T] by the initial value u(0) and the sup-norm of the nonhomogeneous term f in [O, T]. Now we turn to PDEs. For convenience, we work in I[8 x (0, oo) and denote points by (x, t), with x E ][8n and t E (0, oo). In many applications, we interpret x as the space variable and t the time variable. 2.3. A Priori Estimates 2.3.1. L°°-Estimates. Let ai, b and f be continuous functions in Tl [0, oo) and uo be a continuous function in W. We assume that a = an) satisfies IaI < 1 in I[8n x [0, oo), for a positive constant K. Consider n ut + (2.3.2) in I[8n x (0, oo), 2 (x, O) = Up(x) in W. It is obvious that the initial hypersurface {t = 0} is noncharacteristic. We may write the equation in (2.3.2) as ut + a(x, t) Vxu + b(x, t)u = f(x, t). We note that a(x, t) V + 8t is a directional derivative along the direction (a(x, t),1). With (2.3.1), it is easy to see that the vector (a(x, t),1) (starting from the origin) is in fact in the cone given by {(y,s): i'y s} c W x R. This is a cone opening upward and with vertex at the origin. Figure 2.3.1. The cone with the vertex at the origin. For any point P = (X, T) E ]E8n x (0, oo), consider the cone Ck(P) (opening downward) with vertex at P defined by Ck(P) _ {(x, t) : 0 < t < T, IcIx - X I< T - t}. We denote by 83Ck(P) and 8_Ck(P) the side and bottom of the boundary, respectively, i.e., 83Ck(P) _ {(x,t): 0< t 2. First-Order Differential Equations We note that 8_Ck(P) is simply the closed ball in ]E8' x {0} centered at (X,O) with radius T/rc. For any (x,t) E 83Ck(P), let a(x, t) be a vector in I[8" satisfying (2.3.1). Then the vector (a(x, t),1), if positioned at (x, t), points outward from the cone Ck(P) or along the boundary 83Ck(P). Hence for a function defined only in Ck (P), it makes sense to calculate ut + a Vu at (x, t), which is viewed as a directional derivative of u along (a(x, t),1) at (x, t). This holds in particular when (x, t) is the vertex P. Figure 2.3.2. The cone Ck (P) and positions of vectors. Now we calculate the unit outward normal vector of 83Ck(P) \ {P}. Set o(x,t)= klx-XI-(T-t). Obviously, 83Ck(P) \ {P} is a part of {cp = 0}. Then for any (x, t) E aSCk(P) \ {P}, = (V, t) = (k x-X i Xv i). Therefore, the unit outward normal vector v of 83Ck(P) \ {P} at (x, t) is given by Ix-XI' For n = 1, the cone Ck(P) is a triangle bounded by the straight lines X) = T - t and t = 0. The side of the cone consists of two line U the left segment: - ic(x - X) = T - t, 0 < t < T, with a normal vector (-ic, 1), the right segment: ic(x - X) = T - t, 0 < t < T, with a normal vector (ic, 1). 2.3. A Priori Estimates It is easy to see that the integral curve associated with (2.3.2) starting from P and going to the initial hypersurface I[8" x {0} stays in Ck(P). In fact, this is true for any point (x, t) E Ck(P). This suggests that solutions in Ck(P) should depend only on f in Ck(P) and the initial value uo on 8_Ck(P). The following result, proved by a maximum principle type argument, confirms this. Figure 2.3.3. The domain of dependence. Theorem 2.3.1. Let ai, b and f be continuous functions in I[8" x [0, oo) satisfying (2.3.1) and ua be a continuous function in R. Suppose u E Cl(I[8" x (0,oo)) fl C(][87 ` x [0,oo)) is a solution of (2.3.2). Then for any P = (X,T) E lI8" x (0,oo), sup IuoI +T sup Ie_tf I, where Q is a nonnegative constant such that > 0, we take ,Q = 0 and have sup ui < sup Iuoi + T sup if I. Proof. Take any positive number Q' > /3 and set M = sup a_ c,c(P) F = sup C,( P) We will prove ie_'tu(x,t)I < M + tF for any (x,t) E Ck(P). For the upper bound, we consider w(x, t) = t) - M - tF. 2. First-Order Differential Equations A simple calculation shows that n +(b + Q')w = -(b +fi') (M +tF) +e-Q't f - F. wt + 2=1 Since b + /3' > 0, the right-hand side is nonpositive by the definition of M and F. Hence (b +,8')w < 0 in Ck(P). wt + a Let w attain its maximum in Ck(P) at (xO, to) E Ck(P). We prove w(xo,to) First, it is obvious if (xO, to) E 8_Ck(P), since w(xo, to) = uo(xo)-M < 0 by the definition of M. If (xO, to) E Ck(P), i.e., (xO, to) is an interior maximum point, then (Wt + a Vxw)I(0,t0) = 0. If (xO, to) E 83Ck(P), by the position of the vector (a(xo, to), 1) relative to the cone Ck(P), we can take the directional derivative along (a(xo, to), 1), obtaining (Wt + a Vxw)l(0,t0) 0. Hence, in both cases, we obtain (b + fi')wI(0,t0) Since b + /3' > 0, this implies w(xo, to) < 0. (We need the positivity of b + /3' here!) Hence w(xo, to) < 0 in all three cases. Therefore, w < 0 in Ck(P), or tF) for any (x, t) E Ck(P). u(x, t) < We simply let /3' -+ ,6 to get the desired upper bound. For the lower bound, we consider v(x, t) = e-Q'tu(x, t) + M + tF. The argument is similar and hence omitted. For n = 1, (2.3.2) has the form ut + a(x, t)u + b(x, t)u = f(x, t). In this case, it is straightforward to see that (wt + aw') I (xo,to) ? 0, if w assumes its maximum at (xO, to) E DSCK (P) . To prove this, we first note that at +a and at - ate are directional derivatives along the straight lines t - to = ic(x - xo) and t - to = -ic(x - xo), respectively. Since w assumes its maximum at (xO, to), we have 1 > / - 0, (wt--w)I 1 > 0. (xo,to) 2.3. A Priori Estimates In fact, one of them is zero if (xO, to) E {P}. Then we obtain wt(xo,to) ? HwxI(xo,to) ? Iawxl(xo,to) One consequence of Theorem 2.3.1 is the uniqueness of solutions of (2.3.2). Corollary 2.3.2. Let a2, b and f be continuous functions in I[8n x [0, oo) satisfying (2.3.1) and uo be a continuous function in W. Then there exists at most one Cl(Il8" x (0, oo)) fl C(W1 x [0, oo)) -solution of (2.3.2). Proof. Let ul and u2 be two solutions of (2.3.2). Then ul - u2 satisfies (2.3.2) with f = 0 in Ck(P) and no = 0 on 8_Ck(P). Hence ul - u2 = 0 in Ck(P) by Theorem 2.3.1. Another consequence of Theorem 2.3.1 is the continuous dependence of solutions on initial values and nonhomogeneous terms. Corollary 2.3.3. Let a2, b, fl, f2 be continuous functions in I[8n x [0, oo) satisfying (2.3.1) and uol, u02 be continuous functions in W1. Suppose ul, u2 E Cl(I[8n x (0, oo)) flC(W1 x [0, oo)) are solutions of (2.3.2), with fl, f2 replac- ing f and u01iuo2 replacing uo, respectively. Then for any P = (X, T) E R x (0,oo), sup l- u2)I C sup a-ck(p) Iuoi - u021 + T sup l- f2)I, Ck(P) where 3 is a nonnegative constant such that b>-3 The proof is similar to that of Corollary 2.3.2 and is omitted. Theorem 2.3.1 also shows that the value u(P) depends only on f in C,c(P) and uo on 8_Ck(P). Hence contains the domain of dependence of u(P) on f, and 8_Ck(P) contains the domain of dependence of u(P) on uo. In fact, the domain of dependence of u(P) on f is the integral curve through P in and the domain of dependence of u(P) on uo is the intercept of this integral curve with the initial hyperplane {t = 0}. We now consider this from another point of view. For simplicity, we assume that f is identically zero in ][8n x (0, oo) and the initial value no at t = 0 is zero outside a bounded domain Do C W. Then for any t> 0, t) = 0 outside Dt = {(x, t) : r'CIx - xol In other words, uo influences u only in U{t>o} D. This is the finite-speed propagation. 2. First-Order Differential Equations n Figure 2.3.4. The range of influence. 2.3.2. L2-Estimates. Next, we derive an estimate of the L2-norm of u in terms of the L2-norms of f and uo. Theorem 2.3.4. Let ai be C1 functions in Ian x [0, oo) satisfying (2.3.1), b and f be continuous functions in Ian x [0, oo) and uo be a continuous function in Rn . Suppose u E C1(Rn x (0, oo)) n C(RT x [0, oo)) is a solution of (2.3.2). Then for any P = (X,T) E Ian x (O,oo), uo dx + f e-atu2 dxdt < e-"t f 2 dxdt, , (p) , (p) where a is a positive constant depending only on the C1 -norms of ai and the sup-norm of b in Ck(P). Proof. For a nonnegative constant a to be determined, we multiply the equation in (2.3.2) by 2e-«tu. In view of 2e-«tuut = (e-«tu2)t + ae-«tu2, 2aie-«tuuxZ = (e_ataju2)xZ - e-«tai,xZu2, we have n (e-«tu2)t + (e_ataju2)xZ + e-«t a +2b - n ai xZ u2 = 2e-«tuf An integration in Ck (P) yields e-«t sc,(P) vt + ai vi u2 dS -}- L uo dx + a -}- 2b - u2 dxdt 2e-«tu f dxdt, 2.3. A Priori Estimates where the unit exterior normal vector on 83Ck(P) is given by (lJx,lJt)=(lJ1.,lJn,lJt)=j2 (2.3.1) and the Cauchy inequality, we have IIk2 < vt, \/1+ and hence vt + ai v2 > 0 on DSCk (P) . Next, we choose a such that n a + 2b - > 2 in Ck (P) . i=1 Then e-atu2 dxdt < 2C ,c (P) uo dx +- f dxdt. ,c (P) Here we simply dropped the integral over DSCk (P) since it is nonnegative. The Cauchy inequality implies e-atu2 dxdt +- 2e-atu f dxdt < C,c(P) We then have the desired result. e-at f2 dxdt. CA(P) The proof illustrates a typical method of deriving L2-estimates. We multiply the equation by its solution u and rewrite the product as a linear combination of u2 and its derivatives. Upon integrating by parts, domain integrals of derivatives are reduced to boundary integrals. Hence, the resulting integral identity consists of domain integrals and boundary integrals of u2 itself. Derivatives of u are eliminated. We note that the estimate in Theorem 2.3.4 is similar to that in Theorem 2.3.1, with the L2-norms replacing the L°°-norms. As consequences of Theorem 2.3.4, we have the uniqueness of solutions of (2.3.2) and the continuous dependence of solutions on initial values and nonhomogeneous terms in L2-norms. We can also discuss domains of dependence and ranges of influence using Theorem 2.3.4. We now derive an L2-estimate of solutions in the entire space. Theorem 2.3.5. Let ai be bounded C1 functions, b and f be continuous functions in Rn x [0, oo) and uo be a continuous function in W. Suppose 2. First-Order Differential Equations U E Cl(][8n x (O,oo)) fl C(][8n x [0,oo)) is a solution of (2.3.2). For any T> 0, if f E L2(][81 x (0, T)) and uo E L2(I[8n), then e-«tu2 dx -Ix{T} e-«tu2 dxdt x (O,T) e-«t f2 dxdt, uo dx -IR x (O,T) where a is a positive constant depending only on the C1 -norms of aZ and the sip-norm of b in ][8n x (0,T). Proof. We first take ic> 0 such that (2.3.1) holds. Take any t > T and Figure 2.3.5. A domain of integration. D(t) = {(x, t) : iclxl = {(x,0): iclxl We now proceed as in the proof of Theorem 2.3.4, with D() replacing Ck(P). We note that there is an extra portion 8+D() in the boundary 8D(t'). A 2.3. A Priori Estimates similar integration over D() yields e-«tu2 dx + e-atu2 dxdt up dx + e-at f2 dxdt. We note that t enters this estimate only through the domain D(). Hence, we may let t -+ oo to get the desired result. We point out that there are no decay assumptions on u as x -+ oo in Theorem 2.3.5. 2.3.3. Weak Solutions. Anyone beginning to study PDEs might well ask, what a priori estimates are good for. One consequence is of course the uniqueness of solutions, as shown in Corollary 2.3.2. In fact, one of the most important applications of Theorem 2.3.5 is to prove the existence of a weak solution of the initial-value problem (2.3.2). We illustrate this with the homogeneous initial value, i.e., uo = 0. To introduce the notion of a weak solution, we fix a T> 0 and consider functions in Rn x (0, T). Set n Lu = ut + in Rn x (0, T). + bu i=1 Obviously, L is a linear differential operator defined in Cl(]E8n x (0, T)). For any u, v e Cl (][8n x (O, T)) f1 C(][8" x [O, T]), we integrate vLu in II8n x (O, T). To this end, we write n vLu=ul -vt- + (uv)t + (aiuv)x. i=1 z= i=1 This identity naturally leads to an introduction of the adjoint differential operator L* of L defined by n L*v = -yt - by = -yt i=1 Then n vLu = uL*v + (uV)t + i=1 ai,xi i=1 2. First-Order Differential Equations We now require that u and v vanish for large x. Then by a simple integration in Ian x (0, T), we obtain vLu dxdt = IISn x (O,T) uL*v dxdt IISn x (O,T) uv dx - / uv dx. We note that derivatives are transferred from u in the left-hand side to v in the right-hand side. Integrals over {t = 0} and {t = T} will disappear if we require, in addition, that uv = 0 on {t = 0} and {t = T}. Definition 2.3.6. Let f and u be functions in L2(][8x (O, T)). Then u is a weak solution of Lu =fin ]E8n x (O, T) if (2.3.4) uL*v dxdt = x (O,T) JIItT x (O,T) for any Cl-functions v of compact support in I[8n x (O, T). The functions v in (2.3.4) are called test functions. It is worth restating that no derivatives of u are involved. Now we are ready to prove the existence of weak solutions of (2.3.3) with homogeneous initial values. The proof requires the Hahn-Banach theorem and the Riesz representation theorem in functional analysis. Theorem 2.3.7. Let ai be bounded C1 -functions in ][8n x (0, T), i = 1, , n, and b a bounded continuous function in IISn x (O, T). Then for any f e L2 (]E8' x (0, T) ), there exists a u e L2 (]E8n x (0, T)) such that Jllt x (O,T) uL*v dxdt = JII8n x (O,T) for any v e Cl (][8n x (0, T)) fl C(I[8Th x [0, T]) with v(x, t) = 0 for any (x, t) with large x and any (x, t) _ (x, T). The function u in Theorem 2.3.7 is called a weak solution of the initialvalue problem Lu = f in W1 x (0, T), (2.3.5) u=0 onIfn. We note that test functions v in Theorem 2.3.7 are not required to vanish on {t = 0}. To prove Theorem 2.3.7, we first introduce some notation. We denote by Co (I[8' x (0, T)) the collection of Cl-functions in ][8n x (0, T) with compact support, and we denote by C( W x (O, T)) the collection of Cl-functions in ]E8n x (0, T) with compact support in x-directions. In other words, functions 2.3. A Priori Estimates in C( W x (O, T)) vanish for large x and for t close to 0 and T, and functions in C( R x (O, T)) vanish only for large x. We note that, with L in (2.3.3), we can rewrite the estimate in Theorem 2.3.5 as IIUIIL2(unx(o,T)) C'(Ilu(', 0)IIL2(II$n) + where C is a positive constant depending only on T, the C'-norms of ai and the sup-norm of b in I[8n x (O, T). This holds for any u e Cl(I[8n x (O, T)) f1 C(IEBn x [O, T)) with Lu E L2(][8n x (0, T)) and L2(IEBn). In particular, we have (2.3.6) 0) E for any u e Co (][8n x (O, T)) fl C(I[8n x [O, T)) with u = 0 on {t = 0}. Proof of Theorem 2.3.7. In the following, we denote by the L2-inner product in I[8n x (0, T). Now L* is like L, but the terms involving derivatives have opposite signs. When we consider an initial-value problem for L* in IEBn x (O, T), we view {t = T} as the initial hyperplane for the domain IEBn x (O, T). Thus (2.3.6) also holds for L*, and we obtain (2.3.7) IIVIIL2(nx(O,T)) < for any v e C( R x (O, T)) fl C(I[8n x (O, T]) with v = 0 on {t = T}, where C is a positive constant depending only on T, the C1-norms of ai and the sup-norm of b in W x (O, T). We denote by Cl(][8n x (O, T)) the collection of functions v e C( W x (O, T)) f1 C(I[89 x (O, T]) with v = 0 on {t = T}. Consider the linear functional F : L*Cl(IEBn x (0, T)) -+ ][8 given by F(L*v) = for any v E C1 (W x (0, T)). We note that F acting on L*v in the left-hand side is defined in terms of v itself in the right-hand side. Hence we need to verify that such a definition makes sense. In other words, we need to prove that L*vl = L*v2 implies (f, vl) L2 (fin x (O,T)) (f, v2) L2 (fin x (O,T) )' for any v, v2 E C1(RTh x (0, T)). By linearity, it suffices to prove that L*v = 0 implies v = 0 for any v e Cl (][8n x (0, T) ). We note that it is a consequence of (2.3.7). Hence, F is awell-defined linear functional on L*Cl(IEBn x (0, T)). Moreover, by the Cauchy inequality and (2.3.7) again, we have IF(L*v)I C If IIL2(TEThx(O,T))IIvIIL2(Ilx(O,T)) CII.fIIL2(TEnx(O,T))IIL*vIIL2(TE7 x(O,T))' 2. First-Order Differential Equations for any v E Cl (][8n x (O, T)). Therefore, F is awell-defined bounded linear functional on the subspace L*Cl(ll8n x (O, T)) of L2(1[8x (0, T)). Thus we apply the Hahn-Banach theorem to obtain a bounded linear extension of F (also denoted by F) defined on L2(1l8n x (O, T)) such that CII,fIIL2(Rnx(O,T)) Here, IF II is the norm of the linear functional F on L2(1l8n x (0, T)). By the Riesz representation theorem, there exists a u E L2(1[8x (O, T)) such that for any w E L2(1[8n x (0, T)), F(w) _ (u, w)L2(RnX(OT)) and IIUIIL2(Rnx(OT)) = IIFII. In particular, we have F(L*v) = (u, L*v)L2(Rn x (O,T)) for any v E al(IRTh x (0, T)), and hence by the definition of F, (u, L*v)L2(Rn x (OAT)) = (f, v)L2(fi n x (O,T))' Then u is the desired function. Theorem 2.3.7 asserts the existence of a weak solution of (2.3.5). Now we show that the weak solution u is a classical solution if u is Cl in 1[8n x (0, T) and continuous up to {t = 0}. Under these extra assumptions on u, we integrate uL*v by parts to get uv dx = vLu dxdt +- fv dxdt, Rn x {t=0} Rn x (O,T) Rn x (O,T) for any v E Co (x (0, T)) f1 C (JRn x [0, T]) with v = 0 on {t = T }. There are no boundary integrals on the vertical sides and on the upper side since v vanishes there. In particular, J1R'x (O,T) vLu dxdt = JIl8 x (O,T) fv dxdt, for any v E Co (1[8'x (O, T)). Since Co (I[8n x (O, T)) is dense in L2(118n x (O, T)), we conclude that Lu = f in 1[8n x (O, T). Therefore, fRx{t=O} uv dx = 0, for any v E Co (I[8x (0, T)) fl C(][8Th x [0, T]) with v = 0 on {t = T}. This implies 0)cp dx = 0 for any cp E Co (][8n) 2.4. Exercises Again by the density of Co (W1) in L2 (R ), we conclude that 0) = 0 on Rn. We note that a crucial step in passing from weak solutions to classical solutions is to improve the regularity of weak solutions. Now we summarize the process of establishing solutions by using a priori estimates in the following four steps: Step 1. Prove a priori estimates. Step 2. Prove the existence of a weak solution by methods of functional analysis. Step 3. Improve the regularity of a weak solution. Step 4. Prove that a weak solution with sufficient regularity is a classical solution. In discussions above, we carried out Steps 1, 2 and 4. Now we make several remarks on Steps 3 and 4. We recall that in Step 4 we proved that weak solutions with continuous derivatives are classical solutions. The requirement of continuity of derivatives can be weakened. It suffices to assume that u has derivatives in the L2-sense and to verify that the integration by parts can be performed. Then we can conclude that Lu = f almost everywhere. Because of this relaxed regularity requirement, we need only prove that weak solutions possess derivatives in the L2-sense in Step 3. The proof is closely related to a priori estimates of derivatives of solutions. The brief discussion here suggests the necessity of introducing new spaces of functions, functions with derivatives in L2. These are the Sobolev spaces, which play a fundamental role in PDEs. In subsequent chapters, Sobolve spaces will come up for different classes of equations. We should point out that Sobolev spaces and weak solutions are not among the main topics in this book. Their appearance in this book serves only as an illustration of their importance. 2.4. Exercises Exercise 2.1. Find solutions of the following initial-value problems in ][82: (1) guy - u + xu = 0 with u(x, 0) = 2xeX2/2; (2) uy + (1+ x2)u - u = 0 with u(x, 0) = arctan x. Exercise 2.2. Solve the following initial-value problems: (1) uy + u = u2 with u(x, 0) = h(x); (2) uz +xu -F yuy =u with u(x, y, 0) = h(x, y). 2. First-Order Differential Equations Exercise 2.3. Let Bl be the unit disc in ][82 and a and b be continuous functions in Bl with a(x, y)x + b(x, y)y > 0 on 8B1. Assume u is a C1solution of a(x, y)ux +b(x, y)uy = -u in Bl. Prove that u vanishes identically. Exercise 2.4. Find a smooth function a = a(x, y) in ][82 such that, for the equation of the form + a(x, y)Ux = 0, there does not exist any solution in the entire ][82 for any nonconstant initial value prescribed on {y = 0}. Exercise 2.5. Let a be a number and h = h(x) be a continuous function in ][8. Consider yux + xuy = au, u(x, 0) = h(x). (1) Find all points on {y = 0} where {y = 0} is characteristic. What is the compatibility condition on h at these points? (2) Away from the points in (1), find the solution of the initial-value problem. What is the domain of this solution in general? (3) For the cases h(x) = x, a = 1 and h(x) = x, a = 3, check whether this solution can be extended over the points in (1). (4) For each point in (1), find all characteristic curves containing it. What is the relation of these curves and the domain in (2)? Exercise 2.6. Let a E ][8 be a real number and h = h(x) be continuous in ][8 and Cl in ][8 \ {0}. Consider xux + yuy = au, u(x,0) = h(x). (1) Check that the straight line {y = 0} is characteristic at each point. (2) Find all h satisfying the compatibility condition on {y = 0}. (Con- sider three cases, a> 0, a = 0 and a < 0.) (3) For a > 0, find two solutions with the given initial value on {y = 0}. (This is easy to do simply by inspecting the equation, especially for a=2.) Exercise 2.7. In the plane, solve uy = 4u near the origin with u(x, 0) = x2 on {y = 0}. 2.4. Exercises Exercise 2.8. In the plane, find two solutions of the initial-value problem xux + yuy + u(x,O) = 12 + uy) = u, 2 (1 - x2). Exercise 2.9. In the plane, find two solutions of the initial-value problem 4u + uuy = u, U I x, 1x2 _ -12 Exercise 2.10. Let az, b and f be continuous functions satisfying (2.3.1) and u be a C'-solution of (2.3.2) in ][8Th x [0, oo). Prove that, for any P = (X,T) e RTh x (0,oo), sup le-«tul < Iuol + sup Ie_Qtf i, a inf b in ][8n x [0, oo) satisfying (2.3.1) and uo be a Cl-function in ][8n. Suppose u is a CZ-solution of (2.3.2) in Ilgn x [0, oo). Prove that, for any P = (X, T) E ][8n x (0, oo), hlC1(C,c(P)) _< C(I uoI cl(a_ck(P)) + If lC'(C,c(P))), where C is a positive constant depending only on T and the C'-norms of a and b in Ck(P). Exercise 2.12. Let a be a C'-function in ][8 x [0, oo) satisfying la(x,t)l and let bz be continuous in ][8 x [0, oo), for i, j = 1, 2. Suppose (u, v) is a Cl-solution in ][8 x (0, oo) of the first-order differential system ut - a(x,t)v + bll(x,t)u + b12(x,t)v = fi(x, t), vt - a(x,t)u + b21(x,t)U + b22(x,t)v = f2(X, t), _ po(x), v(x, o) = vo(x). Derive an LZ-estimate of (u, v) in appropriate cones. U(x, o) Chapter 3 An Overview of Second-Order PDEs This chapter should be considered as an introduction to second-order linear PDEs. In Section 3.1, we introduce the notion of noncharacteristics for initialvalue problems. We proceed here for second-order linear PDEs as we did for first-order linear PDEs in Section 2.1. We show that we can compute all derivatives of solutions on initial hypersurfaces if initial values are prescribed on noncharacteristic initial hypersurfaces. We also introduce the Laplace equation, the heat equation and the wave equation, as well as their general forms, elliptic equations, parabolic equations and hyperbolic equations. In Section 3.2, we discuss boundary-value problems for the Laplace equation and initial/boundary-value problems for the heat equation and the wave equation. Our main tool is a priori estimates. For homogeneous boundary values, we derive estimates of L2-norms of solutions in terms of those of nonhomogeneous terms and initial values. These estimates yield uniqueness of solutions and continuous dependence of solutions on nonhomogeneous terms and initial values. In Section 3.3, we use separation of variables to solve Dirichlet problems for the Laplace equation in the unit disc in R2 and initial/boundary-value problems for the 1-dimensional heat equation and the 1-dimensional wave equation. We derive explicit expressions of solutions in Fourier series and discuss the regularity of these solutions. Our main focus in this section is to demonstrate different regularity patterns for solutions. Indeed, a solution of the heat equation is smooth for all t > 0 regardless of the regularity of 3. An Overview of Second-Order PDEs its initial values, while the regularity of a solution of the wave equation is similar to the regularity of its initial values. Such a difference in regularity suggests that different methods are needed to study these two equations. 3.1. Classifications The main focus in this section is the second-order linear PDEs. We proceed as in Section 2.1. bi and c be Let S1 be a domain in W containing the origin and continuous functions in S1, for i, j = 1, , n. Consider a second-order linear differential operator L defined by n Lu = aij i,j=1 in SZ. u, respectively. We usually Here a3, bi, c are called coefficients of assume , n. Hence, (a) is a symmetric matrix for any i, j = 1, = in S1. For the operator L, we define its principal symbol by n for any x E St and E l[8n. Let f be a continuous function in S2. We consider the equation (3.1.2) Lu = f (x) in St. The function f is called the nonhomogeneous term of the equation. Let E be the hyperplane {x= 0}. We now prescribe values of u and its normal derivative on E so that we can at least find all derivatives of u at the origin. Let uo, ul be functions defined in a neighborhood of the origin in we prescribe (3.1.3) u(x',O) = uo(x ), u(x", 0) = ui(x ), for any x' E Rn-1 small. We call the initial hypersurface and no, u1 the initial values or Cauchy values. The problem of solving (3.1.2) together with (3.1.3) is called the initial-value problem or the Cauchy problem. Let u be a C2-solution of (3.1.2) and (3.1.3) in a neighborhood of the origin. In the following, we will investigate whether we can compute all derivatives of u at the origin in terms of the equation and initial values. It is obvious that we can find all x'-derivatives of u and urn at the origin in terms of those of uo and u1. In particular, we can find all first derivatives at the origin in terms of appropriate and all second derivatives, except 3.1. Classifications derivatives of uo and ul. In fact, uxi (0) = uo,xi (0) for i = 1,... ,n - 1, urn (0) = ul (0) and uxzx (0) = uo,xZx (0) for i, j uxnxn (0) = ul,xi (0) for i = 1, = 1,... ,n - 1, ,n - 1. To compute uxnxn (0), we need to use the equation. We note that ann is the coefficient of uxnxn in (3.1.2). If we assume a(0) # 0, then by (3.1.2) 1 uxnxn (0) _ - a(0) b(0)u(0) + c(0)u(0) - 1(0)). Hence, we can compute all first-order and second-order derivatives at 0 in terms of the coefficients and nonhomogeneous term in (3.1.2) and the initial values uo and ul in (3.1.3). In fact, if all functions involved are smooth, we can compute all derivatives of u of any order at the origin by using uo, ul and their derivatives and differentiating (3.1.2). In summary, we can find all derivatives of u of any order at the origin under the condition (3.1.4), which will be defined as the noncharacteristic condition later on. Comparing the initial-value problem (3.1.2) and (3.1.3) here with the initial-value problem (2.1.3) and (2.1.4) for first-order PDEs, we note that there is an extra condition in (3.1.3). This reflects the general fact that two conditions are needed for initial-value problems for second-order PDEs. More generally, consider the hypersurface E given by {cp = 0} for a smooth function cp in a neighborhood of the origin with Vcp 0. We note that the vector field Vcp is normal to the hypersurface E at each point of E. We take a point on E, say the origin. Then (0) = 0. Without loss of generality, we assume cp(0) 0. Then by the implicit function theorem, we can solve cp = 0 for xn = b(xl, , x,i_1) in a neighborhood of the origin. Consider the change of variables x H y = (Xi,... , x_1, o(x)). 3. An Overview of Second-Order PDEs This is a well-defined transformation with a nonsingular Jacobian in a neighborhood of the origin. With n uxi = yk,xi uyk k=1 uxixj = yk,xixj uyk yk,xiY1,xj uykyl + k=1 we can write the operator L in the y-coordinates as n Lu = aij yk,xi yl,xj n bi yk,xi + The initial hypersurface ai j yk,xix j uyk + Cu. is given by {Yn = 0} in the y-coordinates. With yn = gyp, the coefficient of uynyn is given by n aij i,j=1 This is the principal symbol p(x; ) evaluated at = o(x). Definition 3.1.1. Let L be a second-order linear differential operator as in (3.1.1) in a neighborhood of xo E W and be a smooth hypersurface containing xo. Then is noncharacteristic at xo if n aij (xo)vivj Where v = (vi,... , vn) is normal to at xo. Otherwise, it is characteristic at xo. A hypersurface is noncharacteristic if it is noncharacteristic at every point. Strictly speaking, a hypersurface is characteristic if it is not noncharacteristic, i.e., if it is characteristic at some point. In this book, we will abuse this terminology. When we say a hypersurface is characteristic, we mean it is characteristic everywhere. This should cause few confusions. In R2, hypersurfaces are curves, so we shall speak of characteristic curves and noncharacteristic curves. When the hypersurface is given by {gyp = 0} With V 0, its normal vector field is given by V _ ........ , ). Hence we may take v = V p (xo) in (3.1.5). We note that the condition (3.1.5) is preserved under 3.1. Classifications C2-changes of coordinates. Using this condition, we can find successively the values of all derivatives of u at xo, as far as they exist. Then, we could write formal power series at xo for solutions of initial-value problems. If the initial hypersurface is analytic and the coefficients, nonhomogeneous terms and initial values are analytic, then this formal power series converges to an analytic solution. This is the content of the Cauchy-Kovalevskaya theorem, which we will discuss in Section 7.2. Now we introduce a special class of linear differential operators. Definition 3.1.2. Let L be asecond-order linear differential operator as in (3.1.1) defined in a neighborhood of xo E ][8n. Then L is elliptic at xo if n for any E Rn \ {0}. An operator L defined in St is called elliptic in S2 if it is elliptic at every point in SZ. According to Definition 3.1.2, linear differential operators are elliptic if every hypersurface is noncharacteristic. We already assumed that (a23) is an n x n symmetric matrix. Then L is elliptic at xo if (a23(xo)) is a definite matrix-positive definite or negative definite. We now turn our attention to second-order linear differential equations in ][82, where complete classifications are available. Let S2 be a domain in ][82 and consider 2 Lu = c(x)u = f(x) in SZ. Here we assume (a23) is a 2 x 2 symmetric matrix. Definition 3.1.3. Let L be a differential operator defined in a neighborhood of xo E 1[82 as in (3.1.6). Then (1) L is elliptic at xo E SZ if det(a23(xo)) > 0; (2) L is hyperbolic at xo E SZ if <0; (3) L is degenerate at xo E St if det(a23(xo)) = 0. The operator L defined in S2 C IL82 is called elliptic (or hyperbolic) in SZ if it is elliptic (or hyperbolic) at every point in St. It is obvious that the ellipticity defined in Definition 3.1.3 coincides with that in Definition 3.1.2 for n = 2. 3. An Overview of Second-Order PDEs For the operator L in (3.1.6), the symmetric matrix (a23) always has two (real) eigenvalues. Then L is elliptic if the two eigenvalues have the same sign; L is hyperbolic if the two eigenvalues have different signs; L is degenerate if at least one of the eigenvalues vanishes. The number of characteristic curves is determined by the type of the operator. For the operator L in (3.1.6), there are two characteristic curves if L is hyperbolic; there are no characteristic curves if L is elliptic. We shall study several important linear differential operators in R2. The first of these is the Laplacian. In R2, the Laplace operator 0 is defined by Du = uxlxl + uX2X2. It is easy to see that the Laplace operator is elliptic. In polar coordinates xl = r cos 9, x2 = r sin 9, the Laplace operator 0 can be expressed by 1 Du = urr + -ur + 2 u99 r r The equation Du = 0 is called the Laplace equation and its solutions are called harmonic functions. By writing x = xl and y = x2, we can associate with a harmonic function u(x, y) a conjugate harmonic function v(x, y) such that u and v satisfy the first-order system of Cauchy-Riemann equations uvy, uy=-vx. Any such a pair gives an analytic function 1(z) = u(x, y) + iv(x, y) of the complex argument z = x + iy, if we identify C with R2. Physically, (u, -v) is the velocity field of an irrotational, incompressible flow. Conversely, for any analytic function f, functions u = Re f and v = Im f are harmonic. In this way, we can find many nontrivial harmonic functions in the plane. For example, for any positive integer k, Re(x + iy)' and Im(x + iy)' are homogeneous harmonic polynomials of degree k. Next, with es cos y + ies sin y, we know ex cos y and es sin y are harmonic ez = functions. Although there are no characteristic curves for the Laplace operator, initial-value problems are not well-posed. 3.1. Classifications Example 3.1.4. Consider the Laplace equation in ][82 uxx + uyy = 0, with initial values prescribed on {y = 0}. For any positive integer k, set uk(X,y) = Then uk is harmonic. Moreover, uk y(x, y) = sin(kx)e'y, uk,x(x, y) = cos(kx)e' ", and hence VuA(x, y)I2 = y) + y) = e2'y. Ifor any x E ]E8 and any k, Ias k -+oo, for any x E ][8 and y > 0. There is no continuous dependence on initial values in C'-norms. In R2, the wave operator D is given by Du = ux2x2 - ux1x1. It is easy to see that the wave operator is hyperbolic. It is actually called the one-dimensional wave operator. This is because the wave equation flu = 0 in R2 represents vibrations of strings or propagation of sound waves in tubes. Because of its physical interpretation, we write u as a function of two independent variables x and t. The variable x is commonly identified with position and t with time. Then the wave equation in R2 has the form utt - uxx = 0 The two families of straight lines t = ±x + c, where c is a constant, are characteristic. The heat operator in R2 is given by Lu = ux2 - ux1x1. This is a degenerate operator. The heat equation ux2 - ux1x1 = 0 is satisfied by the temperature distribution in a heat-conducting insulated wire. As with the wave operator, we refer to the one-dimensional heat operator and we write u as a function of the independent variables x and t. Then the heat equation in R2 has the form ut - = 0. It is easy to see that {t = 0} is characteristic. If we prescribe u(x, 0) _ uo(x) in an interval of {t = 0}, then using the equation we can compute all derivatives there. However, uo does not determine a unique solution even 3. An Overview of Second-Order PDEs in a neighborhood of this interval. We will see later on that we need initial values on the entire initial line {t = 0} to compute local solutions. Many important differential operators do not have a definite type. In other words, they are neither elliptic nor hyperbolic in the domain where they are defined. We usually say a differential operator is of mired type if it is elliptic in a subdomain and hyperbolic in another subdomain. In general, it is more difficult to study equations of mixed type. Example 3.1.5. Consider the Tricomi equation uX2X2 + in 1102 It is elliptic if x2 > 0, hyperbolic if x2 <0 and degenerate if x2 = 0. Characteristic curves also arise naturally in connection with the propagation of singularities. We consider a simple case. Let SZ be a domain in IR2, F be a continuous curve in SZ and w be a continuous function in SZ \ F. For simplicity, we assume F divides SZ into two parts, SZ+ and L. Take a point xo E F. Then w is said to have a jump at xo across F if the two limits w_ (x0) = w (x), x-+xo, xEf _ w+ (xo) = x-+xo, XES2+ w (x) exist. The difference [w](xO) = w+(xO) - w_(xO) is called the jump of w at xo across F. The function w has a jump across I' if it has a jump at every point of t across F. If w has a jump across I', then [w] is awell-defined function on F. It is easy to see that [w] = 0 on I' if w is continuous in ft Proposition 3.1.6. Let SZ be a domain in lEg2 and I' be a C1 -curve in dividing S2 into two parts. Suppose and u E C1(1Z) fl C2(S2 \ I') satisfies 2 cu = f + i, j=1 bi, c, f are continuous functions in SZ in SZ \ F. If 02u has a jump across F, then F is a characteristic curve. Proof. Since u is C1 in SZ, we have [u] _ [u1J _ [u2] = 0 on F. Let the vector field (v1, 112) be normal to F. Then v2 axl - v13x2 is a directional derivative along F. Hence on F (v23X1 - 3.1. Classifications (v2ax1 - v2[uxlxl] vl By the continuity of a3, bi, c and f in SZ, we have 0 all [ux1x1 ] + 2a12 [uX1X2] + a22 on I'. Thus, the nontrivial vector ([u11J, [u12J, [u22]) satisfies a 3 x 3 homogeneous linear system on F. Hence the coefficient matrix is singular. That is, on F v2 -111 v2 -vl I = 0, al l 2aa1212 all allvl + 2a12v1v2 + a22v2 = 0. This yields the desired result. The Laplace operator, the wave operator and the heat operator can be generalized to higher dimensions. Example 3.1.7. The n-dimensional Laplace operator in Rn is defined by n Du = i=1 and the Laplace equation is given by Du = 0. Solutions are called harmonic functions. The principal symbol of the Laplace operator D is given by II2, for any E TP. Obviously, O is elliptic. Note that 0 is invariant under rotations. If x = Ay for an orthogonal matrix A, then n uxixi i=1 uyiyi i=1 For a nonzero function f, we call the equation Du = f the Poisson equation. The Laplace equation has a wide variety of physical backgrounds. For example, let u denote a temperature in equilibrium in a domain SZ C W with the flux density F. Then for any smooth subdomain S1' C SZ, the net flux of u through DSZ' is zero, i.e., aii' 3. An Overview of Second-Order PDEs where v is the unit exterior normal vector to 11'. Upon integration by parts, we obtain F = 0 in 1 since SZ' is arbitrary. In a special case where the flux F is proportional to the gradient Vu, we have F = -aVu, for a positive constant a. Here the negative sign indicates that the flow is from regions of higher temperature to those of lower one. Now a simple substitution yields the Laplace equation Du = div(Vu) = 0. Example 3.1.8. We denote points in n+1 by operator in n+1 is given by xn, t). The heat Lu=ut - Lu. It is often called the n-dimensional heat operator. Its principal symbol is given by p(x, t,, T) _ -II2, e W and T E ]R. A hypersurface {(xi,... , xn, t) = 0} is non- for any characteristic for the heat operator if, at each of its points, - IV 'I2 0. Likewise, a hypersurface {(x, t) = 0} is characteristic if pxcp = 0 and cpt 0 at each of its points. For example, any horizontal hyperplane {t = to}, for a fixed t0 E III, is characteristic. The heat equation describes the evolution of heat. Let u denote a temperature in a domain 1Z C W with the flux density F. Then for any smooth subdomain SZ' C 1, the rate of change of the total quantity in SZ' equals the negative of the net flux of u through 811', i.e., d u dx = - where v is the unit exterior normal vector to 11'. Upon integration by parts, we obtain u dx = - div F dx. This implies ut = - div F in 11, 3.1. Classifications since SZ' is arbitrary. In a special case where the flux F is proportional to the gradient Du, we have F = -aVu, for a positive constant a. Now a simple substitution yields ut = a div(Du) = aDu. This is the heat equation if a = 1. Example 3.1.9. We denote points in Ian+1 by (xi, , xn, t). The wave operator Eli in Ian+1 is given by Ellu = utt - Oxu. It is often called the n-dimensional wave operator. Its principal symbol is given by p(x,t;S,T) = T2 ISI2 for any E Il8n and T E R. A hypersurface {p(xi,'.. , xn, t) = 0} is noncharacteristic for the wave operator if, at each of its points, - VPL 0. For any (xO, to) E IlSn x 1[8, the surface Ix - x12 = (t - to)2 is characteristic except at (moo, to). We note that this surface, smooth except at (xO, to), is the union of two cones. It is usually called the characteristic cone. Figure 3.1.1. The characteristic cone. To interpret the wave equation, we let u(x, t) denote the displacement in some direction of a point x E 1 C Ian at time t > 0. For any smooth subdomain f' C 1, Newton's law asserts that the product of mass and the acceleration equals the net force, i.e., 4udx= - 3. An Overview of Second-Order PDEs where F is the force acting on Q' through aQ' and the mass density is taken to be 1. Upon integration by parts, we obtain d2 u dx = SZ div F dx. SZ This implies utt = - div F in Q, since Q' is arbitrary. In a special case F = -aDu for a positive constant a, we have utt = a div(Du) = aDu. This is the wave equation if a = 1. The heat equation and the wave equation can be generalized to parabolic equations and hyperbolic equations in arbitrary dimensions. Again, we denote points in by (Xi,'.. , xn, t). Let a23, b2, c and f be functions , n. We assume (a23) is an n x n defined in a domain in i, j = 1, positive definite matrix in this domain. An equation of the form n ut - a( x, b2 (x, is parabolic, and an equation of the form n utt - b2 (x, aZj (x, 2=1 t)u = f(x,t) is hyperbolic. 3.2. Energy Estimates In this section, we discuss the uniqueness of solutions of boundary-value problems for the Laplace equation and initial/boundary-value problems for the heat equation and the wave equation. Our main tool is the energy estimates. Specifically, we derive estimates of L2-norms of solutions in terms of those of boundary values and/or initial values. We start with the Laplace equation. Let Q C W be a bounded C1domain and (p be a continuous function on 9Q. Consider the Dirichlet boundary-value problem for the Laplace equation: Du = 0 in Q, u = (p on 9Q. We now prove that a CZ-solution, if it exists, is unique. To see this, let ul and u2 be solutions in C2(St) fl Cl(S2). Then, the difference w = ul - u2 3.2. Energy Estimates Ow = 0 w=0 on aSZ. We multiply the Laplace equation by w and write the resulting product as n - IVwI2. 0 =wow = 2=1 An integration by parts in SZ yields 0=- J IVwI 2 dx w av dS. With the homogeneous boundary value w = 0 on aSZ, we have IVwI2dx = 0, and then Ow = 0 in ft Hence w is constant and this constant is zero since w is zero on the boundary. Obviously, the argument above applies to Dirichlet problems for the Poisson equation. In general, we have the following result. Lemma 3.2.1. Let S2 C ll8n be a bounded C' -domain, f be a continuous function in SZ and cp be a continuous function on aSZ. Then there exists at most one solution in C2(SZ) fl Cl(SZ) of the Dirichlet problem Du = f inn, u = cp on aSt. By the maximum principle, the solution is in fact unique in C2 (SZ)f1C(St), as we will see in Chapter 4. Now we discuss briefly the Neumann boundary-vale problem, where we prescribe normal derivatives on the boundary. Let Eli be a continuous function on BSt. Consider Du = 0 au _ av on aSZ. We can prove similarly that solutions are unique up to additive constants if SZ is connected. We note that if there exists a solution of the Neumann problem, then Eb necessarily satisfies bdS=O. This can be seen easily upon integration by parts. 3. An Overview of Second-Order PDEs Next, we derive an estimate of a solution of the Dirichlet boundary-value problem for the Poisson equation. We need the following result, which is referred to as the Poincare lemma. Lemma 3.2.2. Let St be a bounded Cl -domain in Il8n and u be a Cl -function in St with u = 0 on BSt. Then 11U11L2(Il) < diam(1)IIVuIIL2(St) Here diam(St) denotes the diameter of SZ and is defined by diam(S2) = sup x x,yESl Proof. We write IRn = x Ilk. For any xo E IRn-1, let lxo be the straight line containing xo and normal to W-1 x {0}. Consider those xo such that lxo n SZ 0. Now lxo (1 11 is the union of a countable collection of pairwise disjoint open intervals. Let I be such an interval. Then I C SZ and I has the form I = {(xo, xn) : a < xn < b}, where (x'0, a), (x'0, b) E 9i Since u(xo, a) = 0, then IISn-1 u(xo, xn) = uxn (xo, s)ds for any xn E (a, b). wn-1 Figure 3.2.1. An integration along l. The Cauchy inequality yields u2 (xo, xn) < (xn - a) uxn (xo, s) ds for any xn E (a, b). By a simple integration along I, we have b f u2 (xo, xn) dxn < (b - a)2 a uxn (xo, xn) dxn. a 3.2. Energy Estimates By adding the integrals over all such intervals, we obtain u2 (x p, xn) dxn < C2, l , nc uxn (x p, xn) dxn, l , nc where Cxo is the length of lxo in ft Now a simple integration over xo yields the desired result. Now consider Du = f in S2, u=0 on51. We note that u has a homogeneous Dirichlet boundary value on BSt. Theorem 3.2.3. Let SZ C 1[8be a bounded C1 -domain and f be a continuous function in S2. Suppose u E C2(11) fl Cl(St) is a solution of (3.2.1). Then IIUIIL2(cl) + IIVUIIL2(cl) cII f II L2(c), where C is a positive constant depending only on St. Proof. Multiply the equation in (3.2.1) by u and write the resulting product in the left-hand side as n (uu)xZ - I Du12. 2G02G = Upon integrating by parts in 11, we obtain fudS_ f IVuI2dx = fufdx. av t With u = 0 on 852, we have fu f dx. The Cauchy inequality yields (L1vu12dx)2= (fufdx)2fu2dx.ff2dx. By Lemma 3.2.2, we get dx < (diam(1O)2ff2dx. t Using Lemma 3.2.2 again, we then have u2 dx < (diam(11)) This yields the desired estimate. 3. An Overview of Second-Order PDEs Now we study initial/boundary-value problems for the heat equation. Suppose S2 is a bounded C'-domain in ][8n, f is continuous in St x [0, oo) and uo is continuous in St. Consider ut - Du = f in S2 x (0, oo), u(.,0) = uo in S2, u=0 onax(0,oo). The geometric boundary of S2 x (0, oo) consists of three parts, S2 x {0}, 8S2 x (0, oo) and aS2 x {0}. We treat S2 x {0} and aS2 x (0, oo) differently and refer to values prescribed on S2 x {0} and 8SZ x (0, oo) as initial valves and boundary values, respectively. Problems of this type are usually called initial/boundary-value problems or mired problems. We note that u has a homogeneous Dirichlet boundary value on 81Z x (0, oo). We now derive an estimate of the L2-norm of a solution. For each t > 0, we denote by u(t) the function defined on SZ by u(t) = t). Theorem 3.2.4. Let SZ be a bounded Cl-domain in IEgn, f be continuous in St x [0, oo) and uo be continuous in St. Suppose u E C2(SZ x (0, oo)) f1 C1(1 2 x [0,oo)) is a solution of (3.2.2). Then Pt IC 1U112() +J Ids for any t > 0. 0 Theorem 3.2.4 yields the uniqueness of solutions of (3.2.2). In fact, if f - 0 and uo - 0, then u - 0. We also have the continuous dependence on initial values in L2-norms. Let fl, f2 be continuous in SZ x [0, oo) and uol, u02 be continuous in ft. Suppose ul, u2 E C2(S2x (0, oo))f1C1(S2x [0, oo)) are solutions of (3.2.2) with fl, uol and f2, u02 replacing f, uo, respectively. Then for any t > 0, t I- 262(t)IIL2(SZ) IIuoi - u02IIL2(SZ) + I- f2(s)IIL2() (LAS. Proof. We multiply the equation in (3.2.2) by u and write the product in the left-hand side as uut - uDu = 2 (u2)t - IVuI2. i=1 Upon integration by parts in S2 for each fixed t > 0 and u(t) = 0 on 9, we have 2 dt f u2(t) dx + Idx = f f(t)u(t) dx. An integration in t yields, for any t> 0, fu2(t)dx+2fflVul2dxds = fudx+2fffudxds. 3.2. Energy Estimates E(t) = I (E(t))2 + 2ftf 1Vu12 dads = (E(0))2 + 2ft Differentiating with respect to t, we have 2E(t)E'(t) < 2E(t)E'(t) + 2 f IVu(t)12 dx =2 f f(t)u(t) dx t ' = 2E(t)Il.f(t)IIz,2(st) E'(t) c If(t)IIL2(). Integrating from 0 to t gives the desired estimate. Now we study initial/boundary-value problems for the wave equation. Suppose SZ is a bounded Cl-domain in ][8n, f is continuous in SZ x [0, oo), uo is Cl in St and ui is continuous in SZ. Consider utt - 0u= f in St x (0, oo), 0) _ uo, ut(',0) = ul in St, u=0 onD1x(0,oo). Comparing (3.2.3) with (3.2.2), we note that there is an extra initial condition on Ut in (3.2.3). This relates to the extra order of the t-derivative in the wave equation. Theorem 3.2.5. Let St be a _bounded C' -domain in ][8n, f be continuous in SZ x [0, oo), uo be Cl in S2 and ul be continuous in SZ. Suppose U E CZ(St x (0,oo)) fl Cl(S2 x [0,oo)) is a solution of (3.2.3). Then for any (IIUt(t)II2() + I - (Ilullli2(sz) + Iloxuolli2(sl)) 2 t 0 IIIU0IIL2(cz) +t(Ilu1lli2(sl) + Iloxuolli2(sl))2 + / (t-s)IIf(s)IIL2()ds. 0 As a consequence, we also have the uniqueness and continuous dependence on initial values in L2-norms. 3. An Overview of Second-Order PDEs Proof. Multiply the equation in (3.2.3) by Ut and write the resulting product in the left-hand side as n z=1 Upon integration by parts in 12 for each fixed t> 0, we obtain dx - 2 dt ,f but (t) + Ut(t) aU (t) ds = f f(t)ut(t) dx. Note that Ut = 0 on 81 Z x (0, oo) since u = 0 on 81 t x (0, oo). Then 2 dt f (u(t) + Idx - ft f(t)ut(t) dx. Define the energy by E(t) _ (mot (t) + I If f - 0, then Hence for any t > 0, E(t) = E(0) = ft (u + IVuoI2) dx. This is the conservation of energy. In general, E(t) = E(0) + 2 Jo ffutdxds. To get an estimate of E(t), set J(t) _ (E(t)) 2. Then t (J(t))2 = (J(0))2 + 2 0 fut dxds. By differentiating with respect to t and applying the Cauchy inequality, we get 2J(t)J'(t) = 2f f(t)ut(t)dx < 211.f (t)11zz(Q)IIut(t)11z,z(Q) 2J(t)II.f(t)11L2(Q) Hence for any t > 0, Integrating from 0 to t, we obtain 3.2. Energy Estimates This is the desired estimate for the energy. Next, to estimate the L2-norm of u, we set = f u2(t) dx. A simple differentiation yields 2F(t)F'(t) = 2J u(t)ut(t)dx < 211u(t)IIL2(st)IIut(t)IIL2(st) = 2F(t) IIut(t) IIL2(). Hence F'(t) IJ(0) + f t Ids. Integrating from 0 to t, we have I<_ IIU0IIL2(c) +tJ(o) + f t f t 0 By interchanging the order of integration in the last term in the right-hand side, we obtain the desired estimate on u. LI There are other forms of estimates on energies. By squaring the first estimate in Theorem 3.2.5 and applying the Cauchy inequality, we obtain f (u(t) + Idx < 2f (ui + Idx ft f + 2t J Js f 2 dads. Integrating from 0 to t, we get f (ut + Idxds < 2t f (ui + Idads + t2 f 2 dads. Next, we briefly review methods used in deriving estimates in Theorems 3.2.3-3.2.5. In the proofs of Theorems 3.2.3-3.2.4, we multiply the Laplace equation and the heat equation by u and integrate the resulting product over 1, while in the proof of Theorem 3.2.5, we multiply the wave equation by ut and integrate over ft It is important to write the resulting product as a linear combination of u2, u2, IVUI2 and their derivatives. Upon integrating by parts, domain integrals of derivatives are reduced to boundary integrals. Hence, the resulting integral identity consists of domain integrals and boundary integrals of u2, u2 and IVUI2. Second-order derivatives of 3. An Overview of Second-Order PDEs u are eliminated. These strategies also work for general elliptic equations, parabolic equations and hyperbolic equations. Compare methods in this section with those used to obtain L2-estimates of solutions of initial-value problems for first-order linear PDEs in Section 2.3. To end this section, we discuss an elliptic differential equation in the entire space. Let f be a continuous function in TR. We consider -Du + u = f in ][8n. Let u be a C2-solution in TR. Next, we demonstrate that we can obtain estimates of L2-norms of u and its derivatives under the assumption that u and its derivatives decay sufficiently fast at infinity. To obtain an estimate of u and its first derivatives, we multiply (3.2.4) by u. In view of n uDu = we write the resulting product as VuJ2 + U2 (uuxk)xk = fu. We now integrate in TR. Since u and uxk decay sufficiently fast at infinity, we have f (JVuJ2+u2)dx=f Rigorously, we need to integrate in BR and let R -+ oo after integrating by parts. By the Cauchy inequality, we get 22n fudx A simple substitution yields Hence, the L2-norm of f controls the L2-norms of u and Du. In fact, the L2-norm of f also controls the L2-norms of the second derivatives of u. To see this, we take square of the equation (3.2.4) to get (L\u)2 - 2uL\u + U2 = f2. 3.3. Separation of Variables We note that n (Lu)2 n n )mil I v2ul2 + 2 IVuI2 + u2+ = f2 Integration in ll8Th yields (IVu+ 2IDuI2 + u2) dx = f f2 dx. Therefore, the L2-norm of f controls the L2-norms of every second deriva- tives of u, although f is related to u by Du, which is just one particular combination of second derivatives. As we will see, this is the feature of elliptic differential equations. We need to point out that it is important to assume that u and its derivatives decay sufficiently fast. Otherwise, the integral identity (3.2.5) does not hold. By taking f = 0, we obtain u = 0 from (3.2.5) if u and its derivatives decay sufficiently fast. We note that u(x) =eel is a nonzero solution of (3.2.4) for f = 0. 3.3. Separation of Variables In this section, we solve boundary-value problems for the Laplace equation and initial/boundary-value problems for the heat equation and the wave equation in the plane by separation of variables. 3.3.1. Dirichlet Problems. In this subsection we use the method of separation of variables to solve the Dirichlet problem for the Laplace equation in the unit disc in ll82. We will use polar x= r cos 8, y= r sin B in 1[82, and we will build up solutions from functions that depend only on r and functions that depend only on B. Our first step is to determine all harmonic functions u in ][82 having the form u(r, B) = f(r)g(6), 3. An Overview of Second-Order PDEs where f is defined for r > 0 and g is defined on S1. (Equivalently, we can view g as a 2ii--periodic function defined on ][8.) Then we shall express the solution of a Dirichlet problem as the sum of a suitably convergent infinite series of functions of this form. In polar coordinates, the Laplace equation is -(rur)r + -uoo = 0. Thus the function u(r, B) = f(r)g(0) is harmonic if and only if 1 + r f (r)9 (e) _ 0, that is, (fh'(r) + rf/(r)J 9(e) + When u(r, 8) r f(r)g"(0) =0. 0, this equation is equivalent to r (f/Il r) + r f(r) g"(0) g(0) The left-hand side of this equation depends only on r and the right-hand side depends only on 8. Thus there is a constant A such that (f 11(r) + lf f(r) = 0 for r > 0, f11(r) + r f'(r) and g" (e) + Ag (e) = 0 for 0 E S1. Our next step is to analyze the equation for g. Then we shall recall some facts about Fourier series, after which we shall turn to the equation for f. The equation for g describes the eigenvalue problem for - de on 1. This equation has nontrivial solutions when A = k2, k = 0,1, 2, . When A = 0, the general solution is g(0) = ao, where ao is a constant. For A = k2, k = 1, 2, , the general solution is g(0) = ak cos k0 + bk sin k0, where aj and bk are constants. Moreover, the normalized eigenfunctions 1 1 , cos k0, sin k0, k = 1, 2, form an orthonormal basis for L2 (S1) . In other words, for any v e L2 (S1), 00 (akcos k9 + bksin k9) , ao -I- 3.3. Separation of Variables ap = J 1 v(9) d9, v(e) cos k9 d8, and for k = 1, 2, ak = bk = v(B) sin kB dB. This series for v is its Fourier series and ao, ak, bk are its Fourier coefficients. The series converges in L2 (S1) . Moreover, 1 1- a0 + + bk) As for f, when A = 0 the general solution is f(r) = co + do log r, where co and do are constants. Now we want u(r, B) = f(r)g(0) to be harmonic in IR2, thus f must remain bounded as r tends to 0. Therefore we must have do = 0, and so f(r)=co is a constant function. For A = k2, k = 1, 2, ,the general solution is f(r) = ckrk + dkr_k, where cj and dk are constants. Again f must remain bounded as r tends to 0, so dk = 0 and f(r) = ckrk. In summary, a harmonic function u in l[82 of the form u(r, B) = f(r)g(0) is given by u(r, B) = ap, or by u(r, 8) = akrk cos k8 + bkr"' sin k8, for k = 1, 2, where ao, ak, bk are constants. Remark 3.3.1. Note that r'' cos k8 and r'' sin k8 are homogenous harmonic polynomials of degree k in JR2. Taking z = x + iy, we see that r cos k8 + irk sin k8 = r1 eZ '0 = (x + and hence r cos kB = Re(x + iy)k, r'' sin kB = Im(x + iy)k. 3. An Overview of Second-Order PDEs Now, we are ready to solve the Dirichlet problem for the Laplace equation in the unit disc B1 C R2. Let cp be a function on O`1B1 = S1 and consider Du = 0 in B1, (3.3.1) u = co on S1. We first derive an expression for the solution purely formally. We seek a solution of the form u(r, 8) = ao + = j (ark cos k8 + bark sin k9). The terms in the series are all harmonic functions of the form f(r)g(0) that we discussed above. Thus the sum u(r, B) should also be harmonic. Letting r = 1 in (3.3.2), we get 00 cp(6) = u(1, B) = ao + 1 (ak cos k9 + bk sin k6). Therefore, the constants ao, ak and bk, k = 1, 2, should be the Fourier coefficients of cp. Hence, P(e) den and for k = 1, 2, ak = (3.3.4) bk _ o(e)Los ke de, o(e) sin k8 d8. Theorem 3.3.2. Suppose cp E L2(S1) and u is given by (3.3.2), (3.3.3) and (3.3.4). Then u is smooth in Bl and satisfies Du = 0 in B1. Moreover, Proof. Since cp E L2(S1), we have 00 1k011L2(s1) = a + I:(a + b) < k=o In the following, we fix an R E (0, 1). 3.3. Separation of Variables First, we set Iakrk cos k9 -}- bark sin k91. Soo(r, 8) _ k=1 By (3.3.2), we have u(r,G)I IaoI + To estimate Soo, we note that, for any r E [0, R] and any 8 E S1, (a + Soo(r, e) 2 Rk. By the Cauchy inequality, we get a (a + bk) S'oo(r, B) k=1 Hence, the series defining u in (3.3.2) is convergent absolutely and uniformly in BR. Therefore, u is continuous in BR. Next, we take any positive integer m and any nonnegative integers m1 and m2 with m1 + m2 = m. For any r E [0, R] and any 9 E 51, we have formally 00 ax ay 'n2 u(r, 9) = ax 'nl ay 'n2 (akrk cos k9 - bkrk sink9 ) . k=1 In order to justify the interchange of the order of differentiation and summation, we need to prove that the series in the right-hand side is convergent absolutely and uniformly in BR. Set (3.3.5) 5152 (akrk cos k9 + bkrk sin k9) Sm1m2 (r, 9) = (We note that this is Soo defined earlier if m1 = m2 = 0.) By using rectangular coordinates, it is easy to check that, for k < m, 515 2 (akrk cos k9 + bkrk sin k9) = 0, and fork > m, aTf18y 2 (akrk cos kB -{- bkrk sin kB) (a + b) 2 kmRk-m. 1 5m1m2 (r, e) k=m 3. An Overview of Second-Order PDEs By the Cauchy inequality, we have 1 a2 + bl 2 Smim2 (r, 8) ; < oo. This verifies that the series defining a1 a2 u is convergent absolutely and uniformly in BR, for any ml and m2 with ml + m2 > 1. Hence, u is smooth in BR for any R < 1 and all derivatives of u can be obtained from term-byterm differentiation in (3.3.2). Then it is easy to conclude that Du = 0. We now prove the L2-convergence. First, by the series expansions of u and gyp, we have w(r, e) - (e) _ (ajcosk9 + bk sin k9)(r' - 1), k-1 and then f Iu(r e) - (9)I2d9 = (ak + 1 - l)a We note that rk -+ 1 as r -+ 1 for each fixed k > 1. For a positive integer K to be determined, we write K I- (O)I2dO = (a + bk) (rk - 1)2 Is' 00 For any > 0, there exists a positive integer K = K(E) such that 00 Then there exists a 6> 0, depending on E and K, such that (a+b)(rk_1)2<E for any r E (1 - S,1), since the series in the left-hand side consists of finitely many terms. Therefore, we obtain I- (e)I2 de < ZE for any r E (1-8,1). This implies the desired L2-convergence as r -+ 1. 3.3. Separation of Variables We note that u is smooth in Bl even if the boundary value co is only L2. Naturally, we ask whether u in Theorem 3.3.2 is continuous up to 8B1, or, more generally, whether u is smooth up to 8B1. We note that a function is smooth in Bl if all its derivatives are continuous in B. Theorem 3.3.3. Suppose cp E COO(S1) and u is given by (3.3.2), (3.3.3) and (3.3.4). Then u is smooth in Bl with u(1,.) = co. Proof. Let m1 and m2 be nonnegative integers with m1 + m2 = m. We need to prove that the series defining a'n19 2u(r, B) converges absolutely and uniformly in B1. Let Smlm2 be the series defined in (3.3.5). Then for any r E [0,1] and B E S 1, 00 (a + by2km. To prove that the series in the right-hand side is convergent, we need to improve estimates of ak and bk, k = 1, 2, (3.3.4) and integrations by parts, we have By definitions of ak and bk in f (e)cos k9 d8 = - f ak _ bk _ / (e si p(B) sin k9 dB = 7=Jco (B) k9 dB d8. Hence, {kbk, -kak} are the coefficients of the Fourier series of cps, so 00 By continuing this process, we obtain for any positive integer 2, kae(a + iicoML2(S1) < oo. Hence by the Cauchy inequality, we have for any r E [0, 1] and B E S1, 00 1 (a + b) 2m Smlm2 (r, e) k=1 k2(m+1) (a k-2 (i This implies Smlm2 (r, e) < Cm IIm+1) IIL2(S1) where Cm is a positive constant depending only on m. Then the series defining 9' 92 u converges absolutely and uniformly in 131. Therefore, aml ay 1 u is continuous in B1. 3. An Overview of Second-Order PDEs By examining the proofs of Theorem 3.3.2 and Theorem 3.3.3, we have the following estimates. For any integer m> 0 and any R E (0, 1), IkIICm(R) where C,,,,,R is a positive constant depending only on m and R. This estimate controls the Cm-norm of u in BR in terms of the L2-norm of co on S1. It is referred to as an interior estimate. Moreover, for any integer m > 0, m+l IIUIICm(.i) < Cm where C,,,, is a positive constant depending only on m. This is referred to as a global estimate. If we are interested only in the continuity of u up to aBl, we have the following result. Corollary 3.3.4. Suppose co E C1(S1) and u is given by (3.3.2), (3.3.3) and (3.3.4). Then u is smooth in Bl, continuous in Bl and satisfies (3.3.1). Proof. It follows from Theorem 3.3.2 that u is smooth in B1 and satisfies Du = 0 in B1. The continuity of u up to aBl follows from the proof of 0 Theorem 3.3.3 with ml = m2 = 0. The regularity assumption on co in Corollary 3.3.4 does not seem to be optimal. It is natural to ask whether it suffices to assume that co is in C(S1) instead of C1 (S1). To answer this question, we need to analyze pointwise convergence of Fourier series. We will not pursue along this direction in this book. An alternative approach is to rewrite the solution u in (3.3.2). With the explicit expressions of ao, a/ c, b in terms of cp as in (3.3.3) and (3.3.4), we can write (3.3.6) u(r, B) = K(r, B, ) dry, where 00 The integral expression (3.3.6) is called the Poisson integral formula and the function K is called the Poisson kernel. We can verify that (3.3.7) K(r, 8, r1) = 1 - 2r cos(B - ri) + r2 We leave this verification as an exercise. In Section 4.1, we will prove that u is continuous up to aBl if co is continuous on aBl. In fact, we will derive Poisson integral formulas for arbitrary dimension and prove they provide 3.3. Separation of Variables solutions of Dirichlet problems for the Laplace equation in balls with continuous boundary values. Next, we compare the regularity results in Theorems 3.3.2-3.3.3. For Dirichlet problems for the Laplace equation in the unit disc, solutions are always smooth in B1 even with very weak boundary values, for example, with L2-boundary values. This is the interior smoothness, i.e., solutions are always smooth inside the domain regardless of the regularity of boundary values. Moreover, solutions are smooth up to the boundary if boundary values are also smooth. This is the global smoothness. 3.3.2. Initial/Boundary-Value Problems. In the following, we solve initial/boundary-value problems for the 1-dimensional heat equation and the 1-dimensional wave equation by separation of variables, and discuss regularity of these solutions. We denote by (x, t) points in [0, ir] x [0, oo), with x identified as the space variable and t as the time variable. We first discuss the 1-dimensional heat equation. Let uo be a continuous function in [0, ir]. Consider the initial/boundary-value problem ixxO in(0,ir)x(0,oo), (3.3.8) u(x, 0) = uo(x) for x E (0, ir), u(0, t) = u(ir, t) = 0 for t E (0,oo). Physically, u represents the temperature in an insulated rod with ends kept at zero temperature. We first consider = 0 in (0,ir) x (0,oo), (3.3.9) u(0, t) = ufr, t) = 0 for t E (0,oo). We intend to find its solutions by separation of variables. Set u(x, t) = a(t)w(x) for (x,t) E (0,ir) x (0,oo). a'(t)w(x) - a(t)w"(x) = 0, and hence w"(x) w(x) Since the left-hand side is a function of t and the right-hand side is a function ap(t) of x, there is a constant A such that each side is -A. Then a (t) + fa(t) = 0 for t E (O,oo), and w"(x) -I- Aw(x) = 0 (3.3.10) w(0) = w(ir) = 0. for x E (0, ir), 3. An Overview of Second-Order PDEs We note that (3.3.10) describes the homogeneous eigenvalue problem for , dx in (0, 7r). The eigenvalues of this problem are Ak = k2, k = 1, 2, and the corresponding normalized eigenfunctions wk(x) = 1 / 2 sin kx form a complete orthonormal set in L2(0, 71). For any v E L2(0, 7r), the Fourier series of v with respect to { sinkx} is given by v (x) _ I-)vksinkx, where v _ -2 v(x) sin kx dx. The Fourier series converges to v in L2(0, 7r), and oo V I I LZ (0,ir) _ For k= uk(x, t) = ak(t)wk(x) be a solution of (3.3.9). Then ap(t) satisfies the ordinary differential equation ap(t) + k2ak(t) = 0. Thus, ak(t) has the form ak(t) =aye-z t where aj is constant. Therefore, for k = 1, 2, uk(x, t) _ , we have ake-kzt sin kx for (x,t) E (0,) x (0,oo). We note that uk satisfies the heat equation and the boundary value in (3.3.8). In order to get a solution satisfying the equation, the boundary value and the initial value in (3.3.8), we consider an infinite linear combination of uk and choose coefficients appropriately. We emphasize that we identified an eigenvalue problem (3.3.10) from the initial/boundary-value problem (3.3.8). We note that - in (3.3.10) originates from the term evolving spatial derivative in the equation in (3.3.8) and that the boundary condition in (3.3.10) is the same as that in (3.3.8). Now, let us suppose that (3.3.11) u(x, t) _ /Yake_1c2tsinkx 3.3. Separation of Variables solves (3.3.8). In order to identify the coefficients a, k = 1, 2, calculate formally: , we u(x, 0) _ I-Laks1nkx, but we are given the initial condition u(x, 0) = uo(x) for x e (0, it). Thus we take the constants ak, k = 1, 2, , to be the Fourier coefficients of uo with respect to the basis {/sin kx} of LZ(0, it), i.e., uo(x) sin kx dx for k = 1, 2, Next we prove that u in (3.3.11) indeed solves (3.3.8). To do this, we need to prove that u is at least C2 in x and Cl in t and satisfies (3.3.8) under appropriate conditions on uo. We first have the following result. Theorem 3.3.5. Suppose uo E L2(0, it) and u is given by (3.3.11) and (3.3.12). Then u is smooth in [0,ir] x (0,oo) and ut - = 0 in (0, it) x (0, oo), u(0, t) = u(ir, t) = 0 for t e (0, oo). Moreover, t 0 I- uOIILz(p,) = O. Proof. Let i and j be nonnegative integers. For any x e [0, it] and t e (0,oo), we have formally axa u(x, t) = sin kx). In order to justify the interchange of the order of differentiation and summation, we need to prove that the series in the right-hand side is convergent absolutely and uniformly for any (x, t) E [0, it] x [to, oo), for an arbitrarily fixed to > 0. Set SZ (x,t) _ ak dt3 (e ) x2 (sin Fix to > 0. Then for any (x, t) E [0, it] x [to, oo), °° laki k=1 Since uo e L2(0,it), we have laki k2to k=1 e 3. An Overview of Second-Order PDEs Then the Cauchy inequality implies, for any (x, t) E [0, it] x [to, oo), 00 00 k2i+4j Sid (x, t) C k=1 where C0 is a positive constant depending only on i, j and to. This verifies that the series defining t) is convergent absolutely and uniformly for (x, t) E [0, it] x [to, oo), for any nonnegative integers i and j. Hence u is smooth in [0, it] x [to, oo) for any t0 > 0. Therefore, all derivatives of u can be obtained from term-by-term differentiation in (3.3.11). It is then easy to conclude that u satisfies the heat equation and the boundary condition in (3.3.8). We now prove the L2-convergence. First, from the series expansions of u and u0, we see that ak(e- t - 1)sin kx, 2 u(x, t) - up(x and then Iu(x,t) - uo(x) I2 dx = _ 1112 We note that e-k2t -+ 1 as t -+ 0 for each fixed k > 1. For a positive integer K to be determined, we write - up(x) I2dx = ak2 e _ 1)2 + ak2 (e k2t _ 1)2. k=K+1 For any S > 0, there exists a positive integer K = K(s) such that Then there exists a b > 0, depending on s and K, such that K a e-k2t - 1)2 <5 for any t E (0, b), since the series in the left-hand side consists of finitely many terms. Therefore, we obtain Iu(x, t) - uo(x)12 dx <2s for any t e (0, b). This implies the desired L2-convergence as t -+ 0. 3.3. Separation of Variables In fact, (3.3.14) implies the following estimate. For any integer m > 0 and any to > 0, IIUIICm([O,ir] X [to,oo)) Cm,to IIuo IIL2(0,ir), where Cm,to is a positive constant depending only on m and to. This estimate controls the C"2-norm of u in [0, ?f] x [to, oo) in terms of the L2-norm of uo on (0, ii). It is referred to as an interior estimate (with respect to t). We note that u becomes smooth instantly after t = 0 even if the initial value uo is only L2. Naturally, we ask whether u in Theorem 3.3.5 is continuous up to { t = 0}, or, more generally, whether u is smooth up to {t = 0}. First, we assume that u is continuous up to {t = 0}. Then uo E C[0, it]. By comparing the initial value with the homogeneous boundary value at corners, we have = 0, Utj(it) = 0. Next, we assume that u is C2 in x and Cl in t up to {t = 0}. Then uo E C2 [0, it]. By the homogeneous boundary condition and differentiation with respect to t, we have ut(O,t) = 0, ut(ii,t) = 0 fort > 0. Evaluating at t = 0 yields ut(0,0) = 0, Ut(ii,0) = 0. Then by the heat equation, we get u(0, 0) = 0, u(ir, 0) = 0, and hence If u is smooth up to {t = 0}, we can continue this process. Then we have a necessary condition (3.3.15) uo2e)(0) =0, uo2e)(it) =0 for any Q = 0, 1, Now, we prove that this is also a sufficient condition. Theorem 3.3.6. Suppose up E C°O[0, it] and u is given by (3.3.11) and (3.3.12). If (3.3.15) holds, then u is smooth in [0,ir] x [0,oo), and u(,0) _ uo. Proof. Let i and j be nonnegative integers. We need to prove that the series defining t) converges absolutely and uniformly for (x, t) E 3. An Overview of Second-Order PDEs [0, it] x [0, oo). be the series defined in (3.3.13). Then for any x E [0,ir] and t > 0, Sij (x, t) < kZ+2j I ak I . k=1 To prove that the series in the right-hand side is convergent, we need to improve estimates of ak, the coefficients of Fourier series of u0. With (3.3.15) for £ = 0, we have, upon simple integrations by parts, ir ak = uo(x) sin kx dx = o u0(x) 0 cos x k sin /<'r We note that values at the endpoints are not present since u(0) = uo (lr) = 0 in the first integration by parts and sin kx = 0 at x = 0 and x = it in the second integration by parts. Hence for any in > 1, we continue this process with the help of (3.3.15) for £ = 0, , [(m - 1) /2] and obtain ak = (-1) if in is odd, aj _ (-1) 2 1/ v if in is even. In other words, {kmak} is the sequence of coefficients of the Fourier series of ±m) with respect to {/sin kx} or {/cos kx}, where in determines uniquely the choice of positive or negative sign and the choice of the sine or the cosine function. Then, we have 00 Hence, by the Cauchy inequality, we obtain that, for any (x, t) E [0, 7r] x [0, oo) and any in, 00 (x, t)i+2j I ak I 2 k2(i+2j-m) By taking m = i + 2 j + 1, we get Si (x , t) < Ci u(m) II I L2 0 where Ci j is a positive constant depending only on i and j. This implies that the series defining axat u(x, t) converges absolutely and uniformly for (x, t) E [0, ir] x [0, oo). Therefore, axat u is continuous in [0, ir] x [0, oo). D 3.3. Separation of Variables If we are interested only in the continuity of u up tot = 0, we have the following result. Corollary 3.3.7. Suppose up E Cl [0, it] and u is given by (3.3.11) and (3.3.12). If uo(0) = uo(it) = 0, then u is smooth in [0, it] x (0, oo), continuous in [0,it] x [0,oo) and satisfies (3.3.8). Proof. It follows from Theorem 3.3.5 that u is smooth in [0, it] x (0, oo) and satisfies the heat equation and the homogeneous boundary condition in (3.3.8). The continuity of u up tot = 0 follows from the proof of Theorem The regularity assumption on uo in Corollary 3.3.7 does not seem to be optimal. It is natural to ask whether it suffices to assume that uo is in C[0, it] instead of in Cl [0, it]. To answer this question, we need to analyze pointwise convergence of Fourier series. We will not pursue this issue in this book. Now we provide another expression of u in (3.3.11). With explicit expressions of ak in terms of uo in (3.3.12), we can write (3.3.16) u(x,t) = G(x, y; t)uo(y) dy, J0 where G(x, y; t) = - r k=1 e sin kx sin Icy, for any x, y e [0, it] and t> 0. The function G is called the Green's function of the initial/boundary-value problem (3.3.8). For each fixed t > 0, the series for G is convergent absolutely and uniformly for any x, y e [0, it]. In fact, this uniform convergence justifies the interchange of the order of summation and integration in obtaining (3.3.16). The Green's function G satisfies the following properties: (1) Symmetry: G(x, y; t) = G(y, x; t). (2) Smoothness: G(x, y; t) is smooth in x, y e [0, it] and t> 0. (3) Solution of the heat equation: Gt - G= 0. (4) Homogeneous boundary values: G(0, y; t) = G(it, y; t) = 0. These properties follow easily from the explicit expression for G. They imply that u in (3.3.16) is a smooth function in [0, it] x (0, oo) and satisfies the heat equation with homogeneous boundary values. We can prove directly with the help of the explicit expression of G that u in (3.3.16) is 0) = uo under appropriate assumpcontinuous up tot = 0 and satisfies tions on uo. We point out that G can also be expressed in terms of the 3. An Overview of Second-Order PDEs fundamental solution of the heat equation. See Chapter 5 for discussions of the fundamental solution. Next we discuss initial/boundary-value problems for the 1-dimensional wave equation. Let uo and ul be continuous functions on [0, ir]. Consider = 0 in (0, it) x (0, oo), Utt (3.3.17) u(x,0) = uo(x), ut(x,0) = ul(x) for x E (0, ir), u(0, t) = u(ir, t) = 0 for t E (0,oo). We proceed as for the heat equation, first considering the problem (3.3.18) utt - = 0 in (0,ir) x (0,oo), t) = 0 for t E (0, oo), u(0, t) = and asking for solutions of the form u(x,t) = c(t)w(x). An argument similar to that given for the heat equation shows that w must be a solution of the homogeneous eigenvalue problem for -ate on (0, ir). The and the corresponding eigenvalues of this problem are Ak = k2, k = 1, 2, normalized eigenfunctions wk(x) = 1 / -sin kx V 7r form a complete orthonormal set in L2(0, it). For k = 1, 2, uk(x, t) = ck(t)wk(x) be a solution of (3.3.18). Then ca(t) satisfies the ordinary differential equation c(t) + k2Ck(t) = 0. Thus, ck(t) has the form ck (t) = ak cos lit + bk sin lit, where ak and bk are constants. Therefore, for k = 1, 2, , we have uk(x, t) _ /(ak cos kt + bk sin kt) sin kx. Now, let us suppose that (3.3.19) u(x, t) _ / - , (ak cos kt + bk sin kt) sin kx 3.3. Separation of Variables solves (3.3.17). In order to identify the coefficients ak and bk, k = 1, 2, we calculate formally: u(x, 0) _ 'l/ - Laksin kx, but we are given the initial condition u(x, 0) = uo(x) for x e (0, it). Thus we take the constants ak, k = 1, 2, , to be the Fourier coefficients of uo with respect to the basis { \ sin kx} of L2(0, it), i.e., /3.3.20) l a = U- f uOl x sin kx dx for k= Differentiating (3.3.19) term by term, we find ut(x, t) _ (-kak sin kt + kbk cos kt) sin kx, and evaluating at t = 0 gives ut (x, 0) _ From the initial condition ut (x, 0) = u1(x), we see that kbk, for k = 1, 2, , are the Fourier coefficients of u1 with respect to the basis { sin lax} of L (0, ir), i.e., (3.3.21) u (x) 0 sin kx k for k = We now discuss the regularity of u in (3.3.19). Unlike the case of the heat equation, in order to get differentiability of u now, we need to impose similar differentiability assumptions on initial values. Proceding as for the heat equation, we note that if u is a C2-solution, then uo(o) = o, ul (o) = o, uo (o) = o, (3.3.22) uo(lr) = 0, u(ir) = 0, ug(ir) = 0. Theorem 3.3.8. Suppose uo e C3[0, it], ul e C2[0, it] and u is defined by (3.3.19), (3.3.20) and (3.3.21). If uo, ul satisfy (3.3.22), then u is C2 in [0,it] x [0,oo) and is a solution of (3.3.17). Proof. Let i and j be two nonnegative integers with 0 < i + j < 2. For any x e [0, it] and t e (0, oo), we have formally t) _ d - -(ak cos kt + bk sin kt) -. (sin kx). dx z dti k=1 3. An Overview of Second-Order PDEs In order to justify the interchange of the order of differentiation and summation, we need to prove that the series in the right-hand side is convergent absolutely and uniformly for any (x, t) E [0, 7r] x [0, oo). Set 00 TZj(x, t) _ > dt I (akcos kt + bksin kt) (sin kx) Hence, for any (x, t) E [0, 7r] x [0, oo), kz+3(a + b 2 . 7'z (x,t) To prove the convergence of the series in the right-hand side, we need to improve estimates for ak and bk. By (3.3.22) and integration by parts, we have 2 ak = /o uo(x) sin kx dx = Vu bk=4IJ sin lcx k dx = uo (x) 241(x) cos lax 3 sin k x In other words, {k3ak} is the sequence of Fourier coefficients of -uo'(x) with respect to {- cos kx}, and {k3bk} is the sequence of Fourier coefficients of -u(x) with respect to sinkx}. Hence (k6ak + +11 ui Lz(O,,r) By the Cauchy inequality, we obtain that, for any (x, t) E [0, 71] x [0, oo), 0o T(x, t) + b2k)) < ( k=1 . (k=1 ) Therefore, u is CZ in [0, 7r] x [0, oo) and any derivative of u up to order two may be calculated by a simple term-by-term differentiation. Thus u satisfies (3.3.17). By examining the proof, we have 3 Io O II'(Z) IIL2(O,) + i=o where C is a positive constant independent of u. In fact, in order to get a CZ-solution of (3.3.1?), it suffices to assume uo e C2[0, 71], ul e Cl [0,71] and the compatibility condition (3.3.22). We 3.3. Separation of Variables will prove this for a more general initial/boundary-value problem for the wave equation in Section 6.1. See Theorem 6.1.3. Now, we compare the regularity results for solutions of initial/ boundaryvalue problems in Theorems 3.3.5, 3.3.6 and 3.3.8. For the heat equation in Theorem 3.3.5, solutions become smooth immediately after t = 0, even for L2-initial values. This is the interior smoothness (with respect to time). We also proved in Theorem 3.3.6 that solutions are smooth up to {t = 0} if initial values are smooth with a compatibility condition. This property is called the global smoothness. However, solutions of the wave equation exhibit a different property. We proved in Theorem 3.3.8 that solutions have a similar degree of regularity as initial values. In general, solutions of the wave equation do not have better regularity than initial values, and in higher dimensions they are less regular than initial values. We will discuss in Chapter 6 how solutions of the wave equation depend on initial values. To conclude, we point out that the methods employed in this section to solve initial/boundary-value problems for the 1-dimensional heat equation and wave equation can actually be generalized to higher dimensions. We illustrate this by the heat equation. Let St be a bounded smooth domain in ][fin and uo be an L2-function in St. We consider ut-Lu=0 inIIx(0,oo), u(.,0) = uo in 12, u=0 onaIIx(0,oo). To solve (3.3.23) by separation of variables, we need to solve the eigenvalue problem of -O in St with homogeneous boundary values, i.e., 0o + in u p=0 on aS2. This is much harder to solve than its 1-dimensional counterpart (3.3.10). Nevertheless, a similar result still holds. In fact, solutions of (3.3.24) are given by a sequence (Ak, cpk), where Ak is a nondecreasing sequence of positive numbers such that Ak -+ oo as k -+ oo and cps is a sequence of smooth functions in 12 which forms a basis in L2(11). Then we can use a similar method to find a solution of (3.3.23) of the form u(x, t) _ for any (x, t) E S2 x (0, oo). k=1 We should remark that solving (3.3.24) is a complicated process. We need to work in Sobolev spaces, spaces of functions with L2-integrable derivatives. A brief discussion of Sobolev spaces can be found in Subsection 4.4.2. 3. An Overview of Second-Order PDEs 3.4. Exercises Exercise 3.1. Classify the following second-order PDEs: n = 0. uxi x j = Exercise 3.2. (1) Let (r, 8) be polar coordinates in R2, i.e., x=rcos8, y=rsin8. Prove that the Laplace operator 0 can be expressed by 1 Du = urr + -ur + (2) Let (r, 8, q5) be spherical coordinates in R3, i.e., z = r cos 8. y = r sin 8 sin co, x = r sin 8 cos co, Prove that the Laplace operator 0 can be expressed by Du r Dr (r2 ar) + r2 sin 8 8 (sin e ae I + r2 sin2 8 5P2 Exercise 3.3. Discuss the uniqueness of the following problems using energy methods: (1) Jzu_u3=f in S2, lu= cp on 852; in St, on aSl. Exercise 3.4. Let St be a bounded C'-domain in ][8n and u be a CZ-function in St x [0, T] satisfying ut - Du = f in S2 x (0, oo), u(.,0)=u in1, u=0 onDIIx(0,oo). Prove u2 dxdt 0 dx + J J f 2 dxdt), n where C is a positive constant depending only on ft 3.4. Exercises Exercise 3.5. Prove that the Poisson kernel in (3.3.6) is given by (3.3.7). Exercise 3.6. For any uo E L2(0, 7r), let u be given by (3.3.11). For any nonnegative integers i and j, prove sup I-+0 as t - oo. I< t Exercise 3.7. Let G be defined as in (3.3.16). Prove 1 for any x, y E [0,] and t>0. Exercise 3.8. For any uo E L2(0, 7r), solve the following problem by separation of variables: ut - = 0 in (0,7r) x (0, oo), u(x, 0) = uo (x) for any x E (0, 7r), U(0, t) = t) = 0 for any t E (0,oo). Exercise 3.9. For any uo E L2(0, 7r) and f E L2((0, 7r) x (0, oo)), find a formal explicit expression of a solution of the problem ut - uxx = f in (0, 7r) x (0, oo), u(x, 0) = uo(x) for any x E (0, 7r), u(0, t) = u(7r, t) = 0 for any t E (0, oo). Exercise 3.10. For any uo, ul E L2(0, 7r) and f E L2((0, 7r) x (0, oo)), find a formal explicit expression of a solution of the problem utt - uxx = f in (0, 7r) X (0, oo), u(x, 0) = uo(x), ut(x, 0) = ul(x) for any x E (0, 7r), u(0, t) = u(7r, t) = 0 for any t E (0,oo). Exercise 3.11. Let T be a positive constant, St be a bounded C1-domain in 1[8n and u be C2 in x and Cl in t in SZ x [O, T]. Suppose u satisfies ut - Du = 0 in St x (0, T), U(,T)0 in1, U=0 onaclx(0,T). Prove that u = 0 in S2 x (0, T). Hint: The function J(t) =log f u2(x, t) dx is a decreasing convex function. Exercise 3.12. Classify homogeneous harmonic polynomials in ] [83 by following the steps outlined below. Let (r, 8, q5) be spherical coordinates in ][83. (Refer to Exercise 3.2.) Suppose u is a homogeneous harmonic polynomial cp) for some function Qm defined in of degree m in ll83 and set u = 3. An Overview of Second-Order PDEs (1) Prove that Q,,,, satisfies 1a/ m (m + 1)Qm. + I s i n 9 aQ sin 8 8B aB sine 8 2 aQ'" ,=0 (2) Prove that, if Qom,, is of the form f(6)g(), then Qm(6, gyp) _ (Acoskcp + B sin k fm,k(/t) = (1 - µ2) 2 dµm,+k C1 - for µ E [-1,1], (3) Sketch the zero set of Qm on S2 according to k = 0, 1 < k < m -1 and k = m. Chapter 4 Laplace Equations The Laplace operator is probably the most important differential operator and has a wide range of important applications. In Section 4.1, we discuss the fundamental solution of the Laplace equation and its applications. First, we introduce the important notion of Green's functions, which are designed to solve Dirichlet boundary-value problems. Due to the simple geometry of balls, we are able to find Green's functions in balls and derive an explicit expression of solutions of the Dirichlet problem in balls, the so-called Poisson integral formula. Second, we discuss regularity of harmonic functions using the fundamental solution. We derive interior gradient estimates and prove that harmonic functions are analytic. In Section 4.2, we study the mean-value property of harmonic functions. First, we demonstrate that the mean-value property presents an equivalent description of harmonic functions. Due to this equivalence, the mean-value property provides another tool to study harmonic functions. To illustrate this, we derive the maximum principle for harmonic functions from the mean-value property. In Section 4.3, we discuss harmonic functions using the maximum principle. This section is independent of Section 4.1 and Section 4.2. The maximum principle is an important tool in studying harmonic functions, or in general, solutions of second-order elliptic differential equations. In this section, the maximum principle is proved based on the algebraic structure of the Laplace equation. As an application, we derive a priori estimates for solutions of the Dirichlet boundary-value problem. We also derive interior gradient estimates and the differential Harnack inequality. As a final application, we solve the Dirichlet problem for the Laplace equation in a large class of bounded domains by Perron's method. 89 4. Laplace Equations We point out that several results in this chapter are proved by multiple methods. For example, interior gradient estimates are proved by three methods: the fundamental solution, the mean-value property and the maximum principle. In Section 4.4, we discuss the Poisson equation. We first discuss regularity of classical solutions using the fundamental solution. Then we discuss weak solutions and solve the Dirichlet problem in the weak sense. The method is from functional analysis, and the Riesz representation theorem plays an essential role. The presentation in this part is brief. The main purpose is to introduce notions of weak solutions and Sobolev spaces. 4.1. Fundamental Solutions The Laplace operator 0 is defined on C2-functions u in a domain in W by n Du = i=1 The equation Du = 0 is called the Laplace equation and its C2-solutions are called harmonic functions. 4.1.1. Green's Identities. One of the important properties of the Laplace equation is its spherical symmetry. As discussed in Example 3.1.7, the Laplace equation is preserved by rotations about some point in Rn, say the origin. Hence, it is plausible that there exist special solutions that are invariant under rotations. We now seek harmonic functions u in Rn which are radial, i.e., functions depending only on r = lxi. Set v(r) = u(x). Foranyi=1, ,nandxz40,weget xi = v' (r)-, r uxixi = 2+ r -r1 - x3r , Du=v + n-1 v =0. r (logv')'+ n-1 r (log(r'v'))' = 0. 4.1. Fundamental Solutions A simple integration then yields, for n = 2, for any r > 0, v(r) = Cl + C2 log r and for n > 3, for any r > 0, where c2 are constants for i = 1, 2, 3, 4. We note that v (r) has a singularity at r = 0 as long as it is not constant. For reasons to be apparent soon, we are interested in solutions with a singularity such that c4r2-n v (r) = C3 + aBr 01 dS=1 foranyr>0. In the following, we set Cl = C3 = 0 and choose C2 and C4 accordingly. In fact, we have and 1 C4 = (2 - n)wn' where wn is the surface area of the unit sphere in R. Definition 4.1.1. Let I' be defined for x e Rn \ {0} by log x for n = 2, I'(x) _ and for n> 3. (2 - The function I' is called the fundamental solution of the Laplace operator. We note that I' is harmonic in Ilgn \ {0}, i.e., Or=O lriRn\l0I, and v dS=1 foranyr>0. f Moreover, r has a singularity at the origin. By a simple calculation, we Br have, for any i, j = 1, ,n and any x I'( x) = 0, 1 wn x n rxi x3 = jxn S nxjx3 IXIn+2 We note that r and its first derivatives are integrable in any neighborhood of the origin, even though r has a singularity there. However, the second derivatives of r are not integrable near the 4. Laplace Equations To proceed, we review several integral formulas. Let S2 be a Cl-domain in IIand v = (v1,.. ,v) be the unit exterior normal to 852. Then for any u, v e Cl (S2) fl C(S2) and i = 1, ,n, [uv dx = J uvv2 dS - This is the integration by parts in higher-dimensional Euclidean space. Now for u to for any w E C2(S2) fl Cl(S2) and v e Cl(SZ) fl C(SZ), substitute get (vwxixi + vxi wxi) dx = By summing up for i = 1, vwxi U2 dS. , n, we get Green's formula, L (vzw + Vv Ow) dx = J v dS. For any v, w e C2(S2) fl C' (1), we interchange v and w and subtract to get a second version of Green's formula, Dv - wOv) dx = Jasp (Dw v wav) dS. \ av - Taking v - 1 in either version of Green's formula, we get Ow dx = aw dS. 8v We note that all these integral formulas hold if SZ is only a piecewise C1domain. Now we prove Green's identity, which plays an important role in discussions of harmonic functions. Theorem 4.1.2. Suppose S2 is a bounded Cl-domain in II8n and that u e Cl(S2) fl C2(SZ). Then for any x e SZ, u(x) = I'(x - y)Dyu(y) dy - Jas IF(x-y)--(y)-u(y)--(x-y)JdSy. y Proof. We fix an x E S2 and write I' = I'(x - ) for brevity. For any r > 0 such that Br(x) C S2, the function I' is smooth in S2 \ Br(x). By applying Green's formula to u and I' in St \ Br(x), we get (Fzu - uDI') dy = f + f .Iag,.cX> r au - u ar) dsy, aU 4.1. Fundamental Solutions where v is the unit exterior normal to 8(SZ\B,.(x)). Now DI' = 0 in St\Br(x), so letting r -+ 0, we have av v Js I'Du dyasp= f 1F--u1dS+lim y/ asT (x) av IF--uldS. v y For n > 3, by the definition of I', we get Br (x) r au dSy aU r2-n I (2 - n)wn J aU dsy max Iv'uH+o as r-+0, u aBr (x) ar avy dSy = n_ 1 u dSy as r-+0, aBr (x) where v is normal to DBr (x) and points to x. This implies the desired result 0 for n > 3. We proceed similarly for n = 2. Remark 4.1.3. We note that Jest avy for any x E S2. This can be obtained by taking u - 1 in Theorem 4.1.2. If u has a compact support in SZ, then Theorem 4.1.2 implies F(x - y)zu(y) dy. u(x) _ By computing formally, we have u(x) = F(x - y)u(y) dy. In the sense of distributions, we write LyF(xy)Ox. Here S is the Dirac measure at x, which assigns unit mass to x. The term "fundamental solution" is reflected in this identity. We will not give a formal definition of distribution in this 4.1.2. Green's Functions. Now we discuss the Dirichlet boundary-value problem using Theorem 4.1.2. Let f be a continuous function in SZ and cp a continuous function on aSZ. Consider (4.1.11 Du = f in S2, u= cp on BSt. Lemma 3.2.1 asserts the uniqueness of a solution in C2(S2) n C1 (fl). An alternative method to obtain the uniqueness is by the maximum principle, 4. Laplace Equations which will be discussed later in this chapter. Let u E CZ(SZ) fl Cl(S2) be a solution of (4.1.1). By Theorem 4.1.2, u can be expressed in terms of f and cp, with one unknown term aU on BSZ. We intend to eliminate this term by adjusting F. We emphasize that we cannot prescribe av on 811 together with u on BSt. For each fixed x E SZ, we consider a function (x,.) E CZ(SZ) fl Cl(S2) with DyI(x, y) = 0 in St. Green's formula implies 0= f (x, y)Du(y) dy Set ((x y) au (y) - y)J dSy. (x,y) = F(x - y) - (x,y). By a substraction from Green's identity in Theorem 4.1.2, we obtain, for any x E S2, (x,y)u(y) dy -Jest - u(y) avy (x y)J dSy. appropriately so that ry(x, ) = 0 on BSt. Then, av on 811 is eliminated from the boundary integral. The process described above leads to the important concept of Green's functions. We will choose To summarize, for each fixed x E SZ, we consider (x,.) E Cl(S2)f1C2(St) such that y) = 0 for any y E SZ, (x,y) = I'(x - y) for any y E BSZ. The existence of 1 in general domains is not the main issue in our discussion here. We will prove later that 'I(x, ) is smooth in St for each fixed x if it exists. (See Theorem 4.1.10.) Definition 4.1.4. The Green's function G for the domain St is defined by C(x,y) = F(x y) - for any x, y Eli with x # y. In other words, for each fixed x E SZ, G(x, ) differs from I'(x - ) by a harmonic function in SZ and vanishes on BSt. If such a G exists, then the solution u of the Dirichlet problem (4.1.1) can be expressed by (4.1.3) u(x) = G(x, y) f (y) dy + J ci y (x, y) dSy. We note that the Green's function G(x, y) is defined as a function of y E SZ \ {x} for each fixed x E SZ. Now we discuss properties of G as a function of x and y. As was mentioned, we will not discuss the existence of the Green's function in general domains. However, we should point out 4.1. Fundamental Solutions that the Green's function is unique if it exists. This follows from Lemma 3.2.1 or Corollary 4.2.9, since the difference of any two Green's functions is harmonic, with vanishing boundary values. Lemma 4.1.5. Let G be the Green's function in 12. Then G(x, y) = G(y, x) for any x, y E 1 2 with x Proof. For any x1, X2 E 1 2 with xl x2i taker > 0 small enough that BT(xl) C St, BT(x2) C St and Br(xl) f1 Br(x2) = Ql. Set G(y) = G(xi, y) and F(y) = I'(x2 - y) for i = 1, 2. By Green's formula in 1 2 \ (Br(Xi) U Br(x2)), we get (GizG2_G2zGi)dY=f (Gl a 2 - G2 aG1 I dSy sp / aG2 av fdS+ LBr(X2) where v is the unit exterior normal to B(St\ (Br(Xi) UBr(x2))). Since Gi(y) is harmonic for y # xi, i = 1, 2, and vanishes on BSt, we have cds+ dso. DG 2e Gl av G2 8v - G2 v J sT(Xi) C Gl 8v - G2 8v aBr(X2) Now we replace Gl in the first integral by I'1 and replace G2 in the second integral by I'2. Since Gl - I'1 is C2 in St and G2 is C2 in 1 2 \ B,. (x2 ), we have G1 - rl)av - G2 a(G1-r1)1 dSy av as r I dSy as r Similarly, asr(X2) (Gla(G 8v r2) - (G2 - I'a) Therefore, we obtain r1 B,.(xi) aG2 Dv - G2 art Dv (cl av2 - r2 as 1) dsy dSy + as r -+ 0. On the other hand, by explicit expressions of r1 and F2, we have aGl aG2 r2 dSy o, r1 dSy o, Dv av aB,.(xi) LBr(X2) and /' G28I'1 Dv G2(xi), -f p as r -+ 0. These limits can be proved similarly as in the proof of Theorem 4.1.2. We point out that v points to xi on aB,.(xi), for i = 1, 2. We then obtain G2(xl) - Gl(x2) = 0 and hence G(x2, xl) = G(xl, x2). 4. Laplace Equations Finding a Green's function involves solving a Dirichlet problem for the Laplace equation. Meanwhile, Green's functions are introduced to yield an explicit expression of solutions of the Dirichlet problem. It turns out that we can construct Green's functions for some special domains. 4.1.3. Poisson Integral Formula. In the next result, we give an explicit expression of Green's functions in balls. We exploit the geometry of balls in an essential way. Theorem 4.1.6. Let G be the Green's function in the ball BR C W. (1) In case n > 3, (lyI2-n _ R2-n1l s (2 - n)Wn for any y e BR \ {0}, and n-2 Iy G(x, y) - (2 - n)wn (IY IRI2 x12-n - xl2-n - \ / for anyxEBR\{O} andyEBR\{x}. (2) In case n = 2, G(O, y) - flog ICI -log R) for any y e BR \ {0}, and (lxi ly (log y - x- log for any x E BR \ {0} and y E BR \ {x}. Proof. By Definition 4.1.4, we need to find first. For x = 0, r(o - y) = 1 (2 - n)wn in (4.1.2). We consider n > 3 y2-Th. Hence we take cT(O,y)= (2 - n)w for any y e BR. Next, we fix an x e BR \ {0} and let X = R2x/1x12. Obviously, we have X BR and hence I'(y - X) is harmonic for y e BR. For any y e BBR, by xl we have DOxy xl' R DOyX. Then for any y E aBR, xl Iy - x 4.1. Fundamental Solutions and hence, iy-xi= lRi This implies R n-2 r(y-X), r(y-x)- IxIJ for any x E BR \ {0} and y E BBR. Then we take R n-2 for any x E BR \ {0} and y E BR \ {x}. The proof for n = 2 is similar and O is omitted. Figure 4.1.1. The reflection about the sphere. Next, we calculate normal derivatives of the Green's function on spheres. Corollary 4.1.7. Let G be the Green's function in BR. Then 8G/ 8vy R2- 1x12 - WRI x - yln for any x E BR and y E BBR. Proof. We first consider n > 3. With X = R2x/1x12 as in the proof of Theorem 4.1.6, we have R n2 - xI2n for any x E BR \ {0} and y e BR \ {x}. Hence we get, for such x and y, 1 yZ _ xi R n-2 yti _ XZ 4. Laplace Equations By (4.1.4) in the proof of Theorem 4.1.6, we have, for any x E BR \ {0} and y E aBR, yz Ra _ Ixl a This formula also holds when x = 0. With vz = y2/R for any y E BBR, we obtain DC 2_I I2 This yields the desired result for n > 3. The proof for n = 2 is similar and is omitted. Denote by K(x, y) the function in Corollary 4.1.7, i.e., Rz K(x,y) = WnRI x - yIn for any x E BR and y e BBR. It is called the Poisson kernel. Lemma 4.1.8. Let K be the Poisson kernel defined by (4.1.5). Then (1) K(x, y) is smooth for any x E BR and y E aBR; (2) K(x, y) >0 for any x E BR and y E BBR; (3) for any fixed xo E 8BR and 6> 0, K (x, y) = 0 uniformly in y E 8BR \ Bb(xo); lim x-+xo,IxI y) = 0 for any x E BR and y E aBR; (5) faBR K(x, y) dSy = 1 for any x E BR. (4) Proof. First, (1), (2) and (3) follow easily from the explicit expression for K as in (4.1.5), and (4) follows easily from the definition K(x, y) = a G(x, y) and the fact that G(x, y) is harmonic in x. Of course, we can also verify (4) by a straightforward calculation. An easy derivation of (5) is based on (4.1.3). By taking a C2(BR) harmonic function u in (4.1.3), we conclude f that u(x) = K(x, y)u(y) dSy for any x E BR. Then we have (5) by taking u - 1. Now we are ready to solve the Laplace equation in balls, with prescribed Dirichlet boundary values. Theorem 4.1.9. Let cp be a continuous function on 8BR and u be defined by u(x) = f K(x, y)cp(y) dSy BR for any x E BR, 4.1. Fundamental Solutions where K is the Poisson kernel given by (4.1.5). Then u is smooth in BR and Du = 0 in BR. Moreover, for any xo E BBR, lim u(x) = cp(xo). Proof. By Lemma 4.1.8(1) and (4), we conclude easily that u defined by (4.1.6) is smooth and harmonic in BR. We need only prove the convergence of u up to the boundary BBR. We fix xo E 8BR and take an x E BR. By Lemma 4.1.8(5), we have (xO)= f Then u(x) (xO) _ (xO)) dSy = Ii + I2, aBRnB5 (xo) aBR\Ba (xo) for a positive constant S to be determined. For any 6> 0, we can choose S = 6(6) > 0 small so that I'() - (xo)I <6 for any y E 8BR fl Ba(xo), because cp is continuous at xo. Then by Lemma 4.1.8(2) and (5), (xO) I dSy <6. By Lemma 4.1.8(3), we can find a S' > 0 such that K(x' y) C e 2MwRn-1' for any x E BR f1 Bay (xo) and any y E 8BR \ Ba (xo). We note that S' depends on 6 and S = S(e), and hence only on 6. Then 1I21 Idsy < 6. BR\Ba (moo) I'u(x) - (xO)I <26, for any x E BRf1Ba' (xO). This implies the convergence of u at xo E BBR. 4. Laplace Equations We note that the function u in (4.1.6) is defined only in BR. We can extend u to aBR by defining u = cp on aBR. Then u e C°O(BR) fl C(BR). Therefore, u is a solution of Du = 0 in BR, u = co on aBR. The formula (4.1.6) is called the Poisson integral formula, or simply the Poisson formula. For n = 2, with x = (r cos B, r sin B) and y = (R cos ij, R sin ij) in (4.1.6), we have u(r cos B, r sin B) 12ir f K(r, B, ri)cp(Rcosri, Rsinrj) dry, K(r, B, ri) = - r2 RZ - 2Rr cos(B - ri) + r2 Compare with (3.3.6) and (3.3.7) in Section 3.3. Now we discuss properties of the function defined in (4.1.6). First, u(x) in (4.1.6) is smooth for lxi < R, even if the boundary value cp is simply continuous on aBR. In fact, any harmonic function is smooth. We will prove this result later in this section. Next, by letting x = 0 in (4.1.6), we have ,fnl = u(y) dSy. We note that wnRn-1 is the surface area of the sphere BBR. Hence, values of harmonic functions at the center of spheres are equal to their average over spheres. This is the mean-value property. Moreover, by Lemma 4.1.8(2) and (5), u in (4.1.6) satisfies min co < u < max co in BR. aBR In other words, harmonic functions in balls are bounded from above by their maximum on the boundary and bounded from below by their minimum on the boundary. Such a result is referred to as the maximum principle. Again, this is a general fact, and we will prove it for any harmonic function in any bounded domain. The mean-value property and the maximum principle are the main topics in Section 4.2 and Section 4.3, respectively. 4.1. Fundamental Solutions 4.1.4. Regularity of Harmonic Functions. In the following, we discuss regularity of harmonic functions using the fundamental solution of the Laplace equation. First, as an application of Theorem 4.1.2, we prove that harmonic functions are smooth. Theorem 4.1.10. Let St be a domain in I[8n and u E C2(St) be a harmonic function in 12. Then u is smooth in f. Proof. We take an arbitrary bounded C1-domain f' in 11 such that 11' C 11. Obviously, u is C1 in 11' and Du = 0 in 11'. By Theorem 4.1.2, we have (F(x_Y)(Y)_u(y) a y (x - for any x E 11'. There is no singularity in the integrand, since x E 11' and D y E 911'. This implies easily that u is smooth in 11'. We note that, in its definition, a harmonic function is required only to be C2. Theorem 4.1.10 asserts that the simple algebraic relation Du = 0 among some of second derivatives of u implies that all partial derivatives of u exist. We will prove a more general result later in Theorem 4.4.2 that u is smooth if Du is smooth. Harmonic functions are not only smooth but also analytic. We will prove the analyticity by estimating the radius of convergence for Taylor series of harmonic functions. As the first step, we estimate derivatives of harmonic functions. For convenience, we consider harmonic functions in balls. The following result is referred to as an interior gradient estimate. It asserts that first derivatives of a harmonic function at any point are controlled by its maximum absolute value in a ball centered at this point. Theorem 4.1.11. Suppose u E C(BR(xp)) is harmonic in BR(xp) C ][8n. Then IVu(xo)I R B( o) where C is a positive constant depending only on n. Proof. Without loss of generality, we may assume xo = 0. We first consider R = 1 and employ a local version of Green's identity. Take a cutoff function co E Co (Bg14) such that cp = 1 in B1,2 and 0 < cp < 1. For any x E B1,4i we write I' = I'(x - ) temporarily. For any r > 0 small 4. Laplace Equations enough, applying Green's formula to u and cpI' in Bl \ B,.(x), we get u0(cpI')) dy = Jasl + (cor- ua(a ) aBT(X) av - u a(a r where v is the unit exterior normal to a(Bl \ Br(x)). The boundary integral over aBl is zero since cp = a = 0 on 8B1. In the boundary integral over we may replace cp by 1 since Br(x) C B1,2 if r G 1/ 4. As shown in the proof of Theorem 4.1.2, we have arl (Du u(x)=hmi iP--u---dS, r-+0 aBT av av J where v is normal to BB,. (x) and points toward x. For the domain integral, the first term is zero since Du = 0 in Bl. For the second term, we have o(ar) = yr + nor. We note that DP = 0 in Bl \ Br(x) and that the derivatives of cp are zero for Ii < 1/2 and 3/4 < < 1 since cp is constant there. Then we obtain u(x) _ -f y) + - y)) dye for any x E B114. We note that there is no singularity in the integrand for lxi <1/4 and 1/2 < y <3/4. (This also gives an alternative proof of the smoothness of u in B114.) Therefore, vu(x) _ - VV,,P(x - y)) dy, for any x E B114. Hence, we obtain lfor any x E B1 where C is a positive constant depending only on n. We obtain the desired result by taking x = 0. The general case follows from a simple dilation. Define u(x) = u(Rx) for any x E Bl. Then u is a harmonic function in Bl. By applying the result we just proved to u, we obtain l< csup B1 Since Du(0) = RDu(0), we have the desired result. 4.1. Fundamental Solutions We note that the proof above consists of two steps. We first prove the desired estimate for R = 1 and then extend such an estimate to arbitrary R by a simple scaling. Such a scaling argument is based on the following fact: If u is a harmonic function in BR, then u(x) = u(Rx) is a harmonic function in Bl. We point out that this scaling argument is commonly used in studying elliptic and parabolic differential equations. Next, we estimate derivatives of harmonic functions of arbitrary order. Theorem 4.1.12. Suppose u E C(BR(xa)) is harmonic in BR(XO) C 118'x. Then for any multi-index a with al = m, cmem-lml IDau(xo)I max lul, where C is a positive constant depending only on n. Proof. The proof is by induction on m > 1. The case of m = 1 holds by Theorem 4.1.11. We assume it holds for m and consider m + 1. Let v be an arbitrary derivative of u of order m. Obviously, it is harmonic in BR(xo). For any 8 E (0, 1), by applying Theorem 4.1.11 to v in B(1-B)R(xo), we get lVv(xo)l B(lma,x(xo) IvI. I1 8)R For any x E B(1-e)R(xo), we have BBR(x) C BR(xo). By the induction assumption, we obtain max ui, BOR(X) for any x E B(l_o)R(xo), and hence Cmem-im! max Therefore, IvI < max uI. BR(xo) (1 - 9)9R-' max uI. Rxo By taking 8 = ,nt+l , we have (l -e)e This implies (1+ mJ (m+1)<e(m+1). cm+le m(m + 1)! max lul. R BR(XO) Hence the desired result is established for any derivatives of u of order 4. Laplace Equations As a consequence of the interior estimate on derivatives, we prove the following compactness result. Corollary 4.1.13. Let SZ be a bounded domain in IlBn, M be a positive constant and {uk} be a sequence of harmonic functions in SZ such that sup ukl < M for any k. s Then there exist a harmonic function u in SZ and a subsequence {uk' } such that uk' -+ u uniformly in 1' as k' - oo, for any SZ' with 1' C fZ. Proof. Take any SZ' with 1' C SZ and set d = dist (SZ', aSZ) . For any x E SZ', we have Bd(x) C fZ. By applying Theorem 4.1.12 to uk in Bd(x), we get, for any integer m > 1, < Cm -m < Vmuk (x) I sup uk I Bd (X) - Cmd where Cm is a positive constant depending only on n and m. Hence max I Vmuk I < Cmd-mM. S For any 2 = 0, 1, ,the mean-value theorem implies VQuk(x) - DQUk(y)I C2+11Z-E-1MIx _ yl, for any k= 1, 2, , and any x, y E SZ'. Next, we take a sequence of domains {ll} with SZ C Std+i C and d3 = dist(St3, aSZ) < 1/j. Then Q-iMlx _ yl, IDeuk(x) VQUk(y)I C SZ for any 2 = 0, 1, ,and any x, y E 52,. By Arzela's theo, any k = 1, 2, rem and diagonalization, we can find a function u in SZ and a subsequence {uk'} such that uk' - u in the Ct-norm in SZ as k' -+ oo, for any j = 1, 2, and any £ = 0,1, Du = 0 in each l from Duk' = 0. By taking £ = 2, we then get As shown in the proof, uk' converges to u in Ct(SZ') for any SZ' with 1' dl and any Now we are ready to prove that harmonic functions are analytic. Real analytic functions will be studied in Section 7.2. Now we simply introduce the notion. Let u be a (real-valued) function defined in a neighborhood of 4.2. Mean-Value Properties x0 E R. Then u is analytic near x0 if its Taylor series about x0 is convergent to u in a neighborhood of x0, i.e., for some r > 0, 1 u(x) = a «u(xo) (x - xo) « for any x E B,.(xo). Theorem 4.1.14. Harmonic functions are analytic. Proof. Let S2 be a domain in I[8' and u be a harmonic function in St. For any fixed xo E S2, we prove that u is equal to its Taylor series about xp in a neighborhood of xo. To do this, we take BZR(xo) C S2 and h e Il87 with R. For any integer m > 1, we have, by the Taylor expansion, m-1 1 u(xo + h) = u(xo) + + ... + jtnaxn)i u] (xO) + R,,,(h), [(ha, + ... + h3)rn U] (xO + Oh), Rm(h) = for some B E (0, 1). Note that xo + h e BR(xo) for Ihi R < m' nm ' CCneih11 Rmax B C o) lul max lul. 2R(X0) Then for any h with Cneihi + ... + h3j2 u] (xO), u(x0 + h) = z-O for any h with Ihi < (2Cne)-1R. 4.2. Mean-Value Properties It is a simple consequence of the Poisson integral formula that the mean value of a harmonic function over a sphere is equal to its value at the center. Indeed, this mean-value property is equivalent to harmonicity. In this section, we briefly discuss harmonic functions using the mean-value property. The fundamental solution and the Poisson integral formula are not used to prove the equivalence of harmonicity and the mean-value property. We point out that the mean-value property is special and cannot be generalized to solutions of general elliptic differential equations. Many results in this 4. Laplace Equations section were either proved in the previous section or will be proved in the next section. We first define the mean-value property. There are two versions of the mean-value property, mean values over spheres and mean values over balls. Definition 4.2.1. Let St be a domain in II8" and u be a continuous function in ft Then (i) u satisfies the mean-value property over spheres if for any Br(x) C St, (ii) u satisfies the mean-value property over balls if for any BT(x) C St, n u(x) = u(y) dy, n wnr where wn is the surface area of the unit sphere in W. We note that wnrr_l is the surface area of the sphere OBr(x) and that wnrn/n is the volume of the ball Br (x) . These two versions of the mean-value property are equivalent. In fact, if we write (i) - 1 f s r(x) u(y) dsy, Wn we can integrate with respect to r to get (ii). If we write (ii) as u(x)rTh = Wn u(y) dy, r(a) we can differentiate with respect to r to get (i). By a change of variables, we also write mean-value properties in the following equivalent forms: for any Br(X) C St, u(x) = 1Wn 8B1 u(x - ry) dSy J A function satisfying mean-value properties is required only to be continuous to start with. However, a harmonic function is required to be C2. We now prove that these two requirements are actually equivalent. Theorem 4.2.2. Let 1 2 be a domain in II8" and u be a function in 12. (i) If u E C2(1) is harmonic in 12, then u satisfies the mean-value property in S2. 4.2. Mean-Value Properties (ii) If U E C(St) satisfies the mean-value property in St, then u is smooth and harmonic in St. Proof. Take any ball Br(X) C St. Then for any u E C2(1t) and any p E (0, r), we have adS - pn-1 Dudy = a(x -pw)dSw u(x + = per'-i a (i) Let u E C2(1t) be harmonic in St. Then for any p E (0, r), a -s--- asl Integrating from 0 to r, we obtain f u(x -I- rw)dS. _ sl and hence u(x dSw = 11/ u(x+rw)dS. u(x)=Wn 8B1 This yields the desired mean-value property. (ii) Let u E C(12) satisfy the mean-value property. For the smoothness, we prove that u is equal to the convolution of itself with some smooth function. To this end, we choose a smooth function b in [0, 1] such that b is constant in [0, E] and b = 0 in [1 - e,1] for some E E (0,1/2), and fi wJ rte'-1b(r) dr = 1. 0 The existence of such a function can be verified easily. Define cp(x) = b(IxI). Then cp E C(B1) and cp d x = 1. B1 Next, we define cob (x) = cp () for any E> 0. Then supp cpe C B6. We claim that u(x) = x) dye for any x E St with dist(x, a12) > E. Then it follows easily that u is smooth. Moreover, by (4.2.1) and the mean-value property, we have, for any Br(X) C S2, Du dy = r"-i ar This implies Du = 0 in St. u(x -I- rw) dSw = 7'n-1 a (wu(x)) = 0. 4. Laplace Equations Now we prove the claim. For any x e SZ and e < dist(x, 81Z), we have, by a change of variables and the mean-value property, - x) dy = =1 = (x + ez)cp(z) dz l u(x + erw)(rw)r1 dSdr B1 E fB6 u(x +y) P (e ) dy JB u(x + u(x + erw) dSw dr 8B1 = u(x)wJ e(r)r"-1 dr = u(x). 0 This proves the claim. By combining both parts of Theorem 4.2.2, we have the following result. Corollary 4.2.3. Harmonic functions are smooth and satisfy the meanvalue property. Next, we prove an interior gradient estimate using the mean-value property. Theorem 4.2.4. Suppose u E C(BR(xp)) is harmonic in BR(xp) C R. Then IVu(xo)I C n max lul. R BR(So) We note that Theorem 4.2.4 gives an explicit expression of the constant C in Theorem 4.1.11. Proof. Without loss of generality, we assume u E C1(BR (xo)) . Otherwise, we consider u in Br (xo) for any r and then let r -+ R. Since u is smooth, L (u) = 0. In other words, satisfies the mean-value property. is also harmonic in BR (xo) . Hence Upon a simple integration by parts, we obtain n n 2( xo = ) wn R 2n f u(y) dy R (x0) wn R u(y)vz dS'y BR (xo ) and hence (xo)I < n WnR n max wn RnaBR (xo) n -max Iui. R BR (x0 ) 4.2. Mean-Value Properties This yields the desired result. When harmonic functions are nonnegative, we can improve Theorem 4.2.4. Theorem 4.2.5. Suppose u E C(BR(xp)) is a nonnegative harmonic function in BR(xo) C R. These IV'u(xo)I This result is referred to as the differential Harnack inequality. It has many important consequences. Proof. As in the proof of Theorem 4.2.4, from integration by parts and the nonnegativeness of u, we have IC n f wnR" aBR(xo) R where in the last equality we used the mean-value property. As an application, we prove the Liouville theorem. Corollary 4.2.6. Any harmonic function in ][8n bounded from above or below is constant. Proof. Suppose u is a harmonic function in Rn with u > c for some constant c. Then v = u - c is a nonnegative harmonic function in R. Let x E Rn be an arbitrary point. By applying Theorem 4.2.5 to v in BR(x) for any R> 0, we have IC Rv(x) By letting R -+ oo, we conclude that Vv(x) = 0. This holds for any x E ][8Th. Hence v is constant and so is u. As another application, we prove the Harnack inequality, which asserts that nonnegative harmonic functions have comparable values in compact subsets. Corollary 4.2.7. Let u be a nonnegative harmonic function in BR(xp) C ][8Th. Then u(x) < Cu(y) for any x, y E B 2 (xp), where C is a positive constant depending only on n. Proof. Without loss of generality, we assume that u is positive in BR (xo) . Otherwise, we consider u -I- s for any constants > 0, derive the desired 4. Laplace Equations estimate for u + e and then let e -f 0. For any x e BR/2(xo), we have BR/2(x) C BR(xo). By applying Theorem 4.2.5 to u in BR/2(x), we get IVu(x)I c R u(x), IVlogu(x)I < For any x, y e BR/2(xp), a simple integration yields log u(y) logu(tx + (1 - t)y) dt J (x-y). O logu(tx + (1 - t)y) dt. 0 Since tx + (1 - t)y e BR/2(xo) for any t e [0,1] and Ix - y < R, we obtain 1 Therefore u(x) < This is the desired result. In fact, Corollary 4.2.7 can be proved directly by the mean-value property. Another proof of Corollary 4.2.7. First, we take any B4,.(x) C BR(xo) and claim that u(x) for any x, x E Br(). To see this, we note that Br(x) C B3r(x) C B4r(x) for any x, x E Br(). Then the mean-value property implies n udy < udy = 3nu x . ux= n W nr Br (x) W nr B3r (x) Next we take r = R/8 and choose finitely many x1, , xpr e BR/2 (xo ) such that {Br(j)}i covers BR/2 (xo) . We note that B4r (xi) C BR (xo) , for any i = 1, , N, and that N is a constant depending only on n. E BR/2, for some For any x, y e BR/2 (xo) , we can find x1, ,x k < N, such that any two consecutive points in the ordered collection of x, x1, xk, y belong to a ball in {Br(j)}i. Then we obtain u(x) < 3nu(x1) < 32nu(x2) < ... < 3nku(xk) < 3n(k+1)u(y) Then we have the desired result by taking C = 3n(N+1) . 4.2. Mean-Value Properties As the final application of the mean-value property, we prove the strong maximum principle for harmonic functions. Theorem 4.2.8. Let 1 be a bounded domain in Rn and u E C(SZ) be harmonic in 1. Then u attains its maximum and minimum only on 9f unless u is constant. In particular, inf u < u < sup u asp in IL Proof. We only discuss the maximum of u. Set M = maxi u and D={xE1 : u(x)=M}. C D, It is obvious that D is relatively closed; namely, for any sequence if the continuity of u. x xE Next we show that D is open. For any x0 E D, we take r > 0 such that Br (x0) C fZ. By the mean-value property, we have M=u(xO)=wnr nn udy< Br(x0) n Wnr n This implies u = M in Br (x0) and hence Br (x0) C D. In conclusion, D is both relatively closed and open in fZ. Therefore either D = 0 or D = fZ. In other words, u either attains its maximum only on aSZ or u is constant. A consequence of the maximum principle is the uniqueness of solutions of the Dirichlet problem in a bounded domain. Corollary 4.2.9. Let 1 2 be a bonded domain in I[8n. Then for any f E C(SZ) and cp E C(81Z), there exists at most one solution u E C2(S2) fl C(SZ) of the problem Du = f u = cp in S2, on aSt. Proof. Let w be the difference of any two solutions. Then Ow = 0 in 1 and w = 0 on E11. Theorem 4.2.8 implies w = 0 in 1. Compare Corollary 4.2.9 with Lemma 3.2.1, where the uniqueness was proved by energy estimates for solutions u E C2(1) fl Cl (11). The maximum principle is an important tool in studying harmonic func- tions. We will study it in detail in Section 4.3, where we will prove the maximum principle using the algebraic structure of the Laplace equation and discuss its applications. 4. Laplace Equations 4.3. The Maximum Principle One of the important methods in studying harmonic functions is the maximum principle. In this section, we discuss the maximum principle for a class of elliptic differential equations slightly more general than the Laplace equation. As applications of the maximum principle, we derive a priori estimates for solutions of the Dirichlet problem, and interior gradient estimates and the Harnack inequality for harmonic functions. 4.3.1. The Weak Maximum Principle. In the following, we assume SZ is a bounded domain in R. We first prove the maximum principle for subharmonic functions without using the mean-value property. Definition 4.3.1. Let u be a C2-function in St. Then u is a subharmonic (or superharmonic) function in 1 2 if Du > (or <) 0 in St. Theorem 4.3.2. Let St be a bounded domain in ][8and u e C2(1t) fl C(St) be subharmonic in ft Then u attains on 81 2 its maximum in St, i. e., max u = max u. Proof. If u has a local maximum at a point xo in St, then the Hessian matrix (V2u(xo)) is negative semi-definite. Thus, Lu(xo) = tr(V2u(xo)) <0. Hence, in the special case that Du > 0 in 12, the maximum value of u in 12 is attained only on BSt. We now consider the general case and assume that 1 2 is contained in the ball BR for some R> 0. For any e > 0, consider u(x) = u(x) - E(R2 - 1x12). Then Dub = Du + 2ns > 2ns > 0 in 1. By the special case we just discussed, u6 attains its maximum only on aSZ and hence max u6 = max u6 . Then max u < max u6 + sR2 = max u6 + sR2 < max u + sR2. We have the desired result by letting s -+ 0 and using the fact that 31 C SZ. 4.3. The Maximum Principle A continuous function in SZ always attains its maximum in 11. Theorem 4.3.2 asserts that any subharmonic function continuous up to the boundary attains its maximum on the boundary aSZ, but possibly also in ft Theorem 4.3.2 is referred to as the weak maximum principle. A stronger version asserts that subharmonic functions attain their maximum only on the boundary. We will prove the strong maximum principle later. Next, we discuss a class of elliptic equations slightly more general than the Laplace equation. Let c and f be continuous functions in 1. We consider Du +cu = f in S2. Here, we require u e C2(St). The function c is referred to as the coefficient of the zeroth-order term. It is obvious that u is harmonic if c = f = 0. A C2-function u is called a subsolution (or supersolution) if Du + cu > f (or Du + cu < f). If c = 0 and f = 0, subsolutions (or supersolutions) are subharmonic (or superharmonic). Now we prove the weak maximum principle for subsolutions. Recall that u+ is the nonnegative part of u defined by u+ = max{0, u}. Theorem 4.3.3. Let S2 be a bounded domain in IESn and c be a continuous function in SZ with c < 0. Suppose u e C2(S2) fl C(St) satisfies Du +cu > 0 in 1. Then u attains on aSZ its nonnegative maximum in SZ, i. e., max u < max u+. Proof. We can proceed as in the proof of Theorem 4.3.2 with simple modifications. In the following, we provide an alternative proof based on Theorem 4.3.2. Set SZ+ _ {x E S2; u(x) > 0}. If S2+ = fD, then u < 0 in St, so u+ - 0. If S2+ 0, then Du = Du + cu - cu > -cu > 0 in SZ+ . Theorem 4.3.2 implies maxis=maxis=maxis+. asp This yields the desired result. If c - 0 in 1, Theorem 4.3.3 reduces to Theorem 4.3.2 and we can draw conclusions about the maximum of u rather than its nonnegative maximum. A similar remark holds for the strong maximum principle to be proved later. We point out that Theorem 4.3.3 holds for general elliptic differential equations. Let a23, b2 and c be continuous functions in S1 with c < 0. We 4. Laplace Equations a(x)j > for any x E St and any for some positive constant A. In other words, we have a uniform positive lower bound for the eigenvalues of (a) in SZ. For u e C2(St) fl C(St) and f e C(St), consider the uniformly elliptic equation biu+ cu = f in f Lu i,j=1 Many results in this section hold for uniformly elliptic equations. As a simple consequence of Theorem 4.3.3, we have the following result. Corollary 4.3.4. Let St be a bounded domain in I[8n and c be a continuous function in St with c < 0. Suppose u E C2(St) fl C(St) satisfies Du + cu > 0 u<0 on BSZ. Then u < 0 in SZ. More generally, we have the following comparison principle. Corollary 4.3.5. Let St be a bonded domain in 1[8' and c be a continuous function in St with c < 0. Suppose u, v E C2(SZ) f1 C(SZ) satisfy Du + cu > Ov + cv inn, u < v on BSZ. Then u 0 in SZ and w <0 on Df Then Corollary 4.3.4 implies w < 0 in f. 0 The comparison principle provides a reason that functions u satisfying Du + cu > f are called subsolutions. They are less than a solution v of Ov + cv = f with the same boundary values. In the following, we simply say by the maximum principle when we apply Theorem 4.3.3, Corollary 4.3.4 or Corollary 4.3.5. A consequence of the maximum principle is the uniqueness of solutions of Dirichlet problems. Corollary 4.3.6. Let St be a bonded domain in W and c be a continuous ,function in S2 with c < 0. For any f E C(St) and cp E C(D), there exists at 4.3. The Maximum Principle most one solution u e C2 (S2) fl C(S2) of Du + cu = f in S2, u = cp on 852. Proof. Let u1iu2 E C2(1Z) fl C(Sl) be two solutions. Then w = ul - u2 satisfies Ow + cw = 0 w=0 on 0. By the maximum principle (applied to w and -w), we obtain w = 0 and hence ul = u2 in 12. The boundedness of the domain S1 is essential, since it guarantees the existence of the maximum and minimum of u in ft The uniqueness may not hold if the domain is unbounded. Consider the Dirichlet Du = 0 in SZ u=0 on acI, where St = ][8n \ Bl. Then a nontrivial solution u is given by 1 x-1 for n = 2; for n > 3. Note that u(x) -+ oo as lxi -3 0o for n = 2 and u is bounded for n > 3. Next, we consider the same problem in the upper half-space SZ = {x E I[8 xn > 0}. Then u(x) = xis a nontrivial solution, which is unbounded. These examples demonstrate that uniqueness may not hold for the Dirichlet problem in unbounded domains. Equally important for uniqueness is the x (0, it) C ][8n, condition c < 0. For example, we consider S2 = (0, it) x and u(x) _ sin x. z=1 Then u is a nontrivial solution of the problem inn, u=0 on acI. Du + nu = 0 In fact, such a u is an eigenfunction of 0 in SZ with zero boundary values. 4. Laplace Equations 4.3.2. The Strong Maximum Principle. The weak maximum principle asserts that subsolutions of elliptic differential equations attain their nonnegative maximum on the boundary if the coefficients of the zeroth-order term is nonpositive. In fact, these subsolutions can attain their nonnegative maximum only on the boundary, unless they are constant. This is the strong maximum principle. To prove this, we need the following Hopf lemma. For any C1-function u in SZ that attains its maximum on 31, say at xo E 31, we have av (xO) > 0. The Hopf lemma asserts that the normal derivative is in fact positive if u is a subsolution in fZ. Lemma 4.3.7. Let B be an open ball in l[8n with xo E aB and c be a continuous function in B with c < 0. Suppose u E C2(B) fl Cl(B) satisfies Du + cu > 0 in B. Assume u(x) < u(xo) for any x E B and u(xo) > 0. Then 0' where v is the exterior unit normal to B at xp. Proof. Without loss of generality, we assume B = BR for some R> 0. By the continuity of u up to DBR, we have u(x) < u(xo) for any x E BR. For positive constants a and E to be determined, we set w(x) = v(x) = u(x) - u(x0) + EW(x). We consider wand v in D = BR \ BR!2. Figure 4.3.1. The domain D. 4.3. The Maximum Principle A direct calculation yields Ow -I- cw = (42112 - 2na + c - ce-«R2 - 2na -I- c where we used c < 0 in BR. Since R/2 < x < R in D, we have Ow -I- cw > (a2R2 - 2na + c) > 0 in D, if we choose a sufficiently large. By c < 0 and u(xo) > 0, we obtain, for any L\v + cv = Du + cu + E(L\w -}- cw) - cu(xp) > 0 in D. We discuss v on 8D in two cases. First, on aBR/2i we have u - u(xo) <0, and hence u - u(xo) < - for some > 0. Note that w < 1 on aBR/2. Then for such an , we obtain v <0 on 8BR/2. Second, for x E BBR, we have w(x) = 0 and u(x) < u(xo). Hence v(x) < 0 for any x E 8BR and v(xo) = 0. Therefore, v < 0 on 8D. In conclusion, Ov + cv > 0 in D, v<0 on DD. By the maximum principle, we have v<0 in D. In view of v(x0) = 0, then v attains at x0 its maximum in D. Hence, we obtain -(xO) and then 8u(xo) > -Eaw(xo) = 8v av This is the desired result. Remark 4.3.8. Lemma 4.3.7 still holds if we substitute for B any bounded C1-domain which satisfies an interior sphere condition at x0 E DSZ, namely, if there exists a ball B C SZ with x0 E DB. This is because such a ball B is tangent to DSZ at x0. We note that the interior sphere condition always holds for C2-domains. Now, we are ready to prove the strong maximum principle. Theorem 4.3.9. Let SZ be a bounded domain in ][8n and c be a continuos function in SZ with c < 0. Suppose u E C2(SZ) fl C(SZ) satisfies Du + cu > 0 in ft 4. Laplace Equations Figure 4.3.2. Interior sphere conditions. Then u attains only on D1 its nonnegative maximum in SZ unless u is a constant. Proof. Let M be the nonnegative maximum of u in Sl and set D = {x E u(x) = M}. We prove either D = 0 or D = S2 by a contradiction argument. Suppose D is a nonempty proper subset of ft It follows from the continuity of u that D is relatively closed in ft. Then S2 \ D is open and we can find an open ball D B c 1\D D such that 8B fl D # 0. In fact, we may choose a point x* E 1\D with dist(x*, D) < dist(x*, 812) and then take the ball centered at x* with radius dist(x*, D). Suppose xo E 8B fl D. : Figure 4.3.3. The domain Sl and its subset D. Obviously, we have Du + cu > 0 in B, and u(x) < u(xo) for any x E B and u(xo) = M > O. By Lemma 4.3.7, we have 8v(x0) ' 4.3. The Maximum Principle where v is the exterior unit normal to B at xo. On the other hand, xo is an interior maximum point of SZ . This implies Du (xo) = 0, which leads to a contradiction. Therefore, either D = 0 or D = S1. In the first case, u attains only on aSZ its nonnegative maximum in 1; while in the second case, u is constant in 1. The following result improves Corollary 4.3.5. Corollary 4.3.10. Let St be a bounded domain in I[8" and c be a continuous function in St with c < 0. Suppose u E C2(11) fl C(SZ) satisfies Du + cu > 0 in S2, u<0 on D11. Then either u < 0 in SZ or u is a nonpositive constant in 11. We now consider the Neumann problem. Corollary 4.3.11. Let St be a bounded C' -domain in ][8" satisfying the interior sphere condition at every point of aSZ and c be a continuous function in SZ with c < 0. Suppose u E C2(11) fl Cl(SZ) is a solution of the boundaryvalue problem Du -I- cu = f au - cp av in St, on BSZ, for some f E C(St) and cp E C(aSZ). Then u is unique if c up to additive constants if c - 0. 0 and is unique Proof. We assume f = 0 in 1 and p = 0 on aSZ and consider Du -+- cu = 0 au Si) in 1, = 0 on aSZ. We will prove that u = 0 if c 0 and that u is constant if c - 0. We first consider the case c 0 and prove u = 0 by contradiction. Suppose u has a positive maximum at xo E 1. If u is a positive constant, then c - 0 in 1, which leads to a contradiction. If u is not a constant, then xo E aSZ and u (x) < u (xo) for any x E 1 by Theorem 4.3.9. Then Lemma 4.3.7 implies av (xO) > 0, which contradicts the homogeneous boundary condition. Therefore, u has no positive maximum and hence u < 0 in 1. Similarly, -u has no positive maximum and then u > 0 in 1. In conclusion, u = 0 in 1. We now consider the case c - 0. Suppose u is a nonconstant solution. Then its maximum in SZ is attained only on 511 by Theorem 4.3.9, say at xo E 4. Laplace Equations aSZ. Lemma 4.3.7 implies a (xo) > 0, which contradicts the homogeneous LI boundary value. This contradiction shows that u is constant. 4.3.3. A Priori Estimates. As we have seen, an important application of the maximum principle is to prove the uniqueness of solutions of boundaryvalue problems. Equally or more important is to derive a priori estimates. In derivations of a priori estimates, it is essential to construct auxiliary functions. We will explain in the proof of the next result what auxiliary functions are and how they are used to yield necessary estimates by the maximum principle. We point out that we need only the weak maximum principle in the following discussion. We now derive an a priori estimate for solutions of the Dirichlet Theorem 4.3.12. Let 1 2 be a bounded domain in ]IBn, c and f be continuous functions in St with c < 0 and cp be a continuous function on 852. Suppose U E C2(12) fl C(St) satisfies Du + cu = f in St, u = cp on aSl. sup ui < sup kol + Csup If I where C is a positive constant depending only on n and diam(St). Proof. Set F=suplfl, 4=supIcpl s asp Then (t+c)(±u)=±f -F in1, fu = fcp on 852. Without loss of generality, we assume St C BR, for some R> 0. Set > 0 in 1 since BR C ft Then, by the property c < 0 in 1, we have Ov+cv = -F+cv <-F. We also have v > on DIZ. Hence v satisfies Ov+cv<-F in1, v>4 onc91. 4.3. The Maximum Principle Therefore, (L+c)(±u) > (L+c)v fu < v in S2, on BSZ. By the maximum principle, we obtain fu < v in St, and hence u < v in 12. Therefore, Ifor any x E SZ. 0 This yields the desired result. If St = BR(xo), then we have sup u < sup BR (X0 ) R2 cp P + 8BR (Xo ) 2n BR (X0 This follows easily from the proof. The function v in the proof above is what we called an auxiliary function. In fact, auxiliary functions were already used in the proof of Lemma 4.3.7. 4.3.4. Gradient Estimates. In the following, we derive gradient estimates, estimates of first derivatives. The basic method is to derive a differential equation for Vu 12 and then apply the maximum principle. This is the Bernstein method. There are two classes of gradient estimates, global gradient estimates and interior gradient estimates. Global gradient estimates yield estimates of gradients Du in 1 in terms of Du on 91, as well as u in 1, while interior gradient estimates yield estimates of Du in compact subsets of 1 in terms of u in 1. In the next result, we will prove the interior gradient estimate for harmonic functions. Compare with Theorem 4.1.11 and Theorem 4.2.4. Theorem 4.3.13. Suppose u E C(B1) is harmonic in B1. Then sup Du < C sup u , B1 where C is a positive constant depending only on n. Proof. Recall that u is smooth in B1 by Theorem 4.1.10. A direct calculation yields n 2,j=1 0(iouI2) = 2 2 2=1 where we used Du = 0 in B1. We note that Vu 2 is a subharmonic function. Hence we can easily obtain an estimate of Du in B1 in terms of Du on DB1. 4. Laplace Equations This is the global gradient estimate. To get interior estimates, we need to introduce a cutoff function. For any nonnegative function cp E Co(Bl), we have n o('IouI2) = (o)IVuI2 +4 7''xiuxjuxixj + By the Cauchy inequality, we get 2 4 I (0xi ux j uxi x j I < 27' uxi x + rn- 7' xi ux 0. To interpret E Co(Bl). Then IVuI2 > -CIDuI2, We note that the ratio VcpI2/cp makes sense only when cp this ratio in Bl, we take cp = 2 for some (7l2IVuI2) > (2zr - where C is a positive constant depending only on and n. Note that 0(u2) = 2IDuI2 + 2uDu = 2IVuI2, since u is harmonic. By taking a constant a large enough, we obtain (7l2IVuI2 + au2) > (2a - C)IDuI2 > 0. By the maximum principle, we obtain au2) < Bl au2). 8B1 In choosing E Co (Bl), we require in addition that r - 1 in B112. With = 0 on 8B1i we get sup IVuI2 < cx sup u. B112 This is the desired estimate. As consequences of interior gradient estimates, we have interior estimates on derivatives of arbitrary order as in Theorem 4.1.12 and the compactness as in Corollary 4.1.13. The compactness result will be used later in Perron's method. Next we derive the differential Harnack inequality for positive harmonic functions using the maximum principle. Compare this with Theorem 4.2.5. Theorem 4.3.14. Suppose u is a positive harmonic function in Bl. Then sup D log u where C is a positive constant depending only on n. 4.3. The Maximum Principle Proof. Set v = log u. A direct calculation yields Ov = -IOvI2. Next, we prove an interior gradient estimate for v. By setting w = IVvI2, we get Ow + 2 n vx i x j . vxi wxi = 2 2,j=1 As in Theorem 4.3.13, we need to introduce a cutoff function. First, by 2 n vxi xi n 2 vxi xi , CT i=1 we have n vxixj > i, j=1 2 vxixi > - (ov)2 n Take a nonnegative function cp E Co (Bl). A straightforward calculation yields n 0(cow) + 2 vxi (pw)xi = xivxj vxixj i,j=1 xivxi + ()w. i=1 The Cauchy inequality implies 2 4 17' xi vxj vxi x j 2 pvxi x j+ vx j. Then n (w) + 2 v (w) > v. - 2IVIIVvI3 z,j=1 + (o _ l21 IovI2 in the right-hand side instead of dropping Here we keep one term of it entirely as in the proof of Theorem 4.3.13. To make sense of IVcpI2/cp in Bl, we take cp = rfor some r E Co(Bl). In addition, we require that r = 1 in B112. We obtain, by (4.3.1), n 0(i14w) + i=1 87I3IV1'IIIVvI3 + 131 01I2)IVvl2. 4. Laplace Equations We note that the right-hand side can be regarded as a polynomial of degree 4 in with a positive leading coefficient. Other coefficients depend on i and hence are bounded functions of x. For the leading term, we save half of it for a later purpose. Now, 2n t4 - 8IVIt3 + 4(- 131 -C for any t E ILB, where C is a positive constant depending only on n and Hence with we get n O (4w) v (4w) - C. We note that q4w is nonnegative in Bl and zero near 8B1. Next, we assume that r/4w attains its maximum at xo E Bl. Then 0 and 0(r74w) < 0 at xo. Hence 4w2(xo) If w(xo) > 1, then C. Otherwise ing these two cases, we obtain q4w < C* ii4(xo). By combin- in B1, where C* is a positive constant depending only on n and i. With the definition of w and i = 1 in B112, we obtain the desired result. The following result is referred to as the Harnack inequality. Compare it with Corollary 4.2.7. Corollary 4.3.15. Suppose u is a nonnegative harmonic function in Bl. Then u(xl) < Cu(x2) for any X1, X2 E B112, where C is a positive constant depending only on n. The proof is identical to the first proof of Corollary 4.2.7 and is omitted. We note that u is required to be positive in Theorem 4.3.14 since log u is involved, while u is only nonnegative in Corollary 4.3.15. The Harnack inequality describes an important property of harmonic functions. Any nonnegative harmonic functions have comparable values in a proper subdomain. We point out that the Harnack inequality in fact implies the strong maximum principle: Any nonnegative harmonic function in a domain is identically zero if it is zero somewhere in the domain. 4.3. The Maximum Principle 4.3.5. Removable Singularity. Next, we discuss isolated singularity of harmonic functions. We note that the fundamental solution of the Laplace operator has an isolated singularity and is harmonic elsewhere. The next result asserts that an isolated singularity of harmonic functions can be removed, if it is "better" than that of the fundamental solution. Theorem 4.3.16. Suppose u is harmonic in BR \ {0} C Il8and satisfies u(x) _ Jo(logx), n=2, as x -4 0. lo(1x12_Th), n> 3 Then u can be defined at 0 so that it is harmonic in BR. Proof. Without loss of generality, we assume that u is continuous in 0 < v=u on aBR. The existence of v is guaranteed by the Poisson integral formula in Theorem 4.1.9. Set M = maxaBR We note that the constant functions fM are obviously harmonic and -M < v < M on aBR. By the maximum principle, we have -M < v < M in BR and hence, lvl< M in BR. Next, we prove u =vin BR \ {0}. Set w = v - u in BR \ {0} and MT = maxas,. wi for any r 3. First, we have rn-2 <W(X)<Mr' Mr for any x e aBr U aBR. It holds on BBr by the definition of Mr and on aBR since w = 0 on aBR. Note that w and IxI2-n are harmonic in BR \ B,.. Then the maximum principle implies rn-2 rn-2 xl n-2' IxI and hence xl n-2' for any x e BR \ B,.. With M,. = max v - u < max v + max iuI <M + max asr asr asr asr we then have rn-2 ic Ixln-2M+ IxIri_2 4. Laplace Equations 0, we taker < lxi and then let r -+ 0. By the assumption on u, we obtain w(x) = 0. This implies w = 0 O and hence u =vin BR \ {0}. for any x E BR \ B,.. Now for each fixed x 4.3.6. Perron's Method. In this subsection, we solve the Dirichlet problem for the Laplace equation in bounded domains by Perron's method. Essentially used are the maximum principle and the Poisson integral formula. The latter provides the solvability of the Dirichlet problem for the Laplace equation in balls. We first discuss subharmonic functions. By Definition 4.3.1, a C2function v is subharmonic if Ov > 0. Lemma 4.3.17. Let S2 be a domain in Il8n and v be a C2-function in SZ. Then Ov > 0 in S2 if and only if for any ball B C S2 and any harmonic function w E C(B), v < w on aB implies v < w in B. Proof. We first prove the only if part. For any ball B C S2 and any harmonic function w E C(B) with v < w on aB, we have Ov > Ow in B, v < w on 8B. By the maximum principle, we have v < w in B. Now we prove the if part by a contradiction argument. If Ov < 0 somewhere in I, then Ov < 0 in B for some ball B with B C ft Let w solve Ow = 0 in B, w = v on aB . The existence of w in B is implied by the Poisson integral formula in Theorem 4.1.9. We have v <w in B by the assumption. Next, we note that Ow = 0 > Ov in B, w=v on aB. We have w Ov in B. Therefore, Ov > 0 in SZ. El Lemma 4.3.17 leads to the following definition. Definition 4.3.18. Let S2 be a domain in Il8n and v be a continuous function in SZ. Then v is subharmonic (superharmonic) in St if for any ball B C S2 and any harmonic function w E C(B), v < (>) w on aB implies v < (>) w in B. 4.3. The Maximum Principle We point out that in Definition 4.3.18 subharmonic (superharmonic) functions are defined for continuous functions. We now prove a maximum principle for such subharmonic and superharmonic functions. Lemma 4.3.19. Let St be a bounded domain in ][8n and u, v E C(S2). Suppose u is subharmonic in 12 and v is superharmonic in St with u < v on aS2. Then u < v in St. Proof. We first note that u - v <0 on 852. Set M = v) and D={xEII: u(x)-v(x)=M}C11. Then D is relatively closed by the continuity of u and v. Next we prove that D is open. For any xo E D, we taker < dist(xo, 811). Let u and v solve, respectively, Lu=0 inB(xo), u =u on aBr(xp), and z=0 0 in Br(xp), v = v on aBr(xp). The existence of u and v in Br(xo) is implied by the Poisson integral formula in Theorem 4.1.9. Definition 4.3.18 implies 2 -v u-v In Br(xp). Next, z(u-'i)=O inB(xo), -1=u-v onaB(xo). With u - v < M on aB,.(xo), the maximum principle implies u - v < M in BT(xp). In particular, Hence, (u - v)(xo) = M and then u - v has an interior maximum at xo. By the strong maximum principle, u - v -Min BT(xo). Therefore, u - v = M on 8Br(xo). This holds for any r < dist(xo, 812). Then, u-v =Min B,.(xo) and hence Br(xo) C D. In conclusion, D is both relatively closed and open in 12. Therefore either D = Ql or D = 12. In other words, u - v either attains its maximum only on aS2 or u - v is constant. Since u < v in BSZ, we have D u < v in 11 in both cases. 4. Laplace Equations The proof in fact yields the strong maximum principle: Either u < v in St or u - v is constant in SZ. Next, we describe Perron's method. Let SZ be a bounded domain in I[8n and cp be a continuous function on BSt. We will find a function u E C°°(St) fl C(St) such that Du = 0 (4.3.2) u = cP in SZ, on Dt Then for any v E C(SZ) which is Suppose there exists a solution u = subharmonic in SZ with v < P on au, we obtain, by Lemma 4.3.19, inf Hence for any x E SZ u(x) =sup{v(x) : v E C(S2) is subharmonic in St and v < P on DSZ } . We note that the equality holds since u is obviously an element of the set in the right-hand side. Here, we assumed the existence of the solution u. In Perron's method, we will prove that the function u defined in (4.3.3) is indeed a solution of (4.3.2) under appropriate assumptions on the domain. The proof consists of two steps. In the first step, we prove that u(), is harmonic in SZ. This holds for arbitrary bounded domains. We note that u in (4.3.3) is defined only in SZ. So in the second step, we prove that u() has a limit on au and this limit is precisely gyp. For this, we need appropriate assumptions on Dt Before we start our discussion of Perron's method, we demonstrate how to generate greater subharmonic functions from given subharmonic functions. Lemma 4.3.20. Let v e C(SZ) be a subharmonic function in St and B be a ball in 12 with B C 12. Let w be defined by w=v inSZ\B, and Ow=O in B. Then w is a subharmonic function in 12 and v < w in 12. The function w is called the harmonic lifting of v (in B). Proof. The existence of w in B is implied by the Poisson integral formula in Theorem 4.1.9. Then w is smooth in B and is continuous in f We also have v < w in B by Definition 4.3.18. 4.3. The Maximum Principle Next, we take any B' with B' C SZ and consider a harmonic function u e C(B') with w < u on DB'. By v < w on DB', we have v subharmonic in SZ by Definition 4.3.18. Lemma 4.3.20 asserts that we obtain greater subharmonic functions if we preserve the values of subharmonic functions outside the balls and extend them inside the balls by the Poisson integral Now we are ready to prove that u in (4.3.3) is a harmonic function in Lemma 4.3.21. Let S2 be a bounded domain in ][8' and cp be a continuous function on 852. Then u defined in (4.3.3) is harmonic in SZ. Proof. Set S,={v: v E C(S2) is subharmonic in SZ and v < cp on 8St}. Then for any x E SZ, u(x) =sup{v(x) : v e In the following, we simply write <S = Step 1. We prove that u is well defined. To do this, we set m = min co, 812 M=maxco. asp We note that the constant function m is in s and hence the set s is not empty. Next, the constant function M is obviously harmonic in SZ with co < M on 511. By Lemma 4.3.19, for any v e s, v<M inf Thus u is well defined and u <M in 11. Step 2. We prove that s is closed by taking maximum among finitely , v E s and set many functions in S. We take arbitrary vl , v=max{vl, ,vk}. It follows easily from Definition 4.3.18 that v is subharmonic in 11. In fact, we take any ball B C SZ and any harmonic function w e C(B) with v < w , k. Since v2 is subharmonic, we on SB. Then v2 <w on SB, for i = 1, get v2 <w in B, so v < w in B. We conclude that v is subharmonic in 11. Hence v E s. 4. Laplace Equations Step 3. For any Br (x0) C SZ, we prove that u, is harmonic in Br (x0) First, by the definition of u(,c, there exists a sequence of functions vi E s such that lira vi (xo) = We point out that the sequence {vi} depends on x0. We may replace vi above by any v2 E s with v2 > v2 since vi (x0) < ?Ii (x0) < u (x0) . Replacing, if necessary, vi by max{m, v} E s, we may also assume m < vi < in ft Z. For the fixed Br(x0) and each vi, we let wi be the harmonic lifting in Lemma 4.3.20. In other words, wi = vi in 1 \ Br (x0) and Owi = 0 in Br (x0) . By Lemma 4.3.20, wi E s and vi < wi in fZ. Hence, lim wi (xo) = m < wi < u in 1 , for any i = 1, 2, In particular, {w} is a bounded sequence of harmonic functions in Br (x0) . By Corollary 4.1.13, there exists a harmonic function w in Br (x0) such that a subsequence of {w}, still denoted by {w}, converges to w in any compact subset of Br (x0) . We then conclude easily that w < u in Br (x0) and w (x0) = u (x0) . We now claim u = w in Br (x0) To see this, we take any x E Br (x0) and proceed similarly as before, with x replacing x0. By the definition of u(,c, there exists a sequence of functions vi E s such that lim vi (x) = . Replacing, if necessary, vi by max{vi, wi} E s, we may also assume wi < vi < u(p in ft Z. For the fixed Br(x0) and each vi, we let wi be the harmonic lifting in Lemma 4.3.20. Then, wi E s and vi < wi in 1. Moreover, wi is harmonic in Br (x0) and satisfies lim wi (x) = i-oo and m < wz < vz < w2 < u inn, for any i = 1, 2, . By Corollary 4.1.13, there exists a harmonic function zu in Br(X) such that a subsequence of zuz converges to w in any compact subset of B,.(xo). We then conclude easily that w < w < u in Br(X) and w(xo) = w(xo) = 4.3. The Maximum Principle w(x) = Next, we note that w-w is a harmonic function in B,.(xo) with a maximum attained at xp. By applying the strong maximum principle tow - win Br(xo), we conclude that w - iu is constant, which is obviously zero. This implies w =win B,.(xo), and in particular, w(x) = w(x) = then have w = u, in B,.(xo) since x is arbitrary in B,.(xo). Therefore, u,, is harmonic in B,.(xo). We note that u in Lemma 4.3.21 is defined only in ft. We have to discuss limits of u( x) as x approaches the boundary. For this, we need to impose additional assumptions on the boundary aSt. Lemma 4.3.22. Let cp be a continuous function on 8St and u be the function defined in (4.3.3). For some xo E aS2, suppose woo E C(SZ) is a subharmonic function in S2 such that (4.3.4) woo (xo) = 0, woo (x) <0 for any x e 81Z \ {xo}. Imo Proof. As in the proof of Lemma 4.3.21, we set S _ {v: v e C(St) is subharmonic in SZ and v We simply write w =woo and set M = maxa Let E be an arbitrary positive constant. By the continuity of cp at xp, there exists a positive constant S such that ko(x) - p(xo) for any x e aSZ fl Bb(xo). We then choose K sufficiently large so that for any x e 8St \ Bb(xo). Hence, ko-2(xo)IE-Kw on a. Since cp(xo)-e+Kw(x) is a subharmonic function in SZ with cp(xo)-E+Kw < cp on BSZ, we have cp(xo) - e + Kw E S. The definition of u implies p(xo)-E+Kw cp on SZ. Hence for any v e we obtain, by Lemma 4.3.19, 4. Laplace Equations Again by the definition of u(p, we have This implies I- cp(xo)I < E. lim sup x-*xo We obtain the desired result by letting e -4 0. The function woo satisfying (4.3.4) is called a barrier function. As shown in the proof, woo provides a barrier for the function u, near xo. We note that u, in Lemma 4.3.21 is defined only in St. It is natural cp(xo) for xo E 852. If (4.3.4) is to extend u(p to 8St by defining satisfied for xo, Lemma 4.3.22 asserts that u, is continuous at xo. If (4.3.4) is satisfied for any xo E BSZ, we then obtain a continuous function u, in S2. Barrier functions can be constructed for a large class of domains St. Take, for example, the case where f satisfies the exterior sphere condition at xo E aSt in the sense that there exists a ball B,.o (yo) such that f2 n Br (Yo) _ 0, SZ n BTO (yo) _ {xO}. To construct a barrier function at xo, we set woo (x) = I'(x - yo) - I'(xp - yp) for any x E S2, where I' is the fundamental solution of the Laplace operator. It is easy to see that woo is a harmonic function in St and satisfies (4.3.4). We note that the exterior sphere condition always holds for C2-domains. Figure 4.3.4. Exterior sphere conditions. Theorem 4.3.23. Let S2 be a bounded domain in ][8n satisfying the exterior sphere condition at every boundary point. Then for any cp E C((9S2), (4.3.2) admits a solution u E C°O(12) fl C(12). 4.4. Poisson Equations In summary, Perron's method yields a solution of the Dirichlet problem for the Laplace equation. This method depends essentially on the maximum principle and the solvability of the Dirichlet problem in balls. An important feature here is that the interior existence problem is separated from the boundary behavior of solutions, which is determined by the local geometry of domains. 4.4. Poisson Equations In this section, we discuss briefly the Poisson equation. We first discuss regularity of classical solutions using the fundamental solution. Then we discuss weak solutions and introduce Sobolev spaces. 4.4.1. Classical Solutions. Let SZ be a domain in W` and f be a continuous function in SZ. The Poisson equation has the form Du = f If u is a smooth solution of (4.4.1) in SZ, then obviously f is smooth. Con- versely, we ask whether u is smooth if f is smooth. At first glance, this does not seem to be a reasonable question. We note that Du is just a linear combination of second derivatives of u. Essentially, we are asking whether all partial derivatives exist and are continuous if a special combination of second derivatives is smooth. This question turns out to have an affirmative answer. To proceed, we define (4.4.2) w f(x) F(x - y)f(y) dy, where F is the fundamental solution of the Laplace operator as in Definition 4.1.1. The function W f is called the Newtonian potential of f in f We will write w f, to emphasize the dependence on the domain f It is easy to see that W f is well defined in W if SZ is a bounded domain and f is a bounded function, although F has a singularity. We recall that the derivatives of F have asymptotic behavior of the form vr(x - y) Ix_In_1' v2r(x - y) as y -+ x. By differentiating under the integral sign formally, we have DxZw f(x) = D F(x - y)f(y) dy, for any x e W1 and i = 1, , n. We note that the right-hand side is a well-defined integral and defines a continuous function of x. We will not use this identity directly in the following and leave its proof as an exercise. Assuming its validity, we cannot simply differentiate the expression of Dxi w f 4. Laplace Equations F. In fact, extra to get second derivatives of W f due to the singularity of conditions are needed in order to infer that W f is C2. If W f is indeed CZ and Ow f =fin Sl, then any solution of (4.4.1) differs from W f by an addition of a harmonic function. Since harmonic functions are smooth, regularity of arbitrary solutions of (4.4.1) is determined by that of W f. Lemma 4.4.1. Let 1 2 be a bonded domain in 12, f be a bounded function in for some integer 1 2 and W f be defined in (4.4.2). Assume that f E k > 2. Then W f E Ck(S2) and Ow f =fin 12. Moreover, if f is smooth in 12, then W f is smooth in ft Proof. For brevity, we write w = W f. We first consider a special case where f has a compact support in St. For any x E 12, we write w(x) =f F(x-y)f(y)dy. We point out that the integration is in fact over a bounded region. Note that I' is evaluated as a function of x - yI. By the change of variables z = y - x, we have w(x) _ I'(z) f (z + x) dz. Rn By the assumption, f is at least Cl. By a simple differentiation under the integral sign and an integration by parts, we obtain w( x) = I'(z) f(z + x) dz = I'(z) fx (z + x) dz _ -f I'zy (z) f (z -I- x) dz. n For f E for some k > 2, we can differentiate under the integral sign to get Fzi (z)aa f (z + x) dz, for any a E Z+ with al < k - 1. Hence, w is Ck in 1. Moreover, if f is smooth in 1, then w is smooth in 1. Next, we calculate Ow if f is at least C'. For any x E 1, we have n Ow(x) = (x) = i=1 Fzi (z)fzi (z i=1 = - lim e-*O R \B i=1 Fzi (z) fzi (z x) dz. x) dz 4.4. Poisson Equations We note that f(. +x) has a compact support in ][8Th. An integration by parts implies Ow(x) _ (z)f (z + x) dSz, - E o Jaaf 8v where v is the unit exterior normal to the boundary 8BE of the domain ][8" \ BE, which points toward the origin. With r = Izi, we obtain Ow(x) = Eo = lim W1 1 (z)f(z + x) dSz aJ6 f(z -I- x) dSz = f(x), by the explicit expression of F. Next, we consider the general case. For any xo E St, we prove that w is C' and Ow = f in a neighborhood of xo. To this end, we taker G dist(xo, aSt) and a function cp E Co (B,.(xo)) with cp - 1 in B,.12(x0). Then we write w(x) = f I'(x - y) (1- P(y)) 1(y) f r(x t = wl(x) + wll(x). The first integral is actually over 1 2 \ B,.12 (xo) since cp - 1 in B,.12 (xo) . Then there is no singularity in the first integral if we restrict x to Br14(xo). Hence, wj is smooth in Br/4(xo) and Owj = 0 in Br14(xo). For the second integral, cp f is a C1-1-function of compact support in ft We can apply what we just proved in the special case to cp f. Then w jI is C' in 1 2 and OwII = cp f. Therefore, w is a C1-function in B,.14(xo) and Ow(x) = 'p(x)f(x) = f(x), for any x E B,./4(xo). Moreover, if f is smooth in St, so are wII and w in Lemma 4.4.1 is optimal in the C°°-category in the sense that the smoothness of f implies the smoothness of W f. However, it does not seem optimal concerning finite differentiability. For example, Lemma 4.4.1 asserts that W f is C2 in St if f is Cl in St. Since f is related to second derivatives of wf, it seems natural to ask whether W f is C2 in St if f is continuous in St. We will explore this issue later. We now prove a regularity result for general solutions of (4.4.1). Theorem 4.4.2. Let 1 2 be a domain in Il8n and f be continuous in ft Sup- pose u E CZ(St) satisfies Du =fin ft If f E Ck-1(1t) for some integer k > 3, then u E C'(St). Moreover, if f is smooth in 1, then u is smooth in 4. Laplace Equations Proof. We take an arbitrary bounded subdomain 1' in 1 and let w 1,c' be the Newtonian potential of f in SZ'. By Lemma 4.4.1, if f E C'() for some integer k > 3, then w f,cl' is Cc in 1' and Ow f,ci' = f in SZ'. Now we set v = u - w f,cl' . Since u is C2 in SZ', so is v. Then, Ov = Du - Ow f,cl/ = 0 in ft'. In other words, v is harmonic in SZ', and hence is smooth in 1' by Theorem 4.1.10. Therefore, u = v + w f,cl' is Cc in SZ'. It is obvious that if f is smooth in 1, then w1,' and hence u are smooth in SZ'. Theorem 4.4.2 is an optimal result concerning the smoothness. Even though Du is just one particular combination of second derivatives of u, the smoothness of Du implies the smoothness of all second derivatives. Next, we solve the Dirichlet problem for the Poisson equation. Theorem 4.4.3. Let 1 be a bounded domain in ]Rn satisfying the exterior sphere condition at every boundary point, f be a bounded Cl function in S1 and cp be a continuous function on of Then there exists a solution u E C2 (1Z) f1 C(Z) of the Dirichlet problem Du = f in 1, u=cp onaf Moreover, if f is smooth in SZ, then u is smooth in 11. Proof. Let w be the Newtonian potential of f in SZ. By Lemma 4.4.1 for k = 2, w E C2(SZ) fl C(S2) and Ow =fin SZ. Now consider the Dirichlet problem Ov = 0 v = cp -w on BSt. Theorem 4.3.23 implies the existence of a solution v E C°°(1) fl C(St). (The exterior sphere condition is needed in order to apply Theorem 4.3.23.) Then u = v + w is the desired solution of the Dirichlet problem in Theorem 4.4.3. If f is smooth in SZ, then u is smooth there by Theorem 4.4.2. Now we raise a question concerning regularity of the lowest order in the classical sense. What is the optimal assumption on f to yield a C2-solution u of (4.4.1)? We note that the Laplace operator O acts on C2-functions and Du is continuous for any C2-function u. It is natural to ask whether the equation (4.4.1) admits any C2-solutions if f is continuous. The answer turns out to be negative. There exists a continuous function f such that (4.4.1) does not admit any C2-solutions. Example 4.4.4. Let f be a function in Bi C Il82 defined by f(0) = 0 and x -x f f(x) = 21x12 1 (_logx)h/2 + 4(-log Ixl)3/2 J' 4.4. Poisson Equations for any x E Bl \ {0}. Then f is continuous in Bl. Consider Du = f in B1. Define u in Bl by u(0) = 0 and u(x) _ (xi - x2)(-log xI)1/2, for any x E Bl \ {0}. Then u E C(B1) fl C°°(Bl \ {0}). A straightforward calculation shows that u satisfies (4.4.3) in Bl \ {0} and lim u is not in C2(B1). Next, we prove that (4.4.3) has no C2-solutions. The proof is based on Theorem 4.3.16 concerning removable singularities of harmonic functions. Suppose, to the contrary, that there exists a C2solution v of (4.4.3) in Bl. For a fixed R E (0, 1), the function w = u - v is harmonic in BR \ {0}. Now u E C(BR) and v E C2 (BR), so w E C(BR). Thus w is continuous at the origin. By Theorem 4.3.16, w is harmonic in BR and therefore belongs to CZ(BR). In particular, u is CZ at the origin, which is a contradiction. Example 4.4.4 illustrates that the C2-spaces, or any C'-spaces, are not adapted to the Poisson equation. A further investigation reveals that solutions in this example fail to be C2 because the modulus of continuity of f does not decay to zero fast enough. If there is a better assumption than the continuity of f, then the modulus of continuity of V2u can be estimated. Better adapted to the Poisson equation, or more generally, the elliptic differential equations, are Holder spaces. The study of the elliptic differential equations in Holder spaces is known as the Schauder theory. In its simplest form, it asserts that all second derivatives of u are Holder continuous if Du is. It is beyond the scope of this book to give a presentation of the Schauder theory. 4.4.2. Weak Solutions. In the following, we discuss briefly how to extend the notion of classical solutions of the Poisson equation to less regularized solutions, the so-called weak solutions. These functions have derivatives only in an integral sense and satisfy the Poisson equation also in an integral sense. The same process can be applied to general linear elliptic equations, or even nonlinear elliptic equations, of divergence form. To introduce weak solutions, we make use of the divergence structure or variation structure of the Laplace operator. Namely, we write the Laplace operator as Du = div(Vu). 4. Laplace Equations In fact, we already employed such a structure when we derived energy estimates of solutions of the Dirichlet problem for the Poisson equation in Section 3.2. Let St be a bounded domain in W and f be a bounded continuous function in Q. Consider -Du = f in Q. We intentionally put a negative sign in front of Du. Let u e C2(S2) be a solution of (4.4.4). Take an arbitrary cp e Co(St). By multiplying (4.4.4) by cp and then integrating by parts, we obtain pu pcp dx = In (4.4.5), cp is referred to as a test function. We note that upon integrating by parts, we transfer one derivative for u to test functions. Hence we only need to require u to be Cl in (4.4.5). This is the advantage in formulating weak solutions. If u E CZ(S2) satisfies (4.4.5) for any cp e Co(S2), we obtain from (4.4.5), upon a simple integration by parts, - J cp0u dx = J f cp dx for any cp E Co (S2). This implies easily -Du = f in S2. In conclusion, a C2-function u satisfying (4.4.5) for any cp e Co (S2) is a classical solution of (4.4.4). We now raise the question whether less regular functions u are allowed in (4.4.5). For any cp e C( Q), it is obvious that the integral in the lefthand side of (4.4.5) makes sense if each component of Vu is an integrable function in Q. This suggests that we should introduce derivatives in the integral sense. Definition 4.4.5. For i = 1, , n, an integrable function u in Q is said to have a weak x2-derivative if there exists an integrable function v2 such that (4.4.6) dx = - vjcp dx for any co E Co (Q). Here v2 is called a weak x2 -derivative of u and is denoted by uxi , the same way as for classical derivatives. It is easy to see that weak derivatives are unique if they exist. We also point out that classical derivatives of C1-functions are weak derivatives upon a simple integration by parts. 4.4. Poisson Equations Definition 4.4.6. The Sobolev space Hl (St) is the collection of L2-functions in SZ with L2-weak derivatives in SZ. The superscript 1 in the notation Hl (St) indicates the order of differentiation. In general, functions in Hl (St) may not have classical derivatives. In fact, they may not be continuous. We are ready to introduce weak solutions. Definition 4.4.7. Let f e L2(S2) and u e Hl(S2). Then u is a weak solution of -Du =fin SZ if (4.4.5) holds for any cp e Co (S2), where the components of Du are given by weak derivatives of u. We now consider the Dirichlet problem for the Poisson equation with the homogeneous boundary value, -Du = f inn, (4.4.71 on a. We attempt to solve (4.4.7) by methods of functional analysis. It is natural to start with the set C = {u E Cl (St) fl C(SZ) : u = 0 on 8S2}. We note that the left-hand side of (4.4.5) provides an inner product in C. To be specific, we define the Ho -inner product by (u,v)H1() = f u. Vvdx, Jcz for any u, v e C. It induces a norm defined by 1 \f Dudx I/ This is simply the L2-norm of the gradient of u, and it is referred to as the Ho -norm. Here in the notation H, the supercript 1 indicates the order of differentiation and the subscript 0 refers to functions vanishing on aSt. The Poincare inequality in Lemma 3.2.2 has the form (4.4.8) IIuIIL2(cl) $ foranyuEC. For f e L2(St), we define a linear functional F on C by (4.4.9) F(cp) = for any cp e C. By the Cauchy inequality and (4.4.8), we have I<_ III IIL2(cZ)II(PIIL2(cZ) <_ cII.fIIL2()IIpIIHo(). 4. Laplace Equations This means that F is a bounded linear functional on C. If C were a Hilbert space with respect to the Ho -inner product, we would conclude by the Riesz representation theorem that there exists a u E C such that (u, So)xo(st) = for any cp E C. Hence, u satisfies (4.4.5). With u = 0 on 852, u is interpreted as a weak solution of (4.4.7). However, C is not complete with respect to the Ho-norm, for the same reason that C(SZ) is not complete with respect to the LZ-norm. For the remedy, we complete C under the Ho-norm. Definition 4.4.8. The Sobolev space Ho (11) is the completion of Co (S2) under the Ho-norm. We point out that we may define H(11) by completing C under the Ho-norm. It yields the same space. The space Ho (11) defined in Definition 4.4.8 is abstract. So what are the elements in Ho (11)? The next result provides an answer. Theorem 4.4.9. The space Ho (1) is a subspace of Hl (S2) and is a HilbeTt space with respect to the Ho -inner product. Proof. We take a sequence {uk} in C0 1(f) which is a Cauchy sequence in the Ho (11)-norm. In other words, {uk, } is a Cauchy sequence in LZ(SZ), for any i = 1, , n, such that , n. Then there exists a vz E L2 (S2), for i = 1, uk,xi - v2 in L2 (S2) as k - oo. By (4.4.8), we obtain Iluk - ul II L2(S2) < Cu - ul IIHo(S2) This implies that {uk} is a Cauchy sequence in LZ(St). We may assume for some u E LZ(SZ) that uk- u in L2(S2) as k - oo. Such a convergence illustrates that elements in Ho (11) can be identified as LZ-functions. Hence we have established the inclusion Ho(SZ) C L2(SZ). Next, we prove that u has LZ-weak derivatives. Since uk E C(11), upon a simple integration by parts, we have dx = - uk,X co dx for any SP E C0'(11). By taking k - oo, we obtain easily dx = - f vzcp dx for any cp E Co (11). 4.4. Poisson Equations Therefore, v2 is the weak xi-derivative of u. Then u e Hl (S2) since v2 E L2(S2). In conclusion, Ho(S2) C Hl(St). With weak derivatives replacing classical derivatives, the inner product well defined for functions in H0' (a). We then conclude that Ho (St) is complete with respect to its induced norm ' It is easy to see by approximations that (4.4.8) holds for functions in Now we can prove the existence of weak solutions of the Dirichlet problem for the Poisson equation with homogeneous boundary value. Theorem 4.4.10. Let St be a bounded domain in ][8n and f e L2(1t). Then the Poisson equation -Du = f admits a weak solution u E H(). The proof is based on the Riesz representation theorem, and major steps are already given earlier. Proof. We define a linear functional F on Ho (S2) by F(cp) = ffdx, for any cp e H0'(1). By the Cauchy inequality and (4.4.8), we have If IIL2(IOIIcOIIL2(cz) <_ Hence, F is a bounded linear functional on Ho (S2). By the Riesz representation theorem, there exists a u e Ho (S2) such that (u, co)xo(st) = for any cp e Ho (St). Therefore, u is the desired function. According to Definition 4.4.7, u in Theorem 4.4.10 is a weak solution of -Du = f. Concerning the boundary value, we point out that u is not defined on 8S2 in the pointwise sense. We cannot conclude that u = 0 at each point on D. The boundary condition u = 0 on 8S2 is interpreted precisely by the fact that u e Ho (St), i.e., u is the limit of a sequence of Co (S2)-functions in the Ho-norm. One consequence is that uIaI is a welldefined zero function in L2(8). Hence, u is referred to as a weak solution of the Dirichlet problem (4.4.7). Now we ask whether u possesses better regularity. The answer is affirmative. To see this, we need to introduce more Sobolev spaces. We first point out that weak derivatives as defined in (4.4.6) can be generalized to higher orders. For any a e 7L+ with al = m, an integrable function u in S2 4. Laplace Equations is said to have a weak xa-derivative if there exists an integrable function va such that dx = (-1)'"' f v«cp dx for any cp E Co (SZ). t Here va is called a weak xa -derivative of u and is denoted by Dan, the same notation as for classical derivatives. For any positive integer m, we denote by Hm (1) the collection of L2-functions with L2-weak derivatives of order up to m in 11. This is also a Sobolev space. The superscript m indicates the order of differentiation. We now return to Theorem 4.4.10. We assume, in addition, that 11 is a bounded smooth domain. With f e L2(1), the solution u in fact is a function in H2(1). In other words, u has L2-weak second derivatives uxix3, for i, j = 1, ,n. Moreover, uxixi = f a.e. in 11. i=1 In fact, if f e HIc(SZ) for any k > 1, then u e H2(S2). This is the L2-theory for the Poisson equation. We again encounter an optimal regularity result. If Du is in the space HIc(SZ), then all second derivatives are in the same space. It is beyond the scope of this book to give a complete presentation of the L2-theory. An alternative method to prove the existence of weak solutions is to minimize the functional associated with the Poisson equation. Let 12 be a bounded domain in IlBn. For any Cl-function u in 12, we define the Dirichlet energy of u in St by E(u) _ Js IVuI2 dx. For any f e L2(12), we consider J(am) = E(u) - f fu dx = 1 f IVuI2 dx - f fu dx. 2 For any u e C' (II) fl C(S2), we consider Cl-perturbations of u which leave the boundary values of 'a unchanged. We usually write such perturbations in the form of u + cp for cp e C(1). We now compare J(u + cp) and J(u). A straightforward calculation yields J(u + cp) = J(u) + E(cp) + f Vu. pcp dx - f f cp dx. t We note that E(cp) > 0. Hence, if u is a weak solution of -Du = f, we have, by (4.4.5), J(u + cp) > J(u) for any cp E Co (SZ). 4.5. Exercises Therefore, u minimizes J among all functions with the same boundary value. Now we assume that u minimizes J among all functions with the same boundary value. Then for any cp E Co (St), J(u -}- Ecp) > J(u) for any . In other words, j() - J(u + (p) has a minimum at = 0. This implies j'(0) = 0. A straightforward calculation shows that u satisfies (4.4.5) for any cp E C(). Therefore, u is a weak solution of -Du = f. In conclusion, u is a weak solution of -Du = f if and only if u minimizes J among all functions with the same boundary value. The above calculation was performed for functions in C' (a). A similar calculation can be carried out for functions in Ho (a). Hence, an alternative way to solve (4.4.7) in the weak sense is to minimize J in Ho (a). We will not provide details in this book. The weak solutions and Sobolev spaces are important topics in PDEs. The brief discussion here serves only as an introduction. A complete presentation will constitute a book much thicker than this one. 4.5. Exercises Exercise 4.1. Suppose u(x) is harmonic in some domain in I[8n. Prove that v(x) _ is also harmonic in a suitable domain. Exercise 4.2. For n = 2, find the Green's function for the Laplace operator on the first quadrant. Exercise 4.3. Find the Green's function for the Laplace operator in the upper half-space {xn > 0} and then derive a formal integral representation for a solution of the Dirichlet problem L\u=0 in{x>0}, u=cp on{xn=0}. Exercise 4.4. (1) Suppose u is a nonnegative harmonic function in BR(X0) C W. Prove by the Poisson integral formula the following Harnack inequality: n-2 R- ru(xo) R )Th2 R+ r / -C where r = Ix - xal u(x) ( 4. Laplace Equations (2) Prove by (1) the Liouville theorem: If u is a harmonic function in I[8n and bounded above or below, then u is constant. Exercise 4.5. Let u be a harmonic function in ]E8' with fRn IuIdx < oo for some p E (1, oo). Prove that u - 0. Exercise 4.6. Let m be a positive integer and u be a harmonic function in I[8" with u(x) = O(ixim) as lxi -+ oo. Prove that u is a polynomial of degree at most m. Exercise 4.7. Suppose u E C(Bt) is harmonic in Bi = {x E Bi : x> 0} with u = 0 on {xn = 0} fl Bl. Prove that the odd extension of u to Bl is harmonic in Bl. Exercise 4.8. Let u be a C2-solution of Lu=0 inRTh\BR, u=0 on aBR. Prove that u - 0 if . IxI-+oo In xi =0 form=2, lim u(x) = 0 for n > 3. Exercise 4.9. Let S2 be a bounded C'-domain in I[8" satisfying the exterior sphere condition at every boundary point and f be a bounded continuous function in St. Suppose u E C2(1) f1 C1(1t) is a solution of Du = f inn, u=0 on 852. Prove that as av I < C sup .f where C is a positive constant depending only on n and S2. Exercise 4.10. Let S2 be a smooth bounded domain in ]EBn, c be a continuous function in S2 with c < 0 and a be a continuous function on 81 2 with a > 0. Discuss the uniqueness of the problem Du + cu = f in '9" + au = cp on 852. Exercise 4.11. Let S2 be a bounded C1-domain and let P and a be continuous functions on 91 with a > ao for a positive constant ao. Suppose 4.5. Exercises U E CZ (SZ) f1 Cl (S2) satisfies -Du + u3 = 0 in Q, av + au = cp on aSt. Prove that 1 iul <_ eix lpI. Exercise 4.12. Let f be a continuous function in BR. Suppose u E CZ(BR) f1 C(BR) satisfies Du = f in BR. Prove that Vu(O)I lul + BRX If I Hint: In B, set v(x x) = x) - u(x , -x)). Consider an auxiliary function of the form w(x , xn) = Alx'I2 + Bxn + Cxn. Use the comparison principle to estimate v in BR and then derive a bound for v(0). Exercise 4.13. Let u be a nonzero harmonic function in Bl C I[8' and set N(r) = r fsr VuI2dx for any r E (0 , 1) . a f8Br u dS (1) Prove that N(r) is a nondecreasing function in r E (0, 1) and identify lim N(r). (2) Prove that, for any 0 < r < R < 1, 1 u2dS < IUBR 2N ( R ) Remark: The quantity N(r) is called the frequency. The estimate in (2) for R = 2r is referred to as the doubling condition. Exercise 4.14. Let St be a bounded domain in ][8n and f be a bounded function in SZ. Suppose w1 is the Newtonian potential defined in (4.4.2). 4. Laplace Equations (1) Prove that wf e f aI'(x - y)f (y) dye ,n. (2) Assume, in addition, that f is Ca in Sl for some a e (0, 1), i.e., for for any x E ][8n and i = 1, any x, y E Sl, 1.f (x) - f()I Prove that W f E C2(1), Ow f =fin SZ and the second derivatives of W f are ca in St. Chapter 5 Heat Equations The n-dimensional heat equation is given by ut - Du = 0 for functions u = u(x, t), with x e IRn and t e III. Here, x is the space variable and t the time variable. The heat equation models the temperature of a body conducting heat when the density is constant. Solutions of the heat equation share many properties with harmonic functions, solutions of the Laplace equation. In Section 5.1, we briefly introduce Fourier transforms. The Fourier transform is an important subject and has a close connection with many fields of mathematics, especially with partial differential equations. In the first part of this section, we discuss basic properties of Fourier transforms and prove the important Fourier inversion formula. In the second part, we use Fourier transforms to discuss several differential equations with constant coefficients, including the heat equation, and we derive explicit expressions for their solutions. In Section 5.2, we discuss the fundamental solution of the heat equation and its applications. We first discuss the initial-value problem for the heat equation. We prove that the explicit expression for its solution obtained formally by Fourier transforms indeed yields a classical solution under appropriate assumptions on initial values. Then we discuss regularity of arbitrary solutions of the heat equation using the fundamental solution and derive interior gradient estimates. In Section 5.3, we discuss the maximum principle for the heat equation and its applications. We first prove the weak maximum principle and the strong maximum principle for a class of parabolic equations more general than the heat equation. As applications, we derive a priori estimates of solutions of the initial/boundary-value problem and the initial-value problem. 147 5. Heat Equations We also derive interior gradient estimates by the maximum principle. In the final part of this section, we study the Harnack inequality for positive solutions of the heat equation. We point out that the Harnack inequality for the heat equation is more complicated than that for the Laplace equation we discussed earlier. As in Chapter 4, several results in this chapter are proved by multiple methods. For example, interior gradient estimates are proved by two methods: the fundamental solution and the maximum 5.1. Fourier Transforms The Fourier transform is an important subject and has a close connection with many fields of mathematics. In this section, we will briefly introduce Fourier transforms and illustrate their applications by studying linear differential equations with constant coefficients. 5.1.1. Basic Properties. We define the Schwartz class s as the collection of all complex-valued functions u e C°° (Rn) such that x133 u(x) is bounded in Ian for any c,,@ E Z, i.e., sup I xEIE In other words, the Schwartz class consists of smooth functions in Ian all of whose derivatives decay faster than any polynomial at infinity. It is easy to e_Ix12 is in the Schwartz class. check that u(x) = Definition 5.1.1. For any u e s, the Fourier transform u of u is defined by eu(x) dx for any E 1[8n. We note that the integral on the right-hand side makes sense for u e S. In fact, sup lul n f Iu(x)Idx for any e Ian, 1 n IIE This suggests that Fourier transforms are well defined for L1-functions. We will not explore this issue in this book. We now discuss properties of Fourier transforms. First, it is easy to see that the Fourier transformation is linear, i.e., for any u1, u2 E s and cl, c2 E C, (ciul + C2u2r= C1 u1 + C2u2. 5.1. Fourier Transforms The following result illustrates an important property of Fourier transforms. Lemma 5.1.2. Let u E S. Then u E S and for any multi-indices a, /3 E 7L+, = ()9) and = (-i)"31x'3u(e). Proof. Upon integrating by parts, we have f e_&au(x) dx f (i eu(x) dx = (ie)(e). Next, it follows easily from the definition of u that u E C°°(][8n). Then we have v u(S) _ Jan P e-u(x) dx (-ix)eu(x) dx=(_Z)IQIxQu(e) The interchange of the order of differentiation and integration is valid beS, we take any two multi-indices a and ,6. cause x'3u E S. To prove is bounded in W. For this, we first note It suffices to prove that a8 raie =(i) ax/3u(e) = (i)IH13I (ie)axt3u(e) =(_i)H&(xI3u) () (21) (-i)IaI+IQI fTh ea (xu(x)) dx. Hence sup I &1() I 1 n (271-)2 f l a (xu(x)) I dx <0°, -n since each term in the integrand decays faster than any polynomial because xQu E S. The next result relates Fourier transforms to translations and dilations. Lemma 5.1.3. Let u E S, a E IIBn, and k E I[8 \ {0}. Then u. - a) () = 5. Heat Equations Proof. By a simple change of variables, we have u(. - a)() = e_u(x - a) dx f e_u(x) dx = By another change of variables, we have (2ir) (27r) dx x We then obtain the desired results. For any u, v E S, it is easy to check that u * v E S, where u * v is the convolution of u and v defined by f u(x - y)v(y) dy. (u * v)(x) = Lemma 5.1.4. Let u, v E S. Then = (2r)(I(). Proof. By the definition of the Fourier transform, we have f e_u * v(x) dx (271-)2 JR __ (271-) 2 f e-Z R 2 u(x - y)v(y) dy) dx 1 n y)ev(y) dydx y) dxJ dy l) f ev(y) dy = (2)). The interchange of the order of integrations can be justified by Fubini's theorem. 5.1. Fourier Transforms To proceed, we note that 00 dx = . f00 The next result will be useful in the following discussions. Proposition 5.1.5. Let A be a positive constant and u be the function defined in Rn by u(x) = e-AHz. Then 2G() = n e 4A . (2A) 2 Proof. By the definition of Fourier transforms, we have n dx - (2ir) 2 i k=1 (27x)2 Hence it suffices to compute, for any 1) E R, °° e-Ztl1- A ts2 (27r) 2 1-00 After the change of variables s = t/A, we have 00 e = e- A 2 -(tV `1+Z -)2 dt e 4A 2ds = - o0 where L is the straight line Im z = ij/2/A in the complex z-plane. By the Cauchy's integral theorem and the fact that the integrand decays at the exponential rate as Re z -+ 00, we have e_z2 dz = - oo 1-00 e dt = . - (2A)a e 4A . dt _ Therefore, 1 (2ir) 2 ex-2 dx = This yields the desired result. (2A) 2 e4A M2 U We now prove the Fourier inversion formula, one of the most important results in the theory of Fourier transforms. 5. Heat Equations Theorem 5.1.6. Suppose u E S. Then u(x) = (2r)2n 1 f n eu(e) d The right-hand side of (5.1.1) is the Fourier transform of u evaluated at -x. Hence, u(x) _ (u)" (-x). It follows that the Fourier transformation u H u is a one-to-one map of S onto S. A natural attempt to prove (5.1.1) is to use the definition of Fourier transforms and rewrite the right-hand side as 1 dy d However, as an integral in terms of (y, ) E W x Rn, it is not absolutely convergent. Proof. Letting A = 1/2 in Proposition 5.1.5, we see that if uo(x) _ (5.1.2) then uo(1)= e Since up(x) = uo(-x), we conclude (5.1.1) for u = uo. Now we prove (5.1.1) for any u E S. We first consider u E S with u(0) = 0. We claim that there exist vi, , vn E S such that n for any x E ][8. 2G(x) _ To see this, we note that f u(x) = 1 t (u(tx)) dt = j=1 for some w E C°°(][8n), j = 1, Bl, we write , n. u(tx) dt = j=1 By taking cc E C( W) with cc = 1 in u(x) = cp(x)u(x) + (1 - cp(x))u(x) (X)WX) + We note that functions in the parentheses are in S, for j = 1, proves the claim. Lemma 5.1.2 implies = j=1 , n. This 5.1. Fourier Transforms We note that v E S by Lemma 5.1.2. Upon evaluating the right-hand side of (5.1.1) at x = 0, we obtain S 2= (`,7f)2 fRn (2ir) 2 JRfl =0. -1 We conclude that (5.1.1) holds at x = 0 for all u e S with u(0) = 0. We now consider an arbitrary u e S and decompose = (O) + (u - where uo is defined in (5.1.2). First, (5.1.1) holds for uo and hence for Next, since u - (O) o is zero at x = 0, we see that (5.1.1) holds for u - u(0)uo at x = 0. We obtain (5.1.1) for u at x = 0. Next, for any xo E Il8, we consider v(x) = u(x + xo). By Lemma 5.1.3, Then by (5.1.1) for v at x = 0, 1 xo) = v(O) = v)d = (2ir) 2f2 (2ir 1 2 JRn e0ui(e) d. This proves (5.1.1) for u at x = xo. Motivated by Theorem 5.1.6, we define v for any v E s by v(x) = e Il8"`. The function v is called the inverse Fourier transform of v. It is obvious that u(x) = u(-x). Theorem 5.1.6 asserts that u = (u)v. Next, we set, for any u, v E (u, v)L2(Rn) = uv dx. The following result is referred to as the Pars eval formula. Theorem 5.1.7. Suppose u, v e S. Then (u, v)L2(Rn) = (2G, v)L2(lRn). 5. Heat Equations Proof. We note (2;, 11)L2 (Rn) 2 Jl / IRn u(x) 1 / u(x)(x) dx = (u, V)L2(Rn), where we applied Theorem 5.1.6 to v. The interchange of the order of integrations can be justified by Fubini's theorem. O As a consequence, we have Plancherel's theorem. Corollary 5.1.8. Suppose u E S. Then IIUIIL2 (RTh) = IkLIIL2 (Rn). In other words, the Fourier transformation is an isometry in S with respect to the LZ-norm. Based on Corollary 5.1.8, we can extend Fourier transforms to L2(][8n). Note that the Fourier transformation is a linear operator from S to S and that S is dense in L2(]R). For any u E L2(][8n), we can take a sequence {uk} C S such that Uk -+ u in L2 (Wi) as k -+ oo. Corollary 5.1.8 implies lU/c - ulIIL2(lIFm) - IIUk1IIL2(Rm) - Iluk - ulIILa(IIgn) Then, {uk} is a Cauchy sequence in L2(][8n) and hence converges to a limit in LZ(IlSn). This limit is defined as u, i.e., u/c-3u inL2(W) ask-oo. It is straightforward to check that u is well defined, independently of the choice of sequence {u/c}. 5.1.2. Examples. The Fourier transform is an important tool in studying linear partial differential equations with constant coefficients. We illustrate this by two examples. Example 5.1.9. Let f be a function defined in W. We consider (5.1.3) -Du +u = f in IlS". 5.1. Fourier Transforms Obviously, this is an elliptic equation. We obtained an energy identity in Section 3.2 for solutions decaying sufficiently fast at infinity. Now we attempt to solve (5.1.3). We first seek a formal expression of its solution u by Fourier transforms. In doing so, we will employ properties of Fourier transforms without justifications. By taking the Fourier transform of both sides in (5.1.3), we obtain, by Lemma 5.1.2, (1+ iei2e) = 1(e). Then 1(C) l+lEl By Theorem 5.1.6, u(x) = 1+ICI It remains to verify that this indeed yields a classical solution under appropriate assumptions on f. Before doing so, we summarize the simple process we just carried out. First, we apply Fourier transforms to equation (5.1.3). Basic properties of Fourier transforms allow us to transfer the differential (27r)2 equation (5.1.3) for u to an algebraic equation (5.1.4) for u. By solving this algebraic equation, we have an expression for u in terms of f. Then, by applying the Fourier inversion formula, we obtain u in terms of f. We should point out that it is not necessary to rewrite u in an explicit form in terms of f. Proposition 5.1.10. Let f E S and u be defined by (5.1.5). Then u is a smooth solution of (5.1.3) in S. Moreover, (lull + 2IVu12 + IV2u12) dx = If I2 dx. Proof. We note that the process described above in solving (5.1.3) by Fourier transforms is rigorous if f E S. In the following, we prove directly from (5.1.5) that u is a smooth solution. By Lemma 5.1.2, f E S for f E S. Then f/(1 + Cl2) E S. Therefore, u defined by (5.1.5) is in S by Lemma 5.1.2. For any multi-index a E 7L+, we have 3&u(x) = f RTh eZX 1+2 d. In particular, ou(x) _ uxkxk(x) _ k=1 eX 1e121(e) dC, 1+1C12 RTh 5. Heat Equations and hence -u(x) + u(x) = f ef() d. n By Theorem 5.1.6, the right-hand side is 1(x). To prove the integral identity, we obtain from (5.1.4) that II2 + 2II2IuI2 + 1e141u12 = By writing it in the form n we have, by Lemma 5.1.2, n A simple integration yields By Corollary 5.1.8, we obtain l I l 2+2u2+ k=1 dx = This is the desired identity. IfI2dx. D Example 5.1.11. Now we discuss the initial-value problem for the nonhomogeneous heat equation and derive an explicit expression for its solution. Let f be a continuous function in 1[8n x (0, oo) and uo a continuous function in R. We consider 1bt - 026 = ,f (5.1.6) lri RmX (0,00), u(.,0)=uo onR. Although called an initial-value problem, (5.1.6) is not the type of initialvalue problem we discussed in Section 3.1. The heat equation is of the second order, while only one condition is prescribed on the initial hypersurface {t = 0}, which is characteristic. Suppose u is a solution of (5.1.6) in C2(][8x (0, oo)) fl C(ll8Th x [0, oo)). We now derive formally an expression of u in terms of Fourier transforms. In 5.1. Fourier Transforms the following, we employ Fourier transforms with respect to space variables only. With an obvious abuse of notation, we write ^ , t) = u( u(x, t) dx. We take Fourier transforms of both sides of the equation and the initial condition in (5.1.6) and obtain, by Lemma 5.1.2, ut + II2u =f in Rn x (0, oo), uo on Rn. E W as a parameter. This is an initial-value problem for an ODE with Its solution is given by , t) = s) ds. Now we treat t as a parameter instead. For any t> 0, let K(x, t) satisfy K t) _ Then , t) _ (2)(, t - s).f (, s) ds. Therefore Theorem 5.1.6 and Lemma 5.1.4 imply u(x, t) = (5.1.7) K(x - y, t)up(y) dy K(x - y, t - s) f(y, s) dyds, J J for any (x, t) E I[8n x (0, oo). By Theorem 5.1.6 and Proposition 5.1.5, we have K (x, t) = (2) n fRfl e e, ZX' _I I2t d or (5.1.8) K(x, t) _ (47rt) 2 4t , for any (x, t) E ][8n x (0, oo). The function K is called the fundamental solution of the heat equation. The derivation of (5.1.7) is formal. Having derived the integral formula for u, we will prove directly that it indeed defines a solution of the initialvalue problem for the heat equation under appropriate assumptions on the initial value uo and the nonhomogeneous term f. We will pursue this in the next section. 5. Heat Equations 5.2. Fundamental Solutions In this section, we discuss the heat equation using the fundamental solution. We first discuss the initial-value problem for the heat equation. We prove that the explicit expression for its solution obtained formally by Fourier transforms indeed yields a classical solution under appropriate assumptions on initial values. Then we discuss regularity of solutions of the heat equation. Finally we discuss solutions of the initial-value problem for nonhomogeneous heat equations. The n-dimensional heat equation is given by ut - Du=O, for u = u(x, t) with x E ][8n and t E R. We note that (5.2.1) is not preserved by the change t H -t. This indicates that the heat equation describes an irreversible process and distinguishes between past and future. This fact will be well illustrated by the Harnack inequality, which we will derive later in the next section. Next, (5.2.1) is preserved under linear transforms x' _ Ax and t' _ A2t for any nonzero constant A, which leave the quotient 1x12/t invariant. Due to this fact, the expression x12/t appears frequently in connection with the heat equation (5.2.1). In fact, the fundamental solution has such an expression. If u is a solution of (5.2.1) in a domain in ][8n x I[8, then for any (xo, to) in this domain and appropriate r > 0, uxo,r(x, t) = u(xo + rx, to + r2t) is a solution of (5.2.1) in an appropriate domain in Rn x R. In the following, we denote by C2" the collection of functions which are C2 in x and C1 in t. These are the functions for which the heat equation is well defined classically. 5.2.1. Initial-Value Problems. We first discuss the initial-value problem for the heat equation. Let uo be a continuous function in R. We consider (,.2.21 ut-L U=0 inRm x (0,00), U(.,0)=UO onR. We will seek a solution u E C2'1(][8n x (0, oo)) fl C(I[8n x [0, oo)). We first consider a special case where uo is given by a homogeneous polynomial P of degree d in R. We now seek a solution u in I[8n x (0, oo) which is a p-homogeneous polynomial of degree d, i.e., u (Ax, A2t) _ Adu(x, t), 5.2. Fundamental Solutions for any (x, t) E ][8n x (0, oo) and A> 0. To do this, we expand u as a power series of t with coefficients given by functions of x, i.e., u(x,t) = ak(x)tk. k=0 Then a straightforward calculation yields a0=P, ak = for any k > 1. Therefore for any k > 0, ak _ Since P is a polynomial of degree d, it follows that [d/2}+1 p = 0, where [d/2] is the integral part of d/2, i.e., [d/2] = d/2 if d is an even integer and [d/2] _ (d - 1)/2 if d is an odd integer. Hence [z] kP(x)tk' k-o We note that u in fact exists in I[8x R. For n = 1, let ud be a p-homogeneous polynomial of degree d in ][8 x ][8 satisfying the heat equation and ud(x, 0) _ xd. The first five such polynomials are given by ul(x,t) =x, U2(X, t) = x2 + 2t, 263x, t) = x3 + 6xt, u4(x, t) = x4 + 12x21 + 12t2, us(x, t) = x5 + 20x3t + 60xt2. We now return to (5.2.2) for general uo. In view of Example 5.1.11, we set, for any (x,t) E ll8Th x (0,oo), (5.2.3) K (x, t) = if 4t and (5.2.4) u(x, t) = K(x - y, t)uo(y) dy. In Example 5.1.11, we derived formally by using Fourier transforms that any solution of (5.2.2) is given by (5.2.4). Having derived the integral formula for u, we will prove directly that it indeed defines a solution of (5.2.2) under appropriate assumptions on the initial value up. Definition 5.2.1. The function K defined in I[8n x (0, oo) by (5.2.3) is called the fundamental solution of the heat equation. 5. Heat Equations We have the following result concerning properties of the fundamental solution. Lemma 5.2.2. Let K be the fundamental solution of the heat equation defined by (5.2.3). Then (1) K(x, t) is smooth for any x E 1[8n and t> 0; (2) K(x, t) > 0 for any x E 1[8n and t> 0; (3) (at - 0)K (x, t) = 0 for any x E ][8n and t> 0; (4) K(x, t)dx = 1 for any t > 0; (5) for any 6> 0, t-+O+ Rn\B5 K(x, t) dx = 0. Proof. Here (1) and (2) are obvious from the explicit expression of K in (5.2.3). We may also get (3) from (5.2.3) by a straightforward calculation. For (4) and (5), we simply note that K(x, t)dx = ? f l>2f e-'' dry. This implies (4) for S = 0 and (5) for 6> 0. = K(.,t2) Figure 5.2.1. Graphs of fundamental solutions for t2 > ti > 0. Now we are ready to prove that the integral formula derived by using Fourier transforms indeed yields a classical solution of the initial-value problem for the heat equation under appropriate assumptions on u0. Theorem 5.2.3. Let uo be a bounded continuous function in I[8n and u be defined by (5.2.4). Then u is smooth in I[8n x (0, oo) and satisfies '1dt - 026 = 0 29Z Rn X (0,00). 5.2. Fundamental Solutions Moreover, for any xo E u(x,t) = uo (xo) We note that the function u in (5.2.4) is defined only fort > 0. We can extend u to {t = 0} by setting 0) = uo on ][87. Then u is continuous up to {t = 0} by Theorem 5.2.3. Therefore, u is a classical solution of the initial-value problem (5.2.2). The proof of Theorem 5.2.3 proceeds as that of the Poisson integral formula for the Laplace equation in Theorem 4.1.9. Proof. Step 1. We first prove that u is smooth in Ian x (0, oo). For any multi-index a e Z+ and any nonnegative integer k, we have formally a at u(x, t) = f a at x(x - y, t)up(y) dy. n In order to justify the interchange of the order of differentiation and integration, we need to check that, for any nonnegative integer m and any Ix - yltme Jdy < oo. This follows easily from the exponential decay of the integrand if t > 0. Hence u is a smooth function in 1[87 x (0, oo). Then by Lemma 5.2.2(3), (Ut - Du)(x,t) = f (Kt - y,t)uo(y) dy = 0. We point out for future references that we used only the boundedness of uo. Step 2. We now prove the convergence of u(x, t) to uO(xO) as (x,t) (xO, 0). By Lemma 5.2.2(4), we have uO(xO) f K(x - y, t)uo(xo) dy. Then u(x,t) - uO(xO) = f nK(x - uo(xo)) dy = I1 + I2, Ii = . . I2Wz\BS (xo) for a positive constant S to be determined. For any given e > 0, we can choose S = S(e) > 0 small so that Iuo(y) - uo(xo)J <, 5. Heat Equations for any y E Bb(xp), by the continuity of uo. Then by Lemma 5.2.2(2) and 4) Ilil < Ba(xo) K(x - y, t)Iuo(y) - uo(xo)I dy < s. Since uo is bounded, we assume that IuoI <M for some positive constant M. We note that Ix - I > 6/2 for any y E ][8Th \ Bb(xo) and x E Bb12(xo). By Lemma 5.2.2(5), we can find a b' > 0 such that Lfl\B6 (xo) for any x E Bb12(xo) and t E (0, S'), where S' depends on s and S = S(s), and hence only on e. Then 1I21 < f"\Ba (moo) K(x - Iu(x,t) -uo(0)I 2s, for any x E Ba12(xo) and t E (0, S'). We then have the desired result. Under appropriate assumptions, solutions defined by (5.2.4) decay as time goes to infinity. Proposition 5.2.4. Let up E Ll(I[8) and u be defined by (5.2.4). Then for anyt>0, su The proof follows easily from (5.2.4) and the explicit expression for the fundamental solution K in (5.2.3). Now we discuss a result more general than Theorem 5.2.3 by relaxing the boundedness assumption on uo. To seek a reasonably more general assumption on initial values, we examine the expression for the fundamental solution K. We note that K in (5.2.3) decays exponentially in space variables with a large decay rate for small time. This suggests that we can allow an exponential growth for initial values. In the convolution formula (5.2.4), a fixed exponential growth from initial values can be offset by the fast exponential decay in the fundamental solution, at least for a short period of time. To see this clearly, we consider an example. For any a > 0, set (1 - 4at) 2 for any x E RT and t < 1/4a. It is straightforward to check that 5.2. Fundamental Solutions Note that G(x, 0) = for any x E Rn. Hence, viewed as a function in Rn x [0,1/4a), G has an exponential growth initially for t = 0, and in fact for any t < 1/4a. The growth rate becomes arbitrarily large as t approaches 1/4a and G does not exist beyond t = 1/4a. Now we formulate a general result. If uo is continuous and has an exponential growth, then (5.2.4) still defines a solution of the initial-value problem in a short period of time. Theorem 5.2.5. Suppose up E C(Rn) satisfies uC Mc2 for any x E IlBn, for some constants M, A > 0. Then u defined by (5.2.4) is smooth in Rn x (0, 4A ) and satisfies ut - L u = 0 in IRn X Moreover, for any xo E Rn, lim u (x, t) = uo (xo) . The proof is similar to that of Theorem 5.2.3. Proof. The case A = 0 is covered by Theorem 5.2.3. We consider only A> 0. First, by the explicit expression for K in (5.2.3) and the assumption on uo, we have (4irt) 2 A simple calculation shows that 2 1-4Atx + 1 - 4At IxI 2 . Hence for any (x, t) E Rn x (0,1/(4A)), we obtain < < n e 1-4At ICI (4irt) 2 - (1-4At) 2 2 I e 1-4At A The integral defining u in (5.2.4) is convergent absolutely and uniformly for 1/(4A)], for any > 0 small. Hence, u is continuous (x, t) E Il8n x 5. Heat Equations in Il8x (0,1/(4A)). To show that u has continuous derivatives of arbitrary order in Il8x (0,1/(4A)), we need only verify Ix _ dy < oo, for any m > 0. The proof form > 1 is similar to that for m = 0 and we omit the details. Next, we need to prove the convergence of u(x, t) to up(xp) as (x, t) O (xO, 0). We leave the proof as an exercise. Now we discuss properties of the solution u given by (5.2.4) of the initial- value problem (5.2.2). First for any fixed x E ll8n and t> 0, the value of u(x, t) depends on the values of uo at all points. Equivalently, the values of up near a point xo E I[8n affect the value of u(x, t) at all x as long as t> 0. We interpret this by saying that the effects travel at an infinite speed. If the initial value uo is nonnegative everywhere and positive somewhere, then the solution u in (5.2.4) at any later time is positive everywhere. We will see later that this is related to the strong maximum principle. Next, the function u(x, t) in (5.2.4) becomes smooth fort > 0, even if the initial value uo is simply bounded. This is well illustrated in Step 1 in the proof of Theorem 5.2.3. We did not use any regularity assumption on uo there. Compare this with Theorem 3.3.5. Later on, we will prove a general result that any solutions of the heat equation in a domain in ][8n x (0, oo) are smooth away from the boundary. Refer to a similar remark at the end of Subsection 4.1.2 for harmonic functions defined by the Poisson integral formula. We need to point out that (5.2.4) represents only one of infinitely many solutions of the initial-value problem (5.2.2). The solutions are not unique without further conditions on u, such as boundedness or exponential growth. In fact, there exists a nontrivial solution u E C°° (1Rn x ][8) of ut - Du = 0, with u - 0 for t <0. In the following, we construct such a solution of the one-dimensional heat equation. Proposition 5.2.6. There exists a nonzero smooth function u E C°° (R x [0, oo)) satisfying in Ilk x [0, oo), on Ilk. Proof. We construct a smooth function in Il8 x II8 such that ut =0 in Il8 x ][8 and u - 0 for t < 0. We treat {x = 0} as the initial curve and 5.2. Fundamental Solutions attempt to find a smooth solution of the initial-value problem ut - uxx = 0 in R x R, u(0, t) = a(t), ux (0, t) = 0 for t E R, for an appropriate function a in R. We Write u as a power series in x: u(x, t) = >ak(t)xk. =o Making a simple substitution in the equation ut = uxx and comparing the coefficients of powers of x, we have a''_2=k(k-1)ak foranyk>2. Evaluating u and ux at x = 0, we get Hence for any k > 0, a2k(t) = a2k+i (t) = 0. Therefore, we have a formal solution 00 We need to choose a(t) appropriately so that u(x, t) defined above is a smooth function and is identically zero fort < 0. To this end, we define a(t) _ fort > 0, tort < U. Then it is straightforward to verify that the series defining u is absolutely convergent in ][8 x R. This implies that u is continuous. In fact, we can prove that series defining arbitrary derivatives of u are also absolutely convergent in Il8 x Ilk. We skip the details and leave the rest of the proof as an exercise. Next, we discuss briefly terminal-value problems. For a fixed constant T > 0, we consider ut - uxx = 0 in Ilk x (0, T), T) = cp on R. Here the function co is prescribed at the terminal time T. This problem is not well posed. Consider the following example. For any positive integer m, let um(x, t) = em2(T -t) sin(mx), 5. Heat Equations solves this problem with the terminal for any (x, t) E Il8 x [O, T). Then value corn(x) = for any x E ]E8. We note that sup kPrnI = 1 and for any t e [0, T), em2(T-t) -f oo as m -f oo. There is no continuous dependence of solutions on the values prescribed at the terminal time T. 5.2.2. Regularity of Solutions. Next, we discuss regularity of solutions of the heat equation with the help of the fundamental solution. We will do this only in special domains. For any (xO, to) E ] [U x ][8 and any R> 0, we define QR(xo, t0) = BR(xo) X (to - R2, to]. We point out that subsets of the form QR(xo, to) play the same role for the heat equation as balls for the Laplace equation. If u is a solution of the heat equation ut - Du = 0 in QR(O), then uR(x, t) = u(Rx, RZt) is a solution of the heat equation in Q1(0). Figure 5.2.2. The region QR (xo, to). For any domain D in IRn x Ilt, we denote by C2" (D) the collection of functions in D which are C2 in x and C1 in t. We first have the following regularity result for solutions of the heat equation. 5.2. Fundamental Solutions Theorem 5.2.7. Let u be a C2" -solution of ut - Du = 0 in QR(xo, to) for some (xO, to) E ][8' x 1[8 and R> 0. Then u is smooth in QR(xo, to). Proof. For simplicity, we consider the case (xO, to) _ (0,0) and write QR = BR x (-R2,0]. Without loss of generality, we assume that u is bounded in QR. Otherwise, we consider u in Qr for any r u(x, t) = 'BR K( x - y, t + R2)u(y, -R2) dy t avy 1 Rz asR K(x-y,t-s)(y,s) -u(y, s) -(x (x - y, t - s)J dSyds. y We first assume this identity and prove that it implies the smoothness of u. We note that the integrals in the right-hand side are only over the bottom and the side of the boundary of BR x (-R2, t]. The first integral is over BR x {-R2}. For (x, t) E QR, it is obvious that t + R2 > 0 and hence there is no singularity in the first integral. The second integral is over 8BR x (-R2, t]. By the change of variables r = t - s, we can rewrite it as T a lye t - T - t - T) There is also no singularity in the integrand since x E BR, y E SBR, and r > 0. Hence, we conclude that u is smooth in QR. We now prove the claim. Let K be the fundamental solution of the heat equation as in (5.2.3). Denoting by (y, s) points in QR, we set K(y, s) = K(x - y, t - s) = Ix-y12 e 4W ) for s< t. R3+zk=o. Hence, 0 = K(us - Dyu) _ (uk)8 + (uky2 z=i = (uk)8 + (uky2 - Kuy2)y2 2=1 u(Ks + DyK) 5. Heat Equations For any e > 0 with t - e > -R2, we integrate with respect to (y, s) in BR x (-R2, t - e). Then BR K(x - y, e)u(y, t - e) dy = f K(x - y, t - (-R2))u(y, -R2) dy R Ii R2 - y,t - -u(y, s) (x - y,t - s)] dSds. Now it suffices to prove that lim K(x - y, e)u(y, t - e)dy = u(x, t). The proof proceeds similarly to that in Step 2 in the proof of Theorem 5.2.3. The integral here over a finite domain introduces few changes here. We omit the details. Now we prove interior gradient estimates. Theorem 5.2.8. Let u be a bounded C2" -solution of ut - Du = 0 in QR(xo, to) for some (xO, to) E IIBn x ][8 and R> 0. Then I c sup R QR(xo,to) where C is a positive constant depending only on n. Proof. We consider the case (xO, to) _ (0,0) and R = 1 only. The general case follows from a simple translation and dilation. (Refer to Lemma 4.1.11 for a similar dilation for harmonic functions.) In the following, we write Q,. = Br x (-r2, 0] for any r E (0, 1]. We first modify the proof of Theorem 5.2.7 to express u in terms of the fundamental solution and cutoff functions. We denote points in Qi by (y, s). Let K be the fundamental solution of the heat equation given in (5.2.3). As in the proof of Theorem 5.2.7, we set, for any fixed (x, t) E Ql/4, K(y, s) = K(x - y, t - s) = By choosing a cutoff function cp e C°° (Q,) with supp cp C in Q12, we set v = cpK. for s 5.2. Fundamental Solutions We need to point out that v(y, s) is defined only for s < t. For such a function v, we have n Q = 4J(263 - Dy4l) _ (uv)8 + 2G1JyZ - u(ZJs + For any e > 0, we integrate with respect to (y, s) in Bl x (-1, t-e). We note that there is no boundary integral over Bl x {-1} and aBl x (-1, t - e), since cp vanishes there. Hence f (u)(y,t - ')K(x - y, e) dy = f l x(83 + i x (-L,t-E) Then similarly to the proof of Theorem 5.2.3, we have, as e -+ 0, (x, t)u(x, t) = f u(as + l x (-1,t) In view of we obtain for any (x, t) E Q1/4 that u(x, t) = JB1 x (-it) u((cps + Dycp)K + 2Vycp VyK) dyds. We note that each term in the integrand involves a derivative of cp, which is zero in Q1/2 since cp - 1 there. Then the domain of integration D is actually given by D = B4 x (-(3/4)2, t] \ B2 x (-(1/2)2, t]. The distance between any (y, s) E D and any (x, t) E Q1/4 has a positive lower bound. Therefore, the integrand has no singularity in D. (This gives an alternate proof of the smoothness of u in Q1/4.) D2 t Figure 5.2.3. A decomposition of D for n = 1. Next, we have, for any (x, t) E Q1/4, vxu(x, t) = u((cps + Dycp)vxK + 2pycp vxV R) dyds. D 5. Heat Equations Let C be a positive constant such that 2lVy(pl $ C, Iosl + IVcoI C C. t)I C C Df (IvkI + IVVkDIuIdyds. By the explicit expression for K, we have - ynI 1e c _Ix (t - s) 2+ Io KI and -gtys) l rylx-'LJI2+(t-S) (t-s)2 e Obviously, for any (x, t) E Q1/4 and any (y, s) E D, x-yJ<1, 0 t) I C i=1 n+z e s) 2 _ Ix-yI2 4(t-s) I()I dyds. Now we claim that, for any (x, t) E Q1/4, (y, s) E D and i = 1, 2, 1 n _ Ix-yI2 4(t-s) Then we obtain easily for any (x, t) E Q1/4 that t) I < c sup lul . To prove the claim, we decompose D into two parts, Dl = BZ x (-(3/4)2, _(1/2)2), DZ = (B3\Bl) x (-(3/4)2, t). 4 We first consider Dl. For any (x, t) E Q1/4 and (y, s) E Dl, we have t-s> g,1 and hence 1 .e - 4(t-s) (t-s) 2 Next, we consider D2. For any (x, t) E Q1/4 and (y, s) E D2, we have y-x> 4, 0 5.2. Fundamental Solutions and hence, with r = (t - s)-1, 1 1 (t - s) n2 +i e 4(t-s) < (t - s) n2 +i e -' 44 = T 2 +Ze - C, a for any 'r> (4/3)2. This finishes the proof of the claim. Next, we estimate derivatives of arbitrary order. Theorem 5.2.9. Let u be a bounded C2" -solution of ut - Du = 0 in QR(xo, to) for some (xO, to) E ]E8n x ][8 and R> 0. Then for any nonnegative integers m and k, R + 21 nkem+2k-1(m + 2k)! where C is a positive constant depending only on n. Proof. For x-derivatives, we proceed as in the proof of Theorem 4.1.12 and obtain that, for any c e Z+ with Ic I = m, C'mem-1 ml I a«u (x0 ,t0) I For t-derivatives, we have ut = Du and hence at u = Du for any positive integer k. We note that there are r terms of x-derivatives of u of order 2k in OJu. Hence I nk This implies the desired result easily. The next result concerns the analyticity of solutions of the heat equation on any time slice. Theorem 5.2.10. Let u be a C2" -solution of ut - Du = 0 in QR(xo, to) for t) is analytic in BR(xo) for some (xo, to) E ]E8n x ][8 and R> 0. Then t) is any t e (to - R2, to]. Moreover, for any nonnegative integer k, at analytic in BR(xp) for any t e (to - R2, to]. The proof is identical to that of Theorem 4.1.14 and is omitted. In general, solutions of ut - Du = 0 are not analytic in t. This is illustrated by Proposition 5.2.6. 5. Heat Equations 5.2.3. Nonhomogeneous Problems. Now we discuss the initial-value problem for the nonhomogeneous equation. Let f be continuous in ll8x (O,oo). Consider ut - Du = f lri ll8n X (0, oo), Let K be the fundamental solution of the heat equation as in (5.2.3), 1 K(x, t) _ (4irt) 2 e- 4t for any (x,t) E I[8Th x (O,oo). Define u(x, t) = K(x - y, t - s) f (y, s) dyds, for any (x, t) E ][8n x (0, oo). If f is bounded in ll8n x (0, oo), it is straightforward to check that the integral in the right-hand side of (5.2.5) is well defined and continuous in (x, t) E ll8Th x (0, oo). By Lemma 5.2.2(4), we have t I< sup Iflff K(y, s ) dyds = t x (o,t) If I 11n x (o,t) fn To discuss whether u is differentiable, we note that 1 x2 _ 1j (x, t) _ - (4irt) n2 2t e 4t (x,t) = (xx _ Szj Ii 4t For any t> 0, by the change of variables x = 2z/, we have IK(x, t) I dx = e -Iz12 1z2I dz = -7r2/ n Vt 1 n and 1 J IKxx(x,t)Idx= n Hence E Ll(I[8n x (O, T)) and formal differentiation of (5.2.5) yields (5. 2.6) u(x, t) _ z2 zi - bi j L1(Il8' x (O, T)) for any T> 0. A (x - y, t - s) f (y, s) dyds. fl 5.2. Fundamental Solutions We denote by I the integral in the right-hand side. If f is bounded in 1[8Th x (0,oo), then ff t If I R X (o,t) (x - y, t - s) I dyds n 1 ds = If I Jo If I (t - s) (,t) R Hence, the integral in the right-hand side of (5.2.6) is well defined and continuous in (x, t) E 1[8n x (0, oo). We will justify (5.2.6) later in the proof of Theorem 5.2.11 under extra assumptions. Even assuming the validity of (5.2.6), we cannot continue differentiating (5.2.6) to get the second xderivatives of u if f is merely bounded, since Ll (][8' x (0, T)) for any T > 0. In order to get the second x-derivatives of u, we need extra assumptions on f. x (o ,t) Theorem 5.2.11. Let f be a bounded continuous function in ][8n x (0, oo) with bounded and continuous 0 f in ][8Th x (0, oo) and u be defined by (5.2.5) for (x, t) E I[8n x (0, oo). Then u is C2" in 1 [8Th x (0, oo) and satisfies ut - Du = f in 1[8" x (0, oo), and for any xo E Ian, u(x, t) = 0. lim (x,t)-+(xo,O) Moreover, if f is smooth with bounded derivatives of arbitrary order in W x (0, oo), then u is smooth in Rn x (0, oo). Proof. We first assume that f and V f are continuous and bounded in I[8' x (0, oo). By the explicit expression for K and the change of variables y = x + 2z t - s, we obtain from (5.2.5) that u(x, t) 7r 2 eII2 f (x + 2z s) dzds, for any (x, t) E 1[8n x (0, oo). It follows easily that the limit of u(x, t) is zero ast-*0. A simple differentiation yields u(x, t) = x2 t- s s) dzds + 2z, 2 t-s af (x + 2z 1 72 n Upon integrating by parts, we have uxi(x, t) = n ?f 2 f(x + 2z t s) dzds. s, s) dzds. 5. Heat Equations (We note that this is (5.2.6) by the change of variables y = x + 2z t - s.) A differentiation under the integral signs yields 1n ff _Iz12 e fXj (x + 2z t - s, s) dzds. A similar differentiation of (5.2.7) yields ut(x, t) = (x, t) dz n 7T 2 e_I zl2 n 2=1 (x + 2z t - s, s) dzds. In view of the boundedness of V f , we conclude that ut and Ux x3 are continuous in (x, t) E Rn x (0, oo). We note that the first term in the righthand side of ut (x, t) is simply f(x, t). Hence, n ut (x, t) - 0U (x, t) = ut (x, t) - Cx t) = f(x,t), i=1 for any (x,t) E IIS"' x (0,oo). If f has bounded x-derivatives of arbitrary order in II8Th x (0, oo), by (5.2.7) we conclude that x-derivatives of U of arbitrary order exist and are continuous in I[8x (0, oo). By the equation Ut = 0U + f, we then conclude that ut and all its x-derivatives exist and are continuous in ll8n x (0, oo). Next, utt = Dut + ft = 0 +1) + ft. Hence wtt and all its x-derivatives exist and are continuous in I[8x (0, oo). Continuing this process, all derivatives of u exist and are continuous in W1x(0,oo). U By combining Theorem 5.2.3 and Theorem 5.2.11, we conclude that, under the assumptions on uo and f as above, the function u given by w(x, t) = K(x - y, t)uo (y) dy t is a solution of 26t - L2G = f lri fin" X (0,00), U(,0)U0 onR. Theorem 5.2.11 is optimal in the C°°-category in the sense that the smoothness of f implies the smoothness of U. However, it is not optimal 5.3. The Maximum Principle concerning finite differentiability. In the equation ut - Du = f, f is re- lated to the second x-derivatives and the first t-derivative of u. Theorem 5.2.11 asserts that the continuity of f and its first x-derivatives implies the continuity of Vu and ut. It is natural to ask whether the continuity of f itself is sufficient. This question has a negative answer, and an example can be constructed by modifying Example 4.4.4. Hence, spaces of functions with continuous derivatives are not adequate for optimal regularity. What is needed is the Holder spaces adapted to the heat equation, referred to as the parabolic Holder spaces. The study of the nonhomogeneous heat equation, or more generally, nonhomogeneous parabolic differential equations, in parabolic Holder spaces is known as the parabolic version of the Schauder theory. It is beyond the scope of this book to give a presentation of the Schauder theory. Refer to Subsection 4.4.1 for discussions of the Poisson equation. 5.3. The Maximum Principle In this section, we discuss the maximum principle for a class of parabolic differential equations slightly more general than the heat equation. As applications of the maximum principle, we derive a priori estimates for mixed problems and initial-value problems, interior gradient estimates and the Harnack inequality. 5.3.1. The Weak Maximum Principle. Let D be a domain in Il8Th x R. The parabolic boundary apD of D consists of points (xO, to) E 8D such that Br(X) x (to - r2, to] contains points not in D, for any r > 0. We denote by C2" (D) the collection of functions in D which are C2 in x and Cl in t. We often discuss the heat equation or general parabolic equations in cylinders of the following form. Suppose St C Il8" is a bounded domain. For any T> 0, set StT=Stx(O,T]={(x,t): xESt,O (s X {t = o}) u (asp X (0,T]) u (asp X {o}). In other words, parabolic boundary consists of the bottom, the side and the bottom corner of the geometric boundary. For simplicity of presentation, we will prove the weak maximum principle only in domains of the form 12T. We should point out that the results in this subsection hold for general domains in Rx R. We first prove the weak maximum principle for the heat equation, which asserts that any subsolution of the heat equation attains its maximum on 5. Heat Equations the parabolic boundary. Here, a C2" (T)-function u is a subsolution of the heat equation if ut - Du < 0 in SZT. Theorem 5.3.1. Suppose U E C2"(S2T) fl C(S2T) satisfies ut-Du<0 inS2T. Then u attains on 8PS2T its maximum in S2T, i. e., maxis = max u. ap T Proof. We first consider a special case where ut - Du < 0 and prove that u cannot attain in StT its maximum in StT. Suppose, to the contrary, that there exists a point Po = (xo, to) E StT such that u(Po) =maxis. 0 and the Hessian matrix is nonpositive definite. Moreover, ut(Po) = 0 if to e (O, T), and ut(Po) > 0 if to = T. Hence ut - Du > 0 at Po, which is a contradiction. We now consider the general case. For any s > 0, let U6(X,t) =U(x,t)-Et. Then (D,-L)U6=ut-ou-E<0. By the special case we just discussed, u6 cannot attain in SZT its maximum. Hence max u6 = max u6 . 8p T Then max u (x, t) = max(u6 (x, t) + Et)) < max u (x, t) + ET = max u6 (x, t) + ET < max u(x, t) + sT. ap T ap cT Letting E - 0, we obtain the desired result. Next, we consider a class of parabolic equations slightly more general than the heat equation. Let c be a continuous function in SZT. Consider Lu = ut - Du -I- cu in S2T. We prove the following weak maximum principle for subsolutions of L. Here, a C2"(StT)-function u is a subsolution of L if Lu < 0 in SZT. Similarly, a C2'1 (T)-function u is a supersolution of L if Lu > 0 in S2T. 5.3. The Maximum Principle Theorem 5.3.2. Let c be a continuous function in SZT with c > 0. Suppose u E C2'1(S2T) fl C(SZT) satisfies in S2T. Then u attains on apI T its nonnegative maximum in SZT, i. e., m_ax u_< max u+ . ap T We note that u+ is the nonnegative part of u given by u+ = max{0, u}. The proof of Theorem 5.3.2 is a simple modification of that of Theorem 5.3.1 and is omitted. Now, we consider a more general Theorem 5.3.3. Let c be a continuous function in S2T with c > -co for a nonnegative constant co. Suppose u E C2" (SZT) fl C(S2T) satisfies ut - Du + cu in S2T, u<0 onD ZT. Then u < 0 in SZT. Continuous functions in 12T always have global minima. Therefore, c > -co in S2T for some nonnegative constant co if c is continuous in S2T. Such a condition is introduced to emphasize the role of the minimum of c. Proof. Let v(x, t) = e-c0tu(x, t). Then u = c°0tv and ut - Du + cu = eC0t (vt - Ov -I- (c -I- co)v). Hence vt -Ov+ (c+co)v <0. With c + co > 0, we obtain, by Theorem 5.3.2, that m_ax v < max v+ = max ap T SZT ap T = 0. Hence u < 0 in SZT . The following result is referred to as the comparison principle. Corollary 5.3.4. Let c be a continuous function in SZT with c > -co for a nonnegative constant co. Suppose u, v E C2"(S2T) f1 C(T) satisfy ut - Du - cu < vt - Ov - cv Then u in S2T, 5. Heat Equations In the following, we simply say by the maximum principle when we apply Theorem 5.3.2, Theorem 5.3.3 or Corollary 5.3.4. Before we discuss applications of maximum principles, we compare maximum principles for elliptic equations and parabolic equations. Consider Leu = -Du + c(x)u in SZ Lpu = ut - Du + c(x, t)u in I T - I x (O,T). We note that the elliptic operator Le here has a form different from those in Section 4.3.1, where we used the form 0 + c. Hence, we should change the assumption on the sign of c accordingly. If c > 0, then Leu < 0 = u attains its nonnegative maximum on aSZ, Lpu < 0 = u attains its nonnegative maximum on apSZT. If c - 0, the nonnegativity condition can be removed. For c > 0, comparison principles can be stated as follows: Lpu < Lpv in QT, u < v on D I T = u < v in I T. In fact, the comparison principle for parabolic equations holds for c > -CO, for a nonnegative constant co. In applications, we need to construct auxiliary functions for comparisons. Usually, we take x12 or for elliptic equations and Kt + 1x12 for parabolic equations. Sometimes, auxiliary functions are constructed with the help of the fundamental solutions for the Laplace equation and the heat equation. 5.3.2. The Strong Maximum Principle. The weak maximum principle asserts that subsolutions of parabolic equations attain on the parabolic boundary their nonnegative maximum if the coefficient of the zeroth-order term is nonnegative. In fact, these subsolutions can attain their nonnegative maximum only on the parabolic boundary, unless they are constant on suitable subsets. This is the strong maximum principle. We shall point out that the weak maximum principle suffices for most applications to the initial/boundary-value problem with values of the solutions prescribed on the parabolic boundary of the domain. We first prove the following result. Lemma 5.3.5. Let (xo, to) be a point in ][8n x ][8, R and T be positive constants and Q be the set defined by Q = BR(XO) x (to - T, ta]. 5.3. The Maximum Principle Suppose c is a continuous function in Q and u e C2"(Q) fl C(Q) satisfies ut - Du + cu > 0 in Q. If u > 0 in Q and u(xo,to -T) >0, then u(x, t) >0 for any (x,t) e Q. Lemma 5.3.5 asserts that a nonnegative supersolution, if positive somewhere initially, becomes positive everywhere at all later times. This can be interpreted as infinite-speed propagation. Proof. Take an arbitrary t* E (to - T, to]. We will prove that u(x, t*) > 0 for any x E BR(xo). Without loss of generality, we assume that xo = 0 and t* = 0. We take a > 0 such that to - T = -aR2 and set D = BR x (-cR2,0]. By the assumption u(0, -aR2) > 0 and the continuity of u, we can assume that u(x, -aR2) > m for any x E BER, for some constants m > 0 and E (0, 1). Here, m can be taken as the (positive) minimum of -aR2) on BER Now we set ( l It is easy to see that Don{t=0}=BR, Doff{t=-aR2}=BER Set 2 wl (t) t + R2 1w2(x,t) = wl(t) - 1x12 = a t +R2-1x12, and for some j3 to be determined, w = w1 We will consider w 1, W2 and w in D0. 5. Heat Equations Figure 5.3.1. The domain D0. We first note that e2R2 < wl < R2 and w2 > 0 in Do. A straightforward calculation yields wt - _Qurl Q-18tw1w2 -I- 2wiw2atw2 2(1 - e2)W1w21 = wi Q_1 (-Q(1 a Ow = wi (2w20w2 -I- 2IOw2I2) = W18 (-4flW2 + 81x12). Since 1x12 = wl - w2, we have Ow = wi (8w1 - (4n -I- 8)2v2) = wi Therefore, wt - Ow + cw = w1 _1 - (4n + 8)w1w2). (((1_E2))2 a -I- 4n -I- 8 w1w2 -8w? wt - 0 2v -I- c2v < -wi (((1_ e2) - R21c1) w2 (2(1 - E2) 1 21 + 4n +8) 2v1w2 -I- 82v1). The expression in the parentheses is a quadratic form in wl and w2 with a positive coefficient of wi. Hence, we can make this quadratic form nonnegative by choosing Q sufficiently large, depending only on e, a, R and sup id. Hence, wt - Ow + cw < 0 in Do. 5.3. The Maximum Principle Note that the parabolic boundary apDO consists of two parts and 2 given by lxi <ER, t = -cR2}, = {(x,t) _ {(xt): Ixl2 _ ' t = R2, -R2 2 For (x, t) E El, we have t = -aR2 and lxi < ER, and hence w(x, -aR) _ _ Ix12)2 Next, on 2, we have w = 0. In the following, we set v = m(ER)24w in Do, where m is the minimum of u over defined earlier. Then vt - Ov + cv < 0 in Do, and v < u on BPDo, since u > m on El and u > 0 on E2. In conclusion, vt - Ov + cv < Ut - pu + cu in Do, v < u on BpDo. By the maximum principle, we have v < u in Dp. This holds in particular at t = 0. By evaluating v at t = 0, we obtain u(x, 0) > mE2Q-4 (1 - IR2 ) for any x E BR. This implies the desired result. We point out that the final estimate in the proof yields a lower bound of u over BR x {0} in terms of the lower bound of u over BER x {-cJl2}. This is an important estimate. Now, we are ready to prove the strong maximum principle. Theorem 5.3.6. Let S2 be a bounded domain in ][8n and T be a positive constant. Suppose c is a continuous function in S2 x (0, T] with c > 0, and x (0, T]) satisfies ue ut - Du + cu < 0 in St x (0, T]. If for some (x*, t*) E SZ x (O, T], u(x*, t*) = sup u > 0, six (o,T] 5. Heat Equations u(x, t) = u(x*, t*) for any (x, t) E Sl x (0, t*). M= sup u> 0, 1 x (O,T] v=M-u in1x(0,TJ. Then v(x*, t*) = 0, v > 0 in 1 2 x (0,TJ and vt - Ov + cv > 0 in St x (O, T]. We will prove that v(xo, to) = 0 for any (xO, to) E Sl x (0, t*). To this end, we connect (xO, to) and (x*, t*) by a smooth curve ry C Sl x (0, TJ along which the t-component is increasing. In fact, we first connect xo and x* by a smooth curve -yo = yo(s) C St, for s E [0, 1], with yo(O) = xo and yo(l) = x*. Then we may take ry to be the curve given by (yo(s), st* + (1 - s)to). With such a -y, there exist a positive constant R and finitely (x*, t*) Figure 5.3.2. y and the corresponding covering. many points (xk, tk) on y, for k = 1, that N, with (xN, tN) _ (x*, t*), such -y c U =o x {tk,tk + RZ] c 12 x (0, T]. We may require that tk = tk_1 -}- RZ for k = 0, , N - 1. If v(xo, to) > 0, then, applying Lemma 5.3.5 in BR(xo) x [to, tp -}- R2], we conclude that v(x, t) > 0 in BR(xo) x (to, to + RZJ, and in particular, v(xl, ti) > 0. We may continue this process finitely many times to obtain v(x*, t*) = v(xN, tN) > 0. This contradicts the assumption. O Therefore, v(xo, to) = 0 and hence u(xo, to) = M. 5.3. The Maximum Principle Related to the strong maximum principle is the following Hopf lemma in the parabolic version. Lemma 5.3.7. Let (xO, to) be a point in ][8n x ][8, R and r be two positive constants and D be the set defined by D = {(x, t)E I[8n X R: It c to}. Suppose c is a continuous function in D with c > 0, and u e C2" (D) fl C(D) satisfies ut-Du+cu<0 in D. Assume, in addition, for some x e ]I8n with x - xol = R, that u(x, t) < u(x, to) for any (x, t) E D and u(x, to) > 0, u(x, t) < u(x, to) for any (x, t) E D with Ix - xol < R. If Vu is continuous up to (x, to), then v = (x - xo)/Ix - xol Proof. Without loss of generality, we assume that (xO, to) _ (0, 0). Then D={(x,t)E][8Thx][8: IxI2-t For positive constants a and e to be determined, we set w(x, t) = e_2_7t) and v(x, t) = u(x, t) - u(x, 0) + ew(x, t). We consider w and v in {(xt) A direct calculation yields - 2m a - a - c) - ce-«RZ G -e-aI2-t) (42IxI2 - 2na - rya - c), wt - Ow + cw = where we used c > 0 in D. By taking into account that R/2 < IxI < R in Do and choosing a sufficiently large, we have 4a2IxI2 - 2na - u7a - c > 0 in Do, 5. Heat Equations Figure 5.3.3. The domain D0. and hence wt - Ow +cw < 0 in Do. Since c> 0 and u(x, 0) > 0, we obtain for any s> O that vt - Ov + cv = ut - Du + cu + e(wt - Ow +cw) - cu(x, 0) < 0 in Dp. The parabolic boundary BDo consists of two parts E1 and E2 given by E1 = E2 = {(x,t): IxI2-t < R2, t < 0, ixl= ZR , J -rat = R2, t < 0, lxi? 2R}. First, on E1i we have u - u(x, 0) <0, and hence u - u(x, 0) <-s for some s> 0. Note that w < 1 on El. Then for such an e, we obtain v <0 on E1. Second, for (x,t) E E2, we have w(x, t) = 0 and u(x, t) < u(x, 0). Hence v(x, t) < 0 for any (x, t) E E2 and v(x, 0) = 0. Therefore, v < 0 on E2. In conclusion, vt - Ov + cv < 0 in Do, v<0 By the maximum principle, we have v<0 in Do. Then, by v(x, 0) = 0, v attains at (x, 0) its maximum in Do. In particular, v (x, 0) < v (x, 0) for any x e BR \ B 2 R. Hence, we obtain and then > au (x, 0) _ av This is the desired result. 13w -sa(x, 0) _ v > 0. 5.3. The Maximum Principle To conclude our discussion of the strong maximum principle, we briefly compare our approaches for elliptic equations and parabolic equations. For elliptic equations, we first prove the Hopf lemma and then prove the strong maximum principle as its consequence. See Subsection 4.3.2 for details. For parabolic equations, we first prove infinite speed of propagation and then obtain the strong maximum principle as a consequence. It is natural to ask whether we can prove the strong maximum principle by Lemma 5.3.7, the parabolic Hopf lemma. By an argument similar to the proof of Theorem 4.3.9, we can conclude that, if a subsolution u attains its nonnegative maximum at an interior point (x0, to) E 1 x (0, T], then u is constant on 1 x {t0}. In order to conclude that u is constant in SZ x (0, t0) as asserted by Theorem 5.3.6, we need a result concerning the t-derivative at the interior maximum point, similar to that concerning the x-derivative in the Hopf lemma. We will not pursue this issue in this book. 5.3.3. A Priori Estimates. In the rest of this section, we discuss applications of the maximum principle. We point out that only the weak maximum principle is needed. As the first application, we derive an estimate of the sup-norms of solutions of initial/boundary-value problems with Dirichlet boundary values. Compare this with the estimate in integral norms in Theorem 3.2.4. As before, for a bounded domain 1 C Rn and a positive constant T, we set 12T=12x(0,T]={(x,t): xE12,0 -CO for a nonnegative constant co. Suppose u e C2" (fT) fl C(T) is a solution of ut - Du -I- cu = f in StT, u = cp on 8S2 x (0, T), for some f e C(ST), uo E C(St) and cp e C(aSZ x [0, T]). Then sup ui < e°pT (max sup iuoi, sup asp X (o,T) i(ioi } -I- Tsup ill siT Proof. Set Lu = ut - Du -+- Cu and B = max sup iuoI, sup ci asp X (o,T) F = sup if I. ciT 5. Heat Equations L(fu) < F in StT, fu < B on 3'. Set v(x, t) = eCot (B + Ft). Since c + co > 0 and eC0t > 1 in S2T, we have Lv = (co + c)ec0t(B + Ft) + ecOtF > F in StT and v>B on3f. L(fu) < Lv in StT, fu < v on apS2T. By the maximum principle, we obtain fu < v in StT. Iu(x, t) I < e°0t(B + Ft) This implies the desired estimate. for any (x,t) E S2T. O Next, we derive a priori estimates of solutions of initial-value problems. Theorem 5.3.9. Let c be continuous in I[8n x (O, T] with c > -co for a nonnegative constant co. Suppose u e C2"(]E8n x (O, T]) fl C(II87 x [O, T]) is a bounded solution of ut - Du + cu = f in ]E8" x (0, T], for some bounded f e C(II8n x (O, T]) and uo E sup Ian x (O,T) U I < ecOT (sup lUol + T sup R x (o,T) We note that the maximum principle is established in bounded domains such as S2 x (0, T]. In studying solutions of the initial-value problem where solutions are defined in W x (0, T], we should first derive suitable estimates of solutions in BR x (0, T] and then let R -f oo. For this purpose, we need to impose extra assumptions on u as x -f oo. For example, u is assumed to be bounded in Theorem 5.3.9 and to be of the exponential growth in Theorem 5.3.10. 5.3. The Maximum Principle Proof. Set Lu = ut - Du -I- cu and F= sup 1Rx (o,T] fl, Il2 L(fu) < F in ]I8" x (O, T], fu < B on ]I8". Since u is bounded, we assume that ui < M in Rn x (0, T] for a positive constant M. For any R> 0, consider w (x, t) = eC0t (B + Ft) + VR (x, t) in BR x (0, T], where vR is a function to be chosen. By c + co > 0 and eC0t > 1, we have Lw = (c + co)eC0t (B + Ft) + eCOtF + LvR > F + LvR in BR x (0, T]. w(.,0) = B+VR(.,0) in BR, and w on EJBR X (0, T]. We will choose vR such that LvR > 0 in BR x (0, T], inBR, vR > fu on 8BR X [0, T]. To construct such a vR, we consider vR(x, t) =Re°pt(2nt + IxI2). Obviously, vR > 0 for t = 0 and vR > M on ixi = R. Next, LvR = R2 e°pt(c -I- co)(2nt + ixi2) >0 in BR X (O, T]. With such a vR, we have L(fu) < Lw in BR X (O, T], fu < w on ap(BR x (O,T]). Then the maximum principle yields fu < w in BR x (O, T]. Hence for any (x,t) E BR x (0, T], iu(x, t)l < eC0t(B + Ft) + eC0t(2nt + ix12). Now we fix an arbitrary (x, t) E Il8n x (O, T]. By choosing R> lxi and then letting R -+ oo, we have lu(x,t)I :; ecot(B+Ft). 5. Heat Equations This yields the desired estimate. Next, we prove the uniqueness of solutions of initial-value problems for the heat equation under the assumption of exponential growth. Theorem 5.3.10. Let u E C2" (Il8n x (0, T]) fl C(Il8n x [O, T]) satisfy Ut-LU=O inR7'x(O,T], onR7. Suppose, for some positive constants M and A, Iu(x,t)I for any (x,t) E Il8' x (0,T]. Then u - 0 in Il8n x [ Proof. For any constant a > A, we prove that u=0 inRn x We then extend u = 0 in the t-direction successively to [4« , 4« ], [ 4« , 4« For any constant R> 0, consider «Ixl2 vR(x, t) = n e1-4t, (1 - 4at) 2 for any (x, t) E BR x (0,1/4a). We note that vR is modified from the example we discussed preceding Theorem 5.2.5. Then atvR O vR = BR R x (o, 0) > 0 = Next, for any (x, t) E DBR x (0,1 /4a), in BR. Me' 2 > ±u(x, t). vR(x, t) > In conclusion, fu < vR on 8P (BR x `10, By the maximum principle, we have I< vR(x, t) for any (x,t) E BR x (o, 5.3. The Maximum Principle Now we fix an arbitrary (x, t) E ]I8" x (0,1/4a) and then choose R > xI We note that vR(X, t) -+ 0 as R -+ oo, since a > A. We therefore obtain u(x,t)=0. 5.3.4. Interior Gradient Estimates. We now give an alternative proof, based on the maximum principle, of the interior gradient estimate. We do this only for solutions of the heat equation. Recall that for any r > 0, Qr = Br X (r2,0]. Theorem 5.3.11. Suppose u E C2"(Q1) fl C(Ql) satisfies ut-Du=O inQl. Then sup I Vu I < C sup Iui, aPQi where C is a positive constant depending only on rL. The proof is similar to that of Theorem 4.3.13, the interior gradient estimate for harmonic functions. Proof. We first note that u is smooth in Q by Theorem 5.2.7. A straightforward calculation yields n (at - )IVuI2 = -2 i,j=1 n = -2 To get interior estimates, we need to introduce a cutoff function. For any smooth function cp in C°°(Q1) with suppcp C we have (at - o)(ploxul2) _ (pt n ( uu- 2cp Now we take cp = rj2 for some r) E C°°(Q1) with r) - 1 in Q1/2 and supp r) C Q3/4. Then (at - (2rnt - 2rjL - 8j By the Cauchy inequality, we obtain WT ' + 9 5. Heat Equations Hence, ioxui2 (at - C is a positive constant depending only on r and n. Note that 2u(ut - Du) _ (at - L)(u2) _ By taking a constant a large enough, we get au2) <(C - (at - By the maximum principle, we have sup(i2IDyuI2 + au2) < sup (2IVuI2 +au2). Qi This implies the desired result since ri = 0 on 8Q1 and ri = 1 in Q1/2. 5.3.5. Harnack Inequalities. For positive harmonic functions, the Harnack inequality asserts that their values in compact subsets are comparable. In this section, we study the Harnack inequality for positive solutions of the heat equation. In seeking a proper form of the Harnack inequality for solutions of the heat equation, we begin our discussion with the fundamental solution. We fix an arbitrary e Rn and consider for any (x, t) E Rn x (0, oo), 1 u(x, t) = (4irt) 2 Ix-12 4t Then u satisfies the heat equation ut - Ou = 0 in Rn x (0, oo). For any (x1, t1) and (x2, t2) E Rn x (0,oo), n _u(x2, t2) 26(x1, t1) Recall that (p+q)2 + b' for any a, b > 0 and any p, q e R, and the equality holds if and only if by = aq. This implies, for any t2 > tl > 0, 1x2 < 1x2 -x112 + Ixi - IZ t2 _ tl tl and the equality holds if and only if 12x1 - t1 x2 t2 - t1 5.3. The Maximum Principle u(xi, ti) $ 4(t2-tl)u(x2, t2), for any x i , x2 E W and any t2 > ti > 0, and the equality holds if is chosen as above. This simple calculation suggests that the Harnack inequality for the heat equation has an "evolution" feature: the value of a positive solution at a certain time is controlled from above by the value at a later time. Hence, if we attempt to establish the estimate u(xi, ti) < Cu(x2, t2), the constant C should depend on t2/ti, 1x2 - x i , and most importantly (t2 - ti)-i(> 0). Suppose u is a positive solution of the heat equation and set v = log u. In order to derive an estimate for the quotient u(xi, ti) u(x2, t2) it suffices to get an estimate for the difference v(xi, ti) - v(x2, t2). To this end, we need an estimate of vt and IVvI. For a hint of proper forms, we again turn our attention to the fundamental solution of the heat equation. Consider for any (x, t) E W x (0, oo), 1 u(x, t ) = _ 1x12 4t Then v(x, t) = l og u (x, t) and hence n vt _ - 2 l og( 4t) - 2t + 4t2 ' 2 I x v Therefore, 2t + We have the following differential Harnack inequality for arbitrary positive solutions of the heat equation. Theorem 5.3.12. Suppose u E C2'1(Il8n x (O, T]) satisfies ut = Du, in Il8" x (O, T]. Then v = log u satisfies Vt + in RTh x (0,T]. 5. Heat Equations The differential Harnack inequality implies the Harnack inequality by a simple integration. Corollary 5.3.13. Suppose u E C2>1(II8n x (0, T]) satisfies ut = Du, u > 0 in ][8n x (O, T]. Then for any (Xi, tl), (x2, t2) E II87 x (0, TJ with t2 > ti > 0, u(x2, t2) \ti j 4(t - ti) Proof. Let v = log u be as in Theorem 5.3.12 and take an arbitrary path x = x(t) for t E [t1, t2] with x(t) = x2, i = 1, 2. By Theorem 5.3.12, we have dtv(x(t), t) = vt + Ov dt > IVvI2 - v dt - 2t d> -t By completing the square, we obtain tv(x(t), t) I dI Then a simple integration yields t2 v(xl, tl) < v(x2i t2) - Z log t?1 To seek an optimal path which makes the last integral minimal, we require d2x dt 2 along the path. Hence we set, for some a, b E W, x(t) = at + b. Since x2 = at + b, i = 1, 2, we take x2 xi b = t2xi t2 - ti t2 - ti Then, t2 dt = 1x2-x112 Therefore, we obtain 2 v(x2,t2) - n log t? + 1 1x2 - xl u(xl,t1) < u(x2,t2) This is the desired estimate. -tl t2 t2 - tl 12- xll2 4(t2 - ti) 0 5.3. The Maximum Principle Now we begin to prove the differential Harnack inequality. The basic idea is to apply the maximum principle to an appropriate combination of derivatives of v. In our case, we consider IVvI2 - vt and intend to derive an upper bound. First, we derive a parabolic equation satisfied by IVVI2 - Vt. A careful analysis shows that some terms in this equation cannot be controlled. So we introduce a parameter a e (0, 1) and consider vt instead. After we apply the maximum principle, we let a -4 1. The proof below is probably among the most difficult ones in this book. Proof of Theorem 5.3.12. Without loss of generality, we assume that u is continuous up to {t = 0}. Otherwise, we consider u in Il8n x [E, T] for any constant s E (O, T) and then let E -4 0. We divide the proof into several steps. In the following, we avoid notions of summations if possible. Step 1. We first derive some equations involving derivatives of v = log u. A simple calculation yields vt = Ov -I- Consider w = Ov . Then Wt = Ovt = (zV + 1= Ow -I- OIOvI2. 21V2v12 + 20V 0(Ov) = 21V2v12 -I- 20V Ow, we have (5.3.1) Note that Vv is to be controlled and appears as a coefficient in the equation (5.3.1). So it is convenient to derive an equation for VV. Set iu = IVVI2. Then, iut = 20v Ovt = 2Vv 0(Ov + IVvI2) = 2VV V(LV) + 2VV Vw = IVVI2 - 2102v12 + 20V Oiu = Oiu + 20V Therefore, (5.3.2) Note that, by the Cauchy inequality, n 1V2V12 = n i=1 (Vxx)i i=1 = n-1 (oV)2. 5. Heat Equations Hence, (5.3.1) implies 2 -w2. n Step 2. For a constant a E (0, 1), set f = IVvI2 - Ov - IVvI2 = -Ov - (1 - a)lOvl2 = -w - (1- c)t, and hence by (5.3.1) and (5.3.2), ft-LV-2Vv'Vf=-2oIV2vI2. Next, we estimate 102v12 by f. Note that - vt)2 - n ((1 - a)1vv12 + n (lVvI2 1(f2 + 2(1 - )lVvl2f + (1- )2IVvI4) > + 2(1 - )IVvI2f). We obtain (5.3.3) <-2n (f2+2(1-a)IVvI2f). We should point out that 1Vv12 in the right-hand side plays an important role later on. Step 3. Now we introduce a cutoff function cp E C( W) with cp > 0 and set g= We derive a differential inequality for g. Note that 9t = Pf + t f VcP, 09 = too f + 2tvp V f + t fog. 5.3. The Maximum Principle 9t - f = v9 - v-s, (vg - tcp0 f = Dg - 2 Multiplying (5.3.3) by t2cp2 and substituting It, V f and L f by above equalities, we obtain 09) -I- 2t(Vcp - (ppv) Vg < g Jcp l 2a - ng + t (2 4a(1 - a) n 2Vcp pv/ To eliminate IVvI from the right-hand side, we complete the square for the last two terms. (Here we need a < 1! Otherwise, we cannot control the expression -2V(p Vv in the right-hand side.) Hence, tcP(gt - D9) + 2t(Vcp - (pOv) Vg v2 2n1V12 (p l )3' whenever g is nonnegative. We point out that there are no unknown expressions in the right-hand side except g. By choosing cp = r > 0, we get r tr12(9t - Og) -I-- 2t(2rlVil - iJZVv) - 2a g + t 6V7 -2i&+ a(1-a)' I e Co (B1), with 1 in Bi12. For any fixed R > 1, we consider whenever g is nonnegative. Now we fix a cutoff function 1 in Bl and Then (6IvI2 - a(1- a) IVI2) (x) a(in a) = R2 Therefore, we obtain that in BR x (0, T), Dg) + C«tl (2a Vg < g 1 - n g -I- RZ 5. Heat Equations whenever g is nonnegative. Here, C« is a positive constant depending only on a and r)o. We point out that the unknown expression Vv in the left-hand side appears as a coefficient of Vg and is unharmful. Step 4. We claim that (5.3.4) in BR x (O, T]. Note that g vanishes on the parabolic boundary of BR x (0, T) since g = trj2 f . Suppose, to the contrary, that 2a Cat h-1- ng+ R2 has a negative minimum at (xO, to) E BR x (0, T]. Hence, h(xo,to) < 0, ht < 0, Oh = 0, Oh > 0 at (xo, to). Thus, 9(xo, to) > 0, gt > 0, Vg=0, Og < 0 at (xp, tp). Then at (xO, to), we get 0< Og) + 2t(2i1Vi1- i12Vv) Vg This is a contradiction. Hence (5.3.4) holds in BR x (0, T). Therefore, we obtain (5.3.5) 1- 2n tr)2(aiVvi2 - vt) + R2 > 0 in BR x (0,T]. For any fixed (x, t) E Ilgn x (O, T], choose R> lxi. Recall that r,i = and rjo = 1 in B112. Letting R -+ oo, we obtain 2c 2 1--t(ciVVi -Vt) 0. 72 We then let a -+ 1 and get the desired estimate. We also have the following differential Harnack inequality for positive solutions in finite regions. 5.4. Exercises Theorem 5.3.14. Suppose u e x (0, 1]) satisfies ut - Du = 0, u > 0 in Bl x (0, 1]. Then for any a e (0,1), v = log u satisfies vt - a(VvI2 + C > 0 in B112 x (0, 1], 2at + where C is a positive constant depending only on n and a. Proof. We simply take R = 1 in (5.3.5). Now we state the Harnack inequality in finite regions. Corollary 5.3.15. Suppose u e C2"(B1 x (0, 1]) satisfies ut - Du = 0, u > 0 in Bl x (0, 1]. Then for any (xi, tl), (x2, t2) E B112 x (0, 1] with t2 > ti, u(xl, tl) < Cu(x2, t2), where C is a positive constant depending only on n, t2/ti and (t2 - t1)'. The proof is left as an exercise. We point out that u is assumed to be positive in Theorem 5.3.14 and only nonnegative in Corollary 5.3.15. The Harnack inequality implies the following form of the strong maxi- mum principle: Let u be a nonnegative solution of the heat equation ut Du = 0 in Bl x (0, 1]. If u(xo, to) = 0 for some (xO, to) E Bl x (0, 1], then u = 0 in Bl x (0, to]. This may be interpreted as infinite-speed propagation. 5.4. Exercises Exercise 5.1. Prove the following statements by straightforward calculations: (1) K(x, t) = t 2 e- 4t satisfies the heat equation for t> 0. n (2) For any a > 0, G(x, t) = (1- 4at)- 2 e 1-4«t satisfies the heat equation for t < 1/4a. Exercise 5.2. Let uo be a continuous function in I[8n and u be defined in (5.2.4). Suppose uo(x) -+ 0 uniformly as x -+ oo. Prove lim u(x, t) = 0 uniformly in x. Exercise 5.3. Prove the convergence in Theorem 5.2.5. 5. Heat Equations Exercise 5.4. Let uo be a bounded and continuous function in [0, oo) with uo(0) = 0. Find an integral representation for the solution of the problem ut - u= 0 for x > 0, t > 0, for x> 0, u(0, t) = 0 fort > 0. u(x, 0) = uo (x) Exercise 5.5. Let u E C2"(][8x (-oo, 0)) be a solution of in fin' X (-00,0). 2Gt - 02G = 0 Suppose that for some nonnegative integer m, lU(X,t)l C(1 + lxi + for any (x, t) E II8"` x (-oo, 0). Prove that u is a polynomial of degree at most m. Exercise 5.6. Prove that u constructed in the proof of Proposition 5.2.6 is smooth in ][8 x R. Exercise 5.7. Let St be a bounded domain in ][8n and uo E C(St). Suppose U E C2"(1 t x (0, oo)) fl C(SZ x [0, oo)) is a solution of ut - Du = 0 in S2 x (0, oo), U(,0)U0 on1, ii=0 onD1x(0,oo). Prove that sup Jt) l < Ce-ut sup lUol for any t> 0, where µ and C are positive constants depending only on n and SZ. Exercise 5.8. Let St be a bounded domain in R, c be continuous in SZ x [0, T] with c > -co for a nonnegative constant co, and uo be continuous in St with uo > 0. Suppose U E C2"(1 Z x (0, T]) fl C(St x [0, T]) is a solution of ut - Du -I- cu = -u2 in 1 2 x (0, T], U(,0)U0 on1, U=0 onD1x(0,T). Prove that 0 < u < eCOT sup uo in 1 x (0,TI. 5.4. Exercises Exercise 5.9. Let St be a bounded domain in ][8n, uo and f be continuous in SZ, and cp be continuous on 8S2 x [O, T]. Suppose u E C2>1(St x (O, T]) fl C(S2 x [O, T]) is a solution of ut - Du = e-U - 1(x) in St x (O, T], u(',0)=u onh, U=(p onOhx(0,T). Prove that -M < u < TeM + M in SZ x (O, T], where M = T sup f + sup { sup IUoI, cp } as)x(o,T) I. s Exercise 5.10. Let Q = (0,1) x (0, oo) and uo E Cl [0, l] with uo(0) _ uo(l) = 0. Suppose U E C3" (Q) fl Cl(Q) is a solution of ut - uxx= 0 in Q, U(., 0) = U0 on (0,1), U(0,.)=U(l,.)=0 on(0,oo). Prove that sup ux I < sup U. Q Exercise 5.11. Let 12 be a bounded domain in W. Suppose ul, ,Urn E C21(S2 x (0, T]) r1 C(SZ x [0, T]) satisfy Otui = Dui for i = 1, in SZ x (0, T], , m. Assume that f is a convex function in R"2. Prove that sup f(Ui, ,Urn) < f(U,,... ,Urn). sup 8 (SZ x (O,T] ) S2 x (O,T] Exercise 5.12. Let uO be a bounded continuous function in TR?. Suppose U E C2" (W x (0, T]) n C(W x [0, T]) satisfies ut-1u=0 inW' x (0,T], 0) = u0 on TW . Assume that u and Du are bounded in W x (0, T]. Prove that sup t)I < 2tsup n duo for any t E (O, T]. Hint: With IU0 <M in W, consider w = u2 + 2t1 Exercise 5.13. Prove Corollary 5.3.15. 5. Heat Equations Chapter 6 Wave Equations The n-dimensional wave equation is given by utt - Du = 0 for functions u = u(x, t), with x E R and t E R. Here, x is the space variable and t the time variable. The wave equation represents vibrations of strings or propagation of sound waves in tubes for n = 1, waves on the surface of shallow water for n = 2, and acoustic or light waves for n = 3. In Section 6.1, we discuss the initial-value problem and mixed problems for the one-dimensional wave equation. We derive explicit expressions for solutions of these problems by various methods and study properties of these solutions. We illustrate that characteristic curves play an important role in studying the one-dimensional wave equation. They determine the domain of dependence and the range of influence. In Section 6.2, we study the initial-value problem for the wave equation in higher-dimensional spaces. We derive explicit expressions for solutions in odd dimensions by the method of spherical averages and in even dimensions by the method of descent. We study properties of these solutions with the help of these formulas and illustrate the importance of characteristic cones for the higher-dimensional wave equation. Among applications of these explicit expressions, we discuss global behaviors of solutions and prove that solutions decay at certain rates as time goes to infinity. We will also solve the initial-value problem for the nonhomogeneous wave equation by Duhamel's principle. In Section 6.3, we discuss energy estimates for solutions of the initialvalue problem for a class of hyperbolic equations slightly more general than the wave equation. We introduce the important concept of space-like and time-like hypersurfaces. We demonstrate that initial-value problems for hyperbolic equations with initial values prescribed on space-like hypersurfaces 201 6. Wave Equations are well posed. We point out that energy estimates are fundamental and form the basis for the existence of solutions of general hyperbolic equations. 6.1. One-Dimensional Wave Equations In this section, we discuss initial-value problems and initial/boundary-value problems for the one-dimensional wave equation. We first study initial-value 6.1.1. Initial-Value Problems. For f E C(I[8 x (0, oo)), cp E C2(][8) and E Cl(Il8), we seek a solution u E C2(][8 x [0, oo)) of the problem utt - uxx = f in It x (0,00), onllt We will derive expressions for its solutions by several different methods. Throughout this section, we denote points in I[8 x (0, oo) by (x, t). How- ever, when (x, t) is taken as a fixed point, we denote arbitrary points by (y,s). The characteristic curves for the one-dimensional wave equation are given by the straight lines s = fy + c. (Refer to Section 3.1 for the detail.) In particular, for any (x, t) E ][8 x (0, oo), there are two characteristic curves through (x, t) given by s-y=t-x and s+y=t+x. These two characteristic curves intercept the x-axis at (x-t, 0) and (x+t, 0), respectively, and form a triangle Ci (x, t) with the x-axis given by Cl(x,t)={(y,s): This is the cone we introduced in Section 2.3 for n = 1. We usually refer to C, (x, t) as the characteristic triangle. We first consider the homogeneous wave equation (6.1.2) u1-u=0 in ][8 x (0, oo). We introduce new coordinates along characteristic curves by =x-t, r=x+t. In the new coordinates, the wave equation has the form 0. By a simple integration, we obtain u(,r) = g() + h(i), for some functions g and h in Il8. Therefore, (6.1.3) u(x, t) = g(x - t) + h(x + t). 6.1. One-Dimensional Wave Equations This provides a general form for solutions of (6.1.2). As a consequence of (6.1.3), we derive an important formula for the solution of the wave equation. Let u be a C2-solution of (6.1.2). Consider a parallelogram bounded by four characteristic curves in ll8 x (0, oo), which is referred to as a characteristic parallelogram. (This parallelogram is in fact a rectangle.) Suppose A, B, C, D are its four vertices. Then tl D X Figure 6.1.1. A characteristic parallelogram. u(A) + u(D) = u(B) + u(C). In other words, the sums of the values of u at opposite vertices are equal. This follows easily from (6.1.3). In fact, if we set A = (XA, tA), B = (XB, tB), C = (xc, tc) and D = (XD, tD), we have xB - tB = xA - tA, xB + tB = XD + tD, xC-tC=xD-tD, xC+ tC=x,q+ t,q. We then get (6.1.4) by (6.1.3) easily. An alternative method to prove (6.1.4) rj)-coordinates, where A, B, C, D are the vertices of is to consider it in a rectangle with sides parallel to the axes. Then we simply integrate u, which is zero, in this rectangle to get the desired relation. We now solve (6.1.1) for the case f - 0. Let u be a C2-solution which is given by (6.1.3) for some functions g and h. By evaluating u and ut at t = 0, we have U(X, 0) = g(X) + h(X) = ut(x, 0) _ -g'(x) + h'(x) _ fi(x). 6. Wave Equations g'(x) = he(x) = 2cp (x) + A simple integration yields g(x) = (x) - LX for a constant c. Then a substitution into the expression of u(x, 0) implies fs h(x) = (x) + (s)ds - C. 0 u(x, t) = 2 t) + cp(x + t)) + 2 t t (s) t) for This is d'Alembert's formula. It clearly shows that regularity of 0) and is 1-degree better any t> 0 is the same as that of the initial value than 0). There is no improvement of regularity. We see from (6.1.5) that u(x, t) is determined uniquely by the initial values in the interval [x - t, x + t] of the x-axis, which is the base of the characteristic triangle Cl (x, t). This interval is the domain of dependence for the solution u at the point (x, t). We note that the endpoints of this interval are cut out by the characteristic curves through (x, t). Conversely, the initial values at a point (xO, 0) of the x-axis influence u(x, t) at points (x, t) in the wedge-shaped region bounded by characteristic curves through (xO, 0), i.e., for xo - t <x < xo + t, which is often referred to as the range of influence. Figure 6.1.2. The domain of dependence and the range of influence. Next, we consider the case f - 0 and co . 0 and solve (6.1.1) by the method of characteristics. We write Utt - UXX = (3 + '9X)(t9t - 6.1. One-Dimensional Wave Equations By setting v = ut for first-order PDEs, we decompose (6.1.1) into two initial-value problems 2Gt - 26y = 7J 111 RX (0,00), U(.,0)=0 on ][8, vt + v = 0 in III x 0, oo , (f M 71 v(.,0)=b onR. The initial-value problem (6.1.7) was discussed in Example 2.2.3. Its solution is given by v(x,t) = b(x - t). The initial-value problem (6.1.6) was discussed in Example 2.2.4. Its solution is given by U(x, t) (x + t - 2r) dr. By a change of variables, we obtain 1 b(s)ds. / x-t This is simply a special case of d'Alembert's formula (6.1.5). Now we derive an expression of solutions in the general case. For any 2 (x, t) E Il8 x (0, oo), consider the characteristic triangle Cl(x, t) _ {(y, s) : y - xI 0}. The boundary of Ci (x, t) consists of three parts, L ={(y,s) = {(y,s) s=-y+x+t, 0<s Lo={(y,0): x-t 1(1, 1)// on L, (-1, i)/J ((0,-i) on L_, onL0. Upon integrating by parts, we have 6. Wave Equations Figure 6.1.3. A characteristic triangle. fdyds = i (x,t) (utt - uxx) dyds = i (x,t) 1 (ut - ux) dl -}- (UtV2 - uxvl) dl ci (x,t) (Ut + ux) dl ut(s, 0) ds, where the orientation of the integrals over L+ and L_ is counterclockwise. Note that (8t - 8x)// is a directional derivative along L+ with unit length and with direction matching the orientation of the integral over L+. Hence (ut - dl = u(x, t) - u(x + t, 0). On the other hand, fat + ate)// is a directional derivative along L_ with unit length and with direction opposing the orientation of the integral over L_. Hence 1 do = - (u(x - t,0) - u(x, t)). (ut + Therefore, a simple substitution yields x+t (s)ds (6.1.8) +-2 1 / / 0 Theorem 6.1.1. Let m > 2 be an integer, cp E C'n(I[8),b e C"'' -1(I[8) and f E C"'-1(I[8 x [0, oo)). Suppose u is defined by (6.1.8). Then u E C"'L(Il8 x (0, oo)) and utt - u = f in ][8 x (0, oo). 6.1. One-Dimensional Wave Equations Moreover, for any xp E ]I8, u(x, t) = cp(xp), lim (x,t)-a (xo,O) ut(x, t) _ b(xo). lim (x,t)-; (moo ,O) Hence, u defined by (6.1.8) is a solution of (6.1.1). In fact, u is Ctm in R x [0,oo). The proof is a straightforward calculation and is omitted. Obviously, CZ-solutions of (6.1.1) are unique. Formula (6.1.8) illustrates that the value u(x, t) is determined by f in the triangle Cl (x, t), by b on the interval [x - t, x + t] x {0} and by cp at the two points (x + t, 0) and (x - t, 0). In fact, without using the explicit expression of solutions in (6.1.8), we can derive energy estimates, the estimates for the LZ-norms of solutions of (6.1.1) and their derivatives in terms of the LZ-norms of cp, L, and f. To obtain energy estimates, we take any constants 0 < T < t and use the domain {(x,t): IxI<-t,0 6.1.2. Mixed Problems. In the following, we study mixed problems. For simplicity, we discuss the wave equation only, with no nonhomogeneous terms. First, we study the half-space problem. Let cp E CZ [0, oo), b E C' [O, oo) and a E C2 [0, oo). We consider utt - = 0 in (0, oo) x (0, oo), on ,oo), u(0, t) = a(t) fort > 0. We will construct a CZ-solution under appropriate compatibility conditions. We note that the origin is the corner of the region (0, oo) x (0, oo). In order to have a C2-solution u, the initial values cp and b and the boundary value a have to match at the corner to generate the same u and its first-order and second-order derivatives when computed either from cp and b or from a. If (6.1.9) admits a solution which is C2 in [0, oo) x [0, oo), a simple calculation shows that (6.1.10) = a'(O), = a"(O). This is the compatibility condition for (6.1.9). It is the necessary condition for the existence of a C2-solution of (6.1.9). We will show that it is also sufficient. 6. Wave Equations We first consider the case a - 0 and solve (6.1.9) by the method of reflection. In this case, the compatibility condition (6.1.10) has the form /'(0) = 0, o(O) = 0, o"(O) = 0. Now we assume that this holds and proceed to construct a CZ-solution of (6.1.9). We extend cp and /i to 1[8 by odd reflection. In other words, we set for x > 0 -cp(-x) for x <0, (x) = { -(-x) for x > 0 for x <0. Then cp and b are C2 and Ci in I[8, respectively. Let u be the unique C2-solution of the initial-value problem utt - = 0 in I[8 x (0, oo), We now prove that u(x, t) is the solution of (6.1.9) when we restrict x to [0, oo). We need only prove that u(0,t)=0 foranyt>0. In fact, for v(x, t) _ -u(-x, t), a simple calculation yields vtt - v= 0 in I[8 x (0, oo), v is also a C2-solution of the initial-value problem for the wave equation with the same initial values as u. By the uniqueness, u(x, t) _ v(x, t) _ -u(-x, t) and hence u(0, t) = 0. In fact, u is given by d'Alembert's formula (6.1.5), i.e., u(x, t) = ((x + t) + (x - t)) + (s) ds. By restricting (x, t) to [0, oo) x [0, oo), we have, for any x > t > 0, u(x, t) = ((x + t) + (x - t)) + and foranyt>x>0, (6.1.11) u(x, t) = Z ((x + t) - cp(t - x)) + /' +t J _ (s) ds, since cp and b are odd in 1[8. We point out that (6.1.11) will be needed in solving the initial-value problem for the wave equation in higher dimensions. 6.1. One-Dimensional Wave Equations Now we consider the general case of (6.1.9) and construct a solution in [0, oo) x [0, oo) by an alternative method. We first decompose [0, oo) x [0, oo) into two regions by the straight line t = x. We note that t = x is the characteristic curve for the wave equation in the domain [0, oo) x [0, oo) passing through the origin, which is the corner of [0, oo) x [0, oo). We will solve for u in these two regions separately. First, we set 11i_ {(x, t): x > t > o}, and 112={(x,t): t>x>0}. We denote by ul the solution in Stl. Then, ul is determined by (6.1.5) from the initial values. In fact, Ui (x, t) = ((x + t) + (x - t)) + sb(s) ds, / -'Jx-t for any (x, t) E Sll. Set for x > 0, 2x ry(x) = ul(x, x) = Z ((2x) + (0)) + (s) ds. We note that '-y(x) is the value of the solution u along the straight line t = x for x > 0. Next, we consider utt - uXX= 0 in SZ2i u(0, t) = a(t), u(x, x) = y(x). We denote its solution by u2. For any (x, t) E 112, consider the characteristic and (t t In parallelogram with vertices (x, t), (0, t - x), (t other words, one vertex is (x, t), one vertex is on the boundary {x = 0} and the other two vertices are on {t = x}. By (6.1.4), we have u2 (x , t) + u 2 I t-x tt- x ) =u 2 2 t+x (0t-x)+u \ t+x 2 2 , 2 ( u2(x,t) =a(t- x)-ryl(t-2x I +ryl x2 t = a(t - x) + 2 (p(x + t) - p(t - x)) +- for any (x, t) E 112. Set u = ul in hi and u = u2 in 112. Now we check that u, ut, ux, utt, uxx, utx are continuous along {t = x}. By a direct calculation, 6. Wave Equations Figure 6.1.4. Division by a characteristic curve. we have ui(x, t)It= - u2(x, t)It= = ry(0) - a(0) _ p(0) - a(0) ui(x, t)It= - 3xU2(X, t)It= = -b(0) + cV(0), aui(x,t)It= - au2(x,t)It= = d'(O) - cV'(O). Then (6.1.10) implies 9xu1= axtL2i = u2, It is easy to get at'Ul = {t = x}. Similarly, we get on {t = x}. on {t = x} by ul = u2 and 8xtu2 and attui = 8xu2 on on {t = x}. Therefore, u is CZ across t = x. Hence, we obtain the following result. Theorem 6.1.2. Suppose cc e C2[0, oo), b e Cl [0, oo), a e C2 [0, oo) and the compatibility condition (6.1.10) holds. Then there exists a solution u E c2([0, oo) X [0,oo)) of (6.1.9). We can also derive a priori energy estimates for solutions of (6.1.9). For any constants T> 0 and xo > T, we use the following domain for energy estimates: {(x,t): 0<x<xo-t,0 0, assume that cc E C2[0, l], b E Cl [0,1] and a, Q E C2 [0, oo). Consider = 0 in (0,1) x (0, oo), tt - _ cc, ut(',O) _ on [0, l], u(0, t) = a(t), u(l, t) _ /3(t) fort > 0. O) The compatibility condition is given by w(o) _ mo(o), b(a) c(l) = /3(0), _ a'(o). v'(o) _ °(o), (l) = /3'(O), c"(l) = f3"(O). 6.1. One-Dimensional Wave Equations We first consider the special case a = /3 - 0. We discussed this case using separation of variables in Section 3.3 if l = ir. We now construct solutions by the method of reflection. We first extend cp to [-l, 0] by odd reflection. In other words, we define x _ Ico(x) for x E [0, l], (-(p(-x) for x E [-l, 0]. We then extend cp to ](8 as a 21-periodic function. Then cp is odd in R. We extend b similarly. The extended functions cp and b are C2 and Cl on ](8, respectively. Let u be the unique solution of the initial-value problem utt - = 0 in IE8 x (0, oo), onR. We now prove that u(x, t) is a solution of (6.1.12) when we restrict x to [0, l]. We need only prove that (0, t) = 0, u(l, t) = 0 for any t> 0. The proof is similar to that for the half-space problem. We prove that u(0, t) = 0 by introducing v(x, t) _ -u(-x, t) and prove u(l, t) = 0 by introducing w(x, t) _ -u(2l - x, t). We now discuss the general case and construct a solution of (6.1.12) by an alternative method. We decompose [0,1] x [0, oo) into infinitely many regions by the characteristic curves through the corners and through the intersections of the characteristic curves with the boundaries. Specifically, we first consider the characteristic curve t = x. It starts from (0, 0), one of the two corners, and intersects the right portion of the boundary x = l at (l, l). Meanwhile, the characteristic curve x + t = l starts from (l, 0), the other corner, and intersects the left portion of the boundary x = 0 at (0, l). These two characteristic curves intersect at (l/2, 1/2). We then consider the characteristic curve t-x = l from (0, l) and the characteristic curve t+x = 21 from (l, l). They intersect the right portion of the boundary at (l, 21) and the left portion of the boundary at (0, 21), respectively. We continue this process. We first solve for u in the characteristic triangle with vertex (l/2, l/2). In this region, u is determined by the initial values. Then we can solve for u by forming characteristic parallelograms in the triangle with vertices (0, 0), (l/2, 1/2) and (0, l) and in the triangle with vertices (l, 0), (l/2, l/2) and (l, l). In the next step, we solve for u again by forming characteristic in the rectangle with vertices (0, l), (l/2, l/2), (l, l) and (l/2, 3l/2). We note that this rectangle is a characteristic parallelogram. By continuing this process, we can find u in the entire region [0,1] x [0, oo). 6. Wave Equations Figure 6.1.5. A decomposition by characteristic curves. Theorem 6.1.3. Suppose cp e C2 [0, l], E Cl [0, l], a, Q E CZ [0, oo) and the compatibility condition (6.1.13) holds. Then there exists a solution u e C2([0, l] x [O,oo)) of (6.1.12). Theorem 6.1.3 includes Theorem 3.3.8 in Chapter 3 as a special case. Now we summarize various problems discussed in this section. We em- phasize that characteristic curves play an important role in studying the one-dimensional wave equation. First, presentations of problems depend on characteristic curves. Let 12 be a piecewise smooth domain in ][82 whose boundary is not characteristic. In the following, we shall treat the initial curve as a part of the boundary and treat initial values as a part of boundary values. We intend to prescribe appropriate values on the boundary to ensure the well-posedness for the wave equation. To do this, we take an arbitrary point on the boundary and examine characteristic curves through this point. We then count how many characteristic curves enter the domain 1 2 in the positive t-direction. In this section, we discussed cases where SZ is given by the upper half-space ll8 x (0, oo), the first quadrant (0, oo) x (0, oo) and I x (0, oo) for a finite interval I. We note that the number of boundary values is the same as the number of characteristic curves entering the domain in the positive tdirection. In summary, we have ult=o = P, utIt=o = '/' for initial-value problems; ult=o = o, utIt=o = '/', uIx=o = a for half-space problems; uIt=o = (P, utIt=o = '/', uIx=o = a, Ix=1 = Q for initial/boundary-value problems. 6.2. Higher-Dimensional Wave Equations Figure 6.1.6. Characteristic directions. Second, characteristic curves determine the domain of dependence and the range of influence. In fact, as illustrated by (6.1.5), initial values propagate along characteristic curves. Last, characteristic curves also determine domains for energy estimates. We indicated domains of integration for initial-value problems and for halfspace problems. We will explore energy estimates in detail in Section 6.3. 6.2. Higher-Dimensional Wave Equations In this section, we discuss the initial-value problem for the wave equation in higher dimensions. Our main task is to derive an expression for its solutions and discuss their properties. 6.2.1. The Method of Spherical Averages. Let cp E C2(R) and b E Cl(I[8n). Consider 'att (6.2.1) = 0 in R x (0, oo), u(',0)- o, ut(',0)_ b Ori fin'. We will solve this initial-value problem by the method of spherical averages. We first discuss briefly spherical averages. Let w be a continuous func- tion in R. For any x E Rn and r > 0, set 1 W (x; r) = wnr n_ 1 w (y) dSy UBr (X) where wn is the surface area of the unit sphere in R. Then W (x; r) is the average of w over the sphere DBr (x) . Now, w can be recovered from W by W (x; r) = w(x) for any x E IlBn. 6. Wave Equations Next, we suppose u is a C2-solution of (6.2.1). For any x E Ian, t> 0 and r > 0, set 1 U(x;r,t) = (x;r) = u(y, t) dSy nr n_1 Br (x) co(y) dSy, W(x;r) = WT' farx (y) dSp. t), cp In other words, U(x; r, t), J(x, r) and W(x, r) are the averages of and /i over the sphere aBr(x), respectively. Then U determines u by lim U(x; r, t) = u(x, t). Now we transform the differential equation for u to a differential equation for U. We claim that, for each fixed x e Ian, U(x; r, t) satisfies the EulerPoisson-Darboux equation (6.2.4) = Urr+ Ur forr>Oandt>0, with initial values U(x; r, 0) _ J(x; r), Ut(x; r, 0) _ W(x; r) for r > 0. It is worth pointing out that we treat x as a parameter in forming the equation (6.2.4) and its initial values. To verify (6.2.4), we first write U(x; r, t) = 1 GJn u(x + rw, t) dSw. By differentiating under the integral sign and then integrating by parts, we have Ur = if Iwl=1 cvnr n- 1 (x + rw, t) dSw = cvnrn -1 aBr(x) (y, t) dSy Du(y, t) dy. Then by the equation in (6.2.1), r n-1 Ur = fBr (x) Du(y, t) dy = Wn Br (x) utt (y, t) dy 6.2. Higher-Dimensional Wave Equations (rn_1 UT )T Wn 1 utt(y, t) day JBBr (x) u (y, t) dSy = rn-1 Utt . For the initial values, we simply have for any r > 0, U(x, r,0) = Ut(x;r,0) =war _1 (y)dS, 8Br (x) BB,. (x) 6.2.2. Dimension Three. We note that the Euler-Poisson-Darboux equation is a one-dimensional hyperbolic equation. In general, it is a tedious process to solve the corresponding initial-value problems for general n. However, this process is relatively easy for n = 3. If n = 3, we have Hence for r > 0 and t> 0, (rU)tt = (rU). We note that rU satisfies the one-dimensional wave equation. Set U(x;r,t) = rU(x; r, t) and (x;r) = r1(x; r), 1Y(x; r) = rW(x; r). Then for each fixed x E Il83, UtUTT forr>Oandt>0, x; r), Ut(x;r,0) _ lY(x; r) for r >0, U(x;0,t)=0 fort > 0. U(x;r,0) _ This is a half-space problem for U studied in Section 6.1. By (6.1.11), we obtain formally for any t > r> 0, U(x; r, t) = 2 ((x; r + t) - t - r)) + 2 (x; s) ds. U(x; r, t) _ r ((t+r)(x;t+r) - (t-r)(x;t-r)) 1 t+ T f+_ ,. 6. Wave Equations Letting r -+ 0, we obtain u(x, t) = li o U(x; r, t) = 8t (tI(x; t)) -F tiY(x; t). Note that the area of the unit sphere in R3 is 4ir. Then 1 (x;t) W(x; t) = dsy, b(y) dsy. Therefore, we obtain formally the following expression of a solution u of (6.2.1): (6.2.5) u(x, t) = at (y) dsy + 4-j ) dsy, for any (x, t) E I[83 x (0, oo). We point out that we did not justify the compatibility condition in applying (6.1.11). Next, we prove directly that (6.2.5) is indeed a solution u of (6.2.1) under appropriate assumptions on cp and zb. Theorem 6.2.1. Let k > 2 be an integer, cP E Ck+1(IL83) and /i E Ck(It3) Suppose u is defined by (6.2.5) in I[83 x (0,oo). Then u E x (0,oo)) and utt - Du = 0 in R3 x (0, oo). Moreover, for any xo E u(x, t) = So(xo), ut(x, t) = b(xo). In fact, u can be extended to a C1-function in R3 x [0, oo). This can be easily seen from the proof below. Proof. We will consider cp = 0. By (6.2.5), we have u(x, t) = tiY(x, t), where b(y) dSy. By the change of coordinates y = x + wt, we write W(x,t) = 6.2. Higher-Dimensional Wave Equations In this form, u(x, t) is defined for any (x, t) E ][83 x [0, oo) and 0) = 0. Since Eb e Ck(][83), we conclude easily that Vu exists and is continuous in , k. In particular, Il83 x [0, oo), for i = 0, u(x, t) = 4 For t-derivatives, we take (x, t) E Il83 x (0, oo). Then utt = 2't + tWtt. A simple differentiation yields = 41i= x [0, oo) and ut 0) = 'b. Hence, ut (x, t) is defined for any (x, t) E ,k - 1. After Moreover, Vxut is continuous in R3 x (0, oo), for i = 0,1, the change of coordinates y = x + wt and an integration by parts, we first have LB(X) 8v (y) dsy tt = - JBt(x) 2 utt = (y) dSp. By setting y = x + wt again, we have xb(x + dSW = Du. x [0, oo)). This implies easily that u E A similar calculation works for zb = 0. We point out that there are other methods to derive explicit expressions for solutions of the wave equation. Refer to Exercise 6.8 for an alternative approach to solving the three-dimensional wave equation. By the change of variables y = x + tw in (6.2.5), we have u(x, t) = at (x+tw)dS)+f 4ir J A simple differentiation under the integral sign yields u(x, t) = ((x+tw)+tV(x+tw) w + 6. Wave Equations ((y) + - x) + t(y)) dsy, for any (x, t) E ][83 x (0, oo). We note that u(x, t) depends only on the initial values co and b on the sphere 8Bt(x). 6.2.3. Dimension Two. We now solve initial-value problems for the wave equation in Il82 x (0, oo) by the method of descent. Let cp E C2(R2) and b E Cl(][82). Suppose u E C2(][82 x (0, oo))f1C1(Il82 x [0, oo)) satisfies (6.2.1), i.e., utt - Du = 0 in ][82 x (0, oo), 0) = cp, 0) = zb on I[82. Any solutions in ][82 can be viewed as solutions of the same problem in ][83, which are independent of the third space variable. Namely, by setting x = (x, x3) for x = (X1,X2) E I[82 and u(x, t) = u(x, t), we have iLtt - LTL = 0 in R3 x (0, oo), By (6.2.5), we have u(x, t) = at 4lrt aBt() ) dsy +4it1 f st(er) ) dsy, where y = (yl,y2,y3) _ (y> y3) The integrals here are over the surface 8Bt(x) in ][83. Now we evaluate them as integrals in ][82 by eliminating y3. For x3 = 0, the sphere Iy - x =tin ][83 has two pieces given by CJs=f and its surface area element is 1 dSy- _ (1 + (ay2y3)2) 2 dyidya = yt2 - - xl2 6.2. Higher-Dimensional Wave Equations Therefore, we obtain u(x, t) = 2 at t2 _ Iy_ xla dy /' 2 it JBt(x) 1 t2 - I - xl2 for any (x, t) E 1182 x (0, oo). We put the factor 1/2 separately to emphasize that it is the area of the unit disc in II82. Theorem 6.2.2. Let k > 2 be an integer, cp E Ck+l (][82) and 'i/i E C (][82) . Suppose u is defined by (6.2.6) in ][82 x (0, oo). Then u e C'(][82 x (0, oo)) and utt - Du = 0 in ][82 x (0, oo). Moreover, for any xo E ][82, lim u(x, t) = co (xo ), ut (x, t) = This follows from Theorem 6.2.1. Again, u can be extended to a Ckfunction in 1182 x [0, oo). By the change of variables y = x + tz in (6.2.6), we have u(x, t) - at cp(x - tz) lB1 dz - 1 - 1z12 (x - tz) dz. 1- A simple differentiation under the integral sign yields u(x, t) = cp(x + tz) + tOcp(x + tz) z + -I- tz) 1 _ Izl2 Hence 1 u(x, t) = 2 (y - a;) + t2'/i(y) t2 JB() t2 - I for any (x, t) E ][82 x (0, oo). We note that u(x, t) depends on the initial values co and i in the solid disc Bt(x). 6.2.4. Properties of Solutions. Now we compare several formulas we obtained so far. Let u be a C2-solution of the initial-value problem (6.2.1). 6. Wave Equations We write un for dimension n. Then for any (x, t) E ][8x (0, oo), x+t ui(x,t) = ((x+t)+(x -t)) + 2ft x= 2 t2 (y)dy, l2 x) +t2b(y) dy ( Bt (x) (y) + V(y)' (y - x) + t(y)) dSy. These formulas display many important properties of solutions u. According to these expressions, the value of u at (x, t) depends on the values of cp and b on the interval [x - t, x + t] for n = 1 (in fact, on cp only at two endpoints), on the solid disc Bt(x) of center x and radius tfor n = 2, and on the sphere 8Bt(x) of center x and radius tfor n = 3. These regions are the domains of dependence of solutions at (x, t) on initial values. Conversely, Figure 6.2.1. The domain of dependence. the initial values cp and b at a point xo on the initial hypersurface t = 0 influence u at the points (x, t) in the solid cone Ix - xol < t for n = 2 and only on the surface Ix I = t for n = 3 at a later time t. The central issue here is that the solution at a given point is determined by the initial values in a proper subset of the initial hypersurface. An important consequence is that the process of solving initial-value problems for the wave equation can be localized in space. Specifically, changing initial values outside the domain of dependence of a point does not change the values of solutions at this point. This is a unique property of the wave equation which distinguishes it from the heat equation. Before exploring the difference between n = 2 and n = 3, we first note that it takes time (literally) for initial values to make influences. Suppose that the initial values cp, rb have their support contained in a ball B,.(xo). 6.2. Higher-Dimensional Wave Equations Figure 6.2.2. The range of influence. Then at a later time t, the support of t) is contained in the union of all balls Bt(x) for x E B,.(xo). It is easy to see that such a union is in fact the ball of center xo and radius r + t. The support of u spreads at a finite speed. To put it in another perspective, we fix an x B,.(xo). Then u(x, t) = 0 for t < Ix - xoI - r. This is a finite-speed propagation. For n = 2, if the supports of cp and /i are the entire disc B,.(xo), then the support of t) will be the entire disc B,.+t(xo) in general. The influence from initial values never disappears in a finite time at any particular point, like the surface waves arising from a stone dropped into water. For n = 3, the behavior of solutions is different. Again, we assume that the supports of cp and are contained in a ball B,.(xo). Then at a later t) is in fact contained in the union of all spheres time t, the support of 8Bt(x) for x E B,.(xo). Such a union is the ball Bt+,.(xo) fort < r, as in the two-dimensional case, and the annular region of center xo and outer and inner radii t + r and t - r, respectively, fort > r. This annular region has a thickness 2r and spreads at a finite speed. In other words, u(x, t) is not zero only if t - r < Ix - XOl Ix-xo-r Ix - xol + r. So, the influence from the initial values lasts only for an interval of length 2r in time. This phenomenon is called Huygens' principle for the wave equation. (It is called the strong Huygens' principle in some literature.) In fact, Huygens' principle holds for the wave equation in every odd space dimension nexcept n = 1 and does not hold in even space dimensions. 6. Wave Equations Figure 6.2.3. The range of influence for n = 2. Figure 6.2.4. The range of influence for n = 3. Now we compare regularity of solutions for n = 1 and n = 3. For n = 1, 0) and one order better than the regularity of u is clearly the same as ut 0). In other words, u E Cn and ut E cm initially at t = 0 guarantee u E Cm at a later time. However, such a result does not hold for n = 3. The formula for n = 3 indicates that u can be less regular than the initial values. There is a possible loss of one order of differentiability. Namely, U E CIc and ut E Cc initially at t = 0 only guarantee u E Cc at a later time. Example 6.2.3. We consider an initial-value problem for the wave equation in JR3 of the form utt - Du = 0 in Il83 x (0, oo), u(.,0)=0, Ut(',O)=l/) onJR3. Its solution is given by u(x, t) = b(y) dsy, for any (x, t) E I[g3 x (0, oo). We assume that zb is radially symmetric, i.e., b(x) = h(IxI) for some function h defined in [0, oo). Then u(0' t) fBt b(y) dSy = th(t). 6.2. Higher-Dimensional Wave Equations For some integer k > 3, if b(x) is not C' at lxi = 1, then h(t) is not Ck at t = 1. Therefore, the solution u is not Cc at (x, t) _ (0, 1). The physical interpretation is that the singularity of initial values at lxi = 1 propagates along the characteristic cone and focuses at its vertex. We note that (x, t) _ (0, 1) is the vertex of the characteristic cone {(x, t) t = 1- ixi} which intersects {t = 0} at lxi = 1. : This example demonstrates that solutions of the higher-dimensional wave equation do not have good pointwise behavior. A loss of differentiability in the pointwise sense occurs. However, the differentiability is preserved in the L2-sense. We will discuss the related energy estimates in the next section. 6.2.5. Arbitrary Odd Dimensions. Next, we discuss how to obtain explicit expressions for solutions of initial-value problems for the wave equation in an arbitrary dimension. For odd dimensions, we seek an appropriate combination of U (x; r, t) and its derivatives to satisfy the one-dimensional wave equation and then proceed as for n = 3. For even dimensions, we again use the method of descent. Let n > 3 be an odd integer. The spherical average U(x; r, t) defined by (6.2.2) satisfies (6.2.7) Utt _ Urr + n- 1 Ur, r for any r > 0 and t> 0. First, we write (6.2.7) as 1 Utt = - (1'Urr + (n - 1)Ur). (rU)rr = rUrr + 2Ur, Utt = r ((rU)rr + (n - 3)UT), (6.2.8) (rU)tt = (rU)rr + (n - 3)Ur If n = 3, then rU satisfies the one-dimensional wave equation. This is how we solved the initial-value problem for the wave equation in dimension three. By differentiating (6.2.7) with respect to r, we have Urtt = Urrr + n-1 Urr - n-1 fir r r2 = r2 (r2Urrr+(n_1)rUrr_(n_1)Ur). (r2Ur)rr = r2Urrr + 4rUrr + 2Ur, 6. Wave Equations we obtain UTtt = r2 ((r2U)rr + (n - 5)rUTT - (n + 1)UT), or (6.2.9) (r2U,.)tt = (r2Ur)rr T (n - 5)rU,.,. - (n + 1)UT. The second term in the right-hand side of (6.2.9) has acoefficient n - 5, which is 2 less than n - 3, the coefficient of the second term in the righthand side of (6.2.8). Also the third term involving Ur in the right-hand side of (6.2.9) has a similar expression as the second term in the right-hand side of (6.2.8). Therefore an appropriate combination of (6.2.8) and (6.2.9) eliminates those terms involving (Jr. In particular, for n = 5, we have (r2Ur + 3rU)tt = (r2Ur + 3rU)TT In other words, r2U,.+3rU satisfies the one-dimensional wave equation. We can continue this process to obtain appropriate combinations for all odd dimensions. Next, we note that r2UT + 3rU = r (r3U)T. It turns out that the correct combination of U and its derivatives for arbitrary odd dimension n is given by la r ar n-3 2 We first state a simple calculus lemma. Lemma 6.2.4. Let m be a positive integer and v = v(r) be a C"17'+1 _ function on (0, oo). Then for any r > 0, d2 m-1 (r2v(r)) (m1-ire-1 (r21v(r)) _ (1) (-) (-f) (2) r dr) - (-f) dv (r2(r)); where c,,,,,z is a constant independent of v, for i = 0, 1, , m - 1, and Imo-1.g...(2m-1). The proof is by induction and is omitted. Now we let n > 3 be an odd integer and write n = 2m + 1. Let cc E C"`(][8n) and 'b E C1(Il8'1). We assume that u E C"`+1(I[8n x [0, oo)) is a solution of the initial-value problem (6.2.1). Then U defined by (6.2.2) is 6.2. Higher-Dimensional Wave Equations Cn+1, and and W defined by (6.2.3) are Cm and Cm-1, respectively. For R, r>0 and t>0, set (r2mU(x; r, t)), U(x;r,t) = (r ar-) (6.2.10) and (x;r) = _1 (x;r) = _1 r ar m-1 r ar We now claim that for each fixed x E Ian, Utt - Urr = 0 in (0,oo) x (0,oo), U(x;r,O) = (x; r), Ut (x; r, 0) = 'i'(x; r) for r > 0, U(x;0,t) =0 fort >0. This follows by a straightforward calculation. First, in view of (6.2.4) and -r1Dra (r 2m Ur) = r 2m-1 Urr + 2mr2m- 2 Ur = r2m-1(Urr + n - 1 Ur) _r2m-1 Utt Then by (6.2.10) and Lemma 6.2.4(1), we have () 15 Urr = () 1 r ar (r2U) = (T2rn_iU) = Utt. The initial condition easily follows from the definition of U, 1 and 'I'. The boundary condition U(x; 0, t) = 0 follows from Lemma 6.2.4(2). Asforn=3, we have foranyt>r>0, U(x;r,t) = 2 (x;t+r) - (x;t -r)) + 2 W(x; s) ds. Note that by Lemma 6.2.4(2), m-1 U(x;r,t) = m-1 ai cr,ir2+1 (r2mU(x; r, ari U(x; r, t). 6. Wave Equations Hence lim im U(x, r, t) = u(x t). r t = 1U(x; ) r-+0 Cm,Or Therefore, we obtain u(x, t) = Cm, p ( (x; t - r) - (x; t - r)) - 2r t_r t (x t) + (x;t)). Using n =2m+ 1, the expression for Cm,p in Lemma 6.2.4 and the definitions of and 'I', we can rewrite the last formula in terms of co and b. Thus, we obtain for any x E Ian and t > 0, 1 1(a(1a n-3 2 la + t n-3 2 wn t UBt (x) where n is an odd integer, wn is the surface area of the unit sphere in Ian and (6.2.12) We note that c3 = 1 and hence (6.2.11) reduces to (6.2.5) for n = 3. Now we check that u given by (6.2.11) indeed solves the initial-value problem (6.2.1). Theorem 6.2.5. Let n > 3 be an odd integer and k > 2 be an integer. and u is defined in (6.2.11). Suppose o E Cn21+1c(Il8n), b E Then u E C'(Il8n x (0, oo)) and 2ltt - 02l = O 21Z Itn X (0,00). Moreover, for any xo E IlBn, lim u(x, t) = cp(xo), ut(x, t) = vb(xo). In fact, u can be extended to a CI-function in Ian x [0, oo). Proof. The proof proceeds similarly to that of Theorem 6.2.1. We consider cp = 0. Then for any (x,t) E Ian x (0,oo), / 6.2. Higher-Dimensional Wave Equations where 1 5Bt (x) By Lemma 6.2.4(2), we have n-3 2 u(x, t) = cn-1,Zt Z+1 (x, t). Note that cn in (6.2.12) is c(n_1)/2,o in Lemma 6.2.4. By the change of coordinates y = x -I- wt, we write 1 Therefore, a2 W (x, t) = IwI=1 avi (x -I- tw) dSw . Hence, u(x, t) is defined for any (x, t) E ItSn x [0, oo) and 0) = 0. Since 3+1(W ), we conclude easily that Vu exists and is continuous in E Cn2 , k. For t-derivatives, we conclude similarly that W x [0, oo), for i = 0,1, ut (x, t) is defined for any (x, t) E W x [0, oo) and ut 0) = b. Moreover, Vu t is continuous in W x (0, oo), for i = 0,1, a dS k - 1. In particular, 5Bt (x) Bt (x) LW(x, t) = L'b(x + tw) dS O d 1 Wnt1 Du(x, t) _ n-3 2 0 dsy On the other hand, Lemma 6.2.4(1) implies utt = (j) la n-1 2 6. Wave Equations utt = (13'\ 2n-1 tat n-3 t aBt(x) This implies that utt - Du = 0 at (x, t) E l[8n x (0, oo) and then u E WnCn x [O,oo)). = 0 in a similar way. We can discuss the case 6.2.6. Arbitrary Even Dimensions. Let n > 2 be an even integer with it = 2m - 2, By setting x = (x, xn+l) for x = (Xi,.. , xn) E IISn and 'a(, t) = u(x, t), we have utt - Oxu = 0 in Rn+1 x (0, oo), 0) = cp, on W+1, b() = b(x). = o(x), As n + 1 is odd, by (6.2.11), with n -I-1 replacing n, we have u(x, t) = n-2 2 (13\( + (L1 t f8Bt() ) dSy n-2 2 Wn+1 t x J8Bt() where y = yn, yn+1) = (y, yn+1) The integrals here are over the surface aBt (x) in Rn+1. Now we evaluate them as integrals in Rn by eliminating For xn+1 = 0, the sphere I y - x = t in Rn+1 has two pieces given by yn+1. 2Jn+1 = ±t2 - I y and its surface area element is dS-y - (1+I0 yyn+ll212 dy = t dy. t2_ly_xl2 6.2. Higher-Dimensional Wave Equations Hence 1 Wlt [ f t(x) /t2 - y nf Wfl JBt(x) - x12 We point out that wn/n is the volume of the unit ball in A similar expression holds for b. By a simple substitute, we now get an expression of u in terms of cp and b. We need to calculate the constant in the formula. Therefore, we obtain for any x E W and t > 0, ux't- 1() t at Wn JBt(x) + tat t2 _ Iq - xl2 Bt(x) /t2-Iy-xI2 where n is an even integer, wn/n is the volume of the unit ball in Ian and cn is given by Cn In fact, we have en=2 4 n. We note that c2 = 2 and hence (6.2.13) reduces to (6.2.6) for n = 2. Theorem 6.2.6. Let n be an even integer and k > 2 be an integer. Suppose E and u is defined in (6.2.13). Then u e cP E Cc(I[8Th x (0,oo)) and utt - Du = 0 in Itn x (0, oo) . Moreover, for any xo E Ilgn lim u(x, t) = Sp(xo), ut(x, t) = b(xo). lim (x,t)-*(xo,o) This follows from Theorem 6.2.5. Again, u can be extended to a Cfunction in Ian x [0,oo). 6.2.7. Global Properties. Next, we discuss global properties of solutions of the initial-value problem for the wave equation. First, we have the following global boundedness. Theorem 6.2.7. For n > 2, let b be a smooth function in Ian and u be a solution of utt - Du = 0 in x (0, oo), u(.,0)=0, u(,0)=b 6. Wave Equations Then for any t > 0, n-1 I< C IIoZlbIILI(Rn), i=0 where C is a positive constant depending only on n. Solutions not only are bounded globally but also decay as t -+ oo for n > 2. In this aspect, there is a sharp difference between dimension 1 and higher dimensions. By d'Alembert's formula (6.1.5), it is obvious that solutions of the initial-value problem for the one-dimensional wave equation do not decay as t - oo. However, solutions in higher dimensions have a different behavior. Theorem 6.2.8. For n > 2, let b be a smooth function in ]I8n and u be a solution of 26tt - 0'ib = 0 in n X (0, 00), ut(',0)='i/.' onW. Then for any t> 1, [z] where C is a positive constant depending only on n. Decay estimates in Theorem 6.2.8 are optimal for large t. They play an important role in the studies of global solutions of nonlinear wave equations. We note that decay rates vary according to dimensions. Before presenting a proof, we demonstrate that t-1 is the correct decay rate for n = 3 by a simple geometric consideration. By (6.2.5), the solution u is given by 1 (y) dsy, u(x' t) LB(X) for any (x, t) E 1[83 x (0, oo). Suppose zb is of compact support and supp b C BR for some R> 0. Then u(x, t) = b(y) dSy. A simple geometric argument shows that for any x E ][83 and any t> 0, Area(BR n aBt(x)) < CR2, where C is a constant independent of x and t. Hence, 2 -sup kbl. R3 6.2. Higher-Dimensional Wave Equations This clearly shows that u(x, t) decays uniformly for x E ][83 at the rate of t-1 as t -3 00. The drawback here is that the diameter of the support appears explicitly in the estimate. The discussion for n = 2 is a bit complicated and is left as an exercise. Refer to Exercise 6.7. We now prove Theorem 6.2.7 and Theorem 6.2.8 together. The proof is based on explicit expressions for u. Proof of Theorems 6.2.7 and 6.2.8. We first consider n = 3. By assuming that /i is of compact support, we prove that for any t > 0, Iu(x,t)I c and for any t> 0, Iu(x,t)I By (6.2.5), the solution u is given by for any (x, t) E ][83 x (0, oo). Since b has compact support, we have ib(x+tw)=-J Then u(x, t) _ - LL1=i 8s For s > t, we have t < s2/t and hence lu(x, t) < sW)I dSwds <_ 2 For the global boundedness, we first have 2z/i(x+sw)ds. Then u(x, t) = S as2 b(x + sW) dSWds. Hence l 1J 47r J IWI=1 I V2'V(x + sW)I dSWds < 1 47r We now discuss general '/i. For any (x, t) E ][83 x (0, oo), we note that u depends on b only on 8Bt(x). We now take a cutoff function r,i E C( R3) with r,, = 1 in Bt+l (x), r,, = 0 in Il83 \ Bt+2 (x) and a uniform bound on Vi. We can obtain the Then in the expression for u, we may replace /i by 6. Wave Equations desired estimates by repeating the argument above. We simply note that derivatives of r have uniform bounds, independent of (x, t) E ][83 x (0, oo). Now we consider n = 2. By assuming that b is of compact support, we prove that for any t> 0, and for any t> 1, 2Y` (IkbIILl(R2) + IIV'bIIL1(R2)) The general case follows similarly to the case of n = 3. By (6.2.6) and a change of variables, we have u(x, t) = 1 Jft (x + rw) dSdr. t2 - r2 J11=1 As in the proof for n = 3, we have for r > 0, (x+ sw) dSds, and hence s fwI=1 sw)f dSWds Therefore, Pt For the decay estimate, we write u as u(x,t) = t21 r2 dr 4 IIVIIL1(R2). where s > 0 is a positive constant to be determined. We can estimate similarly to the above. In fact, 121J1 = f rJ - r2 1 t2 t2 - r ( R2) . 6.2. Higher-Dimensional Wave Equations A simple calculation yields 1 t2 - r2 dr=Jt [ (t + r)(t-r) Ill! = t2 r r2 t2 - (t - (x+rw)dSdr f r 'WI_1 IJ zit1 - 2 Therefore, we obtain Iu(x,t)I V2Et_E21R2) + For any t> 1, we take e = 1/2 and obtain the desired result. We leave the proof for arbitrary n as an exercise. 6.2.8. Duhamel's Principle. We now discuss the initial-value problem for the nonhomogeneous wave equation. Let cp and b be C2 and C1 functions in Rn, respectively, and f be a continuous function in Rn x (0, oo). Consider 2Gtt - 026 = f (6.2.14) lri fin' X (0,00), For f - 0, the solution u of (6.2.14) is given by (6.2.11) for n odd and by (6.2.13) for n even. We note that there are two terms in these expressions, one being a derivative in t. This is not a coincidence. We now decompose (6.2.14) into three problems, (Fi_2.7.ril . utt - Du = 0 in Ifn x (0, oo), 0) = cp, u(.,0)=0 on Itn, utt - Ou = 0 in Ifn x (0, oo), on lltn, 6. Wave Equations utt - Du = f (621 7' in ][8n x (0, oo), u(.,O)=O, t(',O)=O onW. Obviously, a sum of solutions of (6.2.15)-(6.2.17) yields a solution of (6.2.14). For any /i E C[ 2 H-1(W`), set for (x,t) E ]I8' x (O,oo), M. (x, t) - 'ra-3 2 t at) cent Jast(X) if n > 3 is odd, and (6.2.19) Mp(x, t = 1 fia cn, n-2 2 'b(y) d y t2 - l y - xl2 if n > 2 is even, where wn is the surface area of the unit sphere in Rn and 1 .3 Cn = (n - 2) fore > 3 odd, for n > 2 even. We note that [ 2 ] + 1 = n 21 if n is odd, and [ 2 ] + 1 = n 2 2 if n is even. Theorem 6.2.9. Let m > 2 be an integer, b E C[ 2 l+m-1(Rn) and set u = Mp. Then u E Cn (Rn x (0, oo)) and utt - Du = 0 in Wn x (0, oo) Moreover, for any xo E lim u (x, t) = 0, ut (x, t) _ 'ib(xo). Proof. This follows easily from Theorem 6.2.5 and Theorem 6.2.6 for p = 0. As we have seen, u is in fact cm in Rn x [0, oo). LI We now prove that solutions of (6.2.15) can be obtained directly from those of (6.2.16). Theorem 6.2.10. Let m > 2 be an integer, P E C[ 2 H m (W) and set u = Then u E Cn (Rn x (0, oo)) and utt - Du = 0 in x (0, oo) . Moreover, for any xo E lim (x,t)-+(xo,o) u (x, t) = cp (xo) , lim (x,t)-+(xo,o) ut (x, t) = 0. 6.2. Higher-Dimensional Wave Equations Proof. The proof is based on straightforward calculations. We point out that u is C"2 in Ian x [0, oo). By the definition of t), we have attMc - OMB =0 in Ian x (0, oo), on Ian. 0) = cP attu - Du = ('9tt - 0)atM(p = at(attM -OMB) = 0 in RTh x (0,00), and aM,(,t)(,0) = ( on RTh, 0) = 8tu(', U) = 0) = 0 on Rn. We have the desired result. The next result is referred to as Duhamel's principle. Theorem 6.2.11. Let m > 2 be an integer, f e C[ 2 H m-1(Rn x [0, oo)) and u be defined by t u(x,t) _ f Mf(x,t - T)CLT, where f.7- = f(.,r). Then u e C"`(IE8Th x (0,00)) and utt - Du = f in ]L8mx (0, oo). Moreover, for any xo E IE8, lim u(x, t) = 0, ut (x, t) = 0. Proof. The regularity of u easily follows from Theorem 6.2.9. We will verify that u satisfies utt - Du = f and the initial conditions. For each fixed T > 0, w(x, t) = MfT (x, t - T) satisfies Wtt - LtW = 0 in R x (r, oo), f(.,r) on R. = 0, We note that the initial conditions here are prescribed on {t = T}. Then ut = MfT (x, t - T) I T=t + f 0 atMfT (x, t - T) dT 6. Wave Equations utt = aMf(, t - T) ITt + = f(xt)+f aM fT (x, t - T) dT M fT(x,t-T)dT t = f(xt)+fMfT(xt_r)dr = f(x,t)+zu. Hence utt - Du =fin I[8n x (0, oo) and 0) = 0, 0) = 0 in IlBn. As an application of Theorem 6.2.11, we consider the initial-value problem (6.2.17) for n = 3. Let u be a CZ-solution of utt - Du = f in 1[83 x (0> oo)> u(',O)=O, Ut(,0)0 onR3. By (6.2.18) for n = 3, we have for any z/> E CZ(I[83), 4Rt fBt(x) Then, by Theorem 6.2.11, u(x, t) = t MfT (x, t - T) dT = 14 oJ t - T JaBt-T t By the change of variables T = t - s, we have u(x t) _ JO J8Bs(x) Therefore, (6.2.20) u(x, t) = f(y, iy I for any (x, t) E I183 x (0, oo). We note that the value of the solution u at (x, t) depends on the values of f only at the points (y, s) with Theorem 6.2.12. Let m> 2 be an integer, f E C( R x [0, oo)) and u be defined by (6.2.20). Then u e Cm (R3 x (0, oo)) and utt - Du = f in R x (0, oo). Moreover, for any xo E I183, lim (x,t)-+(xo,o) u(x, t) = 0, lim (x,t)-+(xo,o) ut (x, t) = 0. 6.3. Energy Estimates 6.3. Energy Estimates In this section, we derive energy estimates of solutions of initial-value problems for a class of hyperbolic equations slightly more general than the wave equation. Before we start, we demonstrate by a simple case what is involved. Suppose u is a CZ-solution of Utt-LU=O inRmx(O,oo). We assume that 0) have compact support. By finite-speed 0) and propagation, t) also has compact support for any t > 0. We multiply the wave equation by Ut and integrate in BR x (0, t). Here we choose R sufficiently large such that BR contains the support of s), for any s e (0, t). Note that ututt - utoU = Z (mot + I Then a simple integration in BR x (0, t) yields Z JRnX{t} (U+ IVxUI2)dX = 12 J (mot + 0xU) dx. This is conservation of energy: the LZ-norm of derivatives at each time slice is a constant independent of time. For general hyperbolic equations, conservation of energy is not expected. However, we have the energy estimates: the energy at later time is controlled by the initial energy. Let a, c and f be continuous functions in II8" x [0, oo) and co and /i be continuous functions in IIBn. We consider the initial-value problem (6.3.1) Utt - aLu + cu = f in ][8n x (0, oo), 0) = cp, in ][8n. 0) We assume that a is a positive function satisfying (6.3.2) A < a(x, t) < A for any (x,t) E ][8" x [0,oo), for some positive constants A and A. For the wave equation, we have a = 1 and c = 0 and hence we can choose A = A = 1 in (6.3.2). In the following, we set 1 For any point P = (X, T) E ][8n x (0, oo), consider the cone Ck(P) (opening downward) with vertex at P defined by Ck(P) _ {(x, t) : 0 < t < T, Ic!x - X I< T - t}. 6. Wave Equations As in Section 2.3, we denote by DSCk (P) and a_ Ck (P) the side and bottom of the boundary, respectively, i.e., xI= T - t}, asck(P)_ { (x, t): o < t < T, a_ck(P) _ {(x, o): ,'iIx - x < T}. We note that a_Ck(P) is simply the closed ball in ][8x {0} centered at (X, 0) with radius T/ic. Figure 6.3.1. The cone Ck (P) . Theorem 6.3.1. Let a be Ci, c and f be continuous in I[8n x [0, oo), and let cp be Ci and zb be continuous in I[8n. Suppose (6.3.2) holds and u E C2(I[8x (0, oo)) fl Ci(I[8x [0, oo)) is a solution of (6.3.1). Then for any point P = (X,T) E W` x (0,oo) and any 'i > io, e-t(u2 + ut + alVul2) dxdt w(P) (cp2 - 2 + alOcpI2) dx - f 2 dxdt, where rip is a positive constant depending only on n, A, the C'-norm of a and the L°° -norm of c in Ck(P). Proof. We multiply the equation in (6.3.1) by 2e-'tut and integrate in Ck(P), for a nonnegative constant r to be determined. First, we note that 6.3. Energy Estimates and n -2e-taut0u = -2e-'taut i=1 n (- 2(e tautux2)xZ + 2e-tauxZutx2 -+ 2e (- 2(e i=1 n + (e-tau)t + + where we used 2uut= (U.)t. Therefore, we obtain r/e-tau- e '7tatu) , (e_tautux)xi + e-t(ut + aIVuI2) (e-'tut -f- e-taIVuI2)t - 2 i=1 n + i=1 = 2e-'tut f. We note that the first two terms in the left-hand side are derivatives of quadratic expressions in Vu and ut and that the next three terms are quadratic in Vxu and ut. In particular, the third term is a positive quadratic form. The final term in the left-hand side involves u itself. To control this term, we note that (e-tu2)t + r/e-tu2 - 2e-tuut = 0. Then a simple addition yields (e_t(U2+U? -t- aIVuI2))t - i=1 ut + aIVuI2) = RHS, where n RHS = a-'tat I VuI2 (c - 1)uut -+ 2e-'tut f. The first three terms in RHS are quadratic in ut, ux2 and u. Now by (6.3.2) and the Cauchy inequality, we have 2 2 I 2 I ut2 + 1 au2z and similar estimates for other three terms in RHS. Hence f2, RHS < ut + aIVuI2) + 6. Wave Equations where r7o is a positive constant which can be taken as o = -A1 sup ati + n + 11 - sup IVaI + sup ci +2. C(P) Then a simple substitution yields n (e_t(u2 + ut + aIDuI2))t - Z (il ut + Upon integrating over f 2. we obtain e-t(u2 + ut + aIDuI2) dxdt + fsc,c(p) e-(u2 + ut + aIDul2)vt - 2 2=1 Ja_ck(P) (u2+u+alVul2)dx+fe-t f 2 dxdt, w(P) where the unit exterior normal vector on 83Ck(P) is given by (x-X kIx-XI'1 We need only prove that the integrand for 83Ck(P) is nonnegative. We claim that BI - (u+alVul2)vt - 2 autuv2 >0 on To prove this, we first note that, by the Cauchy inequality, 22 n 7b (2) In 7L With vt = 1/ 1 + k2, we have BI > (ut +aIDuI2 - 2kaIutl IBy (6.3.2) and ic = 1/JX, we have ic/Ei < 1. Hence BI > 1 - k2 (ut + alDul2 Therefore, the boundary integral on 83Ck(P) is nonnegative and can be discarded. 6.3. Energy Estimates A consequence of Theorem 6.3.1 is the uniqueness of solutions of (6.3.1). We can also discuss the domain of dependence and the range of influence as in the previous section. We note that the cone Ck(P) in Theorem 6.3.1 plays the same role as the cone in Theorem 2.3.4. The constant ic is chosen so that the boundary integral over 83Ck(P) is nonnegative and hence can be dropped from the estimate. Similar to Theorem 2.3.5, we have the following result. Theorem 6.3.2. Let a be Cl, c and f be continuous in Il8n x [0, oo), and let cp be Cl and b be continuous in ][8n. Suppose (6.3.2) holds and u E C2(I[8n x (0, oo)) fl Cl(][8n x [0, oo)) is a solution of (6.3.1). For a firmed b E L2(][8n), then for any rj > rjo, T> 0, if f E L2(][8n x (0, T)) and cp, e_t(u2 + u + aIVuI2) dx + e-t(u2 + ut + aIVuI2) dxdt e-t f 2 dxdt, x (O,T) where ijo is a positive constant depending only on n, A, the C1-norm of a and the L°°-norm of c in W x [0, T]. Usually, we call ut + aIDuI2 the energy density and its integral over Il8n x {t} the energy at time t. Then Theorem 6.3.2 asserts, in the case of c = 0 and f = 0, that the initial energy (the energy at t = 0) controls the energy at later time. Next, we consider the initial-value problem in general domains. Let SZ be a bounded domain in W and h_ and h+ be two piecewise C1-functions in SZ with h_ D= {(x,t): h_(x) utt - aDu + cu = f in D. 6. Wave Equations Figure 6.3.2. A general domain. We can perform a similar integration in D as in the proof of Theorem 6.3.1 and obtain f8D e-t (2 + u+ aIVuI2)v+t - 2 -I- (-X70) f Ja- n autuxiv+2 dS i=1 e-t(u2 + ut + aIVuI2) dxdt e-t ((2 + ut + aIVuI2)v_t - 2 autuxiv_i dS i=1 e t f2 dxdt, Where v = (v±i, , v±n, v±t) are unit normal vectors pointing in the positive t-direction along 9D. We are interested in whether the integrand for a+D is nonnegative. As in the proof of Theorem 6.3.1, we have, by the Cauchy inequality, 1 _ IVuI\/1 - v+t. ?Lx v+i Then it is easy to see that n (mot + aIVuI2)v+t -2 z=1 rel="nofollow">_ (ut -I- aI0uI2)v+t - 2/a(1 - v+t) ' \1IutI ' IVuI > 0 on a+D if v+t > all - v+t) This condition can be written as (6.3.4) v+t > on B+D. 6.3. Energy Estimates In conclusion, under the condition (6.3.4), we obtain dS + (-X70) ID e-t(u2 + ut + aIDuI2) dxdt Jr D e-t + ((U2 + ut + aIDuI2)v_t - 2 f2 dxdt. If we prescribe u and ut on 3_D, then Vu can be calculated on 3_D in terms of u and ut. Hence, the expressions in the right-hand side are known. In particular, if u = 'at = 0 on 3_D and f = 0 in D, then u = 0 in D. Now we introduce the notion of space-like and time-like surfaces. Definition 6.3.3. Let be a C1-hypersurface in Rn x R+ and v = (vim, vt) be a unit normal vector field on with vt > 0. Then is space-like at (x, t) for (6.3.3) if vt (x, t) > a(x, t) 1 +a(x,t)' E is time-like at (x, t) if vt (x, t) < a(x, t) If the hypersurface E is given by t = t(x), it is easy to check that E is space-like at (x,t(x)) if IVt.(rll Now we consider the wave equation utt - Du = f. With a = 1, the hypersurface E is space-like at (x, t) if vt(x, t) > 1//. If (6.3.5) E is given by t = t(x), then E is space-like at (x, t(x)) if I< 1. In the following, we demonstrate the importance of space-like hypersurfaces by the wave equation. Let E be aspace-like hypersurface for the wave equation. Then for any (xO, to) E E, the range of influence of (x0, to) is given by the cone {(x, t) : t - to > lx - xoI} and hence is always above E. This suggests that prescribing initial values on space-like surfaces yields awell-posed problem. 6. Wave Equations Figure 6.3.3. A space-like hypersurface. Figure 6.3.4. An integral domain for space-like initial hypersurfaces. In fact, domains of integration for energy estimates can be constructed accordingly. Next, we discuss briefly initial-value problems with initial values prescribed on a time-like hypersurface. Consider utt = u+ uyy for x > 0 and y, t e III, u= 1.sin my, 1.sin my on {x = 0}. m2 ax m Here we treat {x = 0} as the initial hypersurface, which is time-like for the wave equation. A solution is given by 1 um x , y) = m2 em5 sin my. Note that um40, DUm ax on {x=0} asm-+oo. Meanwhile, for any x > 0, sup I as m - 00. m2 Ilga Therefore, there is no continuous dependence on the initial values. 6.4. Exercises To conclude this section, we discuss a consequence of Theorem 6.3.2. In Subsection 2.3.3, we proved in Theorem 2.3.7 the existence of weak solutions of the initial-value problem for the first-order linear PDEs with the help of estimates in Theorem 2.3.5. By a similar process, we can prove the existence of weak solutions of (6.3.1) using Theorem 6.3.2. However, there is a significant difference. The weak solutions in Definition 2.3.6 are in L2 because an estimate of the L2-norms of solutions is established in Theorem 2.3.5. In the present situation, Theorem 6.3.2 establishes an estimate of the L2-norms of solutions and their derivatives. This naturally leads to a new norm defined by 1 (u2 + ut + Idxdt IIUIIH1(Rnx(OT)) = (LX The superscript 1 in Hl indicates the order of derivatives. With such a norm, we can define the Sobolev space Hl (][8n x (O, T)) as the completion of smooth functions of finite Hl-norms with respect to the H1-norm. Obviously, Hl(][8n x (O, T)) defined in this way is complete. In fact, it is a Hilbert space, since the Hl-norm is naturally induced by an Hl-inner product given by (u, v)H1(Rn x (o,T)) _ (uv + Utvt + Vu Vv) dxdt. n x (O,T) Then we can prove that (6.3.1) admits a weak Hl-solution in ][8n x (O, T) if cp = = 0. We will not provide the details here. The purpose of this short discussion is to demonstrate the importance of Sobolev spaces in PDEs. We refer to Subsection 4.4.2 for a discussion of weak solutions of the Poisson equation. 6.4. Exercises Exercise 6.1. Let l be a positive constant, cp e C2([0, l]) and b e C'([0, l]). Consider Utt - Usa; = 0 in (0, 1) X (0, oo), u(.,0) = (,o, u(',0) = b in [0,1], u(0, t) = 0, u5(l, t) = 0 fort > 0. Find a compatibility condition and prove the existence of a C2-solution under such a condition. 6. Wave Equations Exercise 6.2. Let cpl and cp2 be C2-functions in {x < 0} and {x > 0}, respectively. Consider the characteristic initial-value problem utt - = 0 fort > lxi, u(x, -x) = (P1 (X) for x <0, u(x, x) = cp2(x) for x >0. Solve this problem and find the domain of dependence for any point (x, t) with t> lxi. Exercise 6.3. Let cpl and cp2 be C2-functions in {x > 0}. Consider the Goursat problem utt - u=0 for 0 < t < x, for x > 0. u(x, 0) = cpl(x), u(x, x) = cp2(x) Solve this problem and find the domain of dependence for any point (x, t) with 0 < t <x. Exercise 6.4. Let a be a constant and cp and which vanish near x = 0. Consider be C2-functions on (0, oo) utt-u=0 forx>0, t>0, u(x, 0) = cp(x), ut(x, 0) _ b(x) for x> 0, u(0, t) = a for a L -1 and prove that in general there exist no solutions for a = -1. Exercise 6.5. Let a be a constant with al < 1. Prove that the wave equation utt - L u= 0 in R3 x R is preserved by a Lore ntz transformation, i.e., a change of variables given by t - axl 1-a2' xl - at Y1= 1- a2' yz=xi fori=2, 3. Exercise 6.6. Let A be a positive constant and i4' E C2(]I82). Solve the following initial-value problems by the method of descent: utt = Du + AZU in 1[82 x (0, oo), 0) _ i/i on ]182, 6.4. Exercises utt = Du - A2u in Il82 x (0, oo), Hint: Use complex functions temporarily to solve the second problem. Exercise 6.7. Let b be a bounded function defined in ]E82 with /i = 0 in ][82 \ Bl. For any (x, t) E ][82 x (0, oo), define utx, t = ) t2 - IY - xI (1) For any a e (0, 1), prove supIu(',t)I Bat CSUPII foranyt> 1, t Il2z where C is a positive constant depending only on a. (2) Assume, in addition, that zb = 1 in Bl. For any unit vector e E Il82, find the decay rate of u(te, t) as t -+ oo. Exercise 6.8. Let cp e C2(R3) and zb e C1(Il83). Suppose that u E C2(Il83 x [0, oo)) is a solution of the initial-value problem u-/u=0 in ][83 x (0, oo), onR3. (1) For any fixed (xO, to) E Il83 x (0, oo), set for any x e Bto (xo) \ {xo}, (Vxu(xo IJ Prove that div v = 0. (2) Derive an expression of u(xo, to) in terms of cp and zb by integrating div v in Bto (xO) \ BE (xo) and then letting e - 0. Remark: This exercise gives an alternative approach to solving the initialvalue problem for the three-dimensional wave equation. Exercise 6.9. Let a be a positive constant and u be a CZ-solution of the characteristic initial-value problem utt - Du = 0 in {(x, t) E ][83 x (0,00): t> lxi > a}, u(x, lxi) = 0 for lxi > a. (1) For any fixed (xo, to) E Il83 x Il8+ with to > Ixol > a, integrate dive (introduced in Exercise 6.8) in the region bounded by ix-xol+ lxi _ to, lxi = a and lx - xoi = e. By letting e -+ 0, express u(xp, to) in terms of an integral over BBa. 6. Wave Equations (2) For any w e §2 and r > 0, prove that the limit lim (ru(rw,r+'r)) exists and the convergence is uniform for w e §2 and r E (0, To], for any fixed TO>0. Remark: The limit in (2) is called the radiation field. 1 Exercise 6.10. Prove Theorem 6.2.7 and Theorem 6.2.8 for n > 2. Exercise 6.11. Set QT = {(x, t) 0 < x < 1, 0 < t < T}. Consider the Lu - 2utt + 3ut + = 0. (1) Give a correct presentation of the boundary-value problem in QT. (2) Find an explicit expression of a solution with prescribed boundary values. (3) Derive an estimate of the integral of u + ut in QT. Hint: For (2), divide QT into three regions separated by characteristic curves from (0, 0). For (3), integrate an appropriate linear combination of utLu and to make integrands on [0,1] x {t} and {l} x [0, t] positive definite. Exercise 6.12. For some constant a > 0, let f be a C1-function in a < I x 0. Consider the characteristic initial-value problem for the wave equation utt - Du = f (x, t) in a < lxi < t +a, u = cp(x, t) on lxl> a, t = ixi- a, u='b(x,t) onx=a, t>0. Derive an energy estimate in an appropriate domain in a < lxi 1F. G. Friedlander, On the radiation field of pulse solutions of the wave equation, Proc. Roy. Soc. A, 269 (1962), 53-65. Chapter 7 First-Order Differential Systems In this chapter, we discuss partial differential systems of the first order and focus on local existence of solutions. In Section 7.1, we introduce the notion of noncharacteristic hypersurfaces for initial-value problems. We proceed here for linear partial differential equations and partial differential systems of arbitrary order similarly to how we did for first-order linear PDEs in Section 2.1 and second-order linear PDEs in Section 3.1. We show that we can compute all derivatives of solutions on initial hypersurfaces if initial values are prescribed on noncharacteristic initial hypersurfaces. We also demonstrate that partial differential systems of arbitrary order can always be transformed to those of the first order. In Section 7.2, we discuss analytic solutions of the initial-value problem for first-order linear differential systems. The main result is the CauchyKovalevskaya theorem, which asserts the local existence of analytic solutions if the coefficient matrices and the nonhomogeneous terms are analytic and the initial values are analytic on analytic noncharacteristic hypersurfaces. The proof is based on the convergence of the formal power series of solutions. In this section, we also prove a uniqueness result due to Holmgren, which asserts that the solutions in the Cauchy-Kovalevskaya theorem are the only solutions in the C°°-category. In Section 7.3, we construct a first-order linear differential system in R3 that does not admit smooth solutions in any subsets of R3. In this system, the coefficient matrices are analytic and the nonhomogeneous term 7. First-Order Differential Systems is a suitably chosen smooth function. (For analytic nonhomogeneous terms there would always be solutions by the Cauchy-Kovalevskaya theorem). We need to point out that such a nonhomogeneous term is proved to exist by a contradiction argument. An important role is played by the Baire category theorem. 7.1. Noncharacteristic Hypersurfaces The main focus in this section is on linear partial differential systems of arbitrary order. 7.1.1. Linear Partial Differential Equations. We start with linear partial differential equations of arbitrary order and proceed here as in Sections 2.1 and 3.1. Let 1 be a domain in 1Rn containing the origin, m be a positive integer and as be a continuous function in 1, for cx e Z+ with of < in. Consider an mth-order linear differential operator L defined by (7.1.1) aa(x)aau in 1. Lu = IaI<m Here, as is called the coefficient of aau. Definition 7.1.1. Let L be a linear differential operator of order m as in (7.1.1) defined in 1 C 1R7. The principal part Lo and the principal symbol p of L are defined by Lou = in SZ, and aa(x)r, IaI=m for any XES2and EIlBn. The principal part Lo is a differential operator consisting of terms involving derivatives of order m in L, and the principal symbol is a homogeneous polynomial of degree m with coefficients given by the coefficients of Lo. Principal symbols play an important role in discussions of differential operators. We discussed first-order and second-order linear differential operators in Chapter 2 and Chapter 3, respectively. Usually, they are written in the forms Lu = in 1, 7.1. Noncharacteristic Hypersurfaces Lu = bi (x)u+ c(x)u in ft aij i,j=1 Their principal symbols are given by n p(xi) = i=1 n p(xiS) for any x E St and e E l[8. For second-order differential operators, we usually assume that (a) is a symmetric matrix in ft Let f be a continuous function in ft. We consider the equation Lu = f (x) in S2. The function f is called the nonhomogeneous term of the equation. Let E be a smooth hypersurface in St with a unit normal vector field v = (vi, , vn). For any integer j > 1, any point xp E E and any C-function u defined in a neighborhood of xo, the jth normal derivative of u at xo is defined by au avj U« a« u= U1 1 ... Un n «1 ... «n n2G We now prescribe the values of u and its normal derivatives on so that we can find a solution u of (7.1.2) in ft Let u0, u1, , ur_1 be continuous functions defined on . We set Ur _iu = u1, ... , (7.1.3) u = u0, = um-1 on I'. OU the initial hypersurf ace and u0, , We call 1 the initial values or Cauchy values. The problem of solving (7.1.2) together with (7.1.3) is called the initial-value problem or Cauchy problem. We note that there are m functions u0, u1, , ur_1 in (7.1.3). This reflects the general fact that m conditions are needed for initial-value problems for PDEs of order m. As the first step in discussing the solvability of the initial-value problem (7.1.2)-(7.1.3), we intend to find all derivatives of u on . We consider a special case where is the hyperplane {xn = 0}. In this case, we can take v = en and the initial condition (7.1.3) has the form (7.1.4) 0) = uo, U 0) = u1, ... , am 1 u(., 0) = on Rn-1. 7. First-Order Differential Systems Ian-1 and u be a smooth Let u0, ui, , u,n_i be smooth functions on solution of (7.1.2) and (7.1.4) in a neighborhood of the origin. In the following, we investigate whether we can compute all derivatives of u at the origin in terms of the equation and initial values. We write x = (x', xn) for x' E Rn-1. First, we can find all x'-derivatives of u at the origin in terms of those of u0. Next, we can find all x'-derivatives of urn at the origin in terms of those of ui. By continuing this process, we can find all x'-derivatives of u, urn , , Dm 1 u at the origin in terms of those of uO, u i , , um- i In particular, for derivatives up to order m, we find all except am u. To find DU(O), we need to use the equation. We note that a(0,... ,O,m) is the coefficient of Dm u in (7.1.2). If we assume (7.1.5) a(0,... ,0,m)(0) 4 0, then by (7.1.2), DU(0) = - a«(0)a «u (0) - f(0) a(O,... ,O,m) (o) 0 m) Hence, we can compute all derivatives up to order m at 0 in terms of the coefficients and nonhomogeneous term in (7.1.2) and the initial values u0, ui, , ur_i in (7.1.4). In fact, we can compute the derivatives of u of arbitrary order at the origin. For an illustration, we find the derivatives of u of order m + 1. By (7.1.5), a(0,... ,O,m) is not zero in a neighborhood of the origin. Hence, by (7.1.2), 1 CL(0,... ,0,m) a« a« u - f \a(0,... ,0,m) By evaluating at x E Ian-i x {0} close to the origin, we find am u(x) for x E n-1 x {0} sufficiently small. As before, we can find all x'-derivatives of DU at the origin. Hence for derivatives up to order m + 1, we find all except D +iu. To find D +iu(0), we again need to use the equation. By differentiating (7.1.2) with respect to xn, we obtain a(O,...,O,m) n u + ... = f, a1 where the dots denote a linear combination of derivatives of u whose values on Ian-1 x {0} are already calculated in terms of the derivatives of u0, ui, , u,n_i, f and the coefficients in the equation. By (7.1.5) and the above equation, we can find am +i u (0) . We can continue this process for derivatives of arbitrary order. In summary, we can find all derivatives of u of any order at the origin under the condition (7.1.5), which will be defined as the noncharacteristic condition later on. 7.1. Noncharacteristic Hypersurfaces In general, consider the hypersurface given by {(p = 0} for a smooth function co in a neighborhood of the origin with 0. We note that the vector field Vcp is normal to the hypersurface at each point of . We take a point on , say the origin. Then (p(0) = 0. Without loss of generality, 0. Then by the implicit function theorem, we we assume that coffin (0) can solve co = 0 for xn = '/'(xl , Consider the change of variables , xn_ 1) in a neighborhood of the origin. x H y = (Xi,... , xn_i, co(x)) This is a well-defined transformation with a nonsingular Jacobian in a neighborhood of the origin. With n uyn + terms not involving uyn , y,X2 uy _ =1 and in general, for any a e Z+ with al = m, a«u = a«i 9u = (P«i oEy u + terms not involving 5u, we can write the operator L in the y-coordinates as Lu = co ,13 u + terms not involving 13u. a« (x(y)) co«i Jai=rn The initial hypersurface is given by {yn = 0} in the y-coordinates. With yn = co, the coefficient of am u is given by a« (x)co«i ... 70X This is the principal symbol p(x; ) evaluated at _ V(p(x). Definition 7.1.2. Let L be a linear differential operator of order m defined as in (7.1.1) in a neighborhood of xo E W and be a smooth hypersurface containing xo. Then E is noncharacteristic at xo if p(xo; v) = as (xo) v« where v = (ill,..' , vn) is normal to at xo. Otherwise, is characteristic at x0. A hypersurface is noncharacteristic if it is noncharacteristic at every point. Strictly speaking, a hypersurface is characteristic if it is not noncharacteristic, i.e., if it is characteristic at some point. In this book, we will abuse this terminology. When we say a hypersurface is characteristic, we mean it is characteristic everywhere. This should cause no confusion. In 112, hypersurfaces are curves, so we shall speak of characteristic curves and noncharacteristic curves. 7. First-Order Differential Systems When the hypersurface vector field is given by is given by {cp = 0} with Vcp 0, its normal Vcp = (,'.. , j. . Hence we may take v = V p(xo) in (7.1.6). We note that the condition (7.1.6) is preserved under C"2-changes of coordinates. By this condition, we can find successively the values of all derivatives of u at xo, as far as they exist. Then, we could write formal power series at xo for solutions of initial-value problems. If the initial hypersurface is analytic and the coefficients, nonhomogeneous terms and initial values are analytic, then this formal power series converges to an analytic solution. This is the content of the Cauchy-Kovalevskaya theorem, which we will discuss in Section 7.2. Now we introduce a special class of linear differential operators. Definition 7.1.3. Let L be a linear differential operator of order m defined as in (7.1.1) in a neighborhood of xo E ][8'1. Then L is elliptic at xo if 0, IaI=m for any E I[8n \ {0}. A linear differential operator defined in SZ is called elliptic in SZ if it is elliptic at every point in 1. According to Definition 7.1.3, linear differential operators are elliptic if every hypersurface is noncharacteristic. Consider a first-order linear differential operator of the form n a2 (x)u+ b(x)u in SZ C R. Lu = 2=1 Its principal symbol is given by for any x e SZ and any e W1. Hence first-order linear differential equations with real coefficients are never elliptic. Complex coefficients may yield elliptic equations. For example, take a1 = 1/2 and a2 = i/2 in IR2. Then az = (awl + i32)/2 is elliptic. The notion of ellipticity was introduced in Definition 3.1.2 for secondorder linear differential operators of the form n Lu = b2(x)u+ c(x)u in SZ C R. + 2,j=1 7.1. Noncharacteristic Hypersurfaces The principal symbol of L is given by p(x; ) _ z,j=1 for any x E SZ and any E ][8n. Then L is elliptic at x E S2 if 0 for any E Il8" \ {0}. If (aj(x)) is areal-valued n x n symmetric matrix, L is elliptic at x if (a,(x)) is a definite matrix at x, positive definite or negative definite. 7.1.2. Linear Partial Differential Systems. The concept of noncharacteristics can be generalized to linear partial differential equations for vector- valued functions. Let m, N > 1 be integers and S1 C Rn be a domain. A smooth N x N matrix A in SZ is an N x N matrix whose components are smooth functions in ft Similarly, a smooth N-vector u is a vector of N components which are smooth functions in ft Alternatively, we may call them a smooth N x N matrix-valued function and a smooth N-vector-valued function, or a smooth RN-valued function, respectively. In the following, a function may mean a scalar-valued function, a vector-valued function, or a matrix-valued function. This should cause no confusion. Throughout this chapter, all vectors are in the form of column vectors. Let Aa be a smooth N x N matrix in SZ, for each cx e Z+ with Ic <m. Consider a linear partial differential operator of the form (7.1.7) Lu = Aa (x)aau in SZ, I aI<m where u is a smooth N-vector in ft Here, Aa is called the coefficient matrix of &u. We define principal parts, principal symbols and noncharacteristic hypersurfaces similarly to those for single differential equations. Definition 7.1.4. Let L be a linear differential operator defined in St C ][8n as in (7.1.7). The principal part Lo and the principal symbol p of L are defined by Aa(x)aau inn, Lpu = IaI=m and det for any x E S2 and E IlBn. A«(x)r \IaI=m 7. First-Order Differential Systems Definition 7.1.5. Let L be a linear differential operator defined in a neighborhood of xo E ][8n as in (7.1.7) and E be a smooth hypersurface containing xo. Then E is noncharacteristic at xo if A« p(xo; v) = det where v = (v1,.. , vn) is normal to E at xo. Otherwise, E is characteristic at xp. Let f be a smooth N-vector in 11. We consider the linear differential equation Lu = f (x) in 11. The function f is called the nonhomogeneous term of the equation. We often call (7.1.8) a partial differential system, treating (7.1.8) as a collection of partial differential equations for the components of U. Let E be a smooth hypersurface in 11 with a normal vector field v and let u0, u1, , be smooth N-vectors on E. We prescribe am-1 au = ul, ... , m-1u = um-1 on . (7.1.9) u = u0i ov av We call E the initial hypersurf ace and u0 i the initial values or , Cauchy values. The problem of solving (7.1.8) together with (7.1.9) is called the initial-value problem or Cauchy problem. We now examine first-order linear partial differential systems. Let A1, , An and B be smooth N x N matrices in a neighborhood of x0 E R. Consider a first-order linear differential operator n Lu = AiUX2 + Bu. i=1 A hypersurface E containing x0 is noncharacteristic at x0 if n ; 0, where v = (vi,... , v1) is normal to E at the x0. We now demonstrate that we can always reduce the order of differential systems to 1 by increasing the number of equations and the number of components of solution vectors. Proposition 7.1.6. Let L be a linear differential operator defined in a neighborhood of x0 E Rn as in (7.1.7), E be a smooth hypersurface containing x0 which is noncharacteristic at x0 for the operator L, and u0, u1, , um-1 7.1. Noncharacteristic Hypersurfaces be smooth on E. Then the initial-value problem (7.1.8)-(7.1.9) in a neighborhood of xo is equivalent to an initial-value problem for afirst-order differential system with appropriate initial values prescribed on E, and E is a noncharacteristic hypersurface at xo for the new first-order differential system. Proof. We assume that xo is the origin. In the following, we write x = (x', xn) E Ian and a = (a', an) E Z. Step 1. Straightening initial hypersurfaces. We assume that is given by {gyp = 0} for a smooth function P in a neighborhood of the origin with coffin 0. Then we introduce a change of coordinates x = (x', xn) - (x', (x)). In the new coordinates, still denoted by x, the hypersurface is given by {xn = 0} and the initial condition (7.1.9) is given by D u(x', 0) = u3(x') for j = 0,1, Step 2. Reductions to canonical forms and zero initial values. In the new coordinates, {xn = 0} is noncharacteristic at 0. Then, the coefficient matrix ,o,rn) is nonsingular at the origin and hence also in a neighborhood of the origin. Multiplying the partial differential system (7.1.8) by the inverse of this matrix, we may assume that A(o, ,o,rn) is the identity matrix in a neighborhood of the origin. Next, we may assume =0 for j =0,1, ,m-1. To see this, we introduce a function v such that rn-1 1 -u( x u(x) = v(x) + j=0 Then the differential system for v is the same as that for u with f replaced by A(x)Da (-ui(x')x). 1(x) j=0 D v (x', 0) = 0 for j = 0,1, ,m- 1. With Step 1 and Step 2 done, we assume that (7.1.8) and (7.1.9) have the form A«a«u = f, «n=o «' I D u(x', 0) = 0 for j = 0,1, 7. First-Order Differential Systems Step 3. Lowering the order. We now change this differential system to an equivalent system of order m - 1. Introduce new functions UZ = uxi for i = 1, , n, T TT U=(U" °,U1,...,Un) where T indicates the transpose. We note that U is a column vector of (n + 1)N components. Then , n - 1. Ui,xn = Un,xi for i = 1, U°,xn = Un, Hence (7.1.11) am-1 U0 U =0, n - am-2 xn lug-axiam 2Un=0 To get an (m-1)th-order differential equation for U, we write the equation for u as m-1 u = f. Aaaau + an=1 Ia' I <m-an I a'I <m We substitute Un = uxn in the first two terms in the left-hand side to get m-2 (7.1.13) ax 1 Un + A(a ° Aa as Un + an=0 a' I <m-an -1 u = f. a' I <m In the last summation in the left-hand side, any mth-order derivative of u can , n -1, be changed to an (m -1)th-order derivative of UZ for some i = 1, since no derivatives with respect to xn are involved. Now we can write a differential system for U in the form m-2 (7.1.14) ax -1U + an=0 a'l<m-an-1 The initial value for U is given by axnu(x',0) =0 for j =0,1, ,m-2. Hence, we reduce the original initial-value problem for a differential system of order m to an initial-value problem for the differential system of the form (7.1.14) of order m - 1. Now let U be a solution of (7.1.14) with zero initial values. By writing U as in (7.1.10), we prove that U° is a solution of the initial-value problem for the original differential system of order m. To see this, we first prove 7.2. Analytic Solutions that Ui = for i = 1, , n. By (7.1.11) and the initial conditions for U, we have m-2 (Un - Up,) - 0, and on { xn = 0}, =0 for j =0, ,m-3. Next, for i = 1, This easily implies Un am 1UZ-%, am 2Un=t9 -D,t9 1U0 =t9 By (7.1.12) and the initial conditions, we have am-1(Ui - D Up) = 0, n and on {xn = 0} =0 for j =0, ,m-2. for i = 1, , n - 1. Substituting Ui = Hence, Ui = for i = 1, , n, in (7.1.13), we conclude that Up is a solution for the original mthorder differential system. Now, we can repeat the procedure to reduce m to 1. We point out that straightening initial hypersurfaces and reducing initial values to zero are frequently used techniques in discussions of initial-value problems. 7.2. Analytic Solutions For a given first-order linear partial differential system in a neighborhood of xp E Rn and an initial value up prescribed on a hypersurface containing xp, we first intend to find a solution u formally. To this end, we need to determine all derivatives of u at xp, in terms of the derivatives of the initial value up and of the coefficients and the nonhomogeneous term in the equation. Obviously, all tangential derivatives (with respect to ) of u are given by derivatives of up . In order to find the derivatives of u involving the normal direction, we need help from the equation. It has been established that, if is noncharacteristic at xp, the initial-value problem leads to evaluations of all derivatives of u at xp. This is clearly a necessary first step to the determination of a solution of the initial-value problem. If the coefficient matrices and initial values are analytic, a Taylor series solution could be developed for u. The Cauchy-Kovalevskaya theorem asserts the convergence of this Taylor series in a neighborhood of xp. To motivate our discussion, we study an example of first-order partial differential systems which may admit no solutions in any neighborhood of 7. First-Order Differential Systems the origin, unless the initial values prescribed on analytic noncharacteristic hypersurfaces are analytic. Example 7.2.1. Let g = g(x) be areal-valued function in ]l8. Consider the partial differential system in ][8+ _ {(x, y) : y> 0}, uy + v = 0, (7.2.1) with initial values given by u=g(x),v=0 on{y=0}. We point out that (7.2.1) is simply the Cauchy-Riemann equation in C _ 1182. It can be written in the matrix form (1 0 (u \0 1) t\V (0 i(u -(0 \-i 0) \v) - Note that {y = 0} is noncharacteristic. In fact, there are no characteristic curves. To see this, we need to calculate the principal symbol. By taking _ ('1, '2) E ][82, we have -f- bl - (S2 The determinant of this matrix is i + , which is not zero for any 0. Therefore, there are no characteristic curves. We now write (7.2.1) in a complex form. Suppose we have a solution (u, v) for (7.2.1) with the given initial values and let w = u + iv. Then Therefore, w is (complex) analytic in the upper half-plane and its imaginary part is zero on the x-axis. By the Schwaxtz reflection principle, w can be extended across y = 0 to an analytic function in C = It82. This implies in particular that g is (real) analytic since 0) = g. We conclude that (7.2.1) admits no solutions with the given initial value g on {y = 0} unless g is real analytic. Example 7.2.1 naturally leads to discussions of analytic solutions. 7.2.1. Real Analytic Functions. We introduced real analytic functions in Section 4.2. We now discuss this subject in detail. For (real) analytic functions, we need to study convergence of infinite series of the form 7.2. Analytic Solutions where the ca are real numbers defined for all multi-indices a e Z. Throughout this section, the term convergence always refers to absolute convergence. Hence, a series a ca is convergent if and only if > ca < oo. Here, the summation is over all multi-indices a e Z. Definition 7.2.2. A function u : Ian - R is called analytic near xo E Ian if there exist an r > 0 and constants {Ua} such that ua(x - xo)a for x E Br(xp). u(x) = a If u is analytic near xo, then u is smooth near xo. Moreover, the constants ua are given by 1 ua = -8au(xa) for a E Z. a! Thus u is equal to its Taylor series about xo, i.e., x E B,.(xo). u(x) _ a For brevity, we will take xo = 0. Now we discuss an important analytic function. Example 7.2.3. For r> 0, set (k)a lal_k This power series is absolutely convergent for lxi < r// since 00 Ixil + ... + Ixn lal! a _ a rl«lal for lxii +... + IxI < Ixlv4 < r. We also note that 8«u(0) = ra for a E 7G+. We point out that all derivatives of u at 0 are positive. An effective method to prove analyticity of functions is to control their derivatives by the derivatives of functions known to be analytic. For this, we introduce the following terminology. 7. First-Order Differential Systems Definition 7.2.4. Let u and v be smooth functions defined in Br C Rn, for some r > 0. Then v majorizes u in Br, denoted by v >> u or u << v, if for any a E Z. a«v (0) > I We also call v a majorant of u in Br. The following simple result concerns the convergence of Taylor series. Lemma 7.2.5. Let u and v be smooth functions in Br. If v>> u and the Taylor series of v about the origin converges in B,., then the Taylor series of u about the origin converges in Br. Proof. We simply note that a I< 8av(0) Ix" I < oo for x E B,.. Hence we have the desired convergence for u. Next, we prove that every analytic function has a majorant. Lemma 7.2.6. If the Taylor series of u about the origin is convergent to u in Br and 0 < s/ < r, then u has an analytic majorant in Proof. Set y = s(1, ,1). Then, II = s/ < r and 1 a a constant C such that for any a E 7L+, 1 < C, and in particular, a a"u(0) C y1 C laP s Now set v (x) - s _ (x1 Cs + ... + xn) lal! x « . sk I a. Then v is analytic in B81 and majorizes u. So far, our discussions are limited to scalar-valued functions. All definitions and results can be generalized to vector-valued functions easily. For example, a vector-valued function u = UN) is analytic if each of its components is analytic. For vector-valued functions u = (Ui,... , UN) and We have the following results for compositions of functions. 7.2. Analytic Solutions Lemma 7.2.7. Let u, v be smooth functions in a neighborhood of 0 E Il8n with range in ][8"z and f,g be smooth functions in a neighborhood of 0 E Il8"` with range in I[8N, with u(0) = 0, f(0) = 0, u 7.2.2. Cauchy-Kovalevskaya Theorem. Now we are ready to discuss real analytic solutions of initial-value problems. We study first-order quasilinear partial differential systems of N equations for N unknowns in {(x, t) } with initial values prescribed on the noncharacteristic hyperplane {t=0}. Let A1, , An be smooth N x N matrices in Rn+1+N, F be a smooth N-vector in Rn+1+N and uo be a smooth N-vector in R. Consider n ut = Aj (x, t, u)ux + F(x, t, u), j=1 with (7.2.3) 0) = uo. , A, F and uo are analytic in their arguments and We assume that Al, seek an analytic solution u. We point out that {t = 0} is noncharacteristic for (7.2.2). Noncharacteristics was defined for linear differential systems in Section 7.1 and can be generalized easily to quasilinear differential systems. We refer to Section 2.1 for such a generalization for single quasilinear differential equations. The next result is referred to as the Cauchy-Kovalevskaya theorem. Theorem 7.2.9. Let uo be an analytic N-vector near 0 E Il8", and let Al, , Abe analytic N x N matrices and F be an analytic N-vector near (0,0, o(0)) E I[8"+1+N Then the problem (7.2.2)-(7.2.3) admits an analytic solution u near 0 E 1[8"+1 Proof. Without loss of generality, we assume uo = 0. To this end, we introduce v by v(x, t) = u(x, t) - uo(x). Then the differential system for v is similar to that for u. Next, we add t as an additional component of u by introducing un,+l such that uN+l,t = 1 and unr+l 0) = 0. This increases the number of equations and the number of components of the solution , A, and vector in (7.2.2) by 1 and at the same time deletes t from Al, 7. First-Order Differential Systems F. For brevity, we still denote by N the number of equations and the number of components of solution vectors. In the following, we study n ut = with 0) = 0, , An are analytic N x N matrices and F is an analytic N-vector in a neighborhood of the origin in Rn+N. We seek an analytic solution u in a neighborhood of the origin in Rn+1. To this end, we will compute derivatives of u at 0 E Rn+l in terms of derivatives of A1, , An and F at (0,0) E Rn+N and then prove that the Taylor series of u at 0 converges in a neighborhood of 0 E n+1. We note that t does not appear explicitly in the right hand side of (7.2.4). Since u=0 on {t = 0}, we have where A1, au(0)=0 For any i = 1, for any a E Z+ . , n, by differentiating (7.2.4) with respect to xZ, we get n In view of (7.2.5), we have u(O) = F(0,0). More generally, we obtain by induction 99u(0) = aaF(0, 0) for any a E Z. Next, for any a E Z+, we have n aaat u = aaatut = aaat j=1 = as (Aj j=1 t + Aj,uutu j) + Fuut Here we used the fact that Aj and F are independent of t. Thus, n aaa2u(o) = as Fuut j=1 7.2. Analytic Solutions The expression in the right-hand side can be worked out to be a polynomial with nonnegative coefficients in various derivatives of A1, , An and F and the derivatives with 1/31 +1 < Ic +2 and l < 1. More generally, for any c E Z+ and k > 0, we have aaatu(0) = Btu - where pa,k is a polynomial with nonnegative coefficients and the indices y, /3,1 range over /3 E Z+, y E Z+ and l E Z+ with I r I+ I I a I+ k -1, is considered as a polynomial in the point out that pa,k We denote by pa,k (IDDAi I , ) the value of components of are replaced by their absolute values. pa,k when all components Since pa,k has nonnegative coefficients, we conclude that spa, axaryAn, axaryF, < pa,k(I axauA1I, ... , IaxauAnl,'9 9 F', I axatul ) t u -o We now consider a new differential system n vt = (7.2.s) G(x, v), Bj (x, j=1 0, Bn are analytic N x N matrices and G is an analytic N-vector , Bn and G in a neighborhood of the origin in R -1 . We will choose B1, such that where B1, B> >A for j = 1, Hence, for any (r,'y) (0) , rz and C >>F. > Ifor j = 1,... , n, and I The above inequalities should be understood as holding componentwise. Let v be a solution of (7.2.8). We now claim that IDDu0I < a°at v (0) for any (c,k) E 7L+ 1 7. First-Order Differential Systems The proof is by induction on the order of t-derivatives. The general step follows since l ax at u(0) I = pa,k (a'1aP'A1 , ... , ax au An, ax au F, aaalu) <pa,k(laaAl1,... pa,k(axau-'-'l,' .. , axauBn, aaaly) x t u =o = axat v(0), where we used (7.2.6), (7.2.7) and the fact that pa,k has nonnegative coefficients. Thus (7.2.10) v >> u. It remains to prove that the Taylor series of v at 0 converges in a neighborhood of 0 E Rn+1. To this end, we consider B1 T - (xl + ... + xnvl 1 and G r-(x1+...+xfl+v1+...+vN)) 1 for positive constants C and r, with lxi + lvi < r/ n -F- N. As demonstrated in the proof of Lemma 7.2.6, we may choose C sufficiently large and r sufficiently small such that (7.2.9) holds. Set v=w for some scalar-valued function w in a neighborhood of 0 E I[Sn+l (7.2.8) is reduced to wt = w(.,0) = 0. This is a (single) first-order quasilinear partial differential equation. We now seek a solution w of the form w(x1 i ... , xn, t) = ti (x1 + ... + xn, t). 7.2. Analytic Solutions Then w = w (z, t) satisfies Wt = r - z - Nw (nNwz + 1), w(, o) = o. By using the method of characteristics as in Section 2.2, we have an explicit solution w(z, t) _ (n + 1)N {r - z- [(r - z)2 - 2Cr(n + 1)Nt] 2 }, and hence 1 w(x, t) _ (n+ 1)N {r_xi_ [(r_xi)2_2Cr(n+l)Nt] This function is analytic near the origin and its Taylor series about the origin is convergent for I (x, t) I < s, for sufficiently smalls > 0. Hence, the corresponding solution v of (7.2.8) is analytic and its Taylor series about the origin is convergent for I(x, t) I < s. By Lemma 7.2.5 and (7.2.10), the Taylor series of u about the origin is convergent and hence defines an analytic function for (x, t) J < s, which we denote by u. Since the Taylor series of the analytic functions ut and A3 (x, u) have the same coefficients at the origin, they agree throughout the region (x, t) < s. At the beginning of the proof, we introduced an extra component for the solution vector to get rid of t in the coefficient matrices of the differential system. Had we chosen to preserve t, we would have to solve the initial-value problem r - z - t - Nw (nNt+1), w(, o) = o. It is difficult, if not impossible, to find an explicit expression of the solution w. 7.2.3. The Uniqueness Theorem of Holmgren. The solution given in Theorem 7.2.9 is the only analytic solution since all derivatives of the solution are computed at the origin and they uniquely determine the analytic solution. A natural question is whether there are other solutions, which are not analytic. Let A0i A1, An and B be analytic N x N matrices, and let F be an analytic N-vector in a neighborhood of the origin in Rn+l and uo be an 7. First-Order Differential Systems analytic N-vector in a neighborhood of the origin in W. We consider the initial-value problem for linear differential systems of the form Aj(x, t)u+ B(x, t)u = F(x, t), Ao(x, t)ut + u(x, 0) = uO(x). The next result is referred to as the local Holmgren uniqueness theorem. It asserts that there do not exist nonanalytic solutions. Theorem 7.2.10. Let Ao, Al , , Aand B be analytic N x N matrices and F be an analytic N-vector near the origin in ][8n+1 and uo be an analytic N-vector near the origin in W. If {t = 0} is noncharacteristic at the origin, then any C' -solution of (7.2.11) is analytic in a sufficiently small neighborhood of the origin in Il81 For the proof, we need to introduce adjoint operators. Let L be a differential operator defined by n Lu = Ao(x, t)ut + Ai(x, t)uxi + B(x, t)u. i=1 For any N-vectors u and v, we write n = (vTAou)t + (vTAiu)xi - ((Av)t + (Av)xi i=1 We define the adjoint operator L* of L by n L*v = - (4v)t - (ATv)x2 + BTv i=1 n = - 4vt - AT vxi + i=1 BT - Ao t Then vT Lu (vTAiu)xi + (L*v)Tu. = (vTAou)t + z=1 Proof of Theorem 7.2.10. We will prove that any C1-solution u of Lu = 0 with a zero initial value on {t = 0} is in fact zero. We introduce an analytic change of coordinates so that the initial hypersurface {t = 0} becomes a paraboloid t= 1x12. For any > 0, we set = {(x,t) : 1x12< t < }. 7.2. Analytic Solutions We will prove that u = 0 in SZ for a sufficiently small e. In the following, we denote by a+SZ and a_ SZ the upper and lower boundary of respectively, i.e., ac = {(x,t) x2 < t = a_cz6={(x,t): 1x12=t<e}. We note that det(Ao(0)) L 0 since E is noncharacteristic at the origin. Hence Ao is nonsingular in a neighborhood of the origin. By multiplying the equation in (7.2.11) by A1, we assume Ao = I. Figure 7.2.1. A parabola. For any N-vector v defined in a neighborhood of the origin containing we have 0 = f vT Lu dxdt = f UT L*v dxdt + f+s uvT dx. There is no boundary integral over a_ SZ since u = 0 there. Let Pk = Pk(x) be an arbitrary polynomial in Rn , k = 1, , N, and form P = (F1,... , PN). We consider the initial-value problem L*v = 0 in v=P onBrf1{t=e}, where BT is the ball in with center at the origin and radius r. The principal part of L* is the same as that of L, except a different sign and a transpose. We fix r so that {t = e} f1 BT is noncharacteristic for L*, for each small e. By Theorem 7.2.9, an analytic solution v exists in BT for a small. We need to point out that the domain of convergence of v is independent of P, whose components are polynomials. We choose a small such that SZe C BT . Then we have 7. First-Order Differential Systems By the Weierstrass approximation theorem, any continuous function in a compact domain can be approximated in the L°°-norm by a sequence of polynomials. Hence, U. w dx = 0, for any continuous function w on D1 n Br. Therefore, u = 0 on D1 for any small e and hence in 11,. Theorem 7.2.9 guarantees the existence of solutions of initial-value problems in the analytic setting. As the next example shows, we do not expect any estimates of solutions in terms of initial Example 7.2.11. In R2, consider the first-order homogeneous linear differential system (7.2.1), uy + v = 0, Note that all coefficients are constant. As shown in Example 7.2.1, {y = 0} is noncharacteristic. For any integer k > 1, consider uk(x, y) = sin(kx)eky, vk(x, y) = cos(kx)eky for any (x,y) E IL82. Then (Uk, vk) satisfies (7.2.1) and on {y = 0}, uk(x, 0) = sin(kx), vk(x, 0) = cos(kx) for any x E ][8. Obviously, uk (x, 0) -I- vk (x, 0) = 1 for any x E ][8, and for any y> 0, sup (u(x,y) -}- vk(x, y)) = e21 -+ oo as k -+ oo. zE1R Therefore, there is no continuous dependence on initial values. 7.3. Nonexistence of Smooth Solutions In this section, we construct a linear differential equation which does not admit smooth solutions anywhere, due to Lewy. In this equation, the coefficients are complex-valued analytic functions and the nonhomogeneous term is a suitably chosen complex-valued smooth function. We need to point out that such a nonhomogeneous term is proved to exist by a contradiction argument. This single equation with complex coefficients for a complexvalued solution is equivalent to a system of two differential equations with real coefficients for two real-valued functions. Define a linear differential operator L in Il83 = {(x, y, z) } by (7.3.1) Lu = u + iuy - 2i(x + iy)uz. 7.3. Nonexistence of Smooth Solutions We point out that L acts on complex-valued functions. The main result in this section is the following theorem. Theorem 7.3.1. Let L be the linear differential operator in ]E83 defined in (7.3.1). Then there exists an f E C°°(R3) such that Lu = f has no C2solutions in any open subset of ]E83. Before we prove Theorem 7.3.1, we rewrite L as a differential system of two equations with real coefficients for two real-valued functions. By writing u = v + iw for real-valued functions v and w, we can write L as a differential operator acting on vectors (v, w)T. Hence (vs - w,,, + 2(yvz + xwz) wX + vy + 2(ywz - xvz) L w In the matrix form, we have (2y 2x(v By a straightforward calculation, the principal symbol is given by p(P; ) = (1 + 2y)2 + (2 - 2X3)2, for any P = (x, y, z) E II83 and 2, 3) E I[83. For any fixed P E I[83, p(P; ) is a nontrivial quadratic polynomial in II83. Therefore, if f is an analytic function near P, we can always find an analytic solution of Lu = f near P. In fact, we can always find an analytic hypersurface containing P which is noncharacteristic at P. Then by prescribing analytic initial values on this hypersurface, we can solve Lu = f by the Cauchy-Kovalevskaya theorem. Theorem 7.3.1 illustrates that the analyticity of the nonhomogeneous term f is necessary in solving Lu = f even for local solutions. We first construct a differential equation which does not admit solutions near a given point. Lemma 7.3.2. Let (xO, yo, zo) be cc point in I[83 and L be the differential op- erator defined in (7.3.1). Suppose h = h(z) is areal-valued smooth function in z e ]E8 that is not analytic at zo. Then there exist no C1 -solutions of the equation Lu = h'(z - 2yox + 2xoy) in any neighborhood of (ceO, yon zo) Proof. We first consider the special case xo = yo = 0 and prove it by contradiction. Suppose there exists a C1-solution u of Lu = h'(z) 7. First-Order Differential Systems in a neighborhood of (0,0, zO), say x (zo - R, zo + R) C ll82 x ][8, SZ = for some R> 0. Set v(r, (9, z) = sin (9, z). As a function of (r, (9, z), v is Cl in (0, R) x ll8 x (zO - R, zo + R) and is continuous at r = 0 with v(0, (9,z) = 0. Moreover, u is 2ir-periodic in 8. A straightforward calculation yields Lu = 2vr + ? ve - 2ivz = h'(z). Consider the function f2ir V(r, z) = v(r, (9, z) dB. 0 J - R, zo + R), is continuous up to r = 0 Then V is Cl in (r, z) E (0, R) x (zo with V(0, z) = 0, and satisfies r Vz - iV,. = rve - 2ivz) f2 (v,. + d8 = W= V(r,z) -iirh(z). Then W is Cl in (0, R) x (zo - R, zo + R), is continuous up to r = 0, and satisfies Wz -I- iW,. = 0. Thus W is an analytic function of z + it for (r, z) E (0, R) x (zo - R, zo + R), continuous at r = 0, and has a vanishing real part there. Hence we can extend W as an analytic function of z + it to (r, z) E (-R, R) x (zO - R, zo + R). Hence -irh(z), the imaginary part of W(0, z), is real analytic for z E (zO - R, zo + R). Now we consider the general case. Set x=x-xo, j=y-yo, z=z-2yox+2xoy, and = u(x, y, z) Then u(x, y, z) is C1 in a neighborhood of (0,0, zo). A straightforward calculation yields ux + guy - 22(x + i )uz = h'(z). We now apply the special case we have just proved to u. 0 In the following, we let h = h(z) be areal-valued periodic smooth function in I[8 which is not real analytic at any z E R. We take a sequence of points Pk _ (xk, yk, zk) E I[83 which is dense in ll83 and set Pry = 2 (IxkI + ykI), 7.3. Nonexistence of Smooth Solutions and 2-ke_. ck = We also denote by £°° the collection of bounded infinite sequences 'r = (a1, a2, ) of real numbers a2. This is a Banach space with respect to the norm fHieoo = suP lakl k For any r = (al, a2, fT(x,y,z) _ we set in R. akckh'(z - 2ykx + 2xky) k=1 We note that fT depends on r linearly. This fact will be needed later on. Lemma 7.3.3. Let fT be defined as in (7.3.2) for some r E 2°°. Then fT E C°° (R3). Moreover, for any a e 7G+, (-s-) lpas Proof. We need to prove that all formal derivatives of fT converge uniformly in II83. Set Mk = sup IzER Then Mk <00 since h is periodic. Hence for any a e 7G+ with a = m, I 2-k IftIIooMm+ipe-p < 2-kIITIIeooMm+l (rn)rn e In the last inequality, we used the fact that the function f(r) = r"te-"' in [0, oo) has a maximum mT"e-T" at r = m. This implies the uniform convergence of the series for 8"fT We introduce a Holder space which will be needed in the next result. Let µ E (0, 1) be a constant and St C ][81 be a domain. We define C"`(St) as the collection of functions u e Cl (St) with Vu(x) - Vu(y) < for any x, y e SZ, where C is a positive constant. We define the C1"-norm in St by IuIc1i(cl) = sup u +sup IVul + S2 sup x,yESt,x#y Vu(x) - Vu(y) I x - yl µ We will need the following important compactness property. 7. First-Order Differential Systems Lemma 7.3.4. Let 1 2 be a domain in I[8n, and µ E (0,1) and M > 0 Suppose {uk} is a sequence of functions in C"`(12) with IIukIIc1(c) <_ M for any k. Then there exist a function u e C"(1) and be constants. a subsequence {uk' } such that uk' -+ u in C' (1) for any bounded subset 1 with Sty C St and I<M. Proof. We note that a uniform bound of C'-norms of uk implies that uk and their first derivatives are equibounded and equicontinuous in 1. Hence, U the desired result follows easily from Arzela's We point out that the limit is a C"`-function, although the convergence is only in C1. Next, we set Bk,m = B 1 (Pk) . We fix a constant µ E (0, 1). Definition 7.3.5. For positive integers m and k, we denote by Ek,m the collection of T E £°O such that there exists a solution u E C1'(Bk,m) of Lu = f-,- in Bk,m, u(Pk) = 0, IUIC1)IL(Bkm) < m, where fT is the function defined in (7.3.2). We have the following result concerning Ek,m. Lemma 7.3.6. For any positive integers k and m, ek,m is closed and nowhere dense in £°°. We recall that a subset is nowhere dense if it has no interior points. Proof. We first prove that Ek,m is closed. Take any Ti, T2, E ek,m and T E £°O such that -TIIP°O = 0. By Lemma 7.3.3, we have p IfT3 - fT l C IITj - T I I2°° SUB I/ilL For each j, let u3 E C1'(Bk,m) be as in Definition 7.3.5 for in Bk,m, u3(Pk) = 0 and IUjIc1)1(Bkm) < m. i.e., Lug = fT.9 7.3. Nonexistence of Smooth Solutions By Lemma 7.3.4, there exist a u E C1 '(Bk,m) and a subsequence {u3i} such that u3' converges uniformly to u together with its first derivatives in any compact subset of Bk,m. Then, Lu = fT in Bk,m, u(Pk) = 0 and k1IC1tL(Bk,m) < m. Hence r E ek,m. This shows that ek,m is closed. Next, we prove that ek,m has no interior points. To do this, we first denote by 1 E £°° the bounded sequence all of whose elements are zero, except the lath element, which is given by lick. By (7.3.2), we have f_ h' (z - 2ykx + 2xk y) . By Lemma 7.3.2, there exist no C1-solutions of Lu = f in any neighborhood of Pk. For any r E ek,m, we claim that T + Eli for any E. We will prove this by contradiction. Suppose r + Eli E ek,m for some E. Set T = T + Eli and let u and u be solutions of Lu = fT and Lu = fT, respectively, as in Definition 7.3.5. Set v = (u - u) /E. Then v is a C1'-solution of Lv = fin Bk,m. This leads to a contradiction, for ICI can be arbitrarily small. D Now we are ready to prove Theorem 7.3.1. Proof of Theorem 7.3.1. Let p E (0, 1) be the constant as in the definition of ek,m. We will prove that for some r E £°O, the equation Lu = fT admits no C1"-solutions in any domain S1 C R3. If not, then for every T E £°O there exist an open set SZT C R3 and a u E C1 " (11T) such that Lu = fT SZT . By the density of {Pk} in R3, there exists a Pk E SZT for some k > 1. Then Bk,m C i- for all sufficiently large m. Next, we may assume u(Pk) = 0. Otherwise, we replace u by u - u(Pk). Then, for m sufficiently large, we have IUIcl,IL(Bk) < m. This implies r E ek,m. Hence °° _ k,m=1 Therefore, the Banach space £°O is a union of a countable set of closed 0 nowhere dense subsets. This contradicts the Baire category theorem. 7. First-Order Differential Systems 7.4. Exercises Exercise 7.1. Classify the following 4th-order equation in R3: 23u + 233u + E4u - 2332u+3u = f. Exercise 7.2. Prove Lemma 7.2.7 and Lemma 7.2.8. Exercise 7.3. Consider the initial-value problem uu-u-u=0 inRx(0,oo), u(x, 0) = x, ut(x, 0) _ -x. Find a solution as a power series expansion about the origin and identify this solution. Exercise 7.4. Let A be an N x N diagonal C1-matrix on Il8 x (0, T) and f : I[8 x (O, T) x I[8N Il8N be a CZ-function. Consider the initial-value problem for u : Il8 x (O, T) -+ RN of the form ut + A(x, t)u = f(x, t, u) in Il8 x (0, T), with U(,0)0 onR. Under appropriate conditions on f, prove that the above initial-value problem admits a Cl-solution by using the contraction mapping principle. Hint: It may be helpful to write it as a system of equations instead of using a matrix form. Exercise 7.5. Set D = {(x, t) x > 0, t > 0} C ][82 and let a be Cl, b2 be continuous in D, and cp, '/' be continuous in [0, oo) with cp(O) _ Suppose (u, v) E Cl (D) fl C(D) is a solution of the problem : ut +au +b11u+b,2v = 1 v + b12u + b22v = g, u(x, 0) = cp(x) for x >0 and v(0, t) _ fort >0. (1) Assume a(0, t) < 0 for any t > 0. Derive an energy estimate for (u, v) in an appropriate domain in D. (2) Assume a(0, t) < 0 for any t> 0. For any T> 0, derive an estimate for sup[o,T] I')I in terms of sup-norms of f, g, cp and 'r/'. (3) Discuss whether similar estimates can be derived if a(0, t) is positive for some t> 0. 7.4. Exercises Exercise 7.6. Let a, bz be analytic in a neighborhood of 0 E ][82 and cp, b be analytic in a neighborhood of 0 E R. In a neighborhood of the origin in ][82 = {(x,t)}, consider f, v + b12u + b22v = g, with the condition u(x, 0) = cp(x) and v(0, t) _ (1) Let (u, v) be a smooth solution in a neighborhood of the origin. Prove that all derivatives of u and v at 0 can be expressed in terms f, g, cp and 'i/j' at 0. of those of a, (2) Prove that there exists an analytic solution (u, v) in a neighborhood of 0Ell82. Chapter 8 In the final chapter of this book, we present a list of differential equations we expect to study in more advanced PDE courses. Discussions in this chapter will be brief. We mention several function spaces, including Sobolev spaces and Holder spaces, without rigorously defining them. In Section 8.1, we talk about several basic linear differential equations of the second order, including elliptic, parabolic and hyperbolic equations, and linear symmetric hyperbolic differential systems of the first order. These equations appear frequently in many applications. We introduce the appropriate boundary-value problems and initial-value problems and discuss the correct function spaces to study these problems. In Section 8.2, we discuss more specialized differential equations. We introduce several important nonlinear equations and focus on the background of these equations. Discussions in this section are extremely brief. 8.1. Basic Linear Differential Equations In this section, we discuss several important linear differential equations. We will focus on elliptic, parabolic and hyperbolic differential equations of the second order and symmetric hyperbolic differential systems of the first order. 8.1.1. Linear Elliptic Differential Equations. Let SZ be a domain in Rn and a3,, bi and c be continuous functions in 1. Linear elliptic differential equations of the second order are given in the form n (8.1.1) i,j=1 biu-I- cu = f in SZ, i=1 279 8. Epilogue where the az satisfy n for any x E St and E ][8n, for some positive constant A. The equation (8.1.1) reduces to the Poisson equation if aZ3 = SZ and bi = c = 0. In many cases, it is advantageous to write (8.1.1) in the form n (au) x + (8.1.2) i,j=1 biux2 + cu = f in SZ, by renaming the coefficients bi. The equation (8.1.2) is said to be in the divergence form. For comparison, the equation (8.1.1) is said to be in the nondivergence form. Naturally associated with the elliptic differential equations are boundaryvalue problems. There are several important classes of boundary-value prob- lems. In the Dirichlet problem, the values of solutions are prescribed on the boundary, while in the Neumann problem, the normal derivatives of solutions are prescribed. In solving boundary-value problems for elliptic differential equations, we work in Holder spaces C' and Sobolev spaces Wk,P. Here, k is a nonnegative integer, p> 1 and a e (0, 1) are constants. For elliptic equations in the divergence form, it is advantageous to work in Sobolev spaces H' = due to their Hilbert space structure. W',2 8.1.2. Linear Parabolic Differential Equations. We denote by (x, t) points in Rn x R. Let D be a domain in Rn x Ilk and aij, bi and c be continuous functions in D. Linear parabolic differential equations of the second order are given in the form n ut - biux2 + cu = f aij uxzx + i,j=1 in D, where the aij satisfy n aij (x, t)ij > for any (x,t) E D and for some positive constant A. The equation (8.1.3) reduces to the heat equation if az3 = b2j and bi = c = 0. Naturally associated with the parabolic differential equations are initialvalue problems and initial/boundary-value problems. In initial-value problems, D = ][8n x (0, oo) and the values of solutions are prescribed on IEBn x {0}. In initial/boundary-value problems, D has the form SZ x (0, oo), where SZ is 8.1. Basic Linear Differential Equations a bounded domain in appropriate boundary values are prescribed on aSZ x (0, oo) and the values of solutions are prescribed on SZ x {0}. Many results for elliptic equations have their counterparts for parabolic equations. 8.1.3. Linear Hyperbolic Differential Equations. We denote by (x, t) points in W x TI. Let D be a domain in W x IIS and a3, bi and c be continuous functions in D. Linear hyperbolic differential equations of the second order are given in the form n utt - where the biuxi + cu = f in D, aijuxZxj + i=1 satisfy n aij (x, t) i j I I2 for any (x,t) ED and e E Ilgn, for some positive constant A. The equation (8.1.4) reduces to the wave equation if = Si j and bi = c = 0. Naturally associated with the hyperbolic differential equations are initialvalue problems. We note that {t = 0} is a noncharacteristic hypersurface for (8.1.4). In initial-value problems, D = W x (0, oo) and the values of solutions together with their first t-derivatives are prescribed on IIST x {0}. Solutions can be proved to exist in Sobolev spaces under appropriate assumptions. Energy estimates play fundamental roles in hyperbolic differential equations. 8.1.4. Linear Symmetric Hyperbolic Differential Systems. We denote by (x, t) points in W x TI. Let N be a positive integer, A0, A1, and B be continuous N x N matrices and f be continuous N-vector in W x TI. We consider a first-order linear differential system in W x IIS of the form n Aout + Akuxk + Bu = f. We always assume that Ao(x, t) is nonsigular for any (x, t), i.e., det(Ao(x, t)) Hence, the hypersurface {t = 0} is noncharacteristic. Naturally associated with (8.1.5) are initial-value problems. If N = 1, the system (8.1.5) is reduced to a differential equation for a scalar-valued function u, and the initial-value problem for (8.1.5) can be solved by the method of characteristics. For N > 1, extra conditions are needed. 8. Epilogue The differential system (8.1.5) is symmetric hyperbolic at (x, t) if Ao (x, t), Al (x, t), , A(x, t) are symmetric and Ao(x, t) is positive definite. It is symmetric hyperbolic in ][8n x ][8 if it is symmetric hyperbolic at every point in RTh X R. For N> 1, the symmetry plays an essential role in solving initial-value problems for (8.1.5). Symmetric hyperbolic differential systems in general dimensions behave like single differential equations of a similar form. We can derive energy estimates and then prove the existence of solutions of the initial-value problems for (8.1.5) in appropriate Sobolev spaces. We need to point out that hyperbolic differential equations of the second order can be transformed to symmetric hyperbolic differential systems of the first order. 8.2. Examples of Nonlinear Differential Equations In this section, we introduce some nonlinear differential equations and sys- tems and discuss briefly their background. The aim of this section is to illustrate the diversity of nonlinear partial differential equations. We have no intention of including here all important nonlinear PDEs of mathematics and physics. 8.2.1. Nonlinear Differential Equations. We first introduce some important nonlinear differential equations. The Hamilton-.Iacobi equation is a first-order nonlinear PDE for a function u = u(x, t), u+H(Du,x) =0. This equation is derived from Hamiltonian mechanics by treating u as the generating function for a canonical transformation of the classical Hamiltonian H = H(p, x). The Hamilton-Jacobi equation is important in identifying conserved quantities for mechanical systems. Apart of its characteristic ODE is given by x2 = Hpi(p x) pi = -Hxi (p, x) This is referred to as Hamilton's ODE, which arises in the classical calculus of variations and in mechanics. In continuum physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves. In mathematics, a scalar conservation law is a first-order nonlinear PDE Ut + (F(u)) =0. 8.2. Examples of Nonlinear Differential Equations Here, F is a given function in R and u = u(x, t) is the unknown function in R x R. It reduces to the inviscid Burgers' equation if F(u) = u2/2. In general, global smooth solutions do not exist for initial-value problems. Even for smooth initial values, solutions may develop discontinuities, which are referred to as shocks. Minimal surfaces are defined as surfaces with zero mean curvature. The minimal surface equation is a second-order PDE for u = u(x) of the form 1+ IThis is a quasilinear elliptic differential equation. Let St be a domain in IE8Th. For any function u defined in S2, the area of the graph of u is given by A(u) 1 +IVuI2dx. The minimal surface equation is the Euler-Lagrange equation of the area functional A. A Monge-Ampere equation is a nonlinear second-order PDE for a function u = u(x) of the form det(VZU) = f(x), where f is a given function defined in ][8n. This is an elliptic equation if u is strictly convex. Monge-Ampere equations arise naturally from many problems in Riemannian geometry and conformal geometry. One of the simplest of these problems is the problem of prescribed Gauss curvature. Suppose that S2 is a bounded domain in ][8Th and that K is a function defined in St. In the problem of prescribed Gauss curvature, we seek a hypersurface of IE8n+1 as a graph y = u(x) over x E S2 so that at each point (x, w(x)) of the surface, the Gauss curvature is given by K(x). The resulting partial differential equation is det(V2u) = K(x)(1 + DuI2)P. Scalar reaction-diffusion equations are second-order semilinear parabolic differential equations of the form ut - aLu = f(u), where u = u(x, t) represents the concentration of a substance, a is the diffusion coefficient and f accounts for all local reactions. They model changes of the concentration of substances under the influence of two processes: local chemical reactions, in which the substances are transformed into each other, and diffusion, which causes the substances to spread out in space. 8. Epilogue have a wide range of applications in chemistry as well as biology, ecology and physics. In quantum mechanics, the Schrodinger equation describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics. The Schrodinger equation takes several different forms, depending on physical situations. For a single particle, the Schrodinger equation takes the form i26t = -/.u -I- Vu, where u = u(x, t) is the probability amplitude for the particle to be found at position x at time t, and V is the potential energy. We allow u to be complex-valued. In forming this equation, we rescale position and time so that the Planck constant and the mass of the particle are absent. The nonlinear Schrodinger equation has the form jut = -026 + IcIuI2u, where ,c is a constant. The Korteweg-de cries equation (KdV equation for short) is a mathematical model of waves on shallow water surfaces. The KdV equation is a nonlinear, dispersive PDE for a function u = u(x, t) of two real variables, space x and time t, in the form It admits solutions of the form v(x - ct), which represent waves traveling to the right at speed c. These are called soliton solutions. 8.2.2. Nonlinear Differential Systems. Next, we introduce some nonlinear differential systems. In fluid dynamics, the Euler equations govern inviscid flow. They are usually written in the conservation form to emphasize the conservation of mass, momentum and energy. The Euler equations are a system of firstorder PDEs given by pt + V. (pu) = 0, (pu)t + V (u (pu)) + Op = 0, (pE)t + V (u(pE + p)) =0, where p is the fluid mass density, u is the fluid velocity vector, p is the pressure and E is the energy per unit volume. We assume Z IuI2, 8.2. Examples of Nonlinear Differential Equations where e is the internal energy per unit mass and the second term corresponds to the kinetic energy per unit mass. When the flow is incompressible, If the flow is further assumed to be homogeneous, the density p is constant and does not change with respect to space. The Euler equations for incompressible flow have the form ut + u Vu = - Vp, In forming these equations, we take the density p to be 1 and neglect the equation for E. The Navier-Stokes equations describe the motion of incompressible and homogeneous fluid substances when viscosity is present. These equations arise from applying Newton's second law to fluid motion under appropri- ate assumptions on the fluid stress. With the same notation for the Euler equations, the Navier-Stokes equations have the form where v is the viscosity constant. We note that (incompressible) Euler equations correspond to the (incompressible) Navier-Stokes equations with zero viscosity. It is a Millennium Prize Problem to prove the existence and smoothness of solutions of the initial-value problem for Navier-Stokes equations. In differential geometry, a geometric low is the gradient flow associated with a functional on a manifold which has a geometric interpretation, usually associated with some extrinsic or intrinsic curvature. A geometric flow is also called a geometric evolution equation. The mean curvature flow is a geometric flow of hypersurfaces in Euclidean space or, more generally, in a Riemannian manifold. In mean curvature flows, a family of surfaces evolves with the velocity at each point on the surface given by the mean curvature of the surface. For closed hypersurfaces in Euclidean space Rn+1, the mean curvature flow is the geometric evolution equation of the form Ft = Hz', where F(t) : M ][8n+1 is an embedding with an inner normal vector field v and the mean curvature H. We can rewrite this equation as Ft = O9(t)F 8. Epilogue where g (t) is the induced metric of the evolving hypersurface F(t). When expressed in an appropriate coordinate system, the mean curvature flow forms a second-order nonlinear parabolic system of PDEs for the components of F. The Ricci flow is an intrinsic geometric flow in differential geometry which deforms the metric of a Riemannian manifold. For any metric g on a Riemannian manifold M, we denote by Ric its Ricci curvature tensor. The Ricci flow is the geometric evolution equation of the form atg = -2Ric. Here we view the metric tensor and its associated Ricci tensor as functions of a variable x e M and an extra variable t, which is interpreted as time. In local coordinate systems, the components Rj3 of the Ricci curvature tensor can be expressed in terms of the components g2j of the metric tensor g and their derivatives up to order 2. When expressed in an appropriate coordinate system, the Ricci flow forms a second-order quasilinear parabolic system of PDEs for gig. The Ricci flow plays an essential role in the solution of the Poincare conjecture, a Millennium Prize Problem. In general relativity, the Einstein field equations describe how the curvature of spacetime is related to the matter/energy content of the universe. They are given by G = T, where G is the Einstein tensor of a Lorentzian manifold (M, g), or spacetime, and T is the stress-energy tensor. The Einstein tensor is defined by 1 G= Ric - - Sg, 2 where Ric is the Ricci curvature tensor and S is the scalar curvature of (M, g). While the Einstein tensor is a type of curvature, and as such relates to gravity, the stress-energy tensor contains all the information concerning the matter fields. Thus, the Einstein field equations exhibit how matter acts as a source for gravity. When expressed in an appropriate gauge (coordinate system), the Einstein field equations form a second-order quasilinear hyperbolic system of PDEs for components g2j of the metric tensor g. In general, the stress-energy tensor T depends on the metric g and its first derivatives. If T is zero, then the Einstein field equations are referred to as the Einstein vacuum field equations, and are equivalent to the vanishing of the Ricci curvature. Yang-Mills theory, also known as non-Abelian gauge theory, was formulated by Yang and Mills in 1954 in an effort to extend the original concept of gauge theory for an Abelian group to the case of a non-Abelian group and has great impact on physics. It explains the electromagnetic and the strong 8.2. Examples of Nonlinear Differential Equations and weak nuclear interactions. It also succeeds in studying the topology of smooth 4-manifolds in mathematics. Let M be a Riemannian manifold and P a principal G-bundle over M, where G is a compact Lie group, referred to as the gauge group. Let A be a connection on P and F be its curvature. Then the Yang-Mills functional is defined by Fl2dV9. fM The Yang-Mills equations are the Euler-Lagrange equations for this functional and can be written as dAF = 0, where dA is the adjoint of dA, the gauge-covariant extension of the exterior derivative. We point out that F also satisfies dAF=O. This is the Bianchi identity, which follows from the exterior differentiation of F. In general, Yang-Mills equations are nonlinear. It is a Millennium Prize Problem to prove that a nontrivial Yang-Mills theory exists on R and has a positive mass gap for any compact simple gauge group G. 8.2.3. Variational Problems. Last, we introduce some variational problems with elliptic characters. As we know, harmonic functions in an arbitrary domain 1 C IiSn can be regarded as minimizers or critical points of the Dirichlet energy IVuI2 dx. This is probably the simplest variational problem. There are several ways to generalize such a problem. We may take a function F : W - IIS and consider F(Vu) dx. It is the Dirichlet energy if F(p) _ p12 for any p E W. When F(p) _ the integral above is the area of the hypersurface of the graph y = u(x) in ][fin x lit. This corresponds to the minimal surface equation we Vi + have introduced earlier. Another generalization is to consider the Dirichlet energy, IVuI2dx, IiSm with an extra requirement for vector-valued functions u : c c 1 n that the image u(1) lies in a given submnifold of IIS'n. For example, we may take this submanifold to be the unit sphere in IISm. Minimizers of such a variational problem are called minimizing harmonic maps. In general, 8. Epilogue minimizing harmonic maps are not smooth. They are smooth away from a subset E, referred to as a singular set. The study of singular sets and behavior of minimizing harmonic maps near singular sets constitutes an important subject. One more way to generalize is to consider the Dirichlet energy, IVuI2 dx, for scalar-valued functions u : 1 2 C ][8n - ][8 with an extra requirement that u > in St for a given function 'i/j'. This is the simplest obstacle problem or free boundary problem, where /i is an obstacle. Let u be a minimizes and set A = {x E St; u(x) > fi(x)}. It can be proved that u is harmonic in A. The set 8A in S2 is called the free boundary. It is important to study the regularity of free boundaries. Alinhac, S., Hyperbolic Partial Differential Equations, Springer, 2009. Carlson, J., Jaffe, A., Wiles, A. (Editors), The Millennium Prize Problems, Clay Math. Institute, 2006. [3] Chen, Y.-Z., Wu, L.-C., Second Order Elliptic Equations and Elliptic Systems, Amer. Math. Soc., 1998. [4] Courant, R., Hilbert, D., Methods of Mathematical Physics, Vol. II, Interscience Publishers, 1962. [5] DiBenedetto, E., Partial Differential Equations, Birkhauser, 1995. [6] Evans, L., Partial Differential Equations, Amer. Math. Soc., 1998. [7] Folland, G., Introduction to Partial Differential Equations, Princeton University Press, 1976. [8] Friedman, A., Partial Differential Equations, Holt, Rinehart, Winston, 1969. [9] Friedman, A., Partial Differential Equations of Parabolic Type, Prentice-Hall, 1964. [10] Garabedian, P., Partial Differential Equations, Wiley, 1964. [11] Gilbarg, D., Trudinger, N., Elliptic Partial Differential Equations of Second Order (2nd ed.), Springer, 1983. [12] Han, Q., Lin, F.-H., Elliptic Partial Differential Equations, Amer. Math. Soc., 2000. [13] Hormander, L., Lectures on Nonlinear Hyperbolic Differential Equation, Springer, 1996. [14] Hormander, L., The Analysis of Linear Partial Differential Operators, Vols. 1-4, Springer, 1983-85. [15] John, F., Partial Differential Equations (4th ed.), Springer, 1991. [16] Lax, P., Hyperbolic Partial Differential Equations, Amer. Math. Soc., 2006. [17] Lieberman, G. M., Second Order Parabolic Partial Differential Equations, World Scientific, 1996. [18] MacRobert, T. M., Spherical Harmonics, An Elementary Treatise on Harmonic Functions with Applications, Pergamon Press, 1967. [19] Protter, M., Weinberger, H., Maximum Principles in Differential Equations, PrenticeHall, 1967. 289 [20] Rauch, J., Partial Differential Equations, Springer, 1992. [21] Schoen, R., Yau, S.-T., Lectures on Differential Geometry, International Press, 1994. [22] Shatah, J., Struwe M., Geometric Wave Equations, Amer. Math. Soc., 1998. [23] Smoller, J., Shock Waves and Reaction-Diffusion Equations, Springer, 1983. [24] Strauss, W., Partial Differential Equations: An Introduction, Wiley, 1992. [25] Taylor, M., Partial Differential Equations, Vols. I-III, Springer, 1996. a priori estimates, 4 adjoint differential operators, 39, 268 analytic functions, 105, 261 auxiliary functions, 121 Bernstein method, 121 Burgers' equation, 22 Cauchy problems, 11, 48, 251, 256 Cauchy values, 11, 48, 251, 256 Cauchy-Kovalevskaya theorem, 263 characteristic cones, 57 characteristic curves, 14, 50, 253 characteristic hypersurfaces, 13, 14, 16, 50, 253, 256 noncharacteristic hypersurfaces, 13, 14, 16, 50, 253, 256 characteristic ODEs, 19, 21, 26 characteristic triangle, 202 compact supports, 41 comparison principles, 114, 119, 177 compatibility conditions, 25, 79, 83, 207, 210 conservation laws, 24, 282 conservation of energies, 64, 237 convergence of series, 105, 260 absolute convergence, 260 convolutions, 150 d'Alembert's formula, 204 decay estimates, 230 degenerate differential equations, 51 diameters, 60 differential Harnack inequalities heat equations, 191 Laplace equations, 109, 122 Dirichlet energy, 142 Dirichlet problems, 58, 93, 111 Green's function, 94 domains, 1 domains of dependence, 19, 35, 204, 220 doubling condition, 145 Duhamel's principle, 235 eigenvalue problems, 75, 85 Einstein field equations, 286 elliptic differential equations, 51, 254, 279 energy estimates first-order PDEs, 37 heat equations, 62 wave equations, 63, 238, 241 Euclidean norms, 1 Euler equations, 284 Euler-Poisson-Darboux equation, 214 exterior sphere condition, 132 finite-speed propagation, 35, 221 first-order linear differential systems, 281 first-order linear PDEs, 11 initial-value problems, 31 first-order quasilinear PDEs, 14 Fourier series, 76 Fourier transforms, 148 inverse Fourier transforms, 153 frequency, 145 291 fundamental solutions heat equations, 157, 159 Laplace equations, 91 Holmgren uniqueness theorem, 268 Hopf lemma, 116, 183 hyperbolic differential equations, 51, 58, Goursat problem, 246 gradient estimates interior gradient estimates, 101, 108, hypersurfaces, 2 121, 168, 189 gradients, 2 Green's formula, 92 Green's function, 81, 94 Green's function in balls, 96 Green's identity, 92 half-space problems, 207 Hamilton-Jacobi equation, 282 harmonic functions, 52, 90 conjugate harmonic functions, 52 converegence of Taylor series, 105 differential Harnack inequalities, 109, 122 doubling condition, 145 frequency, 145 Harnack inequalities, 109, 124 interior gradient estimates, 101, 108, 121 Liouville theorem, 109 mean-value properties, 106 removable singularity, 125 subharmonic functions, 113, 126 superharmonic functions, 126 harmonic lifting, 128 Harnack inequalities, 109, 124, 192, 197 differential Harnack inequalities, 109, 122, 191, 196 heat equations n dimensions, 56 1 dimension, 53 analyticity of solutions, 171 differential Harnack inequalities, 191, 192, 196 fundamental solutions, 157, 159 Harnack inequalities, 197 initial/boundary-value problems, 62, 75 interior gradient estimates, 168, 189 maximum principles, 176 strong maximum principles, 181 subsolutions, 176 supersolutions, 176 weak maximum principles, 176 Hessian matrices, 2 infinite-speed propagation, 179 initial hypersurfaces, 11, 48, 251, 256 initial values, 11, 48, 251, 256 initial-value problems, 251, 256 first-order PDEs, 11, 16 second-order PDEs, 48 wave equations, 202, 213, 233 initial/boundary-value problems heat equations, 62, 75 wave equations, 63, 82, 210 integral curves, 18 integral solutions, 24 integration by parts, 5 interior sphere condition, 117 KdV equations, 284 Laplace equations, 52, 55 fundamental solutions, 91 Green's identity, 92 maximum principles, 112 Poisson integral formula, 100 Poisson kernel, 98 strong maximum principles, 117 weak maximum principles, 113 linear differential systems mth-order, 255 first-order, 281 linear PDEs, 3 mth-order, 250 first-order, 11 second-order, 48 Liouville theorem, 109 loss of differentiations, 222 majorants, 262 maximum principles, 111 strong maximum principles, 111, 117, 181 weak maximum principles, 112, 113, 176 mean curvature flows, 285 mean-value properties, 106 method of characteristics, 19 method of descent, 218 method of reflections, 208, 211 method of spherical averages, 213 minimal surface equations, 283 minimizing harmonic maps, 288 mixed problems, 62 Monge-Ampere equations, 283 multi-indices, 2 Navier-Stokes equations, 285 Neumann problems, 59 Newtonian potential, 133 noncharacteristic curves, 14, 50, 253 noncharacteristic hypersurfaces, 13, 14, 16, 50, 253, 256 nonhomogeneous terms, 11, 48, 251, 256 normal derivatives, 251 parabolic boundaries, 175 parabolic differential equations, 58, 280 Parseval formula, 153 partial differential equations (PDEs), 3 elliptic PDEs, 51 hyperbolic PDEs, 58 linear PDEs, 3 mixed type, 54 parabolic PDEs, 58 quasilinear PDEs, 3 partial differential systems, 256 Perron's method, 126 Plancherel's theorem, 154 Poincare lemma, 60 Poisson equations, 55, 133 weak solutions, 139 Poisson integral formula, 75, 100 Poisson kernel, 75, 98 principal parts, 250, 255 principal symbols, 48, 250, 255 propagation of singularities, 54 quasilinear PDEs, 3 first-order, 14 radiation field, 248 range of influence, 19, 35, 204, 220 reaction-diffusion equations, 283 removable singularity, 125 Ricci flows, 286 Schrodinger equations, 284 Schwartz class, 148 second-order linear PDEs, 48 in the plane, 51 elliptic PDEs, 51, 279 hyperbolic PDEs, 58, 281 parabolic PDEs, 58, 280 separation of variables, 67 shocks, 24 Sobolev spaces, 139, 140, 142 space variables, 1 space-like surfaces, 243 subharmonic functions, 113, 126 subsolutions, 113 heat equation, 176 subharmonic functions, 113 superharmonic functions, 126 supersolutions, 113 heat equation, 176 superharmonic functions, 113 symmetric hyperbolic differential systems, 282 Taylor series, 105, 261 terminal-value problems, 165 test functions, 24 time variables, 1 time-like surfaces, 243 Tricomi equation, 54 uniform ellipticity, 114 wave equations n dimensions, 57, 213, 233 1 dimension, 53, 202 2 dimensions, 218 3 dimensions, 215 decay estimates, 230 energy estimates, 237 half-space problems, 207 initial-value problems, 202, 213, 233 initial/boundary-value problems, 63, 82, 210 radiation field, 248 weak derivatives, 138, 142 weak solutions, 40, 139, 245 Weierstrass approximation theorem, 270 well-posed problems, 4 Yang-Mills equations, 287 Yang-Mills functionals, 287 This is a textbook for an introductory graduate course on partial differential equations. Han focuses on 'Linear equations of first and second order. An important feature of his treatment is that the majority of the techniques are applicable more generally. In particular, Han emphasizes a priori estimates throughout the text, even for those equations that can be solved explicitly. Such estimates are indispensable tools for proving the existence and uniqueness of solutions to PDEs, being especially important for nonlinear equations.The estimates are also crucial to establishing properties of the solutions, such as the continuous dependence on parameters. Han's book is suitable for students interested in the mathematical theory of partial. differential equations, either as an overview of the subject or as an introduction leading to further study. ISBN 978-0-8218-5255-2 For additional information and updates on this book, visit www.ams.org/bookpages/gsm-120 9 780821 1852552 GSM/ 120 AMS on the Web
{"url":"https://doku.pub/documents/graduate-studies-in-mathematics-120-qing-han-a-basic-course-in-partial-differential-equations-american-mathematical-society-2011pdf-k0pvwrd29101","timestamp":"2024-11-05T23:51:41Z","content_type":"text/html","content_length":"577698","record_id":"<urn:uuid:d9aed551-6e01-4e90-8cbb-67a301e3ac5c>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00661.warc.gz"}
Clay Content - Water Repellency - 0 - 10 cm Scatter Plot Graphs A scatter diagram visually displays the relationship between the two chosen soil quality indicators. A positive upward relationship occurs when data points with higher values on the horizontal (x-axis) correspond to higher values on the vertical (y-axis). A negative downward relationship occurs when data points with higher values on the horizontal (x-axis) correspond to lower values on the vertical (y-axis). If the data points are scattered with no obvious trend then there is no simple relationship between the two soil quality indicators.
{"url":"https://soilquality.org.au/au/nsw/nsw-sh/relate/soil-clay/water-repellency-0-10","timestamp":"2024-11-12T13:34:12Z","content_type":"text/html","content_length":"11207","record_id":"<urn:uuid:20c9fd41-8847-4472-9083-fe4910cfb32f>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00380.warc.gz"}
1057 - A Game with Marbles There are n bowls, numbered from 1 to n. Initially, bowl i contains mi marbles. One game step consists of removing one marble from a bowl. When removing a marble from bowl i (i > 1), one marble is added to each of the first i-1 bowls; if a marble is removed from bowl 1, no new marble is added. The game is finished after each bowl is empty. Your job is to determine how many game steps are needed to finish the game. You may assume that the supply of marbles is sufficient, and each bowl is large enough, so that each possible game step can be executed. The input contains several test cases. Each test case consists of one line containing one integer n (1 ≤ n ≤ 50), the number of bowls in the game. The following line contains n integers mi (1 ≤ i ≤ n, 0 ≤ mi ≤ 1000), where mi gives the number of marbles in bowl i at the beginning of the game. The last test case is followed by a line containing 0. For each test case, print one line with the number of game steps needed to finish the game. You may assume that this number fits into a signed 64-bit integer (in C/C++ you can use the data type "long long", in JAVA the data type "long"). sample input sample output
{"url":"http://hustoj.org/problem/1057","timestamp":"2024-11-13T15:54:33Z","content_type":"text/html","content_length":"8388","record_id":"<urn:uuid:233b0762-8bf6-4d01-a940-94704bb14c26>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00484.warc.gz"}
Digital Math Resources Display Title Video Tutorial: Integers, Video 9 Video Tutorial: Integers: Integers and Exponents This is part of a series of videos on the topic of Integers. This includes defining integers, modeling integers, integer operations, and integer expressions. To see the complete collection of these videos on integers, click on this link. The following section includes background information on integers (and also rational numbers). Refer to this section as you view the videos, or as review material afterward. What Are Integers? In arithmetic you learned that whole numbers include zero and the counting numbers from 1 to infinity. Whole numbers don’t include: • Fractional values • Decimal values Integers include the whole numbers, zero, and positive numbers 1, 2, 3, etc., but also includes a different class of numbers, negative numbers. Representing Integers You can use a number line to represent the integers. Notice that every integer and its opposite is the same distance from 0 on the number line. Also, the arrow heads on the number line mean that the integers extend to infinity. Representing distance on a number line is shown below. Distance is always a positive number, so use the absolute value symbol to ensure the result is positive. Subtract one value from another and find the absolute value of the difference. Every integer has its opposite. Notice on the number line that 1 and -1 are opposites, as are 2 and -2, and so on. Every integer and its opposite is the same distance from zero on the number line, as shown below. As you can see both 4 and -4 are four units from zero. This same pattern applies to all integers and their opposites. Also, the sum of any integer and its opposite is zero. 1 + (-1) = 0 2 + (-2) = 0 This pattern continues for all integers and their opposites. The technique of finding the absolute value of a difference applies to any pair of integers, as shown below. Integers can also be represented using algebra tiles, as shown below. Comparing and Ordering Integers You can use a number line to compare and order numbers. In going from left to right on the number line, numbers increase in value. In going from right to left on the number line, numbers decrease in value. See the example below. Knowing this property of integers on a number line, suppose there are four integers that satisfy these inequalities: C < D < B < A We can graph these integers on a number line as shown below. Adding and Subtracting Integers When adding and subtracting integers, you need to keep track of the sign of each number. There are several examples to consider. Example 1: Adding Two Positive Integers 1 + 2 + 3 This is similar to adding whole numbers. The result is positive. Example 2: Adding Two Negative Integers -2 + (-3) = -5 Adding two negative integers results in a negative integer. Example 3: Adding a Positive Integer and a Negative Integer 5 + (-2) = 5 - 2 = 3 Adding integers with opposite signs can be rewritten as whole number subtraction. The result can be positive, negative, or zero. Example 4: Subtracting Two Positive Integers 2 - 5 = -3 This is somewhat similar to whole number subtraction. The result can be positive, negative or zero. Example 5: Subtracting Two Negative Integers -5 - (-7) = -5 + 7 = 2 When subtracting by a negative, the negative number changes to a positive number. The result can be positive, negative, or zero. Example 1: Subtracting Positive and Negative Integers -7 - 2 = -9 When subtracting a positive, treat it the same as adding a negative. When subtracting a negative, the operation changes to addition. Multiplying Integers When multiplying integers, you need to keep track of the sign of each number. There are several cases to consider. Case 1: Multiplying Two Positive Integers 2 • 4 = 8 This is similar to multiplying whole numbers. The result is positive. Case 2: Multiplying a Positive Integer and a Negative Integer -2 • 4 = -8 This is somewhat similar to multiplying whole numbers. The result is negative. Case 3: Multiplying Two Negative Integers -2 • (-4) = 8 When multiplying two negatives, the product is positive. Dividing Integers By definition a rational number is the ratio of two integers. This is the same as dividing by two integers. Integer division is the definition of a rational number. Consider two integers a and b. A rational number is defined as follows: a ÷ b With both rational numbers and integer division, b cannot equal zero. The rules for integer division, in terms of the sign of the quotient, are similar to those for multiplication. This is part of a collection of video tutorials on the topic of Integers. To see the complete collection of the video tutorials on this topic, click on this link. Note: The download is an MP4 video file. Related Resources To see additional resources on this topic, click on the Related Resources tab. Video Library To see the complete collection of math videos, click on this link. Closed Captioned Video Library This video is available in closed captioned format. To see the complete collection of captioned videos, click on this link. Video Transcripts This video has a transcript available. To see the complete collection of video transcripts, click on this link. Common Core CCSS.MATH.CONTENT.6.NS.C.5, CCSS.MATH.CONTENT.6.NS.C.6.A, CCSS.MATH.CONTENT.6.NS.C.6.C, CCSS.MATH.CONTENT.7.NS.A.1.A, CCSS.MATH.CONTENT.7.NS.A.1.B, CCSS.MATH.CONTENT.7.NS.A.2.B, Standards CCSS.MATH.CONTENT.6.EE.A.1 Duration 4.00 minutes Grade Range 6 - 8 Curriculum Nodes • The Language of Math • Numerical Expressions Copyright Year 2017 Keywords integers, integers video tutorials, integer exponents, video tutorial
{"url":"https://www.media4math.com/library/video-tutorial-integers-video-9","timestamp":"2024-11-12T09:01:33Z","content_type":"text/html","content_length":"61908","record_id":"<urn:uuid:9d63adb4-6172-4c79-913b-6b315580d2d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00291.warc.gz"}
S&P 500 Elliott Wave Technical Analysis – 26th February, 2015 Upwards movement was expected, but price moved lower for the session to complete a red candlestick. The invalidation point was not breached. Summary: I expect more upwards movement to a target at 2,133. If it ends in another five sessions minor wave 5 may total a Fibonacci 21. Click charts to enlarge. Bullish Wave Count Upwards movement from the low at 666.79 subdivides as an incomplete 5-3-5. For the bull wave count this is seen as primary waves 1-2-3. The aqua blue trend lines are traditional technical analysis trend lines. These lines are long held (the lower one has its first anchor in November, 2011), repeatedly tested, and shallow enough to be highly technically significant. When the lower of these double trend lines is breached by a close of 3% or more of market value that should indicate a trend change. It does not indicate what degree the trend change should be though. It looks like the last four corrections may have ended about the lower aqua blue trend line, which gives the wave count a typical look. To see a weekly chart where I have drawn these trend lines go here. I have pulled the upper trend line down a little to touch the low of minute wave a within minor wave 4. This may be a better position for recent movement. The wave count looks at intermediate wave (5) as an ending contracting diagonal. Ending diagonals require all sub waves to be zigzags. So far this is a perfect fit. Minor wave 3 has stronger momentum than minor wave 5 on the daily chart. The diagonal is contracting. The only problem with this possibility is that minor waves 2 and 4 are more shallow than second and fourth waves within diagonals normally are. In this case they may have been forced to be more shallow by support offered from the double aqua blue trend lines. Because the third wave within the contracting diagonal is shorter than the first wave and a third wave may never be the shortest wave, this limits the final fifth wave to no longer than equality with the third wave at 2,253.79. Within intermediate wave (5) minor wave 1 lasted 238 days (5 days longer than a Fibonacci 233), minor wave 2 lasted 18 days (2 short of a Fibonacci 21), minor wave 3 lasted 51 days (4 short of a Fibonacci 55) and minor wave 4 lasted 23 days (2 longer than a Fibonacci 21). While none of these durations are perfect Fibonacci numbers, they are all reasonably close. So far minor wave 5 has lasted 16 days and the structure is incomplete. If it continues for another five sessions it may total a Fibonacci 21 Within minor wave 5 minute wave b may not move beyond the start of minute wave a below 1,980.90. This invalidation point allows for the possibility that minute wave a is incomplete and minute wave b is yet to unfold, although this idea does not fit with momentum on the hourly chart. At 2,133 minute wave c would reach equality in length with minute wave a. Contracting diagonals normally have fifth waves which end with a slight overshoot of the 1-3 trend line. This is still my expectation. Downwards movement was micro wave 2 which unfolded as a deep zigzag. I have the end of micro wave 1 leading contracting diagonal higher today, and the trend lines are a perfect fit. I expect to see a small increase in upwards momentum as micro wave 3 unfolds tomorrow. At 2,133 micro wave 3 would reach 1.618 the length of micro wave 1. If micro wave 2 were to continue further sideways as a double combination then it may not move beyond the start of micro wave 1 below 2,103.20. Alternate Bull Wave Count This wave count is an alternate because it does not fit well with momentum at either the daily or the hourly chart levels. Within intermediate wave (5) minor wave 3 has weaker momentum than minor waves 1 and 5. This is opposite to how it should behave. However, at the weekly chart level minor wave 3 has stronger momentum than minor wave 5 so this could still fit. At 2,191 primary wave 3 would reach 1.618 the length of primary wave 1. This would expect that within minor wave 5 minute wave iii will be shorter than minute wave i, and minute wave v will be shorter still, which would be a repeat of the pattern seen within minor wave 1. Or the target is wrong. At 2,140 minute wave iii would reach 0.618 the length of minute wave i. Draw the channel for this idea using Elliott’s first technique. Minuette wave (v) may end about the upper edge of this trend line. Micro wave 2 may not move beyond the start of micro wave 1 below 2,103.20. At this stage the wave counts do not diverge. Bear Wave Count The subdivisions within primary waves A-B-C are seen in absolutely exactly the same way as primary waves 1-2-3 for the bull wave count. The alternate bull wave count idea also works perfectly for this bear wave count. To see the difference at the monthly chart level between the bull and bear ideas look at the last historical analysis here. At cycle degree wave b is over the maximum common length of 138% the length of cycle wave a, at 167% the length of cycle wave a. At 2,393 cycle wave b would be twice the length of cycle wave a and at that point this bear wave count should be discarded. While we have no confirmation of this wave count we should assume the trend remains the same, upwards. This wave count requires confirmation before I have confidence in it. This analysis is published about 09:41 p.m. EST.
{"url":"https://elliottwavestockmarket.com/2015/02/26/sp-500-elliott-wave-technical-analysis-26th-february-2015/","timestamp":"2024-11-11T17:30:56Z","content_type":"text/html","content_length":"44024","record_id":"<urn:uuid:50193694-30fa-4c8b-966e-7b6d932893c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00217.warc.gz"}
THD estimation: FFT vs Periodogram 6 years ago ●7 replies● latest reply 6 years ago 668 views Hi all. I think that maybe this question is too simple, but some explanation could give me a huge help. I am working in an simple method review, witch aims to comply all methodology needed to acquire, filter and handle voltage data to estimate the THD (Total Harmonic Distortion). Skipping the first acquisition process, assuming it was done in a right sampling frequency with enough ADC resolution, the next step is to apply some windowing to comply with periodicity requirements. Right after that, a zero padding is necessary to center the bean frequencies of the fft in the desirable frequencies. After that, a simple fft is done in matlab environment. Taking the fft results, I normalize the amplitude by the length of non-zero (non-zero-padding) data. At this point, I have the abs of fft, with gives me the exactly voltage amplitude (of a simulated signal) in frequency spectrum. Knowing the amplitude allows me to calculate the THD. So, after all this, I refer to Matlab way of calculation. I see that Matlab uses a periodogram with a Kayser windown to estimate THD. Knowing that my results are far better than Matlab THD function (at simulation), what is the difference to estimate data value over frequency spectrum using fft and using a periodogram? [ - ] Reply by ●June 27, 2018 Maybe I'm missing something. I "thought" that a periodogram "was equivalent to" an FFT for "properly sampled data". So, periodogram should not enter into the discussion (I assume Matlab does it right). Sometimes there's a scaling issue for FFTs (there was/is for the DC component in MathCad). So if yours consistently differs from Matlab's result by a constant, then this is the issue, assuming you are using the EXACT SAME DATA SET FOR BOTH ANALYSES. Also, when you say "far better than" you mean your THD is lower, right? Again, check multiple examples and see if there's just a scaling issue. So are you also using a Kaiser window on your data? If not, that would be a obvious difference, as windowing affects the amplitude and anything other than a square window (i.e. raw sampled data) results in frequency spread and its attendant amplitude reduction. There are a few definitions of THD. First of all, THD is best done on a single frequency (or set of frequencies, one at a time). But this does not capture IM (intermodulation) distortion due to nonlinear operation. The THD I am familiar with is the same was as Wikipedia explains it, except for their use of the term "RMS Voltage". In the past, I used the basic peak voltage as it comes out of an FFT (anyway, for basic sinusoids, that should not make a difference as the scaling cancels out, but only for basic sinusoids, not composite signals). If your sampled data has a lot of noise, or you are striving for super accurate THD (which is really hard in the real world because of noise for a single sample set), then you really need to use THD+N, unless you have LOTS of THD measurements so that the effects of "zero mean noise" (i.e. Gaussian) cancels out, since a Gaussian distribution is identical in the time-domain and frequency domains as the number of samples approaches infinity (or at least that what I recall from my research of several decades ago). The hardest part of giving advice is not just the giver's knowledge, but also their interpretation of the problem and whether or not that includes all relevant details. Implemented "correctly", implementations of any algorithm should always produce identical results, except for minor calculation noise. [ - ] Reply by ●June 27, 2018 Maybe I'm missing something. I "thought" that a periodogram "was equivalent to" an FFT for "properly sampled data". So, periodogram should not enter into the discussion (I assume Matlab does it right). Sometimes there's a scaling issue for FFTs (there was/is for the DC component in MathCad). So if yours consistently differs from Matlab's result by a constant, then this is the issue, assuming you are using the EXACT SAME DATA SET FOR BOTH ANALYSES. Also, when you say "far better than" you mean your THD is lower, right? Again, check multiple examples and see if there's just a scaling issue. When I said "far better than", I say that I have a calculated result (after all sampling and processing) much more closer with the real expected THD value. I only know the real THD value because I am generating the signal to this process, in the Matlab enviromment. So are you also using a Kaiser window on your data? If not, that would be a obvious difference, as windowing affects the amplitude and anything other than a square window (i.e. raw sampled data) results in frequency spread and its attendant amplitude reduction. Yes, I've used the same window (with the same characterization, provided in Matlab THD function description). There are a few definitions of THD. First of all, THD is best done on a single frequency (or set of frequencies, one at a time). But this does not capture IM (intermodulation) distortion due to nonlinear operation. Ok. What I am doing is to estimate the harmonic frequencies of a 60Hz signal, and calculating the total distortion of harmonics, relative to fundamental frequency. I also have inserted some other frequencies to test the robustness of the method. This other frequencies are located between harmonics. The THD I am familiar with is the same was as Wikipedia explains it, except for their use of the term "RMS Voltage". In the past, I used the basic peak voltage as it comes out of an FFT (anyway, for basic sinusoids, that should not make a difference as the scaling cancels out, but only for basic sinusoids, not composite signals). If your sampled data has a lot of noise, or you are striving for super accurate THD (which is really hard in the real world because of noise for a single sample set), then you really need to use THD+N, unless you have LOTS of THD measurements so that the effects of "zero mean noise" (i.e. Gaussian) cancels out, since a Gaussian distribution is identical in the time-domain and frequency domains as the number of samples approaches infinity (or at least that what I recall from my research of several decades ago). As I've explained above, there is only other frequencies, no noise at all. And, by sure, that frequencies are sampled according to nyquist rate, avoiding aliasing. Implemented "correctly", implementations of any algorithm should always produce identical results, except for minor calculation noise. I agree with that. I am using the same data set for both methods, trying to calculate all with same windows and functions, but the fft based method still giving me a different result than THD There is any reason (statstically justified) to use an periodogram instead an fft for that kind of analysis? As you say too, wikipedia explains THD in therms of RMS voltage.. I really don't know how Matlab handle this. Thanks for your time. [ - ] Reply by ●June 27, 2018 I do not use Matlab much so I may be wrong, but my first guess would be: Does Matlab mesaure THD+N instead of THD? That could make a huge difference depending on N... [ - ] Reply by ●June 27, 2018 I've explained above. Take a lookif you can, there is a better explanation of my problem now. [ - ] Reply by ●June 27, 2018 As I've explained above, there is only other frequencies, no noise at all. And, by sure, that frequencies are sampled according to nyquist rate, avoiding aliasing These other frequencies will increase the THD+N value in comparison to the THD value. THD is sum of harmonics over the energy of the fundamental. THD+N is the the energy of everything except the fundamental over the energy of the fundamental. Everything except the fundamental includes non-harmonic frequencies. [ - ] Reply by ●June 27, 2018 Ok, I understand. But Matlab THD, also my implementation, only consider harmonics. Both methods are selective in frequency domain. [ - ] Reply by ●June 27, 2018 Hi all, I've found the difference between my methodology and Matlab implementation. Matlab uses a modified periodogram, which is quite different from a periodogram (which is really basically the same estimation as FFT). Thanks for the answers and the help!
{"url":"https://www.dsprelated.com/thread/6218/thd-estimation-fft-vs-periodogram","timestamp":"2024-11-12T15:40:49Z","content_type":"text/html","content_length":"46727","record_id":"<urn:uuid:e4a12fbf-47e1-42d0-8661-ec7c6ebfea1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00893.warc.gz"}
4 Essential Tips to Teach Fractions of a Set When it comes to teaching fractions, we often visualize them using a circle divided into equal-sized pieces like a pizza – a relatable image for most students! However, it’s equally important to help students understand fractions of a set, rather than just parts of a whole, especially as they move into the upper elementary grades. Getting Started: What are fractions of a set? Fractions of a set involve a group of objects that can be divided into equal parts, and we can name the value of one or more of the groups using a fraction. For instance, in this image of connecting cubes, ⅓ of the cubes are pink, which means that ⅓ of 12 cubes is 4. The 12 represents the total number of cubes, the 4 represents the pink cubes, and the ⅓ represents one of the 3 equal groups we get from dividing the 12 cubes with each group containing 4 cubes. Today I’m going to share my top 4 tips for teaching fractions of a set to help students master this concept. Let’s get to it! 1. Build Conceptual Understanding of Fractions of a Set My first tip is to emphasize conceptual understanding over procedures. As with all math topics, when students grasp what is really happening, the procedures they learn later make much more sense and they will not have to rely on memorization. Students officially begin to interact with fractions of a set beginning in third grade, but really, kids have been encountering these kinds of problems since they started sharing toys as young kids. Imagine this scenario: Three kids are playing with 12 toy cars and decide that they should share them. They take the 12 toy cars and divide them equally between the three friends (let’s just assume everyone is in the mood to share!). Little do they know that they just solved the problem ⅓ of 12! Emphasizing conceptual understanding not only supports later work with procedures but also makes the concepts meaningful for students. 2. Provide Relatable Contexts for Fractions of a Set Speaking of making concepts meaningful…Tip number two is to use relatable contexts that mirror real-life situations. Without context, fractions of a set can seem disconnected from real life and like an impractical math skill. But in real life? Fractions of a set are everywhere! Let’s take a look. There is a certain time when kids are always encountering fractions of a set: when they are sharing! At first glance, it may seem that these contexts are introducing division and, well, you’re right! Fractions and division are intertwined but students do not need to truly understand fractions as division until fifth grade. What if rather than providing students with an image on a worksheet and asking students to find ⅓ of the picture, we gave students word problems and asked students to draw the model (we’ll talk about models shortly)? After trying this in my own classroom, I was AMAZED at the difference I saw in student understanding. Don’t worry, I’ve got you covered! Here are a few examples of word problems you can use right away: • My family was sharing French fries for dinner. We had a bag of 40 French fries and each person got ¼ of the fries. How many fries did each person get? • I have 10 pairs of pants in my dresser. ⅕ of them are leggings. How many leggings are in the drawer? • ½ of the books I read last year were realistic fiction books. I read 36 books. How many of the books I read were realistic fiction? • The deck of cards has 33 cards. If each player gets ⅓ of the cards, how many cards does each player get? • My brother was sharing his toy cars with his cousins. There were 12 toy cars and each boy got 4 cars. What fraction of the cars did each boy get? You know that old saying “A picture is worth a thousand words?”. Let me tell you, this is SO true when it comes to teaching fractions of a set. Grab an image of a set of objects (any image will do!). You could easily wander around your home or classroom and take photos of groups of objects. Below is an image of some counters. I can use this image as a warm-up to the day’s math lesson asking students, “How many do you see?” Some students will immediately count the entire set. Others might count how many of each color there are. Still, others might make a connection to fractions and name a fraction to describe how many. Even if students do not make the connection to fractions themselves, we can simply ask them to name the fraction of blue counters, for example. One simple image can lead to a very rich discussion! 3. Utilize Models of Fractions of a Set Models – manipulatives, pictures, and drawings – are some of your best tools to represent what is happening when we determine the fraction of a set. Provide manipulatives – or any objects really! – to model problems. Why? Physical objects are easy to move and rearrange. This tactile component provides the opportunity for increased student engagement. On the other hand, when students use a picture of objects or even their own drawing, it can lead to lots of erasing and restarting, which can frustrate our learners. So what might using models look like? Let’s say we’ve asked students to solve ⅓ of 12. First, students would take out 12 objects such as counters. Then, they would have to separate the 12 counters so that they have 3 equal groups. Using the counters allows students to attempt equal groups by easily moving them around and checking to see if their attempt was correct. If this was a paper-based task, a student might draw 12 stars, and then circle a set of stars to try to make three equal groups. However, if their first attempt doesn’t work, they’ll have to erase and start over – ugh! Once students have solved a problem using manipulatives, they can then show their thinking on paper by drawing what they did with the counters. Making the connection between manipulatives and drawings will support their ability to flexibly move back and forth between them. Eventually, students will be able to use just pictures to solve some problems. That doesn’t mean they should never use manipulatives again though! Students should be allowed to move between the two depending on the problem, putting the ownership onto the student to decide when a particular approach makes the most sense. 4. Investigate Patterns within Fractions of a Set Fractions of a set are related to fraction multiplication but students don’t need to multiply fractions by whole numbers until 4th and 5th grade. So where does this leave us? Instead of teaching students those procedures early, we can continue to strengthen conceptual understanding by investigating patterns. Take a look at these problems: ½ of 16 is 8 ⅓ of 12 is 4 ¼ of 20 is 5 ⅕ of 10 is 2 I love this investigation as a warm-up activity. Begin by asking students to share what they notice and what they wonder to get them thinking about the patterns in these problems. They may notice that each fraction is a unit fraction or that the answers are all smaller than the other whole numbers in the problem. To deepen the discussion, I might ask students how the digits in the fraction and the whole number relate to the answer. Even third graders can recognize that 16 divided by 2 is 8 and 12 divided by 3 is 4. Noticing these patterns can help lay a foundation for the more complex work in 5th grade. Teaching fractions of a set doesn’t have to be a worksheet overload and, truthfully, it’s so much better when it’s not! These four tips have helped my own students to master fractions of a set and I hope you find success with them too. Do you have a favorite tip for teaching this concept? Let us know! Looking for even more ideas about fractions? Check out all of our fraction articles here!
{"url":"https://jillianstarrteaching.com/fractions-of-a-set/","timestamp":"2024-11-11T18:09:46Z","content_type":"text/html","content_length":"141732","record_id":"<urn:uuid:e64689cd-e1b3-4dc9-bb3b-9d6029d31268>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00341.warc.gz"}
Geometry | aboveandbeyond top of page Click on the grade level buttons above to take you to a master list of grade-level resources. Note that a few buttons on each page are not yet active. We are working as fast as we can! Click for... Module 1: Congruence, Proof, and Constructions 45-ish days G-CO.1, G-CO.2, G-CO.3, G-CO.4, G-CO.5, G-CO.6, G-CO.7, G-CO.8, G-CO.9, G-CO.10, G-CO.11, G.CO.12, G-CO.13 In previous grades, students were asked to draw triangles based on given measurements. They also have prior experience with rigid motions—translations, reflections, and rotations—and have strategically applied a rigid motion to informally show that two triangles are congruent. In this module, students establish triangle congruence criteria, based on analyses of rigid motions and formal constructions. They build upon this familiar foundation of triangle congruence to develop formal proof techniques. Students make conjectures and construct viable arguments to prove theorems— using a variety of formats—and solve problems about triangles, quadrilaterals, and other polygons. They construct figures by manipulating appropriate geometric tools (compass, ruler, protractor, etc.) and justify why their written instructions produce the desired figure. Click for... TEACHER PLANNER under construction Module 2: Similarity, Proof, and Trigonometry 44-ish days G-SRT.1, G-SRT.2, G-SRT.3, G-SRT.4, G-SRT.5, G-SRT.6, G-SRT.7, G-SRT.8, G-MG.1, G-MG.2, G-MG.3 Students apply their earlier experience with dilations and proportional reasoning to build a formal understanding of similarity. They identify criteria for similarity of triangles, make sense of and persevere in solving similarity problems, and apply similarity to right triangles to prove the Pythagorean Theorem. Students attend to precision in showing that trigonometric ratios are well defined, and apply trigonometric ratios to find missing measures of general (not necessarily right) triangles. Students model and make sense out of indirect measurement problems and geometry problems that involve ratios or rates. Click for... Module 3: Extending to Three Dimensions 10-ish days G-GMD.1, G-GMD.3, G-GMD.4, G-MG.1, Students’ experience with two-dimensional and three-dimensional objects is extended to include informal explanations of circumference, area and volume formulas. Additionally, students apply their knowledge of two-dimensional shapes to consider the shapes of cross-sections and the result of rotating a two-dimensional object about a line. They reason abstractly and quantitatively to model problems using volume formulas. Click for... TEACHER PLANNER under construction Module 4: Connecting Algebra and Geometry through Coordinates 25-ish days G-GPE.4, G-GPE.5, G-GPE.6, G-GPE.7 Building on their work with the Pythagorean Theorem in 8th grade to find distances, students analyze geometric relationships in the context of a rectangular coordinate system, including properties of special triangles and quadrilaterals and slopes of parallel and perpendicular lines, relating back to work done in the first module. Students attend to precision as they connect the geometric and algebraic definitions of parabola. They solve design problems by representing figures in the coordinate plane, and in doing so, they leverage their knowledge from synthetic geometry by combining it with the solving power of algebra inherent in analytic geometry. Click for... Module 5: Circles with and without Coordinates 25-ish days G-C.1, G-C.2, G-C.3, G-C.5, G-GPE.1, G-GPE.4, G-MG.1 In this module, students prove and apply basic theorems about circles, such as: a tangent line is perpendicular to a radius theorem, the inscribed angle theorem, and theorems about chords, secants, and tangents dealing with segment lengths and angle measures. They study relationships among segments on chords, secants, and tangents as an application of similarity. In the Cartesian coordinate system, students explain the correspondence between the definition of a circle and the equation of a circle written in terms of the distance formula, its radius, and coordinates of its center. Given an equation of a circle, they draw the graph in the coordinate plane and apply techniques for solving quadratic equations. Students visualize, with the aid of appropriate software tools, changes to a three-dimensional model by exploring the consequences of varying parameters in the model. bottom of page
{"url":"https://www.aboveandbeyondthecore.com/geometry","timestamp":"2024-11-04T05:19:51Z","content_type":"text/html","content_length":"506787","record_id":"<urn:uuid:1821f805-8b43-4087-a094-db3d11e1eefa>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00838.warc.gz"}
nLab Pin(3) Group Theory Classical groups Finite groups Group schemes Topological groups Lie groups Super-Lie groups Higher groups Cohomology and Extensions Related concepts Spin geometry The pin group in dimension 3. $Pin_+(3) \simeq SO(3) \times C_4$ and $Pin_-(3) \simeq SU(2) \times C_2$. Created on August 22, 2019 at 13:51:40. See the history of this page for a list of all contributions to it.
{"url":"https://ncatlab.org/nlab/show/Pin%283%29","timestamp":"2024-11-02T15:04:11Z","content_type":"application/xhtml+xml","content_length":"38211","record_id":"<urn:uuid:a6855b02-ffa6-4110-a192-af8396ceb6de>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00773.warc.gz"}
Module 0 Day 16 Your Turn Part 2 This is in response to your question about the Day 16: Your Turn Explanation. I'm glad you asked, because the explanation is very interesting and a little abstract. When we discussed the last digit of a number, and a square, we were really talking about the remainder that we get after dividing that number by \(10.\) Now we are extending that idea to the case where instead of dividing by \(10,\) we are dividing by \(4.\) Let's look back to my comment in the Day 16: Challenge Discussions. In the same way that before we only cared about the last digit squared, and ignored the \(100 \times \text{ something} \) and \( 10 \times \text{ something } \) terms, here we only care about the remainder after dividing by \(4,\) so we can ignore the multiples of \(4,\) multiples of \(4^2,\) multiples of \(4^3,\) etc. We want to look at square numbers and the remainders that we get after dividing these squares by \(4.\) The clever trick is to look at just the remainders after dividing by \(4\): $$ (4 \times \text{ something } + R)^2 = 16 \times \text{ something } + 4 \times \text{ something } + R^2 $$ Something to be careful here is that the \(R \) isn't any single-digit number. It's only \(0, 1, 2 \text{ or } 3,\) because once we get \(R = 4,\) then it gets incorporated into the \(4 \times \text{ something } \) term instead. So since \(R\) can only be \(0, 1, 2, \text{ or } 3,\) that means \(R^2\) can only be \(0^2, 1^2, 2^2\) or \(3^2.\) These numbers are \(0, 1, 4\) or \(9.\) Since we care only about the remainder after dividing by \(4,\) let's remove \(4\) from the last two numbers. This means that \(R^2\) can be \(0, 1, 0,\) or \(1.\) Since we found that squares are equal to \(4 \times \text{ something } + R^2,\) and the first term is divisible by \(4,\) then the remainder of \(R^2\) (after dividing by \(4\)) is the remainder of the square number (after dividing by \(4\)). This means that squares either have remainders of \(0\) or \(1\) after dividing by \(4.\) The question asks us to add together two squares. The possibilities for the remainder of this sum (after dividing by 4) are: $$0 + 0 \\\\ 0 + 1 \\\\ 1 + 1 \\\\ $$ which give us \(0, 1 \) or \(2.\) It's impossible to get \(3.\) The question asks us for the impossible remainder, so the answer is \( \boxed{3.}\) I hope this helped! I'm really happy to answer any questions you have about the course content, no matter how big or small. I hope that you found this course beneficial to you and wish to learn more with us! Happy Learning, The Daily Challenge Team
{"url":"https://forum.poshenloh.com/category/139/module-0-day-16-your-turn-part-2","timestamp":"2024-11-12T11:44:49Z","content_type":"text/html","content_length":"47005","record_id":"<urn:uuid:2df54e82-5c75-478f-b527-5ca0e51b880c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00643.warc.gz"}
Effective Frequency: Do we teach Slope too much? One of the hardest high school math topics I teach in Algebra 2 is slope. I know what you're thinking-- slope is not an Algebra 2 topic. This is exactly what makes it so hard. I have this theory, and you may disagree with me. My theory is that slope is so hard to teach because the word is so familiar. My students may see it first in 7th grade, then again in 8th grade, again in 9th grade, then again as a review in 10th grade before the state test. By the time they get to me in 11th grade, eyes are glaze over at the mere mention of the word. But very few have really mastered the topic. So why is retention so low? My belief is that slope is either introduced too early before students are ready, and/or too little time is spent on the topic the first time it is introduced. Years ago when doing research on negative numbers for my thesis, I came across an eye-opening article from William Schmidt, Richard Houang, and Leland Cogan titled A Coherent Curriculum, The Case of Mathematics. In this article, the number of topics covered per year in the United States is compared to the number of topics covered per year in those A+ countries that the US is always pointing to with a, "Our math scores need to be more like theirs." SIDE NOTE: I wish we would celebrate all the good things about our students rather than focusing on what makes them less than kids in other countries. But that's for another day. Above is a screenshot of the graphic included in the article that shows the progression of topics in A+ countries in grades 1 through 8. Only a few topics are covered each year, allowing teachers and students to get deep into these topics. And here is the graphic from the United States. To be blunt, we're all over the place. Many topics are taught every single year, at least in the grades shown. How may times have you said to yourself, "I wish I had more time to spend on this topic because my kids ALMOST get it"? I know I have said it a lot. Do you think student understanding and retention would improve year-to-year if we were allowed this time when first introducing topics? 4 comments: 1. I have been saying for years that I think 8th grade is way too early for students to learn slope. In fact, it is introduced quickly in 7th grade. For most students, it is just too abstract for most of the students. Even with bringing it to them with concrete learning it is still hard for them. 1. 7th grade? I hadn't realized that! Wow, yes, in my opinion that is way too early. 2. AnonymousJuly 18, 2018 I am previewing our Glencoe Math, Course 2 (California Edition) textbook which is used in the 7th grade, and there are about 8 measly pages devoted to slope. 1. It's so crazy. An then the kids come to me in 11th grade not knowing how to find slope. They turn off as soon as they hear "slope" because they are so familiar with the term from so many years of it being skimmed. Thank you for your comment. I hope things change!
{"url":"https://www.scaffoldedmath.com/2017/07/slope-effective-frequency.html","timestamp":"2024-11-06T07:30:17Z","content_type":"application/xhtml+xml","content_length":"87931","record_id":"<urn:uuid:b916baf5-8171-401a-b439-584ade76dabc>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00090.warc.gz"}
3.2 — Stackelberg Competition — Practice Return to the example from lessons 2.2 and 2.3: Firm 1 and Firm 2 have a constant \(MC=AC=8\). The market (inverse) demand curve is given by: \[\begin{aligned} P&=200-2Q\\ Q&=q_1+q_2\\ \end{aligned}\] 1. Suppose Firm 1 is the Leader and Firm 2 is the follower. Find the Stackelberg Nash Equilibrium quantity for each firm. Hint, the Cournot reaction functions you found before were: \[\begin{aligned} q_1^\star & = 48 - 0.5 * q_2\\ q_2^\star & = 48 - 0.5 * q_1\\ \end{aligned}\] Substitute follower’s reaction function into market (inverse) demand function \[\begin{align*} P&=200-2q_{1}-2q_2 && \text{The inverse market demand function}\\ P&=200-2q_{1}-2(48-0.5q_{1}) && \text{Plugging in Firm 2's reaction function for} q_2\\ P&=200-2q_{1}-96+1q_{1} && \ text{Multiplying by }-3\\ P&=104-q_{1} && \text{Simplifying the right}\\ \end{align*}\] • Find \(MR\) for Firm 1 from market demand \[\begin{align*} MR&=MC && \text{Profit-max condition}\\ 104-2q_{1}&=8 && \text{Plugging in}\\ 104&=8+2q_{1} && \text{Adding }2q_{1} \text{ to both sides}\\ 96&=2q_{1} && \text{Subtracting 20 from both sides}\\ 48&=q_{1}^* && \text{Dividing both sides by 2} \\ \end{align*}\] Firm 2 will respond: \[\begin{align*} q_2^*&=48-0.5q_{1}\\ q_2^*&=48-0.5(48)\\ q_2^*&=48-24\\ q_2^*&=24\\ \end{align*}\] 2. Find the market price. With \(q^*_{1}=48\) and \(q^*_2=24\), this sets a market price of \[\begin{align*} P&=200-2Q\\ P&=200-2(72)\\ P&=56\\ \end{align*}\] 3. Find the profit for each firm. Compare their profits under Stackelberg competition to their profits under Cournot competition (from lesson 2.2). Profit for Firm 1 is \[\begin{align*} \pi_{1}&=q_{1}(P-c)\\ \pi_{1}&=48(56-8)\\ \pi_{1}&=\$2,304\\ \end{align*}\] Profit for Firm 2 is \[\begin{align*} \pi_{2}&=q_{2}(P-c)\\ \pi_{2}&=24(56-8)\\ \pi_{2}&=\$1,152\\ \end{align*}\]
{"url":"https://ios23.classes.ryansafner.com/files/practice/3.2-practice-answers","timestamp":"2024-11-04T04:06:35Z","content_type":"text/html","content_length":"1049025","record_id":"<urn:uuid:3a9c4157-7354-48f8-bee5-be54b74e3f18>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00148.warc.gz"}
Trigonometry calculator Trigonometry is a branch of mathematics devoted to triangles, which allows you to find their unknown angles and faces from known values. For example, the angle along the length of the leg and hypotenuse, or the length of the hypotenuse according to the known angle and leg. There are unique functions for calculations in trigonometry: sine, cosine, tangent, cotangent, secant and cosecant. They are often used in related sciences and disciplines, for example, in astronomy, geodesy, and architecture. Trigonometry around us Trigonometry is included in the general education curriculum and is one of the fundamental sections of mathematics. Today, with its help, geographic coordinates are found, ship routes are laid, the trajectories of celestial bodies are calculated, programs and statistical reports are compiled. This mathematical section is most in demand: • in astronomy; • in geography; • in navigation; • in architecture; • in optics; • in acoustics; • in economics (for the analysis of financial markets); • in probability theory; • in biology and medicine; • in electronics and programming. Today even such seemingly abstract branches as pharmacology, cryptology, seismology, phonetics and crystallography cannot do without trigonometry. Trigonometric functions are used in computed tomography and ultrasound, to describe light and sound waves, in the construction of buildings and structures. History of trigonometry The first trigonometric tables were used in his writings by the ancient Greek scientist Hipparchus of Nicaea in 180-125 BC. Then they were purely applied in nature and were used only for astronomical calculations. There were no trigonometric functions (sine, cosine, and so on) in the tables of Hipparchus, but there was a division of the circle into 360 degrees and the measurement of its arcs using chords. For example, the modern sine was then known as "half a chord", to which a perpendicular was drawn from the center of the circle. In the year 100 AD, the ancient Greek mathematician Menelaus of Alexandria, in his three-volume "Sphere" (Sphaericorum), presented several theorems that today can be fully considered "trigonometric". The first described the congruence of two spherical triangles, the second the sum of their angles (which is always greater than 180 degrees), and the third the "six magnitudes" rule, better known as the Menelaus theorem. Roughly at the same time, from AD 90 to 160, the astronomer Claudius Ptolemy published the most significant trigonometric treatise of antiquity, Almagest, consisting of 13 books. The key to it was a theorem describing the ratio of diagonals and opposite sides of a convex quadrilateral inscribed in a circle. According to Ptolemy's theorem, the product of the second is always equal to the sum of the products of the first. Based on it, 4 difference formulas for sine and cosine were subsequently developed, as well as the half-angle formula α / 2. Indian Studies The "chordal" form of describing trigonometric functions, which arose in ancient Greece before our era, was common in Europe and Asia until the Middle Ages. And only in the 16th century in India they were replaced by the modern sine and cosine: with the Latin designations sin and cos, respectively. It was in India that the fundamental trigonometric ratios were developed: sin²α + cos²α = 1, sinα = cos(90° − α), sin(α + β) = sinα ⋅ cosβ + cosα ⋅ sinβ and others. The main purpose of trigonometry in medieval India was to find ultra-precise numbers, primarily for astronomical research. This can be judged from the scientific treatises of Bhaskara and Aryabhata, including the scientific work Surya Siddhanta. The Indian astronomer Nilakanta Somayaji for the first time in history decomposed the arctangent into an infinite power series, and subsequently the sine and cosine were decomposed into series. In Europe, the same results came only in the next, XVII century. The series for sin and cos were derived by Isaac Newton in 1666, and for the arc tangent in 1671 by Gottfried Wilhelm Leibniz. In the 18th century, scientists were engaged in trigonometric studies both in Europe and in the countries of the Near / Middle East. After Muslim scientific works were translated into Latin and English in the 19th century, they became the property of first European and then world science, made it possible to combine and systematize all knowledge related to trigonometry. Summing up, we can say that today trigonometry is an indispensable discipline not only for the natural sciences, but also for information technology. It has long ceased to be an applied branch of mathematics, and consists of several large subsections, including spherical trigonometry and goniometry. The first considers the properties of angles between great circles on a sphere, and the second deals with methods for measuring angles and the ratio of trigonometric functions to each other.
{"url":"https://trigonometry.zone/","timestamp":"2024-11-09T07:40:24Z","content_type":"text/html","content_length":"32392","record_id":"<urn:uuid:1e10b477-8a32-4623-a297-800b14ea7f6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00040.warc.gz"}
Interest Calculation Description of the calculation of Interest with examples The interest calculation is an extension of the percentage calculation. Interest is a percent value of one underlying or principal. Base of the percentage calculation is the formula \(\displaystyle \frac{Z}{K}=\frac{P}{100}\) • \(Z\) = Interest • \(K\) = Principal • \(P\) = Interest rate The formula can be changed for the value you are looking for Interest rate Calculate interest income This example calculates the interest earned on investing \(3000\) for one year at a fixed rate of \(3\%\). Given is the interest rate \(P = 3\) and the capital = \(3000\). We are looking for the interest income \(Z\). The interest income is calculated according to the formula Calculate interest rate This example calculates the interest rate, which is required to receive \(150$\) interest in one year, from a capital of \(3000$\). The capital \(K = 300\) and the interest income \(Z = 150\) are known. We are looking for the interest rate \(P\). Calculated according to the formula Calculate starting capital What amount must be invested in order to receive an interest income of \(200$\) at a rate of \(5%\)? This question should be solved in this task. he interest rate of \(P = 5\%\) and the interest income \(Z = 200$\) are known We are looking for starting capital \(K\). It is calculated according to the formula Calculate interest income daily For example, suppose you want to invest \(5000$\) for \(2\) months at an annual interest rate of \(5\%\). For this, the interest must be calculated on a daily basis. The formula for calculating the interest income is extended accordingly by the number of \(Tage = t\). For each month, 30 days, so 360 days for 1 year are assume. The capital \(K = 5000\), the interest rate \(P = 5\) and the number of days \(t = 60\) are known We are looking for interest income \(Z\). This is calculated
{"url":"https://www.redcrab-software.com/en/Tutorial/Algebra/Interest","timestamp":"2024-11-03T06:31:36Z","content_type":"text/html","content_length":"19115","record_id":"<urn:uuid:a5623784-e61f-4652-90e2-9a467ffc2a20>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00586.warc.gz"}
How does Ray:Distance() work? So i’m trying to make a suppression effect when a raycasted bullet comes near a player. -- position is the head position from the player firing the bullet local ray = Ray.new(position, direction * 100) local distance = ray:Distance(character.Head.Position) -- character is the player being suppressed In this video, the shooter is shooting at a wall while the raycast comes near the standing player. (minor sound warning) The number that :Distance() returns is printed in the output. It increases as the shooter walks away. Isn’t it supposed to return the distance between the closest point on the ray to the Vector3 given (in this case In this video the shooter is shooting into the far distance while walking backwards to the player. When the shooter’s back is turned to the other player, the number printed in output is what I assume to be the distance of the ray’s origin to the player’s head. But when they go on the other side of the standing player, the number changes drastically! Is the function bugged, or am I just using it wrong? While I haven’t used these functions myself when raycasting, I believe the function you want to use is Ray:ClosestPoint because that returns a vector3 that’s been projected onto the ray so that it’s within the ray’s line of sight. The wiki notes that the ray must be a unit ray for this function to work as intended. So I guess from this, I would use ClosestPoint and then check the distance between that point and the character’s head to determine suppression through subtraction and magnitude. Ray:Distance returns the distance between the ray’s origin and the closest point on the ray to the one you gave the function. 1 Like Your ray needs to be a unit ray for ClosestPoint to work correctly (Ray). Distance uses the same logic under the hood so I suspect the same is true for Distance. 3 Likes These functions of Ray are ones I never use, simply because when I’m creating a Ray for a raycast, it’s almost never a unit ray. So I’d need to make another ray, and refence the API docs each time to remember what exactly these do. For me, it’s much simpler to just do the projection with Vector3 I already have (and so do you, in the form of your direction Vector3). When you already have the Ray, based on some scalar multiple of your direction vector, you can get the distance from the Ray to the Head with just a vector projection: local headToRayDistance = (Ray.Origin + (head.Position - Ray.Origin):Dot(direction) * direction - head.Position).Magnitude The first part, (Ray.Origin + (head.Position - Ray.Origin):Dot(direction) * direction, is the location of the closest point on the ray to the head, found by projecting the ray.origin-to-head vector onto the ray direction vector, and then adding it to the origin. 1 Like Thanks everyone for helping! Wouldn’t there be no need to create a new ray? You could just reference the .Unit property of the ray. Yes, but .Unit isn’t really a property, it’s a function that returns a new Ray object, equivalent to doing Ray.new(ray.Origin, ray.Direction.Unit). And ray.Direction.Unit is also not a property 1 Like
{"url":"https://devforum.roblox.com/t/how-does-raydistance-work/689938","timestamp":"2024-11-08T09:29:27Z","content_type":"text/html","content_length":"36799","record_id":"<urn:uuid:40020735-ac3c-4b35-a759-3d7d455187ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00037.warc.gz"}
ISSN: To be updated soon | Frequency: Biannualy (2 Issue Per Year) | Nature: Online | Language of Publication: English | E-mail: journals@coreresearchfoundation.com Accepted Articles: The Journal accepts the following categories of Articles: Original research Position papers/review papers Short-papers (with well-defined ideas, but lacking research results or having preliminary results)Technology Discussion/Overview Papers Peer Review Process: All submitted papers are subjected to a comprehensive blind review process by at least 2 subject area experts, who judge the paper on its relevance, originality, clarity of presentation and significance. The review process is expected to take 8-12 weeks at the end of which the final review decision is communicated to the author. In case of rejection authors will get helpful comments to improve the paper for resubmission to other journals. The journal may accept revised papers as new papers which will go through a new review cycle.
{"url":"http://coreresearchfoundation.com/j/AJMS/index.php","timestamp":"2024-11-14T01:19:32Z","content_type":"text/html","content_length":"20198","record_id":"<urn:uuid:c6bcf6ca-ae14-4a98-8538-70c209e72291>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00711.warc.gz"}
QPoint Class The QPoint class defines a point in the plane using integer precision. More... Header: #include <QPoint> CMake: find_package(Qt6 REQUIRED COMPONENTS Core) target_link_libraries(mytarget PRIVATE Qt6::Core) qmake: QT += core Note: All functions in this class are reentrant. Public Functions QPoint(int xpos, int ypos) bool isNull() const int manhattanLength() const int & rx() int & ry() void setX(int x) void setY(int y) CGPoint toCGPoint() const QPointF toPointF() const QPoint transposed() const int x() const int y() const QPoint & operator*=(float factor) QPoint & operator*=(double factor) QPoint & operator*=(int factor) QPoint & operator+=(const QPoint &point) QPoint & operator-=(const QPoint &point) QPoint & operator/=(qreal divisor) Static Public Members int dotProduct(const QPoint &p1, const QPoint &p2) Related Non-Members bool operator!=(const QPoint &p1, const QPoint &p2) QPoint operator*(const QPoint &point, float factor) QPoint operator*(const QPoint &point, double factor) QPoint operator*(const QPoint &point, int factor) QPoint operator*(float factor, const QPoint &point) QPoint operator*(double factor, const QPoint &point) QPoint operator*(int factor, const QPoint &point) QPoint operator+(const QPoint &p1, const QPoint &p2) QPoint operator+(const QPoint &point) QPoint operator-(const QPoint &p1, const QPoint &p2) QPoint operator-(const QPoint &point) QPoint operator/(const QPoint &point, qreal divisor) QDataStream & operator<<(QDataStream &stream, const QPoint &point) bool operator==(const QPoint &p1, const QPoint &p2) QDataStream & operator>>(QDataStream &stream, QPoint &point) Detailed Description A point is specified by a x coordinate and an y coordinate which can be accessed using the x() and y() functions. The isNull() function returns true if both x and y are set to 0. The coordinates can be set (or altered) using the setX() and setY() functions, or alternatively the rx() and ry() functions which return references to the coordinates (allowing direct manipulation). Given a point p, the following statements are all equivalent: A QPoint object can also be used as a vector: Addition and subtraction are defined as for vectors (each component is added separately). A QPoint object can also be divided or multiplied by an int or a qreal. In addition, the QPoint class provides the manhattanLength() function which gives an inexpensive approximation of the length of the QPoint object interpreted as a vector. Finally, QPoint objects can be streamed as well as compared. See also QPointF and QPolygon. Member Function Documentation [constexpr] QPoint::QPoint() Constructs a null point, i.e. with coordinates (0, 0) See also isNull(). [constexpr] QPoint::QPoint(int xpos, int ypos) Constructs a point with the given coordinates (xpos, ypos). See also setX() and setY(). [static constexpr] int QPoint::dotProduct(const QPoint &p1, const QPoint &p2) QPoint p( 3, 7); QPoint q(-1, 4); int dotProduct = QPoint::dotProduct(p, q); // dotProduct becomes 25 Returns the dot product of p1 and p2. [constexpr] bool QPoint::isNull() const Returns true if both the x and y coordinates are set to 0, otherwise returns false. [constexpr] int QPoint::manhattanLength() const Returns the sum of the absolute values of x() and y(), traditionally known as the "Manhattan length" of the vector from the origin to the point. For example: QPoint oldPosition; MyWidget::mouseMoveEvent(QMouseEvent *event) QPoint point = event->pos() - oldPosition; if (point.manhattanLength() > 3) // the mouse has moved more than 3 pixels since the oldPosition This is a useful, and quick to calculate, approximation to the true length: double trueLength = std::sqrt(std::pow(x(), 2) + std::pow(y(), 2)); The tradition of "Manhattan length" arises because such distances apply to travelers who can only travel on a rectangular grid, like the streets of Manhattan. [constexpr] int &QPoint::rx() Returns a reference to the x coordinate of this point. Using a reference makes it possible to directly manipulate x. For example: QPoint p(1, 2); p.rx()--; // p becomes (0, 2) See also x() and setX(). [constexpr] int &QPoint::ry() Returns a reference to the y coordinate of this point. Using a reference makes it possible to directly manipulate y. For example: QPoint p(1, 2); p.ry()++; // p becomes (1, 3) See also y() and setY(). [constexpr] void QPoint::setX(int x) Sets the x coordinate of this point to the given x coordinate. See also x() and setY(). [constexpr] void QPoint::setY(int y) Sets the y coordinate of this point to the given y coordinate. See also y() and setX(). CGPoint QPoint::toCGPoint() const Creates a CGPoint from a QPoint. See also QPointF::fromCGPoint(). [constexpr, since 6.4] QPointF QPoint::toPointF() const Returns this point as a point with floating point accuracy. This function was introduced in Qt 6.4. See also QPointF::toPoint(). [constexpr] QPoint QPoint::transposed() const Returns a point with x and y coordinates exchanged: QPoint{1, 2}.transposed() // {2, 1} See also x(), y(), setX(), and setY(). [constexpr] int QPoint::x() const Returns the x coordinate of this point. See also setX() and rx(). [constexpr] int QPoint::y() const Returns the y coordinate of this point. See also setY() and ry(). [constexpr] QPoint &QPoint::operator*=(float factor) Multiplies this point's coordinates by the given factor, and returns a reference to this point. Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also operator/=(). [constexpr] QPoint &QPoint::operator*=(double factor) Multiplies this point's coordinates by the given factor, and returns a reference to this point. For example: QPoint p(-1, 4); p *= 2.5; // p becomes (-3, 10) Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also operator/=(). [constexpr] QPoint &QPoint::operator*=(int factor) Multiplies this point's coordinates by the given factor, and returns a reference to this point. See also operator/=(). [constexpr] QPoint &QPoint::operator+=(const QPoint &point) Adds the given point to this point and returns a reference to this point. For example: QPoint p( 3, 7); QPoint q(-1, 4); p += q; // p becomes (2, 11) See also operator-=(). [constexpr] QPoint &QPoint::operator-=(const QPoint &point) Subtracts the given point from this point and returns a reference to this point. For example: QPoint p( 3, 7); QPoint q(-1, 4); p -= q; // p becomes (4, 3) See also operator+=(). [constexpr] QPoint &QPoint::operator/=(qreal divisor) This is an overloaded function. Divides both x and y by the given divisor, and returns a reference to this point. For example: QPoint p(-3, 10); p /= 2.5; // p becomes (-1, 4) Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also operator*=(). Related Non-Members [constexpr] bool operator!=(const QPoint &p1, const QPoint &p2) Returns true if p1 and p2 are not equal; otherwise returns false. [constexpr] QPoint operator*(const QPoint &point, float factor) Returns a copy of the given point multiplied by the given factor. Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also QPoint::operator*=(). [constexpr] QPoint operator*(const QPoint &point, double factor) Returns a copy of the given point multiplied by the given factor. Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also QPoint::operator*=(). [constexpr] QPoint operator*(const QPoint &point, int factor) Returns a copy of the given point multiplied by the given factor. See also QPoint::operator*=(). [constexpr] QPoint operator*(float factor, const QPoint &point) This is an overloaded function. Returns a copy of the given point multiplied by the given factor. Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also QPoint::operator*=(). [constexpr] QPoint operator*(double factor, const QPoint &point) This is an overloaded function. Returns a copy of the given point multiplied by the given factor. Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also QPoint::operator*=(). [constexpr] QPoint operator*(int factor, const QPoint &point) This is an overloaded function. Returns a copy of the given point multiplied by the given factor. See also QPoint::operator*=(). [constexpr] QPoint operator+(const QPoint &p1, const QPoint &p2) Returns a QPoint object that is the sum of the given points, p1 and p2; each component is added separately. See also QPoint::operator+=(). [constexpr] QPoint operator+(const QPoint &point) Returns point unmodified. [constexpr] QPoint operator-(const QPoint &p1, const QPoint &p2) Returns a QPoint object that is formed by subtracting p2 from p1; each component is subtracted separately. See also QPoint::operator-=(). [constexpr] QPoint operator-(const QPoint &point) This is an overloaded function. Returns a QPoint object that is formed by changing the sign of both components of the given point. Equivalent to QPoint(0,0) - point. [constexpr] QPoint operator/(const QPoint &point, qreal divisor) Returns the QPoint formed by dividing both components of the given point by the given divisor. Note that the result is rounded to the nearest integer as points are held as integers. Use QPointF for floating point accuracy. See also QPoint::operator/=(). Writes the given point to the given stream and returns a reference to the stream. See also Serializing Qt Data Types. [constexpr] bool operator==(const QPoint &p1, const QPoint &p2) Returns true if p1 and p2 are equal; otherwise returns false. Reads a point from the given stream into the given point and returns a reference to the stream. See also Serializing Qt Data Types. © 2024 The Qt Company Ltd. Documentation contributions included herein are the copyrights of their respective owners. The documentation provided herein is licensed under the terms of the GNU Free Documentation License version 1.3 as published by the Free Software Foundation. Qt and respective logos are trademarks of The Qt Company Ltd. in Finland and/or other countries worldwide. All other trademarks are property of their respective owners.
{"url":"https://doc-snapshots.qt.io/qt6-6.5/qpoint.html","timestamp":"2024-11-01T22:19:24Z","content_type":"text/html","content_length":"59547","record_id":"<urn:uuid:a30ae5a8-4a58-41c3-9602-73363f79debe>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00508.warc.gz"}
Error - Generate 3 X 120 phase shifted PWM Hello All, I am trying to generate 3 PWM which are 120degree phase apart in Fixed step simulation model and having trouble with choosing sample time. PWM generator high resolution period as sample time for 65khz signal which create issue in choosing integer multiple sample times ! PLECS MSG : Required sample time period (3.846153846153846e-06) Any help would be much appreciated. Thanks in Advance Please upload your model to the forum ,only in this way, can others help you examine your error. Hello ! Thank you for the reply. Attached the PLECS stand alone model PWM_PhaseShift.plecs (9.16 KB) Hello ! Thank you very much for the reply. Attached the model PWM_PhaseShift.plecs (9.16 KB) Why do you require a fixed step simulation for this model? And you really mean to feed the modulator an index of 0.5? If the reference signal is sinusoidal, for example, you can simply vectorize the phase parameter for the Sine Wave Generator block, as, e.g. [0 -2pi/3 2pi/3]. Hello Kris ! Thanks for the reply. I am trying to integrate the PLECS plant with my Simulink model which has requirement to run in fixed step. Currently just to demonstrate, 0.5 value fixed value is used to see whether I can generate PWM with 50% duty cycle and phase shift it. When using a fixed time step, you have to make sure that all fixed-step events in the model are an integer multiple of the defined fixed step size parameter. With Tsw=1/65e3: The PWM modulator requires a fixed time step that’s an integer divisor of Tsw/4 (your current time step of 3.846e-06 from the error message).Phase shifts require a time step that’s an integer divisor of Tsw/3 and 2*Tsw/3.You probably want something less than that in order to accurately generate the PWM waveforms for different duty cycles. The PWM duty cycle quantization is directly linked with the fixed step size.So with the points above, your fixed step solver size should be an integer divisor of Tsw/12 in order to eliminate the error messages. The divisor factor determines your PWM quantization error. Attached is a model that shows this. Note it uses variables set in the Model Initialization Commands.Using a variable step solver simplifies this process greatly. It might also run faster by avoiding unnecessary solver steps. Also note there’s no provision against having fixed-step periodic events when using a variable-step solver. Please reconsider if there’s another way to model the fixed-step portion of your system in a way that’s compatible with a variable-step solver. Lastly, you probably want to implement the phase delay by shifting the carrier as you would in most hardware, not by delaying the PWM signals, as shown in the attached model. PWM_PhaseShift_BL.plecs (12.8 KB) Hello Bryan, Thank you very much for the detailed information. Much appreciated.
{"url":"https://forum.plexim.com/t/error-generate-3-x-120-phase-shifted-pwm/1013","timestamp":"2024-11-07T19:06:13Z","content_type":"text/html","content_length":"34710","record_id":"<urn:uuid:d7885e2e-ba7e-4ca6-8abc-9367e946bbb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00685.warc.gz"}
The quantum coin toss Feb 13, 2013 50:50 chance, but is it classical or quantum? All unpredictability in the world around us, be it the outcome of a coin flip or the weather conditions a month from now, is a fundamentally quantum rather than classical phenomenon. This is the conclusion of two physicists in the US, who have worked out that molecular interactions in gases and liquids can amplify tiny quantum fluctuations, to the point where the fluctuations are large enough to account for the uncertainties we experience at the macroscopic scale. This insight, they argue, could prove important in cosmology, as it might rule out some theories of the multiverse that rely on classical as opposed to quantum probabilities. Classical to quantum predictions In classical theories of probability the chances we attribute to a flipped coin landing either heads-up or tails-up simply reflect how much or how little we know about the coin flipping. To say that there is a 50:50 chance means we have no idea how the coin will land. In principle, however, if we understood exactly which physical processes determine the outcome of the flip and also know with enough precision all of the relevant parameters – such as the force imparted to the coin, the height at which it lands and the air resistance – we could predict the outcome with certainty. According to the latest research, this view is not correct. Andreas Albrecht and Daniel Phillips of the University of California at Davis argue that the probabilities we use in our everyday lives and in science do not "quantify our ignorance" but instead reflect the inherently random nature of the physical world as described by quantum mechanics. They maintain that quantum fluctuations can be amplified sufficiently by known physical processes to the point where they can entirely account for the outcome of these everyday macroscopic events. In fact, they claim that all practically useful probabilities can be accounted for in this way. In other words, all classical probabilities can be reduced to quantum ones. To back up their case Albrecht and Phillips consider an idealized fluid of billiard-ball-like molecules that continually collide with one another. The Heisenberg uncertainty principle dictates that the trajectory of a billiard ball will have an inherent uncertainty, resulting from the uncertainties in its position and momentum. The researchers worked out – by inputting suitable values of radius, mean free path, average speed and mass of the billiard balls into a couple of simple equations – how much this uncertainty grows with each collision between the balls. They show that in water and air (nitrogen) the uncertainty becomes so large in the space of one collision that every single fluctuation in the properties of these fluids has a fully quantum-mechanical origin. Quantum outcomes The researchers then show that the quantum fluctuations manifest in the water can wholly determine the outcome of a coin flip. They calculate that a typical flipped coin can spin through half a revolution in about 1 ms. This is also the temporal uncertainty in the neuronal process governing the coin flipping, a process that a group of neuroscientists in 2008 argued is caused by fluctuations in the number of open neuron ion channels. Since these fluctuations are, in turn, caused by the Brownian motion of molecules called polypeptides in a fluid that is largely water, quantum uncertainty (which drives the Brownian motion) can completely randomize the coin flipping. Cat's tail As such, the researchers say that anyone tossing a coin is, in fact, performing a Schrödinger's cat style experiment. But rather than a cat that is both alive and dead, the quantum object in this case is a coin, the final state of which is simultaneously heads and tails. The outcome of the flip therefore remains genuinely open until the upwards face of the coin is looked at, at which point the system takes on a definite value of either heads or tails. The researchers admit that their example is very simplified and that they would have a hard job tracing the amplification of quantum uncertainties in all familiar contexts, be it rolling dice or picking out a card at random. They also point out that it would only take one counterexample to falsify their idea – a use of classical probabilities that is clearly isolated from the physical, quantum world. David Papineau, a philosopher at King's College London, believes that Albrecht and Phillips are likely to be correct but he doesn't think their conclusion is terribly surprising. "It is very likely that all serious probabilities, be it a coin landing heads-up or a child being female, are manifestations of quantum chanciness," he says. "Indeed we have devices, such as Geiger counters, that show how big results are often caused by chancy micro-events." Albrecht replies that he and Phillips are perhaps the first physicists to have tackled the relationship of quantum and classical probabilities head-on, and he argues that the latest research might also rule out some theories in which physical processes (such as "eternal inflation") produce multiple copies of pocket universes like the one we observe around us. Such theories of the "multiverse", he says, need to import purely classical probabilities because a quantum wave function on its own cannot determine in which universe a particular measurement would be made. But Albrecht points out that such a move would not be possible if classical probabilities are, at root, quantum. A preprint of the research is available on arXiv.
{"url":"https://seqre.net/quantum-coin-toss","timestamp":"2024-11-08T17:35:21Z","content_type":"text/html","content_length":"39027","record_id":"<urn:uuid:cd1e66d6-dc8a-4415-a5bc-a9973b4897b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00303.warc.gz"}
Functional Analysis (Winter Semester 2014/15) Functional Analysis (Winter Semester 2014/15) This webpage is not maintained anymore. Updated informations concerning this lecture you find in the Studierendenportal of the KIT. Lecture: Tuesday 9:45-11:15 Nusselt-Hörsaal Begin: 21.10.2014 Wednesday 11:30-13:00 Criegee HS (R104) Wednesday 11:30-13:00 SR 1. OG Problem class: Friday 14:00-15:30 Eiermann Begin: 24.10.2014 The lecture is concerned with Banach and Hilbert spaces as well as linear operators acting on these spaces. Typical examples are spaces of continuous and integrable functions and linear maps, which one defines via integration of such functions. In this way one can formulate integral equations as affine or linear equations on a suitable Banach space, and one can solve them by means of functional analytic methods. This class of problems was in fact the historical starting point for the development of functional analysis around 1900. In the following years it became a fundamental area of modern analysis and its applications in- and outside of mathematics. A preliminary list of topics: • basic properties and examples of metric and Banach spaces and of linear operators • principle of uniform boundedness and open mapping theorem • dual spaces, Hilbert spaces and Theorem of Hahn-Banach • weak convergence and Theorem of Banach-Alaoglu • Fourier transform, Sobolev spaces, distributions, and applications to partial differential equations Prerequisites: Analysis 1-3 and Linear Algebra 1+2. The lecture is given in English. There will be a written exam on 10 March 2015 from 11:00 to 13:00 in the Gerthsen lecture hall. More details will be given later. Details concerning the written exam: • The written exam will take place on on 10 March 2015 from 11:00 to 13:00 in the Gerthsen lecture hall. If you want to take the examination, be already present at 10:45 please, so that the exam can begin on time. • If you want to take the examination, please register for the exam, depending on your branch of study and subject area, using the QISPOS system or contact Ms. Fuchs or Heiko Hoffmann. Closing date for registration is 02 March 2015. • The content of the exam will consist of the content of the chapters 1-4. • Apart from two handwritten DIN A4 pages (or equivalently: one double-page handwritten DIN A4 sheet of paper), there are no other auxiliary means allowed. • If you have further questions, please contact Heiko Hoffmann. On my webpage one can find the PDF file of the manuscript of my lecture Functional Analysis from winter semester 2011/12. An updated version will be delivered probably in spring. A few relevant • D. Werner: Funktionalanalysis. Springer. • H.W. Alt: Lineare Funktionalanalysis. Springer. • H. Brezis: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer. • J.B. Conway: A Course in Functional Analysis. Springer. • M. Schechter: Principles of Functional Analysis. Academic Press. • A.E. Taylor, D.C. Lay: Introduction to Functional Analysis. Wiley.
{"url":"https://www.math.kit.edu/iana3/edu/funcana2014w/","timestamp":"2024-11-06T10:51:31Z","content_type":"text/html","content_length":"167758","record_id":"<urn:uuid:2926f1e2-ecd5-4bec-94ce-e23d877d2783>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00693.warc.gz"}
Printable Flash Printable Flashcards For Multiplication Printable Flashcards For Multiplication - Fold multiplication cards in half and glue. Web look no further than these free printable multiplication flash cards! Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. These handy flash cards multiplication templates make. These flashcards start at 0 x 0 and end at 12 x 12. Our math multiplication flash cards with answers on back are easy to print. Web complete color student flash card set. Small individual student flash card set (2.25 x 3) for use with our picture and story. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. The first set of flash cards contains each of the multiplication. Free Printable Multiplication 012 Flashcards with pdf Number Dyslexia The first set of flash cards contains each of the multiplication. These handy flash cards multiplication templates make. Web here you will find our selection of printable math flash cards for multiplication facts, to help you learn your 2 3 4 5 and 10 times. Web here you will find printable sets of flashcards for the 7 times table. Our. Printable Multiplication Flash Cards 6 AlphabetWorksheetsFree Web complete color student flash card set. Our math multiplication flash cards with answers on back are easy to print. Web look no further than these free printable multiplication flash cards! Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. Small individual student flash card set (2.25 x 3) for use with our picture. 9 Multiplication Flash Cards Printable Printable Cards Fold multiplication cards in half and glue. Web here you will find our selection of printable math flash cards for multiplication facts, to help you learn your 2 3 4 5 and 10 times. Web here you will find printable sets of flashcards for the 7 times table. Our math multiplication flash cards with answers on back are easy to. Free Printable Multiplication Flash Cards Double Sided The first set of flash cards contains each of the multiplication. Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. Web look no further than these free printable multiplication flash cards! Web here you will find our selection of printable math flash cards for multiplication facts, to help you learn your 2 3 4. Multiplication Flashcards, Printable Flashcards, Mathematics Cards, a 2 These handy flash cards multiplication templates make. These flashcards start at 0 x 0 and end at 12 x 12. The first set of flash cards contains each of the multiplication. Small individual student flash card set (2.25 x 3) for use with our picture and story. Fold multiplication cards in half and glue. Multiplication Colorful Flashcard Sheets Kidpid Fold multiplication cards in half and glue. These handy flash cards multiplication templates make. Web here you will find our selection of printable math flash cards for multiplication facts, to help you learn your 2 3 4 5 and 10 times. Web complete color student flash card set. Web look no further than these free printable multiplication flash cards! Printable Multiplication Flashcards 012 These flashcards start at 0 x 0 and end at 12 x 12. Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. Our math multiplication flash cards with answers on back are easy to print. Web look no further than these free printable multiplication flash cards! Web print these free multiplication flash cards to. Multiplication Facts Flash Cards Printable Printable Flash Cards Web complete color student flash card set. Small individual student flash card set (2.25 x 3) for use with our picture and story. These flashcards start at 0 x 0 and end at 12 x 12. Fold multiplication cards in half and glue. These handy flash cards multiplication templates make. Multiplication Flash Cards 112 Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. The first set of flash cards contains each of the multiplication. These handy flash cards multiplication templates make. Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. Web look no further than these free printable multiplication flash cards! Multiplication Flash Cards 112 Printable Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. The first set of flash cards contains each of the multiplication. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Our math multiplication flash cards with answers on back are easy to print. Small individual student flash Web look no further than these free printable multiplication flash cards! Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. Our math multiplication flash cards with answers on back are easy to print. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Web here you will find printable sets of flashcards for the 7 times table. These flashcards start at 0 x 0 and end at 12 x 12. Web complete color student flash card set. The first set of flash cards contains each of the multiplication. Small individual student flash card set (2.25 x 3) for use with our picture and story. These handy flash cards multiplication templates make. Web here you will find our selection of printable math flash cards for multiplication facts, to help you learn your 2 3 4 5 and 10 times. Fold multiplication cards in half and glue. The First Set Of Flash Cards Contains Each Of The Multiplication. These flashcards start at 0 x 0 and end at 12 x 12. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Web here you will find printable sets of flashcards for the 7 times table. These handy flash cards multiplication templates make. Small Individual Student Flash Card Set (2.25 X 3) For Use With Our Picture And Story. Fold multiplication cards in half and glue. Our math multiplication flash cards with answers on back are easy to print. Web print these free multiplication flashcards to help your kids learn their basic multiplication facts. Web complete color student flash card set. Web Look No Further Than These Free Printable Multiplication Flash Cards! Web here you will find our selection of printable math flash cards for multiplication facts, to help you learn your 2 3 4 5 and 10 times. Related Post: Printable Ornament Shapes Free Printable Snow Globe Template 2024 Nit Tournament Bracket Printable Printable Scarecrow Face Stencil Printable Nature Pictures To Color Printable Months Of The Year Chart Printable Large Print Word Search My Little Pony Printable Images Witch Stencil Printable Coloring Pages Horses Free Printable
{"url":"https://68ore.plansverige.org/en/printable-flashcards-for-multiplication.html","timestamp":"2024-11-10T22:45:49Z","content_type":"text/html","content_length":"28653","record_id":"<urn:uuid:6e4d95d7-9160-4daf-87b0-ebc87525e91e>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00432.warc.gz"}