text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
ESP8266 Based NeoPixel Dashboard This. The construction consist of following basic parts. - ESP8266 based NodeMCU, any other board can be used () - NeoPixel matrix () - LiIo Battery (From Amazon) - RTC for time keeping (From Amazon or Ebay) - Battery charging circuit (The one found on ebay, or Amazon) - Front and Back plates from Pimoroni () The connection is straightforward, The Data In Pin of the matrix is connected to the D3 pin of NodeMCU. and the 5V Pin is connected to the 5V of battery charging circuit. This is to make sure that we do not fry the ESP. Protection diode needs to be placed between all the points, but I was lazy enough not to use one, as who cares even if I loose a cheap NodeMUC. All the connection are totally up to you, this is not the only way you can use it. The Magic lies in the Code, The meat lies in the code. Code is everything !!!!!!...... Before uploading the code make sure, you have the ESP9266 Arduino Core and that you have connected the ESP8266 to WiFi at least once, and do remember to use the libraries that I have included in this repository, the one that I have used have been modified a bit to be able to use it properly. The description is below. Original Library - Adafruit_GFX () - Adafruit_NeoPixel () - Blynk () - WiFiManager () - AsyncPing () - NTPClient () - RTC_By_Makuna () - ArduinoOTA (InBuilt) - Simple-Timer () Modified - 1Adafruit_NeoMatrix () - WS2812FX () Now once you open the main file, go to the _defines.h_ and insert yout blynk app token in the bottom last paramater named _auth_ Then upload the code to the ESP. Arrange the Widgets according to the following image making sure that you have used the exact same virtual pin, and if you want to use your own setup, you are good to go, just make sure to make chages in the defines.h file accordingly. After that is done, now you need to manually add the list of effects in the effect list drop down widget, as follow in the same order. STATIC BLINK BREATH COLOR WIPE COLOR WIPE RANDOM RANDOM COLOR SINGLE DYNAMIC MULTI DYNAMIC RAINBOW RAINBOW CYCLE SCAN DUAL SCAN FADE HEATER CHASE THEATER CHASE RAINBOW RUNNING LIGHTS TWINKLE TWINKLE RANDOM TWINKLE FADE TWINKLE FADE RANDOM SPARKLE FLASH SPARKLE HYPER SPARKLE STROBE STROBE RAINBOW MULTI STROBE BLINK RAINBOW CHASE WHITE CHASE COLOR CHASE RANDOM CHASE RAINBOW CHASE FLASH CHASE FLASH RANDOM CHASE RAINBOW WHITE CHASE BLACKOUT CHASE BLACKOUT RAINBOW COLOR SWEEP RANDOM RUNNING COLOR RUNNING RED BLUE RUNNING RANDOM LARSON SCANNER COMET FIREWORKS FIREWORKS RANDOM MERRY CHRISTMAS The Blynk Library make some blocking network calls, to make sure that it is connected to the blynk server, but when we are making certain projects like this one, we want Blynk to be an helper not the main king, that completely stops all other executuion if it is not able to find its master. So, in this program I have used a neat trick, I have used an AsyncPing Library that helps to make sure that the device is connected to the internet and not just connected to the WiFi. This is achieved by making the Blynk.connect() call only to be executed when the Ping to the Google Server returns true. This ensures that the sketch completely do not become a blocking sketch, but there are some catch, such as when the blynk server itself is down, than it will stop the sketch, as the device is able to access the internet, but not the blynk server. Some Changes can be made to the sketch, so that if the connection to the blynk server is not achieved, then wait for sometime like 1 hour before reconnecting. So let cut short the crap and let me show you the trick.... Use this library AsyncPing () #include <ESP8266WiFi.h> #include "AsyncPing.h" AsyncPing ping; //Check if internet is up an running. bool pingCheck = false, blynkConnected = false; //Check if Blynk needs to be connected or not, only if internet is up and running, not only wifi. void blynkCheckEvent() { if (blynkConnected == false && pingCheck == false) { ping.begin("8.8.8.8"); } if (blynkConnected == false && pingCheck == true) { Blynk.connect(); } } void setup() { Blynk.config(auth); if ( Blynk.connect()) { blynkConnected = true; } //The code to make sure that the reconnect code is non blocking, this is achieved by this awesome library that helps us ping in an async manner. ping.on(false, [](const AsyncPingResponse & response) { if (response.total_recv > 0) { Serial.println("Ping Sucessfull"); pingCheck = true; } else { pingCheck = false; Serial.println("Ping UnSucessfull"); } return true; //doesn't matter }); } void loop() { if (Blynk.connected()) { blynkConnected = true; Blynk.run(); pingCheck = false; } else { blynkConnected = false; } }
https://hackaday.io/project/19773-esp8266-based-neopixel-dashboard/details
CC-MAIN-2021-39
refinedweb
767
68.6
I'm getting build errors on std::shared_ptr for Android. I am using std::shared_ptr as well as C++11 iterators in some parts of the project. I managed to get the JUCE demo compiling for Android without problems and tested on my device. In my Introjucer project I have set the C++11 flag, set the toolchain to 4.8 and also tried adding to 'External libraries to link': <my user home>/dev/SDKs/android-ndk/sources/cxx-stl/gnu-libstdc++/4.8/include Its coming up with errors like jni/../../../Source/SharedObjects.h:31:5: error: 'shared_ptr' in namespace 'std' does not name a type std::shared_ptr<Scale> scale; ^ It'd be a shame to have to take out the C++11 stuff to get it working on Android. I could swap std::shared_ptr for SharedResourcePointer. Also posted on Stack Overflow:
https://forum.juce.com/t/c-11-build-errors-std-shared-ptr/14032
CC-MAIN-2018-34
refinedweb
143
66.94
By Yi Xian there's Fun, which is a tool that supports serverless application deployment and allows easy management of resources such as Function Compute, API Gateway, and Log Service. You can use Fun to develop, build, and deploy resources by describing specified resources in the template.yml file. Last, there's also Fun Local, which is a sub-command of Fun. You can use it directly through the fun local command. The Fun Local tool can fully simulate and run the functions in Function Compute locally and provides the single-step debugging feature, which makes up for Function Compute shortcomings when compared with traditional application development experience, and provide users with a new way to solve Function Compute problems. Note: The techniques described in this article are applicable to Fun 2.8.0 or later. First, you can use fun local invoke -h to view the help information to involve Fun Local: Usage: fun local start [options] Allows you to run the Function Compute application locally for quick development & testing. It will start an http server locally to receive requests for http triggers and apis. It scans all functions in template.yml. If the resource type is HTTP, it will be registered to this http server, which can be triggered by the browser or some http tools. For other types of functions, they will be registered as apis, which can be called by sdk in each language or directly via api. Function Compute will look up the code by CodeUri in template.yml. For interpreted languages, such as node, python, php, the modified code will take effect immediately, without restarting the http server. For compiled languages such as java, we recommend you set CodeUri to the compiled or packaged localtion. Once compiled or packaged result changed, the modified code will take effect without restarting the http server. Options: -d, --debug-port <port> specify the sandboxed container starting in debug mode, and exposing this port on localhost -c, --config <ide/debugger> output ide debug configuration. Options are vscode -h, --help output usage information The command format is consistent with HTTP Triggers:. First, use fun local start to. First, install the fc python sdk: pip install aliyun-fc2 Write code: import fc2 client = fc2.Client(endpoint='', accessKeyID='<your access key id>', accessKeySecret='your access key secret') resp = client.invoke_function('localdemo', 'php72') print resp.headers print resp.data Note: The accessKeyId and the accessKeySecret configured in the SDK must be consistent with the configurations in Fun. Otherwise, the signature authentication would fail during the invocation. The following figure shows the execution process. How to Develop Function Compute with WordPress Applications Run and Debug HTTP Trigger Configured Functions Locally 33 posts | 5 followersFollow Alibaba Cloud Serverless - August 21, 2019 Alibaba Cloud Serverless - August 21, 2019 Amber Wang - August 6, 2018 Alibaba Clouder - June 5, 2019 Myers Guo - May 10, 2021 amap_tech - December 2, 2019 33 posts | offers an accelerated global networking solution that makes distance learning just the same as in-class teaching.Learn More Connect your business globally with our stable network anytime anywhere.Learn More
https://www.alibabacloud.com/blog/run-and-debug-functions-locally-through-the-api_595258
CC-MAIN-2021-25
refinedweb
514
53.71
tag:blogger.com,1999:blog-107486142022-09-07T02:33:19.223-04:00FrazzledDadBleary-eyed ruminations of a work at home Father.Jim Holmes Complexity<div>So, so much of good system design is abstracting out complexity. So, so much of good testing is understanding where the complexity is and poking that with a flamethrower until you decypher as many interesting things about that complexity as you possibly can.</div><div><br /></div><div>Yes, that's a lot of badly mixed metaphors. Deal with it.</div><div><br /></div><div.</div><div><br /></div><div>Simple problem, familiar domain. Hours, rate, determine how much a worker gets paid before deductions. </div><div><br /></div><div>Right now we've finished standard time and have six total tests (three single XUnit [Fact] tests and one data-driven [Theory] with the same three test scenarios). </div><div><br /></div><div>The "system" code right now is this bit of glorious, beautiful stuff below. Please, be kind and remember the context is to show testers a bit about TDD and how code works. No, I wouldn't use int for actual payroll, m'kay?</div><div><br /></div><div> <span style="font-family: courier;"> public class PayrollCalculator {</span></div><div><span style="font-family: courier;"> public int ComputeHourlyWages(int hours, int rate)</span></div><div><span style="font-family: courier;"> {</span></div><div><span style="font-family: courier;"> return hours * rate;</span></div><div><span style="font-family: courier;"> }</span></div><div><span style="font-family: courier;"> }</span></div><div> </div><div!</div><div><br /></div><div>The intent of my tester's question was if we should make the system work like the snippet below--some hand-wavy psuedo code is inline.</div><div><br /></div><div><span style="font-family: courier;"> public class PayrollCalculator {</span></div><div><span style="font-family: courier;"><br /></span></div><div><span style="font-family: courier;"> public int ComputeOvertimeWages(int hours, int rate) {</span></div><div><span style="font-family: courier;"> //calculate ot wages</span></div><div><span style="font-family: courier;"> return otWages;</span></div><div><span style="font-family: courier;"> }</span></div><div><span style="font-family: courier;"><br /></span></div><div><span style="font-family: courier;"> public int ComputeStandardTimeWages (int hours, int rate) {</span></div><div><span style="font-family: courier;"> <span style="white-space: pre;"> </span> //calculate standard wages</span></div><div><span style="font-family: courier;"> return standardWages;</span></div><div><span style="font-family: courier;"> }</span></div><div><span style="font-family: courier;"> }</span></div><div> </div><div> </div><div>Splitting calls to compute separate parts of one overall action may seem to make sense initially, but it's far more risky and complex.</div><div><br /></div><div>What are we trying to do? We're trying to figure out what a worker gets paid for the week. That's the single outcome.</div><div><br /></div><div>Think about some of the complexities we might run in to if this was broken into two separate calls. Think of some of the risks that might be involved.</div><div><br /></div><div><ul style="text-align: left;"><li>Does the order of the calls matter? Do I need to figure standard hours first, then call the ComputeOvertimeWages method? What happens if I call overtime before standard?</li><li>Do I call overtime for only the hours above 40?</li><li>If the worker put in over 40 hours, do I call standard wages with just 40, or will the method drop extra hours and just figure using 40?</li><li>Does the code invoking these calls have to keep track of standard and overtime hours?</li><li>What happens if in the future we change the number of standard hours in the pay period?</li></ul></div><div><br /></div><div>As a system designer you're far better off abstracting all this away from the person calling your methods.</div><div><br /></div><div>One simple method, and you hide that complexity. Just give the person calling your API what they want: the actual wages for the worker.</div><div><br /></div><div>This same concept applies if you're dealing with complex workflows and state. Let's say you have a five step workflow for creating a new job bid, and you need several critical pieces of information at each step.</div><div><br /></div><div.</div><div><br /></div><div>One interaction, one nice result. Easier to write for your consumers, far easier to test, too.</div><div><br /></div><div>Someone years ago spoke of making it so your users could fall into the pit of success. Do more of that, and less of pushing them into the pit of despair. (I'm throwing out a Princess Bride reference, not one to Harry Harlow...)</div> Talk: You Got ThisLast week I was fortunate to have been the keynoter at <a href="" target="_blank">DevSpace Technical Conference</a> in Huntsville, Alabama. DevSpace's chairman Chris Gardner reached out to me some months ago and asked if I'd be willing to talk about how I've gotten through life since <a href="" target="_blank">the awful events of January 10th, 2017</a>.<br /> <br /> Below is a video of that talk. It's intense, emotional, and very likely a complete surprise to attendees who didn't already know of the tragedy that struck my family last year.<br /> <br /> As with <a href="" target="_blank">my KalamazooX conference talk last March</a>,.<br /> <br />. <br /> <br /> This talk is fairly different from the KalX talk above. Lots of overlap, but it's a different focus, because I was trying to point out to the audience that each and every one of us has the ability to weather horrible storms.<br /> <br /> You Got This.<br /> <br /> <iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="543" src="" width="800"></iframe> New Technical and Leadership Blog for Me!I decided to move my technical and leadership postings over to a new blog on my <a href="" target="_blank">Guidepost Systems site</a>.<br /> <br /> I'm doing this in the hopes of continuing to shore up and flesh out my professional branding around Guidepost Systems.<br /> <br /> I will occasionally cross-link content here as a reminder. I'll continue to post notices on my <a href="" target="_blank">Twitter timeline</a> when things go live over at <a href="" target="_blank">my blog</a> there.<br /> <br /> Please go follow along at that new location. I look forward to comments and discussions on postings there!<br /> <br /> (I've already got a series going there on creating a technical debt payment Test CredoA few years ago I scribbled down some thoughts to myself as I was struggling with my brain and a frustrating project.<br /> <br /> I pinned these notes on a cubicle wall without thinking much as a reminder to myself. Never thought much else of it, simply because this was me reminding myself of things I needed reminding of.<br /> <br /> Friday a good pal who was on that same project hit me with a shot out of nowhere when he reminded me of this. I guess it had an impact on him as well.<br /> <br /> Frankly I’d forgotten about these. His comment was a good reason to go hunt this down.<br /> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><img alt="Jim's Testing Credo" height="225" src="" style="margin-left: auto; margin-right: auto;" title="Jim's Testing Credo" width="400" /></td></tr> <tr><td class="tr-caption" style="text-align: center;">Jim's Testing Credo</td></tr> </tbody></table> A Geek Leader Podcast<p>Somehow I forgot to post here that John Rouda was kind enough to invite me on his A Geek Leader podcast some time back.</p> <p>We talk about leadership, learning, adversity, and of course The Event from Jan 10th, 2017.</p> <p>John’s a wonderful, gracious host and we had a great conversation. You can find details <a href="">at John’s site< Rationalizing Bad Coding Practices<p>Rant. (Surprise.)</p> <p?</p> <p>Believe it or not, there are times I’m OK with this.</p> <p>I’m OK with the practices above <strong>if</strong>:</p> <ul> <li>Your business stakeholders and users are happy with the system in production</li> <li>Your rework rate for defects and missed requirements is near zero</li> <li>You have fewer than six to ten defects over several months</li> <li>You have near zero defects in production</li> <li>Your codebase is simple to maintain and add features to</li> <li>Static analysis of your codebase backs up the previous point with solid metrics meeting recognized industry standards for coupling, complexity, etc.</li> <li>Everyone on the team can work any part of the codebase</li> <li>New team members can pair up with an experienced member and be productive in days, not weeks</li> </ul> <p>If you meet the above criteria, then it’s OK to pass up on disciplined, PROVEN approaches to software delivery–because you are meeting the end game: high-value, maintainable software that’s solving problems.</p> <p>The thing is, very, VERY few people, teams, or organizations can answer all those questions affirmatively if they’re being remotely honest.</p> <p.</p> <p>The rest of the 99.865% of the software industry has decades of data proving how skipping careful work leads to failed projects and lousy care of our users.</p> <p.</p> <p>Do not rationalize your concious decisions to do poor work with “I’m more effective when I just…” No. No, you are not. You think you may be, but not unless you can answer the questions above “Yes!” with confidence and honesty.</p> <p>Stop rationalizing. Stop making excuses.</p> <p>Own. Your. Shit.</p> <p>And clean it WebDriver Components<h2> Understanding WebDriver Components</h2> <b>[UPDATE]</b> This post is based on a submission I made to the official WebDriver documentation in early Spring of 2018. It's meant to help folks understand how pieces and parts fit together for WebDriver. <b>[/]</b><br /> <br /> Building a test suite using WebDriver will require you to understand and effectively use a number of different components. As with everything in software, different people use different terms for the same idea. Below is a breakdown of how terms are used in this description. <br /> <h3> Terminology</h3> <ul> <li><b>API:</b> Application Programming Interface. This is the set of "commands" you use to manipulate WebDriver. </li> <li><b>Library:</b> A code module which contains the APIs and the code necessary to implement them. Libraries are specific to each language binding, eg .jar files for Java, .dll files for .NET, etc. </li> <li><b>Driver:</b> Responsible for controlling the actual browser. Most drivers are created by the browser vendors themselves. Drivers are generally executable modules that run on the system with the browser itself, not on the system executing the test suite. (Although those may be the same system.) <i>NOTE: Some people refer to the drivers as proxies.</i> </li> <li><b>Framework:</b>. </li> </ul> <h3> The Parts and Pieces</h3> At its minimum, WebDriver talks to a browser through a driver. Communication is two way: WebDriver passes commands to the browser through the driver, and receives information back via the same route. <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br />.<br /> <br /> This simple example above is <i>direct</i> communication. Communication to the browser may also be <i>remote</i> communication through Selenium Server or RemoteWebDriver. RemoteWebDriver runs on the same system as the driver and the browser. <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br /> Remote communication can also take place using Selenium Server or Selenium Grid, both of which in turn talk to the driver on the host system <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br /> <h3> Where Frameworks Fit In</h3> Web. <br /> This is where various frameworks come in to play. At a minimum you'll need a test framework that matches the language bindings, eg NUnit for .NET, JUnit for Java, RSpec for Ruby, etc.<br /> <br /> The test framework is responsible for running and executing your WebDriver and related steps in your tests. As such, you can think of it looking akin to the following image. <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br /> The test framework is also what provides you asserts, comparisons, checks, or whatever that framework's vernacular for the actual test you're performing, eg<br /> <br /> <blockquote class="tr_bq"> AssertAreEqual(orderTotalAmount, "$42");</blockquote> <br /> Natural language frameworks/tools such as Cucumber may exist as part of that Test Framework box in the figure above, or they may wrap the Test Framework entirely in their own implementation.<br /> <br /> Natural language frameworks enable the team to write tests in plain English that help ensure clarity of <i>why</i> you are building something and <i>what</i> it is supposed to do, versus the very granular <i>how</i> of a good unit test.<br /> <br /> If you're not familiar with specifications, Gherkin, Cucumber, BDD, ATDD, or whatever other soup-of-the-day acronym/phrase the world has come up with, then I encourage you to go find a copy of <a href="" target="_blank">Specifications By Example</a>. It's a wonderful place to start. You should follow that up with <a href="" target="_blank">50 Quick Ideas to Improve Your User Stories</a>, and <a href="" target="_blank">50 Quick Ideas to Improve Your Tests</a>, both by Gojko Adzjic.<br /> <h3> Following Up</h3> Don't stop here. Go learn more about how WebDriver works. Read the <a href="" target="_blank">WebDriver documentation</a>. Sign up for Dave Haeffner's awesome <a href="" target="_blank">Elemental Selenium</a> newsletter and read his past articles.<br /> <br /> Join the <a href="" target="_blank">Slack Channel</a> and ask questions. (But please, do yourself and the Selenium community a favor and first do a little research so you're asking questions in a fashion that can help others best respond!) ThatConference.<br /> <br /> Lots there on moving testing conversations to the left. Lots there about testing as an activity.<br /> <br /> Thank you if you attended the session. I had some really good questions, folks were patient with my bad jokes, and there were some really good conversations after the talk.<br /> <br />%.<br /> <br /> Thank you.<br /> <script async="" class="speakerdeck-embed" data-</a><br /> <br />.<br /> <br /> <a href="" target="_blank"></a> <script async</a>) and is full of conversations and exercises meant to help attendees figure out <em>if</em> they want to become leaders, and what they need to learn about themselves in order to be successful as they grow. It’s also full of my bad jokes, but what else would you expect?<br /> <br /> Slides for the workshop are on SpeakerDeck at <a href="" target="_blank"></a> <script asyncthe intro article with links to others here</a>.] <br /> Titanfall 2 is a really fun game, even though the multiplayer aspect is not a type of game I do well at or even search out to play in other games. [<i>Ed.: Dude, you have 28 <b>days</b> of total gameplay and you just got Gen 50. WTF? Seriously?</i>]<br /> A few closing thoughts for this series:<br /> <ul> <li><b>Find Game Modes That Work For You.< <i>for you</i>.</li> <li><b>Figure Out Your Goals. Or If You Even Care.</b> <i>have</i> to have goals. That’s just fine too.</li> <li><b>Get a Mic. Chat With Your Team.</b> <b>exactly</b> why you ended with five kills and few points. Jerkface.</li> </ul> The vast majority of folks with mics tend to be good teammates. A very few even know how to communicate well to <i>help</i> the team, especially when you’re playing Frontier Defense.<br /> <ul> <li><b>Learn Effective Communication.</b> .”</li> </ul> Of course, there’s my always helpful running “useful” commentary: “Well, shit. That didn’t work so well.” Or “Damnit, Funky Chicken killed my ass again because I was stupid and ran in front of him.”<br /> Don’t be me. Be better than me…<br /> <ul> <li><b>Have Fun.</b>.</li> </ul> <h2 id="inclosing"> In Closing</h2> I.<br /> Look me up some time if you’re interested. My GamerTag is FrazzledDad and I’m online 9pm-ish in the Pacific timezone.<br /> In the meantime, go have some fun.: Movement and Shooting<p>[NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] </p> <p.)</p> <h2 id="speedisyourfriend">Speed Is Your Friend</h2> <p>ProTip from Captain Obvious: The faster you’re moving, the harder it is to get shot. Duh.</p> <p>Spend time in the Gauntlet learning to move quickly, and learning how to string together moves that add to your speed: wall runs, leaps, grapple, slides, all the neat things that really make moving as a pilot so fun.</p> <p>Learning the maps well will help you out greatly with your movement, simply by knowing “Oh, yeah, I can bounce along this route right here.”</p> <p.</p> <p>Speaking of the grapple…</p> <h2 id="ilovemygrapple">I Love My Grapple</h2> <p>The various pilot tactical mods are all neat, but I have used the Grapple exclusively for many months. The Grapple lets me get to higher spots for better firing positions.</p> <p.</p> <p.</p> !</p> <p>Like everything else, the Grapple take some practice to get proficient with it. It’s freaking awesome once you’re good.</p> <p>As I’ve repeatedly said in this series, this is specific to my style of play. I’m happy for you if there are other tacticals you prefer. Honest.</p> <h2 id="changingdirectionviaslides">Changing Direction via Slides</h2> <p.</p> <h2 id="sightlocationwhilemoving">Sight Location While Moving</h2> <p>Pay attention to where you’re keeping your hip-fire sights while moving. For the longest time I’d run around with my ADS reticule down below the horizon. No clue why, it’s just how I rolled. </p> <p.</p> <h2 id="movingsidewaysorkeepyoursightonthreats">Moving Sideways, Or Keep Your Sight on Threats</h2> <p.</p> <h2 id="getfasteratgettingyoursightontarget">Get Faster at Getting Your Sight on Target</h2> <p>Getting your sight on target faster means you’ve got better odds at killing the enemy before they kill you. Hello, thanks Captain Obvious.</p> <p>One part of this is the Gun Ready mod which gets you into ADS quicker. The other part is getting better at getting your sights <strong><em>on</em></strong> the target. That comes through practice, either deliberate practice or in the game.</p> <p.</p> .</p> <p>I spent a lot of time doing things like that simple movement from various directions at various target ranges (near, mid, far).</p> <p>I’m not great, but it paid off.</p> <h2 id="learntoshootfromthehip">Learn to Shoot From the Hip</h2> <p>Firing from the hip saves you time transitioning to sights. It also leaves you a wider view versus the constrained one you get in ADS. Hip fire is especially good against opponent minions who don’t move and dodge very effectively.</p> <p>Don’t focus on improving just your ADS firing; spend time on hip fire too.<: Tactics<p>[NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] </p> <p>Oi. Where to start?</p> <p.” </p> <p>Thanks a Ton. Next in that series: “Five Ways to Make Friends, Starting With Not Picking Your Butt in Public.”</p> <p>Here are a collection of odds and ends I’ve picked up. It’s stuff that lots of accomplished FPS players will be saying “Well, duh!” to, but hopefully some readers (all three of you) may find useful.</p> <p>This is general tactics—there’s a whole separate post on movement and shooting. Yes, there’s some overlap. Deal with it.</p> <h2 id="learnthemaps">Learn The Maps</h2> <p>Know the map. I can’t emphasize how important this is. It took me far longer to figure out just how critical this is for <strong>any</strong> FPS game. Knowing the map inside and out gives you many critical advantages. </p> <p.</p> <p>Some things to look for as you’re learning the maps:</p> <ul> <li><p>Find good shooting spots </p></li> <li><p>Find good shooting spots that help hide you</p></li> <li><p>Find good shooting spots that help hide you with good cover that protects you from fire from at least one or more angles. (Think of hiding with a wall to your side or mechanical structures on roofs behind you.)</p></li> <li><p>Find good fire lanes—areas that offer good cover for you and lots of visibility to see opponents. Think of the main street under the monorail on Eden; the main corridors on Rise; much of the open spaces on Homestead</p></li> </ul> <p.”</p> <h2 id="cover">Cover</h2> <p>It took me way, <strong><em>WAY</em></strong> too long to get better at using cover. </p> <p>If you’re moving, do so along paths that block you from fire from one or more directions. Wall running is great for several reasons. First, you’re moving fast. Secondly, you’re harder to hit. Third, nobody can shoot you anywhere from the other side of the wall.</p> <p <em>not</em> covered from.</p> <p>Keep an eye on your minimap. Keep cover in mind when you see threat indicators on the map. Keep something between you and those threat directions until you’re ready to have a look or attack out in that direction.</p> <h2 id="avoidfirelanes">Avoid Fire Lanes</h2> <p.</p> <h2 id="reloadconstantly">Reload Constantly</h2> <p>As Master Sergeant Brianna Fallon eloquently put it to her squad in <em>Chains of Command</em>, “If one of you sons of bitches gets killed for lack of shooting back because you ran out of ammo, I will personally violate your carcass.”</p> <p>You do not want to die because your mag had one round in it when you come face to face with an opponent who has you in their sights.</p> <p>Regardless if I’m in a Titan or on foot as a Pilot, I reload <em>constantly</em>. I’ll take advantage of displacing movements to reload, ducking behind cover, etc. I don’t wait for my mag to empty and auto-reload. Instead I want to make sure I’m heading to the next engagement with a full mag.</p> <p.</p> <p>Reload. Reload all. The. Freaking. Time.</p> <h2 id="firedisplacerepeat">Fire, Displace, Repeat</h2> <p>You know what I love? I love opponents who fall in love with a clever spot and hang out there firing away, giving me a chance to work around to get shots at them.</p> <p. </p> <p.”</p> <p>That’s a tactical choice, and it’s not necessarily a bad one. Just make those choices with some smarts instead of “HOLY COW IMA TOTALLY HAVING A GREAT TIME HERE OH CRAP I DIED.”</p> <h2 id="watchyourflanks.nobodyelsewill">Watch Your Flanks. Nobody Else Will</h2> <p>Very, very few teams work well together. Frankly, few teams even work modestly well together. Nearly everyone runs off in search of their own glory while ignoring that paying attention to what’s going on around them might help them and the rest of the team.</p> <p.</p> <p>Keep a weather eye on your flanks. Because it’s rare that others will.</p> <h2 id="avoidrodeoingtitansfromthefront">Avoid Rodeoing Titans From the Front</h2> <p.</p> <p>If you <strong>do</strong> from the front, use whatever ordinance you have to try and distract the Titan. This is partially why I like Firestars—you can blind a Titan and either run or rodeo with much better chance of success. Be careful, though, because you can kill <em>yourself</em> with your own Firestar or electric smoke ordinance. Ask me how I know…</p> <p>If you’re grappling from the front of a Titan, do <strong>not</strong> fly towards the titan in a straight line. Use your controller to fly up high, then loop down and mount the Titan. This will keep you out of melee range. Same thing works flying to one side or another. Point being, don’t fly in straight to the Titan.</p> <h2 id="oddsandends">Odds and Ends</h2> <p><strong>Choose Your Colors Carefully:</strong>.</p> <p><strong>Avoid Drop Ship Door Fire Lanes:</strong>.</p> <p><strong>Don’t Waste Ammo on Pilots in the Drop Ship:</strong> You can’t kill pilots in the drop ship, just the ship itself. <: Weapons, Boosts, Kits, and Ordinance<p>[NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] </p> <p>Over the year I’ve played I’ve settled into a comfy groove with my equipment. Here’s a few thoughts on my fave and not-so-fave items.</p> <h2 id="weapons">Weapons</h2> <p).</p> <p <strong><em>for my style of play</em></strong>.</p> <p><strong>Hemlock:</strong> My go-to weapon. It’s great in hip fire, and I can snipe at very long range. The burst mode gives me several rounds accurately on target, and I can knock out pilots with two bursts. Normally. When I’m playing well.</p> <p><strong>Flatline:</strong> Second favorite weapon. Great impact, solid accuracy both from hip fire and ADS, even at longer range. Moderate recoil is controllable for me when I’m shooting past mid-range.</p> <p><strong>Devotion:</strong> My go-to weapon for Frontier Defense due to its huge ammo capacity, especially when modded up with extra ammo. Beautiful at close range with hip aiming, solid at mid-range ADS. Squirrely at long range, but hey, it’s an LMG.</p> <p><strong>Kraber:</strong>.</p> <blockquote> <p>Note: This gun is <strong><em>killer awesome</em></strong>.</p> </blockquote> <p><strong>G2:</strong>.</p> <blockquote> <p.</p> </blockquote> <p><strong>Cold War:</strong> I love, LOVE, <strong>LOVE</strong> this weapon for Bounty Hunt. See my notes about it in the Game Modes post.</p> <p><strong>CAR:</strong> I like this gun just fine. Good accuracy, nice damage. I don’t play it much because there’s other weapons I prefer.</p> <p><strong>R–97:</strong> I don’t play well at closer ranges, so I tend not to use this much. But it sounds wicked cool when it fires. Same reason I love the Vector in Call of Duty Squads. So I do run it every once in awhile for fun. Because why not? </p> <p><strong>Shotguns:</strong> No. Just. No. Like I’ve said, me no likey close range combat. I got all my shotguns to Gen 2 just to prove I could, then stopped playing them.</p> <p><strong>DMR:</strong>.</p> <p><strong>Alternator:</strong> Beloved weapon of the crazed crackhead monkey wall-running space flying stim-boosted kids who kill me all the time. I just don’t play it well. I hit Gen 2 with it and put it to rest in the same grave with the DMR.</p> <p><strong>Others:< <strong><em>for my style of play.</em></strong></p> <h3 id="anti-titanweapons">Anti-Titan Weapons</h3> <p!</p> <p>The other AT weapons aren’t bad, and I will run the Archer occasionally. The Thunderbolt is just what I prefer. I think I’m up to Gen60 with mine.</p> <h3 id="athoughtortwoonsights">A Thought or Two on Sights</h3> <p>Find a sight that works well for your style of play. I love the HCOG and have been using it exclusively for months. It takes away field of vision when you’re ADS, but it works well at all ranges for me.</p> .</p> <p>I also used this when leveling up my DMR because I hate that gun and had trouble with it. Now I don’t worry about it any more. See comments about unmarked grave in section above…</p> <h2 id="ordinance">Ordinance</h2> <p>I’ve run the Firestar solely for months. I love its area of effect, I love that I can damage and blind Titans with it, and I love its range.</p> <p>The other ordinance options are all solid, and I’ve played them a fair amount. Gravity Star is just plain wicked fun, but it doesn’t do a damned thing against Titans, and it barely knocks dust off Reapers.</p> <p>So I stick with the Firestar.</p> <h2 id="pilotkits">Pilot Kits</h2> <p>I’ve come to the place where I use Phase Embark and Titan Hunter exclusively. Phase Embark’s speed of getting into my Titan can be crucial if I’m hurt, or if my Titan’s engaged. Titan Hunter helps me get my Titan faster. Yay, Titans!</p> <p>Ordinance Expert is nice because it shows you the arc of where your ordinance will hit. This is a <strong>great</strong> training aid as you’re learning. I moved off it once I got moderately comfy understanding the arc my ordinance would travel.</p> <p>Fast Regen is also good, especially when you’re like me and tend to spend time in Leeroy Jenkins mode running into battles wiser folks might not.</p> <p>All the other kits, for my style, are boring or unhelpful.</p> <h2 id="boosts">Boosts</h2> <p>I’ve run the Pilot Sentry as my main boost for a long, long time. It helps me lock down hardpoints, control fire lanes, kill off bounty Remnant forces, and generally annoy the hell out of opposing pilots.</p> <p>The Titan Sentry is good as well, but doesn’t seem to do as well for me.</p> <p>All the other boosts are fine, although I am damned proud to say I have not once used the Smart Pistol. Not. Once. I lived on the Smart Pistol in TF1, but I’m happy with how well I’ve progressed in my gun skills in TF2.<: The Titans<br /> [NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] <br /> <br /> Here’s some thoughts on things relating to Titans.<br /> <br /> <h2 id="abitonsometitankits"> A Bit on Some Titan Kits</h2> <br /> <b>Warpfall Transmitter:</b> I use this exclusively. Sure, Dome Shield is nice, but I have crushed a crapload of Titans, pilots, and enemy units via the fast fall feature.<br /> <br /> <b>Assault Chip:</b>.<br /> <br /> <b>Stealth Auto-Eject:</b>.)<br /> <h2 id="thetitans"> The Titans</h2> <br /> <b>Monarch:</b>.<br /> <br /> If I’m playing Attrition or similar I’ll use Overcore, Energy Thief, Arc Rounds, Fast Rearm, and Chassis. For Frontier Defense I use Nuke Eject, Energy Thief, Energy Transfer, Maelstrom, and Accelerator.<br /> <br /> <b>Legion:</b> Big, slow lard ass with a gatling gun. I love it. Great gun which works <i>really</i>…<br /> <br /> <b>Northstar:</b>.<br /> <br /> <b>Scorch:</b> Flame on! This is my go-to Titan when the opposing team has someone dashing around being a jackass in a Ronin. The Scorch’s flame shield <b><i>wrecks</i></b> Ronins in a hurry. Also does Reapers in quite nicely. If you’re playing Frontier Defense on Drydock, make <b>sure</b>.<br /> <br /> <b>Ion:</b> Not my favorite Titan, as I have lots of trouble trying to balance energy use. Effectively for me this means I’m rarely able to use the Laser Shot.<br /> <br /> <br /> <figure><br /><img alt="Frickin Lasers" src="" title="Frickin quot;Lasersquot;" /><br /><figcaption>Frickin “Lasers”</figcaption></figure><br /> Standard load out: Turbo Engine, Zero-Point Tripwire. For Frontier Defense Nuke Eject, Refraction Lens.<br /> <br /> <b>Tone:</b>.<br /> <br /> <b>Ronin:</b> I hate this Titan <b><i>with a passion</i></b>..<br /> <br />.<br /> <br /> God, I hate the Ronin. I hate it so much that I’d be lost in indecision if given the choice between kicking Paul Krugman or the designers of the Ronin in the goolies.<br /> <br /> Unfortunately, the Titanfall folks haven’t nerfed the Ronin by this point means they’re likely not going to.<br /> <br /> I don’t play Ronin any more. When I did general load out was Turbo Engine and Thunderstorm.<br /> <br /> <br /> : Frontier Defense[<b>NOTE:</b> One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] <br /> <br /> Frontier Defense (FD) is one of my favorite modes, if not outright my most favorite. I like it because it reinforces good teamwork, something the other modes absolutely do <b>not<.<br /> <h2 id="points-for-leveling-up"> Points for Leveling Up</h2> Make sure you understand the post on Maximizing Points. All that applies to Frontier Defense Mode.<br />!<br /> <h2 id="aegis-upgrades"> AEGIS Upgrades</h2> FD gives your titans a new bunch of level-up abilities. Each titan gets a unique set of mods that run from chassis and shield boosts to additional glorious OMG lethal blow stuff up way more better things.<br />.<br /> AEGIS upgrades are earned by a separate XP track. It’s similar to your pilot’s XP track and you can supplement with an extra XP by purchasing Titan skins from the store—that point will also share across your entire team.<br /> <h2 id="unique-titan-mix"> Unique Titan Mix</h2> Having four different titans on the team garners an extra AEGIS XP for the entire team. I’ll try to fit in whatever titan makes sense for the team, although I try to start with one of my favorites (Scorch, Legion, Monarch, Northstar).<br /> <h2 id="how-i-roll-for-frontier-defense"> How I Roll for Frontier Defense</h2> <b>Pilot Weapons:</b>.<br /> I rotate through sidearms, so there’s no one favorite.<br /> My preferred anti-titan weapon is the Thunderbolt since it’s an area of effect weapon that I can shoot in the general direction of a number of enemies.<br /> <b>NOTE:</b> Anti-Titan weapons in FD mode have <i>unlimited</i> ammo, another reason I love the Thunderbolt for this mode.<br /> <b>Titans:</b> I generally play Northstar, Scorch, or Legion because they’re great at dishing out damage—especially after you get them well up in the AEGIS levels.<br /> <i>valuable</i>.<br /> For Titan Kits I normally use Nuke Eject regardless of Titan type. Because if I’m gonna go, I’m gonna take a bunch of those asshat enemies with me.<br /> <i>Legion:</i> Hidden compartment. Because 2x power shots are great.<br /> <i>Scorch:</i> Wildfire Launcher. Makes total sense when you’re getting multiple thermite shots.<br /> <i>Northstar:</i> Enhanced Payload. More damage from cluster missiles? Take my money. I occasionally use Piercing Round, but frankly I’m not sure of its effectiveness.<br /> <i>Monarch:</i> Energy Thief. Even if getting a battery wasn’t such a win I’d likely keep this just because the execution is freaking awesome.<br /> <i>Ion:</i> Refraction Lens. This totally wrecks Reapers. Yes, lots of damage on other things, but I’ve noticed it the most with how fast I’m able to kill Reapers. And I hate those rat bastards.<br /> <h2 id="using-the-armory"> Using The Armory</h2> When I first started playing I spent every last cent on Nuke Rodeo bombs. I’ve blown up a <b>lot</b>.<br /> Generally I only buy turrets when playing Homestead, Rise, or Exoplanet. The large number of Plasma Drones <i>require</i> several turrets for the team. The other maps just don’t seem to make sense for turrets, or at least I haven’t found great spots to place them.<br /> <h2 id="a-few-thoughts-on-a-few-maps"> A Few Thoughts on a Few Maps</h2> There are other posts elsewhere on The Internets that break down things about the various maps. Below are a few specific things I’ve found on particular maps.<br /> <b>Angel City:</b>.<br />.<br /> This is one of the maps I rarely buy turrets for. I just haven’t found any good spot where I can get more than a few kills. A turret is nearly the same cost as two arc traps, so for me it’s just not good money spent.<br /> <b>Rise:</b> The first wave, regardless of difficulty, starts with a lone titan at the back of the map. Grapple and wall run down the corridors to go steal a battery.<br /> Arc mines are great at the main junction, the far back spawn point, and the low corridor to the right. That corridor is a serious choke point and is the <i>prime</i> spot to hang out in later waves.<br />.<br /> That same zone is also great for Scorch’s ability to stack thermite, flame wall, incendiary traps, and flame core.<br /> <b>Homestead:</b> If possible, grab a Scorch. The metric crapton of plasma drones flow on either side of the large round tower in the middle of the map. Camp out on either side and use the flame shield to destroy swaths of those nasty little bastards.<br /> I like placing one turret at the trees on the left of the small rise just in front of the harvester. I’ll regularly get 60 turret kills from this one alone. Do <i><b>NOT</b></i>.<br /> This map is one where I’ll definitely buy a few Nuke Bombs later in the game because enemy titans will cluster in midfield on the far side of the central tower.<br /> <b>Forward Base Kodai:</b> I love how the game designers included smoke. Seriously. What an awesome tactical mess to have to work around. It’s a modest thing that makes play way more interesting.<br />.<br />.<br /> Northstar with its traps is a great titan here, as you can really slow down the rush of Titans in later waves. Plus the cluster missiles do a great job with all the stalkers.<br /> <b>Blackwater Canal:</b> Load up on arc traps for the canyon at the front of the map. Scatter a few to the route left of the harvester too. I don’t bother with arc traps up top because it’s easy to defend and hold the line there.<br />.<br /> <h2 id="notes-on-scores"> Notes on Scores</h2> Getting.<br /> <img alt="9K in Frontier Defense" src="" title="9K in Frontier Defense" /><br /> Keep your eye on the prize if your main focus is leveling up. Getting MVP is cool, but it doesn’t directly level you up faster. Even if you get MVP all five rounds…<br /> <img alt="MPV All Five Waves" src="" title="MPV All Five Waves" />: Game Modes[<b>NOTE:</b> One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.]<br /> <br /> Titanfall 2 has a bunch of great different game modes. Some focus on Titan combat, some on pilot combat, some are mixed.<br />.<br />.<br /> <h2 id="learn-how-to-win-each-game-mode"> Learn How To Win Each Game Mode</h2> In most cases, for the mixtape I run, simply killing lots of pilots won’t win you every game. Several game modes require you to do other things to help your team win.<br />.<br /> <img alt="Scored Lots of Points, Lost Because Team Was Killing Pilots Instead of Getting Points" height="360" src="" title="Scored Lots of Points, Lost Because Team Was Killing Pilots Instead of Getting Points" width="640" /><br /> Focus on winning. That means understanding the requirements and scoring for the game. At least make an effort to help your team win.<br /> <h2 id="pilot-kits-and-ordinance"> Pilot Kits and Ordinance</h2> I like the Firestar because it’s persistent and an area of effect weapon. It also blinds Titans and does good damage on them. I’ve gotten several “from the dead” kills from flame damage on a Titan that did me in.<br /> Titan Hunter kit works for me because it helps me get a titan faster.<br /> I use Phase Embark because it lets me get into the shelter of my Titan as quickly as possible. He who runs away lives to run away another day.<br /> Hover? Useless for me. Why do I want to float over the battlefield where people can shoot my ass out of the sky?<br /> Stealth Kit seems to work great for others, but I still get killed regularly by electric smoke when I’m rodeoing with it, so I gave up on it.<br />.<br /> <h2 id="thoughts-on-specific-game-modes"> Thoughts on Specific Game Modes</h2> Here are a few things that work for me in various game modes.<br /> <h3 id="attrition"> Attrition</h3> Goal: Kill as many opponents as you can. Titans are ten points, pilots five. Remember that minion kills can get you serious points, especially if you take out Reapers. As of this writing they’re three points each (they used to be <strong>five!</strong>), which means they’re good for overall points. They also can kick your ass all over the place if you’re not careful while you’re trying to blow them to smithereens.<br /> Pilot Weapons: I like mid-range weapons like the Hemlock, G2, and Flatline. Again, this suits my style of play. A couple maps like Blackwater and Homestead are great for sniping with the Kraber, too. (I <em><strong>hate</strong></em> the DMR and don’t play well with it.)<br />.<br /> <h3 id="bounty-hunt"> Bounty Hunt</h3> Goal: Kill Remnant forces for their bounty. Cash bounty in at banks in between rounds. Kill opposing pilots to piss them off and steal half their bounty collection.<br /> Tactics: Focus on Remnant forces, kill pilots when they’re around. Remember your points to victory come from bounty, <em>not</em>…<br />.<br /> Boost: Pilot sentry is awesome. I regularly get it 30 seconds into the first round. Drop it in a good spot, then hide from other pilots while blasting Remnants.<br /> Titans: Legion with the extra ammo kit and Overcore kit, because you want Smartcore ASAP. It’s a thing of beauty for laying waste to the Remnants and pilots who are swanning about.<br /> <h3 id="amped-hardpoint"> Amped Hardpoint</h3> Goal: Hold the hardpoints and amp them. Prevent the opposing force from doing the same. You do not get points for killing the opponents!<br />.<br /> Keep an eye on the hardpoint status indicators and overall points. Try to keep your own hardpoints amped and the enemy’s unmapped.<br /> Pilot Weapons: My standard is Hemlock (my favorite gun), Flatline, occasionally G2. Sometimes I’ll play the R97 because it sounds cool and it’s good for the close range defense.<br />).<br /> <h3 id="last-titan-standing"> Last Titan Standing</h3> Goal: Destroy all enemy titans to win the round. No respawning. Pilots outside their Titans are nuisances, but you don’t get points for killing them. Focus on the titans!<br />.<br /> If your Titan is blown up, <strong>PAY ATTENTION!</strong> Your job is <em><strong>not</strong></em> to hide and live. Your job is to grab batteries for your team’s remaining titans, damage the enemy titans, and prevent dismounted enemy pilots from harming your titans. Damaging the enemy titans is <em>critical!</em>.<br /> Pilot Weapons: Focus on Titan damage. Great anti-Titan weapon (Thunderbolt is my preferred one) and a good grenade launcher like the EPG or Cold War.<br />.<br /> A Note On Titan Kits: Take some care with your Titan Kit options for this mode. It makes <em><strong>zero sense</strong></em> to use Assault Chip or Stealth Auto Eject kits for this mode. Zero. Sense. Nuke eject is close behind for poor value, in my experience. Assault and Stealth Eject bring nothing to the table. Use Overcore or Dash. Counter Ready with its 2x smoke <em>may</em> be beneficial if it matches your play style.<br /> <h3 id="titan-brawl"> Titan Brawl</h3> Likely my most favorite game mode. It’s just insane fun. I once got 14 kills with zero deaths running a Monarch. Screenshot below because, well, I don’t brag often but this deserves a bit of braggery.<br /> <img alt="14 Kills, Zero Deaths" height="360" src="" title="14 Kills, Zero Deaths" width="640" /><br /> Goals: Kill as many Titans as you can before the match ends. Constant respawns, no dismounts, no ejections.<br /> Tactics: Stick with your homies. Watch out for flanking enemies. ABS: Always Be Shooting. Rack up damage on the opponents, even if you’re not going to kill one. Somebody else will.<br /> Pilot Weapons: N/A for this mode.<br />…<br /> A Note On Titan Kits: Take some care with your Titan Kit options for this mode. It makes <em><strong>zero sense</strong></em> to use Assault Chip, Nuke Eject, or Stealth Auto Eject kits for this mode. Zero. Sense. You can’t actually use any of those kits in this mode. So Just. Don’t. Use Overcore or Dash. Counter Ready with its 2x smoke <em>may</em> be beneficial if it matches your play style.: Maximizing Points[<b>NOTE:</b> One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.]<br /> Leveling up requires Experience Points (XP). You get XP for your performance in a match. Points come from winning a match, meeting your performance minimums, completing a match, leveling up titans or weapons or your faction, happy hour, and elite weapon/titan bonuses.<br />.<br /> <img alt="Doubled the score of the rest of my team, got the same XP" height="360" src="" title="Doubled the score of the rest of my team, got the same XP" width="640" /><br /> So here’s the thing: focus on meeting your minimums. Focus on helping your team win, or making the evac shuttle if you should lose. Focus on knowing what weapons and titans are near leveling up.<br /> <h2 id="match-minimums"> Match Minimums</h2> Each.<br /> You’ll find the minimums on the menu accessed from the Start/Menu button. Make sure you know what your minimums when you start each match! Check regularly as the match progresses to make sure you’re going to meet them.<br /> <h2 id="leveling-up-titans-and-weapons"> Leveling Up Titans and Weapons</h2> Let:<br />!<br /> Also, don’t forget your sidearms and anti-titan weapons!<br /> <h2 id="did-you-lose-make-the-evac-ship"> Did You Lose? MAKE THE EVAC SHIP!</h2> Win or lose, you get zero points for living through the epilogue. Zero.<br />.<br /> If you care about leveling up then you need to embrace your inner Dutch Schaefer and “Get to da choppa!”<br /> If there are titans near the ship, your best bet is to try distracting them on the way to getting into the ship. Use Firestars (my personal favorite) to disrupt their vision. Fire your anti-titan weapon as fast as you can.<br />.<br /> If you die on the way to the ship, so what? If you make the evac ship and it's blown up, so what? You lose absolutely nothing.<br /> Shoot for the opportunity to make an extra point. GET TO DA CHOPPA!<br /> <h2 id="elite-squad-leader-points"> Elite Squad Leader Points</h2> You’ll get an extra XP if you bought one of the elite weapons from the store. You share that point with your teammates, which is kind of neat—the team gets a max of one Elite Squad Leader point per match.<br /> <h2 id="happy-hour-points"> Happy Hour Points</h2> Your network has a set happy hour. You’ll get an extra five points playing during this time. That’s awesome! If possible, try and save a Double XP ticket to use when you’re playing Happy Hour games. Ten points versus five points. Epic Win.<br />.<br /> <h2 id="double-xp"> Double XP</h2> Double XP are awesome. They, like, double the points you get!<br />.<br />.)<br />.<br /> <h2 id="grinding-it-out"> Grinding It Out</h2> Regener <i><b>possible</b></i> points:<br /> <b>Base Points</b><br /> <ul> <li>Match Completion = 1</li> <li>Good Performance = 1</li> <li>Match Victory / Successful Evac = 1<br /> <b>Potential Points</b></li> <li>(possible) Elite Squad Leader = 1</li> <li>(possible) Level Up Weapons = 1 per level</li> <li>(possible) Level Up Titans = 1 per level</li> <li>(possible) Level Up Faction = 1<br /> <b>Other</b></li> <li>Happy Hour (once per day) = 5</li> </ul> Ergo, on a basic match were you met minimums, and won or escaped you’re looking at three points. Throw in one or two XP for weapons level ups. I’ll semi-arbitrarily use an average of six points per match based on faction levels up and matches where you do an awesome job and level up a couple weapons/titans.<br />.<br /> No matter what way you cut it, it’s a grind.<br /> <h2 id="what-game-modes-give-the-best-xp-for-leveling-up"> What Game Modes Give The Best XP For Leveling Up</h2> Remembering.<br />?<br /> <table border="1"> <thead> <tr> <th>Mode</th> <th>Avg Points Per Match</th> <th>Avg Match Time</th> <th>Points Per Minute</th> <th>Points With 2x</th> <th>2x PPM</th> <th>2x + Happy Hour</th> <th>PPM</th> </tr> </thead> <tbody> <tr> <td>Attrition</td> <td>6</td> <td>12</td> <td>0.5</td> <td>12</td> <td>1</td> <td>22</td> <td>1.8</td> </tr> <tr> <td>Frontier Defense</td> <td>13</td> <td>35</td> <td>0.37</td> <td>26</td> <td>0.74</td> <td>36</td> <td>1.02</td> </tr> </tbody> </table> <h2 id="keep-your-eye-on-the-prize-xp"> Keep Your Eye On The Prize: XP</h2> Go into your matches with a plan for how you’re going to try and earn XP. Think about what weapons and titans you might swap between. Don’t lose sight that the game is FUN, but still think about how you can play to maximize what you earn.: IntroI’ve been playing Titanfall 2 around a year now. It’s been my go-to brainless activity when I need distraction from the rotten places life has been this last year. At the time of this writing I’m at Generation 48 working my way up to Gen 50. (Titanfall “Regeneration” is the same concept as Call of Duty’s “Prestige Up.”) I’ve played over 3,000 games, been top three around 2100 of those, and MVP 950-ish times. I’ve just passed 15,000 kills (other players) and am near having earned 30,000 credits “net worth.”<br /> <img alt="Overview stats for Titanfall 2" height="225" src="" title="My Overview Stats" width="400" /><br />.<br /> <h2 id="me-and-titanfall"> Me and Titanfall</h2> I have a long love/hate relationship with Titanfall 1 and 2. I’m not a great player, especially when having to play solely against other humans. Therefore, I avoid modes like Pilot vs. Pilot, Capture The Flag, etc.<br /> Why am I not great? Let me list the ways…<br /> <ul> <li>I’m slow on the controller when trying to get a quick lock-on against opponents, which means I die a lot.</li> <li>I have poor aim, especially when someone’s aiming at me, which means I die a lot.</li> <li>I am awful when someone gets in melee range, which means I die a lot.</li> <li>I don’t shoot well while wall-running or mid-air, which means I miss kill opportunities.</li> </ul> My play style is to work from mid-range, both as a pilot and a titan. I’m not great at close up (see points above). As a pilot I’ll spend a fair amount of time on top of various obstacles. Most of the folks playing aren’t thinking in 3D, so it’s a good tactic for me. On the downside, I also tend to go Leeroy Jenkins and run into battles likely best avoided. This is part of why my Kill-Death-Ratio (KDR) against other players is around 0.7 after 3,000+ games. (Note: I’ve been above 1.5 for the last few months, occasionally as high as a ten-game average of 2.5; however, it takes a LONG time to raise that particular statistic up—and frankly I just don’t care about KDR. I know others care a <i><b>lot.</b></i> I don’t.)<br /> <h2 id="some-things-i-dislike"> Some Things I Dislike</h2> <b>Quality and Clarity.</b>.<br /> <b>Graphics Don’t Match Algorithms.</b>.<br /> <b>Regeneration Grind.</b> I badly miss the unique regeneration challenges from TF1. Those were well-thought out and fun. And a pain in the ass at times. With TF2 you’re in for nothing more than a long grind. More on leveling up and points later.<br /> <h2 id="some-things-i-love"> Some Things I Love</h2> <b>Movement.<.)<br /> <b>The Campaign.</b> One of the best campaigns in any game I’ve played. Ever. Loved the story, loved the chapters. I know others aren’t so enamored. That’s OK.<br /> <b>Regular Updates.</b> <i><b>love</b></i> seeing a company that takes this approach of constantly adding new value.<br /> <b>It’s. Just. Fun.</b> Even when I’m getting my face beat in by some of the pros I’m still having a fairly good time. Yes, I get frustrated, yes, I cuss. A lot. It’s still fun.<br /> <b>You Always Get A Titan.</b>.<br /> <h2 id="this-series"> This Series</h2> I’m not sure how long this series will last. At a minimum I’m going to cover the following topics, either as separate posts or parts of others.<br /> <ul> <li><a href="" target="_blank">Maximizing Points</a></li> <li><a href="" target="_blank">Game Modes</a></li> <li><a href="" target="_blank">Frontier Defense</a> </li> <li><a href="" target="_blank">Thoughts on Titans</a></li> <li><a href="" target="_blank">Thoughts on Weapons, Ordinance, Boosts, and Kits</a></li> <li><a href="" target="_blank">Tactics</a></li> <li><a href="" target="_blank">Movement and Shooting</a></li> <li><a href="" target="_blank">Some Closing Thoughts</a></li> </ul> This series is pretty late to the game. Titanfall 2 has been out for quite some time. Regardless, I’ve enjoyed outlining and drafting some of the content, so it’s as much for me as it is for you. Hopefully someone finds it useful. :)<br /> <ul> </ul> LeanPub PodcastThe folks at <a href="">LeanPub</a>, the online publishing service, were kind enough to have me on their Frontmatter podcast. Len Epp chatted me up for roughly an hour on my background, my book <a href="">The Leadership Journey</a>, and how I came to write it.<br /> You can find <a href="">the podcast here</a>, with a complete transcript if you’d rather read. Len’s a great interviewer, and I really enjoyed being on the, Ever Joke About Your Teams' Career Safety<p>Long)!”</p> <p>Without missing a beat the supervisor instantly replied “Of course she won’t. I don’t even joke about something like that landing on her APR. Ever.”</p> <p>The way he said it made it even more impactful: he didn’t get intense, he didn’t yell, he didn’t joke. He just said it emphatically and in a matter-of-fact tone.</p> <p.</p> <p>Those on your teams, those who report to you, those who have any form of accountability to you should know, without a doubt, that their performance reports will be based only on merit and fact, never spite or rumor.</p> <p>You don’t do performance reports? Fine. Don’t fixate on the mechanics. This is more about the meta concept: safety in one’s career progression.</p> <p>The other day on Facebook someone posted an article that ran something like “Seven Signs You’re About to Be Fired.” The poster tagged someone on their team and made a joking comment like “Yo, pay attention!”</p> <p>I got the joke, but it also made me recall the terrific lesson I learned all those years ago.</p> <p>Some things you just shouldn’t ever joke about. And your teams should know, I Didn't Automate That Test<p>No, you don’t need to automate tests for every behavior you build in your system. Sometimes you <strong>shouldn’t</strong> automate tests because you’re taking on an unreasonable amount of technical debt and overhead.</p> <p:</p> <pre><code>requestEnd: function (e) { var node = document.getElementById('flags'); while (node.firstChild) { node.removeChild(node.firstChild); } var type = e.type; $('#flags').append('<div responseType=\'' + type + '\'/>'); }, </code></pre> <p>It’s behavior. Moreover, other automated tests rely on this, so this failing would break other tests! Why did I decide to not write an automated test for it?</p> <p>Well this is literally the only custom JavaScript I have on my site at the moment. Think of the work I’d have to do simply to get a test in place for this:</p> <ul> <li>Figure out which JS testing toolset to use</li> <li>Learn that toolset</li> <li>Integrate that toolset into my current build chain</li> </ul> <p>That’s quite a bit of work and complexity. Step back and think about a few topics:</p> <p><strong>What’s the risk of breaking this behavior?</strong></p> <ul> <li>I rarely edit that page, so the likelyhood of breaking that behavior is low</li> <li>When I do edit the page, I even more rarely touch that particular part of the page’s code. Likelyhood of breakage is even lower.</li> </ul> <p><strong>What damage happens if I break that behavior?</strong></p> <ul> <li>Other tests relying on that element will fail</li> <li>Those failures could lead me astray because they’re failing for an unexpected reason – eg an Update test isn’t failing because the Update test is busted, it’s failing because the flag element isn’t appearing</li> <li>I’ll spend extra time troubleshooting the failure</li> </ul> <p><strong>How do I make sure </strong></p> <p>It’s a pretty easy discussion at this point: does it make sense to take on the overhead for writing test automation for this particular task? No. Hell no.</p> <p>It <em>may</em> make sense as I start to flesh out more behavior soon. But not now. A simple few manual tests and it’s good.</p> <p>Roll Leadership Journey is Complete and Live<p>After 2.5 years of hard work, blood, sweat, and a <em>lot of procrastination</em> I’m happy to announce my book <em><a href="">The Leadership Journey</a></em> is complete and read for purchase! <br>.</p> <p>Instead, it’s practical stories, tips, and exercises meant to get you looking in a mirror and figuring out where you want to go—and then proving some ideas on how you can head off that direction.</p> <p>This stuff is from my heart. It started based off <a href="">my Leadership 101 series</a>, but then grew out in its own direction.</p> <p>I owe lots of folks thanks, particularly readers who purchased the book two years ago expecting a quick finish. HAH! I hope they’re pleased with the final outcome.</p> <p <strong>useful.</strong> My hard work to convey content that’s straight from my experiences, and more importantly from my heart.</p> <p>The book is on sale at LeanPub, which is great for you. Don’t like the book? You can get ALL YOUR MONEY BACK up to 45 days after purchase. </p> <p>I’m pretty sure you’ll find it useful, though!< Journey Final Draft Complete!<p>Thank you so very, very much for those of you who’ve patiently been waiting for the completion of my book <em><a href="">The Leadership Journey</a></em>!</p> <p.</p> <p>I’m playing around with variations of the cover based on the great photo my brother created. <br> <img src="" alt="Book Cover" title="Slide2.png"> <img src="" alt="enter image description here" title="Slide4.png"></p> <p>I hope to have word on the Foreword author in a week or two, and hopefully the foreword completed within the next three weeks.</p> <p>For those of you who don’t know, the book’s available <strong>right now</strong> at <a href="">its page on LeanPub</a>. You can purchase it now, and you’ll get the updates when the Foreword and cover are in the can.</p> <p>Again, thank you to all for your patience. It’s been a labor of love, sweat, and yes, some significant procrastination.</p> <p>I hope you’ll find it worth the wait!<
http://feeds.feedburner.com/Frazzleddad
CC-MAIN-2022-40
refinedweb
10,601
65.62
Hash table is a data structure that represents data in the form of key-value pairs. Each key is mapped to a value in the hash table. The keys are used for indexing the values/data. A similar approach is applied by an associative array. Data is represented in a key value pair with the help of keys as shown in the figure below. Each data is associated with a key. The key is an integer that point to the data. 1. Direct Address Table Direct address table is used when the amount of space used by the table is not a problem for the program. Here, we assume that - the keys are small integers - the number of keys is not too large, and - no two data have the same key A pool of integers is taken called universe U = {0, 1, ……., n-1}. Each slot of a direct address table T[0...n-1] contains a pointer to the element that corresponds to the data. The index of the array T is the key itself and the content of T is a pointer to the set [key, element]. If there is no element for a key then, it is left as NULL. Sometimes, the key itself is the data. Pseudocode for operations directAddressSearch(T, k) return T[k] directAddressInsert(T, x) T[x.key] = x directAddressDelete(T, x) T[x.key] = NIL Limitations of a Direct Address Table - The value of the key should be small. - The number of keys must be small enough so that it does not cross the size limit of an array. 2. Hash Table In a hash table, the keys are processed to produce a new index that maps to the required element. This process is called hashing. Let h(x) be a hash function and k be a key. h(k) is calculated and it is used as an index for the element. Limitations of a Hash Table - If the same index is produced by the hash function for multiple keys then, conflict arises. This situation is called collision. To avoid this, a suitable hash function is chosen. But, it is impossible to produce all unique keys because |U|>m. Thus a good hash function may not prevent the collisions completely however it can reduce the number of collisions. However, we have other techniques to resolve collision. Advantages of hash table over direct address table: The main issues with direct address table are the size of the array and the possibly large value of a key. The hash function reduces the range of index and thus the size of the array is also reduced. For example, If k = 9845648451321, then h(k) = 11 (by using some hash function). This helps in saving the memory wasted while providing the index of 9845648451321 to the array Collision resolution by chaining In this technique, if a hash function produces the same index for multiple elements, these elements are stored in the same index by using a doubly linked list. If j is the slot for multiple elements, it contains a pointer to the head of the list of elements. If no element is present, j contains NIL. Pseudocode for operations chainedHashSearch(T, k) return T[h(k)] chainedHashInsert(T, x) T[h(x.key)] = x //insert at the head chainedHashDelete(T, x) T[h(x.key)] = NIL Python, Java, C and C++ Implementation # Python program to demonstrate working of HashTable hashTable = [[],] * 10 def checkPrime(n): if n == 1 or n == 0: return 0 for i in range(2, n//2): if n % i == 0: return 0 return 1 def getPrime(n): if n % 2 == 0: n = n + 1 while not checkPrime(n): n += 2 return n def hashFunction(key): capacity = getPrime(10) return key % capacity def insertData(key, data): index = hashFunction(key) hashTable[index] = [key, data] def removeData(key): index = hashFunction(key) hashTable[index] = 0 insertData(123, "apple") insertData(432, "mango") insertData(213, "banana") insertData(654, "guava") print(hashTable) removeData(123) print(hashTable) // Java program to demonstrate working of HashTable import java.util.*; class HashTable { public static void main(String args[]) { Hashtable<Integer, Integer> ht = new Hashtable<Integer, Integer>(); ht.put(123, 432); ht.put(12, 2345); ht.put(15, 5643); ht.put(3, 321); ht.remove(12); System.out.println(ht); } } // Implementing hash table in C #include <stdio.h> #include <stdlib.h> struct set { int key; int data; }; struct set *array; int capacity = 10; int size = 0; int hashFunction(int key) { return (key % capacity); } int checkPrime(int n) { int i; if (n == 1 || n == 0) { return 0; } for (i = 2; i < n / 2; i++) { if (n % i == 0) { return 0; } } return 1; } int getPrime(int n) { if (n % 2 == 0) { n++; } while (!checkPrime(n)) { n += 2; } return n; } void init_array() { capacity = getPrime(capacity); array = (struct set *)malloc(capacity * sizeof(struct set)); for (int i = 0; i < capacity; i++) { array[i].key = 0; array[i].data = 0; } } void insert(int key, int data) { int index = hashFunction(key); if (array[index].data == 0) { array[index].key = key; array[index].data = data; size++; printf("\n Key (%d) has been inserted \n", key); } else if (array[index].key == key) { array[index].data = data; } else { printf("\n Collision occured \n"); } } void remove_element(int key) { int index = hashFunction(key); if (array[index].data == 0) { printf("\n This key does not exist \n"); } else { array[index].key = 0; array[index].data = 0; size--; printf("\n Key (%d) has been removed \n", key); } } void display() { int i; for (i = 0; i < capacity; i++) { if (array[i].data == 0) { printf("\n array[%d]: / ", i); } else { printf("\n key: %d array[%d]: %d \t", array[i].key, i, array[i].data); } } } int size_of_hashtable() { return size; } int main() { int choice, key, data, n; int c = 0; init_array(); do { printf("1.Insert item in the Hash Table" "\n2.Remove item from the Hash Table" "\n3.Check the size of Hash Table" "\n4.Display a Hash Table" "\n\n Please enter your choice: "); scanf("%d", &choice); switch (choice) { case 1: printf("Enter key -:\t"); scanf("%d", &key); printf("Enter data -:\t"); scanf("%d", &data); insert(key, data); break; case 2: printf("Enter the key to delete-:"); scanf("%d", &key); remove_element(key); break; case 3: n = size_of_hashtable(); printf("Size of Hash Table is-:%d\n", n); break; case 4: display(); break; default: printf("Invalid Input\n"); } printf("\nDo you want to continue (press 1 for yes): "); scanf("%d", &c); } while (c == 1); } // Implementing hash table in C++ #include <iostream> #include <list> using namespace std; class HashTable { int capacity; list<int> *table; public: HashTable(int V); void insertItem(int key, int data); void deleteItem(int key); int checkPrime(int n) { int i; if (n == 1 || n == 0) { return 0; } for (i = 2; i < n / 2; i++) { if (n % i == 0) { return 0; } } return 1; } int getPrime(int n) { if (n % 2 == 0) { n++; } while (!checkPrime(n)) { n += 2; } return n; } int hashFunction(int key) { return (key % capacity); } void displayHash(); }; HashTable::HashTable(int c) { int size = getPrime(c); this->capacity = size; table = new list<int>[capacity]; } void HashTable::insertItem(int key, int data) { int index = hashFunction(key); table[index].push_back(data); } void HashTable::deleteItem(int key) { int index = hashFunction(key); list<int>::iterator i; for (i = table[index].begin(); i != table[index].end(); i++) { if (*i == key) break; } if (i != table[index].end()) table[index].erase(i); } void HashTable::displayHash() { for (int i = 0; i < capacity; i++) { cout << "table[" << i << "]"; for (auto x : table[i]) cout << " --> " << x; cout << endl; } } int main() { int key[] = {231, 321, 212, 321, 433, 262}; int data[] = {123, 432, 523, 43, 423, 111}; int size = sizeof(key) / sizeof(key[0]); HashTable h(size); for (int i = 0; i < n; i++) h.insertItem(key[i], data[i]); h.deleteItem(12); h.displayHash(); } Good Hash Functions A good hash function has the following characteristics. - It should not generate keys that are too large and the bucket space is small. Space is wasted. - The keys generated should be neither very close nor too far in range. - The collision must be minimized as much as possible. Some of the methods used for hashing are: Division Method If k is a key and m is the size of the hash table, the hash function h() is calculated as: h(k) = k mod m For example, If the size of a hash table is 10 and k = 112 then h(k) = 112 mod 10 = 2. The value of m must not be the powers of 2. This is because the powers of 2 in binary format are 10, 100, 1000, …. When we find k mod m, we will always get the lower order p-bits. if m = 22, k = 17, then h(k) = 17 mod 22 = 10001 mod 100 = 01 if m = 23, k = 17, then h(k) = 17 mod 22 = 10001 mod 100 = 001 if m = 24, k = 17, then h(k) = 17 mod 22 = 10001 mod 100 = 0001 if m = 2p, then h(k) = p lower bits of m Multiplication Method h(k) = ⌊m(kA mod 1)⌋ where, kA mod 1gives the fractional part kA, ⌊ ⌋gives the floor value Ais any constant. The value of Alies between 0 and 1. But, an optimal choice will be ≈ (√5-1)/2suggested by Knuth. Universal Hashing In Universal hashing, the hash function is chosen at random independent of keys. Open Addressing Multiple values can be stored in a single slot in a normal hash table. By using open addressing, each slot is either filled with a single key or left NIL. All the elements are stored in the hash table itself. Unlike chaining, multiple elements cannot be fit into the same slot. Open addressing is basically a collision resolving technique. Some of the methods used by open addressing are: Linear Probing In linear probing, collision is resolved by checking the next slot. h(k, i) = (h′(k) + i) mod m where, i = {0, 1, ….} h'(k)is a new hash function If a collision occurs at h(k, 0), then h(k, 1) is checked. In this way, the value of i is incremented linearly. The problem with linear probing is that a cluster of adjacent slots is filled. When inserting a new element, the entire cluster must be traversed. This adds to the time required to perform operations on the hash table. Quadratic Probing In quadratic probing, the spacing between the slots is increased (greater than one) by using the following relation. h(k, i) = (h′(k) + c1i + c2i2) mod m where, c1and c2are positive auxiliary constants, i = {0, 1, ….} Double hashing If a collision occurs after applying a hash function h(k), then another hash function is calculated for finding the next slot. h(k, i) = (h1(k) + ih2(k)) mod m Hash Table Applications Hash tables are implemented where - constant time lookup and insertion is required - cryptographic applications - indexing data is required
https://www.programiz.com/dsa/hash-table
CC-MAIN-2021-04
refinedweb
1,811
63.09
You can manage your policies, conditions, and muting rules programmatically using our GraphQL NerdGraph API. This is a powerful alternative to managing them in New Relic One or through the REST API. Alerts features you can manage with NerdGraph Here's what you can do in NerdGraph: The easiest way to discover alerts queries and mutations is through the NerdGraph API explorer. NerdGraph API explorer Our NerdGraph API explorer is a GraphiQL editor where you can prototype queries and mutations. Here are some examples showing how to find fields for queries and mutations. Tip For general information about NerdGraph, see Introduction to NerdGraph. Queries To explore the various queries, look for the available queries under the actor.account.alerts namespace in NerdGraph API explorer: Mutations To explore various mutations, look in the alerts dropdown in the NerdGraph API explorer: >>IMAGE.
https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/alerts-nerdgraph/nerdgraph-api-examples/
CC-MAIN-2021-17
refinedweb
140
54.12
Regex pattern to find full stop, exclamation mark or question mark in Java I am new to regex. Here is a model I could think of: Pattern pattern = Pattern.compile("[.!?]"); As the documentation says [abc] a, b, or c (simple class) . But I was somehow wrong.: - ( +3 source to share 1 answer This works for me: import java.io.IOException; import java.util.regex.Matcher; import java.util.regex.Pattern; public class Main { public static void main(String[] args) throws IOException { Pattern pattern = Pattern.compile("[.!?]"); Matcher m = pattern.matcher("Hello?World!..."); while (m.find()) { System.err.println(m.group()); } } } So what is your problem more precisely? +7 source to share
https://daily-blog.netlify.app/questions/1892201/index.html
CC-MAIN-2021-43
refinedweb
110
63.05
On Tue, 2012-03-06 at 09:32 +0000, Arnd Bergmann wrote:> On Tuesday 06 March 2012, Alex Shi wrote:> > I have one concern and one questions here:> > concern: maybe the lock is in a well designed 'packed' struct, and it is> > safe for cross lines issue. but __alignof__ will return 1;> > struct abc{> > raw_spinlock_t lock1;> > char a;> > char b;> > }__attribute__((packed));> > > > Since the lock is the first object of struct, usually it is well placed.> > No, it's actually not. The structure has an external alignment of 1, so> if you have an array of these or put it into another struct like> > struct xyz {> char x;> struct abc a;> };> > then it will be misaligned. Thre is no such thing as a well designed 'packed'> struct. The only reason to use packing is to describe structures we have no> control over such as hardware layouts and on-wire formats that have unusal> alignments, and those will never have a spinlock on them.Understand. thx. So is the following checking that your wanted?===diff --git a/include/linux/rwlock.h b/include/linux/rwlock.hindex bc2994e..64828a3 100644--- a/include/linux/rwlock.h+++ b/include/linux/rwlock.h@@ -21,10 +21,12 @@ do { \ static struct lock_class_key __key; \ \+ BUILD_BUG_ON(__alignof__(lock) == 1); \ __rwlock_init((lock), #lock, &__key); \ } while (0) #else # define rwlock_init(lock) \+ BUILD_BUG_ON(__alignof__(lock) == 1); \ do { *(lock) = __RW_LOCK_UNLOCKED(lock); } while (0) #endif diff --git a/include/linux/spinlock.h b/include/linux/spinlock.hindex 7df6c17..df8a992 100644--- a/include/linux/spinlock.h+++ b/include/linux/spinlock.h@@ -96,11 +96,13 @@ do { \ static struct lock_class_key __key; \ \+ BUILD_BUG_ON(__alignof__(lock) == 1); \ __raw_spin_lock_init((lock), #lock, &__key); \ } while (0) #else # define raw_spin_lock_init(lock) \+ BUILD_BUG_ON(__alignof__(lock) == 1); \ do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0) #endif ===Btw, 1, it is alignof bug for default gcc on my fc15 and Ubuntu 11.10 etc?struct sub { int raw_lock; char a;};struct foo { struct sub z; int slk; char y;}__attribute__((packed));struct foo f1;__alignof__(f1.z.raw_lock) is 4, but its address actually can align onone byte. > > Arnd
https://lkml.org/lkml/2012/3/7/65
CC-MAIN-2016-30
refinedweb
343
68.6
> In CPython itself: See count_set_bits in Modules/mathmodule.c Python/hamt.c contains an optimized function: static inline uint32_t hamt_bitcount(uint32_t i) { /* We could use native popcount instruction but that would require to either add configure flags to enable SSE4.2 support or to detect it dynamically. Otherwise, we have a risk of CPython not working properly on older hardware. In practice, there's no observable difference in performance between using a popcount instruction or the following fallback code. The algorithm is copied from: */ i = i - ((i >> 1) & 0x55555555); i = (i & 0x33333333) + ((i >> 2) & 0x33333333); return (((i + (i >> 4)) & 0xF0F0F0F) * 0x1010101) >> 24; } Python/pymath.c provides a "unsigned int _Py_bit_length(unsigned long d)" function used by math.factorial, _PyLong_NumBits(), int.__format__(), long / long, _PyLong_Frexp() and PyLong_AsDouble(), etc. Maybe we could add a _Py_bit_count(). See also bpo-29782: "Use __builtin_clzl for bits_in_digit if available" which proposes to micro-optimize _Py_bit_length(). -- In the meanwhile, I also added pycore_byteswap.h *internal* header which provides static inline function which *do* use builtin functions like __builtin_bswap32().
https://bugs.python.org/msg369881
CC-MAIN-2020-40
refinedweb
171
50.84
0 Pyler 2 Years Ago I can serialize an object if it's class definition has attributes. So if I have an object whose class does not have attributes, do I serialize the attributes importedi in it's class definition? For example; suppose I have the following class, #ifndef BREAD_H #define BREAD_H #include "EGGS.h"; #include "FLOUR.h"; class BREAD { public: BREAD();~BREAD(); //lame example std::vector<string> selectIngredient(std::vector<FLOUR>grocery); }; #endif How would I go about serializing attributes defined in the FLOUR.h class if incase I have to? Thanks in advance. attributes c++ databases objects serialization sqlite3 Edited 2 Years Ago by pritaeas: Moved.
https://www.daniweb.com/programming/software-development/threads/487871/serializing-an-object-with-no-attributes
CC-MAIN-2017-30
refinedweb
108
50.84
Help:Namespaces From Wikibooks, the open-content textbooks collection A namespace is the first part of a title for a wiki page. For example, this page is "Wikibooks:Namespaces". The namespace is thus "Wikibooks". The basic namespaces for MediaWiki projects are: [edit] Basic - User-namespace: pages for Wikibooks users' personal presentations and auxiliary pages for personal use, for example containing bookmarks to favorite pages. If you see in the Recent Changes list that the user named "Jimbo Wales" has updated some pages, this user name is a link not to Jimbo but to user:Jimbo Wales. - Talk-namespaces: each of the four namespaces has its own talk namespace associated with it. A 'Talk', 'Wikibooks talk' and 'Image talk' page is for discussions about the corresponding page in one of the other namespaces, a 'User talk' page is also for discussions with a user that are not specifically about one particular article. The talk page of the article "Foo" has the name "Talk:Foo". - Wikibooks-namespace: info about Wikibooks: how to use it, etc. - Image-namespace: "image description pages", i.e. info about images and sound clips, one page for each, with a link to the image or sound clip itself. - - Cookbook-namespace: Pages about cooking recipes and methods reside in this namespace. - Transwiki-namespace: Transwiki is the process of moving content from one Wikimedia project to another, when the content is then removed from the original location. Modules in the Transwiki namespace have been moved to Wikibooks, but not yet incorporated into a book. You can request pages be imported into the Transwiki namespace on the request for import page. A log of all transwikied pages can be found at Wikibooks:Transwiki log - Wikijunior-namespace: Books in this namespace are for infants to be read to and for for young children and teenangers to read to themselves. - Subject-namespace: Pages in this namespace are used for organizations books by subject, getting a brief overview of a subject, finding related subjects and for linking to and from sister projects. - Special-namespace: Pages that are created by the software on demand (for example, Special:Recentchanges) are sometimes said to form the Special namespace. They can be linked as usual, like [[Special:Recentchanges]], except when they have parameters, then the full URL has to be given, like an external link, for example (last 10 changes) The "pipe trick" works on namespace links: Automatically hide namespace: [[Wikibooks:Staff lounge|]]. [edit] Full list The 15 auxiliary namespaces and 5 extra namespaces in Wikibooks are the following (also the variables for them are shown): [edit] Bookshelves The Wikibooks namespace holds policy and other pages of interest only for editors. Bookshelves are treated as modules. [edit] Pseudonamespaces Pseudonamespaces are created when every part of a book or project begins with the parent title followed by a colon (Name:subsection). These resemble namespaces, but are not actually namespaces because their talk pages remain in the talk namespace. - Programming: Contains various books about programming computers and is being gradually phased out.
http://en.wikibooks.org/wiki/Wikibooks:Namespace
crawl-002
refinedweb
502
51.07
Macro function conflicts namespace function? Discussion in 'C++' started by Immortal Nephi, Namespace conflicts with page class - name resolution difference in compilerShadow Lynx, Feb 3, 2006, in forum: ASP .Net - Replies: - 1 - Views: - 3,039 - Christopher Reed - Feb 4, 2006 object-like macro used like function-like macroPatrick Kowalzick, Mar 14, 2006, in forum: C++ - Replies: - 5 - Views: - 500 - Patrick Kowalzick - Mar 14, 2006 c# namespace conflictsMicrosoft Newsserver, Jan 16, 2008, in forum: ASP .Net - Replies: - 1 - Views: - 512 - Alvin Bruney [ASP.NET MVP] - Jan 17, 2008 How to add namespace to macro function?Immortal Nephi, Apr 15, 2010, in forum: C++ - Replies: - 3 - Views: - 1,513 - tonydee - Apr 16, 2010 ruby macro bind conflicts tr1 on msvc9.0Christoph Heindl, Jan 17, 2009, in forum: Ruby - Replies: - 0 - Views: - 148 - Christoph Heindl - Jan 17, 2009
http://www.thecodingforums.com/threads/macro-function-conflicts-namespace-function.727785/
CC-MAIN-2014-52
refinedweb
135
69.82
0 I am trying to finish this hangman game I started. I know how to do everything I have left to do except for one thing. When I run the program and start guessing letters for instance in the word technical, if I guess the letter c, it only puts one c into the word and the other c is never copied into the word and it makes the game impossible to finish. Can anyone help me figure out what I am leaving out? By the way the lines that are commented delete are just lines I use to see the word when I am testing the program. They won't be in the final program. :) Thanks. Code: #include<iostream> #include<string> #include<ctime> using namespace std; //Hangman Game. //Guess the secret word by guessing letters of the word. int main () { srand(time (0) ); //seed for the random function. string words[5] = {"programming", "technical", "computer", "language", "difficult"};//Words I chose. string word; //string for the word with asterisks. word = words[rand() % 5]; //Randomly chooses from the lists of words I provided. //Create a temporary holder for the word without asterisks. string temporary;//string for the original word without the asterisks. Used for searching for characters. temporary = word;//Makes the temporary string equal to the word before the asterisks are placed there. cout << temporary << endl;//LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK DELETE!!!! //Change the letters in the word to asterisks. int length = word.length(); cout << word << endl;//LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK LOOK DELETE!!!!!!(This is for me to see the word during //testing of the program.) for(int i=0; i<(int)word.length(); i++ ) { word[i]='*'; } //Prompt the user to guess a character in the word. cout << "(Guess) Enter a letter in word " << word << " > "; string letter; cin >> letter; //See if the letter entered matches any of the letters in the chosen word. If so the asterisks need to be replaced with //the actual letter. //I will use the temporary string to compare my answers and then correct the word as the game progresses. //A wrong guess counter needs to start here and be incremented every time a wrong guess is made. //Placeholder is simply to keep the loop running. int wrongGuess = 0; string placeholder; placeholder = word; do { //Checks to make sure that the letter is part of the word. if (temporary.find(letter) != string::npos) { //If the letter is part of the word the letter is put in the place of the asterisk in the correct location. word.replace(temporary.find(letter), 1, letter); //Then part of the placeholder is erased. placeholder.erase(1,1); //The user is then promted to enter another word. cout << "(Guess) Enter a letter in the word " << word << " > "; cin >> letter; //If the letter entered is not part of the word... if (temporary.find(letter) == string::npos) { //The wrong guess counter is incremented. wrongGuess++; //A message is displayed. cout << letter << " is not part of the word!" << endl; //And the user is prompted to enter another word. cout << "(Guess) Enter a letter in the word " << word << " > "; cin >> letter; } //If the letter has already been guessed... if (word.find(letter) != string::npos) { //A message is displayed cout << letter << " is already part of the word." << endl; } } } //All of this is done until the placeholder is empty. while (placeholder.empty() == false); //When the placeholder word becomes empty the game is over and the word has been guessed. //Display a message to the user to show that the word has been guess and how many missed guesses there were. if (placeholder.empty() == true) cout << "You guessed the word!!! The word is " << word << "."; cout << "You missed " << wrongGuess << " time(s)." << endl; return 0; }
https://www.daniweb.com/programming/software-development/threads/302684/hangman-game-with-ignores-duplicate-letters
CC-MAIN-2017-43
refinedweb
628
75.4
. TypeScript brings lots of advantages to the JavaScript world with almost mandatory typings. But TypeScript code is transpiled, and to play well with other libraries that aren’t originally written in TypeScript needs manually written type definition and some hacks to play well with other external tools, like code coverage and test frameworks. Isomorphic Isomorphic is a trendy word with a nice soul behind, that means sharing some code between frontend and backend with minor or no changes. Since TypeScript can be compiled to JavaScript, it can run on Node.js and in the browser. An API client sharing the same code could be written with the same code everywhere. I want my API client to fetch resources using the same simple call everywhere. const client = new coveoanalytics.analytics.Client({ token : 'YOUR-TOKEN'}) // Send your event client.sendCustomEvent({ eventType: "dog"; eventValue: "Hello! Yes! This is Dog!"; }); All this without having 2 codebases. Window, fetch and promises Let’s fix the main difference between Node.js and the browser. Getting data from the browser is done using an XMLHttpRequest or using the new fetch API that is defined on the global object window. fetch('').then( (res) => { // Do stuff with the response }) In Node.js: var http = require('http'); http.get({ hostname: 'localhost', port: 80, path: '/' }, (res) => { // Do stuff with response }) First things first, the fetch API is nice, simple and returns promises. But fetch isn’t defined in all browsers and is not even part of Node.js standard libraries. Promises aren’t defined in all browsers. Fortunately there are nice libraries for both of these cases. Let’s use them. npm install --save es6-promises isomorphic-fetch But wait, don’t go too fast! You are in TypeScript you need the type definition if you don’t want to put the any type everywhere. Again in the console: npm install --save-dev typings typings install --save --ambient isomorphic-fetch es6-promise Typings is a nice tool to find type definitions and it contains the type definition of most popular JavaScript library. Now let’s handle the 2 cases, in the browser and in Node.js. Node.js Since fetch is defined on the global object and promises are natively implemented in Node.js. Just tell the people using your library to inject isomorphic-fetch in their Node.js application. Compile using tsc with a tsconfig.json { "compilerOptions": { "module": "commonjs", "target": "es5", "outDir": "dist", "declaration": true, "noImplicitAny": true, "removeComments": true, "sourceMap": true }, "files": [ "... your files" "typings/main.d.ts" ] } With a Node.js entrypoint like this index.ts script: import * as analytics from './analytics'; import * as SimpleAnalytics from './simpleanalytics'; import * as history from './history'; import * as donottrack from './donottrack'; export { analytics, donottrack, history, SimpleAnalytics } Then build it with tsc. If you don’t have it installed globally, you can use the npm bin executable $(npm bin)/tsc Browser The browser is a special case. Not everyone is using a web bundler, and I wanted to provide a library that could be bootstrapped like Google Analytics, so I needed my own bundle. When people don’t use a module bundler, you have to expose your library via a global object. We’ll bundle our library with Webpack, and inject the promises and fetch libraries in it. We’ll also provide an entrypoint that will export variable to the global window object. First the entrypoint: import * as entrypoint from './index'; global.ourlibentrypoint = entrypoint Then the webpack configuration npm install --save-dev webpack ts-loader exports-loader var webpack = require("webpack"); module.exports = { entry: "./src/browser.ts", output: { path: "./dist/", filename: "bundle.js" }, devtool: 'source-map', resolve: { extensions: ['', '.ts'], root: __dirname }, module: { loaders: [{test: /\.ts$/, loader: 'ts-loader'}] }, plugins:[ // The injection is done here new webpack.ProvidePlugin({ 'Promise': 'es6-promise', 'fetch': 'exports?self.fetch!whatwg-fetch' }), new webpack.optimize.UglifyJsPlugin() ], ts: { compilerOptions: { // We already emit declarations in our normal compilation step // not needed here declaration: false, } } } Cook your bundle with webpack! The dist/bundle.js file can now be included in your html. Tests For sanity, let’s add tests to our library. We’ll use Ava from the prolific sindresorhus which is a modern testing library for JavaScript. Happily it comes with its own d.ts bundled so no need of typings for that one. The setup is simple. npm install --save-dev ava A different tsconfig.json is needed for tests. So here is tsconfig.test.json: { "compilerOptions": { "module": "commonjs", "target": "es5", "outDir": "dist_test", "declaration": false, "noImplicitAny": true, "removeComments": true, // Inline source map are required by nyc the coverage tool // to correctly map to good files. "inlineSourceMap": true }, "files": [ "... your test files", "test/lib.d.ts", "typings/main.d.ts" ] } Some libs forgets type definitions. In my case I had to add a special lib.d.ts for tests. test/lib.d.ts: interface IteratorResult<T> { done: boolean; value?: T; } interface Iterator<T> { next(value?: any): IteratorResult<T>; return?(value?: any): IteratorResult<T>; throw?(e?: any): IteratorResult<T>; } To enable extended babel support in ava, you have to require babel-register in AVA. You can do this in the package.json file by adding an ava key. "ava": { "require": [ "babel-register" ] } Tests can be run with tsc -p tsconfig.test.json && ava \"**/*test.js\" Coverage Adding coverage was simple, AVA runs tests in different process so you need to have a coverage runner that supports this. nyc does that task for you. npm install --save-dev nyc You’ll have to create a file which includes all your TypeScript files, so nyc and ava are aware of all the TypeScript available. I created a fake test that loads the Node.js entrypoint. That tests is always green. import test from 'ava'; import * as coveoanalytics from '../src/index'; test('coverage', t => { const _ = coveoanalytics; }); It is also nice to get code coverage in the original languague, which is TypeScript. To do this you need to place the source maps inline. In your tsconfig.test.json add this key "compilerOptions"."inlineSourceMap": true. You can then run your tests using tsc -p tsconfig.test.json && nyc ava \"**/*test.js\" Plugging all this together. If you followed the article without skipping part, you should be good to go, here’s a recap of the most important parts. package.json: { ... // your 2 compiled entry points here "main": "dist/index.js", "browser": "dist/bundle.js", ... "scripts":{ ... "build:webpack": "webpack", "build:tsc": "tsc", "build": "npm run-script lint && npm run-script build:webpack && npm run-script build:tsc", "test": "tsc -p tsconfig.test.json && nyc ava \"**/*test.js\"", ... }, ... "dependencies":{ ... "isomorphic-fetch": "2.2.1", ... }, "devDependencies":{ ... "es6-promise": "3.1.2", "ava": "0.14.0", "exports-loader": "0.6.3", "nyc": "6.4.4", "TypeScript": "1.8.10", "typings": "0.8.1", "webpack": "1.13.0" ... }, ... "ava": { "require": [ "babel-register" ] } } You also need: - 1 tsconfig file for your normal builds (Webpack and Node.js) - 1 tsconfig file for your tests - 1 typings file to have the type definitions of isomorphic-fetch and es6-promises - A lot of tests - 1 Browser entrypoint (mine is named browser.ts) - 1 Node entrypoint (mine is named index.ts) - A webpack.config.jsfile similar to the one above This was a tedious work to glue everything together, but it was worth it. TypeScript is a nice transpiler bringing a lot to a large application’s codebase. It is up to date and even transpiles to ES2015 which you can then retranspile with babel if you want more included. If you want to see an example of what came out of it see coveo.analytics.js
http://source.coveo.com/2016/05/11/isomorphic-typescript-ava-w-coverage/
CC-MAIN-2017-13
refinedweb
1,260
61.22
Now we come to the most complicated part of the program as the event handler has to do something useful with the raw bitmap data that the video camera sends to it. The raw data is packaged in the event argument. To get at it you need to go through a set of standard steps. The ColorImageFrame also includes some useful data such as the frame number and a time stamp. So let's start work on retrieving and displaying the pixel data. The first task in the event handler is to get the ColorImageFrame object: void FrameReady(object sender, ColorImageFrameReadyEventArgs e){ ColorImageFrame imageFrame = e.OpenColorImageFrame(); ColorImageFrame to a Bitmap without making any changes is a bit involved but its fairly standard "boilerplate" code that you can reuse - so lets write a function to do the job: Bitmap ImageToBitmap(ColorImageFrame Image){ You also need to add: using System.Drawing.Imaging; at the start of the program. This function takes a ColorImageFrame and returns a Bitmap with the same pixel data. Notice that this only works for a ColorImageFrame that used 32 bit RBGA format data - which is the case for the video sensor as we have set it up. You can modify the function as required as you use other sensors. First we need to retrieve the pixel data as a byte array: byte[] pixeldata = new byte[ Image.PixelDataLength];Image.CopyPixelDataTo(pixeldata); The PixelDataLength gives the number of bytes needed to store the entire image. Next we need to create a Bitmap object capable of holding the pixel data: Bitmap bmap = new Bitmap(Image.Width, Image, Image.Width, Image.Height), ImageLockMode.WriteOnly, bmap.PixelFormat); Now we have a memory buffer waiting for the bits in the byte array use the Copy method to move the data: Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength); The first instruction stores the address of the start of the image buffer in ptr. The Copy method then proceeds to copy the data in Bits to the buffer that ptr points at. The second parameter is an offset, usually zero and the final parameter gives the total number of bytes to copy. Now all we have to do is unlock the buffer which also transfers the data to the BitMap object and return it: bmap.UnlockBits(bmapdata); return bmap;} Now if we assign the bitmap that is returned to a PictureBox control on the form you can see the video from the Kinect's camera. The complete event handler is: void FrameReady(object sender, ColorImageFrameReadyEventArgs e){ ColorImageFrame imageFrame = e.OpenColorImageFrame(); Bitmap bmap = ImageToBitmap(imageFrame); pictureBox1.Image = bmap;} and the ImageToBitmap function; } If you put the whole lot together you will see an image displayed in the PictureBox when you run it. You can see a complete listing by downloading the code from the Code>
http://www.i-programmer.info/ebooks/practical-windows-kinect-in-c/3725-getting-started-with-windows-kinect-sdk-10.html?start=2
CC-MAIN-2016-50
refinedweb
468
54.22
Created on 2020-01-14 21:33 by dino.viehland, last changed 2020-01-28 02:21 by vstinner. This issue is now closed. I'm trying to create a custom module type for a custom loader where the returned modules are immutable. But I'm running into an issue where the immutable module type can't be used as a module for a package. That's because the import machinery calls setattr to set the module as an attribute on it's parent in _boostrap.py # Set the module as an attribute on its parent. parent_module = sys.modules[parent] setattr(parent_module, name.rpartition('.')[2], module) I'd be okay if for these immutable module types they simply didn't have their children packages published on them. A simple simulation of this is a package which replaces its self with an object which doesn't support adding arbitrary attributes: x/__init__.py: import sys class MyMod(object): __slots__ = ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__'] def __init__(self): for attr in self.__slots__: setattr(self, attr, globals()[attr]) sys.modules['x'] = MyMod() x/y.py: # Empty file >>> from x import y Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 971, in _find_and_load_unlocked AttributeError: 'MyMod' object has no attribute 'y' There's a few different options I could see on how this could be supported: 1) Simply handle the attribute error and allow things to continue 2) Add ability for the modules loader to perform the set, and fallback to setattr if one isn't available. Such as: getattr(parent_module, 'add_child_module', setattr)(parent_module, name.rpartition('.')[2], module) 3) Add the ability for the module type to handle the setattr: getattr(type(parent_module), 'add_child_module', fallback)(parent_module, , name.rpartition('.')[2], module) So I think this is way too marginal a use-case to expand the API to let loaders inject themselves into the semantics of it. I assume going with option 1 but raising an ImportWarning would be too noisy for your use-case? If not I'm totally fine with that solution. I think the warning shouldn't be too bad. It looks like ImportWarnings are filtered by default already, and the extra overhead of raising a warning in this case probably is nothing compared to the actual work in loading the module. I apologize for the noise caused by the wrong PR connection. New changeset 9b6fec46513006d7b06fcb645cca6e4f5bf7c7b8 by Dino Viehland in branch 'master': bpo-39336: Allow packages to not let their child modules be set on them (#18006) test_unwritable_module() fails on AMD64 Fedora Stable Clang Installed 3.x: bpo-39459. commit 2528a6c3d0660c03ae43d796628462ccf8e58190 Author: Dino Viehland <dinoviehland@gmail.com> Date: Mon Jan 27 14:04:56 2020 -0800 Add test.test_import.data.unwritable package to makefile (#18211)
https://bugs.python.org/issue39336
CC-MAIN-2020-45
refinedweb
472
56.45
Member since 03-15-2016 10 0 Kudos Received 1 Solution 04-25-2016 10:47 PM 04-25-2016 10:47 PM Hello, I have dockerized Quickstart CDH 5.5 installation on my machine with parcelled phoenix installation which works fine. I am able to create tables, insert/update rows in phoenix tables via command phoenix-sqlline.py localhost:2181 However when I attempt connecting same via Java program it just clocks. import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.Statement; public class PhoenixConnector { public static void main(String[] args) { System.out.println("START PhoenixConnector"); try{ Class.forName("org.apache.phoenix.jdbc.PhoenixDriver").newInstance(); Connection conn = DriverManager.getConnection("jdbc:phoenix:192.168.99.100:2181","",""); //Control does not go beyond the above line. Also instead of docker ip (192.168.99.100) //I also tried localhost but no help. Statement stmt = conn.createStatement(); ResultSet rs = stmt.executeQuery("select * from US_POPULATION"); while (rs.next()) System.out.println("Name= " + rs.getString("host")); } catch(Exception e){ System.out.println(e.getMessage()); } } } Can you please help me understand what is going on? Note: Program has required dependencies and it builds fine. ... View more 03-20-2016 10:13 PM 03-20-2016 10:13 PM Hello, This is what worked for me. Increased the Region Server heap size it to 500 MiB, redeployed client configuration and restarted HBase. I was able to get sql line phoenix script working for me. ... View more 03-16-2016 04:43 PM 03-16-2016 04:43 PM. ... View more 03-16-2016 12:28 PM 03-16-2016 12:28 PM 03-16-2016 10:45 AM 03-16-2016 10:45 AM? ... View more 03-15-2016 10:33 PM 03-15-2016 10:33 PM? ... View more 03-15-2016 12:42 PM 03-15-2016 12:42 PM. ... View more 03-15-2016 12:12 PM 03-15-2016 12:12 PM I brought the cloudera manager and over there I am accepting the Parcel as mentioned in the link at cloudera blog. Now my docker container image is running, Cloudera manager is up, so where do I find "migrate CDH to a parcel installation" button? ... View more 03-15-2016 12:00 PM 03-15-2016 12:00 PM Hello, ? ... View more 03-15-2016 11:25 AM 03-15-2016 11:25 AM. ... View more
https://community.cloudera.com/t5/user/viewprofilepage/user-id/15376
CC-MAIN-2020-16
refinedweb
401
61.63
MD master + Mono master using System; using System.Threading; using System.Threading.Tasks; public class TestPostContext { static Task[] tasks; static readonly int max = 24; static void InitWithDelegate (Action action) { tasks = new Task[max]; for (int i = 0; i < max; i++) { tasks [i] = Task.Factory.StartNew (action); } } public static int Main () { int counter = 0; InitWithDelegate (delegate { counter++; throw new ApplicationException (counter.ToString ()); }); try { Task.WaitAll (tasks); } catch (AggregateException e) { Console.WriteLine (e.InnerExceptions); // Set breakpoint here and hover over e.InnerExceptions then expand Items } return 0; } } This error can occur when doing something like this using MonoTouch: We have the next code: ------------------------------------------------- string s = "Hello, World!"; byte[] b = System.Text.Encoding.UTF8.GetBytes(s); return; // Set breakpoint here ------------------------------------------------- Place breakpoint on the line with return, open Expression Evaluator and enter the next value in it: System.Text.Encoding.UTF8.GetString(b) We will get as a result "Hello, World!". But it is not possible to open text using plus sign (this is needed to copy value to the clipboard). If we do this we will get the error "Mono.Debugger.Soft.ObjectCollectedException: The requested operation cannot be completed because the object has been garbage collected." I guess what needs to happen here is that the runtime needs to keep objects alive for the duration of the method rather than allowing any variables to be collected before the method exits. AFAIK the .NET debugger extends variable liveness to the end of the method so users can inspect the values more easily. We should probably do the same. The problem here is that the debuggee keeps a weak reference on the objects it returns to the client, in order to not influence program behaviour, and the GC is not disabled while the debuggee is stopped. InnerExceptions returns a transient object: return innerExceptions.AsReadOnly (); which is not kept alive by anything, so it can be freed any time. The jdi docs says: Note that while the target VM is suspended, no garbage collection will occur because all threads are suspended. The typical examination of variables, fields, and arrays during the suspension is safe without explicitly disabling garbage collection. This is not really true in mono, because there are things done by the debugger agent which can trigger a gc, like the creation of weak references. Fixed in master/mobile-master.
https://bugzilla.xamarin.com/show_bug.cgi?format=multiple&id=1446
CC-MAIN-2017-43
refinedweb
386
56.25
One of the problems with code quality tools is that they tend to overwhelm developers with problems that aren't really problems -- that is, false positives. When false positives occur, developers learn to ignore the output of the tool or abandon it altogether. The creators of FindBugs, David Hovemeyer and William Pugh, were sensitive to this issue and strove to reduce the number of false positives they report. Unlike other static analysis tools, FindBugs doesn't focus on style or formatting; it specifically tries to find real bugs or potential performance problems. What is FindBugs? FindBugs is a static analysis tool that examines your class or JAR files looking for potential problems by matching your bytecodes against a list of bug patterns. With static analysis tools, you can analyze software without actually running the program. Instead the form or structure of the class files are analyzed to determine the program's intent, often using the Visitor pattern (see Resources). Figure 1 shows the results of analyzing an anonymous project (its name has been withheld in order to protect the horribly guilty): Figure 1. FindBugs UI Let's take a look at some of the problems that FindBugs can detect. Examples of problems found The following list doesn't include all the problems FindBug might find. Instead, I've focused on some of the more interesting ones. Detector: Find hash equals mismatch This detector finds several related problems, all centered around the implementation of equals() and hashCode(). These two methods are very important because they're called by nearly all of the Collections-based classes -- List, Maps, Sets, and so on. Generally, this detector finds two different types of problems -- when a class: - Overrides Object's equals()method, but not its hashCodeor vice-versa. - Defines a co-variant version of the equals()or compareTo()method. For example, the Bobclass defines its equals()method as boolean equals(Bob), which overloads the equals()method defined in Object. Because of the way the Java code resolves overloaded methods at compile-time, the version of the method defined in Object will almost always be the one used at runtime, not the one you defined in Bob(unless you explicitly cast the argument to your equals()method to type Bob). As a result, when one of the instances of this class is put into any of the collection classes, the Object.equals()version of the method will be used, not the version defined in Bob. In this case, the Bobclass should define an equals()method that accepts an argument of type Object. Detector: Return value of method ignored This detector looks for places in your code where the return value of a method is ignored when it shouldn't be. One of the more common instances of this scenario is found when invoking String methods, such as in Listing 1: Listing 1. Example of ignored return value 1 String aString = "bob"; 2 b.replace('b', 'p'); 3 if(b.equals("pop")) This mistake is pretty common. At line 2, the programmer thought he'd replaced all of the b's in the string with p's. He did, but he forgot that strings are immutable. All of these types of methods return a new string, never changing the receiver of the message. Detector: Null pointer dereference and redundant comparisons to null This detector looks for two types of problems. It looks for cases where a code path will or could cause a null pointer exception, and it also looks for cases in which there is a redundant comparison to null. For example, if both of the compared values are definitely null, they're redundant and may indicate a coding mistake. FindBugs detects a similar problem when it's able to determine that one of the values is null and the other one isn't, as shown in Listing 2: Listing 2. Null pointer examples 1 Person person = aMap.get("bob"); 2 if (person != null) { 3 person.updateAccessTime(); 4 } 5 String name = person.getName(); In this example, if the Map on line 1 does not contain the person named "bob," a null pointer exception will result on line 5 when the person is asked for his name. Because FindBugs doesn't know if the map contains "bob" or not, it will flag line 5 as a possible null pointer exception. Detector: Field read before being initialized This detector finds fields that are read in constructors before they're initialized. This error is often caused by mistakenly using a field's name instead of a constructor argument -- although not always, as Listing 3 shows: Listing 3. Reading a field in a constructor before it's initialized 1 public class Thing { 2 private List actions; 3 public Thing(String startingActions) { 4 StringTokenizer tokenizer = new StringTokenizer(startingActions); 5 while (tokenizer.hasMoreTokens()) { 6 actions.add(tokenizer.nextToken()); 7 } 8 } 9 } In this example, line 6 will cause a null pointer exception because the variable actions has not been initialized. These examples are only a small sampling of the types of problems that FindBugs detects (see Resources for more). At the time of this writing, FindBugs comes with a total of 35 detectors. Getting started with FindBugs To run FindBugs, you will need a Java Development Kit (JDK), version 1.4 or higher, although it can analyze the class files created by older JDKs. The first thing to do is download and install the latest release of FindBugs -- currently 0.7.1 (see Resources). Fortunately, the download and installation is pretty straightforward. After downloading the zip or tar, unzip it into a directory of your choosing. That's it -- the install is finished. Now that it's installed, let's run it on a sample class. As is often the case with articles, I will speak to the Windows users and assume that those of the Unix persuasion can deftly translate and follow along. Open a command prompt and go to the directory in which you installed FindBugs. For me, that's C:\apps\FindBugs-0.7.3. In the FindBugs home directory, there are a couple of directories of interest. The documentation is located in the doc directory, but more important for us, the bin directory contains the batch file to run FindBugs, which leads me to the next section. Running FindBugs Like most tools these days, you can run FindBugs in multiple ways -- from a GUI, from a command line, using Ant, as an Eclipse plug-in, and using Maven. I'll briefly mention running FindBugs from the GUI, but I'll primarily focus on running it from Ant and the command line. Partly that's because the GUI hasn't caught up with all of the command line options. For example, currently you can't specify filters to include or exclude particular classes in the UI. But the more important reason is because I think FindBugs is best used as an integrated part of your build, and UIs don't belong in automated builds. Using the FindBugs UI Using the FindBugs UI is straightforward, but a couple of points deserve some elaboration. As Figure 1 demonstrates, one of the advantages of using the FindBugs UI is the description provided for each type of detected problem. Figure 1 shows the description for the bug Naked notify in method. Similar. Running FindBugs as an Ant task Let's take a look at how to use FindBugs from an Ant build script. First copy the FindBugs Ant task to Ant's lib directory so that Ant is made aware of the new task. Copy FIND_BUGS_HOME\lib\FindBugs-ant.jar to ANT_HOME\lib. Now take a look at what you need to add to your build script to use the FindBugs task. Because FindBugs is a custom task, you'll need to use the taskdef task so that Ant knows which classes to load. Do that by adding the following line to your build file: <taskdef name="FindBugs" classname="edu.umd.cs.FindBugs.anttask.FindBugsTask"/> After defining taskdef, you can refer to it by its name, FindBugs. Next you'll add a target to the build that uses the new task, as shown in Listing 4: Listing 4. Creating a FindBugs target 1 <target name="FindBugs" depends="compile"> 2 <FindBugs home="${FindBugs.home}" output="xml" outputFile="jedit-output.xml"> 3 <class location="c:\apps\JEdit4.1\jedit.jar" /> 4 <auxClasspath path="${basedir}/lib/Regex.jar" /> 5 <sourcePath path="c:\tempcbg\jedit" /> 6 </FindBugs> 7 </target> Let's take a closer look at what's going on in this code. Line 1: Notice that the target depends on the compile. It's important to remember that FindBugs works on class files, not source files, so making the target depend on the compile target ensures that FindBugs will be running across the up-to-date class files. FindBugs is flexible about what it will accept as input, including a set of class files, JAR files, or a list of directories. Line 2: You must specify the directory that contains FindBugs, which I did using an Ant property like this: <property name="FindBugs.home" value="C:\apps\FindBugs-0.7.3" /> The optional attribute output specifies the output format that FindBugs will use for its results. The possible values are xml, text, or emacs. If no outputFile is specified, then FindBugs prints to standard out. As mentioned previously, the XML format has the added advantage of being viewable within the UI. Line 3: The class element is used to specify which set of JARs, class files, or directories you want FindBugs to analyze. To analyze multiple JARs or class files, specify a separate class element for each. The class element is required unless the projectFile element is included. See the FindBugs manual for more details. Line 4:. Line 5: If the sourcePath element is specified, the path attribute should indicate a directory that contains your application's source code. Specifying the directory allows FindBugs to highlight the source code in error when viewing the XML results in the GUI. This element is optional. That covers the basics. Let's fast forward several weeks. Filters You've introduced FindBugs to your team and have been running it as a part of your hourly/nightly build process. As the team has become more acquainted with the tool, you've decided that some of the bugs being detected aren't important to your team, for whatever reason. Perhaps you don't care if some of your classes return objects that could be modified maliciously -- or maybe, like JEdit, you have a real honest-to-goodness, legitimate reason to invoke System.gc(). You always have the option of "turning off" a particular detector. On a more granular level, you could exclude certain detectors from finding problems within a specified set of classes or even methods. FindBugs offers this granular control with exclude and include filters. Exclude and include filters are currently supported only in the command-line or Ant versions of FindBugs. As the name implies, you use exclude filters to exclude the reporting of certain bugs. The less popular, but still useful, include filters can be used to report targeted bugs only. The filters are defined in an XML file. They may be specified at the command-line with an exclude or include switch or by using the excludeFilter and includeFilter in your Ant build file. In the examples below, assume that the exclude switch was used. Also note in the discussion below that I use "bugcode," "bug," and "detector" somewhat interchangeably. Filters can be defined in a variety of ways: - Filters that match one of your classes. These filters could be used to ignore all problems found in a particular class. - Filters that match particular bugcodes in one of your classes. These filters could be used to ignore some bugs found in a particular class. - Filters that match a set of bugs. These filters could be used to ignore a set of bugs across all of the analyzed classes. - Filters that match particular methods in one of the analyzed classes. These filters could be used to ignore all bugs found in a set of methods for a class. - Filters that match some bugs found in methods in one of the analyzed classes. You could use these filters to ignore some of the bugs found in a particularly buggy set of methods. That's all there is to getting started. See the FindBugs documentation for more details on additional ways the FindBugs task can be customized. Now that we know how to set up a build file, let's take a closer look at integrating FindBugs into your build process. Integrating FindBugs into your build process You have several options when it comes to integrating FindBugs into your build process. You can always execute FindBugs from the command line, but more than likely you're already using Ant for your build, so using the FindBugs Ant task is the most natural. Because we've covered the basics of using the FindBugs Ant task earlier, I'll cover some of the reasons you should add FindBugs to your build process and discuss a few of the issues you may run into. Why should I integrate FindBugs into my build process? One of the first questions that's often asked is why would I want to add FindBugs into my build process? While there are a host of reasons, the most obvious answer is that you want to make sure problems are detected as soon as your build is run. As your team grows and you inevitably add more junior developers to the project, FindBugs can act as a safety net, detecting identified bug patterns. I want to reiterate some of the sentiment expressed in one of the FindBugs papers. If you put enough developers together, then you're going to have bugs in your code. Tools like FindBugs certainly won't find all the bugs, but they'll help find some of them. Finding some now is better than your customers finding them later -- especially when the cost of incorporating FindBugs into your build process is so low. Once you've stabilized which filters and classes to include, there's a negligible cost for running FindBugs, with the additional benefit that it detects new bugs. The benefit is probably even greater if you've written application-specific detectors. Generate meaningful results It's important to recognize that this cost/benefit analysis is only valid so long as you don't generate a lot of false positives. In other words, the tool's value is diminished if, from build to build, it is no longer simple to determine whether new bugs have been introduced. The more automated your analysis can be, the better. If fixing bugs means having to wade through a lot of irrelevant detected bugs, then you'll likely not use the tool very often, or at least not make good use of it. Decide which set of problems you don't care about and exclude them from the build. Otherwise, pick a small set of detectors that you do care about and run just those. Another option would be to exclude sets of detectors from individual classes, but not others. FindBugs offers a lot of flexibility with its use of filtering, which should help you generate results that are meaningful to your team, which leads us to the next section. Determine what you will do with the results of FindBugs It may seem obvious, but I've worked with more teams than you might imagine who apparently add FindBugs-like tools to their builds for the pure joy of it. Let's explore this question in a bit more detail -- what should you do with your results? It's a difficult question to answer specifically because it has a lot to do with how your team is organized, how you deal with code ownership issues, and so on. However, here are some guidelines: - You may want to consider adding the FindBugs results to your source code management (SCM) system. The general rule of thumb is don't put build artifacts into your SCM system. However, in this particular case, breaking the rule may be the right thing to do because it allows you to monitor the quality of the code over time. - You may choose to convert the XML results file into an HTML report that you post on your team's Web site. The conversion can be carried out with an XSL stylesheet or script. Check the FindBugs Web site or mailing list for examples (see Resources). - Tools like FindBugs can often turn into political weapons used to bludgeon teams or individuals. Try not to encourage that or let it happen -- remember, it's just a tool that's meant to help you improve the quality of your code. With that inspirational aside, in next month's installment I'll show you how to write custom bug detectors. Summary I encourage you to try some form of static analysis tool on your code, whether it's FindBugs, PMD, or something else. They're valuable tools that can find real problems, and FindBugs is one of the better ones for eliminating false positives. In addition, its pluggable architecture provides an interesting test bed for writing invaluable application-specific detectors. In Part 2 of this series, I'll show you how to write custom detectors to find application-specific problems. Resources - Download the latest version of FindBugs. - The FindBugs site provides a full list of bugs with descriptions. - Read more information about the Visitor pattern. - Here's more information on the Byte Code Engineering Library. - PMD is another powerful open-source static code analysis tool that lets) . - In "The future of software development" (developerWorks, June 2003), Eric Allen discusses some of the current trends in software development and predicts what they may lead to in the coming years. Check out the rest of Er.
http://www.ibm.com/developerworks/java/library/j-findbug1/
CC-MAIN-2016-07
refinedweb
2,995
62.78
How to drop all missing values from a numpy array? Droping the missing values or nan values can be done by using the function "numpy.isnan()" it will give us the indexes which are having nan values and when combined with other function which is "numpy.logical_not()" where the boolean values will be reversed. At last we want the elements which are having non-nan values which can be further filtered out and store it into another array. import numpy as np Sample_data = np.array([1,2,7,8,np.nan,9,5,np.nan,1,0]) print("This is Sample data with nan values in it:", Sample_data) This is Sample data with nan values in it: [ 1. 2. 7. 8. nan 9. 5. nan 1. 0.] remove_nan = Sample_data[np.logical_not(np.isnan(Sample_data))] print("This is the original data with nan values:", Sample_data, "\n") print("This is the data without nan values:", remove_nan) This is the original data with nan values: [ 1. 2. 7. 8. nan 9. 5. nan 1. 0.] This is the data without nan values: [1. 2. 7. 8. 9. 5. 1. 0.]
https://www.projectpro.io/recipes/drop-all-missing-values-from-numpy-array
CC-MAIN-2021-39
refinedweb
187
61.56
The Apache Jackrabbit community is pleased to announce the release of Apache Jackrabbit 2.0 alpha4. The release is available for download at: See the full release notes below for details about this release. Release Notes -- Apache Jackrabbit -- Version 2.0-alpha. 41 top level JCR 2.0 implementation issues are being tracked in the Jackrabbit issue tracker. Most of them have already been partially implemented, but the issue will only be marked as resolved once no more related work is needed. Open (16 issues) [JCR-1565] JSR 283 lifecycle management [JCR-1588] JSR 283: Access Control [JCR-1590] JSR 283: Locking [JCR-1591] JSR 283: NodeType Management [JCR-1712] JSR 283: JCR Names [JCR-1974] JSR 283: Evaluate Capabilities [JCR-2058] JSR 283: VersionManager and new versioning methods [JCR-2062] JSR 283: Repository Compliance [JCR-2085] test case (TCK) maintenance for JCR 2.0 [JCR-2092] make spi query code compatible with JCR 2.0 [JCR-2116] JSR 283: Built-In Node Types [JCR-2137] Use type StaticOperand for fullTextSearchExpression [JCR-2140] JSR 283: Baselines [JCR-2198] Text.escapeIllegalJCRChars should be adjusted to match the ... [JCR-2200] Implement Query.getBindVariableNames() [JCR-2201] Implement QueryResult.getSelectorNames() Resolved (25 issues) [JCR-1564] JSR 283 namespace handling [JCR-1589] JSR 283: Retention & Hold93] Implement QueryObjectModelFactory.fullTextSearch() in ... [JCR-2117] JSR 283: adopt CND syntax changes
http://mail-archives.us.apache.org/mod_mbox/jackrabbit-announce/200907.mbox/%3C510143ac0907150734i7414499eu4e6c00e6ef6e4fdf@mail.gmail.com%3E
CC-MAIN-2020-50
refinedweb
223
57.47
Posts14 Joined Last visited robwebb364's Achievements Newbie (1/14) 0 Reputation 1 Community Answers repeat and yoyo not working?: multiple objects with randomised attributes robwebb364 replied to robwebb364's topic in GSAPThankyou, AncientWarrior! will post shortly on lessons learned -.. apply rotation to each of a dynamically generated class robwebb364 replied to robwebb364's topic in GSAPbloody hell thank you!... and was not working on desktop due to some combination of wrong syntax and mistypes... its late here. now sorted! apply rotation to each of a dynamically generated class robwebb364 replied to robwebb364's topic in GSAPBTW the codepen doesnt complete the dynamically generated ellipses, but this works on my machine. - however the animation does not. thanks apply rotation to each of a dynamically generated class robwebb364 posted a topic in GSAPi really want to select all of a class to each be rotated: please see the codepen How should I do that? Ive seen somewhere that the selector should be $(".classname") but either that is wrong or the GSAP syntax is wrong. grateful for help -- rotation not working robwebb364 replied to robwebb364's topic in GSAPthanks, both of those work on the codepen.. I was sure Id tried that! but the problem is complicated in that i really want to select all of a class to each be animated: How should I do that? Ive seen somewhere that the selector should be $(".classname") but that dont work! grateful for help -- the js is like this: var peep=new Array(); $(document).ready(function(){ for (j=0; j<=20; j++) { /// no of paths var randx=parseInt(getRandom(0,1800)); var y=900; peep[j] = document.createElementNS('' 'ellipse'); peep[j].setAttributeNS("null", "class", "peeple"); peep[j].setAttributeNS(null, "cx", randx); peep[j].setAttributeNS(null, "cy", y); peep[j].setAttributeNS(null, "rx", getRandom(15,20)); peep[j].setAttributeNS(null, "ry", getRandom(20,40)); peep[j].setAttributeNS(null, "fill", "grey"); peep[j].setAttributeNS(null, "stroke", "none"); document.getElementById("svg").appendChild(peep[j]); } var tl = new TimelineMax({repeat:-1, yoyo:true}); tl.to($("#wob"), 1, {rotation:"90", transformOrigin:"left top"}); }); function getRandom(min, max) { return (Math.random() * (max - min) + min); } rotation not working robwebb364 posted a topic in GSAPPlease help. Despite many efforts this is not rotating the ellipses.. ive followed the same syntax which has worked before with other attributes being animated, and checked other codepens which seem to work! GSAP seems very sensitive to syntax, and the docs on line are not helpful... thanks - thanks, that's useful - think I have figured this out, using the tween.timeScale() command to vary the speed now the square on the right blinks at a varying rate how to randomise timing of repeated animation robwebb364 posted a topic in GSAPThe codepen has two black rectangles which appear and then disappear. One has an onComplete function which randomises the time variable, but this does not work.. the timing remains the same. How might one do this? Presumably we need to stop the repeat, reset the time, and restart it? Thanks for help. - OK I think I have solved the first part of this. As I am using 'yoyo' it is not necessary to put in the parts of the animation which take the opacity of each element in turn back to zero - GSAP does that for me. The code on dropbox is now modified and appears to work without pause. thanks for commenting [ im not sure the time randomisation is working, will post separately] - BTW Im running Chrome Version 36.0.1985.125 on a MacBook Air 2011, OS 10.9.1 The same problem occurs in Firefox. --- There is a secondary issue: the code on dropbox has a function to randomise the timing after each cycle, but this is not working. Presumably I have to stop and restart the GSAP animation in order for the new time variable to have an impact - How do I do this? - thanks for comments a full test file with images is at : this shows the pause, after about 10 seconds, lasting for about 4 seconds. the css file is this: and the Greensock JS files:
https://staging.greensock.com/profile/20307-robwebb364/
CC-MAIN-2022-40
refinedweb
685
67.04
Recently I got an opportunity to explore the Google Web API which can be downloaded from the Google site. So I decided to code the web-client which is very easy and interesting too! The article focuses on developing the Web-Client with which you can search items from your site itself. You must download the google web api first! Then follow the instructions below. GoogleProxy.cs c A proxy resides on the consumer's machine and acts as a rely between the consumer and the web service. When we build the proxy, we use WSDL file to create a map that tells the consumer what methods are available and how to call them. The consumer then calls the web method that is mapped in the proxy, which in turn, makes calls to the actual web service over the Internet. The proxy handles all of the network-related work and sending of data, as well as managing the underlying WSDL so the consumer doesn't have to. When we reference the web service in the consumer application, it looks as if it's part of the consumer application itself. The code is pretty straight forward. <%@ Page Language="C#" %> <@ import //Remember,GoogleWebService is the namespace you named while creating the proxy! <script runat="server"> string key="licence key you got from google"; /* I have declared the key string variable as global variable since the key variable is to be passed every time you call the methods. */ void Page_Load() { lblSpellSug.Text=""; //Label to display the Spelling Suggestion lblResultCount.Text=""; //Label to display the Estimated total result count lblSearchTime.Text=""; //Label to display the server time to return the search results, //measured in seconds. } void btnSearch_Click(Object sender, EventArgs e) { //creating the instance of the GoogleSearch class to invoke required methods GoogleSearchService obj=new GoogleSearchService(); //spelling checking and suggesting if entered wrong string suggestion=obj.doSpellingSuggestion(key,Request.Form["txtPhrase"]); if (suggestion!=null) { lblSpellSug.Text="Suggestion: "+ suggestion; } //searching the phrase..... //Regarding the parameters refer to the Google API GoogleSearchResult res=obj.doGoogleSearch(key, Request.Form["txtPhrase"], 0, 10, false,"",false,"","",""); lblResultCount.Text="Est. Total Result Count: " + Convert.ToString(res.estimatedTotalResultsCount); //to display the total estimated result count lblSearchTime.Text= "Search Time: " + Convert.ToString(res.searchTime) + "sec";//search Time //displaying the results Returned by the Search in tabular using the Table control. ResultElement[] result=res.resultElements; foreach(ResultElement r in result) { ResultTable.CellSpacing=1; ResultTable.CellPadding=2; //formatting the Server control Table TableRowCollection trCol=ResultTable.Rows; //ResultTable is the instance created for Table class //creating new Table Row and adding to the TableRowCollection TableRow tr=new TableRow(); trCol.Add(tr); TableCellCollection tcCol=tr.Cells; //creating new Table Cell, assigning the title and the summary //of the search result. TableCell tc=new TableCell(); tc."+ r.title + "</a>" + "<BR>" + r.summary; tcCol.Add(tc); } } </script> Let's share.
http://www.codeproject.com/Articles/3557/ASP-NET-web-client-for-Google-Web-API?msg=869294
CC-MAIN-2015-22
refinedweb
472
50.73
Spring to Java EE – A Migration Experience. The trouble you’re having, however, is most likely due to the fact that you’re actually trying to solve problems that don’t need to be solved. When I first made the switch to Java EE 6 from Spring – for my own personal project – Spring’s dozens of extensions, and using Hibernate Object Relational Mapping ( ORM) directly, managing transactions myself, I was trying to do things “the Spring way,” in other words – configuring everything up the wazoo, but let me try to explain some things for you that should help clear the fuzz of attempting to migrate to Java EE; they are things I ran in to, and would likely happen to others as well. The biggest difference you’ll find is, “Java EE already does that for you.” Nearly every application requires the same set of basic features: persistence, transactionality, (dependency injection is typically assumed at this point,) and a web-tier view-layer or web-services. The first thing I have to say is: “Don’t freak out when I say Enterprise Java Beans (EJB)” – they’ve truly become a system worthy of attention, and if you’re going to take advantage of the Java EE stack, you’re going to want them around; EJB 3.1 today is miles apart from what it once was, can be used standalone in WARs, and requires just one annotation to configure – soon, with JBoss AS 7, Enterprise Java Beans may simply be an extension built on CDI, like everything else. To start, I’ll review some of the problems I encountered and solved during my first transition between the Spring and Java EE 6 technologies; most – if not all – of these were due to my lack of technical understanding, the mindsets are truly very different; the results are striking. Configuring the System – “It’s all just XML in the end.” This one is simple. Where Spring has /WEB-INF/applicationContext.xml files of various types, Java EE has various distinct configuration files for each API in the framework WAR. Some of these files are required to activate the API, but some are not. The following chart overviews the most common of these configuration files – there is more to Java EE, but chances are these are all you’ll need to get started. Be careful! If these files are in the wrong place, you will not have a working system! You should also know that most of these files require some sort of schema, or “XML header” that tells the Application Server (JBoss Application Server, GlassFish, Etc…) which version of the technology to use, since most APIs attempt some level of backwards compatibility. This is similar to including new schemas in Spring’s applicationContext.xml. EJBs can be defined via annotations alone, and require no configuration file in order to enable them. JAX-RS is similar, since no configuration file is required when using a Java EE 6 certified Application Server such as JBoss Application Server, everything can be specified through annotations once it is enabled. Configuration of modules in JAR files: One of the greatest features of Java EE APIs is the ability to break application code into separate reusable JAR files – where each individual JAR contributes configuration and code to the system they are included in; for instance, you might run multiple applications for your business, but each one must have the same data access providers. You’d create a shared domain-model JAR file and include it in each application. All the configuration would be contained in that JAR, and would be done using the same set of configuration files, placed in the JAR’s META-INF directory: Note that some of the file names are different from those that are used in the main application itself. The Application Server Configuration In addition to the application configuration, one of the most notable differences between Spring and Java EE is that you actually need to know how to use the application server itself. Spring replaces application server configuration with application configuration (sometimes more convenient, sometimes less convenient, but the same result in the end.) In order to use JPA and transactions (covered later,) you will want to know how to use the transactional data-source feature of the Java Enterprise Application Server, since it makes setting up transactions as simple as writing a plain Java class with an annotation. Keep in mind that each of these configuration files may have a different syntax and format, since they are produced by different companies. These files should be used to configure settings that must exist in order for the application to run, and should be considered a natural extension to the Java EE module configuration files. If your application depends on a transactional data source, this is the place to define it – preventing manual configuration of the application server, which can be a very repetitive, monotonous task. This configuration usually only needs to happen once per-server, and allows you to keep database passwords secret, data sources controlled and separate, and JMS queues centralized; though, if you want a standalone application/data-source configuration, then you should think about using these custom configuration files. Contexts & Dependency Injection for Java – aka Beans Where spring has @Autowired, Java EE (CDI to be more specific) has @Inject. The parallel is really quite straight-forward, since every class in a WAR or JAR file (that has a beans.xml file!) is automatically a bean in CDI, any class can be provided or scoped using dependency injection, just like a bean that would be defined in applicationContext.xml in a Spring application. Before we get started, though – remember that you need to create an empty /WEB-INF/beans.xml file, or your system will not start the CDI container. Technically, you can only inject beans that are themselves in a bean archive (not just any class). However, you can use a producer method to pull in a class from a non-bean archive and make it a bean (or you can use Seam XML to add beans that aren’t in an archive that has a beans.xml file.) public class UserCredentialsBean { } Wait, that’s it? Well, you have a few more options when it comes to managing these beans and deciding how long they will “live,” or how long each bean instance will remain active. Every bean can be assigned to a “scope,” which is really defining the context in which the bean is relevant. For example, it doesn’t make sense for a logged-in user’s authentication credentials (username/password) to be retained longer than that user’s session, so you would place the bean containing that information in the Session Scope (which is just a nice, clean way of saying that we are storing that information in the HttpSession object, and when the HttpSession object dies, so does everything in Session Scope for that user.) This is done using the @SessionScoped annotation. @SessionScoped public class UserCredentialsBean { } In reality we would probably leave the details of authenticating users up to a framework like Seam Security, but just consider this as an example. There are also several more built-in scopes: Other custom scopes can be created as needed, and some frameworks such as the Seam Faces Module even provide additional scopes for you. But let’s look at how we inject an instance of our UserCredentialsBean into another bean. public class AuthorizationBean { @Inject private UserCredentialsBean credentials; } These are the basics; no configuration required. We can also scope the AuthorizationBean in order to control how long that lives as well, but we have a very subtle issue going on here. @ApplicationScoped public class AuthorizationBean { @Inject private UserCredentialsBean credentials; } We are injecting a @SessionScoped bean into an @ApplicationScoped bean, which in Spring, might cause two problems. We’ll see, though, that these problems have already been solved in Java EE: - In Spring, when the the AuthorizationBeanis created, there may not be any active user sessions, and the container may not be able to create the UserCredentialsBeandependency – resulting in a nasty exception. In CDI, however, the container knows that it will not be able to get a @SessionScopedobject at that point in the life-cycle, so it waits until the credentials are accessed until it attempts to get an instance of UserCredentialsBean. If you access that object outside of the active scope, you’ll still get an exception, but that’s a different problem, one that can easily be solved with good application design. (In other words, “you shouldn’t be doing that.”) - In Spring, when the @ApplicationScoped AuthorizationBeanis created, assuming that it can get a hold of our @SessionScoped UserCredentialsBean, the instance that is injected will be the instance that remains for the life of the bean into which it assigned. This means that the same UserCredentialsBeanwill be used for all invocations and processing in our single instance of the AuthorizationBean, and that’s most likely not what we want, there would be some pretty nasty bugs (users sharing permissions, etc.) The problem can be solved by turning the bean into a “dynamic-proxy,” in the Spring configuration. In CDI, however, this is again taken care of us already, since the container knows that @SessionScopedbeans may not live as long as an @ApplicationScopedbean, and that there may be more than one active Session. CDI will actually find the correct @SessionScoped UserCredentialsBean, and use that when performing operations on the parent bean, automatically making sure that the right objects are used. Sweet! Interacting with Beans through Java APIs If you are trying to get a handle to a bean while working in a part of the system that does not support dependency injection for some reason ( @Inject in CDI, @Autowired in Spring), it’s sometimes required to ask the framework for a bean instance manually. In Spring you can ask for an instance of a bean (an object that you can actually use to do some work,) using Java APIs – this assumes you’ve set up the appropriate listeners in your web.xml configuration. MyBean bean = ApplicationContext.getBean(“myBean”); At first glance, you might think this is not possible using CDI, but really it would be more correct to say that it is not yet as convenient. There are technical reasons for this lack of convenience, but while I disagree with that aspect, I do understand the reason for doing things the way they were done. In CDI, there is a concept of a @Dependent scoped bean, which adopts the scope of the bean into which it was injected. This means that when we use Java APIs to create a direct instance of a @Dependent scoped bean, that it will not be stored into a context object (the Request, Session, Application, or other common scope.) In other words, @PostConstruct methods will be called when the bean is created, but since there is no way for CDI to tell when the bean will be destroyed (because it is not stored in a context – which would normally take care of letting CDI know when to do its cleanup,) @PreDestroy annotated methods cannot be called automatically. You have to do this yourself, and for that reason the bean-creation process is slightly more complicated – though not that much more complicated – than in Spring; e.g: “With great power comes great responsibility.” Before you read the following code, I quote from a CDI developer who agrees that things need to be simplified a little bit for convenience – so expect that in an upcoming release of CDI: “I can see that people are going to look at the instance creation code and say that CDI is too complicated. We’ve agreed that it’s lame that a utility method is not provided by CDI for those cases when the developer just has to use it.“ CDI’s equivalent to the ApplicationContext is called the BeanManager, and can be accessed through JNDI or several other methods (the easiest method is to use the “JBoss Solder” or “DeltaSpike” projects which provide a BeanManagerAccessor.getBeanManager() static method very similar (but more generic) to Spring’s WebApplicationContext utility class: Get BeanManager from Solder/DeltaSpike: public BeanManager getBeanManager() { return BeanManagerAccessor.getManager(); } getBeanManager()” function is provided by the base class BeanManagerAware. Don’t worry about how this works for now, unless you want to get into JNDI and server specific stuff. BeanManagerif you cannot use @Inject BeanManagerdirectly. The below options are purely for example, and should be avoided if possible. Get BeanManager from ServletContext (in a JSF request):non-standard Right now this is non-standard, but works in most CDI implementations and is proposed for version 1.1. public BeanManager getBeanManager() { return (BeanManager) ((ServletContext) facesContext.getExternalContext().getContext()) .getAttribute("javax.enterprise.inject.spi.BeanManager"); } Get BeanManager from ServletContext (in any Web Servlet Request): non-standard Right now this is non-standard, but works in most CDI implementations and is proposed for version 1.1. public BeanManager getBeanManager(HttpServletRequest request) { return (BeanManager) request.getSession().getServletContext() .getAttribute("javax.enterprise.inject.spi.BeanManager"); } Get BeanManager from JNDI (does not require a Web Request): standard public BeanManager getBeanManager() { try { InitialContext initialContext = new InitialContext(); return (BeanManager) initialContext.lookup("java:comp/BeanManager"); catch (NamingException e) { log.error("Couldn't get BeanManager through JNDI"); return null; } } Once we have a BeanManager, we can ask the container to give us an instance of a bean. This is the slightly more complicated part, but that complication is necessary; again, “with great power comes great responsibility.” Instantiating a Bean using the BeanManager: Don’t get scared, you only need to write this once and put it in a utility class (or use WeldX which provides this functionality already.) @SuppressWarnings("unchecked") public static <T> T getContextualInstance(final BeanManager manager, final Class<T> type) { T result = null; Bean<T> bean = (Bean<T>) manager.resolve(manager.getBeans(type)); if (bean != null) { CreationalContext<T> context = manager.createCreationalContext(bean); if (context != null) { result = (T) manager.getReference(bean, type, context); } } return result; } Take warning, though, that the CreationalContext object this method creates before we can get a reference to the bean is the object that must be used when “cleaning up” or “destroying” the bean, thus invoking any @PreDestroy methods. (Note, because the method above is actually losing the handle to the CreationalContext object, it will not be possible to call @PreDestroy methods on @Dependent scoped objects, and there-in lies the reason why creating beans in CDI is slightly more involved, and why this convenience was omitted – in order to force people to decide for themselves how to handle behavior that might be very important architecturally.) This is the same issue that I mentioned above, when discussing cleaning up bean scopes. Interacting with Beans through Custom Scopes The best way to manage instances of objects is to access them via injection, not through the Java API; in fact, any time you find yourself needing access to a bean through a Java API, you should ask yourself why you are not operating within the realm of dependency management via @Inject. Frequently, you can fix the problem at the root – just like Seam Faces Module does for Java Server Faces (JSF) – by adding injection support in other user-objects such as validators, converters, and more, so that you can use injection with ubiquity. Or the Seam Wicket, Seam Servlet, Seam JMS, and other Seam modules. Sometimes adding injection support this means registering a custom scope, which can sound complex, but is frequently as simple as attaching a bean scope to an existing contextual object such as the HttpSession for @SessionScoped, or the JSF UIViewRoot, for @ViewScoped in Seam Faces. The “Java Persistence API” vs. “the Spring way” Spring: First you set up a data-source in applicationContext.xml, then you set up a connection pool, then you configure Hibernate to use that connection pool as a data source, then you tell Hibernate where to get its transaction manager, then you need to set up a byte-code assist library (AoP) in order to enable cross-bean transactionality and security via annotations. Not only is that a good bit confusing to work through (unless you’ve already done it a few times,) but when I started using the actual Spring setup, I got LazyInitializationExceptions all over the place because I didn’t first understand what a Hibernate session was, which took another few days to understand and get working – I’ll get back to that in a bit when I talk about Extended Persistence Contexts in Java EE – something you should be using if you can. In my opinion Spring did a tremendous disservice by trying to hide the persistence context as though it were just an adapter. The persistence context is crucial to how the ORM model works; you need both the entities and the persistence context in your toolbox in order to be successful. Put the Spring configuration aside for a moment; now let’s talk about Java EE – all you need is /META-INF/persistence.xml, of which I’ve provided a quick working example below. For the purposes of these examples, I’m going to assume that you are using JBoss AS 6, or have already installed Hibernate on GlassFish (which is very easy to do, and I recommend since trying to get my application to run on the default EclipseLink has given me a lot of problems when attempting to migrate from Hibernate; Hibernate is still the industry leader in terms of stability and functionality, in my opinion.) <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="" xmlns: <persistence-unit <provider>org.hibernate.ejb.HibernatePersistence</provider> <jta-data-source> java:/DefaultDS </jta-data-source> <!-- Data source for GlassFish if you aren't using JBoss AS 6.0... <jta-data-source> jdbc/__default </jta-data-source> --> <exclude-unlisted-classes>false</exclude-unlisted-classes> <properties> <!-- Properties for Hibernate (default provider for JBoss AS, must be installed on GlassFish) --> <property name="hibernate.hbm2ddl.auto" value="create-drop"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.format_sql" value="false"/> </properties> </persistence-unit> </persistence> But where’s the configuration for transactions? Where do you set up database connections and connection pools? Well, transactions just work automatically if you are using a JTA data-source (more on that in a bit,) you can set up data-sources (and should set them up) in your application server configuration (meaning that since your data source configuration is not stored in the application itself, when your application is deployed it will use the data-source that is available on that server automatically,) and yes: “it’s that simple.” The mentality is a little different, but the end product is a much smaller, “weightless” application. You can set up data sources yourself, but for that you’ll need to read the documentation about your web application server. These files are typically kept separate in order to keep passwords out of your application source code or version control repository. So great, we have persistence set up; but what about those Hibernate LazyInitializationExceptions that we all love to hate? We’ll talk about that in the next section, but this is where EJB’s come in, and again, don’t get scared. It’s really very simple to make an EJB, and once you learn what they actually do for you, I’d be surprised if you want to go back. Here I’d like to mention that with the addition of a tool such as Seam 3‘s Persistence Module, you can use JPA and the PersistenceContext in standard CDI beans without EJB – Seam 3 also provides the same consolidated support for managed transactions, security, remoting, and messaging that is provided when you use EJB. Seam 3’s mission is to provide a unified programming model (standardized on CDI) to the Java Enterprise Framework. As we continue, I’m going to assume a little familiarity with Object Relational Mapping in general, that you know you need to configure your data entities with annotations(or XML) so that the system knows how to map your data to the database. If you are familiar with Hibernate, then JPA should be no stretch by any means of imagination because Hibernate is a JPA implementation. In fact, you can use most of the same annotations! @Stateful public class Service { } @Stateful public class Service { @PersistenceContext(type = PersistenceContextType.EXTENDED) private EntityManager em; } @Stateful public class Service { @PersistenceContext(type = PersistenceContextType.EXTENDED) private EntityManager em; public <T> void create(final T entity) { em.persist(entity); } public <T> void delete(final T entity) { em.remove(entity); } public <T> T findById(final Class<T> type, final Long id) { Class<?> clazz = getObjectClass(type); return result = (T) em.find(clazz, id); } public <T> void save(final T entity) { em.merge(entity); } } That’s all it takes to create a service class that knows how to interact with the database, so assuming that you have a few @Entity classes defined, you’ll be good to go. If you want more information on getting started with Hibernate and JPA, you can start here. But wasn’t that simple? About 10 real lines of code and you’ve got a fully functional database access object. You do need to create actual @Entity objects to save to the database, but that’s for a separate tutorial. Thanks to Dan Allen, I’ve attached a Maven project that includes an Arquillian-based test suite that you can study to get a greater handle on exactly what occurs when using a persistence context, both one that is transaction-scoped and one that is extended. This is exactly where Arquillian fills in nicely as a teaching tool. You can quickly make changes to the code and see how it affects the result…all in-container. You might want to take a look at this before continuing. You can download the lab here, unzip, open a terminal, and type: mvn test. You can also import the project into your IDE and debug to get a visual sense of what’s going on, otherwise, the tests should be pretty revealing! What about LazyInitializationException? (Persistence, Transactions, and the Conversation Scope) The first thing you need to know about LazyInitializationException is that they occur when Hibernate (or another JPA-style ORM) attempts to load a collection or related data from an @Entity object that is no longer “managed.” What is a “managed” object? To understand this term, we need to look back at how Hibernate and JPA work under the covers. Each time you ask for an object from the database, the system has some form of Session (Hibernate) or PersistenceContext (JPA) that is used to open connections, manage results, and decide whether or not the objects have been modified (which allows for clean/dirty state to determine when or when not to save objects back to the database.) Introducing a failure situation: Consider the scenario when an object is loaded from the database. This object has an ID, and holds a list of addresses that are associated with it. Person person = service.getPerson(1); Person is loaded from the database using a select statement like this: select from persons p where p.id = ? But as you can see by the SQL query, we have not let loaded the addresses since associations and collections are typically be selected only when something attempts to access them, otherwise known as “Lazy” initialization: List<Address> addresses = person.getAddresses(); Here is when you have the chance for a LazyInitializationException, because this method call actually makes a secondary query to the database, requiring an open connection: select from addresses a where address.person_id = ? That sounds fine, but let’s say for instance that a user does something on your website that triggers a Person object to be loaded from the database. Nothing special happens with this object, the application just loads it, reads some fields out of it, and goes on to the next page. When that next page is requested, we grab that same Person object that we loaded on the last page, try to pull more information out of it – for instance, the list of addresses that was not previously loaded, and oops! We got a LazyInitializationException - “Entity is not associated with a known PersistenceContext” What happened? My object was just loaded, can’t the Session or PersistenceContext object just open a new connection to the database and get the information that I need? – The answer is “yes” it can, but not if that Session or PersistenceContext has already been destroyed, and the object is no longer associated with it! You are probably not using an “extended” persistence context. We need to dig deeper… Understanding LazyInitializationException and the Extended PersistenceContext First, we need to know a few things about “extended” Persistence Contexts: - They live as long as the bean they are scoped to. - Objects with dirty changes queued up in the context until a the transaction with which a persistence context is associated is committed. If a context is destroyed outside of the transaction, the changes are never propagated to the database. It’s the transaction that triggers the session flushing. Flushing can also happen in the middle of a transaction if Hibernate/JPA needs to run a query against the database based on the state of a managed entity (e.g., a where clause) - Changes made to objects associated with the context are deferred from flushing if an un-handled exception is encountered. The changes will be flushed the next time a flush is attempted. - While the extended PersistenceContextis alive, you will never get a LazyInitializationExceptionfrom objects associated with that context, ever! @Statelessor @StatefulEJBs via the following annotation: @Stateful public class Service { @PersistenceContext private EntityManager em; } By default, this EntityManager (persistence context) will be scoped to the length of the transaction (which can be controlled by getting a handle to the user transaction and manually calling tx.begin() and tx.end(); however, if we add (type = PersistenceContextType.EXTENDED) to our injection point, we are now using extended persistence contexts, and that is what we probably wanted all along. @Stateful public class Service { @PersistenceContext(type = PersistenceContextType.EXTENDED) private EntityManager em; } Use case: the @ApplicationScoped PersistenceContext (Bad practice) Lets imagine for a moment that one single PersistenceContext is created for our entire application; the context is started when the server starts up, the context is never destroyed until the server shuts down, and all objects are held within that context. In other words, you’ll never get a LazyInitializationException, ever, because the context will always survive, and @Entity objects never lose their reference to it. But also consider that you are running this as a web-application, multi-threaded, that services multiple users at the same time. Changes made by each user are queued in our extended context until the next transaction boundary, which might be as long as until the application shuts down (destroying the extended context,) at which point all changes made by all users are saved to the database and the transaction is committed. That sounds pretty dangerous… and that’s what happens if you use an Extended PersitenceContext in an @ApplicationScoped bean. Objects associated with that PersistenceContext will stay around for the life the context itself, so obviously we must (sarcasm) need a PersistenceContext for each user, since we don’t want all changes being queued up and saved when the application shuts down – too many things could go wrong with that scenario. The @ApplicationScoped Persistence Context will start when the application starts up, and it will be destroyed when the application shuts down. There may be multiple transactions (bound to the EJB business methods) that occur within the lifespan of this context. Use case: the @SessionScoped PersistenceContext Let’s now create a PersistenceContext for each user session. Not a horrible concept, and does actually have applicable uses! Each user gets their own PersistenceContext that holds @Entity references to objects from the moment the session is created until the moment the session is destroyed. They can interact with these objects, save, update change and delete objects without fear of stomping on anyone else’s changes, and their queued changes are saved when the transaction is committed at the end of the session (or any other time a transaction is committed). The @SessionScoped persistence context will be created when the user’s session begins, and it will be destroyed when the user’s session is destroyed. There may be multiple transactions (bound to the EJB business methods) that occur within the lifespan of this context. But what if you want more fine-grained control over transactions? What if session scope is too long? We don’t want our users making changes on the site that won’t be saved until they log out or their session expires! Can’t we control transaction boundaries ourselves? I want a the context be created when they click “Add to Cart,” continue queuing up more changes, and finally I want a transaction to be committed when they click “Confirm.” We need to look at @ConversationScoped persistence contexts. But first, in your head, separate the idea of a persistence context and a transaction, since they are orthogonal. They work together, but the transaction is when the persistence context performs operations (sometimes automatically, like flushing); the persistence context is just a monitor and cache. Also, think of an extended persistence context like a @Dependent object (except it is uses a proxy). It is bound to the lifetime of the EJB into which it is injected. A transaction-scoped (default) persistence context, in contrast, lives and dies by the transaction. So you get a new one each time a business method is invoked (basically, every time you use the EJB – stateless in a sense). Use case: the @ConversationScoped PersistenceContext Conversation scope provides exactly what we’re looking for, and the way this works might be bit scary at first, “you’ll think, how can the system possibly do all of this for me?” To which I’ll answer, the magic of using a @ConversationScoped extended PersistenceContext, is that your users’ data, and your users’ saved state as they navigate between pages are living in the same place, and for the same length of time. A match made in heaven! @Stateful @ConversationScoped public class Service { @PersistenceContext(type = PersistenceContextType.EXTENDED) private EntityManager em; } The first thing to know about @ConversationScoped beans is that by default, conversation scope beings when the request begins, and ends when the request ends and the conversation is not in a long-running state. The call to converstation.begin() only states intent for the scope to perpetuate. This means that a PersistenceContext injected into a conversation scoped bean will live by default for one request, but of the conversation is started, it will live until the end of the request when the conversation ends. (The reason it is kept alive until the end of the request is because usually the end of the request is when rendering is completed, and destroying information prematurely could result in errors during that render.) The @ConversationScoped persistence context will be created with conversation.begin(), and will be destroyed at the end of the request on which conversation.end() is called. There may be multiple transactions (bound to the EJB business methods) that occur within the lifespan of this context. Use case: the @RequestScoped and custom scoped PersistenceContext It stands to be mentioned that an extended persistence context can be injected into a bean of any scope, and the same rules will apply. If @RequestScoped, for example, the context is created when the request begins, and is destroyed when the request ends; if custom scoped, the same is true: There may be multiple transactions (bound to the EJB business methods) that occur within the lifespan of any context, and when using an extended persistence context. Going beyond EJB/JTA with Seam Persistence Module: Seam 3 provides a Persistence module which provides a programming API for short-lived persistence contexts (not extended contexts,) much like what you would find in Spring. You can use declarative @Transactional(BEGIN) and @Transactional(END) methods, and the like, in addition to tying in extra security features, and the power of CDI extensions, interceptors and more. Check back on this blog or on Seam Framework.ORG for more updates on this module. EJB – Why are my thrown Exceptions wrapped in an EJBException? Systems using an ORM like hibernate will frequently utilize a Data Access Object (DAO) layer in which standard exceptions are used to reflect the outcome of common operations. Exceptions such as, NoSuchObjectException, or DuplicateObjectException, are common place. If the system relies on catching these exceptions to recover appropriately and continue functioning, developers switching to EJB may be surprised when it comes time to run their application; it quickly becomes apparent that the exceptions being thrown are not the exceptions that are expected – everything is wrapped in EJBException. At first, you might think, “This is invasive, and tight coupling!” but you have to think about this from the perspective of a transaction-aware system. EJB handles JTA for you, meaning that if you get an exception, EJB needs to know when to roll back the transaction, and when not to; in order to facilitate this decision, EJB has the concept of an @ApplicationException. “Application” exceptions versus “System” exceptions: - An application exception is one that has meaning to the client/consumer of the services, and may affect error recovery, flow of logic, navigation, or any other use within the app. - A system exception is one that represents a failure in the underlying services that cannot be recovered from, and should never be handled by the client/consumer (aside from very basic error handling like printing “500 – Something horrible just happened.”) By default, every unchecked/ RuntimeException thrown from an EJB service will be treated as a “System” exception, meaning that the transaction should be rolled back, and a complete failure has occurred; you cannot recover, and you should never catch an EJBException in order to make a decision, other than very basic error recovery – sending a user to a generic error page, for example, or restarting a web-flow / wizard. So what about the exceptions that we do want to recover from, exceptions that we know should not affect the state of the current transaction? Well, in order for EJB to respect your wishes, you must tell it which exceptions have meaning in your application; therefore, the exception classes must either be annotated with @ApplicationException, or if you cannot chance the Exception source itself, you must list your exceptions in /WEB-INF/ejb-jar.xml (example below.) Don’t worry, this doesn’t take too long if you have a good exception hierarchy; only the top-level exception must be configured because exception subclasses automatically inherit the configuration from their parent exception types. <?xml version="1.0" encoding="UTF-8"?> <ejb-jar <assembly-descriptor> <application-exception> <exception-class>javax.persistence.PersistenceException</exception-class> <rollback>true</rollback> </application-exception> <application-exception> <exception-class>com.example.exceptions.DataException</exception-class> <rollback>true</rollback> </application-exception> </assembly-descriptor> </ejb-jar> Now your exceptions will be treated as transaction boundaries, or ignored depending on how you need your system configured. This prevents things like partial-commits, and combined with the conversation-scope (next section,) also prevents things like partial-commits from wizard interfaces. Conclusion. Seam 3 in particular strives to give extensive user-documentation, hopefully making things much simpler to adopt, and easier to extend. The main purpose of this article was not to bash Spring, although I may have taken that tone on occasion just for contrast and a little bit of fun. Both Spring and Java EE are strongly engineered and have strong foundations in practical use, but if you want a clean programming experience right out of the box – use Java EE 6 on JBoss Application Server 6 – JBoss Tools – and Eclipse. I will say, though, that the feeling I’ve gotten from the Spring forums vs the Java EE forums, is that there are far many more people willing to help you work through Java EE issues, and more available developers of the frameworks themselves to actually help you than there are on the Spring side. The community for Java EE is much larger, and much more supportive (from my personal experience.) In the end, I did get my application migrated successfully, and despite these issues (from which I learned a great deal,) I am still happy with Java EE, and would not go back to Spring! But I do look forward to further enhancements from the JBoss Seam project, which continue to make developing for Java EE simpler and more fun. Don’t believe me? Try it out. Find something wrong? Tell me. Want more? Let me know what you want to hear. Shameless plugs for some other projects that I think you’ll find useful: OCPsoft PrettyFaces and Rewrite – our own open-source tools for controlling the URL of your application, making pretty, bookmarkable URLs. The easiest (if I don’t mind saying, and I don’t) way of controlling URL parameters, query-parameters, and validating input that comes into your application through the URL. JBoss Arquillian – the single most inclusive and best Unit/Integration testing experience you will find in any application framework. Not only that, but it actually runs your code IN the container, so it’s being tested like it’ll actually be run in production. Posted in Hibernate, Java, JSF2, OpenSource, Seam, Spring, Technology The line between Java EE 6 and extensions like Seam 3 is a bit blurry but that’s a nice article. Thanks for taking the time to write this. Also, web.xml and faces-config.xml are optional in Java EE 6. persistence.xml and beans.xml are the only required descriptors. Thanks for reading 🙂 I corrected the descriptors. Nice article! A small correction: the table listing all the built-in scopes talks about @DependentScoped, but the correct annotation name is @Dependent, as used later in the article. Fixed, thank you 🙂 Great post. I think EJB, finally, have done things right. Great post Correction: JTA transactions are not bound by any Scoped anotations. A JTA transaction begins with the first EJB method annotated with Required attribute (default) or with RequiresNew. The transaction is commited automatically when returning from this method. So no matter the scope of the bean (@RequestScoped or @SessionScoped) transactions will always be commited (and persistent context flushed) at the end of request. This is as it was in EJB 2.0, and stays the same in EJB 3.1. CDI does not change JTA semantics. Otherwise please include some references to pages in specifications. Thanks for the clarification! I was in the process of correcting myself as you posted this 🙂 What concerns PersistenceContext’s (extended) scope, you are absolutely correct. And @ConversationScoped extended PersistenceContexts are very handy in applications – you get JPA entity cache designated for a single conversation. No LazyInitializationException’s, yet memory consumption on the server (for JPA entity cache) is small – at the end of conversation entity cache is deleted. The problem (or maybe “thanks god” 🙂 ) is that JTA container managed transactions are always request scoped. Actually you got me thinking whether I would like to have 30 minutes (session timeout) long transaction… Probably not 🙂 Not even conversation long transaction. Transactions should be as short as possible to consume as little DBMS resources as possible. Good luck! Hmm… well actually the transactions only last for the duration of the @Stateful/@Stateless EJB business method, so unless you are controlling transactions manually, they shouldn’t be around that long 🙂 Transactions normally start and end during the same request. For an extended persistence context there is a nice trick, well known by Seam developers: you change the flush mode of the Hibernate session, disabling the auto-flushing feature at the beginning of the conversation, and manually forcing the flash at the end of the conversation. So, transactions don’t waste DBMS resources and the user-operations are still atomic with nice isolation properties, like they were in a long transaction. Are you in trouble if you suggest using Java EE out of the box instead of Hibernate? You’re working for RedHat you know 😉 Good article I already forward this as today’s mandatory reading for my team! Instead of using a conversation scoped persistence context to hold our conversation data (which seems hackish), I’m guessing it’s more natural to just use a conversation scoped object (without a persistence context) to keep our conversation data (e.g. a shopping cart class), and only in the end, when the conversation data needs to be persisted (e.g. user tries to buy the items in the shopping cart), we can use a the default and simple RequestScoped persistence context to take the required data (e.g. list of items in the cart) from the conversation scoped object (e.g. the shoppingCart) and try to validate/persist it. What do you think bout this approach? Do you see any disadvantages? And thanks for the great article. @Pooria: your approach is likely to face the well known LazyInitializationException, and King Gavin (Hibernate and Seam creator) thought up to the conversation scope to avoid this exception. Think to your flow: conversation begin and you load the user object from database. In the next page you access the user address list…and BANG! LazyInitializationException. select a EntityA a join FETCH a.bList b another LazyException solution 😉 Thanks for an interesting article! We are currently running ejb 3.0 and spring side by side via the jboss-spring deployer. One nice feature about spring is the @configurable annotation which basically spring configures any pojo without having to worry about configuring it inside the spring container (using aspects) Does ejb 3.1 offer anything similar? (I see the BeanManager exists, but this does provide a good substitution). Furthermore, can @Asynchronous be used with any old pojo (like spring’s @async method) or does it only work for EJB components? Thanks correction : Does ejb 3.1 offer anything similar? (I see the BeanManager exists, but this does NOT provide a good substitution). Furthermore, can @Asynchronous be used with any old pojo (like spring’s @async method) or does it only work for EJB components? @Asynchronous is specifically for EJBs, but don’t forget that every simple POJO can be made into an EJB by just adding the @Stateless annotation to it. So in a way, yes, in some way POJOs can use @Asynchronous 😉 So what jars do i need to include in my Maven POM so i can deploy the JEE 6 persistence on Tomcat or some other non JEE 6 container? Or do i need to constantly upgrade my entire server farm to the latest greatest application server software (most likely for a massive fee) just for the pleasure of doing what i can do with a few simple jars with Spring? Since when does open-source cost a massive fee? You’re going to pay the same price if you pay VMWare for support on the Spring stack, as if you pay a company for support on Java EE. No difference there, sorry 😉 Indeed, a still don’t get why Spring fans keep using this argument of Java EE supposedly being so massively expensive. Last time I looked Glassfish, Geronimo, JBoss AS etc could all be downloaded without paying anyone a penny. In case of Glassfish the download size is even smaller than if you downloaded everything you needed to build a complete application server with Spring. Java EE has many totally free (gratis) and fully open source implementations, which are also used a lot. And it’s worth mentioning that Spring actually takes individual pieces of Java EE and re-assembles them, so you’re still using Java EE. Java EE is still the core, and Spring is still the extension, it’s important not to forget that. I quote from a friend who sent me an email in response instead of commenting: “One phrase: web profile (or JBoss’s slightly perfected web profile) I would simply point over to (or quote) Adam Bien. He’s whole “what does lightweight really mean” entries are great. Also Gavin’s “You should update to Java EE 6” And Cay I absolutely agree that bloated runtimes and runtimes which are slow to load suck. But I don’t want to have to bring the whole internet with me just to deploy (because that slows down the deployment process). It’s about striking a happy balance, and that’s the web profile :)” Well, here’s the problem…I still fail to see a single advantage EE6 has over Spring. It’s just different annotations/config files that look very similar to the Spring ones, so what’s the benefit for an existing app? None. And here is the big one: with Spring we run our app in Jetty and can use the excellent Maven/Jetty integration for unit testing. With EE6 I need one of those big, clumsy app containers like Glassfish, JBoss or Weblogic. No thanks: I’d rather stay agile and lightweight with Jetty. Only Spring allows us that. Big, clumbsy… embedded? I never said Java EE had advantages over Spring; I simply said it did things differently, and did some things for you 🙂 When you get right down to it, the level of work is just about the same. With regards to easy testing: Heh, times are changing. Yeah it’s a little bit of up-front work, but as it says in the article, “the benefits are clear.” >I still fail to see a single advantage EE6 has over Spring. It’s just different annotations/config files that look very similar to the Spring ones, so what’s the benefit for an existing app? Well for me this is the other way around. I fail to see a single advantage Spring has over Java EE 6. The other way around the same thing really holds; Spring is very similar to Java EE, just with a different API. All our existing apps are Java EE based (5 and now 6) and we’re very happy with it. Why should I migrate those to Spring? Very nice and very useful article! I’ll bookmark this for sure for later reference. One small addition about the data source config. You say the following about this: >must be placed in the actual JBoss server deploy directory But this is not completely true. You can also include the config file in your application code by creating a jboss-app.xml in META-INF and referencing the file there. This is handy for people who like their application archives to be as self-contained as possible. Of course, for some organizations or team structures it’s better to have the data sources (and actually other config files like JMS queues) defined in the JBoss AS installation itself on each server where the app is deployed to, but just wanted to mention that there is a choice 😉 Thank you! That’s very good to know! I’ll update the article when I get a chance 🙂 This article is pure yet subtle propaganda. You do not state it clearly but with all that ‘ee6 does it for you’ things the message is that spring is complicate and ee6 is better. You work for rh yet still act ad a spring dev lookin’ for the first time at ee6! come on be more honest! and please : cut that part about getting beans programmatically, it’s better to say that ee6 doesn’t like it without scaring people that much on such basic tasks. Didn’t like your way of propaganding ee6 but I like ee6 and appreciated some of your thoughts. You’re right. I’m out to secretly undermine Spring by sharing my own personal experiences and lessons learned from doing a migration to Java EE 6 🙂 Just kidding. Also, I’m confused: wasn’t the part about getting beans programmatically admitting that Spring did something better? Why do you want me to change that? I updated the post to be more clear that I do in fact work for JBoss. PS. Sorry for coming across sarcastically. I’m hungry, and you are right. I’ll try to do better in the future. 🙂 Hey! No problem at all. I was a little aggressive too. Didn’t meant to, but I was. Don’t want to start a flame at all. So sorry for me beeing too severe. In the end EE6 and Spring are two different approaches. Sometimes I like the magics of EE6. Everything works right out of the box. No need to write the same things all over again. Sometimes there’s too much magic and I want a lighter system. But EE6 is really good and your article definitively worth reading (and posting …). three links that do not really say anything about the real issue. I can simply build a full enterprise application using Spring and deploy to any container, whether that is Weblogic 8, 9, 10, 11, Tomcat 5, 6, etc… I don’t even have to think about some vendor driven spec to know my application will work. JEE would be great if it followed the Spring model and was simple a series of jars to include in your war. Simply include the jar via Maven and done. The xml config points are interesting considering all the xml configuration you highlight in your article. Until you can simply include a couple of jars in a war file to do what 80% of java developers need like Spring can, you all will be begging for people to switch to the bloated jee platform. If you like the Spring stack better than the Java EE stack, that’s awesome. Not gonna try to stop you 🙂 Use what works for your business, that’s always the bottom line. Spring just has a different default model: include everything in the war and end up with fat jars vs deploying very slim wars and assuming your deployment platform already has the required functionality. It’s just a different thing. Hard to say which is really better. In desktop applications, do you statically link in all libraries and whatever, or do you assume the OS has some base level of functionality? In practice, both happens really. Also, including everything in the .war is arbitrary too. Since you still have to rely on a base level of Servlet and JSP that is already installed. If you want to include JPA, JTA, EJB, JSF, JMS, etc in your .war, why not include Servlet and JSP in it too? In that way, the only thing you assume to be installed on the server is a JVM. You don’t deploy a .war, you deploy e.g. Tomcat with your application already in the webapps directory. Technically there is very little difference as Tomcat itself is a Java application too. The same can be done with JBoss AS. I can assume a certain version of JBoss AS is already installed on the server and thus deploy very rich EAR apps that are extremely small (couple of 100k max), OR I can assume only a JVM is installed and deploy JBoss AS together with my app. Luckily we all have a choice as Lincoln says. If Spring works for you, by all means keep using it 😉 (although don’t forget that there are Spring Application Servers too, where basically an AS is build on top of Spring and all required libs are added to the Tomcat common libs dirs, so the picture is not that black/white) Great article, First article that I have seen that actually brings CDI and EJB3 together and in a coherent fashion. But I dont think I am going to migrate anything to JEE6 in a hurry. All the JEE has done so far is to catch up with Spring and not do anything better than them. Ok, you solved the lazy init exception, so what. EJB3’s Interceptors are still way behind what I can do with first class AOP support that I get with Spring. All JEE seems to be aiming for is simple web/EJB projects, but almost all enterprise work that I do need additional features that frameworks like Spring Integration, Spring -Batch provide out of the box.Sure, there are other open source projects out there, but Spring does all these seamlessly and adding them is very little add on work unlike EJB3 where everything is PIA. I would have really jumped on the EJB3 bandwagon if only there were things that were compelling enough, but if all the work has been done so far is to catch up with Spring, I’d rather use Spring which atleast has had the time to mature. This article wasn’t meant to be an attempt to convince anyone to switch, just an explanation of what’s going to happen if you do. @Lincoln, I really like how detailed you article is: that’s awesome! Thank you! Thing that appeared to me rather unnecessary negative are: “seemingly unresponsive Spring forums” We both know this is not true, or if it is, can you back it up by showing me at least 5 questions that you asked on Spring forum that were not answered? “download your application server of choice, which at this point will probably be one of either: JBoss AS 6, or GlassFish v3” why would I need these guys if I have plain Tomcat ( and yes, I know how to use it )? I think the overall direction is to hide the complexity and focus on the real task ( some people call it “business”, I call it a “task” ). Having that in mind, if I would really switch from vanilla Spring / Hibernate stack today, that would definitely be Grails: can’t go simpler than that. I am sure you know that Spring also has a complete annotation driven configuration, I personally _like_ my XML, but it is there. Plus if I need flexibility I would go with Grails bean builder. Finally, I know that you are a big JSF supporter, we actually met at Philly ETE conference, and I attended all your and Dan Allen’s talks, just to make sure I “get” the message. And even thought Spring Webflow is practically sits on top of JSF, I really think they could have done better 🙂 Again, thank you for a detailed article, I love those, and even though I belong to “The World” good luck in “Pure JEE vs. The World” mission. /Anatoly @Anatoly Actually, you’re right. I went back and double checked my posts on Spring’s forums and I saw that I participated and was answered serveral times. I was distinctly thinking of a real experience I had on some forums though, and I’m embarrassed to say that it was probably the Hibernate forums before they moved to JBoss.org… All I remember is asking questions about Spring open-session-in-view filters, and getting crickets. I was asking in the wrong place 🙂 You’re also right. The article did not need any inflammatory statements in the first place, and I’ve removed it. –Lincoln If there is any unresponsive forum in the world it’s indeed the Hibernate forum. Getting any question answered by anyone knowledgeable, let alone the actual developers, is near to impossible. @Anatoly >why would I need these guys if I have plain Tomcat ( and yes, I know how to use it )? Well, because the article is about Java EE and those are examples of popular Java EE implementations? Your reply is like replying to an article about iOS where the author mentions you probably would get either an iPhone or iPod in order to use iOS, that you already have a Nokia and yes, you know how to use it. Doesn’t make a lot of sense, does it mate? 😛 But I have not edited this article, I added a comment. I will let you think about my comment as “Tomcat”, and this article as “JEE”. [ it’ll “click” after you read it over a couple of times ] Going back to “Nokia” point: “why do I need iPhone, if my Android already does it, and does it better” 🙂 /Anatoly A few years ago I’d have to pay someone for this ionfrmiaton. Thanks Lincoln … for a detailed and well written blog. A lot of things about JEE6 were answered by this post. I liked your article, thanks! Two questions: (1) Does Java 6EE CDI also support annotated components like in Spring through @Component, @Service, @Repository and @Controller? (2) How would you compare Java 6EE CDI and Guice? @Erik 1. I’m not sure what you’re asking here. CDI has tons of annotations (Including @Model, @Service, and I believe @Repository), CDI also lets you create your own Stereotype annotations to extend functionality, and has *tons* of extension points. 2. CDI is a much richer (but also more heavyweight) DI container. It features the same type-safe approach, but has an extreme level of extendability. If you don’t need that, Guice is still a good option, but CDI is gaining major ground already, so it really depends on what you want to be using in 5 or 10 years, and what extensions you want to use with it. Let me run annotated transaction in tomcat and then i will make a switch from Spring to EJB. vinay, that’s a bit like saying you switch to apple mail as soon as you can run iOS on Android. It doesn’t really work like that. Java EE is a full platform, you don’t run Java EE on Tomcat. (although many Java EE implementations use Tomcat for the Servlet part). You can run bits and pieces of Java EE on top of Tomcat, but you don’t get the full conveniance then and you are basically repeating the work that eg Apache Geronimo has already done for you. Paul I remember seeing in one of articles that you could run lightweight EJB’s on tomcat. They would not be fully functional. Do not remember much but will post a link as soon as I find one. But my point is that for a lightweight application, Spring + hiberante is providing with all the feature sets which EJB + JPA is providing JPA has been derived from Hiberante so basically you are only getting EJB’s and that you are using Stateless Session bean most of time. I had worked with EJB for a long time but now with Spring , I do not find enough motivation. In most of cases, Spring’s transaction capabilities are better than EJB’s. Here is a comparison of both I think the point is that you don’t HAVE to assemble your own stack. JBoss AS basically already is Tomcat + Hibernate + EJB + JTA. Why painfully build your own stack with possible incompatibilities? Hopefully JBoss AS will also specifically add a “web profile” configuration (they have “all”, “default”, “standard”, “minimal”, “osgi” and “jboss web standalone” now. Those are just pre-assembled configurations. You can delete the other ones or make your own.) Thanks for the article. It made me want to investigate EE6 a little bit more. However I am still turned off by some of the decision that seemed to linger from the old EJB days: checked exception for example… Lastly I believe you made a mistake writing that in spring you will get an exception if you inject a session scoped bean in an application scoped bean. Spring wraps its scoped objects with proxies that are injected immediately. So on creation the application scoped bean will only get injected with its own proxy that points to the session bean. However if you try to access that session scoped bean outside of a session context you will get the exception you are talking about. From what you described this is exactly the same behavior as EE6. […] Spring to Java EE – A Migration Experience is very interesting article. I haven’t looked into EE recently but from what I read coding EJBs is very simple compared old days. Categories: Java Extreme MakeOver 6 October 2010 at 14:20 – Comments […] i believe that JEE 6 has caught up to Spring. it is light, fast, easy and standardized. but Spring ppl are very smart and are business oriented. while most ppl are debating Spring vs JEE (don’t forget python, php5, ruby on rails ..etc), Spring is fast moving ahead. Their applications are ready for the Cloud with all the tools u need. They make ideas like SaaS, PaaS accessible to ordinary developers and within their reach. see their support services are unmatched and the their sub-projects are stamped ‘Enterprise ready’. I use JEE in my applications. but I keep an eye on Springsource. to say the least, they are always one step ahead and their vision into enterprise apps is just impressive. […] 2) […] Java EE 6 vs Spring boils down to one decision: Do you want to be dependent upon the application server version again? Using Java EE you will be bound to the versions bundled with that application server. How many of you havn’t struggled with WebLogic shipping/supporting e.g. old WS-* versions?! With Spring you may use all the new features without upgrading your application server, and thereby each project may upgrade at it’s own pace. When depending upon Java EE you are forced to upgrade the application server and thereby coordinate the change with all other applications/projects running on the same application server instance/cluster. With Java EE you are send back to the days where you had to wait for all the applications to be ready (and wait for new test and production environments). With Spring you may upgrade your project as you need new features provided by Spring/Hibernate without being dependent upon others. I don’t buy into your argument that upgrading Java EE is somehow more difficult. From an application architecture point of view, migrating your application code to a major new version of Spring is just as risky as migrating to a major new version of Java EE. There is no difference here. If you’re using the Spring deployment model of more or less completely shipping an AS in your .war to blindside operations (“no, i’m not upgrading any libs, I really wrote that 100mb of code myself”) than that’s a very debatable practice. Most importantly, the argument of upgrading individual apps at their own pace because they all run on the same AS is silly. In practice you don’t do that! Every app runs on it’s own AS and that one typically runs on it’s own virtual server. Spring fans of all people should know this is the preferred deployment strategy now that VMWare own SpringSource. Totally Seam and JBoss biassed, totally antispring oriented. Spring was the revolution, it will be, just Guize compares to it’s simplicity and intelligence. Period ! The purpose of this article was not to be balanced and fair. This was a discussion about the issues encountered when moving *away* from Spring, and moving *to* Java EE / CDI. Thank you for reading. I’m curious about injecting a @SessionScoped bean into an @ApplicationScoped bean. I couldn’t quickly find where this is addressed in the JSR-299 spec. The only ways I could see this working are synchronizing access to the @ApplicationScoped bean, using ThreadLocal storage for the @SessionScoped bean, or having multiple instances of the @ApplicationScoped bean (which you wouldn’t expect if there weren’t any injected @SessionScoped beans). Is there another option that I’m missing? I’d be more comfortable understanding what the consequences are (in terms of performance, memory use, etc.) of having such a straightforward way of representing something that seems like it could have a lot of complexity (as evidenced by how you describe Spring’s handling of it). Well that’s the clever thing, and like I said this is the same in Spring if you use , but Weld/CDI use a Java Proxy to wrap the @SessionScopedbean reference that is @Injected into the @ApplicationScopedbean. This means that whenever the @ApplicationScopedbean attempts to access the @SessionScopedbean, it actually accesses the proxy, which accesses the current user’s Session (the BeanManager takes care of all of this,) and gets a reference to that current user’s @SessionScopedbean. The @ApplicationScopedbean is not inherently threadsafe since there is only one instance of it in the entire app, but its accesses to the session scoped bean *are* local to each individual user. Unless you do bad things, you won’t have bleed-over. Obviously @SessionScopedbeans are not inherently threadsafe either, but @RequestScopedbeans are, because they only live within the scope of a single request, or one single thread. Ahh, of course. I had thought of maybe putting a proxy around the @ApplicationScoped bean, but that wouldn’t work for direct field access to the @SessionScoped bean. I didn’t think of putting the proxy around the @SessionScoped bean. Thanks! Delegating via the proxy to the current user’s session is a very clever trick. It’s a little like what TLS does in the standard JDK. You seem to be accessing a static variable, but under the hood it delegates to an instance in the current executing thread. Wouldn’t it make more sense that the injection point for the @SessionScoped bean be “@Inject private Instance credentials” instead? That way you can obtain the correct instance of credentials whenever you need to do something with the object and not have to worry about the session. Oops. That should be: If it ain’t broken, don’t fix it. Spring works. Maybe JEE does too. Who cares? Do what you feel is good. Deploying a light war with a heavy server or a fat war with a light server is a matter of necessity for each situation. It depends. I concur with some earlier opinions: All JEE has done is catching up. It’s like Spring is the standard and JEE is the extension. It certainly feels that way. The only thing truly compelling I have heard (although I haven’t heard that much) is the Conversation scope. We implemented a custom conversation scope for our Spring, JSF, Hibernate application, and it wasn’t that hard (merely 4 classes with some handling of the Hibernate Session and Spring Transaction) but I would like to have that done for me. But Tomcat is easy, light, fast, and very well supported. Spring was a huge change back in the day. They are beautifully documented and developed. Transactions are good, AOP is good, and specially, their attitude is AWESOME. Spring doesn’t tie you to anything. You can integrate with what you need, run as complex or simple applications as you wish, use the server you see fit. And they do that with a smile… so much different that Gavin “I’m so much more smarter than you so don’t bother me” King attitude. In the end, and with two close specs, it’s that what keeps me in the Spring side. That, and more than 6 years doing what JEE is now coming to do: Keeping Java as an enterprise option. > If it ain’t broken, don’t fix it. Spring works. Maybe JEE does too. Java EE does. Believe me 😉 >Who cares? Do what you feel is good. I think that’s the entire idea. If Spring works for you, by all means keep using it. And of course nobody is forcing anyone to migrate existing apps from Spring to Java EE, that would be really silly. For new projects and new teams however I think Java EE is a very compelling choice. People already familiar with Spring and starting with a complete new project should really consider Java EE. I think this article will be very helpful for those people. >And they do that with a smile… so much different that Gavin “I’m so much more smarter than you so don’t bother me” King attitude. I hear you, and maybe Gavin is a little like that, but he does know his stuff. There’s a difference with thinking you know better while in reality you do not really (an all too common phenomenon under developers). Personally what drove me away from Spring is Rod’s attitude of “EJB is evil! Containers are bad! J2EE is heavyweight”, long after Java EE 5/EJB3 was released. Your experience may differ, but in my opinion if you wanted to be part of the Spring culture, then having to hate EJB and J2EE was just part of it. There are two things very wrong with that. First of all, it’s a little strange that in order to use technology A you actually have to be indoctrinated to hate technology B, and secondly, most of the things said by Rod actually applied to EJB2 and J2EE 1.4. I’m not sure why rants against EJB2 are still relevant in 2010. >It’s like Spring is the standard and JEE is the extension. […] >In the end, and with two close specs, it’s that what keeps me in the Spring side. That’s just the point: Spring is not a standard, and there is only one implementation. It’s sometimes easier (but not necessarily better) to work alone than to have to decide with other people. Spring had once more features than Java EE. That’s gone. Great article. The only thing I’m worried about is developing application on java application server like Jboss AS that is painfully slow. When project is large you have to wait up to 3 min for Jboss AS to start (or publish application). Tomcat rules! Nice article. I think both JEE 6 and Spring has their strength at different areas. I just don’t see Spring is over. JEE is getting stronger at the business-logic tier and the persistency tier, however, it still doesn’t have a real competitor to Spring at the presentation tier. The “page controller” pattern behind JSF still prevents any implementation of the specification from significantly reducing memory consumption and improving performance. Another great feature in Spring is its openness. The framework can be (in fact have been) integrated with other popular frameworks without huge efforts. JEE’s biggest advantage is its vendor support. In addition to JEE server licenses, no one would want to pay extra to support a framework running on the server. In my opinions, JEE or Spring? It depends… For addressing the page-controller JSF issue, you can use my own tool/well-adopted open-source project: [[PrettyFaces]] – You set up URL-mappings that point to one or more pages; actions can be invoked when those pages are requested, thus effectively creating a front-controller instead of a page-controller pattern. > installed Hibernate on GlassFish (which is very easy to do, and I recommend since TopLink has given me a lot of problems; Nice article, but why recommanding an implementation over another one because you had problem using it? I don’t think it’s fair. What do you mean by Toplink? If you mean Toplink Essentials, this one has been obsoleted by Eclipselink. If you mean Toplink JPA, this is a commercial product based on Eclipselink. Eclipselink is an implementation of JPA 1 and JPA 2 (RI). If you have a jpa compliant application, Eclipselink just works. Please be precise if you think it doesn’t: it’s the RI for JPA 2, so I’d really doubt it and it’s really worth filing a bug. I’ve got a lot of production apps using Eclipselink. One more detail: you don’t have to install Eclipselink in Glassfish, you can bundle it in your application if you want. That means you can have different versions for different projects, and you don’t have to install anything on glassfish to deploy your application. That’s a fair enough point. I suppose I should have said, “Coming from Hibernate, there are enough differences between it and EclipseLink that I had a lot of trouble getting my existing app to run on it.” Why? JPA is a standard and switching should be easy. If you say it’s not, it would be interesting to know why. Eclipselink is stable and has been there longer than Hibernate. In your article, it sounds as if Hibernate is better, more stable and Eclipselink is problematic. This simply isn’t the case, and I bet you can change your JPA provider easily. Like I said, I personally had problems using Eclipselink instead of Hibernate 🙂 It’s been a while since I went through it, so I can’t pinpoint them specifically right now 🙁 […] week a JBoss core developer, Lincoln Baxter, published a detailed and insightful report on his migration experience from the Spring Framework to the new Java EE 6 platform. In the post […] […] I still need those proprietary frameworks? Last week a JBoss core developer, Lincoln Baxter, published a detailed and insightful report on his migration experience from the Spring Framework to the new Java EE 6 platform. In the post […] Nice article. Unfortunately we’ve been mandated to use Websphere – which means we’ll get support for this around 2019. @Gene: maybe you are wrong: […] boost his argument in his blog post, Badani pointed to a write-up from fellow Red Hat employee, and JBoss core developer, Lincoln Baxter, which discussed the […] […] [Technik] Spring to Java EE – A Migration Experience […] Regarding your example of a EJB as generic DAO. has 2 compilation errors. Shouldn’t it just be (not tested it myself): […] which, for reasons best explained in a separate article, is not easy unless you use some “Solder.” However, this approach is recommended only […] Guys, who designed EE6 should just do the same things that were done in Spring but better. Instead they chose their own way, and this way is seemingly more complicated than Spring way. In turn, benefits are very doubtful. The case might be closed here. New EJB technology is not going to fly. It does not worth time to spend. Spring works just great. Why somebody time and money to do the same things that are already done. Everybody who remember EJB 1.0-2.1, know how much pain it was to deal with server specific deployment descriptors and other meta data. Why do we need it again ? […] The days of Spring and popular Web Frameworks are over, is clear from this article on Java EE 6. Migrating from Spring to Java EE 6 is thoroughly described here. […] CDI might replace Spring DI, but the power of Spring lies in the support for Aspect oriented programming (AOP). Though we can workaround this in JEE, with EJB interceptors, we are forced to move to EJB and the flexibility of applying aspects on Java bean level is not present. Even if projects wants to move from spring to JEE. Non support of AOP is going to be the blocking factor. That’s an interesting perspective, actually. I think that CDI interceptors provide a great deal of functionality as well, but the idea that you can extend the public interface of a class is something that is yet to come in to EE as a major practice. I think we’ll see some serious power with interface-driven implementations, however (where you define an interface that is automatically bound to an implementation to meet your needs.) That’s one place where CDI will shine well 🙂 I’m in the way to migrate mi old Spring + Vaadin application to JEE6, right now i’m having an issue with Spring-Security, how can i use Spring-Security with JEE6 ??? There is no way to @Inject the AuthenticationManager. :S Some help will be appreciated. I suggest taking a look at Seam Security or PicketLink (unfortunately the documentation link seems to be broken at the moment – trying to get that fixed.) […] forte, molto forte, ma l’arrivo di Java EE 6 sembra aver incrinato le certezze di più d’uno, ma non di […] […] be discovered and included in the larger application. (For more details on these descriptors, see this post.)To run your application with the least amount of effort, you should use a full Java EE Application […] That is why the Head First book series is so popular and effective. Those books make you think/experience the problem first and then help you finding a solution. Nice article, very nice indeed. But as you show it is possible to use an EJB as a, let say view component, so tha page can acess it as a Managed Bean. This aproch is not so heavy? EJB3.1 is pretty lightweight these days 🙂 Just a few annotations. Spring to Java EE – A Migration Experience | OCPSoft… Thank you for submitting this cool story – Trackback from Java Pins… My experience of several months with Seam 2 is that there are some inherent speed bumps introduced by JSF and bijection. For UI controls with many elements (tables, dropdown boxes), the bottleneck is not at all the database (even without 2nd level caching), but the time spent by Seam during the RenderResponse phase. The heavy use of bijection and events (see, for instance, org.jboss.seam.core.MethodContextInterceptor.aroundInvoke() which sets 5 context variables, each creating 2 events) has a visible impact on the page rendering time and we found it is hard to reduce it. There are only a few tools: bypassing interceptors, use and rethink the UI interaction to have lazy loading of data in UI controls or have autosuggestions instead of populating lists. It is somewhat frustrating to constantly have this conflict between using the framework for its features and avoiding the framework for its slowness. I don’t know if I can ever make the app really snappy (say, have responses back in less than 100ms). Any comment on Seam 3 performance (JavaEE 6 CDI + JSF) compare to Spring framework? Great article. I’ve been researching how to migrate our apps which are based on spring as jee container. We are using OSIV (filter) to avoid the LazyInitializationException. The author suggests using ConversationScoped stateful beans/services with extended PersistenceContext. (Our services are internally stateless though). Which looks promising for use with a servlet framework. But what if we want to use the same services in other cases – like EJB timers/schedule. (We use quartz scheduler now) How to deal with the (unwanted) extended scope here. Should we then programmatically begin and end conversations – or manually flush entitymanager. Could it be done declaratively? […] are a couple of very useful articles. This one is very detailed and is published by JBoss employee: and this is the presentation from IBM:. […] Before commenting, we should try out whats there…rather than basing our opinions on years old…Jboss 7 boots in 45 seconds… According to comments of CDI does not allow a conversation scoped EntityManager because the EntityManager interface is not Serializable. Is Weld more tolerant in this respect? Actually I think that this does work in Weld. I’ve used ConversationScoped EntityManager instances many times on JBoss AS 7; I’ve never had a problem with this. That’s the whole idea of the Extended PersistenceContext, so if it’s broken on WebSphere, that’s really bad. EJB3.1 the best at this time, from 2008 we used only EJB3 on several successfull projects, EJB3.1+JSF2.1 have not alternatives. Some people ask me why EJB3.1/JSF2.1 have not alternatives, why? Because XML is not a java, because with EJB3/JSF2.1 you write only java code, only 1-2 simple xml. In Spring still you must use a lot if xml what in the big projects caused failure layer. Also structure and logics EJB3.1/JSF defined in the right way. Spring still does not know what is java 6 CDI, POJO and more:((((((( I’m a “little bit” late and I’m not a Spring specialist, but I leave a comment anyhow. With Spring your application is bound to certain platform which is not the case with Java EE. For example, with async EJBs you can implement application server neutral threading, but with Spring’s @Async you cannot; you have to change TaskExecutor according to platform. For WebSphere and WLS it must org.springframework.scheduling.commonj.WorkManagerTaskExecutor but cannot be on any other platform. I would also say that WorkManagerTaskExecutor doesn’t work correctly and can make your server to hang, but lets not concentrate on that. So, depending on your platform you have to use some kind of deployment tool to change this or unpack the XML files and change them manually. And which application servers have console support or deployer tool to do it? I think the same still applies to TransactionManager; it must be changed for WebSphere. “Declarative transaction demarcation in Spring 2.5 or later are supported in WebSphere Application Server using the following declaration for the WebSphere transaction support: ” or how about not changing it? “Managing transaction demarcation through WebSphere Application Server’s UOWManager class ensures that an appropriate global transaction or LTC context is always available when accessing a resource provider. However, earlier versions of Spring used internal WebSphere interfaces that compromised the ability of the Web and EJB containers to manage resources and are unsupported for application use. This could leave the container in an unknown state, possibly causing data corruption.” See for more information: Cheers, Paci Great article! Can’t wait to see Spring disappear from every freaking project I had to put my hands on. PS: Recommending JBoss AS 7.x over GF 3.x ? Not sure about this statement… JBoss = Very buggy What bugs have you found specifically? Sorry to hear you had trouble. Was this with the final release or one of the earlier betas, or even AS 6? JBoss AS7 is incredibly stable and clean. 1 month ago i also migrated one of my applications to JEE and i could say it was easy. But the problem here is not "JEE is also easy" thing . I think what makes spring is a little bit better is its flora like spring-data,spring transactions,spring-security. In my project i did not use ejb for example instead i use seam-persistence and seam-security because i didnt want to use a jee server just because i have to persist some small data but still want to use declarative transactions and i dont want to use web.xml based security or JAAS(i think still hard to understand). Because of all those things i look forward to deltaspike project which i believe fills all those gaps. That’s a really well written and helpful piece of information. However in all the ensuing debate specifically of the difference in preference to use Tomcat vs EE severs, I did not get a mention of the newest kid on the block TomEE which according to apache is " an all-Apache stack aimed at Java EE 6 Web Profile certification where Tomcat is top dog". This seems to be pretty exciting. Will this be game changer for people who actually like Tomcat but want to move to EE technologies ? What do people think here ? Interesting article. I’ve been experimenting with JEE 6 myself and wrote a pet app for it, but the immaturity of some of its solutions (CDI being a good example) has managed to seriously put me off, and cause me drift back towards Spring. Also, JEE is supposed to be the (self-proclaimed?) standard, but in reality you end up depending on the application server and taking advantage of its features, as you pointed out in the article, and the portability effectively goes out of the window. My worst nightmare has been integration testing, which requires an unreasonable amount of effort vs. getting it right with Spring. "but in reality you end up depending on the application server" Have you really tried to switch among EE servers? if you are in the spec thinks are really straightforward, if you use maven things get even easier. Also how often you migrate from app servers? i’ve seen people arguing that in favor of Spring but they are always using Tomcat… "My worst nightmare has been integration testing" have you heard about arquillian(arquillian.org)? Another thing thats is really powerful and is growing in EE6 are the CDI portable extensions […] Annotations, or Hibernate Search, I recommend reading a section in another OCPsoft blog about using extended PersistenceContexts. You can also try some books written by my colleagues and peers. (Note, you will need to disable […] […] Spring to Java EE – A Migration Experience by Lincoln Baxter […] A Conversation is supposed to be related to a short, CLEAR, "business type process" that has a beginning a middle and and end. This may span multiple round trip to the User within the bounds of a single session. It: +Saves multiple reads from the DB by "caching" stuff in/for the Conversation span +Saves up commits to the DB and flushes them at the "natural" end of the conversation or have them ALL rolled back by not committing (very neat!). [@ExtendedPersistance] +Gives a CLEAR area to store things so that nothing is left around in the Session/Application etc. Scope when the "business type process" is completed or abandoned. Due to the "fad" to completely UsStatelesstise everything (mainly from Spring and JQuery people) this has not had much attention and has largely been forgotten and even JEE 6 (CDI) has only really given it lip service. It’s all about BIG DATA and mashups or 100 things into a facebook/twitter page… So; it seems that jPBM (and things like Oracle SOA [BPEL]) have stepped into this space and moved things into what Seam 2 called the @BusinessScope which ALSO supported the ability to come back later to an "in progress" "business type process" so you are not locked into the Session… The lack of a "good" implementation of Conversation has driven myself to jBPM (5) which is now looking very capable and is actually built around a Rules Engine (Drools). I hope it is as good as it looks. Yeah, I really think the JBPM5 stuff is *really* interesting. The Drools guys and Mark Proctor have been doing awesome work, especially with their UberFire web workspace based on the Errai UI framework. Really great stuff happening there. Yes its does look awesome. Conversation did always have the problem with the "back" button being pressed in page having to be handled. here is hoping that, like Seam 2, they handle that out of the box too. Interesting to see how they support the users "Work in Progress" UI bits and how easy it is to lock into "security" as I needed more than roles and permissions. Very good site. Have very good explanation. In coming few months, i have to migrate our legacy Spring application to java CDI and i was in search of such a good article. Thanks a lot. One Question – What is the alternate for Spring batch jobs (i know java batch jobs are also there but it is not supported by most of the servers) -Sunil I’m not very familiar with 3rd party batch libraries, but I know that Java EE 7 supports batch functionality that can be used to replace Spring Batch. Both WildFly and JBoss EAP7 support this spec.
https://www.ocpsoft.org/java/spring-to-java-ee-a-migration-guide-cdi-jsf-jpa-jta-ejb/?replytocom=1941
CC-MAIN-2019-47
refinedweb
14,640
60.45
Learning something about printf, of all things… Linux box, gcc on this Mac still thought that “long” variables were 32 bit. Various counters in Milhouse are 64 bit values, as are the hash values that are used in the transposition table, and I quickly found out that all the places where I previously used “%ld” as a format string had to be changed to “%lld”. Grumble! You see, here’s the annoying thing about C: you know that shorts can hold char values, and ints can hold shorts, and longs can hold ints, but you actually don’t know how many bits any of these have without peeking using sizeof. Luckily, the C standard requires an include file sys/types.h which has typedefs which include types of various sizes, so if you really want a 32 bit unsigned int, you can use the type uint32_t and be reasonably sure that it will work. Such was the state of my knowledge a couple of days ago. But here’s the thing: I didn’t know any way to generate the right format string for a particular size of data value. On my AMD box, %ld is used for 64 bit values. On my mac, I need to use %lld. Grumble! But apparently this was all thought of by the C99 standards committee. They created an include file called inttypes.h which includes defines which tell you what format is needed. For example: PRIu64 is the code for a 64 bit unsigned integer value. On my mac, it expands to "ll" "u", which the C preprocessor is nice enough to cat together. Therefore, to print such a value, you need a line like: printf("%" PRIu64 "\n", sixtyfourbitvalue) ; Sure, it’s ugly. You think they would at least include the % in the macro. But, it does work. I’m tidying up all my code to play by these nice rules. Comment from Mark VandeWettering Time 5/17/2009 at 8:26 pm Oh, sure, when you put it like that, it seems all so simple. *blush* Comment from Phil Howard Time 5/20/2009 at 11:29 am The solution to this that I have been using for 64 bit values for a few years is to always use %lld or %llu for the format, and always cast the argument to (long long) or (unsigned long long). Comment from Pádraig Brady Time 9/6/2009 at 5:02 pm portable printf is often confusing. I wrote some notes on it: Comment from Minimiscience Time 9/6/2009 at 6:15 pm Obligatory pedantry: is specified by POSIX, not C99. int32_t and its relatives are defined in the standard C header file . Moreover, exact-width integer types for 8, 16, 32, and 64 bits are optional and are only present if the implementation actually has types with those widths; to be truly portable, you have to use int_least32_t and friends (also defined in ). See section 7.18 of the C99 standard for details. Comment from Minimiscience Time 9/6/2009 at 6:20 pm The above comment was supposed to say that the header sys/types.h is specified by POSIX, while the types mentioned are defined in stdint.h. However, I put angle brackets around the header names, and apparently your blog software doesn’t escape special HTML characters. I now have an urge to try to break your site, but I’ll settle for blinking text instead. Comment from mark Time 9/6/2009 at 8:38 pm More reasons to hate C. The thing I hate most in C is actually the absolute path for header files. This “feature” impacted EVERYTHING in the *nix world. It dictated the layout of the structure, and it STOPPED any alternative ideas (because anyone who would want to change that, would have to use another language, and we all know that this isn’t going to happen because C is actually USEABLE.) After 25 years it shows you that a language can still surprise you? Sounds as if you are too dumb, or the language sucks. :One of these must be true, and I think you are not dumb … > Comment from Art Time 9/6/2009 at 9:47 pm I had to deal with this issue a few years back while working on some C code that had to print out a few 64bit values, and I distinctly remember coming across those #defines, and thinking that it was nifty. Thanks for reminding me about it! Comment from Nathan Time 9/7/2009 at 4:28 am @Mark, regarding absolute paths for header files, you can usually set an include path that will tell the compiler where to look for the header files, so you don’t need an absolute path. Comment from Oren Ben-Kiki Time 9/7/2009 at 7:30 am By leaving out the ‘%’ they allow you to write stuff like printf(“%04″ PRIu64 “\n”, …) which would be otherwise impossible. Comment from maht Time 9/7/2009 at 9:22 am @Mark blame the notion of relying on the disk to provide your namespace Plan9 (hi Tom) did away with this notion. Slowly Lunix is coming rount (see man mount on Linux and look for the entry on bind – though the implementaion is lame (requires root, is global etc.)) Comment from Jared Time 9/7/2009 at 11:05 am The string concatenation you referenced doesn’t come from the C preprocessor. Instead it comes from the “string concatenation” phase of translation. See: String concatenation comes out to be a pretty novel idea syntactically. Because a literal string’s type is a const char * (also written char const *), placing two strings next to each other has the effect of removing the null character at the end of the first literal string. Then the addresses of the first character up until 1 – the null character can represent the concatenated string. Comment from Keith Thompson Time 9/7/2009 at 3:06 pm A quibble: For historic reasons, C string literals are not of type const char*. In fact, a C string literal is of type char[N], where N is the length of the literal plus 1 (to allow for the terminating ”). For example, sizeof(“hello, world”) yields 13, not the size of a pointer. In most contexts, an expression of array type is implicitly converted to a pointer to the array’s first element, so string literals usually, but not always, “decay” to char*. As for the “const”, it would make more sense for this to be applied to string literals, but it would have broken old code (pre-ANSI C didn’t have “const”, and code like func(“literal”) would have required adding “const” to the declaration of func). But attempting to modify a string literal invokes undefined behavior. C++ was less concerned with backward compatibility, so C++ string literals are of type const char[N]. Pingback from Types and printf « /etc/shadow Time 9/8/2009 at 10:43 am […] and printf By etcshadow Came across this post about printf-ing fixed-size types today and am very glad to have found macros for printf! Developing for the iPhone, with its […] Comment from Tom Duff Time 5/17/2009 at 11:13 am They don’t include the % so that things like “%16″ PRIu64 “\n” will work.
http://brainwagon.org/2009/05/16/learning-something-about-printf-of-all-things/
CC-MAIN-2015-32
refinedweb
1,229
67.49
AD7705/AD7706 Library AD7705 and AD7706 are two 16-bit Sigma Delta ADCs. Equipped with on-chip digital filters and programmable gain front ends, these chips are ideal for low frequency multi-channel signal measurements. The main difference between AD7705 and AD7706 is that AD7705 has two fully differential input channels while AD7706 has three pseudo differential input channels. This library I created interfaces with Arduino using the standard SPI bus. Most of the complexity is hidden behind the scene, and only a single parameter — the reference voltage — is needed to initialize the library since the reference voltage can vary depending on the particular implementation. The following code snippet illustrates how to use this library. In this example, channel 1 of AD7706 is setup in bipolar mode with a unity gain. The common pin of the pseudo differential inputs (AIN1 to AIN3) is tied to the 2.5V voltage reference. This gives a measurement range of -2.5 V to +2.5 V. Pin SCK, CS, MOSI, MISO correspond to Arduino pin 13, 10, 11 and 12 respectively. #include <AD770X.h> //set reference voltage to 2.5 V AD770X ad7706(2.5); double v; void setup() { //initializes channel 1 ad7706.init(AD770X::CHN_AIN1); Serial.begin(9600); } void loop() { //read the converted results (in volts) v = ad7706.readADResult(AD770X::CHN_AIN1); Serial.println(v); delay(100); } Here is the schematics I used for the sample program above: For a supply voltage of 5V, the recommended voltage reference is AD780/REF192. I used two 1.25 V precision shunt voltage references ADR1581 to form a 2.5V reference instead since those are the components I have on hand. The library also provides an overloaded init method to setup channel, gain and update rate: void init(byte channel, byte gain, byte updRate); The available gain settings are defined as: static const byte GAIN_1 = 0x0; static const byte GAIN_2 = 0x1; static const byte GAIN_4 = 0x2; static const byte GAIN_8 = 0x3; static const byte GAIN_16 = 0x4; static const byte GAIN_32 = 0x5; static const byte GAIN_64 = 0x6; static const byte GAIN_128 = 0x7; And the update rate settings can be chosen from: Other settings can be accessed via the setup register directly using the writeSetupRegister method defined below: //Setup Register // 7 6 5 4 3 2 1 0 //MD10) MD0(0) G2(0) G1(0) G0(0) B/U(0) BUF(0) FSYNC(1) void AD770X::writeSetupRegister(byte operationMode, byte gain, byte unipolar, byte buffered, byte fsync) For more details, please refer to the datasheet and the code in the download section. Download AD770X.tar.gz (Obsolete) 4/18/2012: AD770X1.1.tar.gz (see this post for changes since the previous version) Hi there Kerry, I’ve just tested your library and it works great. You’ve done a pretty good job! although, I’ve got a question: How can I read all the analog input channels? Keep up the good work! pfg Thanks pfg. You can just initialize both AD770X::CHN_AIN1 and AD770X::CHN_AIN2 and use them in your code. Thanks Kerry for the quick reply. I’ve tried that already: In the setup section I added: ad7706.init(AD770X::CHN_AIN2); ad7706.init(AD770X::CHN_AIN3); and in the loop section: x = ad7706.readADResult(AD770X::CHN_AIN2); y = ad7706.readADResult(AD770X::CHN_AIN3); Is this right? I get only one value, the same for the three reading and it’s also wrong. If the code is OK then I should check my circuit again… Thanks in advance, pfg Hi, Kerry, I wired up a AD7706 and used a AD680 as a voltage reference, exactly as in your example (Vcc I drew from 5V in Arduino). I got the example to compile and upload alright, but when I opened up the serial monitor, it was blank and nothing shows up. Thanks, Chris Hmm… I guess I’ll need more detailed description in order to understand why nothing shows up on serial monitor. Question, if you do a Serial.print in the same program does anything come out? Yes, with the same circuit, if I run one of the sample code from Arduino like “AnalogReadSerial” here:, it will print out numbers. In fact, I find the code to be stuck on this line “ad7706.init(AD770X::CHN_AIN1);” in the setup, as well as this line in the loop “v = ad7706.readADResult(AD770X::CHN_AIN1)”, in a sense that if I asks it to just print something, like “1”, I need to comment both lines (like below), and if either line is active, the code will not reach to the print line. #include AD770X ad7706(2.5); double v; void setup() { //ad7706.init(AD770X::CHN_AIN1); Serial.begin(9600); } void loop() { // v = ad7706.readADResult(AD770X::CHN_AIN1); Serial.println(“1”); delay(100); } Thanks, Chris OK. Check your MISO/MOSI pin connections and make sure that they are not reversed. The symptom you described sounds like issue with the SPI bus. The MOSI and MISO are connected to Arduino digital pin 12 and 11, correct (chip pin 17 and 18). These two pins are reserved for SPI operations. Hi, Kerry, I checked my circuit, and MISO and MOSI were indeed reversed. I’ve corrected it as you said (although I think you meant the chip pins 13 and 14, not 17 and 18), so now MISO (chip pin 13) is hooked to Arduino pin 11, and MOSI (chip pin 14) is hooked to Arduino pin 12, but still nothing shows up on the serial monitor… The problem encountered as described in my last post still persists. Thanks, Chris Also, I think in your post you had the two pins reversed too, “Pin SCK, CS, MOSI, MISO correspond to Arduino pin 13, 10, 11 and 12 respectively. “ On another note, I’m using Arduino Mega 2560, maybe that’s the case? Thanks, Chris I don’t have ATmega 2560, but from the schematics () it seems that the MOSI/MISO are 50/51. The code should work with all Arduino boards. Good luck! Hi, Kerry, Thank you for your advice! Now I have the pins hooked up to the right Arduino pins, I am able to see numbers popping up on the monitor. However, they are _always_ the same as the reference voltage I set in the beginning. So if I wrote: AD770X ad7706(2.5); I’ll get “2.50” or “-2.50” printed one or the other, and which one to be printed is rather arbitrary, although neg seems to be preferred. Same if I type in arbitrary number (like 330) in the reference voltage variable, so the ADC seems to not be converting at all, and the readADResult(byte channel) function doesn’t seem to be working, in a sense that if I let it return “0” or “2*VRef”, it will still return VRef. Additional curiosity. If I say: AD770X ad7706(AD770X::CHN_AIN2); All channels (1, 2, 3, COMM) will give me 1.00 volt, and if I set the Vref to be CHN_AIN1, I will get 0.00V on all channels; CHN_AIN3 for 3V, and CHN_COMM for 2V. And it seems that the function init is read through but does not do anything, that is, it doesn’t matter if I initiate any channel at all to go to the loop. Also, just for curiosity, the function “spiTransfer()” is not defined in the library, and it seems the standard usage is to include the SPI library and then call SPI.transfer(). Regards, Chris sorry, the “include” line above I have is incomplete, should be “#include ” as in your example. Thanks! Hi I am using yor library, and thankyou for such a big job. But I have some problems: My project is weighting system with pressure sensor. Which is very sensitive – it ranges 6 kg = 3mV on 5V supply. When I try to change GAIN to 128 – the answer is only 0. What can be a problem. I also try to change bipolar-unipolar mode on GAIN 1, but nothing changes. On GAIN 1 it works fine, but I need to mesure very low voltage. Please help. It’s a little hard to tell without further details. But did you get any readings on any other gain settings between 1 and 128? Do you have a by pass capacitor in place? Thank you for your answer! Now I more understand my problem and it devided in two: 1 In any case – chip sometimes show normal result and sometimes give only zeros. I do nothing, just turn it on and off and chip sometime work and sometime not. 2 I try to read setup register and find out that my chip does not remember them. I read the same data as I set only first time and on next read I get the same random data. So I add init function to loop and it works (but problem 1 still exist) Now I have only one idea – my chip is damaged. I heve two capactors 10 мкф and 0.1 мкф on supply voltage. My schema is the same as on figure 22 from datasheet. Dear Kwong. First of all thanks for your great work. I have tested the library and works fine. One question: Could I change CS pin? I tried it but no successfully (I changed pinCS definition in the *.h file) Thanks in advance. Regards Hi Arlukas, No, you can not change the CS pin (or MISO, MOSI pins) since these pins are designated pins for SPI use (implemented in hardware). You can find out more information on this at. Hello K.Vong, I apologize for my English, because I am from Russia. I did all of your scheme, but it does not work. In the output I see -2.5 volts, ie, negative reference voltage. And does not react to the supply voltage to the analog inputs of the chip. Could you lay out a more detailed diagrams or photos of the device while operating … Since the on y7our scheme no nominal capacity after the crystal oscillator I am not sure why you would see a negative voltage from the output if you are using a single supply like the way I connected in the diagram. In regarding to the load capacitors, since I used a ceramic oscillator instead of a crystal the two load capacitors are not needed. But if you are using a crystal, you should add the appropriate load capacitors. Hope it helps. Hello K.Wong! Thank you very much for your advice, I put the generator into the circuit and it worked! But I have now another problem arose: In the void setup (), I initialize all three channels: ad7706.init (AD770X :: CHN_AIN1); ad7706.init (AD770X :: CHN_AIN2); ad7706.init (AD770X :: CHN_AIN3); And in the void loop () output their values v = ad7706.readADResult (AD770X :: CHN_AIN1); Serial.print (v); Serial.print (“|”); a = ad7706.readADResult (AD770X :: CHN_AIN2); Serial.print (a); Serial.print (“|”); b = ad7706.readADResult (AD770X :: CHN_AIN3); Serial.print (b); Serial.println (); delay (100); But at the ground the third channel, I see -2.5, and the same values on channels 1 and 2 by applying 5 volts on channel 3, and again, I see 2.5 on all three channels. That is, changing the voltage on the three channel maps at 1 and 2. But with supply 5 volts at 1 and 2 channel, I see 0 volts at the output of all channels, and at ground 1 or 2 channels, I see -2.5 volts on all channels. That is, Channel 3 is working fine, measures the voltage as it should, but why is the value displayed on the 3 channels 1 and 2 …. A channel 1 and 2 do not work correctly. Because what could it be? Thanks in advance! Hi Vasiliy, Did you try calling AD770X::readADResult() immediately after calling AD770X::init()? In other words, try using init(channel) and readADResult as a pair and let me know if it works. If not I will have to take a look to see where the issue might be. Thanks. Hello K.Wong again! I did as you said, used to initialize the values of reading. When I used the initialization with only one parameter (the port number, like this: ad7706.init (AD770X :: CHN_AIN1) ;) then the Arduino boot time I saw real value, but then they were repeated, ie, values are not updated, and displayed only the first value. Then I began to pass all parameters in the initialization function (ad7706.init (AD770X :: CHN_AIN1, AD770X :: GAIN_128, AD770X :: UPDATE_RATE_100) ;), then it work again, but began to work the same way as I described last time. Initialization void setup () using all the parameters, too, did not produce the desired effect, as the inputs were not working as it should … Hi Vasiliy, Let me take a look to see whether any portion of the code could be changed to make it a bit easier to use. It has been a while, but I didn’t remember running into this issue when I was using the chip. I will reply back when I get a chance to take a closer look. Hi Vasiliy, I have just tried using both CHN_AIN1 and CHN_AIN2 at the same time. And the following code works for me: According to the datasheet, you will need some delays between the reads (ideally you should poll the DRDY bit in the communication register or poll the dedicated DRDY pin, for simplicity reason I didn’t use that in the code library. And if timing is not critical, simple delays works fine). Also, if your clock frequency is different (I used 2Mhz), you may need to use the tweak the parameters according to the datasheet. AD7706 isn’t the easiest chip to work with in my opinion, one thing is that the power-on sequence is somewhat important. According to the datasheet (Rev. C. P30) “it is important that power is applied to the AD7705/AD7706 before signals are applied at the REF IN, AIN” And also the chip expects a write to the communication register upon power up. The library code I provided only offers very basic functionalities (e.g. bi-polar, 50Hz update rate and x1 signal gain), the chip is capable of doing much more and you will need to read the datasheet to implement these additional functionalities if needed. I have updated the library to make it easier to use. You can find the change details in this post. HI to everyone! I’m pretty new to electronics but i have to use an adc to make a data logger. I might seem a little bit dummy but since i don’t have a big knowledge in electronics, can somebody say to me which kind of oscillator has used? in digikey.com i find tons of them, but i’m nor really sure on just grabbing one of 2MHz. any advice? cheers, francesco Just wanted to say thanks for a great library. It’s been very helpful to get my project off the ground. I do have a question though. Can you explain your math for returning the voltage in the line below: return readADResult() * 1.0 / 65536.0 * VRef – refOffset I understand VRef and refOffset, but what’s the significance of dividing by 65536.0? I’ve translated the code into a .NET library for use on a Raspberry Pi 2 running Windows 10 IoT. Thanks, Jeff return readADResult() * 1.0 / 65536.0 * VRef – refOffset is because the ADC is 16 bit (2^16 = 65536) so this line is to normalize the data and convert the measurement result to between 0 and VRef. thanks for the library. i wanted to know if i can put negative voltage on ain1+ in ad7705 in bipolar mode and how, in bipolar mode i read at zero input 32768 that is maybe fine representing the middle point between -2.5 , 0 , +2.5 ok but dont want to fry the adc if putting negative voltages someone have been tried to read negative voltages? the datasheet is something confuse at this point. Hi AD 7705 library is not supporting my Arduino UNO board. any suggestion…. please help me….. hi kwong, According to the circuit diagram, I made a module,Crystals choose 4.9152 M,22 pf the resonance capacitance,choose your AD770X1.1.tar program.but it doesn’t work!display 0.because DRDY pin has always been a high level(4.9V).what is the matter?? Hey Kerry ! Thanks for your library. I was successful using it after reading comments related to this. Now, I have two questions please: 1. Why do you prefer doing polling on dataReady rather then connecting the pin DRDY to some external interrupt pin and do stuff when the routine is called ? 2. Do you think it would be possible to include support for using the library in paralel with other SPI devices ? (i.e. a sd card reader or whatever). Best !
http://www.kerrywong.com/2011/03/20/ad7705ad7706-library/?replytocom=137254
CC-MAIN-2018-43
refinedweb
2,817
73.58
Hi guys, today, I uploaded the first (relative) stable release of Satine. I invite=20 anyone to download it from. There are two=20 binaries releases for win32 and linux. The source release has some problems= =20 and I am going to correct it asap. The documentation is available at=20. =2D------------------------------------------------------------------------= =2D-- Satine is a Python library that makes XML managment easy and complete. Sati= ne=20 converts XML documents to Python lists with attributes (xlist). This=20 technology allows to: =20 =2D translate documents with namespaces, both in elements and attributes =20 =2D translate both documents without XMLSchema and documents with it. If th= e=20 XMLSchema is available, the document can be easily validated. =20 =2D random and partial access to XML documents =20 =2D work very fast. The data binding technology is coded in C. =20 The Satine WS module is a simple HTTP server that supports both normal HTTP= =20 and SOAP requests. Hence Satine WS is a web service that supports a human=20 interface, too.=20 =2D-------------------------------------------------------------------- =46rancesco Garelli Ph.D. student - Universit=E0 di Padova, Dipartimento di Ingegneria=20 dell'Informazione graduate student at the University of California - Irvine, ICS Department garelli@acm.org
http://mail.python.org/pipermail/xml-sig/2003-February/009059.html
crawl-001
refinedweb
202
58.79
0 I am trying to do this simple encoding program for class. It is supposed to encode it using the ASCII chart and some slight modding of the number. Then it should be written to a file 'encryptedmessage.txt' for later reading by a decoder later. My issue is I dont know how to add newMessage so I can write it to file. I can't use the for loop in the output file. Any help would be appreciated. i tried searching but everything I found was actually more complex then what we are into as of now. Thank you for taking the time to read this. def main(): #declare and initialize variables #string message newMessage = message = "" #Intro print("+++++++++++++++++++++++++++++++") print("Welcome to the encoder program!") print("+++++++++++++++++++++++++++++++") #Prompt the user for the message message = input("Please enter the message you would like to encode: ") #Loop through message for ch in message: #Print and calculate the new value of message print(ord(ch) * 6 - 5, end = " ") #Open a the file “encryptedmessage.txt” outfile = open("encryptedmessage.txt", "w") #Write to file “encryptedmessage.txt” #Close the file outfile.close
https://www.daniweb.com/programming/software-development/threads/386902/i-need-help-writing-to-a-file-encoding-program
CC-MAIN-2017-17
refinedweb
185
75.81
QgsFileDownloader is a utility class for downloading files. More... #include <qgsfiledownloader.h> QgsFileDownloader is a utility class for downloading files. To use this class, it is necessary to pass the URL and an output file name as arguments to the constructor, the download will start immediately. The download is asynchronous and depending on the guiNotificationsEnabled parameter accepted by the constructor (default = true) the class will show a progress dialog and report all errors in a QMessageBox::warning dialog. If the guiNotificationsEnabled parameter is set to false, the class can still be used through the signals and slots mechanism. The object will destroy itself when the request completes, errors or is canceled. An optional authentication configuration can be specified. Definition at line 45 of file qgsfiledownloader.h. Definition at line 29 of file qgsfiledownloader.cpp. Definition at line 40 of file qgsfiledownloader.cpp. Emitted when the download was canceled by the user. Emitted when the download has completed successfully. Emitted when an error makes the download fail. Emitted always when the downloader exits. Emitted when data are ready to be processed. Called when a download is canceled by the user this slot aborts the download and deletes the object. Never call this slot directly: this is meant to be managed by the signal-slot system. Definition at line 96 of file qgsfiledownloader.cpp.
http://www.qgis.org/api/classQgsFileDownloader.html
CC-MAIN-2017-43
refinedweb
222
50.94
I imported a picture onto another picture and I am wondering how to get rid of the white around the smaller picture. I imported a picture onto another picture and I am wondering how to get rid of the white around the smaller picture. I didn't import the picture INTO the picture I just imported it onto the picture. Anyway here is my script right now. Also I cannot seem to get the backbuffer working. // The "GreenScreen" class. import java.applet.*; import java.awt.*; public class GreenScreen extends Applet { Image img, backbuffer; Graphics backg; // Place instance variables here public void init () { //img = getImage (getDocumentBase(), "bug1right.jpg"); //backg = getImage (getDocumentBase(), "hedge.jpg"); img = getImage (getDocumentBase (), "bug1right.gif"); backbuffer = getImage (getDocumentBase (), "hedgelarge.gif"); backbuffer = createImage (100, 100); backg = backbuffer.getGraphics (); backg.setColor (Color.white); // Place the body of the initialization method here } // init method public void paint (Graphics g) { g.drawImage (backbuffer, 0, 0, this); g.drawImage (img, 10, 10, 50, 50, this); // Place the body of the drawing method here } // paint method } // GreenScreen class And I presume that bug1right.gif has a white frame ? Not an easy task. If it is the case, not an easy task, you can make a BufferedImage out of it Then by getRgb() method you can get an double dimension array of all the pixels in your image. The array will be a int[imageWidth][imageHeight] You can then make 2 nested loop to inspect the pixels on the border. If the color is +/- white (something like red > 240, green > 240, blue > 240) you can set the alpha to 255 for these pixels. Then you can rebuild and Image of TYPE_4BYTES_ABGR (Alpha, Reg, Green, Blue) If the frame if perfectly rectangular you can just search where the color change from left to right and top to bottom and just draw the subImage
https://www.daniweb.com/programming/software-development/threads/336148/get-rid-of-white-around-a-picture
CC-MAIN-2018-47
refinedweb
307
65.12
On Wed, Apr 19, 2000 at 10:26:43AM -0300, Roberto Ierusalimschy wrote: > > eg. "for" keyword instead of a function in Lua. > > We forgot to tell: Lua will have a "for"! (next week) What will the style of the for be? One of things which I like most about Python is the simple list based "for". It's responsible for preventing lots of bugs IMO. However, I noticed a snafu with doing this kind of thing in Lua, namely that there dosn't seem to be a "next" tag method. -- For example, given a Python-esq looping structure like: a = { 5, 6, 2 }; for num in a do print num; end -- you could have python style number loops based on a table/list, -- if you had a "next" tag. function range_next(table,curindex) if curindex == nil then return table['start']; end if curindex >= table['max'] then return nil; end return (curindex + 1); end range_tag = newtag() settagmethod(range_tag, "next", range_next) function range(max) loop_parms = { start = 0, max = max}; settag(loop_parms, range_tag); return loop_parms; end for num in range(4) do print num end --------------------------------------- -- I'm not as fond of the C style number loops -- because they are so prone to errors: a = { 1, 2, 3 } len_a = length(a) for (x=0; x=x+1; x< len_a) do print num end -- although at least you can do this: a = {1, 2, 3} for (num=next(a);num=next(a,x); num!=nil) do print num end -- With only basic number style loops, iteration will -- be somewhat strange IMO. Witness the example below: a = { 5, 6, 2 ; val = "this will cause weird behavior" } len_a = length(a) -- 4 for num = 1 to length(a) do print a[num] end -- outputs: 5, 6, 2, nil -- David Jeske (N9LCA) + + jeske@chat.net
http://lua-users.org/lists/lua-l/2000-04/msg00092.html
crawl-001
refinedweb
296
72.39
Package: Simulink Superclasses: Specify properties of bus signal Objects of the Simulink.Bus class, used with objects of the Simulink.BusElement class, specify the properties of a bus signal. Bus objects validate the properties of bus signals. When you simulate a model or update diagram, Simulink® checks whether the buses connected to the blocks have the properties specified by the bus objects. If not, Simulink halts and displays an error message. For a complete list of blocks that support using a bus object as a data type, see When to Use Bus Objects. You can use the Simulink Bus editor or MATLAB® commands to create and modify bus objects in the base MATLAB workspace. You cannot store a bus object in a model workspace. When you use the Bus Editor, you create Simulink.Bus and Simulink.BusElement objects in the base workspace or the associated Simulink data dictionary. Also, you can use a bus object to specify the attributes of a signal (for example, at the root level of a model or in a Data Store Memory block). returns a bus object with these property values: busObj = Simulink.Bus Description: '' DataScope: 'Auto' HeaderFile: '' Alignment: -1 Elements: [0×0 Simulink.BusElement] The name of the bus object is the name of the MATLAB variable to which you assign the bus object. You can set individual properties after you construct the bus object. busObject— Bus object Simulink.Busobject Bus object, returned as a Simulink.Bus object. Description— Bus object description Bus object description, specified as a character vector. Use the description to document information about the bus object, such as the kind of signal it applies to or where the bus object is used. This information does not affect Simulink processing. Elements— Bus elements Simulink.BusElementobjects Bus elements, specified as an array of Simulink.BusElement objects. Each bus element object defines the name, data type, dimensions, and other properties of the signal within a bus.. HeaderFile— C header file used with data type definition C header file used with data type definition, specified as a character vector. The header file is the file to import the data type definition from or export the data type definition to (based on the value of the DataScope property. The Simulink Coder™ software uses this property for code generation. Simulink software ignores this property. By default, the generated #include directive uses the preprocessor delimiter " instead of < and >. To generate the directive #include <myTypes.h>, specify HeaderFile as <myTypes.h>... You can use the Bus Editor to create interactively a bus object and its bus elements. For details, see Create Bus Objects with the Bus Editor. Programmatically, you can create bus objects from: External C code. See Simulink.importExternalCTypes. Simulink.Bus.cellToObject | Simulink.Bus.createObject | Simulink.BusElement
https://nl.mathworks.com/help/simulink/slref/simulink.bus-class.html
CC-MAIN-2019-30
refinedweb
460
50.23
Rover Vision A Raspberry Pi Camera module and a diagnostics system allows SunRover to see and check that all systems are go. Lead Image © Igor Vovgaliuk, 123RF.com A Raspberry Pi Camera module and a diagnostics system allows SunRover to see and check that all systems are go. This article is the fourth in a series of articles on building a working solar robot. SunRover is a tracked, solar-powered robot designed to move around and explore while sending back reports, tracking weather, managing a tight power budget, and providing a platform for testing new sensors and equipment as they become available. The motors, the controllers, the computers, and the sensors are all complex devices in their own right. In Part 1 of this series [1], I went through the motor controller/power system and described the mechanisms for connecting the I2C sensors throughout the robot. In part two [2], I redesigned part of the motor power system and then looked at the solar power charging system, which happily is working perfectly! Part 3 [3] covered the robot's navigation system, and here in Part 4, I look at the Pi Camera system and the I2C diagnostics system. In this project, I am using a standard Raspberry Pi Camera module as sold by the Raspberry Pi Foundation. This was the same camera I used in Project Curaçao [4], so I know it will handle the heat. The Raspberry Pi Camera module can be used to take high-definition video, as well as still photographs. The module has a 5-megapixel fixed-focus camera that supports 1080p30, 720p60, and VGA90 video modes, as well as still capture. It attaches via a 15cm ribbon cable to the CSI port on the Raspberry Pi. This cable was too short for SunRover, so I bought a 50cm cable from Adafruit [5]. It's a little tricky to change cables, so be careful. I use the picamera [6] pure Python library for the interface to the Raspberry Pi Camera module. The code needed to capture a single picture in Python using picamera is simple: import time import picamera with picamera.PiCamera() as camera: camera.resolution = (1024, 768) camera.start_preview() # Camera warm-up time time.sleep(2) camera.capture('foo.jpg') For all of you solar power fans, you need to know that the camera uses a lot of current (some report more than 280mA for video and about 150mA for still shots), even when the camera isn't being used. To fix this, issue camera.close(), and the power drops way down. Note that the example above does not use camera.close(). Using a 3D printer, I have built stands, prototypes, bases, and other accessories. The camera base was a natural fit for a 3D printing project. The base would be screwed onto the SunRover box, with a plastic bubble [7] (for water protection, primarily) sitting over the top of the camera that still allowed pan-and-tilt movements (Figure 1). The plastic bubble, although not perfect, has pretty good optics. The friction-fit stand (Figure 2) has one layer of tape around the base to make it snug, with built-in cableways and slots to mount those boards that need light (color sensors, light sensors, etc.). I also wanted the pan-and-tilt mechanism to fit into the base with no screws. To date, I have used the slots for two purposes: to mount the superbright LEDs for the camera at night and to mount a compass. Because reflections inside the bubble from the LEDs wiped out the camera image, I instead had to mount the LEDs on the side (Figure 3). The compass mount didn't work too well because it was too close to the pan-and-tilt motors. On the platform on the front of the robot, I mounted an ultrasonic distance ranger; I will mount the LIDAR laser sensor there in the future, too. Listing 1 shows code for the OpenSCAD platform model. As usual, it is a mixture of cubes, tubes, and blocks. Making similar shapes into modules, then invoking them multiple times, is a great way of reusing code. Listing 1 OpenSCAD Platform Model 01 // 02 // SunRover Top Bubble Plate 03 // August 2, 2015 04 // SwitchDoc Labs 05 // 06 07 module platform() 08 { 09 10 // slot for LED or board 11 difference() 12 { 13 translate([25,-3,19.9]) 14 #cube([4,6,5]); 15 16 translate([26,-3,21]) 17 cube([1.50,8,10]); 18 } 19 20 translate([23,-10,0]) 21 cube([10,20,20]); 22 } 23 24 union (){ 25 difference() 26 { 27 union() 28 { 29 difference() 30 { 31 cylinder(h=20, r=72.00/2, $fn=100); 32 cylinder(h=22, r=69.55/2, $fn=100); 33 } 34 35 // base plate 36 difference() 37 { 38 translate([-75/2, -75/2,0]) 39 cube([75, 75, 2]); 40 41 // screw holes 42 translate([-68/2, -68/2,-5]) 43 cylinder(h=10, r=2/1, $fn=100); 44 45 // screw holes 46 translate([68/2, 68/2,-5]) 47 cylinder(h=10, r=2/1, $fn=100); 48 49 // screw holes 50 translate([68/2, -68/2,-5]) 51 cylinder(h=10, r=2/1, $fn=100); 52 53 // screw holes 54 translate([-68/2, 68/2,-5]) 55 cylinder(h=10, r=2/1, $fn=100); 56 } 57 58 // camera pylon 59 60 difference() 61 { 62 cylinder(h=50, r=28/2, $fn=100); 63 cylinder(h=55, r=25.25/2, $fn=100); 64 // bar through for wires 65 translate([-6, -20, 2]) 66 #cube([12,40,10]); 67 68 rotate(a=90, v=[0,0,1]) 69 translate([-6, -20, 2]) 70 #cube([12,40,10]); 71 } 72 73 // camera servo nib 74 translate([0,13.5,45]) 75 cylinder(h=10, r=3/2, $fn=100); 76 77 translate([0,-13.5,45]) 78 cylinder(h=10, r=3/2, $fn=100); 79 } 80 translate([0,0,-5]) 81 cylinder(h=10, r=11, $fn=100); 82 } 83 84 platform(); 85 86 rotate([0,0,90]) 87 platform(); 88 89 rotate([0,0,180]) 90 platform(); 91 92 rotate([0,0,270]) 93 platform(); 94 } Price $15.99 (incl. VAT)
http://www.raspberry-pi-geek.com/Archive/2016/16/SunRover-Part-4-Adding-a-Pi-Camera-and-Diagnostics-System/(offset)/
CC-MAIN-2018-34
refinedweb
1,040
69.01
a=0b=1 =├─a└─0=├─b└─1 n=0n=0 if 0a=10else b=10 // if yspeed is greater than 0if (yspeed>0) Why do all the pro-Microsoft people have troll avatars? I even modify my code to be consistent if I change one thing. I used to use i+=1, which is okay, but once I changed it to i++ which was faster, I had to change all my previous scripts. expr: NUMBER | NAME | '-' expr %prec UMINUS | expr '+' expr | expr '-' expr | expr '*' expr | expr '/' expr | '(' expr ')' ; The only reason Josh's method is even maybe acceptable is because he's not really parsing it. He's just prettifying it so GCC can parse it. If he were going to compile it himself, his method would not work at all. struct a;namespace b;{ int a;}
https://enigma-dev.org/forums/index.php?topic=317.0
CC-MAIN-2020-10
refinedweb
137
79.3
Photo Stacking in iOS with Vision and Metal In this tutorial, you’ll use Metal and the Vision framework to remove moving objects from pictures in iOS. You’ll learn how to stack, align and process multiple images so that any moving object disappears. Version - Swift 5, iOS 12, Xcode 10 What is Photo Stacking? Well, imagine this. You’re on vacation, somewhere magical. You’re traveling around the UK visiting all the Harry Potter filming locations! It’s time to see the sites and capture the most amazing photos. How else are you going to rub it in your friends’ faces that you were there? There’s only one problem: There are so many people. :[ Ugh! Every single picture you take is full of them. If only you could cast a simple spell, like Harry, and make all those people disappear. Evanesco! And, poof! They’re gone. That would be fantastic. It would be the be[a]st. ;] Maybe there is something you can do. Photo Stacking is an emerging computational photography trend all the cool kids are talking about. Do you want to know how to use this? In this tutorial, you’ll use the Vision framework to learn how to: - Align captured images using a VNTranslationalImageRegistrationRequest. - Create a custom CIFilterusing a Metal kernel. - Use this filter to combine several images to remove any moving objects. Exciting, right? Well, what are you waiting for? Read on! Getting Started Click the Download Materials button at the top or bottom of this tutorial. Open the starter project and run it on your device. Evanesco startup screenshot You should see something that looks like a simple camera app. There’s a red record button with a white ring around it and it’s showing the camera input full screen. Surely you’ve noticed that the camera seems a bit jittery. That’s because it’s set to capture at five frames per second. To see where this is defined in code, open CameraViewController.swift and find the following two lines in configureCaptureSession(): camera.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 5) camera.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 5) The first line forces the maximum frame rate to be five frames per second. The second line defines the minimum frame rate to be the same. The two lines together require the camera to run at the desired frame rate. If you tap the record button, you should see the outer white ring fill up clockwise. However, when it finishes, nothing happens. You’re going to have to do something about that right now. Saving Images to the Files App To help you debug the app as you go along, it would be nice to save the images you’re working with to the Files app. Fortunately, this is much easier than it sounds. Add the following two keys to your Info.plist: - Application supports iTunes file sharing. - Supports opening documents in place. Set both their values to YES. Once you’re done, the file should look like this: The first key enables file sharing for files in the Documents directory. The second lets your app open the original document from a file provider instead of receiving a copy. When both of these options are enabled, all files stored in the app’s Documents directory appear in the Files app. This also means that other apps can access these files. Now that you’ve given the Files app permission to access the Documents directory, it’s time to save some images there. Bundled with the starter project is a helper struct called ImageSaver. When instantiated, it generates a Universally Unique Identifier (UUID) and uses it to create a directory under the Documents directory. This is to ensure you don’t overwrite previously saved images. You’ll use ImageSaver in your app to write your images to files. In CameraViewController.swift, define a new variable at the top of the class as follows: var saver: ImageSaver? Then, scroll to recordTapped(_:) and add the following to the end of the method: saver = ImageSaver() Here you create a new ImageSaver each time the record button is tapped, which ensures that each recording session will save the images to a new directory. Next, scroll to captureOutput(_:didOutput:from:) and add the following code after the initial if statement: // 1 guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer), let cgImage = CIImage(cvImageBuffer: imageBuffer).cgImage() else { return } // 2 let image = CIImage(cgImage: cgImage) // 3 saver?.write(image) With this code, you: - Extract the CVImageBufferfrom the captured sample buffer and convert it to a CGImage. - Convert the CGImageinto a CIImage. - Write the image to the Documents directory. CIImage, then to a CGImage, and finally back into a CIImageagain? This has to do with who owns the data. When you convert the sample buffer into a CIImage, the image stores a strong reference to the sample buffer. Unfortunately, for video capture, this means that after a few seconds, it will start dropping frames because it runs out of memory allocated to the sample buffer. By rendering the CIImageto a CGImageusing a CIIContext, you make a copy of the image data and the sample buffer can be freed to be used again. Now, build and run the app. Tap the record button and, after it finishes, switch to the Files app. Under the Evanesco folder, you should see a UUID-named folder with 20 items in it. UUID named folder If you look in this folder, you’ll find the 20 frames you captured during the 4 seconds of recording. Captured frames OK, cool. So what can you do with 20 nearly identical images? Photo Stacking In computational photography, photo stacking is a technique where multiple images are captured, aligned and combined to create different desired effects. For instance, HDR images are obtained by taking several images at different exposure levels and combining the best parts of each together. That’s how you can see detail in shadows as well as in the bright sky simultaneously in iOS. Astrophotography also makes heavy use of photo stacking. The shorter the image exposure, the less noise is picked up by the sensor. So astrophotographers usually take a bunch of short exposure images and stack them together to increase the brightness. In macro photography, it is difficult to get the entire image in focus at once. Using photo stacking, the photographer can take a few images at different focal lengths and combine them to produce an extremely sharp image of a very small object. To combine the images together, you first need to align them. How? iOS provides some interesting APIs that will help you with it. Using Vision to Align Images The Vision framework has two different APIs for aligning images: VNTranslationalImageRegistrationRequest and VNHomographicImageRegistrationRequest. The former is easier to use and, if you assume that the user of the app will hold the iPhone relatively still, it should be good enough. To make your code more readable, you’ll create a new class to handle the alignment and eventual combining of the captured images. Create a new, empty Swift File and name it ImageProcessor.swift. Remove any provided import statements and add the following code: import CoreImage import Vision class ImageProcessor { var frameBuffer: [CIImage] = [] var alignedFrameBuffer: [CIImage] = [] var completion: ((CIImage) -> Void)? var isProcessingFrames = false var frameCount: Int { return frameBuffer.count } } Here, you import the Vision framework and define the ImageProcessor class along with some necessary properties: - frameBuffer will store the original captured images. - alignedFrameBuffer will contain the images after they have been aligned. - completion is a handler that will be called after the images have been aligned and combined. - isProcessingFrames will indicate whether images are currently being aligned and combined. - frameCount is the number of images captured. Next, add the following method to the ImageProcessor class: func add(_ frame: CIImage) { if isProcessingFrames { return } frameBuffer.append(frame) } This method adds a captured frame to the frame buffer, but only if you’re currently not processing the frames in the frame buffer. Still within the class, add the processing method: func processFrames(completion: ((CIImage) -> Void)?) { // 1 isProcessingFrames = true self.completion = completion // 2 let firstFrame = frameBuffer.removeFirst() alignedFrameBuffer.append(firstFrame) // 3 for frame in frameBuffer { // 4 let request = VNTranslationalImageRegistrationRequest(targetedCIImage: frame) do { // 5 let sequenceHandler = VNSequenceRequestHandler() // 6 try sequenceHandler.perform([request], on: firstFrame) } catch { print(error.localizedDescription) } // 7 alignImages(request: request, frame: frame) } // 8 cleanup() } It seems like a lot of steps but this method is relatively straightforward. You will call this method after you’ve added all the captured frames. It will process each frame and align them using the Vision framework. Specifically, in this code, you: - Set the isProcessingFramesBoolean variable to prevent adding more frames. You also save the completion handler for later. - Remove the first frame from the frame buffer and add it to the frame buffer for aligned images. All other frames will be aligned to this one. - Loop through each frame in the frame buffer. - Use the frame to create a new Vision request to determine a simple translational alignment. - Create the sequence request handler, which will handle your alignment requests. - Perform the Vision request to align the frame to the first frame and catch any errors. - Call alignImages(request:frame:)with the request and the current frame. This method doesn’t exist yet and you’ll fix that soon. - Clean up. This method also still needs to be written. Ready to tackle alignImages(request:frame:)? Add the following code just below processFrames(completion:): func alignImages(request: VNRequest, frame: CIImage) { // 1 guard let results = request.results as? [VNImageTranslationAlignmentObservation], let result = results.first else { return } // 2 let alignedFrame = frame.transformed(by: result.alignmentTransform) // 3 alignedFrameBuffer.append(alignedFrame) } Here you: - Unwrap the first result from the alignment request you made within the forloop in processFrames(completion:). - Transform the frame using the affine transformation matrix calculated by the Vision framework. - Append this translated frame to the aligned frame buffer. These last two methods are the meat of the Vision code your app needs. You perform the requests and then use the results to modify the images. Now all that’s left is to clean up after yourself. Add this following method to the end of the ImageProcessor class: func cleanup() { frameBuffer = [] alignedFrameBuffer = [] isProcessingFrames = false completion = nil } In cleanup(), you simply clear out the two frame buffers, reset the flag to indicate that you’re no longer processing frames and set the completion handler to nil. Before you can build and run your app, you need to use the ImageProcessor in your CameraViewController. Open CameraViewController.swift. At the top of the class, define the following property: let imageProcessor = ImageProcessor() Next, find captureOutput(_:didOutput:from:). You’ll make two small changes to this method. Add the following line just below the let image = ... line: imageProcessor.add(image) And below the call to stopRecording(), still within the if statement, add: imageProcessor.processFrames(completion: displayCombinedImage) Build and run your app and… nothing happens. No worries, Mr. Potter. You still need to combine all of these images into a single masterpiece. To see how to do that, you’ll have to read on! ImageSaverin your ImageProcessor. This would allow you to save the aligned images to the Documents folder and see them in the Files app. How Photo Stacking works There are several different ways to combine or stack images together. By far the simplest method is to just average the pixels for each location in the image together. For instance, if you have 20 images to stack, you would average together the pixel at coordinate (13, 37) across all 20 images to get the mean pixel value for your stacked image at (13, 37). Pixel stacking If you do this for every pixel coordinate, your final image will be the average of all images. The more images you have the closer the average will be to the background pixel values. If something moves in front of the camera, it will only appear in the same spot in a couple of images, so it won’t contribute much to the overall average. That’s why moving objects disappear. This is how you’ll implement your stacking logic. Stacking Images Now comes the really fun part! You’re going to combine all of these images into a single fantastic image. You’re going to create your own Core Image kernel using the Metal Shading Language (MSL). Your simple kernel will calculate a weighted average of the pixel values for two images. When you average a bunch of images together, any moving objects should just disappear. The background pixels will appear more often and dominate the average pixel value. Creating a Core Image Kernel You’ll start with the actual kernel, which is written in MSL. MSL is very similar to C++. Add a new Metal File to your project and name it AverageStacking.metal. Leave the template code in and add the following code to the end of the file: #include <CoreImage/CoreImage.h> extern "C" { namespace coreimage { // 1 float4 avgStacking(sample_t currentStack, sample_t newImage, float stackCount) { // 2 float4 avg = ((currentStack * stackCount) + newImage) / (stackCount + 1.0); // 3 avg = float4(avg.rgb, 1); // 4 return avg; } }} With this code, you: - Define a new function called avgStacking, which will return an array of 4 float values, representing the pixel colors red, green and blue and an alpha channel. The function will be applied to two images at a time, so you need to keep track of the current average of all images seen. The currentStackparameter represents this average, while stackCountis a number indicating how images were used to create the currentStack. - Calculate the weighted average of the two images. Since currentStackmay already include information from multiple images, you multiply it by the stackCountto give it the proper weight. - Add an alpha value to the average to make it completely opaque. - Return the average pixel value. sample_tdata type is a pixel sample from an image. OK, now that you have a kernel function, you need to create a CIFilter to use it! Add a new Swift File to the project and name it AverageStackingFilter.swift. Remove the import statement and add the following: import CoreImage class AverageStackingFilter: CIFilter { let kernel: CIBlendKernel var inputCurrentStack: CIImage? var inputNewImage: CIImage? var inputStackCount = 1.0 } Here you’re defining your new CIFilter class and some properties you need for it. Notice how the three input variables correspond to the three parameters in your kernel function. Coincidence? ;] By this point, Xcode is probably complaining about this class missing an initializer. So, time to fix that. Add the following to the class: override init() { // 1 guard let url = Bundle.main.url(forResource: "default", withExtension: "metallib") else { fatalError("Check your build settings.") } do { // 2 let data = try Data(contentsOf: url) // 3 kernel = try CIBlendKernel( functionName: "avgStacking", fromMetalLibraryData: data) } catch { print(error.localizedDescription) fatalError("Make sure the function names match") } // 4 super.init() } With this initializer, you: - Get the URL for the compiled and linked Metal file. - Read the contents of the file. - Try to create a CIBlendKernelfrom the avgStackingfunction in the Metal file and panic if it fails. - Call the super init. Wait just a minute… when did you compile and link your Metal file? Unfortunately, you haven’t yet. The good news, though, is you can have Xcode do it for you! Compiling Your Kernel To compile and link your Metal file, you need to add two flags to your Build Settings. So head on over there. Search for Other Metal Compiler Flags and add -fcikernel to it: Metal compiler flag Next, click the + button and select Add User-Defined Setting: Add user-defined setting Call the setting MTLLINKER_FLAGS and set it to -cikernel: Metal linker flag Now, the next time you build your project, Xcode will compile your Metal files and link them in automatically. Before you can do this, though, you still have a little bit of work to do on your Core Image filter. Back in AverageStackingFilter.swift, add the following method: func outputImage() -> CIImage? { guard let inputCurrentStack = inputCurrentStack, let inputNewImage = inputNewImage else { return nil } return kernel.apply( extent: inputCurrentStack.extent, arguments: [inputCurrentStack, inputNewImage, inputStackCount]) } This method is pretty important. Namely, it will apply your kernel function to the input images and return the output image! It would be a useless filter, if it didn’t do that. Ugh, Xcode is still complaining! Fine. Add the following code to the class to calm it down: required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } You don’t need to be able to initialize this Core Image filter from an unarchiver, so you’ll just implement the bare minimum to make Xcode happy. Using Your Filter Open ImageProcessor.swift and add the following method to ImageProcessor: func combineFrames() { // 1 var finalImage = alignedFrameBuffer.removeFirst() // 2 let filter = AverageStackingFilter() //3 for (i, image) in alignedFrameBuffer.enumerated() { // 4 filter.inputCurrentStack = finalImage filter.inputNewImage = image filter.inputStackCount = Double(i + 1) // 5 finalImage = filter.outputImage()! } // 6 cleanup(image: finalImage) } Here you: - Initialize the final image with the first one in the aligned framer buffer and remove it in the process. - Initialize your custom Core Image filter. - Loop through each of the remaining images in the aligned frame buffer. - Set up the filter parameters. Pay attention that the final image is set as the current stack images. It’s important to not swap the input images! The stack count is also set to the array index plus one. This is because you removed the first image from the aligned frame buffer at the beginning of the method. - Overwrite the final image with the new filter output image. - Call cleanup(image:)with the final image after all images have been combined. You may have noticed that cleanup() doesn’t take any parameters. Fix that by replacing cleanup() with the following: func cleanup(image: CIImage) { frameBuffer = [] alignedFrameBuffer = [] isProcessingFrames = false if let completion = completion { DispatchQueue.main.async { completion(image) } } completion = nil } The only changes are the newly added parameter and the if statement that calls the completion handler on the main thread. The rest remains as it was. At the bottom of processFrames(completion:), replace the call to cleanup() with: combineFrames() This way, your image processor will combine all the captured frames after it aligns them and then pass on the final image to the completion function. Phew! Build and run this app and make those people, cars, and anything that moves in your shot disappear! And poof! The cars disappear! For more fun, wave a wand and yell Evanesco! while you use the app. Other people will definitely not think you’re weird. :] Where to Go From Here? Congratulations! You’ve made it through a lot of concepts in this tutorial. You’re now ready to work your magic in the real world! However, if you want to try to improve your app, there are a couple of ways to do so: - Use VNHomographicImageRegistrationRequestto calculate the perspective warp matrix to align the captured frames. This should create a better match between two frames, it’s just a bit more complicated to use. - Calculate the mode pixel value instead of the average. The mode is the most frequently occurring value. Doing so will remove all influence of moving objects from the image, as they won’t be averaged in. This should create a cleaner looking output image. Hint: Convert the RGB to HSL and calculate the mode based on small ranges of the hue (H) value. If you’re interested in more information about Metal, check out Metal Tutorial: Getting Started and the Metal by Tutorials book. We hope you enjoyed this tutorial, and if you have any questions or comments, please join the forum discussion below!
https://www.raywenderlich.com/3733151-photo-stacking-in-ios-with-vision-and-metal
CC-MAIN-2019-43
refinedweb
3,282
57.87
0 This seems simple, but it's not working. There IS a text file in the same directory, but it's not being opened (and with the ios::in | ios::out, it should open even if it doesn't exist). I also ran into trouble trying to use infile.fail, which is why I used !infile. Thanks for any help! Rich #include <iostream> #include <fstream> #include <string> using namespace std; int main () { // inputting a char from a file to a var and the console char ch; // The test file has one character, 'A'. fstream infile; infile.open("charsfile.txt", ios::in | ios::out); if (!infile) cout << "File didn't open" << endl; infile.get(ch); cout << "The test character is: " << ch << endl; return 0; }
https://www.daniweb.com/programming/software-development/threads/265009/file-i-o-version-1
CC-MAIN-2018-43
refinedweb
123
85.79
A lot of people ask or have a confusion, is it possible to use jQuery with React or is it okay to use both the libraries at a time! I?ve got a rude answer to that.Don?t use jQuery. In most cases, you won?t need anything from jQuery when you?re properly using React. Well, the fact is ?in most cases? you don?t need. But sometimes when building large scale applications maybe you need to use jQuery plugins or a piece of jQuery functionality. When it?s not possible/very tough to write in React you haven?t got anyway. Well, as this article is about using jQuery in React component let?s dive into it. Firstly, you have to import the jQuery library. We also need to import findDOmNode as we?re going to manipulate the dom. And obviously, we are importing React as well. import React from ‘react’;import { findDOMNode } from ?react-dom?;import $ from ?jquery?; Let?s think we need some type of slide toggle onClick over an element. Well, we?re proceeding in ES 2015’s way. We are setting an arrow function ?handleToggle? that will fire when an icon will be clicked. We?re just showing and hiding a div with a reference naming ?toggle? onClick over an icon. We must not use class/id to manipulate dom. handleToggle = () => { const el = findDOMNode(this.refs.toggle); $(el).slideToggle(); }; We have this part of the information and on click, over the icon, we will show additional information. <ul className=?profile-info?> <li> <span className=?info-title?>User Name : </span> Shuvo Habib </li> </ul> Let?s now set the reference naming ?toggle? mentioned earlier, in JSX. <ul className=?profile-info additional-profile-info-list? ref=?toggle?> <li> <span className=?info-email?>Office Email</span> me@shuvohabib.com </li></ul> The div element where we will fire the ?handleToggle? on onClick. <div className=?ellipsis-click? onClick={this.handleToggle}> <i className=?fa-ellipsis-h?/> </div> Let review the full code below, how it looks like. import React from ?react?;import { findDOMNode } from ?react-dom?;import $ from ?jquery?;class FullDesc extends React.Component { constructor() { super(); }handleToggle = () => { const el = findDOMNode(this.refs.toggle); $(el).slideToggle(); };render() { return ( <div className=?long-desc?> <ul className=?profile-info?> <li> <span className=?info-title?>User Name : </span> Shuvo Habib </li> </ul><ul className=?profile-info additional-profile-info-list? ref=?toggle?> <li> <span className=?info-email?>Office Email</span> me@shuvohabib.com </li> </ul> <div className=?ellipsis-click? onClick={this.handleToggle}> <i className=?fa-ellipsis-h?/> </div> </div> ); }}export default FullDesc; We are done! This is the way, how we can use jQuery in React component. Want to learn about Hooks in React JS? Check the tutorial below.
https://911weknow.com/using-jquery-in-react-component-the-refs-way
CC-MAIN-2022-40
refinedweb
456
55.61
Recently I passed both my 44th birthday and my 15th wedding anniversary, just signed my daughter up for high school and was told by my doctor that my HDL was soundly thrashing my LDL. My beard, which I’ve worn since my early twenties, is now streaked with gray (a curse of red hair, I fear), and I notice that lately the stairs seem to have mysteriously begun to grow from one trip to the next. T.S. Elliot is beginning to become … relevant … to me. All signs, perhaps, that I am no longer the young spring chicken I once was. As I was thinking about things to write for this particular column, this realization about age began to sink in about the standard that I’ve spent the last decade writing about. A decade is a long time in computer circles, especially when you figure that there’s only been five or six of them in the whole history of computing. XML has gone from being a “standard” that perhaps a couple dozen people worldwide knew about to a pervasive technology that is so well entrenched that many people don’t really even think much about it any more. We argue about the XMLification of word processing and spreadsheet programs, we debate whether Atom or RSS 2.0 will predominate, we shake our heads at the whole notion of web services and how the dominant web services protocol was designed largely by bloggers to let people know about their websites. In short, while XML is not exactly doddering off to the rest home, its angle-bracket knees are no longer as flexible as they used to be. If it were a person you’d expect it to be muttering about those damn JSON punks and how property taxes and inflation are eating up its standard of living. It no longer is as flashy a technology as it used to be (even as Flash has been migrating to an XML format), and more than once I’ve run into twenty-something AJAX hot-shots who declare XML so yesterday (even as they write applications that bind AJAX objects to XML structures). It’s become the establishment, though in many respects I suspect that while its glory days are behind it, XML is becoming more integrated into the fabric of computing. To that end, I wanted to offer up an assessment of where XML itself is going. As always, this is written by a guy in a coffee-shop, so take it with the usually assortments of saline condiments: - Hello, we’re from the government and we’re here to help. XML has become the lingua franca of a surprisingly large number of government agencies, ministries and departments. Whichever divide you fall into on the ODF vs. OOXML debate, the reality here is that both of these are XML formats and they would not have emerged if the demand for an XML based word processing format did not exist, largely from this sector. An XML word document format and a couple of transformations will give you the foundation of any number of CMS systems, and that in turn is now making it easier to actually turn that mountain of documents into useful repositories of information rather than largely locked in disk drive filler. - Enterprise 2.0 is not JSON-based. Transaction validity, robustness of content, minimal semantics, proven tools, data store integration, component management .. all of these factors need to be met when businesses adopt a technology, and after a decade of development, XML is increasingly supplying all of these needs. It’s worth noting that, despite the AJAX term showing up in 2004, the technology for doing message transport via JavaScript has been around since 1999 (longer if you consider security hole hacks into IFrame), and for all that it is “hot” now, I am hearing from many of enterprise level clients that they distrust the security of AJAX, the rather poor performance of many JavaScript tools and its inability to play nicely with others. I’ve been working with JavaScript since its inception and work with it daily even now, but it works best when it can work in conjunction with an (increasingly XML based) Document Object Model. - The Marriage of XQuery and REST. I’ve written about this fairly extensively in this venue, but think it is worth reiterating here. Combine an XQuery based system with a server objects namespace and you have the foundation for a remarkably powerful server environment comparable to ASP.NET, PHP or Ruby, and what’s more, such a system is remarkably neutral in its deployment (you could deploy such solutions from ASP.NET, PHP, JSP or Ruby). - Add XForms and Stir. XQuery is effective because it reduces the middleware “translation” layer to practically nothing; if you work with UI components that can in turn consume that XML (either via standalone model instance islands a la XForms or via other XML aware toolkits) and you have a remarkably powerful combination where you are shuttling XML back and forth without ever having to worry about the underlying implementations. Such a solution isn’t a “total” solution, but then again it doesn’t need to be - you can effectively wrap services such as sendmail calls or image manipulation in an XQuery module that lets you stay at the XML abstraction layer. - Keep It Simply Semantically-neutral, Stupid. There’s an interesting trend developing at the enterprise level. XML by itself isn’t enough … a system also has to be both comparatively simple and push the semantics as late into the process as possible. If you have a system with 1000 elements, you have about 950 too many. I think one of the things that has proved a limiting case for the adoption of XAML (given the amount of resources you’d expect could be poured into it by Microsoft) is that XAML is a reflection of the .NET DOM, and as such has literally thousands of potential elements. The solutions that in general seem to have staying power tend to be “reasonably modular”, and with a clear mechanism for managing such modularization. What’s more, many of the most robust solutions work best when combined with a transformation pipeline, and as such remain semantically neutral until they become “actualized” in an appropriate viewer (such as a browser). - Mobile Technology Pushing Standards. If you want to see where the real action is in the XML sphere, get away from the doddering browsers and take a look at the mobile market. Declarative programming works better in such an environment where you can define explicit pre-defined behaviors for given elements in firmware rather than deal with the vagaries of scripting, and as such these devices are rapidly emerging as the forefront of XML implementations. Most web browsers are only just now inching up to SVG 1.1, but SVG 1.2 Tiny (such as Ikivo’s implementation) has been a staple on many phones and other mobile devices for quite some time, and even in places where the W3C standards aren’t being used, the implementations that are being used are XML based. On the XForms front, picoforms is the player to watch in this space. - Semantics isn’t just for kids anymore. The rise of folksonomies have brought the terms “taxonomist” and “ontologist” out of the domains of library science and religion respectively, and turned them into remarkably high paying jobs. We’re now discovering that the process of defining schemas is remarkably difficult, and that meaning is similarly difficult to hold and describe. While I am still not sure that RDF is the best language for describing such semantics, I see much of the borders of what used to be called AI and cybernetics increasingly described in angle-bracket terms, and declarative languages in general enjoying a renaissance as we push our awareness of meaning to the next level. If I was entering IT as a newly managed IT college graduate, I’d be looking at semantic systems and knowledge management as the “hot” fields to be getting into. - AJAX. Okay, I touched on this one before, but I think its worth making a few more comments here. AJAX is here to stay, though I see it being most predominant in the desktop/laptop presentation side more than anything, and the predominant form is going to be Firefox flavoured. Why? At least for the next few years, Firefox has developer momentum behind it, though its running into the complexity conundrum that’s making it harder and harder to push the envelope. I don’t think that a branding change is going to make Silverlight any more palatable as a technology - those who are firmly in the MS camp will use it, but there is a fair degree of hostility in the marketplace for anything Microsoft right now from the developer side, and at least for a while, those who are most heavily into AJAX development are powering up Mozilla first then maybe thinking about Microsoft as an afterthought browser (I suppose we have to support it …). Microsoft would do itself a world of good to swallow its Not-Invented-Here pride and adopt the Mozilla API - even though I believe that technologically Microsoft’s are probably better written, programmers are just as political as anyone else. - Pipelines and Work-flows. The W3C mandate is slowly moving up the stack, recognizing that in an increasingly distributed world, document management becomes an infrastructure issue. There’ve been a lot of enterprise level process flow, orchestration and work-flow management schemas developed by OASIS, WS-* and a number of individual companies or OSS projects, yet none of them have really managed to click. I believe that this is because work-flow management and orchestration are ultimately atomic processes that need to be intrinsic to the web infrastructure, and that all of the solutions presented thus far fail because they are working too high in the stack. For simple pipeline management, pay attention to XProc, which I see as being a low level specification that works in the same space as ANT (though perhaps lower in the stack). Work-flow management schemas might similarly need to be brought into the W3C (or at least OASIS); if XML development patterns hold true here, a simple workflow management schema would likely succeed where complex ones fail. - Schematron. XPath is an interesting language - where it plays an integral part in other languages, those other languages eventually do quite well even though they may get a slow start. I see Schematron as falling into that camp. Schematron can be implemented in a number of different ways, but provides a mechanism for associating more complex constraints than can be expressed in XSD in an easy to use format. While its validation aspect is its primary goal, the conditional nature of validation also opens up the possibility of introducing other constraint options (such as calculations or relevancy constraints) into systems. There is similarly an effort at the W3C level for business rule encapsulation, though my suspicion is that in the long run it will look a lot like Schematron. - HTML 5 and Bindings. Note to the HTML camp - HTML 5.0 will be XML based, it’s just a question of how much core technology will separate it from XHTML 2. There is no valid reason for HTML not to close its tags, quote its attributes, and respect namespaces. I think the bigger debates are going to be around issues like XForms vs. HTML5 Forms, which I see as the question of whether the language should be component-centric or data model centric (and there are valid arguments on both sides of that one), and the degree to which CSS and JavaScript should control things. XBL (and XBL2) bindings bring a lot to the table, including a reasonably comprehensive mechanism for mixing the structure that tags bring with the fluidity of JavaScript to manipulate those tags. Certainly, I see user defined semantics for tags as being the hallmark of the next five years just as user defined activities are defining most of the next leg of the web. It also provides a formal mechanism for building mashups (god, that term is beginning to seem antiquated!) without breaking the integrity of an XHTML structure. Public Repositories and Feeds. XML based data repositories are becoming common, and any time you provide that raw material you will see innovation. On the XForms.org site (a Drupal based site) I run a number of aggregated news feeds that listen in for XForms related content on both the general search sites and specialized technical news feeds and display them to the users; a related concept is to read (and filter) job sites to display relevant jobs in the field to members of the site. I suspect that this feed model will increasingly end up replacing the sometimes more cumbersome repositories in various verticals (such as Geographic Information Systems, or GIS). - Atom and the Atom Publishing Protocol should make it big here. Consider a “mashup” of atom and the human genome database, for instance, and extrapolate from there. Google’s adoption of Atom and the APP will likely add considerable weight to that specification, and have already moved a significant amount of my own development efforts to support atom as a general transport protocol, and the APP as a general access format, and Tim Bray’s work in creating a mod_atom module for Apache should have significant impact in the next couple of years - providing a more efficient, usable and secure layer than WebDAV by itself has been able to provide. - XML Databases and the plateauing of SQL. This will take a while longer, but I see SQL-based server systems plateauing in the next few years - they won’t go away, but those databases will likely begin to look increasingly like XML databases, while existing XML based database systems will continue to gain market share. Part of this will likely be due to XQuery (plus some form of XUpdate, which will likely be introduced shortly). I think that XUpdate has been slower out of the gate because the entrenched SQL vendors realize that by having both query and update that a big rationale for their SQL products as separate, self-contained systems goes away, but customer demand, hungry commercial upstarts and OSS projects will likely drive towards a need to readdress this issue. I find that I like getting older, even despite those treacherous stairs. The problem domains have become larger, more complex, and more political. I think this is true of XML as well; the issues involved in its use are no longer those of adoption but of scope, of using the language as a tool for achieving consensus and smoothing the barriers to entry. I see XML coming to an equilibrium position with AJAX and another one with SQL - not wiping out these technologies, but providing a bridge between technical domains. It is playing a huge part in the emerging semantic web, and in hybrid areas (such as bioinformatics) it has become the exchange medium of choice. Maybe, just maybe, XML is beginning to grow up. Kurt Cagle is an author and information architect living in Victoria, British Columbia, Canada. He will be at the O’Reilly Open Source conference in Portland Oregon, giving a paper on XQuery/XForms systems and REST Objectified XML (ROX). > (...) those damn JSON punks (...) Priceless Kurt! :D Everything we told the HTML punks would happen did happen, and everything the JSON punks tell us will happen will happen.. Len,. Love the Insight... I am A young Punk(28) who really sees the value of XML. As more 4g Languages are Speaking Directly to Web services(which Are Spitting out MBs of XML), I have the urge to Get my new'XML' Tattoo right on My neck. Underneath all of this web 2.0 Buzz is the power of XML(portable DBs) Flying though Cyberspace...I wish More Of my generation would look a little deeper into the data, rather than CFing everything and swearing the know the Way. Thanks for the Insight. Brett So, I'll admit to being something of an XML newbiebeing that I've been an unemployed stay-at-home dad since January 2001, right around the time when XML seemed to be getting hot. . . . My daughter scares me worse. My son is level headed and wants a career in computer science and game design. My daughter wants to rule the world and says she has to knock off the old man first because he is smart and will foil her. She's right on item 2. The daughter will break the heart worse because we have absolutely no defense.. Or an apple user... it's sad that the web should and can enable us to communicate better with one another, and yet people still take an iota of information about a person and extrapolate the rest from there, whether the topic is iso standards or the ipod vs. zune debate. It puts me in mind of the secret life of walter mitty: "Your small minds are musclebound with suspicion. That's because the only exercise you ever get is jumping to conclusions." XML has failed in many places. JSON is a childrens toy. AJAX is for 'inventors of wheels'. Nothing here shows that XML has managed to integrate and interoperate X with Y in some great engineering fashion. To do those you have to that kind of work everyday which sadly means it failed to do it. Simple. A,. I like JSon and I like XML . Both have their places. JSon is simple and compact and works beautifully with Java script (not surprisingly as it is Java script ...). XML has the advantage of Schema and specifications and hence is preferable for interoperability. The biggest threat to XML is not from JSon but from DSLs which offer to be both more compact and descriptive than XML and still easy to validate for correctness. Whether I prefer a set of different DSLs to the verbose but always simmilar beast called XML is however a different question ... it is nide article it is nice article. I like it ! Thank you ! Today I use XML when I am designing a document page that is also a web page that is also an application page that is also transformed into a printed form... yadda. The change most completely wrought by the HTML/XML/XSLT trinity is that when we design an application today, we are designing a document. It is so obvious we seldom mention it but the triumph of the hypertext community over almost all of the rest of computer interface designs would be what I would consider most notable were I to have slept from 1985 to the present day. liked today's topic, lately I've been tempted to think of XML as "old" but that's not the case. xschema seems like it's still in infancy, judging from the tutorials, and people (myself included) are only just now being made aware of schematron. Len,. A nod to XML as a native data type seems appropriate. E4X (as already mentioned) Scala and Xlinq come to mind. There was a proposal to add this to Java 7 but I haven't seen any activity on that lately (no surprise, it was controversial and with everything else queued up for v7 it may not stand a chance anyway). The native XML plus pattern matching in Scala has been eye-opening for me. It's almost a guilty pleasure; like I'm eating candy when I should be "coding". Nothing a little java/DOM can't cure;-)? It depends on what you mean by 'native XML'. Isn't a DOM 'native XML'? If an XML file is supported by an XML data provider, isn't that 'native XML'? X3D, XAML, XUL, SVG, aren't these 'native XML'?. len,. Ah. No disagreement with that. Back in the 1980s when discussing SGML at a design meeting, Charlie Sorgi from what was then Mentor Context made the statement that one day SGML would simply be a checkmark in a list of product features. A generation later that is pretty much the case. Maybe the answer to 'where is XML going' is 'nowhere'. It's just there. Kurt,. Aaron,.
http://www.oreillynet.com/xml/blog/2007/07/wheres_xml_going.html
crawl-002
refinedweb
3,402
57.5
Asked by: uestions on Message Source Property of the Send Email Task in SSIS My issues / questions 1) When I put HTML in the Message Source property of the Send Email Task Editor in my SSIS package, the email is sent to my outlook alias. When I open it in outlook, it's not in html format, rather I see the html instead of it converting to html. My outlook client is set to html in the properties of my client so it should be fiine. 2) I find the Message Source area to enter the body of the email a pain. I can't return lines by clicking enter on my keyboard. Is there another way to code this? 3) Is there a way I can manage this using a CSS class and somehow include a css document that the Message Source can pick up? I guess if there is another way to code the Message Source other than in the properties screen in the Send Email Task Editor such as the abilty to code this elsewhere in VS, maybe I can then reference an include somehow. Question All replies The SendMail task is plain text only. However one can easily code an HTML SendMail task, and this is a new topic in BOL for the Service Pack 1 release. Unfortunately it's a mess to copy the HTML from a BOL page. I'll make a note to do my best on Monday and post it for you. -Doug Here is the new topic as plain text. Disappointing that it's impossible to copy and paste a BOL topic with formatting...but it is. Sending an HTML Mail Message with the Script Task Send Feedback The Integration Services SendMail task only supports mail messages in plain text format. However you can easily send HTML mail messages by using the Script task and the mail capabilities of the .NET Framework. Note: If you want to create a task that you can more easily reuse across multiple packages, consider using the code in this Script task sample as the starting point for a custom task. For more information, see Extending the Package with Custom Tasks. Description The following example uses the System.Net.Mail namespace to configure and send an HTML mail message. The script obtains the To, From, Subject, and body of the email from package variables, uses them to create a new MailMessage, and sets its IsBodyHtml property to True. Then it obtains the SMTP server name from another package variable, initializes an instance of SmtpClient, and calls its Send method to send the HTML message. The sample encapsulates the message sending functionality in a subroutine that could be reused in other scripts. To configure this Script Task example without an SMTP Connection Manager Create string variables named HtmlEmailTo, HtmlEmailFrom, and HtmlSubject and assign appropriate values to them for a valid test message. Create a string variable named HtmlEmailBody and assign a string of HTML markup to it. For example: <html><body><h1>Testing</h1><p>This is a <b>test</b> message.</p></body></html> Create a string variable named HtmlEmailServer and assign the name of an available STMP server that accepts anonymous outgoing messages. Assign all five of these variables to the ReadOnlyVariables property of a new Script task. Import the System.Net and System.Net.Mail namespaces into your code. The sample code in this topic obtains the SMTP server name from a package variable. However, you could also take advantage of an SMTP Connection Manager to encapsulate the connection information, and extract the server name from the connection manager in your code. The AcquireConnection method of the SMTP connection manager returns a string in the following format: SmtpServer=smtphost;UseWindowsAuthentication=False;EnableSsl=False; You can use the String.Split method to separate this argument list into an array of individual strings at each ";" or "=", then extract the second argument (subscript 1) from the arry as the server name. To configure this Script Task example with an SMTP Connection Manager Modify the Script task configured above by removing the HtmlEmailServer variable from the list of ReadOnlyVariables. Replace the line of code that obtains the server name: Dim smtpServer As String = _ Dts.Variables("HtmlEmailServer").Value.ToString with the following lines: Dim smtpConnectionString As String = _ DirectCast(Dts.Connections("SMTP Connection Manager").AcquireConnection(Dts.Transaction), String) Dim smtpServer As String = _ smtpConnectionString.Split(New Char() {"="c, ";"c})(1) Code Sample Public Sub Main() Dim htmlMessageTo As String = _ Dts.Variables("HtmlEmailTo").Value.ToString Dim htmlMessageFrom As String = _ Dts.Variables("HtmlEmailFrom").Value.ToString Dim htmlMessageSubject As String = _ Dts.Variables("HtmlEmailSubject").Value.ToString Dim htmlMessageBody As String = _ Dts.Variables("HtmlEmailBody").Value.ToString Dim smtpServer As String = _ Dts.Variables("HtmlEmailServer").Value.ToString SendMailMessage( _ htmlMessageTo, htmlMessageFrom, _ htmlMessageSubject, htmlMessageBody, _ True, smtpServer) Dts.TaskResult = Dts.Results.Success End Sub Private Sub SendMailMessage( _ ByVal SendTo As String, ByVal From As String, _ ByVal Subject As String, ByVal Body As String, _ ByVal IsBodyHtml As Boolean, ByVal Server As String) Dim htmlMessage As MailMessage Dim mySmtpClient As SmtpClient htmlMessage = New MailMessage( _ SendTo, From, Subject, Body) htmlMessage.IsBodyHtml = IsBodyHtml mySmtpClient = New SmtpClient(Server) mySmtpClient.Credentials = CredentialCache.DefaultNetworkCredentials mySmtpClient.Send(htmlMessage) End Sub
https://social.msdn.microsoft.com/Forums/sqlserver/en-us/772fd016-a1f7-45df-bacd-8515b5d486f4/uestions-on-message-source-property-of-the-send-email-task-in-ssis?forum=sqlintegrationservices
CC-MAIN-2016-44
refinedweb
876
55.13
Dear all, I am trying to run OS command from pl/sql, this OS command like pkzipc. I do not have problem running native OS command (Windows) like dir, findstr etc, but not the ones we added, such as pkzipc. I like to find out how do I run pkzipc within the pl code. I have no problem running other commands using the following java code: CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "UTLcmd"AS import java.lang.Runtime;public class execHostCommand{ public static void execute (String command) throws java.io.IOException { String osName = System.getProperty("os.name"); if(osName.equals("Windows XP")) command = "cmd /c " + command; Runtime rt = java.lang.Runtime.getRuntime(); rt.exec(command); }}/ Thanks, Tina Forum Rules
http://www.dbasupport.com/forums/showthread.php?62292-Problem-on-running-os-command-from-pl-sql&p=268475
CC-MAIN-2017-04
refinedweb
121
58.38
(Sorry for hijacking the thread. This is the second time in the same day... sigh!) On Thu, 2008-02-28 at 08:04 -0500, bhaaluu wrote: <snip> > I also run Python on Linux. I've tried several of the Python IDEs > (Integrated Development Environments), such as IDLE, Eric, and > so forth, but the best (for me) has been vim. I agree. > I use the following > .vimrc file: > > -------------8<-----Cut Here----->8------------------- > " .vimrc > " > " Created by Jeff Elkner 23 January 2006 > " Last modified 2 February 2006 > " > " Turn on syntax highlighting and autoindenting > syntax enable > filetype indent on This should be: if has("autocmd") " If Vim is compiled with support for 'autocommands'... filetype indent on " ...turn on indentation accordingly. endif > " set autoindent width to 4 spaces (see > ") > set et > set sw=4 > set smarttab Not good. This will turn the above for all files edited with Vim. Here's a better setup (note: some of the lines bellow are longer than 80 column which might get wrapped by you mail reader! sorry.): if has("autocmd") " Only do this part when compiled with support for 'autocommands'. autocmd FileType python set ts=4 sw=4 et " Python autocmd FileType ruby set ts=2 sw=2 " Ruby autocmd FileType c,cpp set ts=4 sw=4 cindent " C & C++ autocmd FileType docbk,html,xhtml,xml set ts=2 sw=2 " DocBook, HTML, XHTML, and XML endif " has("autocmd") (You can add more specific options for any other file types you wish for.) > " set line number (added by bhaaluu) > set nu > " Bind <f2> key to running the python interpreter on the currently active > " file. (courtesy of Steve Howell from email dated 1 Feb 2006). > map <f2> :w\|!python %<cr> Grate! (You could use ':update' instead of ':w' which will write the file only if it has changed.) > -------------8<-----Cut Here----->8------------------- > > I run vim in Konsole, but any Xterm works, AFAIK. <snip> > > Happy Programming! Happy programming to all of you out there. Ziyad. (again, sorry for hijacking the thread!)
https://mail.python.org/pipermail/tutor/2008-February/060380.html
CC-MAIN-2014-15
refinedweb
332
74.9
27 March 2011 20:30 [Source: ICIS news] HOUSTON (ICIS)--?xml:namespace> The domestic market accounted for a sizable part of the recent surge in demand, a producer said, adding that most of the increase in The original equipment manufacturer (OEM) market that provides tyres to new vehicle producers is a major demand driver for the US SBR market. When auto sales are strong, tyre demand gets a boost. “I talked to a guy at Cooper Tire recently, and his factories are running at full speed. I think it’s the same for other tyre producers,” a trader said. However, while strong domestic SBR demand was a welcome change, buyers had some real competition from “ US Gulf FOB (free on board) contract prices for March 1502-grade SBR are 124-131 cents/lb ($2,734-$2,888, €1,941-2,050/tonne), while US spot 1502 prices are 148-155 cents/lb, as assessed by ICIS. In Asia, North American SBR suppliers include Goodyear, International Specialty Products (ISP), Lion Copolymer and Negromex. Hosted by the National Petrochemical & Refiners Association (NPRA), the International Petrochemical Conference
http://www.icis.com/Articles/2011/03/27/9445821/npra-11-us-sbr-buyers-compete-with-offshore-demand.html
CC-MAIN-2015-14
refinedweb
184
50.16
On 11/04/18 12:34, Daniel P. Berrangé wrote: > On Wed, Apr 11, 2018 at 12:29:11PM +0100, Radostin Stoyanov wrote: >> This patch set contains rebased version of Katerina's work from GSoC 2016 >> [1]. >> It allows integrates CRIU [2] with the libvirt-lxc to enable save/resore of >> containers. > I vaguely recall that when Katerina first did that work, we hit some > limitations of CRIU at the time, that blocked us merging. Does anyone > recall what that was, and if & when it was addressed in CRIU ? The previous patch series (from 2016) is One current limitation of CRIU is that it fails to restore a containers with enabled user namespace (See) Advertising Radostin -- libvir-list mailing list libvir-list@redhat.com
https://www.mail-archive.com/libvir-list@redhat.com/msg161182.html
CC-MAIN-2018-17
refinedweb
124
69.41
IPv6 Internet Protocol version 6 Synopsis: #include <sys/socket.h> #include <netinet/in.h> int socket( AF_INET6, SOCK_RAW, proto); Description: The IP6 protocol is the network-layer protocol used by the Internet Protocol version 6 family (AF_INET6). Options may be set at the IP6 level when using higher-level protocols based on IP6 (such as TCP and UDP ). It may also be accessed through a raw socket when developing new protocols, or special-purpose applications. There are several IP6-level setsockopt() /getsockopt() options. They are separated into the basic IP6 sockets API (defined in RFC 2553), and the advanced API (defined in RFC 2292). The basic API looks very similar to the API presented in IP . The advanced API uses ancillary data and can handle more complex cases. Basic IP6 sockets API You can use the IPV6_UNICAST_HOPS option to set the hoplimit field in the IP6 header on unicast packets. If you specify -1, the socket manager uses the default value. If you specify a value of 0 to 255, the packet uses the specified value as it hoplimit. Other values are considered invalid and result in an error code of EINVAL. For example: int hlim = 60; /* max = 255 */ setsockopt( s, IPPROTO_IPV6, IPV6_UNICAST_HOPS, &hlim, sizeof(hlim) ); The IP6 multicasting is supported only on AF_INET6 sockets of type SOCK_DGRAM and SOCK_RAW, and only on networks where the interface driver supports multicasting. The IPV6_MULTICAST_HOPS option changes the hoplimit for outgoing multicast aren't forwarded beyond the local network. Multicast datagrams with a hoplimit of 0 won't be transmitted on any network, but may be delivered locally if the sending host belongs to the destination group and if multicast loopback hasn overrides the default for subsequent transmissions from a given socket: unsigned int outif; outif = if_nametoindex("ne0"); setsockopt( s, IPPROTO_IPV6, IPV6_MULTICAST_IF, &outif, sizeof(outif) ); (The outif argument eliminating the overhead of receiving their own transmissions. Don't use the IPV6_MULTICAST_LOOP option if there might be more than one instance of your application on a single host (e.g. a conferencing program), or if the sender doesn't belong to the destination group (e.g. a time-querying program). A multicast datagram sent with an initial hoplimit greater than 1 may be delivered to the sending host on a different interface from that on which it was sent, if the host belongs to the destination group on that other interface. The loopback control option has no effect on such a delivery. A host must become a member of a multicast group before it can receive datagrams sent to the group. To join a multicast group, use the IPV6_JOIN_GROUP option: struct ipv6_mreq mreq6; setsockopt( s, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq6, sizeof(mreq6) ); Note that the mreq6 argument has the following structure: struct ipv6_mreq { struct in6_addr ipv6mr_multiaddr; unsigned int ipv6mr_interface; }; Set the ipv6mr_interface member to 0 to choose the default multicast interface, or set it) ); The mreq6 argument contains the same values as used to add the membership. Memberships are dropped when the socket is closed or the process exits. The IPV6_PORTRANGE option IPV6_BINDV6ONLY option controls the behavior of the AF_INET6 wildcard listening socket. The following example sets the option to 1: int on = 1; setsockopt( s, IPPROTO_IPV6, IPV6_BINDV6ONLY, &on, sizeof(on) ); If you set the IPV6_BINDV6ONLY option to 1, the AF_INET6 wildcard listening socket accepts IP6 traffic only. If set to 0, the socket accepts IPv4 traffic as well, as if it were from an IPv4 mapped address, such as ::ffff:10.1.1.1. Note that if you set the option to 0, IPv4 access control gets much more complicated. For example, even if you have no listening AF_INET socket on port X, you'll end up accepting IPv4 traffic by an AF_INET6 listening socket on the same port. The default value for this flag is copied at socket-instantiation time, from the net.inet6.ip6.bindv6only variable from the sysctl utility. The option affects TCP and UDP sockets only. Advanced IP6 sockets API The advanced IP6 sockets API lets applications specify or obtain details about the IP6 header and extension headers on packets. The advanced API uses ancillary data for passing data to or from the socket manager. There are also setsockopt() / getsockopt() options to get optional information on incoming packets: - IPV6_PKTINFO - IPV6_HOPLIMIT - IPV6_HOPOPTS - IPV6_DSTOPTS -() , as one or more ancillary data objects. If IPV6_PKTINFO is enabled, the destination IP6 address and the arriving interface index are available via struct in6_pktinfo on an ancillary data stream. You can pick the structure by checking for an ancillary data item by setting the cmsg_level argument to IPPROTO_IPV6 and the cmsg_type argument to IPV6_PKTINFO. If IPV6_HOPLIMIT is enabled, the hoplimit value on the packet is made available to the application. The ancillary data stream contains an integer data item with a cmsg_level of IPPROTO_IPV6 and a cmsg_type of IPV6_HOPLIMIT. The inet6_option_space() family of functions help you parse ancillary data items for IPV6_HOPOPTS and IPV6_DSTOPTS. Similarly, the inet6_rthdr_space() family of functions help you parse ancillary data items for IPV6_RTHDR. You can pass ancillary data items with normal payload data, using the sendmsg() function. Ancillary data items are parsed by the socket manager, and are used to construct the IP6 header and extension headers. For the cmsg_level values listed above, the ancillary data format is the same as the inbound case. Additionally, you can specify a IPV6_NEXTHOP data object. The IPV6_NEXTHOP ancillary data object specifies the next hop for the datagram as a socket address structure. In the cmsghdr structure containing this ancillary data, the cmsg_level argument is IPPROTO_IPV6, the cmsg_type argument is IPV6_NEXTHOP, and the first byte of cmsg_data is the first byte of the socket address structure. If the socket address structure contains an IP6 address (e.g. the sin6_family argument is AF_INET6 ), then the node identified by that address must be a neighbor of the sending host. If that address equals the destination IP6 address of the datagram, then this is equivalent to the existing SO_DONTROUTE socket option. For applications that don't, or can't use the sendmsg() or the recvmsg() function, the IPV6_PKTOPTIONS socket option is defined. Setting the socket option specifies any of the optional output fields: setsockopt( fd, IPPROTO_IPV6, IPV6_PKTOPTIONS, &buf, len ); The buf argument points to a buffer containing one or more ancillary data objects; the len argument is the total length of all these objects. The application fills in this buffer exactly as if the buffer were being passed to the sendmsg() function as control information. The options set by calling setsockopt() for IPV6_PKTOPTIONS are called sticky options because once set, they apply to all packets sent on that socket. The application can call setsockopt() again to change all the sticky options, or it can call setsockopt() specified that it wants to receive. The buf argument points to the buffer that the call fills in. The len argument is a pointer to a value-result integer; when the function is called, the integer specifies the size of the buffer pointed to by buf, and on return this integer contains the actual number of bytes that were stored in the buffer. The application processes this buffer exactly as if it were returned by recvmsg() as control information. Advanced API and TCP sockets When using getsockopt() with the IPV6_PKTOPTIONS option and a TCP socket, only the options from the most recently received segment are retained and returned to the caller, and only after the socket option has been set. The application isn't allowed to specify ancillary data in a call to sendmsg() on a TCP socket, and none of the ancillary data described above is ever returned as control information by recvmsg() on a TCP socket. Conflict resolution In some cases, there are multiple APIs defined for manipulating an IP6 header field. A good example is the outgoing interface for multicast datagrams: it can be manipulated by IPV6_MULTICAST_IF in the basic API, by IPV6_PKTINFO in the advanced API, and by the sin6_scope_id field of the socket address structure passed to the sendto() function. In QNX Neutrino, when conflicting options are given to the socket manager, the socket manager gets the value in the following order: - options specified by using ancillary data - options specified by a sticky option of the advanced API - options specified by using the basic API - options specified by a socket address. Raw IP6 Sockets Raw IP6 sockets are connectionless, and are normally used with sendto() and recvfrom() , although you can also use connect() to fix the destination for future packets (in which case you can use read() or recv() , and write() or send() ). If proto is 0, the default protocol IPPROTO_RAW is used for outgoing packets, and only incoming packets destined for that protocol are received. If proto is nonzero, that protocol number is used on outgoing packets and to filter incoming packets. Outgoing packets automatically have an IP6 header prepended to them (based on the destination address and the protocol number the socket is created with). Incoming packets are received without the IP6 header or extension headers. All data sent via raw sockets must be in network byte order; all data received via raw sockets is in network-byte order. This differs from the IPv4 raw sockets, which didn't specify a byte ordering and typically used the host's byte order. Another difference from IPv4 raw sockets is that complete packets (i.e. IP6 packets with extension headers) can't be read or written using the IP6 raw sockets API. Instead, ancillary data objects are used to transfer the extension headers, as described above. All fields in the IP6 header that an application might want to change (i.e. everything other than the version number) can be modified using ancillary data and/or socket options by the application for output. All fields in a received IP6 header (other than the version number and Next Header fields) and all extension headers are also made available to the application as ancillary data on input. Hence, there's no need for a socket option similar to the IPv4 IP_HDRINCL socket option. When writing to a raw socket, the socket manager automatically fragments the packet if the size exceeds the path MTU, inserting the required fragmentation headers. On input, the socket manager reassembles received fragments, so the reader of a raw socket never sees any fragment headers. Most IPv4 implementations give special treatment to a raw socket created with a third argument to socket() of IPPROTO_RAW, whose value is normally 255. We note that this value has no special meaning to an IP6 raw socket (and the IANA currently reserves the value of 255 when used as a next-header field). For ICMP6 raw sockets, the socket manager calculates and inserts the mandatory ICMP6 checksum. For other raw IP6 sockets (i.e. for raw IP6 sockets created with a third argument other than IPPROTO_ICMPV6), the application must: - Set the new IPV6_CHECKSUM socket option to have the socket manager compute and store a pseudo header checksum for output. - Verify the received pseudo header checksum on input, discarding the packet if the checksum is in error. This option prevents applications from having to perform source-address selection on the packets they send. The checksum incorporates the IP. Disabled means: - The socket manager won't calculate and store a checksum for outgoing packets. - The socket manager kernel won't verify a checksum for received packets. Based on: RFC 2553, RFC 2292, RFC 2460
http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/ip6_proto.html
CC-MAIN-2019-35
refinedweb
1,899
50.57
#include <stdio.h> int fscanf(FILE *restrict stream, const char *restrict format, ... ); int scanf(const char *restrict format, ... ); int sscanf(const char *restrict s, const char *restrict format, ... ); The fscanf() function shall read from the named input stream. The scanf() function shall read from the standard input stream stdin. The sscanf() function shall read from the string s. Each function reads bytes, interprets them according to a format, and stores the results in its arguments. Each expects, as arguments, a control string format described below, and a set of pointer arguments indicating where the converted input should be stored. The result is undefined if there are insufficient arguments for the format. If the format is exhausted while arguments remain, the excess arguments shall be evaluated}]. This feature provides for the definition of format strings that select arguments in an order appropriate to specific languages. In format strings containing the "%n$" form of conversion specifications, it is unspecified whether numbered arguments in the argument list can be referenced from the format string more than once. The format can contain either form of a conversion specification-that is, % or "%n$"-but the two forms cannot be mixed within a single format string. The only exception to this is that %% or %* can be mixed with the "%n$" form. When numbered argument specifications are used, specifying the Nth argument requires that all the leading arguments, from the first to the ( N-1)th, are pointers. The fscanf() function in all its forms shall allow detection of a language-dependent radix character in the input string. The radix character is defined in the program's locale (category LC_NUMERIC ). In the POSIX locale, or in a locale where the radix character is not defined, the radix character shall default to a period ( '.' ). The format is a character string, beginning and ending in its initial shift state, if any, composed of zero or more directives. Each directive is composed of one of the following: one or more white-space characters ( <space>s, <tab>s, <newline>s, <vertical-tab>s, or <form-feed>s); an ordinary character (neither '%' nor a white-space character); or a conversion specification. Each conversion specification is introduced by the character '%' or the character sequence "%n$", after which the following appear in sequence: The f characters shall be executed by reading input until no more valid input can be read, or up to the first byte which is not a white-space character, which remains unread. A directive that is an ordinary character shall be executed as follows: the next byte shall be read from the input and compared with the byte that comprises the directive; if the comparison shows that they are not equivalent, the directive shall fail, and the differing and subsequent bytes shall remain unread. Similarly, if end-of-file, an encoding error, or a read error prevents a character from being read, the directive shall fail. A directive that is a conversion specification defines a set of matching input sequences, as described below for each conversion character. A conversion specification shall be executed in the following steps. Input white-space characters (as specified by isspace() ) shall be skipped, unless the conversion specification includes a [ , c , C , or n conversion specifier. An item shall be read from the input, unless the conversion specification includes an n conversion specifier. An input item shall be defined as the longest sequence of input bytes (up to any specified maximum field width, which may be measured in characters or bytes dependent on the conversion specifier) which is an initial subsequence of a matching sequence. The first byte, if any, after the input item shall remain unread. If the length of the input item is 0, bytes) shall be converted to a type appropriate to the conversion character. If the input item is not a matching sequence, the execution of the conversion specification fails;ifiers are valid: Matches an optionally signed floating-point number, infinity, or NaN, whose format is the same as expected for the subject sequence of strtod(). In the absence of a size modifier, the application shall ensure that the corresponding argument is a pointer to float. If the fprintf() family of functions generates character string representations for infinity and NaN (a symbolic entity encoded in floating-point format) to support IEEE Std 754-1985, the fscanf() family of functions shall recognize them as input. If an l (ell) qualifier is present, the input is a sequence of characters that begins in the initial shift state. Each character. If an l (ell) qualifier is present, the input is a sequence of characters that begins in the initial shift state. Each character in the sequence. The conversion specification includes all subsequent bytes in the format string up to and including the matching right square bracket ( ']' ). The bytes between the square brackets (the scanlist) comprise the scanset, unless the byte after the left square bracket is a circumflex ( '^' ), in which case the scanset contains all bytes character, nor the second where the first character is a '^' , nor the last character, the behavior is implementation-defined. If an l (ell) qualifier is present, the input shall be a sequence of characters that begins in the initial shift state. Each character in the sequence resulting sequence of wide characters. No null wide character is added. If a conversion specification is invalid, the behavior is undefined. The conversion specifiers A , E , F , G , and X are also valid and shall be equivalent to a , e , f , g , and x , respectively. If end-of-file is encountered during input, conversion shall be terminated. If end-of-file occurs before any bytes matching the current conversion specification (except for %n ) have been read (other than leading white-space characters, sscanf() shall be equivalent to encountering end-of-file for fscanf(). If conversion terminates on a conflicting input, the offending input is left unread in the input. Any trailing white space (including <newline>s) shall be left unread unless matched by a conversion specification. The success of literal matches and suppressed assignments is only directly determinable via the %n conversion specification. The fscanf() and scanf() functions may mark the st_atime field of the file associated with stream for update. The st_atime field shall be marked for update by the first successful execution of fgetc(), fgets(), fread(), getc(), getchar(), gets(), fscanf(), or fscanf() functions fail and may fail, refer to fgetc() or fgetwc() . In addition, fscanf() may fail if: The following sections are informative. The call: int i, n; float x; char name[50]; n = scanf(") scanf("' . The following call uses fscanf() to read three floating-point numbers from standard input into the input array. float input[3]; fscanf (stdin, "%f %f %f", input, input+1, input+2); If the application calling fscanf() has any objects of type wint_t or wchar_t, it must also include the <wchar.h> header to have these objects defined. This function is aligned with the ISO/IEC 9899:1999 standard, and in doing so a few "obvious" things were not included. Specifically, the set of characters allowed in a scanset is limited to single-byte characters. In other similar places, multi-byte characters have been permitted, but for alignment with the ISO/IEC 9899:1999 standard, it has not been done here. Applications needing this could use the corresponding wide-character functions to achieve the desired results. None. getc() , printf() , setlocale() , strtod() , strtol() , strtoul() , wcrtomb() , the Base Definitions volume of IEEE Std 1003.1-2001, Chapter 7, Locale, <langinfo.h>, <stdio.h>, <wchar.h>
http://www.makelinux.net/man/3posix/S/sscanf
CC-MAIN-2013-20
refinedweb
1,257
51.48
Tutorial for Getting Data from DXF Files¶ In this tutorial I show you how to get data from an existing DXF drawing. At first load the drawing: import ezdxf dwg = ezdxf.readfile("your_dxf_file.dxf") See also Layouts¶ I use the term layout as synonym for an arbitrary entity space which can contain any DXF construction element like LINE, CIRCLE, TEXT and so on. Every construction element has to reside in exact one layout. There are three different layout types: - model space: this is the common construction place - paper space: used to to create printable drawings - block: reusable elements, every block has its own entity space A DXF drawing consist of exact one model space and at least of one paper space. The DXF12 standard has only one unnamed paper space the later DXF standards can have more than one paper space and each paper space has a name. Iterate over DXF Entities of a Layout¶ Iterate over all construction elements in the model space: modelspace = dwg.modelspace() for e in modelspace: if e.dxftype() == 'LINE': print("LINE on layer: %s\n" % e.dxf.layer) print("start point: %s\n" % e.dxf.start) print("end point: %s\n" % e.dxf.end) All layout objects supports the standard Python iterator protocol and the in operator. Access DXF Attributes of an Entity¶ Check the type of an DXF entity by e.dxftype(). The DXF type is always uppercase. All DXF attributes of an entity are grouped in the namespace e.dxf: e.dxf.layer # layer of the entity as string e.dxf.color # color of the entity as integer See common DXF attributes: If a DXF attribute is not set (a valid DXF attribute has no value), a ValueError will be raised. To avoid this use the GraphicEntity.get_dxf_attrib() method with a default value: p = e.get_dxf_attrib('paperspace', 0) # if 'paperspace' is left off, the entity defaults to model space An unsupported DXF attribute raises an AttributeError. Getting a Paper Space¶ paperspace = dwg.layout('layout0') Retrieves the paper space named layout0, the usage of the layout object is the same as of the model space object. The DXF12 standard provides only one paper space, therefore the paper space name in the method call dwg.layout(‘layout0’) is ignored or can be left off. For the later standards you get a list of the names of the available layouts by Drawing.layout_names(). Iterate Over All DXF Entities at Once¶ Because the DXF entities of the model space and the entities of all paper spaces are stored in the ENTITIES section of the DXF drawing, you can also iterate over all drawing elements at once, except the entities placed in the block layouts: for e in dwg.entities: print("DXF Entity: %s\n" % e.dxftype()) Retrieve Entities by Query Language¶ Inspired by the wonderful jQuery framework, I created a flexible query language for DXF entities. To start a query use the Layout.query() method, provided by all sort of layouts or use the ezdxf.query.new() function. The query string is the combination of two queries, first the required entity query and second the optional attribute query, enclosed in square brackets: 'EntityQuery[AttributeQuery]' The entity query is a whitespace separated list of DXF entity names or the special name *. Where * means all DXF entities, all other DXF names have to be uppercase. The attribute query is used to select DXF entities by its DXF attributes. The attribute query is an addition to the entity query and matches only if the entity already match the entity query. The attribute query is a boolean expression, supported operators: and, or, !. See also Get all LINE entities from the model space: modelspace = dwg.modelspace() lines = modelspace.query('LINE') The result container also provides the query() method, get all LINE entities at layer construction: construction_lines = lines.query('*[layer=="construction"]') The * is a wildcard for all DXF entities, in this case you could also use LINE instead of *, * works here because lines just contains entities of DXF type LINE. All together as one query: lines = modelspace.query('LINE[layer=="construction"]') The ENTITIES section also supports the query() method: all_lines_and_circles_at_the_construction_layer = dwg.entities.query('LINE CIRCLE[layer=="construction"]') Get all model space entities at layer construction, but no entities with the linestyle DASHED: not_dashed_entities = modelspace.query('*[layer=="construction" and linestyle!="DASHED"]') Default Layer Settings¶ See also Tutorial for Layers and class Layer
http://ezdxf.readthedocs.io/en/latest/tutorials/getting_data.html
CC-MAIN-2018-13
refinedweb
729
55.95
Guest is about 5.67E-8 W/m2 per K4 and ε is the emissivity which is 1 for an ideal black body radiator and somewhere between 0 and 1 for a non ideal system also called a gray body. Wikipedia defines a Stefan-Boltzmann gray body as one “that does not absorb all incident radiation” although it doesn’t specify what happens to the unabsorbed energy which must either be reflected, passed through or do work other than heating the matter. This is a myopic view since the Stefan-Boltzmann Law is equally valid for quantifying a generalized gray body radiator whose source temperature is T and whose emissions are attenuated by an equivalent emissivity. To conceptualize a gray body radiator, refer to Figure 1 which shows an ideal black body radiator whose emissions pass through a gray body filter where the emissions of the system are observed at the output of the filter. If T is the temperature of the black body, it’s also the temperature of the input to the gray body, thus Equation 1 still applies per Wikipedia’s over-constrained definition of a gray body. The emissivity then becomes the ratio between the energy flux on either side of the gray body filter. To be consistent with the Wikipedia definition, the path of the energy not being absorbed is omitted. A key result is that for a system of radiating matter whose sole source of energy is that stored as its temperature, the only possible way to affect the relationship between its temperature and emissions is by varying ε since the exponent in T4 and σ are properties of immutable first principles physics and ε is the only free variable. The units of emissions are Watt/meter2 and one Watt is one Joule per second. The climate system is linear to Joules meaning that if 1 Joule of photons arrives, 1 Joule of photons must leave and that each Joule of input contributes equally to the work done to sustain the average temperature independent of the frequency of the photons carrying that energy. This property of superposition in the energy domain is an important, unavoidable consequence of Conservation of Energy and often ignored. The steady state condition for matter that’s both absorbing and emitting energy is that it must be receiving enough input energy to offset the emissions consequential to its temperature. If more arrives than is emitted, the temperature increases until the two are in balance. If less arrives, the temperature decreases until the input and output are again balanced. If the input goes to zero, T will decay to zero. Since 1 calorie (4.18 Joules) increases the temperature of 1 gram of water by 1C, temperature is a linear metric of stored energy, however; owing to the T4 dependence of emissions, it’s a very non linear metric of radiated energy so while each degree of warmth requires the same incremental amount of stored energy, it requires an exponentially increasing incoming energy flux to keep from cooling. The equilibrium climate sensitivity factor (hereafter called the sensitivity) is defined by the IPCC as the long term incremental increase in T given a 1 W/m2 increase in input, where incremental input is called forcing. This can be calculated for emitting matter in LTE by differentiating the Stefan-Boltzmann Law with respect to T and inverting the result. The value of dT/dP has the required units of degrees K per W/m2 and is the slope of the Stefan-Boltzmann relationship as a function of temperature given as, 2) dT/dP = (4εσT3)-1 A black body is nearly an exact model for the Moon. If P is the average energy flux density received from the Sun after reflection, the average temperature, T, and the sensitivity, dT/dP can be calculated exactly. If regions of the surface are analyzed independently, the average T and sensitivity for each region can be precisely determined. Due to the non linearity, it’s incorrect to sum up and average all the T’s for each region of the surface, but the power emitted by each region can be summed, averaged and converted into an equivalent average temperature by applying the Stefan-Boltzmann Law in reverse. Knowing the heat capacity per m2 of the surface, the dynamic response of the surface to the rising and setting Sun can also be calculated all of which was confirmed by equipment delivered to the Moon decades ago and more recently by the Lunar Reconnaissance Orbiter. Since the lunar surface in equilibrium with the Sun emits 1 W/m2 of emissions per W/m2 of power it receives, its surface power gain is 1.0. In an analytical sense, the surface power gain and surface sensitivity quantify the same thing, except for the units, where the power gain is dimensionless and independent of temperature, while the sensitivity as defined by the IPCC has a T-3 dependency and which is incorrectly considered to be approximately temperature independent. A gray body emitter is one where the power emitted is less than would be expected for a black body at the same temperature. This is the only possibility since the emissivity can’t be greater than 1 without a source of power beyond the energy stored by the heated matter. The only place for the thermal energy to go, if not emitted, is back to the source and it’s this return of energy that manifests a temperature greater than the observable emissions suggest. The attenuation in output emissions may be spectrally uniform, spectrally specific or a combination of both and the equivalent emissivity is a scalar coefficient that embodies all possible attenuation components. Figure 2 illustrates how this is applied to Earth, where A represents the fraction of surface emissions absorbed by the atmosphere, (1 – A) is the fraction that passes through and the geometrical considerations for the difference between the area across which power is received by the atmosphere and the area across which power is emitted are accounted for. This leads to an emissivity for the gray body atmosphere of A and an effective emissivity for the system of (1 – A/2). The average temperature of the Earth’s emitting surface at the bottom of the atmosphere is about 287K, has an emissivity very close to 1 and emits about 385 W/m2 per Equation 1. After accounting for reflection by the surface and clouds, the Earth receives about 240 W/m2 from the Sun, thus each W/m2 of input contributes equally to produce 1.6 W/m2 of surface emissions for a surface power gain of 1.6. Two influences turn 240 W/m2 of solar input into 385 W/m2 of surface output. First is the effect of GHG’s which provides spectrally specific attenuation and second is the effect of the water in clouds which provides spectrally uniform attenuation. Both warm the surface by absorbing some fraction of surface emissions and after some delay, recycling about half of the energy back to the surface. Clouds also manifest a conditional cooling effect by increasing reflection unless the surface is covered in ice and snow when increasing clouds have only a warming influence. Consider that if 290 W/m2 of the 385 W/m2 emitted by the surface is absorbed by atmospheric GHG’s and clouds (A ~ 0.75), the remaining 95 W/m2 passes directly into space. Atmospheric GHG’s and clouds absorb energy from the surface, while geometric considerations require the atmosphere to emit energy out to space and back to the surface in roughly equal proportions. Half of 290 W/m2 is 145 W/m2 which when added to the 95 W/m2 passed through the atmosphere exactly offsets the 240 W/m2 arriving from the Sun. When the remaining 145 W/m2 is added to the 240 W/m2 coming from the Sun, the total is 385 W/m2 exactly offsetting the 385 W/m2 emitted by the surface. If the atmosphere absorbed more than 290 W/m2, more than half of the absorbed energy would need to exit to space while less than half will be returned to the surface. If the atmosphere absorbed less, more than half must be returned to the surface and less would be sent into space. Given the geometric considerations of a gray body atmosphere and the measured effective emissivity of the system, the testable average fraction of surface emissions absorbed, A, can be predicted as, 3) A = 2(1 – ε) Non radiant energy entering and leaving the atmosphere is not explicitly accounted for by the analysis, nor should it be, since only radiant energy transported by photons is relevant to the radiant balance and the corresponding sensitivity. Energy transported by matter includes convection and latent heat where the matter transporting energy can only be returned to the surface, primarily by weather. Whatever influences these have on the system are already accounted for by the LTE surface temperatures, thus their associated energies have a zero sum influence on the surface radiant emissions corresponding to its average temperature. Trenberth’s energy balance lumps the return of non radiant energy as part of the ‘back radiation’ term, which is technically incorrect since energy transported by matter is not radiation. To the extent that latent heat energy entering the atmosphere is radiated by clouds, less of the surface emissions absorbed by clouds must be emitted for balance. In LTE, clouds are both absorbing and emitting energy in equal amounts, thus any latent heat emitted into space is transient and will be offset by more surface energy being absorbed by atmospheric water. The Earth can be accurately modeled as a black body surface with a gray body atmosphere, whose combination is a gray body emitter whose temperature is that of the surface and whose emissions are that of the planet. To complete the model, the required emissivity is about 0.62 which is the reciprocal of the surface power gain of 1.6 discussed earlier. Note that both values are dimensionless ratios with units of W/m2 per W/m2. Figure 3 demonstrates the predictive power of the simplest gray body model of the planet relative to satellite data. Figure 3. The green line is the Stefan-Boltzmann gray body model with an emissivity of 0.62 plotted to the same scale as the data. Even when compared against short term monthly averages, the data closely corresponds to the model. An even closer match to the data arises when the minor second order dependencies of the emissivity on temperature are accounted for,. The biggest of these is a small decrease in emissivity as temperatures increase above about 273K (0C). This is the result of water vapor becoming important and the lack of surface ice above 0C. Modifying the effective emissivity is exactly what changing CO2 concentrations would do, except to a much lesser extent, and the 3.7 W/m2 of forcing said to arise from doubling CO2 is the solar forcing equivalent to a slight decrease in emissivity keeping solar forcing constant. Near the equator, the emissivity increases with temperature in one hemisphere with an offsetting decrease in the other. The origin of this is uncertain but it may be an anomaly that has to do with the normalization applied to use 1 AU solar data which can also explain some other minor anomalous differences seen between hemispheres in the ISCCP data, but that otherwise average out globally. When calculating sensitivities using Equation 2, the result for the gray body model of the Earth is about 0.3K per W/m2 while that for an ideal black body (ε = 1) at the surface temperature would be about 0.19K per W/m2, both of which are illustrated in Figure 3. Modeling the planet as an ideal black body emitting 240 W/m2 results in an equivalent temperature of 255K and a sensitivity of about 0.27K per W/m2 which is the slope of the black curve and slightly less than the equivalent gray body sensitivity represented as a green line on the black curve. This establishes theoretical possibilities for the planet’s sensitivity somewhere between 0.19K and 0.3K per W/m2 for a thermodynamic model of the planet that conforms to the requirements of the Stefan-Boltzmann Law. It’s important to recognize that the Stefan-Boltzmann Law is an uncontroversial and immutable law of physics, derivable from first principles, quantifies how matter emits energy, has been settled science for more than a century and has been experimentally validated innumerable times. A problem arises with the stated sensitivity of 0.8C +/- 0.4C per W/m2, where even the so called high confidence lower limit of 0.4C per W/m2 is larger than any of the theoretical values. Figure 3 shows this as a blue line drawn to the same scale as the measured (red dots) and modeled (green line) data. One rationalization arises by inferring a sensitivity from measurements of adjusted and homogenized surface temperature data, extrapolating a linear trend and considering that all change has been due to CO2 emissions. It’s clear that the temperature has increased since the end of the Little Ice Age, which coincidently was concurrent with increasing CO2 arising from the Industrial Revolution, and that this warming has been a little more than 1 degree C, for an average rate of about 0.5C per century. Much of this increase happened prior to the beginning the 20’th century and since then, the temperature has been fluctuating up and down and as recently as the 1970’s, many considered global cooling to be an imminent threat. Since the start of the 21’st century, the average temperature of the planet has remaining relatively constant, except for short term variability due to natural cycles like the PDO. A serious problem is the assumption that all change is due to CO2 emissions when the ice core records show that change of this magnitude is quite normal and was so long before man harnessed fire when humanities primary influences on atmospheric CO2 was to breath and to decompose. The hypothesis that CO2 drives temperature arose as a knee jerk reaction to the Vostok ice cores which indicated a correlation between temperature and CO2 levels. While such a correlation is undeniable, newer, higher resolution data from the DomeC cores confirms an earlier temporal analysis of the Vostok data that showed how CO2 concentrations follow temperature changes by centuries and not the other way around as initially presumed. The most likely hypothesis explaining centuries of delay is biology where as the biosphere slowly adapts to warmer (colder) temperatures as more (less) land is suitable for biomass and the steady state CO2 concentrations will need to be more (less) in order to support a larger (smaller) biomass. The response is slow because it takes a while for natural sources of CO2 to arise and be accumulated by the biosphere. The variability of CO2 in the ice cores is really just a proxy for the size of the global biomass which happens to be temperature dependent. The IPCC asserts that doubling CO2 is equivalent to 3.7 W/m2 of incremental, post albedo solar power and will result in a surface temperature increase of 3C based on a sensitivity of 0.8C per W/m2. An inconsistency arises because if the surface temperature increases by 3C, its emissions increase by more than 16 W/m2 so 3.7 W/m2 must be amplified by more than a factor of 4, rather than the factor of 1.6 measured for solar forcing. The explanation put forth is that the gain of 1.6 (equivalent to a sensitivity of about 0.3C per W/m2) is before feedback and that positive feedback amplifies this up to about 4.3 (0.8C per W/m2). This makes no sense whatsoever since the measured value of 1.6 W/m2 of surface emissions per W/m2 of solar input is a long term average and must already account for the net effects from all feedback like effects, positive, negative, known and unknown. Another of the many problems with the feedback hypothesis is that the mapping to the feedback model used by climate science does not conform to two important assumptions that are crucial to Bode’s linear feedback amplifier analysis referenced to support the model. First is that the input and output must be linearly related to each other, while the forcing power input and temperature change output of the climate feedback model are not owing to the T4 relationship between the required input flux and temperature. The second is that Bode’s feedback model assumes an internal and infinite source of Joules powers the gain. The presumption that the Sun is this source is incorrect for if it was, the output power could never exceed the power supply and the surface power gain will never be more than 1 W/m2 of output per W/m2 of input which would limit the sensitivity to be less than 0.2C per W/m2. Finally, much of the support for a high sensitivity comes from models. But as has been shown here, a simple gray body model predicts a much lower sensitivity and is based on nothing but the assumption that first principles physics must apply, moreover; there are no tuneable coefficients yet this model matches measurements far better than any other.casting and forecasting. The results of this analysis explains the source of climate science skepticism, which is that IPCC driven climate science has no answer to the following question: What law(s) of physics can explain how to override the requirements of the Stefan-Boltzmann Law as it applies to the sensitivity of matter absorbing and emitting energy, while also explaining why the data shows a nearly exact conformance to this law?. 3) Bode H, Network Analysis and Feedback Amplifier Design assumption of external power supply and linearity: first 2 paragraphs of the book 4) Manfred Mudelsee, The phase relations among atmospheric CO content, temperature and global ice volume over the past 420 ka, Quaternary Science Reviews 20 (2001) 583-589 5) Jouzel, J., et al. 2007: EPICA Dome C Ice Core 800KYr Deuterium Data and Temperature Estimates. 6) ISCCP Cloud Data Products: Rossow, W.B., and Schiffer, R.A., 1999: Advances in Understanding Clouds from ISCCP. Bull. Amer. Meteor. Soc., 80, 2261-2288. 7) “Diviner Lunar radiometer Experiment” UCLA, August, 2009 782 thoughts on “Physical Constraints on the Climate Sensitivity” I’m particularly interested in answers to the question posed at the end of the article. George There is no need explain an overriding of the law because there is no need to do so. The observed increase in temperature from a perfect black body to where we are today is entirely consistent with the law and can be estimated by anyone who has finished a second year heat transfer course. An exact calculation is more complex, but not beyond your average graduate mechanical engineer. And the same is true for a non ideal black body, also called a gray body. Unfortunately, consensus climate science fails to make this connection. They simply can’t connect the dots between the sensitivity of the gray body model and the claimed sensitivity which differ by about a factor of 4. The simple answer is probably that the Stefan Boltzmann law only applies to bodies in thermal equilibrium. As long as the concentrations of CO2 are changing the earth is storing energy and will continue to do so for several thousand years after CO2 levels stabilise (due to energy being stored in the ocean). It should be also be pointed out that that neither Fig. 1 or Fig. 2 conserve energy. In each case there is energy missing meaning that the analysis is wrong. Germinio, “As long as the concentrations of CO2 are changing …” The planet has completely adapted to all prior CO2 emissions, except perhaps some of the emissions in the last 8-12 months. If the climate changed as slowly as it would need to for your hypothesis to be valid, we would not even notice seasonal change, nor would hemispheric temperature vary by as much as 12C every 12 months, nor would the average temperature of the planet vary by as much as 3C during any 12 month period. No. It just means that the earth has a fast and a slow response to any perturbations. Both together need to be considered before any claims that the earth is in thermal equilibrium and that the Stefan Boltzmann law can be applied. Earth rotates. So it never ever will be in thermal equilibrium. PS I agree with your assertion as to the necessity for equilibrium. It is not sufficient. SB also assumes it is isothermal. Well silly me, so does thermal equilibrium require isothermality. G CART BEFORE HORSE? Hi again Michael, Its not normal science. Its post normal science. The key characteristic of post normal science is to question the certainty of normal science. Its a reversal of the burden proof regarding our freedom to do things without first proving no harm. The pressure on this is occurring on every farm, home, beach, city in the world. Thus any amount of normal science suggesting that it is not likely that CO2 is a problem is going to be inadequate. The “sandpile” theory of Al Gore is the operative principle here. Catastrophe always results from piling sand too high. The fact that climate science fails is irrelevant. You must keep feeding the machine until they get it right. And of course they will never get it right because all the models will continue to feature CO2 as the operative principle of the greenhouse effect as they did in AR5 after the science opinion changed from all the warming to half the warming. The models continue to push for all. There are no science arguments to change this. The change can only occur politically and via retaining our culture of individual initiative. Uh, those laws would be: 1) The law of unethical practitioner (given an accurate & accepted law of physics plus an unethical practitioner, results are unpredictable, usually catastrophically so) 2) The law of money (If you got money, I want some. When dealing with an an unethical practitioner, results are unpredictable, usually catastrophically so) 3) Stupid people (ok, those lacking minimal scientific training) can be tricked into believing stupid things. When manipulated by an unethical practitioner, results are unpredictable, usually catastrophically so) You forgot the power law (I want to control you. When dealing with sheeple, results are predictable, they will worship and follow you even into catastrophes of their own doing.) So many mistakes in this I don’ t know where to start. If anyone wants an excellent and complete and relatively simple (for such a complicated concept) discussion of the science of CO2. I suggest taking Steve McIntyre’s advice and go visit scienceofdoom. This article isn’t sky dragons, but it is close. If you think there are so many errors, pick one and I’ll tell you why it’s not an error and we can go on to the next one. Better yet, answer the question. Figure one conflates absorptivity with emissivity. As drawn the proper coefficient is alpha, not epsilon. Though absorptivity is a function of emissivity, it isn’t the same thing and your figure is mistaken. I won’t carry on pointing out your other errors, minor and major. I’ve answered the question separately. Figure 1 places a wikipedia defined black body as its source and a wikipedia defined gray body between the black body and where the output is observed. If you keep reading and go on to figure 2, you will see a more proper diagram where the equivalence between atmospheric absorption and effective emissivity of the gray body model are related. This is just another model and best practices for developing a model is to represent behavior in the simplest way possible. This way, there are fewer possibilities to make errors. Uhhh. Figure two is algebraically identical to figure one and still conflates emissivity with absorptivity. There is no conflation, although absorption and the EFFECTIVE emissivity of the gray body model are related to each other through equation 3. I got lost at Fig. 1. A black body source emits radiation – OK. A gray body filter absorbs it .. that’s only a half of the story, it also emits radiation back. You have to include this effect. Curious George, Yes, you are correct and that point is addressed in Figure 2. Figure 1 simply uses the Wikipedia definitions of a black body and a gray body (one that doesn’t absorb all of the incident energy) to show how even the constrained Wikipedia definition of a gray body is just as valid for a gray body radiator and its this gray body radiator model that closely approximates how the climate system responds to incident energy (forcing), from which the sensitivity can be calculated exactly. I now look at Figure 2, assuming that the “Gray body atmosphere” is the “Gray body filter” of Fig. 1. In order to absorb all of the Black Body radiation, the Gray Body Filter would have to be black. I have a feeling that you have a real message, but it needs work. In this form it does not get to me. Curious George, The gray body atmosphere absorbs A, passes (1-A) and redistributes A half into space and half back to the surface. The ‘grayness’ is manifested by the (1 – A) fraction that is passed through. This is the unabsorbed energy the wikipedia definition of a gray body fails to account for. There is a box in the middle with an A. On the left there is Ps=σT^4. This is correct. On the right there are three equations with two arrows. The equation Po=Ps(1-A/2) is identical to Po=εσT^4, given that you have defined ε=(1-A/2) . This is wrong. For one thing, T atmosphere is not the same as T surface. Also, the transmitted energy is a function of absorptivity, not emissivity. The correct equation is Po=ασT^4. σ is not equal to ε. You are conflating emissivity and absorptivity. If we take the temperature of the gray body “surface” as T2, The what you are showing as Ps(A/2) is actually εσT2^4, but you have shown it to be εσT^4. T is not equal to T2. I could go on, but I won’t. John, T atmosphere is irrelevant to this model. Only T surface matters. Beside, other than clouds and GHG’s, the emissivity of the atmosphere (O2 and N2) is approximately zero, so its kinetic temperature, that is, its temperature consequential to translational motion, is irrelevant to the radiative balance and corresponding sensitivity. You might also be missing the fact that the (1-A)Ps term is the power not absorbed by the gray body atmosphere, per the Wikipedia definition of a gray body (see the dotted line?). “Figure one conflates absorptivity with emissivity. As drawn the proper coefficient is alpha, not epsilon. ” Except – “To be consistent with the Wikipedia definition, the path of the energy not being absorbed is omitted.” so Figure 1 merely shows a blackbody has epsilon =1 and between 0 and 1 for the gray body. No conflating at all. Looks like you were just desperate to write “So many mistakes in this I don’ t know where to start.” rather than a honest mistake. I don’t have my glasses with me so I’ll refrain from giving it a thumbs up and i suggest that you give it a more thorough read before giving it a thumbs down. John, Could you recommend a video (30 to 45 min.) explaining the science and ramifications of CAGW? I have Dr. Dressler’s debate with Dr. Lindzen, but it’s a little old. I’d like your take on good video explaining CAWG. Robert Essenhigh developed a quantitative thermodynamic model of the atmosphere’s lapse rate based on the Stephan Boltzmann).” Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S-S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions. Energy & Fuels, (2006) Vol 20, pp 1057-1067. Cited by How does this apply here? The only temperature in the model is the surface temperature which is at 1 ATM is still subject to the T^4 relationship. The model doesn’t care about how energy is redistributed throughout the atmosphere, just about how that energy is quantified at the boundaries and that from a macroscopic point of view of those boundaries, not only does it behave like a gray body, it must. co2isnotevil. Essenhigh’s equations enable validating and extending White’s model. Earth’s average black body radiation temperature is not from at surface but in the atmosphere. White states: Essenhigh calculates temperature and pressure with elevation. He includes average absorption/emission of H2O and CO2 as the two primary greenhouse gases: From these, a detailed thermodynamic climate sensitivity could calculated from Essenhigh’s equations. George says with regard to incoming energy : “If more arrives than is emitted, the temperature increases until the two are in balance.” … This is not necessarily true, especially when considering what happens when the incoming energy melts ice or evaporates water. The temperature remains constant while energy is absorbed, until the ice completely melts, or the water completely evaporates. Only after melting or evaporation ends can the temperature of the remaining mass begin to increase. Since there is both a lot of ice, and a lot of water on the planet earth, this presents a problem with this over-simplified model of the temperature response of our planet to incoming energy from the sun. Rob, Consider the analysis to be an LTE analysis averaged across decades or more. The seasonal formation and melting of ice, evaporation of water and condensation as rain all happens in approximately equal and opposite amounts and more or less cancel. Any slight imbalance is too far in the noise to be of any appreciable impact. There’s also incoming energy turned into work that’s not heat. Consider the origin of hydroelectric power, although it eventually turns into heat when you turn on your toaster. You have a point there co2isnotevil, consider the electromagnetic emissions visible in this picture. They do not follow the Stf-Boltz temperature relationship. They are not toasters, but a lot of sodium vapor lamps. Even LED’s emit heat, but isn’t the light still just photons leaving the planet? Sodium vapor lamps and LEDs do not produce photons like an incandescent lamp. Since an incandescent lamp is using heat to generate the photons, it follows the Stf-Bltz equations. Yes the sodium vapor lamps and LEDs produce small amounts of heat, but they are not using heat to generate the photons they emit. So the emissions you see in the picture, being mostly sodium vapor lamps and powered by a hydroelectric dam, would not follow the Stf-Bltz law. Rob, So, the biggest anthropogenic influence by man is emitting light into space (Planck spectrum or not) which means that less LWIR must leave for balance and the surface cools. Before man, the biggest influence came from fireflies. I think you’re confusing whether its a Planck spectrum or not with whether or not its emitted energy must conform to the SB Law. Consider that the clear sky emissions of the planet have a color temperature representing the surface temperature, but have an SB equivalent temperature that is lower owing to attenuation in GHG absorption bands. In effect, we can consider a sodium lamp (or even a laser) a gray body emitter with lots of bandwidth completely attenuated from its spectrum accompanied with broad band attenuation making it seem the proper distance away such that the absolute energy emitted by the lamp measured at some specific distance matches what would be expected based on the color temperature of the lamp. You missed the point co2isnotevil. The Stf-Blz analysis is inappropriate for the earth system, because there are numerous ways that incoming solar energy is stored/distributed on Earth than is reflected by a temperature differential. My point is that the analysis in this article neglects important details that make the analysis invalid. Rob, My point is that the exceptions are insignificant, relative to the required macroscopic behavior. Biology consumes energy as well and turns it into biomass. But you add all this up and you will be hard pressed to find more than 1%. Consider this co2isnotevil: The ” ε ” value for the Earth is not constant, but is a non-linear function of T. The best example would be comparing the ” ε ” value for Snowball Earth, versus the ” ε ” for Waterworld. Rob, Absolutely the emissivity is a function of T and here it that function: None the less, in LTE and averaged across the planet, it has an average value and that’s all I’m considering here. The only sensitivity that matters is the long term change in long term averages. Because my analysis emphasizes sensitivity in the energy domain (ratios of power densities), rather than the temperature domain (IPCC sensitivity), the property of superposition makes averages more meaningful. You can also look here to see other relationships between the variables provided by and derived from the ISCCP cloud data set. Of particular interest is the relationship between post albedo input power and the surface temperature, whose slope is about 0.2C per W/m^2. Where this crosses with the relationship between planet emissions and temperature is where the average is. “Biology consumes energy as well and turns it into biomass.” co2isnotevil, how much energy is “consumed” by increasing the volume of the atmosphere? Warmed gases expand, yes? It’s something I’ve not seen addressed, though maybe I missed it. mellyrn, “Warmed gases expand, yes?” Yes, warmed gases expand and do work against gravity, but it’s not enough to be significant relative to the total energies involved. What a load of complications you present. Lapse rate, can you explain it? Why is the stratosphere, well, stratified? How about that pesky lapse rate back at its shenanigans in the mesosphere? And then stratification again in the thermosphere? These questions persist because some think they know the answer but have not questioned assumptions. Just like assuming no bacteria could live at a pH of under 1 and with all sorts of digestive enzymes. ..helicobacter pylon ring a bell? Keith, “Lapse rate, can you explain it?” Gravity. None the less, as I keep trying to say, what happens inside the atmosphere is irrelevant to the model. This is a model of the transfer function between surface temperature and planet emissions. The atmosphere is a black box characterized by the behavior at its boundaries. As long as the model matches at the boundaries, how those boundaries get into the state they are in makes no difference. This is standard best practices when it comes to reverse engineering unknown systems. Anyone who thinks that the complications within the atmosphere have any effect, other than affecting the LTE surface temperature which is already accounted for by the analysis, is over thinking the problem. Part of the problem is that consensus climate science adds a lot of unnecessary complication and obfuscation to framing the problem. Many are bamboozled by the complexity which blinds them to the elegant simplicity of macroscopic behavior conforming to macroscopic physical laws. No they are not insignificant, they’re the cause of the changing emissivity in your graph. It is sign of regulation. micro6500, “they’re the cause of the changing emissivity in your graph.” I’ve identified the largest deviation (at least the one around 273K) as the consequence of the water vapor GHG effect ramping up and not as the result of the latent heat consequential to a phase change. The former represents a change to the system, while the later represents an energy flux that the system responds to. Keep in mind that the gray body model is a model of the transfer function that quantifies the causality between the behavior at the top of the atmosphere and the bottom. This transfer function is dependent on the system, and not the specific energy fluxes and at least per the IPCC, the sensitivity is defined by the relationship between the top (forcing) and bottom of the atmosphere (surface temp). I understand. I’m just pointing out that there is a physical reason for emissivity to be changing, it is the atm adapting to the differing ratios of humidity and temperature as you sweep from equator to pole and the day to day swings in temp (which everyone seems to want to toss out!). The big dips are where the limits of the regulation are reached because you’ve hit the min and max temps of your working “fluid”. But in between, you’ve seeing the blend of 2 emissivity rates getting averaged. Do all of the measurements line up on a emissivity line in Fig 3? So what I haven’t solved is the temp/humidity map that defines outoging average radiation for all conditions of humidity under clear skies. In the same black box fashion, if you have an equation that defines that line in Fig 3 (instead of an exp regression of the data points), a physical equation based on this changing ratio, would have to have the same answer, right. micro6500, “if you have an equation that defines that line in Fig 3” The green line in Figure 3 is definitely not a regression of the data, but the exact relationship given by the SB equation with an emissivity of 0.62 (power on X axis, temp on Y axis). It’s equation 1 in the post. So the average of this is about e=.62? Yes, the average EQUIVALENT emissivity is about 0.62. To be clear, this is related to atmospheric absorption by equation 3 and atmospheric absorption can be calculated with line by line simulations which gets approximately the same value of A corresponding to an emissivity of 0.62 (within the precision of the data). So in effect, both absorption (emissivity of the gray body atmosphere) and the effective emissivity of the system can be measured and/or calculated to cross check each other. Rob, you are attempting to apply local physical conditions to a global radiation model of limits on the radiation. The energy that goes to melting ice or evaporating water stays in the system, without changing the system temperature until it affects one or both of the physical boundaries- the surface or the upper atmosphere emissions. Seeing that oceans comprise almost 70% of the surface of the planet, you cannot call them “local.” Condensation happens around 18000 feet above MS on average. That corresponds to the halfway point on atmospheric mass distribution. It is also where flight levels start in the US because barometric altimetry gets dicey and one must rely on in route ATC to maintain separation …enough aviation, back to meat and taters. Average precipation is about 34″ rain per year. The enthalpy escapes sensible quantification via thermometry but once at 18,000 feet, it heats the upper troposphere and even some of the coldest layers of the stratosphere where it RISES… This is not quite true. Ice colder than the melting point will warm. Evaporation of water will only change if it warms (for a given humidity). The cooling effect of the evaporation will reduce the warming but not eliminate it. This misunderstanding is behind the reason the large negative feedback effect of evaporation cooling is largely ignored. Latent heat is moved from the surface (mostly the oceans) to the clouds when it condenses.and part is radiated to space from cloud tops. Richard, Covered this in a previous thread, but the bottom line is that the sensitivity and this model is all about changes to long term averages that are multiples of years. Ice formation and ice melting as well as water evaporation and condensing into rain happens in nearly equal and opposite amounts and any net difference is negligible relative to the entire energy budget integrated over time. , The atmosphere is a thin shell, at least relative to the BB surface beneath it. You should also look at the measured emission spectrum of the planet. Wavelengths of photons emitted by the surface that would be 100% absorbed show significant energy from space, even in the clear sky. In fact, the nominal attenuation is about 3db less then it would be without absorption lines. George, The atmosphere is optically thick at the frequencies that matter. Mean free path for photons can be tens of metres. But the more important issue is temperature gradient. You want to use S-B; what is T? It varies hugely through this “thin shell”. NIck, The atmosphere is optically thick to the relevant wavelengths only when clouds are present, but not the emissions of the clouds themselves. The clear sky lets about half of all the energy emitted by the surface pass into space without being absorbed by a GHG and more than half of the emissions by clouds owing to less water vapor between cloud tops and space. The nominal attenuation in saturated absorption bands is only about 3db (50%) owing to the nominal 50/50 split of absorbed energy. The atmospheric temperature gradient is irrelevant for the reasons I cited earlier. The model is only concerned with the relationship between the energy flux at the top and bottom of the atmosphere. How that measured and modelled relationship is manifested makes no difference. Being Canadian, I have to say. . . . Eh? Are you suggesting that the direction of radiation from any particular particle is not completely random? Given that the energy emitted by the heated particles decreases with temperature and temperature decreases with altitude, I can’t see how emissions are preferentially directed downward. The hottest stuff is the lowest. Heat moves from hot to cool. The heat move up, not down. As do the emissions. Emissive power decreases with temperature. For any particular molecule, the odds that the energy will go to space are the same as the odds it will go to ground. I’m missing something Nick. Nick. Never mind. I see it. For others. Consider a co2 molecule at 10 meters. It gets hit by a photon from the surface. It can radiate the energy from that photon in any direction. Now consider a molecule at twenty meters. It too gets hit by a photon from the surface. It is also possible for that molecule to get hit by the photon emitted by the molecule at 10 meters. There are more molecules at 10 meters than at twenty, so there is more emission downwards. Over 10’s of meters this is hard to measure. Over 10 kilometres, a bit less. Of course the odds of the molecule at 20 meters seeing a photon are less because some of those were absorbed at 10 meters. Also, the energy of the photons emitted by the molecules at 10 meters is lower because the temperature is lower. Have you done the math Nick? Is it a wash, or is there more downward emission? John, The density profile doesn’t really matter because the ‘excess’ emission downward are still subject to absorption before they get to the surface and upward emissions have a lower probability of being absorbed, Also, as I talked about in the article, if the atmosphere absorbs more than about 75% of the surface emissions, then less than half is returned to the surface. If the atmosphere absorbs less than 75% of the surface emissions, then more than half must be returned to the surface. My line by line simulations of a standard atmosphere with average clouds gets a value of A about 74.1%, so perhaps slightly more than half is returned to the surface, but it’s within the margin of error. Two different proxies I’ve developed from ISCCP data show this ratio to bounce around 50/50 by a couple of percent. John, a photon at 15 μ carries the same energy regardless of the bulk temperature of the gas. The energy increases directly with the frequency. Due to collisions some molecules always have a higher energy and can emit a photon. The frequency of the photon depends on what is emitting the photon and how the energy is distributed among the electrons in the molecule or atom. The energy of the photon doesn’t depend on the temperature, but the number emitted/volume does. Does this evolve the atm conditions second by second? If it’s just a static snapshot it is meaning less. “Does this evolve the atm conditions second by second?” Not necessary, but is based on averages of data sampled at about 4 hour intervals for 3 decades. Sensitivity represents a change in long term averages and that is all we should care about when considering what the sensitivity actually is. Then it’s wrong, the outgoing cooling rate changes at night as air temps near dew point, it is not static. You can not just average this into a “picture” of what’s happening. This is another reason the results are so wrong. micro6500, “You can not just average …” Without understanding how to properly calculate averages, any quantification of the sensitivity is meaningless and quantifying the sensitivity is what this is all about. Actually sensitivity has to be very low, Min temps are only very minimally effected by co2, it’s 98-99% WV. John, The main thing to remember is not so much the concentration gradient, but the temperature gradient. Your notion of a CO2 molecule re-radiating isn’t quite right. GHG molecules that absorb mostly lose the energy through collision before they can re-radiate. Absorption and radiation are decoupled; radiation happens as it would for any gas at that temperature. At high optical density (say 15 μ), a patch of air radiates equally up and down. Absorption is independent of T. But the re-emission isn’t. What went down is absorbed by hotter gas, and re-emitted at higher intensity. There is a standard theory in heat transfer for the high optical density case, called Rosseland radiation. The radiant transfer satisfies the diffusion equation. Flux is proportional to temperature gradient, and the conductivity is inversely proportional to optical depth (mean path length). This works as long as most of the energy is reabsorbed before reaching surface or space. Optical depth>3 is a rule of thumb, although the concept is useful lower. It’s really a grey body limit – messier when there are big spectral differences. “Have you done the math Nick? Is it a wash, or is there more downward emission?” I think the relevant math is what I said above. Overall, warmer emits more, and the emission reaching the surface is much higher than that going to space, just based on temp diff. At issue is it’s not static during the night, it changes as air temps cool toward dew point, as water vapor takes over the longer wave bands (the optical window doesn’t change temp). Nick GHG molicules also absorb energy through collision, gas what they do with that energy. Nick The atmosphere is a gas and therefore doesn’t emit blackbody/ graybody radiation. It only emits spectral lines. If you are considering particles like dust and water( in liquid and solid phase) then it can emit BB/GB radiation. Alex, “It only emits spectral lines.” Yes, but even more importantly, only a tiny percent of the gas molecules in the atmosphere have spectral lines in the relevant spectra. Oddly enough, many think that GHG absorption is rapidly ‘thermalized’ into the kinetic energy of molecular motion which would make it unavailable for emission away from the planet (O2/N2 doesn’t emit LWIR photons) and given that only about 90 W/m^2 gets through the transparent window (Trenberth claims even less), it’s hard to come up with the 145 W/m^2 shortfall without substantial energy at TOA in the absorption bands. I don’t like the term ‘thermalised’. It implies a one way direction when in fact it isn’t. Molecules can lose vibrational energy through collision, they can also obtain rotational energy through collision. It goes equally both ways. Emission and absorption are also equal. A complex interchange but always in balance(according to probability of course). It’s all a matter of detection. Most people (including scientists) don’t know how stuff works. They are basically lab rats that don’t have a clue. They don’t need to know, they just do their job accurately and precisely. Unfortunately the conclusions they draw can be totally erroneous. If you imagine a molecule as a sphere then it will emit in any direction. In fact over 41,000 directions if the directions are 1 deg wide. Good luck having a detector in the right place to do that. that’s why it’s easier to use absorption spectroscopy. All energy comes from one direction and there are enough molecules to ‘get in the way’ and absorb energy. There is no consideration for emission, which can be in any direction and undetectable. The instrumentation is perfect for finding trace quantities of molecules and things. Absolutely useless for determining the total energy emitted by molecules. Anyone who thinks they can determine emission and energy transfer through this method should have their eye removed with a burnt stick. Everything that is above zero K Temperature emits thermal radiation; including all atmospheric gases. It’s called thermal radiation because it depends entirely on the Temperature and is quite independent of any atomic or molecular SPECTRAL LINES. Its source is simply Maxwell’s equations and the fact that atoms and molecules in collision involve the acceleration of electric charge. An H2 molecule essentially has zero electric dipole moment, because the positive charge distribution and the negative charge distribution both have their center of charge at the exact same place. But during a collision between two such molecules (which is ALL that “heat” (noun) is), the kinetic energy and the momentum is concentrated almost entirely in the atomic nuclei, and not in the electron cloud. The proton and the electron have the same magnitude electric charge (+/-e) but the proton is 1836 times as massive as the electron, so in a collision it is the protons that do the billiard ball collision thing, , and the result is a separation (during the collision) of the +ve charge center, and the negative charge center due to the electrons. and that results in a distortion of the symmetry of the charge distribution which results in a non-zero electric dipole moment, so you get a radiating antenna that radiates a continuum spectrum based on just the acceleration of the charges. There also are higher order electric moments, which might be quadrupolar, Octopolar or hexadecapolar moments, and they all can make very fine radiating antennas. Yes the thermal radiation from gases is low intensity but that is because the molecular density of gases is very low. They are highly transparent (to at least visible radiation) which is why their thermal radiation isn’t black body Stefan-Boltzmann or Planck spectrum radiation. Some of the 4-H club physics that gets bandied about in these columns, makes one wonder what it is they teach in schools these days. Well I guess I actually know that since I am married to a public school teacher. G George E. Smith, “Everything that is above zero K Temperature emits thermal radiation; including all atmospheric gases.” Not at any relevant magnitude relative to LWIR and it can be ignored. In astrophysics, the way gas clouds are detected is by either emission lines if its hot enough or absorption lines of a back lit source if its not. The problem is that the kinetic energy of an atmospheric O2/N2 molecule in motion is about the same as an LWIR photon, so to emit a relevant photon, it would have to give up nearly all of its translational energy. If only laser cooling could be this efficient. A Planck spectrum arises as molecules with line spectra merge their electron clouds forming a liquid or solid and the degrees of freedom increase as more and more molecules are involved. This permits the absorption and emission of photons that are not restricted to be resonances of an isolated molecules electron shell. In one sense, its like extreme collisional broadening. Have you tried collision simulations based on nothing but the repulsive force of one electron cloud against another? The colliding molecules change direction at many atomic radii away from where the electrons get close enough to touch/merge. As they cool, they can get closer and the outer electron shells merge which initiates the phase change from a gas to a liquid. In fact, nearly all interactions between atoms and molecules occurs in the outer most electron shell. Nick StokesJanuary 5, 2017 at 6:49 pm . It goes wrong there. When you write, ” It emits far more downward than up.” Surfaces emit upwards by definition. Very hard to emit anything when it goes inwards instead of outwards. Nonetheless atoms and molecules emit in all directions equally. Hence the atmosphere, not being a surface, at all levels emits upwards, downwards and sideways equally. What you are trying to say, I guess is that there is a lot of back radiation of the same energy before it finally gets away. This does not and cannot imply that anything emits more downwards than upwards. Eventually it all flows out the upwards plughole [vacuum], while always emitting equally in all directions except from the surface. Nick, and and +0.5 W/m^2 down.. This was supposed to say: “Just because the atmosphere as a whole mass emits significantly more downward to the surface THAN upwards into space does NOT mean upwelling IR absorbed somewhere within has a greater chance of being re-radiated downwards than upwards.” This also was supposed to say: “Whether a particular layer is emitting at 300 W/m^2 or 100 W/m^2, if 1 additional W/m^2 from the surface is absorbed, that layer will re-emit +0.5 W/m^2 UP and +0.5 W/m^2 down.” ” It emits far more downward than up.” Photons are emitted equally in all directions. At optical thickness below 300 meters the atmosphere radiates as a blackbody. CO2 is absorbing and emitting (and more importantly kinetically warming the transparent bulk of the atmosphere) according to its specific material properties all the while throughout this 300m section. The specific material property of CO2 is that it is a very light shade of greybody. It absorbs incredibly well, but re-radiates only a fraction of the incident photons. It transfers radiation poorly. Radiative transfer, up or down, is simply not how it works in the atmosphere. Admittedly, a bit out of my depth here. “Photons are emitted equally in all direction”. Is this statement impacted by geometry? By this I mean, aren’t both the black body and grey body spherical or at least circular? Clif, ‘”Is this statement impacted by geometry?” Absolutely and this explains the roughly 50/50 split between absorbed energy leaving the planet or being returned to the surface. It’s for the same reason that we consider the average input about 341 W/m^2 and not 1366 W/m^2 which is the actual flux arriving from the Sun. It just arrives over 1/4 the area over which its ultimately emitted. A blackbody has no inherent dimension or shape. It is just a concept. The word “radiation” itself implies circularity, but that’s just the way we like to think of something that goes in every imaginable direction equally. gymnosperm, “It absorbs well, but re-radiates only a fraction of the incident photons.” Not necessarily so. The main way that an energized CO2 molecule returns to the ground state is by emitting a photon of the same energy that energized it in the first place and a collision has a relatively large probability of resulting in such emission. It’s a red herring to consider that much of this is ‘thermalized’ and converted into the translation energy of molecules in motion. If this was the case, we would see little, if any, energy in absorption bands at TOA since that energy would get redistributed across the whole band of wavelengths, nor would we see significant energy in absorption bands being returned to the surface. See the spectrums Nick posted earlier in the comments. CO2 has only one avenue from the ground state to higher vibrational and rotational energy levels. This avenue is the Q branch and it gets excited at WN 667.4. This fundamental transition is accompanied by constructive and destructive rotations that intermittently occupy the range between 630 and 720. CO2 also has other transitions summarized below. “Troposphere” was a mental lapse intended as tropopause, but I have left it because it is interestingly true. If you are measuring light transmission through a gas filled tube and you switch off 667.4, all the other transitions must go dark as well. The real world is not so simple and there are lots ways for molecules to gain energy. It is well known that from ~70 kilometers satellites see CO2 radiating at the tropopause. This is quite remarkable because it is also well known that CO2 continues to radiate well above the tropopause and into the mesosphere. The point here is that the original source of 667.4 photons is the earth’s surface. In a gas tube it is impossible to know if light coming out the other end has been “transmitted” as a result of transparency, or absorption and re-emission. What we do know is that within one meter 667.4 is virtually extinguished and the tube warms up. The fate of a 667.4 photon leaving the earth’s surface is the question. The radiative transfer model will have it being passed between layers of the atmosphere by absorption and re-emission like an Australian rules football… I think it’s quite possible that it really doesn’t do much until water vapor starts condensing, which has a lot of node in the 15u area, so during condensing events, the water is an bright emitter, and it could stimulate the co2 @ 15u. The stuff that goes on inside gas lasers…… Yes. And the satellites looking down see CO2 radiating at the tropopause, where absorption of solar radiation by ozone adds a lot of new energy. This in spite of looking down through~60 km of stratosphere reputedly cooling from radiating CO2. I have been reading the comments in: There is a fascinating exchange between Nasif Nahle and Science of Doom. SOD argues transmission = 1-absorption and what is absorbed must be transmitted. Nasif calculates from measurements a column emissivity of .002, and then argues absorption must be similarly low. Their arguments BOTH fail on Kirchoff’s law, which pertains only to blackbodies. CO2 is a greybody, a class of materials that DO NOT follow Kirchoff’s law. It’s to simplistic a solution. “It’s to simplistic a solution.” What’s not transmitted is absorbed and eventually re-transmitted. The difference between transmission and re-transmission is that transmission is immediate and across the same area as absorption while re-transmission is delayed and across twice the area. It’s the delayed downward re-transmission that makes the surface warmer than it would be based on incident solar input alone. Clouds and GHG’s contribute to re-transmission where the larger effect is from clouds. The only thing necessary to grasp in this perceived “torrent of words”, a tour de force unlike any on the matter, is George’s explication of the ‘gray body’. It is that simple. Bravo. Well, all of this “average” radiation calculation stuff is really good fun.. Once this is done properly one quickly concludes that the “Radiative Greenhouse Effect” simply delays the transit time of energy through the “Sun/Atmosphere/Earth’s Surface/Atmosphere/Energy Free Void of the Universe” system by some very small time increment, probably tens of milliseconds, perhaps as much as a few seconds. Given that there are about 86 million milliseconds (or 86,000 seconds) in each day this delay of a few tens/hundredths of milliseconds has NO effect on the average temperature at the surface of the Earth, I again suggest that folks “read up” about how optical integrating spheres function. The optical integrating sphere exhibits what a climate scientist would consider nearly 100% forcing (aka “back-radiation”) and yet there is no “energy gain” involved, Yes, a “light bulb” inside an integrating sphere will experience “warming” from “back radiation” and this will change it’s efficacy (aka efficiency). BUT in the absence of a “power supply”, a unit that can provide ‘unlimited” energy (within some bounds, say +/- 100%) this change in efficacy cannot raise the average temperature of the emitting body, This is all well known stuff to folks doing absolute radiometry experiments. “Self absorption” (aka the green house effect) is a well known and understood effect in radiometry. It is considered a “troublesome error source” and means to quantify and understand it are known, if only to a small set of folks that consider themselves practitioners of “absolute radiometry” Thanks for your post, Cheers KevinK. KevinK January 5, 2017 at 7:22 pm. “Radiative Greenhouse Effect” simply delays the transit time of energy by some very small time increment, probably tens of milliseconds, perhaps as much as a few seconds.” Kevin a slight problem is that that ray of light/energy package may actually hit millions of CO2 molecules on the way out. A few milliseconds no problems but a a thousand seconds is 3 hours which means the heat could and does stay around for a significant time interval. Lucky for us in summer I guess. angech, please consider that light travels at 186,000 miles per second (still considered quite speedy). So even if it “collides” with a million CO2 molecules and gets redirected to the surface it’s speed is reduced to (about) 0.186 miles per second (186,000 / 1 million). That is still about 669 miles per hour (above the speed of sound, depending on altitude). So, given that the vast majority of the mass of the atmosphere around the Earth is within ten miles of the surface, at ~669 miles per hour the “back radiation” has exited to the “energy free void of space” after 0.014 hours (10 miles / 669 mph) which equals (0.014 hours * 60 minutes/hr) = 0.84 minutes = (0.84 minutes * 60 minutes/second) = 50.4 seconds. it is very hard to see how a worst case delay of ~50 seconds can be reasonably expected to change the “average temperature” of a system with a “fundamental period” of 86,400 seconds….. Cheers, KevinK KevinK, “it is very hard to see how a worst case delay of ~50 seconds …” While this kind of delay out into space has no effect, its the delay back to the surface that does it. Here’s a piece of C code that illustrates how past emissions accumulate with current emissions to increase the energy arriving at the surface and hence, its temperature. The initial condition is 240 W/m^2 of input and emissions by the surface, where A is instantly increased to 0.75. You can plug in any values of A and K you want. #include int main() { double Po, Pi, Ps, Pa; int i; double A, K; A = 0.75; // fraction of surface emissions absorbed by the atmosphere K = 0.5; // fraction of energy absorbed by the atmosphere and returned to the surface Ps = 239.0; Pi = 239.0; Po = 0.0; for (i = 0; i < 15; i++) { printf("time step %d, Ps = %g, Po = %g\n", i, Ps, Po); Pa = Ps*A; Po = Ps*(1 – A) + Pa*(1 – K); Ps = Pi + Pa*K; } } Love your work Kevin! Have you noticed the cooling rate at night decays exponentially? micro, thanks for the compliment. I have not considered the decay of the cooling rate. Seems like some investigation is needed, where do I apply for my grant money ??? Cheers, KevinK If you find some, let me know. We’ll it looked like it was reaching equilibrium, but my ir thermometer kept telling me the optical window was still 80 to 100F colder, same as it was when it was cooling fast. Thanks for doing physics here. It’s a great refresher. Some of it I’ve not revisited since I was at uni. Ok, here are some references for folks to read at their leisure; Radiometry of an integrating sphere (see section 3,7; “Transient Response”) Tech note on integrating sphere applications (see section 1.4, “Temporal response of an Integrating Sphere”) Note, Optical Integrating Spheres have been around for over a century, well known stuff, very little “discovery/study” necessary. Another note; the ‘Transient Response” to an incoming pulse of light is always present, a continuous “steady state” input of radiation is still impacted by this impulse response. However the currently available radiometry tools cannot sense the delay when the input is “steady state”. The delay is there, we just cannot see/measure it. Cheers, KevinK. One also has to include the fact that doubling the amount of CO2 in the Earth’s atmosphere will slightly decrease the dry lapse rate in the troposphere which offsets radiative heating by more than a factor of 20. Another consideration is that H2O is a net coolant in the Earth’s atmosphere. As evidence of this the wet lapse rate is signifficlatly lower than the dry lapse rate. So the H2O feedback is really negative and so acts to diminish any remaining warming that CO2 might provide. Another consideration is that the radiant greenhouse effect upon which the AGW conjecture depends has not been obsered anywhere in the solar system. The radiant greenhouse effect is really ficititious which renders the AGW conjecture as ficititious. If CO2 really affected climate then one would xpect that the increase in CO2 over the past 30 years would have caused at least a measureable increase in the dry lapse rate in the troposphere but such has not happened. @ willhaas January 5, 2017 at 8:04 pm : Thanks, Wil.. Radiation is ineffective because of optical depth below 5km, except in the window. We do know that the faster and mightier conduction-thermalisation-water vapour convective and condensate path totally dominates in clearing the opaque bottom half of the troposphere, and then some.. As per Standard Atmospheres. But it still works on Venus and Titan, for starters. A simple model, based on known physics and 1st principles, yields an estimate of ‘climate sensitivity’ that approximates physical evidence while illustrating (yet again) that climate sensitivity estimates from complex software models of planetary climate are unrealistically way too high! Very interesting. Thank you, George White! It’s time to show some real spectra, and see what can be learnt. Here, from a text by Grant Petty, is a view looking up from surface and down from 20km, over an icefield at Barrow at thaw time. If you look at about 900 cm^-1, you see the atmospheric window. The air is transparent, and S-B from surface works. In the top plot, the radiation follows the S-B line for about 273K, the surface tempeerature. An looking up, it follows around 3K, space. But if you look at 650 cm^-1, a peak CO2 frequency, you see that it is following the 225K line. That is the temperature of TOA. The big bite there represents the GHE. It’s that reduced emission that keeps us warm. And if you look up, you see it following the 268K line. That is the temperature of air near rhe ground, which is where that radiation is coming from. And so you see that, by eye, the intensity of radiation down is about twice up. In this range radiation from the surface (high) is disconnected from what is emitted at TOA. NIck, You are conflating a Planck spectrum with conformance to SB. If you apply Wein’s displacement law the average radiation emitted by the planet, the color temperature of the planets emissions is approximately equal 287K while the EQUIVALENT temperature given by SB is about 255K owing to the attenuation you point out in the absorption bands. Moreover; as I said before, the attenuation in a absorption bands is only about 3db and it looks basically the same from 100km except for some additional ozone absorption. Where do you think the 255K equivalent temperature representing the 240 W/m^2 emitted by the planet comes from? George, “Where do you think the 255K equivalent temperature representing the 240 W/m^2 emitted by the planet comes from?” It’s an average. As you see from this clear sky spectrum, parts are actually emitted from TOA (225K) and parts from surface (273K). If you aggregate those as a total flux and put into S-B, you get T somewhere between. Actually, it’s more complicated because of clouds, which replace the surface component by something colder (top of cloud temp), and because there are some low OD frequencies where the outgoing emission comes from various levels. But the key thing is that you can’t make your assumption that the atmosphere re-radiates equally up and down. It just isn’t so. Nick, “But the key thing is that you can’t make your assumption that the atmosphere re-radiates equally up and down. It just isn’t so.” What do you think this ratio is if its not half up and half down? The sum of what goes up and down is fixed and the more you think the atmosphere absorbs (Trenberth claims even more than 75%), the larger the fraction of absorption that must go up in order to acheive balance. George, “What do you think this ratio is if its not half up and half down?” It’s frequency dependent. At 650 cm^-1, in that spectrum, it is 100:55. But it would be different elsewhere (than Barrow in spring), and at other frequencies. There is no easy way to deduce a ratio; you just have to add it all up. But 1:1 has no basis. Nick, “you just have to add it all up.” Yes, and I’ve done this and its about 50/50, but it does vary spatially and temporally a little bit on either side and the as system varies this ratio, it almost seems like an internal control valve, none the less, it has a relatively unchanging long term average. But as I keep having to say, the climate sensitivity is all about changes to long term averages and long term averages are integrated over a whole number of years and over the relevant ranges of all the dependent variables they are dependent on. That’s because there is one. The images are reading different things. Blind Freddy can see that they are the inverse of each other. The only way you could get a spectrum like the 2nd image is by looking at the sun. Black space won’t give you that spectrum. Both images are looking through the atmosphere with a background ‘light’. Looking at the same thing -the atmosphere. Please explain why the photons from the sun aren’t absorbed by the atmosphere while the photons from earth are. “Black space won’t give you that spectrum.” You aren’t seeing black space, except in the atmospheric window (around 900 cm^-1). You are seeing thermally radiating gases, mainly CO2 and H2O. Unless you are Blind Freddy. Nick Give me a link to the paper. I can’t find it. Don’t be rude. I actually like you, so don’t make enemies if you don’t have to. I feel that your ‘cut and paste’ from some source is biased (by someone). The 2 images you’ve shown are different. one is an emission spectrum and the other is an absorption spectrum. It’s clearly visible. I would like to reassure myself that the information is correct. I am certain that you would like some reassurance too Nick You haven’t answered my question. ‘Please explain why the photons from the sun aren’t absorbed by the atmosphere while the photons from earth are.’ Alex, “Give me a link to the paper. “ It’s a textbook, here. And yes, one spectrum is looking down, the other up. It shows the GHG complementary emission from near surface air and TOA. “Please explain why the photons from the sun aren’t absorbed by the atmosphere “ We’re talking about thermal IR. There just aren’t that many coming from the sun in that range, but yes, they are absorbed in that range. Someone will probably say that at all levels emission increases with temperature, so the sun should be emitting more. Well, it emits more per solid angle. You get more thermal IR from the sun than from any equivalent patch of sky. But there is a lot more sky. Thermal IR from sun is a very small fraction of total solar energy flux. Nick, The other part of the question has not been addressed yet. While you don’t yet accept that the gray body model accurately reflects the relationship between the surface temperature and emissions of the planet and since in LTE Pin == Pout this relationship sets an upper bound on the sensitivity to forcing, how can you explain Figure 3 and especially the tight distribution of samples (red dots) around the predicted transfer characteristic (green line)? BTW, of all the plots I’ve done that show one climate variable against another, the relationship in Figure 3 has the tightest distribution of samples I’ve ever seen. It’s pretty undeniable. ” While you don’t yet accept that the gray body model accurately reflects the relationship between the surface temperature and emissions of the planet” Because the concepts are all wrong. You confound surface temperature with equivalent temperature. The atmosphere is nothing like what you model. It has high opacity in frequency bands, at which it is also highly radiative. It has a large range of vertical temperature variation. Surface temperatures are very largely set by the amount of IR that is actually emitted by lower levels of the atmosphere. And they of course depend on the surface. At times you seem to say that you are just doing Trenberth type energy accounting. But Trenberth has no illusions that his accounting can determine sensitivity. The physics just isn’t there. “You confound surface temperature with equivalent temperature.” The two track changes in each other exactly. It’s a simple matter to calibrate the absolute value. “The atmosphere is nothing like what you model.” I don’t model the atmosphere, I model the relative relationship between the boundaries of that atmosphere. One boundary at the surface and another at TOA. What happens between the surface and TOA are irrelevant, all the model cares about is what the end result is. “It has high opacity in frequency bands, at which it is also highly radiative. It has a large range of vertical temperature variation.” This is why averages are integrated across wavelength and other dependent variables. This way, the averages are wavelength independent as are all the other variables. “Surface temperatures are very largely set by the amount of IR that is actually emitted by lower levels of the atmosphere.”, No. Surface temperature are set by the amount of IR the surface radiates and absorbs, which in the steady state are equal. If it helps, consider a water world and/or worlds without water, GHG’s and/or atmospheres. Nick, Another way of looking at this: The total IR flux emitted by the surface which is absorbed by the atmosphere is roughly 300 W/m^2, which happens to (coincidently) be roughly the same as the amount of IR the atmosphere as a whole mass passes to the surface. You don’t really think or believe the contribution of 300 W/m^2 of DLR at the surface is entirely sourced from and driven by the re-radiation of this 300 W/m^2 initially absorbed by the atmosphere from the surface, do you? Clearly there would be contributions from all three energy flux input sources to the atmosphere — the energy of which also radiates downward toward and to the surface. Keep in mind there are multiple energy inputs to the atmosphere besides just the upwelling IR emitted from the surface (and atmosphere) which is absorbed. Post albedo solar energy absorbed by the atmosphere and re-emitted downward to the surface would not be ‘back radiation’, but instead ‘forward radiation’ from the Sun whose energy has yet to reach the surface. And in addition to the radiant flux emitted from the surface which is absorbed there is significant non-radiant flux moved from the surface into the atmosphere, primarily as the latent heat of evaporated water, which condenses to forms clouds — whose deposited energy within (in addition to driving weather), also radiates substantial IR downward to the surface. The total amount of IR that is ultimately passed to the surface has contributions from all three input sources, and the contribution from each one cannot be distinguished or quantified in any clear or meaningful way from the other two. Thus mechanistically, the downward IR flux ultimately passed to the surface from the atmosphere has no clear relationship to the underlying physics driving the GHE, i.e. the re-radiation of initially absorbed surface IR energy back downward where it’s re-absorbed at a lower point somewhere. Thus it’s this re-radiated downward push of absorbed surface IR within the atmosphere that is slowing down the radiative cooling or resisting the huge upward IR push ultimately out the TOA. The total DLR at the surface is more just related to the rate the lower layers in combination with the surface are forced (from that downward re-radiated IR push) to be emitting up in order for the surface and the whole of the atmosphere to be pushing through the required 240 W/m^2 back into space. Nick, (as non-radiant flux). than up and +0.5 W/m^2 down. Meaning this is independent of the lapse rate. that’s slowing down the radiative cooling of the system.. I’ve noticed many people like yourself seem unable to separate radiative transfer in the atmosphere with the underlying physics of the GHE that ultimately leads to surface warming. The GHE is applied physics within the physics of atmospheric radiative transfer. Atmospheric radiative transfer is not itself (or by itself) the physics of the GHE. This means the underlying physics of the GHE are largely separate from the thermodynamic path manifesting the energy balance, and it is this difference that seems to elude so many people like yourself. Nick, I assume it is agreed by you that the constituents of the atmosphere, i.e. GHGs and clouds, act to both cool the system by emitting IR up towards space and warm it by emitting IR downwards towards the surface. Right? George is just saying that like anything else in physics or engineering, this has to be accounted for, plain and simple. He’s using/modeling the Earth/atmosphere system as a black box, constrained by COE to produce required outputs at the surface and TOA, given specific inputs: When this is applied to surface IR absorbed by the atmosphere, it yields that only about half of what’s absorbed by GHGs is acting to ultimately warm the surface, where as the other half is contributing to the radiative cooling push of the atmosphere and ultimate cooling of the system: George is not modeling the actual thermodynamics here and all the complexities associated with the thermodynamics (which isn’t possible by such methods), but rather he’s trying to isolate the effect the absorption of surface IR by GHGs, and the subsequent non-directional re-radiation of that absorbed energy, is having amongst the highly complex and non-linear thermodynamic path manifesting the surface energy balance, so far as its ultimate contribution to surface warming. Nick – that post and the info contained in those graphs are fantastically educational to me [just trying to learn here]. Now I must go and try to find the paper they came from. Just wanted to say that those observations and descriptions crystallize what is otherwise difficult to visualize [for a newbie]. Thanks much. Nick , are those spectra in a easily accessible tables somewhere ? Email me them or point me to them and I’ll calculate the actual radiative equilibrium temperature they imply . co2isnotevil is right that “lapse rate” is the equilibrium expression of gravitational energy . It cannot be explained as an optical phenomenon — which is why neither quantitative equation nor experimental demonstration of such a phenomenon has ever been presented . Bob, A couple of things to notice about the spectra. 1) There is no energy returned to the surface in the transparent regions of the atmosphere. This means that no GHG energy is being ‘thermalized’ and re-radiated as broad band BB emissions. 2) The attenuation in absorption bands at TOA (20km is high enough to be considered TOA relative to the radiative balance) is only about 3db (50%). Again, if GHG energy was being ‘thermalized’, we would see littlle, if any, energy in the absorption bands, moreover; this is consistent with the 50/50 split of absorbed energy required by geometrical considerations. 3) The small wave number data (400-600) is missing from the 20km data looking down which would otherwise illustrates that the color temperature of the emissions (where the peak is relative to Wein’s Displacement Law) is the surface temperature and the 255K equivalent temperature is a consequence of energy being removed from parts of the spectrum manifesting a lower equivalent temperature for the outgoing radiation, BTW, I’ve had discussions with Grant Perry about this and he has trouble moving away from this ‘thermalization’ point of view, despite the evidence. To be fair, it doesn’t really matter from a thermodynamic balance and temperature perspective (molecules in motion affect a temperature sensor in the same was a photons of the same energy), but only matters if you want to accurately predict the spectrum and account for 1), 2) and 3) above. Of course, this goes against the CAGW narrative which presumes that GHG absorption heats the atmosphere (O2/N2) which then heats the surface by convection, rather then the purely radiative effect it is where photons emitted by GHG’s returning to the ground state are what heat the surface. One difference is that if all GHG absorption was ‘thermalized’ as the energy of molecules in motion, all must be returned to the surface, since molecules in motion do not emit photons that can participate in the radiative balance and there wouldn’t be anywhere near enough energy to offset the incoming solar energy. That is correct. An atmosphere in hydrostatic equilibrium suspended off the surface by the upward pressure gradient force and thus balanced against the downward force of gravity will show a lapse rate slope related to the mass of the atmosphere and the strength of the gravitational field. It is all a consequence of conduction and convection NOT radiation. The radiation field is a mere consequence of the lapse rate slope caused by conduction and convection. Radiation imbalances within an atmosphere simply lead to convection changes that neutralise such imbalances in order to maintain long term hydrostastic equilibrium. No matter what the proportion of GHGs in an atmosphere the surface temperature does not change. Only the atmospheric convective circulation pattern will change. Bob A, “Nick , are those spectra in a easily accessible tables somewhere ?” Unfortunately not, AFAIK. As I mentioned above, the graph comes from a textbook. The caption gives an attribution, but I don’t think that helps. It isn’t recent. I have my own notion of the lapse rate here, and earlier posts. Yes, the DALR is determined by gravity. But it takes energy to maintain it, and the flux that passes through with radiative transfer in GHG-active regions helps to maintain it. This is one of the great atmospheric experiments of all time. I agree with everything you say except that 228K is the Arctic tropopause rather than the top of the atmosphere. The top of the atmosphere is more like 160K where water is radiating in the window. The peak CO2 frequency is actually 667.4, close enough. You can see a little spike indicating the 667.4 Q branch. Looking up, it would ordinarily be pointed down. In this case a strong surface inversion reversed it. Long wave infrared light only comes from the earth’s surface. It does not come from the sun. It is not manufactured by Carbon dioxide, or any other greenhouse gas. These gasses absorb and re-emit long wave radiation emitted from the surface according to their individual material properties. You say it emits down more than it emits up. I say the two emissions are disconnected. The boundary layer extinguishes the 667.4 band. Radiation in the band resumes generally at about the cloud condensation level as a result of condensation energy. CO2 radiates from the tropopause because massive amounts of new energy are added by ozone absorption. Presence in the phrase “the Climate Sensitivity” of the word “the” implies existence of a fixed ratio between the change in in the spatially averaged surface air temperature at equilibrium and the change in the logarithm of the atmospheric CO2 concentration. Would anyone here care to defend the thesis that this ratio is fixed? Terry, It’s definitely not a fixed ratio, either temporally or spatially, but it does have a relatively constant yearly average and changes to long term averages are all we care about when we are talking about the climate sensitivity. This is where superposition in the energy domain comes in which allows us to calculate meaningful averages since 1 Joule does 1 Joule of work and no more or no less. Thank you, c02isnotevil, for taking the time to respond. That “1 Joule does 1 Joule of work” is not a principle of thermodynamics. Did you mean to say that “1 Joule of heat crossing the boundary of a concrete object does 1 Joule of work on this boundary absent change in the internal energy of this object”? The basic point is no 1 Joule is any different than any other. That “no 1 Joule is any different than any other” is a falsehood. Terry, Energy can not be created or destroyed, only transformed from one form to another. Different forms may be incapable of doing different kinds of work, but relative to the energy of photons, it’s all the same and photons are all that matter relative to the radiative balance and a quantification of the sensitivity. The point of this is that if each of the 240 W/m^2 of incident power only results in 1.6 W/m^2 of surface emissions, the next W/m^2 can’t result in more than 4 W/m^2 which is what the IPCC sensitivity requires. The average emissivity is far from being temperature sensitive enough. If you examine the temp vs. emissivity plot I posted in response to one of Nick’s comments, the local minimum is about at the current average temperature and the emissivity increases (sensitivity decreases) whether the temperature increases or decreases, but not by very much. Terry Oldberg, ‘Presence in the phrase “the Climate Sensitivity” of the word “the” implies existence of a fixed ratio …’ Could you explain the reasoning that led you to that statement. phaedo: I can explain that. Thanks for asking. Common usage suggests that “the climate sensitivity” references a fixed ratio, The change that is in the numerator of this ratio is the equilibrium temperature. Thus, this concept is often rendered as “ECS” but I prefer “TECS” (acronym for “the equilibrium climate sensitivity”) as this usage makes clear that a constant is meant. Warmists argue that the value of TECS is about 3 Celsius per doubling of the CO2 concentration. Bayesians treat the ratio as a parameter having prior and posterior probability density functions indicating that they believe TECS to be a constant with uncertain value. It is by treating TECS as a constant that climatologists bypass the thorny issue of variability. If TECS is only the ratio of two numbers then climatologists have to make their arguments in terms of probability theory and statistics but to avoid this involvement is a characteristic of their profession. For evidence, search the literature for a description of the statistical population of global warming climatology. I believe you will find, like me, that there isn’t one. I have an effective cs for to extratropics for the seasonal changes in calculated station solar and it’s actually change in temp here micro6500 That’s a good start on a statistical investigation. To take it to the next level I’d identify the statistical population, build a model from a sample drawn randomly from this population and cross validate this model in a different sample. If the model cross validates you’ve done something worth publishing. The model “cross validates” if and only if the predictions of the model match the observations in the second of the two samples. To create a model that cross validates poses challenges not faced by professional climatologists as their models are not falsifiable. It does, it shows up as an exponential decay in cooling rates, and the some of the data (with net rad) was from Australia, and other temp charts are from data in Ohio. And it explains everything. (Clear sky cooling performance) The climate system has a couple positive feedbacks that do not violate any laws of physics. For one thing, the Bode feedback theory does not require an infinite power supply for positive feedback, not even for positive feedback with feedback factor exceeding 1. The power supply only has to be sufficient to keep the law of conservation of energy from being violated. There is even the tunnel diode oscillator, whose only components are an inductor and capacitor to form a resonator, two resistors where one of them nonlinear to have voltage and current varying inversely with each other over a certain range (the tunnel diode), and a power supply to supply the amount of current needed to get the tunnel diode into a mode where voltage across it and current passing through it vary inversely. As for positive feedbacks in the climate system: One that is simple to explain is the surface albedo feedback. Snow and ice coverage vary inversely with temperature, so the amount of sunlight absorbed varies directly with temperature. This feedback was even greater during the surges and ebbings of Pleistocene ice age glaciations, when there was more sunlight-reflecting ice coverage that could be easily expanded or shrunk by a small change in global temperature. Ice core temperature records indicate climate that was more stable during interglacial periods and less stable between interglacials, and there is evidence that at some brief times during glaciations there were sudden climate shifts – when the climate system became unstable until a temporarily runaway change reduced a positive feedback that I think was the surface albedo one. Another positive feedback is the water vapor feedback, which relates to the gray body atmosphere depiction in Figure 2. One thing to consider is that the gray body filter is a bulk one, and thankfully Figure 2 to a fair extent shows this. Another thing to consider is that this bulk gray body filter is not uniform in temperature – the side facing Earth’s surface is warmer than the side facing outer space, so it radiates more thermal radiation to the surface than to outer space. (This truth makes it easier to understand how the Kiehl Trenberth energy budget diagram does not require violation of any laws of physics for its numbers to add up with its attributions to various heat flows.) If the world warms, then there is more water vapor – which is a greenhouse gas, and the one that our atmosphere has the most of and that contributes the most to the graybody filter Also, more water vapor means greater emissivity/absorption of the graybody filter depicted in Figure 2. That means thermal radiation photons emitted by the atmosphere reaching the surface are emitted from an altitude on-average closer to the surface, and thermal radiation photons emitted by the atmosphere and escaping to outer space are emitted from a higher altitude. So, more water vapor means the bulk graybody filter depicted in Figure 2 is effectively thicker, with its effective lower surface closer to the surface and warmer. Such a thicker denser effective graybody filter has increased inequality between its radiation reaching the surface and radiation escaping to outer space. Donald, You are incorrect about Bode’s assumptions. They are laid out in the first 2 paragraphs in the book I referenced. Google it and you can find a free copy of it on-line. The requirement for a vacuum tube and associated power supply specifies the implicit infinite supply, as there are no restrictions on the output impedance in the Bode model, which can be 0 requiring an infinite power supply. This assumed power supply is the source of most of the extra 12+ W/m^2 required over and above the 3.7 W/m^2 of CO2 ‘forcing’ that is required in the steady state to sustain a 3C temperature increase. Only about 0.6W per W/m^2 (about 2.2 W/m^2) is all the ‘feedback’ the climate system can provide. Of course, the very concept of feedback is not at all applicable to a passive system like the Earth’s climate system (passive specifically means no implicit supply). Regarding ice. The average ice coverage of the planet is about 13%, most of which is where little sunlight arrives anyway. It it all melted and considering that 2/3 of the planet is covered by clouds anyway and mitigates the effects of albedo ‘fedeback’, the incremental un-reflected input power can only account for about half of the 10 W/m^2 above and beyond the 2.2 W/m^2 from 3.7 W/m^2 of forcing based on 1.6 W/m^2 of surface emissions per W/m^2 of total forcing. This does become more important as more of the planet is covered by ice and snow, but at the current time, we are pretty close to minimum possible ice. No amount of CO2 will stop ice from forming during the polar winters. Regarding water vapor. You can’t consider water vapor without considering the entire hydro cycle, which drives a heat engine we call weather which unambiguously cools based on the trails of cold water left in the wake of a Hurricane. The Second Law has something to say about this as well, where a heat engine can’t warm its source of heat. Amplifiers with positive feedback even to the point of instability or oscillation do not require zero output impedance, and they work in practice with finite power supplies. Consider the tunnel diode oscillator, where all power enters the circuit through a resistor. Nonpositive impedance in the tunnel diode oscillator is incremental impedance, and that alone being nonpositive is sufficient for the circuit to work. Increasing the percentage of radiation from a bulk graybody filter of nonuniform temperature towards what warms its warmer side does not require violation of the second law of thermodynamics, because this does not involve a heat engine. The only forms of energy here are heat and thermal radiation – there is no conversion to/from other forms of energy such as mechanical energy. The second law of thermodynamics only requires net flow to be from warmer points and surfaces to cooler points and surfaces, which is the case with a bulk graybody filter with one side facing a source of thermal radiation that warms the graybody filter from one side. Increasing the optical density of that filter will cause the surface warming it to have a temperature increase in order to get rid of the heat it receives from a kind of radiation that the filter is transparent to, without any net flows of heat from anything to anything else that is warmer. As for 2/3 of the Earth’s surface being covered by clouds: Not all of these clouds are opaque. Many of them are cirrus and cirrostratus, which are translucent. This explains why the Kiehl Trenberth energy budget diagram shows about 58% of incoming solar radiation reaching the surface. Year-round insolation reaching the surface around the north coast of Alaska and Yukon is about 100 W/m^2 according to a color-coded map in the Wikipedia article on solar irradiance, and the global average above the atmosphere is 342 W/m^2. “Amplifiers with positive feedback even to the point of instability or oscillation do not require zero output impedance” Correct, but Bode’s basic gain equation makes no assumptions about the output impedance and it can just as well be infinite or zero and it still works, therefore, the implicit power supply must be unlimited. The Bode model is idealized and part of the idealization is assuming an infinite source of Joules powers the gain. Tunnel diodes work at a different level based on transiently negative resistance, but this negative resistance only appears when the diode is biased, which is the external supply. “Not all of these clouds are opaque.” Yes, this is true and the average optical depth of clouds is accounted for by the analysis. The average emissivity of clouds given a threshold of 2/3 of the planet covered bu them, is about 0.7. Cloud emissivity approaches 1 as the clouds get taller and denser, but the average is only about 0.7. This also means that about 30% of surface emissions passes through clouds and this is something Trenberth doesn’t account for with his estimate of the transparent window. More on your statement that clouds cover 2/3 of the surface: You said “After accounting for reflection by the surface and clouds, the Earth receives about 240 W/m2 from the Sun”. That is 70% of the 342 W/m^2 global average above the atmosphere. co2isnotevil: Clouds can simultaneously have majority emissivity and majority transmission of incoming solar radiation. The conflict is resolved by incoming solar radiation and low temperature thermal radiation being at different wavelengths, and clouds have absorption/emissivity varying with wavelength while equal to each other, and higher in wavelengths longer than 1.5 micrometers (about twice the wavelength of border between visible and infrared) than in shorter wavelengths. Donald, “Clouds can simultaneously have majority emissivity and majority transmission of incoming solar radiation” Yes, this is correct. But again, we are only talking about long term changes in averages and over the long term, the water in clouds is tightly coupled to the water in oceans and solar energy absorbed by clouds can be considered equivalent to energy absorbed by the ocean (surface), at least relative to the long term steady state and the short term hydro cycle. co2isnotevil: I did not state that a tunnel diode oscillator does not require a power supply, but merely that it does not require an infinite one. For that matter, there is no such thing as an infinite power supply. Donald, “there is no such thing as an infinite power supply.” Correct, but we are dealing with idealized models based on simplifying assumptions, especially when it comes to Bode’s feedback system analysis.. co2isnotevil saying: .” Please state how this is not applicable. Cases with active gain can be duplicated by cases with passive gain, for example with a tunnel diode. The classic tunnel diode oscillator receives all of its power through a resistor whose resistance is constant, so availability of energy/power is limited. The analogue to Earth’s climate system does not forbid positive feedback or even positive feedback to the extent of runaway, but merely requires such positive feedback to be restricted to some certain temperature range, outside of which the Earth’s climate is more stable. So where’s the density component of your equations? Density is regulated by gravity alone on Earth. prjindigo, The internals of the atmosphere, which is where density comes in, are decoupled from the model which is a model that matches the measured transfer function of the atmosphere which quantifies the causal behavior between the surface temperature and output emissions of the planet. This basically sets the upper limit on what the sensitivity can be. The lower limit is the relationship between the surface temperature and the post albedo input power, whose slope is 0.19 C per W/m^2 which is actually the sensitivity of an ideal BB at the surface temperature! This is represented by the magenta line in Figure 3. I didn’t bring it up because getting acceptance for 0.3C per W/m^2 is a big enough hill to climb. Forrest, The satellite data itself doesn’t say much explicitly, but it does report GHG concentrations (H2O and O3) and when I apply a radiative transfer model driven by HITRAN absorption line data (including CO2 and CH4) to a standard atmosphere with measured clouds, I get about 74%, which is well within the margin of error. A black body is nearly an exact model for the Moon. By looking out my window I can see that this is not the case, it’s clearly a gray body. A perfect blackbody is one that absorbs all incoming light and does not reflect any.. BTW, I don’t really like the wikipedia definition which seems to obfuscate the applicability of a gray body emitter (black body source with a gray body atmosphere). This is the trouble that comes when not properly allowing for the frequency dependence of ε. For the Moon, in the SW we see absorption and reflection (but not emission), which is fairly independent of frequency in that range. But ε changes radically getting into thermal IR frequencies, where we see pretty much black body emission. “allowing for the frequency dependence of ε.” The average ε is frequency independent and that is all the model depends on. Why is it so hard to grasp that this model is concerned only with long term averages and that yes, every parameter is dependent on almost every other parameter, but they all have relatively constant long term averages. This is why we need to do the analysis in the domain of Joules where superposition applies since if 1 Joule can to X amount of work, 2 Joules can to 2X amount of work and it takes work to warm the surface and keep it warm and the sensitivity is all about doing incremental work. So many of you can’t get your heads out of the temperature domain which is highly non linear where superposition does not apply. co2isnotevil January 5, 2017 at 9:32 pm. If you’re going to do a scientific post then get the terminology right, the moon is not a black body it’s a grey body. The removal of the reflected light is exactly what a greybody does, the blackbody radiation is reduced by the appropriate fraction in the gray body, that’s what the non unity constant is for. Also the atmosphere is not a greybody because its absorption is frequency dependent. Phill, “The removal of the reflected light is exactly what a greybody does” This is not the only thing that characterizes a gray body. Energy passed through a semi-transparent body also implements grayness as does energy received by a body that does work other than affecting the bodies temperature (for example, photosynthesis). My point is that if you don’t consider reflected input, the result is indistinguishable from a BB. And BTW, there is no such thing as an ideal BB in nature. All bodies are gray. Considering something to be EQUIVALENT a body black is a simplifying abstraction and this is what modelling is all about. I don’t understand why the concept of EQUIVALENCE is so difficult for others to understand as without understanding EQUIVALENCE there’s no possibility of understanding modelling. Just to be clear for me, I understand equivalency very well. I also understand fidelity, and reusability. I’m just trying to understand and discuss the edges that define that fidelity. “A gray body emitter is one where the power emitted is less than would be expected for a black body at the same temperature.” A gray body emitter has a rather specific meaning, not observed here. The power is less, but uniformly distributed over the spectrum. IOW, ε is independent of frequency. This is very much not true for radiative gases. “Trenberth’s energy balance lumps the return of non radiant energy as part of the ‘back radiation’ term, which is technically incorrect since energy transported by matter is not radiation.” A flux is a flux. Trenberth is doing energy budgetting; he’s not restricting to radiant. The discussion here is wrong. Material transport does count; it helps bring heat toward TOA, so to maintain the temperature at TOA as it loses heat by radiation. Wiki’s article on black body is more careful and correct. It says “A source with lower emissivity independent of frequency often is referred to as a gray body.”. The independence is important. “It is why 11 μ with ε = 0, IR goes straight from surface to space, while at 15μ, where ε is high, IR is radiated from TOA and not lower, because the atmosphere is opaque.” The energy of ALL light is frequency dependent regardless the color (black or grey) of the emitting body. Emissivity has nothing to do with frequency, except as regards the wavelengths the emitting body happens to absorb and emit. Wiki simply has it wrong. CO2 has very LOW emissivity at about 15 microns. Otherwise it would not be extinguished within a meter of standard atmosphere. In order for surface energy to travel to the tropopause at 15 microns it would have to TRANSMIT. It doesn’t. Transmission is 1-absorption. There is ZERO transmission to the tropopause at 15/667.4. From Science of Doom: gymnosperm Where did this graph come from? I would expect to find that the std atm had conditions that would lead to near 100% rel humidity for this spectrum. Do you have a link to the data to see exactly what they were doing with it. This is what I’ve been blathering about, Or I don’t understand just exactly what (or where?) is being measured here. If this is surface up to space, it should only be like this if the rel humidity is pretty high. micro6500, This plot looks like the inverse of absorption, which is not specifically transmission since transmission includes the fraction of absorption that is eventually transmitted into space. Except it isn’t just co2. “The power is less, but uniformly distributed over the spectrum” As I pointed out, this is not a requirement. Joules are Joules and the frequency of the photons transporting those Joules is irrelevant relative to the energy balance and subsequent sensitivity. Again, the 240 W/m^2 of emissions we use SB to convert to an EQUIVALENT temperature of 255K is not a Planck distribution, moreover; the average emissivity is spectrally independent since it’s integrated across the entire spectrum. “Material transport does count” Relative to the radiant balance of the planet and the consequential sensitivity, it certainly does, since only photons can enter or leave the top boundary of the atmosphere. Adding a zero sum source and sink of energy transporter by matter to the radiant component of the surface flux shouldn’t make a difference, but it adds a layer of unnecessary obfuscation that does nothing but confuse people. The real issue is that he calls the non radiant return of energy to the surface radiation which is misrepresentative at best. “As I pointed out, this is not a requirement.” It is a requirement of the proper definition of grey body. It is why grey, as opposed to blue or red. And it is vitally important to atmospheric radiative transport. It is why 11 μ with ε = 0, IR goes straight from surface to space, while at 15μ, where ε is high, IR is radiated from TOA and not lower, because the atmosphere is opaque. Nick, “It is a requirement of the proper definition of grey body.” Then the association between 255K and the average 240 W/m^2 emitted by the planet is meaningless as is the 390 W/m^2 (per Trenberth) emitted by the surface (he uses about 287.5K). There is no requirement for a Planck distribution when calculating the EQUIVALENT temperature of matter based on its radiative emissions. This is what the word EQUIVALENT means. That is, an ideal BB at the EQUIVALENT temperature (or a gray body at an EQUIVALENT temperature and EQUIVALENT emissivity) will emit the same energy flux as the measured radiative emissions, albeit with a different spectra. “Then the association between 255K and the average 240 W/m^2 emitted by the planet is meaningless…” Nobody thinks that there is actually a location at 255K which emits the 240 W/m2. “as is the 390 W/m^2 (per Trenberth) emitted by the surface (he uses about 287.5K).” No, the surface is a black body (very dark grey) in thermal IR. It is a more or less correct application of S-B, though the linearising of T^4 in averaging involves some error. “Planck distribution when calculating the EQUIVALENT temperature of matter” You can always calculate an equivalent temperature. It’s just a rescaled expression of flux, as shown on the spectra I included. But there is no point in defining sensitivity as d flux/d (equivalent temperature). That is curcular. You need to identify the ET with some real temperature. “But there is no point in defining sensitivity as d flux/d (equivalent temperature)” But, this is exactly what the IPCC defines as the sensitivity because dFlux is forcing. The idea of representing the surface temperature as an equivalent temperature of an ideal BB is common throughout climate science on both sides of the debate. It works because the emissivity of the surface itself (top of ocean + bits of land that poke through) is very close to 1. Only when an atmosphere is layered above it does the emissivity get reduced. ” this is exactly what the IPCC defines as the sensitivity “ Not so. I had the ratio upside down, it is dT/dP. But T is measured surface air temperature, not equivalent temperature. We know how equivalent temp varies with P; we have a formula for it. No need to measure anything. “But T is measured surface air temperature, not equivalent temperature. ” T is the equivalent surface temperature which is approximately the same as the actual near surface temperature measured by thermometers. This is common practice when reconstructing temperature from satellite data and the fact that they are close is why it works. “… doubling CO2 is equivalent to 3.7 W/m2 of incremental, post albedo solar power” The sun when moving from its perihelion to aphelion each year produces as change of a massive 91 W/m2. It has absolutely ZERO impact on global temperatures, thanks to the Earth’s negative feedbacks. Why does everyone keep ignoring them? “It has absolutely ZERO impact on global temperatures” The difference gets buried in seasonal variability since perihelion is within a week and a half of the N hemisphere winter solstice and the difference contributes to offset some of the asymmetry between the response of the 2 hemispheres. In about 10K years, it will be reversed and N hemisphere winters will get colder as its summers get warmer, while the reverse happens in the S hemisphere. Tony, The closest (most heat rays from the sun) is in Jan. So you could think the global temperature would be the warmest then. But it is in July. See here; I think the difference is mainly due to the difference in the amount of continent surface in the Northern Hemisphere vs the South. Where does convection as a heat transfer mechanism enter the model? hanelyp, Convection and heat transfer are internal to the atmosphere and the model only represents the results of what happens in the atmosphere, not how it happens. Convection itself is a zero sum influence on the radiant emissions from the surface since what goes up must come down (convection being energy transported by matter) and whatever effect it has is already accounted for by the surface temperature and its consequent emissions. “since what goes up must come down (convection being energy transported by matter) “ That’s just not true. Trenberth’s fluxes are in any case net (of up and down). Heat is transported up (mainly by LH); the warmer air at altitude then emits this heat as IR. It doesn’t go back down. Nick, “Heat is transported up” The heat you are talking about is the kinetic energy consequential to the translational motion of molecules which has nothing to do with the radiative balance or the sensitivity. Photons travel in any direction at the speed of light and I presume you understand that O2 and N2 neither absorb or emit photons in the relevant bands. ” the kinetic energy consequential to the translational motion of molecules which has nothing to do with the radiative balance or the sensitivity” It certainly does. I don’t think your argument gets to sensitivity at all.. Nick, .” The primary way that an energized GHG molecule reverts to the ground state upon collision with O2 or N2 is by emitting a photon and only a fraction of the collisions have enough energy to do this. You do understand that GHG absorption/emission is a quantum state change that is EM in nature and all of the energy associated with that state change must be absorbed or emitted at once. There is no mechanism which converts any appreciable amount of energy associated with such a state change into linear kinetic energy at the relevant energies. At best, only small amounts at a time can be converted and its equally probably to increase the velocity as it is to decrease it. This is the mechanism of collisional broadening which extends the spectrum mostly symmetrically around resonance and which either steals energy or gives up energy upon collision, resulting in the emission of a slightly different frequency photon. “A black body is nearly an exact model for the Moon.” No, The moon is definitely not a black body. Geometric albedo Moon 0.12 earth 0.434 Black-body temperature (K) Moon 270.4 earth 254.0 – “To conceptualize a gray body radiator, If T is the temperature of the black body, it’s also the temperature of the input to the gray body,To be consistent with the Wikipedia definition, the path of the energy not being absorbed is omitted.” This misses out on the energy defected back to the black body which is absorbed and remitted some of which goes back to the grey body. I feel this omission should at least be noted [I see reference to back radiation later in the article despite this definition] – ” while each degree of warmth requires the same incremental amount of stored energy, it requires an exponentially increasing incoming energy flux to keep from cooling.” It requires an exponentially increasing energy flux to increase the amount of stored energy, the energy flux must merely stay the same to keep from cooling. – “The equilibrium climate sensitivity factor (hereafter called the sensitivity) is defined by the IPCC as the long term incremental increase in T given a 1 W/m2 increase in input, where incremental input is called forcing”. but, I am lost. The terms must sound similar but mean different things. The equilibrium climate sensitivity (ECS) refers to the equilibrium change in global mean near-surface air temperature that would result from a sustained doubling of the atmospheric (equivalent) carbon dioxide concentration (ΔTx2). a forcing of 3.7 W/m2 – “The only place for the thermal energy to go, if not emitted, is back to the source ” Well it could go into a battery, but if not emitted it could never go back to the source. – “A gray body emitter is one where the power emitted is less than would be expected for a black body at the same temperature”. At the same temperature both a grey and a black body would emit the same amount of power. The grey body would not get to the same temperature as the black body from a constant heat source because it is grey, It has reflected, not absorbed, some of the energy. The amount of energy detected would be the same but the spectral composition would be quite different with the black body putting out far more infrared. “Both warm the surface by absorbing some fraction of surface emissions and after some delay, recycling about half of the energy back to the surface.” Unfortunately, when energy is emitted by the surface, the temperature must fall. Half the emitted energy returning will not return the temperature to its pre emission state. No increase in temperature. Night is an example. Or, temperatures falling after daytime maxima. Cheers, Anything based on the Earth’s average temperature is simply wrong. “Anything based on the Earth’s average temperature is simply wrong.” This is why a proper analysis must be done in the energy domain where superposition applies because average emissions do represent a meaningful average. Temperature is just a linear mapping of stored energy and a non linear mapping to emissions which is why average temperature is not necessarily meaningful. It’s best to keep everything in the energy domain and convert average emissions to an EQUIVALENT average temperature in the end. “Clouds also manifest a conditional cooling effect by increasing reflection unless the surface is covered in ice and snow when increasing clouds have only a warming influence.” Clouds reflect energy regardless of ice and snow cover on the ground,They always have an albedo cooling effect. Similarly clouds always have a warming effect on the ground whether the surface is ice and snow or sand or water. The warming effect is due to back radiation from absorbed infrared , not the surface conditions. The question is what effect does it have on emitted radiation. – “Near the equator, the emissivity increases with temperature in one hemisphere with an offsetting decrease in the other. The origin of this is uncertain” More land in the Norther Hemisphere means the albedo of the two hemispheres is different, The one with the higher albedo receives less energy to absorb and so emits less. angech, “Clouds reflect energy regardless of ice and snow cover on the ground,” Yes, but what matters is the difference in reflection between whether the cloud is present or not. When the surface is covered by ice and snow and after a fresh snowfall, cloud cover often decrease the reflectivity! Yes, the land/sea asymmetry between hemispheres is important, especially at the poles and mid latitudes which are almost mirror images of each other, but where this anomaly is, the topological differences between hemispheres are relatively small. co2isnotevil “Clouds reflect energy regardless of ice and snow cover on the ground,” “Yes, but what matters is the difference in reflection between whether the cloud is present or not. When the surface is covered by ice and snow and after a fresh snowfall, cloud cover often decreases the reflectivity!” Hm. The clouds have already reflected all the incoming energy that they can reflect.Hence the ice and snow are receiving less energy than they would have. Some of the radiation that makes it through and reflects will then reflect back to the ground and hence warm the surface again. Yes.Most will go out but I get your drift. The point though is that it can never make the ground warmer than it would be if there was no cloud present. Proof ad absurdio would be if the cloud was totally reflective, no light, ground very cold, a slight bit of light a bit warmer, no cloud warmest. “The point though is that it can never make the ground warmer than it would be if there was no cloud present.” Not necessarily. GHG’s work just like clouds with one exception. The water droplets in clouds are broad band absorbers and broadband Planck emitters while GHG’s are narrow band line absorbers and emitters. “If T is the temperature of the black body, it’s also the temperature of the input to the gray body, thus Equation 1 still applies per Wikipedia’s over-constrained definition of a gray body.” That’s just wrong. Radiant energy has no temperature, only energy relative to its wavelength (to have temperature, there must be a mass involved). The temperature of the absorbing surface of that energy is dependent on its emissivity, its thermal conductivity, and it’s mass. Tom, “Radiant energy has no temperature, …” Radiant energy is a remote measurement representative of the temperature of matter at a distance. There’s no need to quote the texbook understanding of blackbody radiation spectrum to me. Observing the peak wavelength may tell you the temperature of a blackbody, but not the temperature it will generate at the absorber. Consider this: an emitter at temperature T, with a surface area A, emits all its energy toward an absorber with surface area 4A. What is the temperature of the absorber? Never T. It’s not the WAVELENGTH of the photons that determines the temperature of the absorber, it’s the flux density of photons, or better the total energy those photons — of any wavelength — present to the absorber, that determines the heat input to that absorber. Distance from the emitter, the emissivity of the absorber, its thermal conductivity, its total mass… all these things affect the TEMPERATURE of that grey body. Consider it another way: take an object with Mass M and perfect thermal conductivity at temperature T, and allow it to radiate only toward another object of the same composition with mass 10M at initial temperature 0K. Will the absorber ever get hotter then T/10? I repeat: those photons do not have temperature. Only matter has temperature. Or would you care to tell me the temperature of microwave emissions from the sun? Certainly not 5778K. “Trenberth’s energy balance lumps the return of non radiant energy as part of the ‘back radiation’ term, which is technically incorrect since energy transported by matter is not radiation.” Trenberth shows the non radiant energy going out to space as radiation [ not “back radiating”] Trenberth is simply getting the non radiant energy higher in the atmosphere where it eventually becomes radiative energy out to space [of course it does some back radiating itself as radiant energy but this part is included in his general back radiation schemata] . He is technically correct. angech, “He is technically correct.” Latent heat cools the surface water as it evaporates and warms the droplet of water it condenses upon which returns to the surface as rain at a temperature warmer than it would be without the latent heat. The difference in what it would have been had all the latent heat been returned drives weather and is returned as gravitational potential energy (hydro electric power). The energy returned by the liquid water rain is not radiative, but nearly all the energy of that latent heat is returned to the surface as weather, including rain and the potential energy of liquid water lifted against gravity. “Latent heat cools the surface water as it evaporates and warms the droplet of water it condenses upon which returns to the surface as rain at a temperature warmer than it would be without the latent heat.” Where do you find such a description? Latent heat is released in order for water vapor to condense into liquid water again, and that heat is radiated away. Have you not noticed that water vapor condenses on cold surfaces, thus warming them? At the point of condensation the latent heat is lost from the now-liquid water, not when it strikes the earth again as rain. Tom, “… that heat is radiated away. ” Where do you get this? How was water vapor radiate away latent heat? When vapor condenses, that heat is returns to the water is condenses upon and warms it. Little net energy is actually ‘radiated’ away from the condensing water since that atmospheric water is also absorbing new energy as it radiates stored energy consequential to its temperature. In LTE, absorption == emission and LTE sensitivity is all we care about. This is quite pointless. Pick up a physics book, and figure out how the surface of water is cooled by evaporation: it is because in order for a molecule of water to leave the surface and become water vapor, it must have sufficient energy to break its bonds to the surface. This is what we call the heat of evaporation, or latent heat: water vapor contains more energy than liquid water at the same temperature. When water vapor condenses back into water, the energy that allowed it to become vapor is radiated away. It does not stay because… then the molecule would still be vapor. Your avatar, co2isnotevil, I completely agree with. Where you got your information about thermal energy in the water cycle, or about the “temperature” of radiative energy, that I cannot guess. Not out of a Physics book. But I have seen similar errors among those who do not believe that radiative energy transactions in the atmosphere have any effect on the surface temperature at all, even when presented with evidence of that radiative flux striking the surface from the atmosphere above. And in that crowd, understanding of thermodynamics is sorely lacking. Tom, You didn’t answer my question. You assert that latent heat is somehow released into the atmosphere BEFORE the phase change. No physics text book will make this claim. I suggest that you perform this experiment: Now, why is the phase change from vapor to liquid any different, relative to where the heat ends up? co2isnotevil wrote “You assert that latent heat is somehow released into the atmosphere BEFORE the phase change.” That is incorrect. The phase change forces the release of the latent heat, which itself was captured at the point of escape from the liquid state. But of course, that’s not the only energy change a molecule of water vapor undergoes on its way from the surface liquid state it left behind to the liquid state it returned to at sufficient altitude: there are myriad collisions along the way, each capable of either raising or lowering the energy of that molecule, along with radiative energy transactions where the molecule can either gain or lose energy. But we are talking about the AVERAGE here, for that is what TEMPERATURE is: an average energy measurement of some number of molecules, none of which must be at that exact temperature state. Your experiment shows nothing outrageous or unexpected: the latent heat of fusion (freezing) is 334 joules, or 79.7 calories, per gram of water, while it takes only 1 calorie to raise the temperature of one gram of water by 1 degree. Therefore, as the water freezes to ice, those ice molecules are shedding latent heat even without changing temperature, and the remaining water molecules — and the temperature probe as well as the container — were receiving that heat. Thermal conductivity slows the probe’s reaction to changes in environment, and your experiment no longer shows something unexpected. Only your interpretation is unexpected, frankly. The heat of fusion is much smaller than the heat of vaporization, which is 2,230 joules, or 533 calories, per gram. Latent heat is not magic, or even complicated. Water becoming water vapor chills the surface, the vapor carries the heat aloft, where it is released by the action of condensation. Any Physics text — or even “Science” books from elementary school curricula — will bear out this definition. “Water becoming water vapor chills the surface, the vapor carries the heat aloft, where it is released by the action of condensation.” I would say this, Evaporation cools by taking energy from the shared electron cloud of the liquid water that’s evaporating, the vapor carries the latent heat aloft, where the action of condensation adds it to the energy of the shared electron cloud of the water droplet it condenses upon, warming it. The water droplet collides with other similarly warmed water droplets (no net transfer here) and with colder gas molecules (small transfer here). Of course, any energy transferred to a gas molecule is unavailable for contributing to the radiative balance unless it’s returned back to some water capable of radiating it away. The only part you got right: “the vapor carries the latent heat aloft” “shared electron cloud” — you write as if you believe liquid water is one gigantic molecule. You neglect the physics of collisions, and the pressure gradients of the atmosphere, and pretend latent heat all returns to earth in rain. Liquid water emits radiation, “co2isnotevil”. Surely you know this. That radiative emission spreads in all directions, with a large part of it escaping from space. I’m done. I’ve already said this discussion is pointless, and I’ve wasted more than enough time. My physics books don’t read like you do; I’ll stick with them. “you write as if you believe liquid water is one gigantic molecule.” You’re being silly. But you do understand that the difference between a liquid and a gas is that the electrons clouds of individual molecules strongly interact, while in a gas, the only such interactions are elastic collisions where they never get within several molecular diameters of each other. This is also true for a solid, except that the molecules themselves are not free to move. Think about how close together the molecules in water are. So much so that when it freezes into a solid, it expands. Water vapour will condense under conditions of atmosphere cooler than it’s gaseous state. It most certainly not warm as a function of condensing. It will give up latent heat to sensible heat in the surrounding medium. This is generally at altitude where much of this heat will radiate away to space. John, “It will give up latent heat to sensible heat in the surrounding medium.” The ‘medium’ is the water droplet that the vapor condenses on. When water evaporates, it cools the water it evaporated from. When water freezes, the ice warms, just as when water condenses, the water it condenses upon warms. When ice melts, the surrounding ice cools. This is how salting a ski run works in the spring to solidify the snow. The latent heat is not released until the phase change occurs, which is why it’s called ‘latent’. What physical mechanism do you propose allows the latent heat to instantly heat the air around it when water vapor condenses? “What physical mechanism do you propose allows the latent heat to instantly heat the air around it when water vapor condenses?” The latent heat goes into the environment, bubble and air. On this scale, diffusion is fast. There is no unique destination for it. Your notion that the drops somehow retain the heat and return it to the surface just won’t work. The rain is not hot. Drops quickly equilibrate to the temperature of the surrounding air. On a small scale, radiation is insignificant for heat transfer compared to conduction. Condensation often occurs in the context of updraft. Air is cooling adiabatically (pressure drop), and the LH just goes into slowing the cooling. “Your notion that the drops somehow retain the heat and return it to the surface just won’t work.” Did you watch or do the experiment? You’re claiming diffusion, but that requires collisions between water droplets and since you do not believe the heat is retained by the water, how can diffusion work? The latent heat per H2O molecule is about 1.5E-19 joules. The energy of a 10u photons (middle of LWIR range of emissions) is about 2E-20 joules. Are you trying to say that upon condensation, at many LWIR photons are instantly released? Alternatively, the kinetic energy of an N2 or O2 molecule in motion at 343 m/sec is about 2.7E-20 joules, so are you trying to say that the velocity of the closest air molecule more than doubles? What laws of physics do you suggest explains this? How does this energy leave the condensed water so quickly? And BTW, the latent heat of evaporation doesn’t even show up until the vapor condenses on a water droplet, so whatever its disposition, it starts in the condensing water. casting and forecasting.” Spot on. I think the stuff about the simple grey body model contains some good ideas on energy balance but needs to be put together in a better way without the blanket statements. “The Earth can be accurately modeled as a black body surface with a gray body atmosphere, whose combination is a gray body emitter whose temperature is that of the surface and whose emissions are that of the planet. ” Accurate modelling is not possible with such a complex structure though well described. “Accurate modelling is not possible with such a complex structure though well described.” Unless it matches the data which Figure 3 tests and undeniably confirms the prediction of this model. As I also pointed out, I’ve been able to model the temperature dependence of the emissivity and the model matches the data even better. How else can you explain Figure 3? Models are only approximations anyway and the point is that this approximation, as simple as it is, has remarkable predictive power, including predicting what the sensitivity must be. The GCMs do not actually forecast. They equivocate, which is not the same concept. Hah! It’s forecasting without all that silly accountability! Right! When we remember that radiant energy is only a result of heat/temperature/ kinetic vibration rates in EM Fields, not a cause; we can start to avoid the tail-chasing waste of time that is modern climate ‘science’. When? Soon please. Can you actually use the Stefan-Boltzmann Law to something like Earth’s atmosphere which is never constant, its composition continually changes not least because of changes in water vapour and the composition of gases with respect to altitude?, Richard Verney “Can you actually use the Stefan-Boltzmann Law to something like Earth’s atmosphere” It’s a good question, not so much about the constancy issues, but just applying to a gas. S-B applies to emission from surface of opaque solid or liquid. For gases, it is more complicated. Each volume emits an amount of radiation proportional to its mass and emissivity properties of the gas, which are very frequency-banded. There is also absorption. But there is a T^4 dependence on temperature as well. I find a useful picture is this. For absorption at a particular frequency a gas can be thought of as a whole collection of little black balls. The density and absorption cross-section (absorptivity) determine how much is absorbed, and leads in effect to Beer’s Law. For emission, the same; the balls are now emitting according to the real Beer’s Law. Looking down where the cross-sections are high, you can’t see the Earth’s surface. You see in effect a black body made of balls. But they aren’t all at the same temperature. The optical depth measures how far you can see into them. If it’s low, the temperature probably is much the same. Then all the variations you speak of don’t matter so much. Thanks. That was partly what I had in mind when raising the question, but you have probably better expressed it than I would have. I am going to reflect upon insight of your second and third paragraphs. Richard, Gases are simple. O2 and N2 are transparent to visible light and LWIR radiation, so relative to the radiative balance, they are completely invisible. only important concept is the steradian component of emissions which is a property of EM radiation, not black or gray bodies. “For emission, the same; the balls are now emitting according to the real Beer’s Law.” Oops, I meant the real Stefan-Boltzmann law. co2isnotevil, directional aspects of water droplet reflection interest me, in that the shape of very small droplets is dominated by surface tension forces, which means they are spherical . . which means (to this ignorant soul) that those droplets ought to be especially reflective straight back in the direction in the light arrives from, rather than simply skittering the light, owing to their physical shape. This hypothetical behavior might have ramifications, particularly in the realms of cloud/mist albedo, I feel, but your discussion here makes me wonder if it might have ramifications in terms of “focused” directional “down-welling” radiation as well, as in the warmed surface being effectively mirrored by moisture in the atmosphere above it . . Pleas make me sane, if I’m drifting into crazyville here ; ) John, Wouldn’t gravity drive water droplets into tear drop shapes, rather than spheres? Certainly rain is heavy enough that surface tension does not keep the drops spherical, especially in the presence of wind. Water drops both absorb and reflect photons of light and LWIR, but other droplets are moving around so it doesn’t bounce back to the original source, but off some other drop that passed by and so on and so forth. Basic scattering. co2isnotevil. “Wouldn’t gravity drive water droplets into tear drop shapes, rather than spheres?” When they are large (and falling) sure, but most are not so large, of course. I did some investigating, and it seems very small droplets are dominated by surface tension forces and are generally quite spherical. “Water drops both absorb and reflect photons of light and LWIR. . ” That’s key to the questions I’m pondering now, the LWIR. Some years ago I “discovered” that highway line paint is reflective because tiny glass beads are mixed into it, and the beads tend to reflect light right back at the source (headlights in this case). I’ve never seen any discussion about the potential for spherical water droplets to preferentially reflect directly back at the source, rather than full scattering. It may be nothing, but I suspect there may be a small directionality effect that is being overlooked . . Thanks for the kind response. As rel humidity goes to nearly 100% outgoing radiation drops by about 2/3rds, one good possibility is fog that is effective in LWIR, but outside the 8-14u window because it and optical are still clear. This or both co2 and WV both start to radiate and start exchanging photons back and forth. But it drops based on dew point temperature. Thanks, micro, that’s some fascinating detail to consider . . Richard, Unless the planet and atmosphere is not comprised of matter, the SB law will apply in the aggregate. People get confused by being ‘inside’ the atmosphere, rather than observing it from a far. We are really talking about 2 different things here though. The SB law converts between energy and equivalent temperature. The steradian component of where radiation is going is common to all omnidirectional emittersm broad band (Planck) or narrow band emitters (line emissions). The SB law is applied because climate science is stuck in the temperature domain and the metric of sensitivity used is temperature as a function of radiation. What’s conserved is energy, not temperature and this disconnect interferes with understanding the system. The Earth/atmosphere system is a grey body for the period of time it takes for the first cycle of atmospheric convective overturning to take place. During that first cycle less energy is being emitted than is being received because a portion of the surface energy is being conducted to the atmosphere and convected upward thereby converting kinetic energy (heat) to potential energy (not heat). Once the first convective overturning cycle completes then potential energy is being converted to kinetic energy in descent at the same rate as kinetic energy is being converted to potential energy in ascent and the system stabilises with the atmosphere entering hydrostatic equilibrium. Once at hydrostatic equilibrium the system then becomes a blackbody which satisfies the S-B equation provided it is observed from outside the atmosphere. Meanwhile the surface temperature beneath the convecting atmosphere must be above the temperature predicted by S-B because extra kinetic energy is needed at the surface to support continuing convective overturning. That scenario appears to satisfy all the basic points made in George White’s head post. “What law(s) of physics can explain how to override the requirements of the Stefan-Boltzmann Law as it applies to the sensitivity of matter absorbing and emitting energy, while also explaining why the data shows a nearly exact conformance to this law?” The conditions that must apply for the S-B equation to apply are specific: “Quantitatively, emissivity is the ratio of the thermal radiation from a surface to the radiation from an ideal black surface at the same temperature as given by the Stefan–Boltzmann law. The ratio varies from 0 to 1” From here: and: T:” In summary, when a planetary surface is subjected to insolation the surface temperature will rise to a point where energy out will match energy absorbed. That is a solely radiative relationship where no other energy transmission modes are involved. For an ideal black surface the ratio of energy out to energy in is 1 (as much goes out as comes in) which is often referred to as ‘unity’. The temperature of the body must rise until 1 obtains. For a non-ideal black surface there is some leeway to account for conduction into and out of the surface such that where there is emission of less than unity the body is more properly described as a greybody. For example an emissivity of 0.9 but for rocky planets such processes are minimal and unity is quickly gained for little change in surface temperature which is why the S-B equation gives a good approximation of the surface temperature to be expected. Where all incoming radiation is reflected straight out again without absorption then that is known as a whitebody During the very first convective overturning cycle a planet with an atmosphere is not an ideal blackbody because the process of conduction and convection draws energy upward and away from the surface. As above, the surface temperature drops from 255K to 222K. The rate of emission during the first convective cycle is less than unity so at that point the planet is a greybody. The planet substantially ceases to meet the blackbody approximation implicit in the requirements of the S-B equation. Due to the time taken by convective overturning in transferring energy from the illuminated side to the dark side (the greybody period) the lowered emissivity during the first convective cycle causes an accumulation within the atmosphere of a far larger amount of conducted and convected energy than that small amount of surface conduction involved with a rocky surface in the absence of a convecting atmosphere and so for a planet with an atmosphere the S-B equation becomes far less reliable as an indicator of surface temperature. In fact, the more massive the atmosphere the less reliable the S-B equation becomes. For the thermal effect of a more massive atmosphere see here: “We find that higher atmospheric mass tends to increase the near-surface temperature mostly due to an increase in the heat capacity of the atmosphere, which decreases the net radiative cooling effect in the lower layers of the atmosphere. Additionally, the vertical advection of heat by eddies decreases with increasing atmospheric mass, resulting in further near-surface warming.” At the end of the first convective cycle there is no longer any energy being drawn from the incoming radiation because, instead, the energy required for the next convective cycle is coming via advection from the unilluminated side. At that point the planet reverts to being a blackbody once more and unity is regained with energy out equalling energy in. But, the dark side is 33K less cold than it otherwise would have been and the illuminated side is 33K warmer than it should be at unity. The subsequent complex interaction of radiative and non- radiative energy flows within the atmosphere does not need to be considered at this stage. The S-B equation being purely radiative has failed to account for surface kinetic energy engaged in non-radiative energy exchanges between the surface and the top of the atmosphere. The S-B equation does not deal with that scenario so it would appear that AGW theory is applying that equation incorrectly. It is the incorrect application of the S-B equation that has led AGW proponents to propose a surface warming effect from DWIR within the atmosphere so as to compensate for the missing non-radiative surface warming effect of descending air that is omitted from their energy budget. That is the only way they can appear to balance the budget without taking into account the separate non-radiative energy loop that is involved in conduction and convection. I really don’t think that “The Earth can be accurately modeled as a black body surface with a gray body atmosphere” This is utterly inaccurate because of the massive energy flux between those, that make them behave as a single thing : the tiny pellicle of the whole Earth, which also include ocean water a few meter deep, and other thing such like forests and human building. This pellicle may seem huge and apt to be separated in components from our human very small scale, but from a Stefan-Boltzmann Law perspective this shouldn’t be done. AND remember that photosynthesis has a magnitude ( ~5% of incoming energy) greater than that of the so called “forcing” or other variations. It just cannot be ignored… but it is ! paqyfelyc, “The Earth can be accurately modeled as a black body surface with a gray body atmosphere” Then what is your explanation for Figure 3? Keep in mind that the behavior in Figure 3 was predicted by the model. This is just an application of the scientific method where predictions are made and then tested. Photosynthesis is a process of conversion of electromagnetic energy to chemical potential energy. In total and over time, all energy fixed by photosynthesis is given up and goes back to space. Photosynthesis may retain some energy on the surface for a time but that energy is not thermal and has virtually no effect on temperature. John, “This is utterly inaccurate because of the massive energy flux between those” The net flux passing from the surface to the atmosphere is about 385 W/m^2 corresponding the the average temperature of about 287K. Latent heat, thermals and any non photon transport of energy is a zero sum influence on the surface. The only effect any of this has is on the surface temperature and the surface temperature adjusted by all these factors is the temperature of the emitting body. Trenberth messed this up big time which has confused skeptics and warmists alike by the conflating energy transported by photons with the energy transported by matter when the energy transported by matter is a zero sum flux at the surface. What he did was lump the return of energy transported by matter (weather, rain, wind, etc) as ‘back radiation’ when non of these are actually radiative. As best I can tell, he did this because it made the GHG effect look much larger than it really is. ” … warm the surface by absorbing some fraction of surface emissions and after some delay, recycling about half of the energy back to the surface.” Ahhh, there’s the magic! The surface warms itself. Are the laws of physics suspended during the “delay”? What’s causing the delay? What is the duration of the delay since those emissions are travelling at the speed of light? What’s the temperature delta of the surface between the emission and when it’s own energy is recycled back? If the delay and the delta are each insignificant, then the entire effect is insignificant. Thomas, “What’s causing the delay?” The speed of light. For photons that pass directly from the surface to space, this time is very short. For photons absorbed and re-emitted by GHG’s (or clouds), the path the energy takes is not a straight line and takes longer, moreover; the energy is temporally stored as either the energy of a state transition, or energy contributing to the temperature of liquid or solid water in clouds. co2isnotevil replied below with: “For photons absorbed and re-emitted by GHG’s (or clouds), the path the energy takes is not a straight line and takes longer” I’m asking how much longer, twice as long? Show me where and how long the duration is of any significant delay. Is it the same order of magnitude as the amount of time a room full of mirrors stays lit after turning out the lights? IOW, insignificant? Now, instead of considering these emissions as a set of photons, consider them as a relentless wave and you’ll see there is no significant delay. “I’m asking how much longer” At 1 ns per foot, it takes on the order of a milisecond for a photon to pass directly from the surface to space. Photons delayed by GHG absorption/re-emission will take on the order of seconds to as much as a minute. Photons of energy delayed by being absorbed by the water in clouds before being re-emitted is delayed on the order of minutes. It’s this delayed energy distributed over time and returned to the surface that combines with incident solar energy and contributes to GHG/cloud warming which of course is limited by the absorption of prior emissions. The delay doesn’t need to be long, just non zero in order for ‘old energy’ from prior surface emissions to be combined with ‘new energy’ from the Sun. But the sun does not shine at night (over half the planet). That is one of the facts that the K&T energy budget cartoon (whatever it should be called)fails to note. Richard, “But the sun does not shine at night (over half the planet).” This is one of the factors of 2 in the factor of 4 between incident solar energy and average incident energy. The other factor of 2 comes from distributing solar energy arriving in a plane across a curved surface whose surface area is twice as large. Of course, we can also consider the factor of 4 to be the ratio between the surface area of a sphere and the area that solar energy arrives from the Sun, half of this sphere is in darkness at all times. The Earth spins fast enough and the atmosphere smooths out night and day temps, so this is a reasonable thing to do relative to establishing an average. Planets tidal locked to its energy source (for example Mercury), would only divide the incident power by 2. [ co2isnotevil – “Photons of energy delayed by being absorbed by the water in clouds before being re-emitted is delayed on the order of minutes” ] Order of minutes? So, the laws of physics are being suspended then. A photon can bounce from the surface up to 40 Kilometers and back 3200 times per second if it were reflected without delay. And your claim is that it is delayed an order of minutes? I find that extremely doubtful. The transfer of heat is relentless. Do all of those photons take a vacation in the clouds? But my question also asked what the surface temperature delta is for the duration of your claimed delay. I’ll give you two minutes, what is the temperature delta in two minutes? That vacationing photon was emitted from the surface at some temperature, what is the surface temperature when it returns to the surface? Has it lost more energy than the surface during its vacation in the clouds? [ co2isnotevil – “The delay doesn’t need to be long, just non zero in order for ‘old energy’ from prior surface emissions to be combined with ‘new energy’ from the Sun.” ] But the duration of the delay is precisely the point, that’s how long this “old energy” is available to combine with “new energy”. It’s insignificant. You’re imagining that this “old energy” is cumulative, it is not. Does the planet Mercury make the sun hotter since it’s emitting photons back towards the sun’s surface? Thomas, “Order of minutes? So, the laws of physics are being suspended then.” Why do you think physics needs to be suspended? Is physics suspended when energy is stored in a capacitor? How does storing energy as a non ground state GHG molecule, or as the temperature of liquid/solid water in a cloud any different? What law of physics do you think is being suspended? Each time a GHG absorbs a photon, temporarily storing energy as a time varying EM field, and emits another photon as it returns to the ground state, the photon goes in a random direction. The path the energy takes can be many 1000’s of times longer than a direct path from the surface to space. “The path the energy takes can be many 1000’s of times longer than a direct path from the surface to space.” How about 384,000? Is that “many of 1000’s”? That’s two minutes of bouncing 40K @ 3200 trips per second. Think of it rather as a wave, a relentless wave. Heat continually seeks escape, it does not delay. “Think of it rather as a wave, a relentless wave. Heat continually seeks escape, it does not delay.” But we are taking about photons here and to escape means either leaving the top of the atmosphere or leaving the bottom and returning to the surface and being massless, the photon has no idea which was is up or down. And 100’s of thousands of ‘bounces’ between GHG molecules is not unreasonable. But the absolute time is meaningless and in fact the return to the surface of absorbed emissions from one point in time are spread out over a wide region of time in the future. All that matters is that the round trip time from the surface and back to the surface is > 0. Very interesting analysis, but this is far too complicated for climate scientists/MSM and will be ignored. “but this is far too complicated …” Actually, its not complicated enough. Because another force exceeds it. Water vapor over powers all of the co2 forcing. Here And measured effective sensitivity at the surface. here “Because another force exceeds it.” Water vapor is not a force, but operates in the same way as CO2 absorption, except as you point out, H2O absorption is a more powerful effect. When I talk about the GHG effect, I make no distinction between CO2, H2O or any other LWIR active molecule. well I was referring to the force of it’s radiation as it was being emitted, but fair enough. Also since they over lap there could be some interplay between them that is not expected.. This explains how it works, it’s just not what we’re being told. Scale on left of for all 3 traces, but each are different units. W’s/m^2, Percentage and Degrees F micro: I’ve tried to explain why you are wrong with this, and I know I won’t succeed, but in my mission to deny ignorance, then again…. You say on your blog: from Co2 has to be lost to space, before the change to the slower cooling rate.” in a sufficiently moist boundary layer, then yes the WV content does modulate (reduce) surface cooling. however, at some point the WV content falls aloft of the moist layer and it continues at that point. If fog forms then the fog top is the point at which emission takes place an it cools there. That is how fog continues to cool through it’s depth, via the diffusion down of the cooling. Also there are BL WV variations across the planet, and CO2 acts greatest in the driest regions. .” Both GHG’s “control” cooling. It is not one OR the other. Both. You take two examples at each extreme and come up with CO2 as the supposed cooling regulator. It’s not. Meteorology explains them. Not GHE theory. Yes the tropics have WV modulation in cooling, limiting it. Deserts have a lack of WV and that leads to greater cooling. That has nothing to do with CO2 which still has an effect on both over and above that that WV does. Particularly so in deserts. Without it deserts would get colder at night. Also at play in deserts is dry sandy surface and light winds, which feedback as the air cools (denser) to still the air more and aid the formation of a shallow inversion. That is why deserts warm up so quickly in the morning – the cooling only occurred in a shallow surface based layer of perhaps 100 ft ( depends on wind driven mixing ). As a proportion of the cooling of the atmosphere it is tiny. This is why sat trop temp data needs to know where surface inversion lie as it is such a tiny but sig part of the estimation of surface temp regionally. “This is the evidence that supports my theory that water vapor regulated nightly cooling, and co2 doesn’t do anything. Increasing relative humidity is the temperature regulation of nightly cooling, not co2.” No. Both. You just cannot see the CO2 doing it’s thing. Unless you measure it spectroscopically – as this experiment did…. micro: Just basic meteorology You have only slain a Sky-dragon. What you don’t know is that is how they make regulators work. And I have never denied that co2 has a spectrum. What I have never found is any effect on minimum temps. And I found proof why. Tone, you need to up your game, not me. Go show my chart to some of your Electrical Engineering buddies, they should understand it. Well or not, I’m very disappointed by people these days. Effectively it is only one, WV. Now let me try one more time. Yes, the dry rate is limited by co2. But the length of time in the high cooling rate mode isn’t, it is temperature controlled. So say dew points are 40, and air temp is 70F, and because of co2 it’s actually 73F. Dew point is still 40. And the point it gets to 70% rel humidity is the same before or after the extra 3F. So lets say this point is 50F, without the extra co2 it cools 6 hours to 50F, then starts reducing the cooling rate. In the case of the 73 degrees with the extra heat of co2, it cools 6 hours and 10 minutes, and then at the same 50F the cooling rate slows down. Now true, the slow rate is maybe a bit slower, but it too is likely not a linear add, and it has 10 minutes less to cool, but Willis and Anthony’s paper show this effect from space, it is why it follows Willis’s nice curve. And you get that 10 minutes back as the days get longer. ….” micro: No it doesn’t. Window opening/closing ! Visible fog it not needed. I use that as the extreme case. As I said what you say is true … except it does not negate the effect that CO2 has. CO2 is simply an addition to what WV does. WV does not take CO2 magically out of the equation. The “fog” is simply thicker in the wavelengths they both absorb at but to boot CO2 has an absorption line at around 15 micron, the wavelength of Earth’s ave temp, and at ~4 micron. This would not not be in your WV window in any case and where CO2 is most effective, especially in the higher, drier atmos. “What you don’t know is that is how they make regulators work. And I have never denied that co2 has a spectrum. What I have never found is any effect on minimum temps. And I found proof why. . ” And…. .” Tone, their attribution is wrong, min temps have changed because dew points changed. Dew points are following just where the wind blew the water vapor as the oceans shuffle warm water around. But you still do not understand nonlinear effect on cooling. “Tone, their attribution is wrong, min temps have changed because dew points changed. Dew points are following just where the wind blew the water vapor as the oceans shuffle warm water around. But you still do not understand nonlinear effect on cooling.” micro: Dp’s may have risen …. that is what an increasing non-condensing GHG will do. And you cannot use the wind direction argument as it was a global study not a regional one. As would the pdo changing phase, and the planet is not equally measured, as well as there are long term thermal storage in the oceans. That proves nothing. And yet what I have does prove WV is regulating cooling. Nicholas, “The surface of the earth is warm for the same reason a heated house is warm in the winter:” There is a difference where the insulation in a house does not store or radiate any appreciable amount of radiation while CO2 and clouds do. Sure they do (your inside wall is radiating like mad at room temps), it is just more opaque than the co2 in the air. I’m sure you’ve seen pictures of people through walls…. “appreciable” was the key word here. Fiberglass has no absorption lines, nor does it have much heat capacity. Insulation occurs as a result of the air trapped within where only radiation can traverse the gap and there is not enough photons for this to happen. Consider how a vacuum bottle works. I’ll accept “appreciable” 🙂 Fiberglass, should have a bb spectrum though. micro6500, “Fiberglass, should have a bb spectrum though.” Yes, as all matter does. The point is that this bb spectrum is not keeping the inside of the house warmer than it would be based on the heater alone. Slowing down the release of heat is what keeps the inside warm and if you start with a cold room and insulate it, the room will not get warmer. The bb spectrum from clouds and line emissions from GHG’s directed back to the surface does make the surface warmer than it would be based on incoming solar energy alone. Keep in mind that the GHG effect and clouds is not only slowing down cooling, it’s enhancing warming to be more than it would be based on solar energy alone. “Keep in mind that the GHG effect and clouds is not only slowing down cooling, it’s enhancing warming to be more than it would be based on solar energy alone.” not really, outgoing regulation of radiation to dew point eliminates almost all of this. Sorry I don’t have time to read thru this right now . But I do not understand why in all these years people still don’t seem to know a general expression for the equilibrium temperature for arbitrary source and sink power spectra and an arbitrary object absorptivity=emissivityspectrum . ε is just a scalar for a flat , gray , spectrum . I go thru the experimentally testable classically based calculations at . It’s essentially the temperature for a gray body in the same situation ( which is the same as for black body and simply dependent on the total energy impinging on the object ) , times the 4th root of the ratio of the dot products of the relevant spectra . It is the temperature such that dot[ solar ; objSpectrum ] = dot[ Planck[ T ] ; objSpectrum ] Given an actual measured spectrum of the Earth ( or any planet ) as seen from space , an actual equilibrium temperature can be calculated without just parroting 255K or whatever which is about 26 degrees below the 281 gray body temperature at our current perihelion point in our orbit . By the Divergence Theorem , no spectral filtering phenomenon can cause the interior of our ball , ie : our surface , to be hotter than that calculated for the radiative balance for our spectrum as seen from space . Nicholas: Just as a matter of curiosity …. Would you have, in a previous life, been NikFromNYC ? Oh, and I rebutted this nonsense in a recent thread. Than I told you you we a Sky-dragon slayer in a reply to your reply. BTW: Have seen this exact post of yours up on a well known home of Sky-dragon slaying science. I’m only a Texas housewife, but when we Texas housewives see somebody doing Stefan-Boltzmann calculations start with average radiation figures rather than taking the time variation of the incoming radiation and integrating, we suspect someone has chosen an inappropriate method. Y’all. Exactly (I’m learning), the average of 60 and 70 F is not 65F, which is done to every mean temp used (BEST, GISS, CRU, they all do it) “the average of 60 and 70 F is not 65F” Yes, but if you turn 60F and 70F into emissions, average the result and convert back to a temperature, you get a more proper average temperature which will be somewhat more than 65F. If you just average temperatures, a large change in a cold temperature is weighted more than a smaller change in a warmer temperature, even as the smaller change in the warmer temperature takes more incoming flux to maintain. I have been adding this into my surface data code. “the average of 60 and 70 F is not 65F” If you want to average in terms of 4th powers, the average is 65.167F. Not a huge difference. “If you want to average in terms of 4th powers, the average is 65.167F. Not a huge difference.” Yes, but there’s a huge difference when averaging across the limits of temperature found on the planet and the assumption of ‘approximate’ linearity is baked in to the IPCC sensitivity and virtuall all of ‘consensus’ climate science. BTW, since sensitivity goes as 1/T^3, the difference in sensitivity is huge as well. At 260K and an emissivity of 0.62, the sensitivity is 0.494 C per W/m^2, while at 330K, the sensitivity is only 0.198 C per W/m^2, for more than a factor of 2 difference between the sensitivity of the coldest and warmest parts of the planet. Because this defies the narrative, many warmists deny the physics that tells us so. This leads to another issue with ‘consensus’ support for a high sensitivity which is often ‘measured’ in cold climates and extrapolated to the rest of the planet. You may even be able to get a sensitivity approaching 0.8C somewhere along the 0C isotherm, where the GHG effect from water vapor kicks in. Anyone who thinks that the sensitivity of a thin slice of the planet at the isotherm of 0C can be extrapolated across the entire planet has definitely not thought through the issue. Typo it is 65.067 It makes a pretty decent difference when you are averaging a lot of stations. ” a pretty decent difference when you are averaging a lot of stations” No, if you have 1000 at 60 and 1000 at 70, the average is still 65.067. And it isn’t amplified if they are scattered. You can easily work out a general formula. If m1 is the mean in absolute, and m4 is the 4th power mean, then m4 is very close to m1 + 1.5*&sigma^2/m1. So if the mean is 65F and the average spread is 5F, the error is still 0.067. It’s much less than people think. I’d have to go look, but the difference with about 80 million stations was about a degree. No, because I calculated it both ways, and it was more than a small fraction. And the mean value that’s fed into all of the surface series all have this problem, and it’s more than 10 degrees apart. And they are not measured, they are calculated from min and max(at least this is how the gsod dataset is made). That is not the only problem with this post. Rhoda, So, you don’t accept that the equivalent BB temperature of the Earth is 255K corresponding to the average 240 W/m^2 of emissions? This is the point of doing the analysis in the energy domain. Averages of energy and emissions are relevant and have physical significance. The SB law converts the result to an EQUIVALENT average temperature. The fact that the prediction of this model is nearly exact (Figure 3) is what tells us that the sensitivity is equivalent to the sensitivity of a gray body emitter. No, because of the moon. Which has an actual measured average temp different to that. And because the moon’s temp variation is affected by heat retention of the surface and rate of rotation. Because the astronomical albedo (it seems to me) is not exactly what you need to determine total insolation because of glancing effects at the terminator. But most of all because of T to the fourth. You can’t take average temp as an input to T^4. The average of T + x and T – x is T. The average of (T +x)^4 and (T -x)^4 is not T^4. It isn’t even near enough for govt work when you are talking fractions of a watt/m2. Y’all. Rhoda, “Moon … Which has an actual measured average temp different to that” This is not the case. The Moon rotates slow enough that rather than dividing the input power by 4 to accommodate the steradian requirements, you divide by a little more than 2 to get the average temperature of the lit side of the Moon. When you do this, you get the right answer. The temperature of the dark side (thermal emissions) exponentially decays towards zero until the Sun rises again. Replying to your latest. Of course you can make the moon work by choosing the right divisor. But this seems glib. It will not do to just use a lot of approximations and fudges. One would almost think you were designing a GCM. You can’t average the heat first. You can’t ignore glancing insolation on a shiny planet. Most of all you are deceiving yourself if you use a closed-system radiation model and don’t think about all the H2O and what it does. Or at least that’s how it seems from a place in north Texas between a pile of ironing and a messy kitchen, y’all. Rhoda, Modelling is all about approximating behavior with equations. You start with the first order effects and if it’s not close enough, go on to model higher order effects and stop when its good enough. There will never be perfect model, except as it pertains to an ideal system, which of course never exist in nature. It seems that all of the objections I have heard about this are regarding higher order deviations that in the real world hardly matter as evidenced by Figure 3. What I have modelled is the fundamental first order effect of matter absorbing and emitting energy based on science that has been settled for a century. When I apply the test (green line as a predictor of the red dots in Figure 3) it was so close, I didn’t need to go further, nonetheless, I did and was able to identify and quantify the largest deviation from the first order model (water vapor kicking in at about 0C). It’s also important to understand that the reason I generated the plot in Figure 3 was to test the hypothesis that from a macroscopic point of view, the planet behaves like a gray body emitter. Sure enough, it does. In fact, the model matches quite well for monthly averages covering slices of latitude and is nearly as good when comparing at 280 km square grids across the entire surface. Long term averages match so well, even at the grided level, it’s hard to deny the applicability of this model that many seem to think is too simple. It’s not surprising that many think this way since consensus climate science has added layer upon layer of obfuscation and complexity to achieve the wiggle room necessary to claim a high sensitivity. I guarantee that if you run any GCM and generate the data needed to produce the scatter diagram comparing the surface temperature to the planet emissions, the result will look nothing like the measured data seen in Figure 3, because if it did, the models would be predicting a far lower sensitivity than they do. The problem as I see it is that consensus climate science has bungled the models and data too such a large extent, that nobody trusts models or data anymore. Models and data can be trusted, you just need to be transparent about what goes in to the model and how any data was adjusted. The gray body model has only 1 free variable, which is the effective emissivity and not really free, but calculated as the ratio between average planet emissions and average surface emissions. Best post I’ve eVer seen on here and she didn’t set down her hair poofer to do it! The IPCC definition of ECS is not in terms of 1w/m2 net forcing. It is the eventual temperature rise from a doubling of CO2, and in the CMIP5 models the median value is 3.2C. The translation to delta C per forcing is tortured, and to assert the result depends only on emissivity or change therein is simplistic and likely wrong. For example, the incoming energy from sunlight depends on albedo, and this might change (a feedback to a net forcing). ristvan, “The IPCC definition of ECS …” The ECS sensitivity FACTOR is exactly as I say. Look at the reference I cited. Reforming this in terms of CO2 is obfuscation that tries to make the sensitivity exclusive to CO2 forcing, when its exclusive to Joules. General question. Out of my depth but; does geometry enter into this in that the black and grey bodies are sphereical or at least circular? Does this impact, well, anything? Clif, It makes a difference when you are trying to work out how much net energy transfer between two shapes. In my thermodynamics, we included a shape factor to accommodate for this. For these calculations, working on a very large scale – the shape factor is irrelevant. Essentially from the surface of the earth to the surface of the TOA there is no shape factor. The use of terminology of this blog is confusing. For example: “This establishes theoretical possibilities for the planet’s sensitivity somewhere between 0.19K and 0.3K per W/m2”. This is not climate sensitivity, it is called climate sensitivity parameter (CSP). When the CSP is multiplied by forcing like 3.7 W/m2, we get the real climate sensitivity (CS). According to IPCC the transient CS = 0.5 K/(W/m2) * 3.7 W/m2 = 1.85 K and the equilibrium CS = 1 K/(W/m2) * 3.7 W/m2 = 3.7 K. The CSP according to S-B is 0.27 K/(W/m2) as realized in this blog. Then there is only one question remaining. What is the right forcing of doubled CO2 concentration from 280 ppm to 560 ppm? IPCC says it is 3.7 W/m2. I say it is only 2.16 W/m2, because the value of 3.7 W/m2 is calculated in the atmosphere of fixed relative humidity. aveollila, “This is not climate sensitivity, it is called climate sensitivity parameter ” Yes, and I make this clear in the paper where I define the climate sensitivity factor (the same thing as the parameter) and say that for the rest of the discussion it will be called simply the ‘sensitivity’. “What is the right forcing of doubled CO2 concentration from 280 ppm to 560 ppm? IPCC says it is 3.7 W/m2.” I’m comfortable with 3.7 W/m^2 being the incremental reduction at TOA upon instantly doubling CO2, but as I’ve pointed out, only about half of this ends up being returned to the surface in LTE since 3.7 W/m^2 is also the amount of incremental absorption by the atmosphere when CO2 is doubled and absorbed energy is distributed between exiting to space and returning to the surface. This also brings up an inconsistency in the IPCC definition of forcing, where an instantaneous increase in absorption (decrease at TOA) is considered to have the same influence as an instantaneous increase in post albedo incident power. All of the latter affects the surface, while only half of the former does. Ok I read the article and all the comments to date and as an MS in Engineering have a fair understanding of thermodynamics and physics in general but can not make heads nor tails of the presented data. What I can say is that the problem of isolating causation of weather/climate changes to one variable in a complex system is problematic at best. CO2 moving from 3 parts per 10,000 to 4 parts per 10,000 as the base for all climate change shown in models truly requires a leap of faith and I am unable to accurately predict both the location and speed of faith particles. Irrational D, “can not make heads nor tails of the presented data’ What’s confusing to you? The data is pretty simple and is a scatter diagram representing the relationship between the surface temperature and the planet emissions. The green line in Figure 3 is the prediction of the model and the red dots are monthly averages from satellites that conform quite well to the predictions. Note that the temperature averages are calculated as average emissions converted to a temperature (satellites only measure emissions, not temperature which is an abstraction of stored energy). If I plot surface emissions (rather than temperature) vs. emissions by the planet, it’s a very linear line with a slope of about 1.6 W/m^2 of surface emissions per W/m^2 of planet emissions. Here are some thought experiments. What would the average temperature of the surface be if the atmosphere contained 1 ATM of O2 and N2, the planet had no GHG’s or water and reflected 30% of the incident solar energy? (notwithstanding the practicality of such a system) The answer is 255K and based on the lapse rate, the average kinetic temperature of the O2 and N2 would start at about 255K at the surface and decrease as the altitude increased. Now, add 400 ppm of CO2 to the atmosphere and see what would happen. Will the surface warm? Add some clouds to the original system. Under what conditions would the surface warm or cool? (clouds can do both) Another thought experiment is to consider a water world and while somewhat more complicated, is still far simpler to analyze than the actual climate system. Will the temperature of this surface ever exceed about 300K which is the temperature where latent heat from evaporation start to appreciably offset incoming energy from the Sun? (Think about why Hurricanes form when the water temperature exceeds this). “What would the average temperature of the surface be if the atmosphere contained 1 ATM of O2 and N2, the planet had no GHG’s or water and reflected 30% of the incident solar energy? (notwithstanding the practicality of such a system)” Soln: Use your Fig. 2 with no other modes of energy transfer, only radiative energy transfer, in radiative equilibrium illuminated by SW source from the right at 342 W/m^2. The steady state allows text book energy balance by 1LOT of the left slab, add to your arrows (+ to left) w/the SW energy into left slab BB surface minus energy out 1LOT. (Left going) – right going energy arrows = 0 in steady state with O2/N2 low emissivity A = .05 say: SW*(1-albedo) + Ps(A/2) – Ps = 0 342*(1-0.3) + Ps(A/2-1) = 0 240 – Ps(0.05/2-1) = 0 240 + 0.975 Ps = 0 Ps= 246 (glowing at terrestrial wavelengths to the right) Ps = sigma*T^4 = 246 T = (246/0.0000000567) ^ 0.25 = 257 K Yes, I agree with your answer of 255K but a slight difference in that I made the O2/N2 gray body physical with their low (but non-zero) emissivity & absorptivity (very transparent across the spectrum, optically very thin). —— ”Now, add 400 ppm of CO2 to the atmosphere and see what would happen. Will the surface warm?” Soln: Try your model with emissivity A=0.8 with colloid water droplets, wv, CO2 et. al. as is measured for the real Earth global atm. looking up: 240 – Ps(0.8/2-1) = 0 240 + 0.6Ps = 0 Ps = 400 (glowing at terrestrial wavelengths to the right) T = (400/0.0000000567) ^ 0.25 = 290.7 K Your model reasonably well checks out with thermometer, satellite observations for a simple text book analogue of the global surface T, a model that can not be pushed too far. Trick, You are over-estimating a bit for the 400ppm CO2 case. Based on HITRAN line by line analysis, 400 ppm of CO2 absorbs about 1/4 of the surface energy and on the whole contributes only about 1/3 to the total GHG effect, thus A (absorption, not emissivity) is about 0.25 and the emissivity is (1 – A/2) = 0.875 and the surface power gain is 1.14. Given 240 W/m^2 of input, the surface will emit 1.14*240 = 274 W/m^2 which corresponds to a surface temperature of about 264K. The 1/4 surface energy absorbed by CO2 is calculated at 287K and not 264K, which because its a lower temperature, the 15u line becomes more important and A is increased a bit. Note that on Venus, the higher surface temperature moves the spectrum so far away from the main 15u CO2 line that its GHG effect is smaller than for Earth, despite much higher concentrations (the transparent window is still transparent) and only the weaker lines at higher wavelengths become relevant to any possible CO2 related GHG effect on the surface of Venus. Does Hitran do a changing evolution of night time cooling or is it a static snapshot? Because if it’s a snapshot it does not tell you what’s happening. micro6500, MODTRAN and the version I wrote, both of which are driven by HITRAN absorption line data do the same thing which is a static analysis, however; you can run the static analysis at every time step. What I’ve done is run it for number of different conditions and then interpolate the results since most conditions fall between 2 characterized conditions. It runs much faster that way and looses little accuracy since a full blown 3-d atmospheric simulation is rather slow. Surprisingly to many, you can even establish an scalar average absorption factor and apply it to averages and the results are nearly as good. This is not all that surprising owing to the property of superposition in the energy domain. BTW, is your handle related to the Motorola 6500 cpu? I’ve worked on designing Sparc CPU’s myself, most notably the PowerUp replacement CPU for the SparcStation. Yes, the the dynamics I’ve found has to involve the step by step change, or it’ll just appear as a static transfer function. Didn’t Harris have a cmos 6500? No. It’s my name, and a unique identifier. But I have done both ic failure analysis (at Harris), asic design for NASA, and 7 years at valid logic and another at view logic. And work for Oracle:) Modtran is a static timing verifier, this needs a dynamic solution. micro6500, Yes, MODTRAN is purely static and hard to integrate into other code, which is why I rolled my own. But, you can make it dynamic by running it at each time step, or whenever conditions change, enough to warrant re-running it’s just a pain and real slow. Which is why all of the results from it are worthless, just I doubt the professionals took the time, and the amateurs don’t know any better. Top post: “This leads to an emissivity for the gray body atmosphere of A” 1:56pm: “thus A (absorption, not emissivity) is about 0.25” So which do you mean true for your A? Actually, physically, your A in Fig. 2 is emissivity of the gray body block radiating 1/2 toward the BB and 1/2 toward the right as shown in Fig. 2. arrows. Absorptivity and emissivity are equal at any wavelength for a given direction of incidence and state of polarization. The emissivity of the current atm., surface looking up, has been extensively measured in the literature, found to be around 0.7 in dry arctic regions and around 0.95 equatorial humid tropics. My use of 0.8 global thus is backed reasonably by measurements over the spectrum and a hemisphere of directions. Trick, OK, so you were using emissivity for the system with water vapor, clouds and everything else, while the experiment was 400 ppm of CO2 and nothing else. The A is absorption of the gray body atmosphere and equal to its emissivity. The emissivity of the gray body emitter (the planet as a system) is not the same as that of the gray body atmosphere (unless the atmosphere only emitted into space) and is related to the emissivity of the gray body atmosphere, A by, e = (1-A/2). But, your values for A as measured are approximately correct, although I think the actual global average value of A is closer to 0.75 than 0.8 but it’s still in the ballpark. The average measured emissivity of the system is about 0.62. And in the rest of the world it changes from the dry end at sunset (depending of the days humidity) to the wet end every night by the time the sun comes up in the morning. “the experiment was 400 ppm of CO2 and nothing else.” The experiment was “add 400 ppm of CO2 to the atmosphere” which was unclear if meant the current atm. or your N2/O2 atm. I expressly wrote colloid water droplets, wv, CO2 et. al. as is measured for the real Earth global atm. looking up. Use any reasonable measured 400ppm CO2 in N2/O2 emissivity and your analogue will find the reasonable global surface temperature for that scenario (somewhere between 257K and 290.7 K). “The average measured emissivity of the system is about 0.62.” I see this often; it is incorrect. For illumination = 240W/m^2, BB Teff = 255K from sigma*T^4= 240. This is the equivalent blackbody temperature an observer on the moon would infer for Earth looked upon as an infrared sun. Earth satellites measure scenario brightness temperature ~255K (avg.d 24/7/365 over 4-10 years orbits) from ~240 W/m^2. Just as we on Earth say that the sun is equivalent to a ~6000 K blackbody (based on the solar irradiance), an observer on the moon would say that Earth is equivalent to a 255 K blackbody (based on the terrestrial irradiance). Note that the effective brightness temperature 255K in no (direct) way depends on the emissive properties of Earth’s atmosphere. 240 in and 240 out ~radiative equilibrium ~steady state means 255K BB temperature observed from space. Trick, “This is the equivalent blackbody temperature an observer on the moon would infer for Earth looked upon as an infrared sun.” Yes, 255K is the equivalent BB temp of the planet. However; this is predicated on the existence of a physical emission surface that radiates 240 W/m^2. This is an abstraction that has no correspondence to reality since no such surface exists and the photons that leave the planet originate from all altitudes between the surface to the boundary between the atmosphere and space. The only ‘proper’ emission surface is the virtual surface comprised of the ocean surface plus bits of land that poke through and that is in equilibrium with the Sun. Even most of the energy emitted by clouds originated at the surface. Clouds do absorb some solar energy, but from a macroscopic, LTE point of view, the water in clouds is tightly coupled to the water in the oceans and we can consider energy absorbed by clouds as equivalent to energy absorbed by the surface. If the virtual surface in equilibrium with the Sun is the true emitting surface, then the gray body model with an emissivity of 0.62 more accurately reflects the physical system. “Yes, 255K is the equivalent BB temp of the planet. However; this is predicated on the existence of a physical emission surface that radiates 240 W/m^2.” There is no such thing “predicated”. The ~240 is measured by many different precision radiometer instruments at the various satellite orbits, collectively known as CERES, earlier (1980s) ERBE. Trick, ” The ~240 is measured by many different precision radiometer instruments ” Yes, and I’m not saying otherwise, but to be a BB, there must be an identifiable surface that emits this much energy and there is no identifiable surface that emits 240 W/m^2, that is, you can not enclose the planet with a surface of any shape that touches all places where photons are emitted and combined emit 240 W/m^2. Many get confused by the idea that there is a surface up there whose temperature is 255K, but this is not the surface emitting 240 W/m^2. This represents the kinetic temperature of gas molecules in motion, per the Kinetic Theory of Gases. Molecules in motion emit little, it any energy, unless they happen to be LWIR active (i.e. a GHG). Higher up in the thermosphere, the kinetic temperature exceeds 60C, but the planet is certainly not emitting that much energy. In fact, there are 4 identifiable altitudes whose kinetic temperature is about 255K, one at about 5 km, another at about 30 km, another at about 50 km and another at about 140 km. If we examine the radiant temperature, that is the temperature associated with the upwards photon flux, it decreases monotonically from the surface temperature down to about 255K at TOA. Perhaps you missed this of mine at 6:40pm: Note that the effective brightness temperature 255K in no (direct) way depends on the emissive properties of Earth’s atmosphere. Thus neither atm. temperatures. Take Earth atm. completely away, keep same albedo, and once again radiative equilibrium will establish at 240 output for same input. Change albedo (input), change the 240 (output). You are trying to discuss, I think, within the atm. a level for the optimal tradeoff between high atm. density (therefore high atm. emissivity) and little overlying atm. to permit the atm. emitted radiation to escape to deep space. Most (but by no means all) of the outgoing atm. radiation observed by CERES et. al. comes from a level 1 optical thickness unit below TOA (for optical path defined 0 at surface). This has no effect at all on the 240 (as observed from moon say), as removing the atm. with same albedo gives all 240 straight from the surface. SIMPLE EXPLANATION FOR EARTH Using the author’s Figure 1, let Black Body T be Earth’s surface (which does not have to be a black body emitter) and E be the atmosphere. If Earth’s atmosphere contained no greenhouse gases (H2O, CO2, CH4, etc), then E would not be an absorber of outgoing long-wave radiation, and the atmosphere would not be heated by absorbing outgoing radiation, and Earth’s surface would not be further warmed. But Earth’s atmosphere actually has a value for E that is less than 1 (explanation below), and it does absorb outgoing radiation via the greenhouse gases. E less than 1 means E emits less radiation than it absorbs from T. The consequence of this is that E warms to a temperature greater than that of T until its radiation emission rate equals the rate it receives energy. Earth’s surface also warms in this process because E radiates back to the surface as well as into space. Why is the emissivity of the atmosphere (E) less than 1? When more CO2 is added to the atmosphere, its concentration in higher regions of the atmosphere also increases. On average, a CO2 molecule must be at some significant height in order for the radiation it emits upward to escape to space rather than be absorbed by another higher altitude CO2 molecule. That height, called the emission height, is a few miles. Adding more CO2 to the atmosphere causes that emission height to increase. BUT, Earth’s troposphere cools as altitude increases. And a cooler atmosphere causes the RATE of radiation emission from CO2 to decrease. Lower emission rate causes the atmosphere to warm until the CO2 emission rate at that new emission height stabilizes the temperature. Adding more CO2 increases CO2 emission height, causing the atmosphere to warm to compensate. Water behaves somewhat differently because it does not mix into the higher atmosphere and because its concentration varies significantly across Earth’s surface. It’s not about heat flow, but about quantum radiation effects and P-T characteristics of the atmosphere. One day, hopefully not far off, all the above complexity and confusion is going to be looked back upon with wry amusement. There are only two ways to delay the transmission of radiative energy through a system containing matter. i) A solid or a liquid absorbs radiation, heats up and radiates out at the temperature thereby achieved. That is where S-B can be safely applied. ii) Gases are quite different because not only do they move up and down relative to the gravitational field but also the molecules move apart as they move upwards along the density gradient induced by mass and gravity. It is the moving apart that creates vast amounts of potential energy within a convecting atmosphere. Far more potential energy is created in that process of moving molecules apart along the density gradient than in the simple process of moving molecules upward. The importance of that distinction is that creation of potential energy (not heat) from kinetic energy (heat) does NOT require a rise in temperature as a result of the absorption of radiation (which absorption is a result of conduction at the irradiated surface beneath the atmosphere) because energy in potential form has no temperature. Indeed the creation of potential energy from kinetic energy requires a fall in temperature but only until such time as the kinetic energy converted to potential energy in ascent is matched by potential energy converted to kinetic energy in descent. At that point the temperature of surface and atmosphere combined rises back to the temperature predicted by the S-B equation but only if viewed from a point outside the atmosphere. The temperature of surface alone will be higher than the S-B temperature. Altering radiative capability within the atmosphere makes no difference because convection simply reorganises the distribution of the mass content of the atmosphere to maintain long term hydrostatic equilibrium. If convection were to fail to do so then no atmosphere could be retained long term. So. solids and liquids obey the S-B equation to a reasonably accurate approximation (liquids will convect but there is little moving apart of the molecules to create potential energy so the S-B temperature is barely affected). Gases heated and then convected upward and expanded as a result of conduction from an irradiated surface will not heat up according to S-B due to the large amount of potential energy created from surface kinetic energy. They will instead raise the surface temperature beneath the mass of the atmosphere to a point higher than the S-B prediction so as to accommodate the energy requirement of ongoing convective overturning in addition to the energy requirement of radiative equilibrium with space. It really is that simple 🙂 So, are you suggesting that trying to apply the S-B law to Earth and Earth’s atmospheric system is itself flawed thinking ? Are we trying to force fit something that really is a misfit to begin with, in this context ? I can see how this suggestion might antagonize those who have figured out the complexities of such an application of S-B, and to question these folks on this point seems to create yet another camp of disagreement within the already bigger camp of disagreement over catastrophic warming. … So, now we have skeptics battling skeptics who are skeptical of other skeptics. “So, now we have skeptics battling skeptics who are skeptical of other skeptics.” This is because there’s so much wrong with ‘consensus’ climate science, yet to many skeptics, the ONLY problem is the one they have thought about. I characterize myself as a luke warmer, where I do not dispute that CO2 is a GHG, or that GHG’s and clouds warm the surface above what it would be without them, but there are many who believe otherwise. I definitely dispute the need for mitigation because the effect is far more beneficial than harmful. As I’ve said before, the biggest challenge for the future of mankind is how to enrich atmospheric CO2 to keep agriculture from crashing once we run out of fossil fuels to burn or if the green energy paradigm foolishly gains wide acceptance. I see the biggest problem as over-estimating the sensitivity by about a factor of 4 and it’s this assumption from which most of the other errors have arisen in order not to contradict the mantra of doubling CO2 causing 3C of warming. Many of those who think CO2 has no effect do not question the sensitivity and use ‘CO2 doesn’t affect the surface temperature’ as the argument against instead of attacking the sensitivity. Please note that I do accept that GHGs have an effect because they distort lapse rate slopes which causes convective adjustments so that the pattern of general circulation changes and some locations near climate zone boundaries or jet stream tracks may well experience some warming. However, since the greenhouse effect is caused by atmospheric mass conducting and convecting any additional effect from changes in GHG amounts will probably be too small to measure especially if it does turn out that most natural climate change is solar induced. Thus I am a lukewarmer and not a denier. As regards S-B it is well established that it deals with radiative energy transfers only and so it is not contentious to point out that it cannot accommodate the thermal effects of non radiative energy transfers between the mass of the surface and the mass of a conducting and convecting atmosphere. By all means apply S-B from beyond the atmosphere but that tells you nothing of the surface temperature enhancement required to fuel continuing convective overturning within the atmosphere at the same time as energy in equals energy out. Stephen, “By all means apply S-B from beyond the atmosphere but that tells you nothing of the surface temperature enhancement required to fuel continuing convective overturning within the atmosphere at the same time as energy in equals energy out.” This is not the case. Each of the 240 W/m^2 of incident energy contributes 1.6 W/m^2 of surface emissions at the LTE average surface temperature, or in other words, it takes 1.6 W/m^2 of incremental surface emissions to offset the next W/m^2 of input power (in LTE, input == output). Owing to the T^4 relationship, the next W/m^2 of solar forcing (241 total input) will increase the emissions by slightly less than 1.6 W/m^2, increasing the surface temperature by about 0.3C for a sensitivity of about 0.3C per W/m^2. Figure 3 characterizes this across the range of possible average monthly temperatures found across the whole planet (about 260K to well over 300K) and this relationship tracks SB for a gray body with an emissivity of 0.62 almost exactly across all possible temperatures. SB is the null hypothesis and the only way to discount it is to explain the red dots in Figure 3 otherwise, per the question at the end of the article. If some people have arrived at the position that CO2 does not affect the surface temperature, then these people have no need to argue for sensitivity, since the sensitivity of something that doesn’t matter anyway also does not matter. I am interested in HOW some of these people, seemingly who have studied the same rigorous math or physics, arrive at such a divergent conclusion. They will say that those who argue sensitivity are deluded, and those who argue sensitivity will say the same, creating another troubling subdivision that further confuses those trying to understand all this. How can a prize-winning physicist get condemned by another prize-winning physicist, when they both study (I presume) the same curriculum of physics or math ? I think there is a consensus beneath the main consensus (a “sub-consensus”) that forbids thinkers from straying too far from THEIR assumptions. co2isnotevil These are the important words that underlie all that follows: “The Earth can be accurately modeled as a black body surface with a gray body atmosphere, whose combination is a gray body emitter whose temperature is that of the surface and whose emissions are that of the planet.” I do not accept that the combination is as simple as a grey body emitter once hydrostatic equilibrium has been achieved following the completion of the first convective overturning cycle. It is certainly a grey body emitter during the first cycle because during that period and only during that period there is a net conversion of surface kinetic energy to potential energy which is being diverted to conduction and convection instead of being radiated to space. Once the first cycle completes the combined surface and atmosphere taken together behave as a blackbody when viewed from space and so S-B will apply from that viewpoint. The atmosphere might radiate but not as a greybody because if it has radiative capability which causes any radiative imbalance then convection alters the distribution of the mass within the atmosphere in order to retain hydrostatic equilibrium. Thus the atmosphere (under the control of convective overturning) also radiates as a blackbody which is why the S-B equation works from a viewpoint beyond the atmosphere. If the surface were to act as a blackbody but the atmosphere as a greybody there would be a permanent radiative imbalance which would destroy hydrostatic equilibrium and we know that does not happen even where CO2 reaches 90% of an atmosphere such as on Venus or Mars. On both those planets the temperature at the same atmospheric pressure is very close to that at the same pressure on Earth adjusted only for the distance from the sun. That is a powerful pointer to mass conducting and convecting rather than GHG quantity being the true cause of a surface temperature enhancement above the S-B expectation. Whether the atmosphere radiates or not there is the additional non radiative process going on which is not in George White’s above model and not dealt with by the S-B equation and which is omitted from the purely radiative AGW theory. The amount of surface energy permanently locked into the KE to PE exchange in ascent and the PE to KE exchange in descent is constant at hydrostatic equilibrium being entirely dependent on atmospheric mass and the power of the gravitational field.. The non radiative KE to PE and PE to KE exchange within convective overturning is effectively an infinitely variable buffer against radiative imbalances destroying hydrostatic equilibrium. I recommend that you or George reinterpret the observations set out in George’s head post in light of the more detailed scenario that I suggest. Stephen, I agree that there’s a lot of complication going on within the atmosphere, much of which is still unknown, but it’s impossible to model the complications until you know how it’s supposed to behave and trying to out psych complex, codependent behaviors from the inside out almost never works. The only way to understand how it’s supposed to work is a top down methodology which characterizes the system at the highest level of abstraction possible whose predictions are within a reasonable margin of error with the data. This provides a baseline to compare against more complex models. The highest level of abstraction would be black body which will be nearly absolutely accurate in the absence of an atmosphere. The purpose of this exercise was to extend the black body model to connect the dots between the behavior of a planet with and without an atmosphere. The first thing I added was a non unit emissivity and after adding this, the results were so close to the data, it was unnecessary to make it more complicated. Of course, I didn’t stop there and have extended the model in many ways which gets even closer by predicting more measured attributes, including seasonal variability. I’ve compared it to data at the gridded level, at the slice level (from 2.5 degree slices to entire hemispheres) and globally and it works well every time. There’s even an interesting convergence criteria the system appears to seek which is that it drives towards the minimum effective emissivity and warmest surface possible, given the constraints of incoming energy and static components of the system. You can see this in the plot earlier in the comments which plots the surface emissivity (power out/surface emissions) against the surface temperature. You will notice that the current average temperature is very close to the local minimum in this relationship. I can even explain why this is in terms of the Entropy Minimization Principle. There’s no such thing as a perfect model of the climate and in no way shape or form am I claiming that this is, but it is very accurate at predicting the macroscopic behavior of the planet especially considering how simple the model actually is. Feel free to object on the grounds that it seems too simple to be correct, as I had the same concerns early on and could not believe that somebody else had not recognized this decades ago (Ahrrenius came close), but unless objections are accompanied with an explanation for why the red dots in Figure 3 align along a contour of the SB relationship for a gray body with an effective emissivity of 0.62, no objection has merit. I should point out that the calculations of the output power are affected by a lot of different things and that each of the roughly 26K little red dots of monthly averages were each calculated by combining many millions of unadjusted data measurements. The fact that the distribution of dots is so close to the prediction (green line) is impossible to deny and is why without another explanation for the correlation, no objection can have merit. The source of this is the active regulation is discovered. co2isnotevil, Thanks for such a detailed response. I wouldn’t dream of objecting, merely supplementing it by simplifying further. My suggestion is that the red dots in Fig 3 align along a contour of the S-B relationship because convective overturning adjusts to eliminate radiative imbalances from whatever source. The remaining differential between the line of dots and the contour is simply a measure of the extent to which the lapse rate slopes are being distorted by radiative material within the bulk atmosphere and convection then works to neutralise the thermal effect of that distortion so that energy out to space matches energy in from space. The consequence is that the combined surface and atmosphere always act as a blackbody (not a greybody) when viewed from space. You have noted that there is an interesting convergence criteria ‘the system appears to seek’ and I suggest that those convective adjustments lie behind it. Are you George White ? Stephen, Yes. I’m the author of the article. The idea that the system behaves like a black body is consistent with my position, at least relative to power in vs. temperature. In fact, the Entropy Minimization Principle predicts this. Minimizing entropy means reducing deviations from ideal and 1 W/m^2 of surface emissions per W/m^2 of input is ideal. Here is the plot that sealed it for me: Unlike the output power, calculating the input power is a trivial calculation. In this plot, the yellow dots are the same as the red dots in Figure 3 and the red dots are the relationship between post albedo incident power and temperature and where they cross is the ‘operating point’ for the planet. Note that the slope of the averages for this is the same as the magenta line, where the magenta line is the prediction of the relationship between the input power and surface temperature. This is basically the slope of SB for an ideal BB at the surface temperature, biased towards the left. I’ve only talked about the output relationship because it’s a tighter relationship and easier to explain as a gray body, which people should be able to understand. Besides, its hard enough to get buy in to a sensitivity of 0.3C per W/m^2, much less 0.19C per W/m^2. You really have to think of this as 2 distinct paths. One that ‘charges’ the system with a sensitivity of 0.19 and the other that ‘discharges’ the system with a sensitivity of 0.3. The sensitivity of the discharge path is higher, which is a net negative feedback like effect, but is not properly characterized as feedback per Bode. On further reflection the gap between the red and green lines could indicate the extent to which mass and gravity have raised surface temperature above S-B. Convective adjustments then occur to ensure that energy out to space matches energy in from space so that the curve of the red line follows the curve of the green line. Stephen, I already understand and have characterized the biggest deviation which is a jump in emissivity around 273K (0C). This is the influence of water vapor kicking in and decreasing the effective emissivity. I’m still not sure what’s going on near the equator, but it seems that whatever is happening in one hemisphere is offset by an opposite effect in the other, so I haven’t given it much thought. The data does have a lot of artifacts and is useless for measuring trends, and equatorial data is most suspect, but my analysis doesn’t look at or care about trends or absolute values and instead concentrates only on aggregate behavior and the shapes of the relationships between different climate variables. There are a whole lot more plots comparing various variables here: Long term global trends, sure. And there is a lot that can be done with the data we have, you can get the seasonal change, and in the extratropics you can calculate what the 0.0 albedo surface power is, and then see how effective it was at increasing temperature. “Long term global trends, …” Even short term local trends. The biggest issue I have found with the ISCCP data set is a flawed satellite cross calibration methodology which depends on continuous coverage by polar satellites. When a polar satellite is upgraded and it’s only operational polar orbiter, there are discontinuities in the data, especially equatorial temperatures. I mentioned this to Rossow about a decade ago, but it has never been fixed, although I haven’t checked in over a year. It doesn’t even show up in the errata, except as a inconspicuous reference to an ‘unknown’ anomaly in one of the plots illustrating how satellites are calibrated to each other. Ah, some of the surface data has some use. What I have tried to do for the most part is to see what the stations we have measured. Which isn’t a GAT, even though I do averages of all of the stations as well as many different small chunks. I am not familiar with how this blog views the ideas of Stephen W., but I must say that I find his emphasis on the larger fluid dynamic mass of the atmosphere resonant with my layperson intuition, which I admit is biased towards fluid dynamic views. I have always wondered how radiation physics can dominate fluid dynamic physics of the larger mass of the atmosphere, and I see some hope here of reconciling the two aspects. Robert, “I find his emphasis on the larger fluid dynamic mass of the atmosphere resonant with my layperson intuition” If you want to understand what’s going on within the atmosphere, then fluid dynamics is the way to go, but that is not what this model is predicting. The gray body emissions model proposed only characterizes the radiant behavior at the boundaries of the atmosphere, one boundary at the surface (which is modelled as an ideal BB radiator) and the other with space. To the extent that the relationship between the behavior at these boundaries can be accurately characterized and predicted (the green line in figure 3), how the atmosphere manifests this behavior is irrelevant, moreover; as far as I can tell, nobody in all of climate science actually has a firm grasp on what the microscopic behavior actually is or should be. The idea that complex fluid dynamics of non linear coupled systems must be applied to predict the behavior of the climate is a red herring promoted by consensus climate science to make the system seem too complicated for mere mortals to comprehend. It’s the difference between understanding the macroscopic behavior (the gray body emission model) and the microscopic behavior (fluid dynamics …). Both can get the same answer, except that the later has too many unknowns and ‘impherical’ constants, so unless you can compare it to how the system must behave at the macroscopic level, such a model can never be validated as being correct. Consider simulating an digital circuit that adds 2 numbers. A 64-bit adder has many hundreds of individual transistor switches. The complexity can explode dramatically when various carry lookahead schemes are implemented. The only way to properly validate that the microscopic transistor logic matches the macroscopic task of adding 2 numbers is to actually add 2 numbers together and compare this with the results of the digital logic. Most systems can be modelled at multiple levels of abstraction and best practices for developing the most certain models is to start with the highest level of abstraction possible and then use this to sanity check more detailed models. For example, I can guarantee that if you generated the data I presented in Figure 3 using a GCM, it would look nothing like either the measured data or the prediction of the gray body emitter. If it did, the modelled sensitivity would only be about 0.3 and no where near the 0.8 claimed by the IPCC. Thank you. There is some hostility here but support as well so as long I express myself in a moderate tone my submissions continue to be accepted. I think one can reconcile the two aspects in the way I have proposed. The non radiative energy exchange between the mass of the surface and the mass of the atmosphere needs to be treated entirely independently of the radiative exchange between the Earth system and space. One can do that because there really is no direct transfer of energy between the radiative and non radiative processes once the atmosphere achieves hydrostatic equilibrium. Instead, the convective adjustments vary the ratio between KE and PE in the vertical and horizontal planes so as to eliminate any imbalances that might arise in the radiative exchange between the Earth system (surface and atmosphere combined) and space. So, if GHGs try to create a radiative imbalance such as that proposed in AGW theory they are prevented from doing so via changes in the distribution of the mass content of the atmosphere. If GHGs alter the lapse rate slope in one location then that change in the lapse rate slope is always offset by an equal and opposite change in the lapse rate slope elsewhere and convection is the mediator. GHGs do have an effect but in the form of circulation changes rather than a change in average surface temperature and the thermal effect is miniscule because it was initially the entire mass of the atmosphere that set up the enhanced surface temperature in the first place and not GHGs. Otherwise the similarities with Mars and Venus would not exist. When people argue over what the first principles actually are, seemingly not able to agree on them, then where is the foundation for a common understanding.? Even the foundation of the foundation seems to have far more flexibility in interpretation than can allow for it to be the basis for that sought-after common ground. When you guys reach a common agreement on what the Stephan Boltzmann Law says and HOW it does or does not apply to Earth, I’ll start to worry about understanding these discussions in depth. For now, I seem doomed to watch yet a deeper level of disagreement over what I naively thought was a common foundation. I’m such a child ! Nick Stokes, No reply to my comment here and the one below it? RW, What you are saying seems to echo what George is saying, and I replied at length there. This sums it up: .” Yes, it’s not a thermodynamically manifested value, if I understand what that means. There is thermodynamics needed, and you can’t get an answer to sensitivity without it. The only constraint provided by COE is on total of flux up and down. It does not constrain the ratio. A common weakness in George’s argument, and I think yours, is that he deduces some “effective” or “equivalent” quantity by back-working some formula in some particular circumstance, and assumes that it will apply in some other situation. I’ve disputed the use of equivalent temperature, but more central is probably the use of an emissivity of 0.62, read somehow from a graph. You can’t use this to determine sensitivity, because you have no reason to expect it to remain constant. It isn’t physical. The give-away here is that S-B is used in situations where it simply doesn’t apply, and there is no attempt to grapple with the real equations of radiative gas transfer. S-B tells you the radiation flux from a surface of black body at a uniform temperature T. Here we don’t have surfaces (except for ground) and we don’t have uniform T. Gas radiation is different; it does involve T^4, but you don’t have the notion of surface any more. Emissivity is per unit volume, and is of course highly frequency dependent (I objected to the careless usage of grey body). So there is so much missing from his and your comments that I’m really stuck for much more to say than that you simply have no basis for a 50-50 split, and especially one that is sufficiently fixed that its constancy will determine sensitivity. One thing I wish people would take account of – scientists are not fools. They do do this kind of energy balance, and CS has been energetically studied, but no-one has tried to deduce it from this sort of analysis. Maybe George has seen something that scientists have missed with their much more elaborate analysis of radiative transfer, or maybe he’s just wrong. I think wrong. Nick, The 50/50 split itself claimed by George does NOT determine the sensitivity. It quantifies the effect that absorbed surface IR by GHGs has within the complex thermodynamic path, so far as its ultimate contribution to the enhancement of surface warming by the absorption of upwelling IR by GHGs and the subsequent non-directional re-radiation of that initially absorbed energy within the atmosphere. The physical driver of the GHE is the re-radiation of some of that initially absorbed surface IR back towards (and not necessarily back to) the surface. Since the probability of re-emission at any discrete layer is equal in any direction regardless of the rate its emitting at, you would only expect about half of what’s initially captured by GHGs to be contributing to the downward IR push the atmosphere makes at all levels, where as the other half will contribute to the upward IR push the atmosphere makes at all levels. Only the increased downward emitted IR push from the re-radiation of the energy absorbed by GHGs is further enhancing the radiative warming of the planet and ultimately the enhancement of surface warming. The 50/50 split ratio is NOT a quantification of the temperature structure or bulk IR emission structure of the atmosphere, which emits roughly double the amount of IR flux to the surface as it emits out the TOA. If it were claiming to be, it would surely be wrong (spectacularly so). COE constrains the black box output at the surface to not be more than 385 W/m^2, otherwise a condition of steady-state does not exist. While flux equal to 385 W/m^2 must be somehow exiting the atmosphere at the bottom of the box at the surface, 239 W/m^2 must be exiting the box at the TOA, for a grand total of 624 W/m^2. The emergent 50/50 split only means an amount *equal* to half of what’s initially absorbed by GHGs is ultimately radiated to space and an amount *equal* to the other half is gained by the surface, i.e. added to the surface, somehow in some way. Nothing more. So in effect, the flow of energy in and out of the whole system is the same as if what’s depicted in the box model were occurring. The black box is constrained by COE to produce a value of ‘F’ somewhere between 0 and 1.0, and the value that emerges from the COE constraint is about 0.5. If you don’t understand where the COE constraint is coming from in the black box, let’s go over it in detail step by step. The ultimate conclusion from the emergent 50/50 split is the *instrinsic* surface warming ability of +3.7 W/m^2 of GHG absorption (from 2xCO2) is only about 0.55C and not the 1.1C ubiquitously cited and widely accepted; however 0.55C is not a direct or precise quantification of the sensitivity. But before we can get to that component, you must first at least understand the black box component and the derived 50/50 atmospheric split.? co2isnotevil You referred to the red dots and George says this: . ” All they seem to show is that the temperature rose as a result of decreased cloudiness. There are hypotheses that the observed reduction in cloudiness was a result of high solar activity and unrelated to any increase in CO2 over the period. A reduction in cloudiness will allow more solar energy in to warm the system regardless of any changes in CO2 WUWT covered the point a while ago: .” George: Sorry to arrive late to this discussion. You asked: What law(s) of physics can explain how to override the requirements of the Stefan-Boltzmann Law as it applies to the sensitivity of matter absorbing and emitting energy, while also explaining why the data shows a nearly exact conformance to this law? Planck’ Law (and therefore the SB eqn) was derived assuming radiation in equilibrium with GHGs (originally quantized oscillators). Look up any derivation of Planck’s Law. The atmosphere is not in equilibrium with the thermal infrared passing through it. Radiation in the atmospheric window passes through unobstructed with intensity appropriate for a blackbody at surface temperature. Radiation in strongly absorbed bands has intensity appropriate for a blackbody at 220 degC, a 3X difference in T^4! So the S-B eqn is not capable of properly describing what happens in the atmosphere. The appropriate eqn for systems that are not at equilibrium is the Schwarzschild eqn, which is used by programs such as MODTRAN, HITRAN, and AOGCMs. Frank, The Schwarzschild eqn. can describe atmospheric radiative transfer for both the system in a state of equilibrium as well as out of equilibrium, i.e. during the path from one equilibrium state to another. But even what it can describe for the equilibrium state is an average of immensely dynamic behavior. The point is data plotted is the net observed result of all the dynamic physics, radiant and non-radiant, mixed together. That is, it implicitly includes the effect of all physical processes and feedbacks in the system that operate on timscales of decades or less, which certainly includes water vapor and clouds. Frank, “So the S-B eqn is not capable of properly describing what happens in the atmosphere.” This is not what the model is modelling. The elegance of this solution is that what happens within the atmosphere is irrelevant and all that complication can be avoided. Consensus climate science is hung up on all the complexity so they have the wiggle room to assert fantastic claims which spills over into skeptical thinking and this contributes to why climate science is so broken. My earlier point was that it’s counterproductive to try and out psych how the atmosphere works inside if the behavior at the boundaries is unknown. This model quantifies the behavior at the boundaries and provides a target for more complex modelling of the atmosphere’s interior. GCM’s essentially run open loop relative to the required behavior at the boundaries and hope to predict it, rather than be constrained by it. This methodology represents standard practices for reverse engineering an unknown system. Unfortunately, standard practices are rarely applied to climate science, especially if it results in an inconvenient answer. A classic example of this is testing hypotheses and BTW, Figure 3 is a test of the hypothesis that a gray body at the surface temperature with an emissivity of 0.62 is an accurate model of the boundary behavior of the atmopshere. I’m only modelling how it behaves at the boundaries and if this can be predicted with high precision, which I have unambiguously demonstrated (per Figure 3), it doesn’t matter how that behavior manifests itself, just that it does. As far as the model is concerned, the internals of the atmosphere can be pixies pushing photons around, as long as the net result conforms to macroscopic physical constraints. Consider the Entropy Minimization Principle. What does it mean to minimize entropy? It’s minimizing deviations from ideal and the Stefan-Boltzmann relationship is an ideal quantification. As a consequence of so many degrees of freedom, the atmosphere has the capability to self organizes itself to achieve minimum entropy, as any natural system would do. If the external behavior does not align with SB, especially the claim of a sensitivity far in excess of what SB supports, the entropy must be too high to be real, that is, the deviations from ideal are far out of bounds for a natural system. As far as Planck is concerned, the equivalent temperature of the planet (255K) is based on an energy flux that is not a pure Planck spectrum, but a Planck spectrum whose clear sky color temperature (the peak emissions per Wein’s Displacement Law) is the surface temperature, but with sections of bandwidth removed, decreasing the total energy to be EQUIVALENT to an ideal BB radiating a Planck spectrum at 255K. co2isnotevil: Rather than calling the solution “elegant” I would call it an application of the reification fallacy. Global warming climatology is based upon application of this fallacy. micro6500, “There won’t just be notches, there would be some enhancement in the windows” This isn’t consistent with observations. If the energy in the ‘notches’ was ‘thermalized’ and re-emitted as a Planck spectrum boosting the power in the transparent window, we would observe much deeper notches than we do. The notches we see in saturated absorption lines show about a 50% reduction in outgoing flux over what there would be given an ideal Planck spectrum which is consistent with the 50/50 split of energy leaving the atmosphere consequential to photons emitted by GHG’s being emitted in a random direction (after all is said and done, approximately half up and half down). Which is a sign of no enhanced warming. The wv regulation will completely erase any forcing over dew point as the days get longer. But it only the difference of 10 or 20 minutes less cooling at the low rate after an equal reduction at the high cooling rates. So as the days lengthen you get those 20. And a storm will also wipe it out. George: Figure 3 is interesting, but problematic. The flux leaving the TOA is the depended variable and the surface temperature is the dependent variable, so normally one would plot this data with the axes switched. Now let’s look at the dynamic range of your data. About half of the planet is tropical, with Ts around 300 K. Power out varies by 70 W/m2 from this portion of the planet with little change in Ts. There is not a functional relationship between Ts and power out for this half of the planet. The data is scattered because cloud cover and altitude has a tremendous impact on power out. Much of the dynamic range in your data comes for polar regions, a very small fraction of the planet. Data from the poles provides most of the dynamic range in your data. The problem with this way of looking at the data is that the Atmosphere is not a blackbody with an emissivity of 0.61. The apparent emissivity of 0.61 occurs because the average photon escaping to space (power out) is emitted at an altitude where the temperature is 255 K. The changes in power out in your graph are produced by moving from one location to another one the planet where the temperature is different, humidity (as GHG) is different and photons escaping to space come from different altitudes. The slope of your graph may have units of K/W/m2, but that doesn’t mean it is a measure of climate sensitivity – the change in TOA OLR and reflected SWR caused by warming everywhere on the planet. Frank, Part of the problem here with the conceptualization of sensitivity, feedback, etc. is the way the issue is framed by (mainstream) climate science. The way the issue is framed is more akin to the system being a static equilibrium system whose behavior upon a change in the energy balance or in response to some perturbation is totally unknown or a big mystery, rather than it being an already mostly physically manifested highly dynamic equilibrium system. I assume you agree the system is an immensely dynamic one, right? That is, the energy balance is immensely dynamically maintained. What are the two most dynamic components of the Earth-atmosphere system? Water vapor and clouds, right? I think the physical constraints George is referring to in this context are really physically logical constraints given observed behavior, rather than some universal physical constraints considered by themselves. No, there is no universal physical constraint or physical law (S-B or otherwise) on its own, independent of logical context, that constrains sensitivity within the approximate bounds George is claiming. RW. The most dynamic component of any convecting atmosphere (and they always convect) is the conversion of KE to PE in rising air and conversion of PE to KE in falling air. Clouds and water vapour and anything else with any thermal effect achieve their effects by influencing that process. Since, over time, ascent must be matched by descent if hydrostatic equilibrium is to be maintained it follows that nothing (including GHGs) can destabilise that hydrostatic equilibrium otherwise the atmosphere would be lost. It is that component which neutralises all destabilising influences by providing an infinitely variable thermal buffer. That is what places a constraint on climate sensitivity from ALL potential destabilising forces. The trade off against anything that tries to introduce an imbalance is a change in the distribution of the mass content of the atmosphere. Anything that successfully distorts the lapse rate slope in one location will distort it in an equal and opposite direction elsewhere. This is relevant: Stephen, “This is relevant:” (post on jonova) What I see that this does is provide one of the many degrees of freedom that combined drive the surface behavior towards ideal (minimize entropy) which is 1 W/m^2 of emissions per incremental W/m^2 of forcing (sensitivity of about 0.19 C per W/m^2). I posted a plot that showed that this is the case earlier in the comments. Rather than plotting output power vs. temperature, input power vs. temperature is plotted. co2isnotevil Everything you can envisage as comprising a degree of freedom operates by moving mass up or down the density gradient and thus inevitably involves conversion of KE to PE or PE to KE. Thus, at base, there is only one underlying degree of freedom which involves the ratio between KE and PE within the mass of the bulk atmosphere. Whenever that ratio diverges from the ratio that is required for hydrostatic equilibrium then convection moves atmospheric mass up or down the density gradient in order to eliminate the imbalance. Convection can do that because convection is merely a response to density differentials and if one changes the ratio between KE and PE between air parcels then density changes as well so that changes in convection inevitably ensue. The convective response is always equal and opposite to any imbalance that might be created.Either KE is converted to PE or PE is converted to KE as necessary to retain balance. The PE within the atmosphere is a sort of deposit account into which heat (KE) can be placed or drawn out as needed. I like to refer to it as a ‘buffer’. That is the true (and only) physical constraint to climate sensitivity to every potential forcing. As regards your head post the issue is whether your findings are consistent or inconsistent with that proposition. I think they are consistent but do you agree? Stephen, “I think they are consistent but do you agree?” It’s certainly consistent with the relationship between incident energy and temperature, or the ‘charging’ path. The head posting is more about the ‘discharge’ path as it puts limits the sensitivity, but to the extent that input == output in LTE (hence putting emissions along the X axis as the ‘input’), it’s also consistent in principle with the discharge path. The charging/discharging paradigm comes from the following equation: Pi(t) = Po(t) + dE(t)/dt which quantifies the EXACT dynamic relationship between input power and output power. When they are instantaneously different, the difference is either added to or subtracted from the energy stored by the system (E). If we define an arbitrary amount of time, tau, such that all of E is emitted in tau time at the rate Po(t), this can be rewritten as, Pi(t) = E(t)/tau + dE/dt You might recognize this as the same form of differential equation that quantifies the charging and discharging of a capacitor where tau is the time constant. Of course for the case of the climate system, tau is not constant and has relatively strong a temperature dependence. Thanks. If my scenario is consistent with your findings then does that not provide what you asked for, namely “What law(s) of physics can explain how to override the requirements of the Stefan-Boltzmann Law as it applies to the sensitivity of matter absorbing and emitting energy, while also explaining why the data shows a nearly exact conformance to this law?” “If my scenario is consistent with your findings then does that not provide what you asked for”, It doesn’t change the derived sensitivity, it just offers a possibility for how the system self-organizes to drive itself towards ideal behavior in the presence of incomprehensible complexity. I’m only modelling the observed behavior and the model of the observed behavior is unaffected by how that behavior arises. Your explanation is a possibility for how that behavior might arise, but it’s not the only one and IMHO, it’s a lot more complicated then what you propose. It only becomes complicated if one tries to map all the variables that can affect the KE/PE ratio. I think that would be pretty much impossible due to incomprehensible complexity, as you say. As for alternative possibilities I would be surprised if you could specify one that does not boil down to variations in the KE/PE ratio. The reassuring thing for me at this point is that you do not have anything that invalidates my proposal. That is helpful. With regard to derived sensitivity I think you may be making an unwarranted assumption that CO2 makes any measurable contribution at all since the data you use appears to relate to cloudiness rather than CO2 amounts, or have I missed something? Stephen, “With regard to derived sensitivity I think you may be making an unwarranted assumption that CO2 makes any measurable contribution at all” Remember that my complete position is that the degrees of freedom that arise from incomprehensible complexity drives the climate systems behavior towards ideal (per the Entropy Minimization Principle) where the surface sensitivity converges to 1 W/m^2 of surface emissions per W/m^2 of input (I don’t like the term forcing which is otherwise horribly ill defined). For CO2 to have no effect, the sensitivity would need to be zero. The effects you are citing have more to do with mitigating the sensitivity to solar input and is not particularly specific to increased absorption by CO2. None the less, it has the same net effect, but the effect of incremental CO2 is not diminished to zero. With regard to other complexities, dynamic cloud coverage, the dynamic ratio between cloud height and cloud area and the dynamic modulation of the nominal 50/50 split of absorbed energy all contribute as degrees of freedom driving the system towards ideal. Stephen, OK, but the point is the process by which water is evaporates from the surface, ultimately condenses to form clouds, and then is ultimately precipitated out of the atmosphere (i.e. out of the clouds) and gets back to the surface is an immensely dynamic, continuously occurring process within the Earth-atmosphere system. And a relatively fast acting one as the average time it takes for water molecule to be evaporated from the surface and eventually precipitated back to the surface (as rain or snow) is only about 10 days or so. The point is (which was made to Frank) is all of the physical processes and feedbacks involved in this process, i.e. the hydrological cycle, and their ultimate manifestation on the energy balance of the system, including at the surface, are fully accounted for in the data plotted. This is because not only is the data about 30 years worth, which is far longer than 10 day average of the hydrological cycle, each small dot component of that makes up the curve is a monthly average of all the dynamic behavior, radiant and non-radiant, know and unknown, in each grid area. Frank, It seems you have accepted the fundamental way the field has framed up the feedback and sensitivity question, which is really as if the Earth-atmosphere system is a static equilibrium system (or more specifically a system that has dynamically a reached a static equilibrium), and whose physical components’ behavior in response to a perturbation or energy imbalance will subsequently dynamically respond in a totally unknown way with totally unknown bounds, to reach a new static equilibrium. The point is system is an immensely dynamic equilibrium system, where its energy balance is continuously dynamically maintained. It has not reached what would be a static equilibrium, but instead reached an immensely dynamically maintained approximate average equilibrium state. It is these immensely dynamic physical processes at work, radiant and non-radiant, know and unknown, in maintaining the physical manifestation of this energy balance, that cannot be arbitrarily separated from those that will act in response to newly imposed imbalances to the system, like from added GHGs. It is physically illogical to think these physical processes and feedbacks already in continuous dynamic operation in maintaining the current energy balance would have any way a distinguishing such an imbalance from any other imbalance imposed as a result of the regularly occurring dynamic chaos in the system, which at any one point in time or in any one local area is almost always out to balance to some degree in one way or another. The term “climate science” is inaccurate and misleading for the models that are created by this field of study lack the property of falsifiability. As the models lack falsifiability it is accurate to call the field of study that creates them “climate pseudoscience.” To elevate their field of study to a science, climate pseudoscientists would have to identify the statistical populations underlying their models and cross validate these models before publishing them or using them in attempts at controlling Earth’s climate. Co2isnotevil I would say that the climate sensitivity in terms of average surface temperature is reduced to zero whatever the cause of a radiative imbalance from variations internal to the system (including CO2) but the overall outcome is not net zero because of the change in circulation pattern that occurs instead. Otherwise hydrostatic equilibrium cannot be maintained. The exception is where a radiative imbalance is due to an albedo/cloudiness change. In that case the input to the system changes and the average surface temperature must follow. Your work shows that the system drives back towards ideal and I agree that the various climate and weather phenomena that constitute ‘incomprehensible complexity’ are the process of stabilisation in action. On those two points we appear to be in agreement. The ideal that the system drives back towards is the lapse rate slope set by atmospheric mass and the strength of the gravitational field together with the surface temperature set by both incoming radiation from space (after accounting for albedo) and the energy requirement of ongoing convective overturning. The former matches the S-B equation which provides 255K at the surface and the latter accounts for the observed additional 33K at the surface. Stephen, “The ideal that the system drives back towards is the lapse rate slope …” You seem to believe that the surface temperature is a consequence of the lapse rate, while I believe that the lapse rate is a function of gravity alone and the temperature gradient manifested by it is driven by the surface temperature which is established as an equilibrium condition between the surface and the Sun. If gravity was different, I claim that the surface temperature would not be any different, but the lapse rate would change while you claim that the surface temperature would be different because of the changed lapse rate. Is this a correct assessment of your position? Good question 🙂 I do not believe that the surface temperature is a consequence of the lapse rate. The surface temperature is merely the starting point for the lapse rate. If there is no atmosphere then S-B is satisfied and there is (obviously) no lapse rate. The surface temperature beneath a gaseous atmosphere is a result of insolation reaching the surface (so albedo is relevant) AND atmospheric mass AND gravity.No gravity means no atmosphere. However, if you increase gravity alone whilst leaving insolation and atmospheric mass the same then you get increased density at the surface and a steeper density gradient with height. The depth of the atmosphere becomes more compressed. The lapse rate follows the density gradient simply because the lapse rate slope traces the increased value of conduction relative to radiation as one descends through the mass of an atmosphere. Increased density at the surface means that more conduction can occur at the same level of insolation but convection then has less vertical height to travel before it returns back to the surface so the net thermal effect should be zero. The density gradient being steeper, the lapse rate must be steeper as well in order to move from the surface temperature to the temperature of space over a shorter distance of travel. The surface temperature would remain the same with increased gravity (just as you say) but the lapse rate slope would be steeper (just as you say) and, to compensate, convective overturning would require less time because it has less far to travel. There is a suggestion from others that increased density reduces the speed of convection due to higher viscosity so that might cause a rise in surface temperature but I am currently undecided on that. Gravity is therefore only needed to provide a countervailing force to the upward pressure gradient force. As long as gravity is sufficient to offset the upward pressure gradient force and thereby retain an atmosphere in hydrostatic equilibrium the precise value of the gravitational force makes no difference to surface temperature except in so far as viscosity might be relevant. So, the lapse rate slope is set by gravity alone because gravity sets the density gradient which in turn sets the balance between radiation and conduction within the vertical plane. One can regard the lapse rate slope as a marker for the rate at which conduction takes over from radiation as one descends through atmospheric mass. The more conduction there is the less accurate the S-B equation becomes and the higher the surface temperature must rise above S-B in order to achieve radiative equilibrium with space. If one then considers radiative capability within the atmosphere it simply causes a redistribution of atmospheric mass via convective adjustments but no rise in surface temperature. Stephen, “If there is no atmosphere then S-B is satisfied and there is (obviously) no lapse rate.” I agree with most of what you said with a slight modification. If there is no atmosphere then S-B for a black body is satisfied and there is no lapse rate. If there is an atmosphere, the lapse rate becomes a manifestation of grayness, thus S-B can still be satisfied by applying the appropriate EQUIVALENT emissivity, as demonstrated by Figure 3. Again, I emphasize EQUIVALENT which is a crucial concept when it comes to modelling anything, It’s clear to me that there are regulatory processes at work, but these processes directly regulate the energy balance and not necessarily the surface temperature, except indirectly. Furthermore, these regulatory processes can not reduce the sensitivity to zero, that is 0 W/m^2 of incremental surface emissions per W/m^2 of ‘forcing’, but drives it towards minimum entropy where 1 W/m^2 of forcing results in 1 W/m^2 of incremental surface emissions. To put this in perspective, the IPCC sensitivity of 0.8C per W/m^2 requires the next W/m^2 of forcing to result in 4.3 W/m^2 of incremental surface emissions. In other terms, if it looks like a duck and quacks like a duck it’s not barking like a dog. Where there is an atmosphere I agree that you can regard the lapse rate as a manifestation of greyness in the sense that as density increases along the lapse rate slope towards the surface then conduction takes over from radiation. However, do recall that from space the Earth and atmosphere combined present as a blackbody radiating out exactly as much as is received from space. My solution to that conundrum is to assert that viewed from space the combined system only presented as a greybody during the progress of the uncompleted first convective overturning cycle. After that the remaining greyness manifested by the atmosphere along the lapse rate slope is merely an internal system phenomenon and represents that increasing dominance of conduction relative to radiation as one descends through atmospheric mass. I think that what you have done is use ’emissivity’ as a measure of the average reduction of radiative capability in favour of conduction as one descends along the lapse rate slope. The gap between your red and green lines represents the internal, atmospheric greyness induced by increasing conduction as one travels down along the lapse rate slope. That gives the raised surface temperature that is required to both reach radiative equilibrium with space AND support ongoing convective overturning within the atmosphere. The fact that the curve of both lines is similar shows that the regulatory processes otherwise known as weather are working correctly to keep the system thermally stable. Sensitivity to a surface temperature rise above S-B cannot be reduced to zero as you say which is why there is a permanent gap between the red and green lines but that gap is caused by conduction and convection, not CO2 or any other process. Using your method, if CO2 or anything else were to be capable of affecting climate sensitivity beyond the effect of conduction and convection then it would manifest as a failure of the red line to track the green line and you have shown that does not happen. If it were to happen then hydrostatic equilibrium would be destroyed and the atmosphere lost. Stephen, “However, do recall that from space the Earth and atmosphere combined present as a blackbody radiating out exactly as much as is received from space.” This isn’t exactly correct. The Earth and atmosphere combined present as an EQUIVALENT black body emitting a Planck spectrum at 255K. The difference being the spectrum itself and its emitting temperature according Wein’s displacement. I’ve no problem with a more precise verbalisation. Doesn’t affect the main point though does it? As I see it your observations are exactly what I would expect to see if my mass based greenhouse effect description were to be correct. Stephen, “As I see it your observations are exactly what I would expect to see if my mass based greenhouse effect description were to be correct.” The question is whether the apparently mass based GHG effect is the cause or the consequence. I believe it to be a consequence and that the cause is the requirement for the macroscopic behavior of the climate system to be constrained by macroscopic physical laws, specifically the T^4 relationship between temperature and emissions and the constraints of COE. The cause establishes what the surface temperature and planet emissions must be and the consequence is to be consistent with these two endpoints and the nature of atmosphere in between. Well, all physical systems are constrained by the macroscopic physical laws so the climate system cannot be any different. It isn’t a problem for me to concede that macroscopic physical laws lead to a mass induced greenhouse effect rather than a GHG induced greenhouse effect. Indeed, that is the whole point of my presence here:) Are your findings consistent with both possibilities or with one more than the other? Stephen, “Are your findings consistent with both possibilities or with one more than the other?” My finding are more consistent with the constraints of physical law, but at the same time, they say nothing about how the atmosphere self organizes itself to meet those constraints, so I’m open to all possibilities for this. Stephen wrote: “The most dynamic component of any convecting atmosphere (and they always convect) is the conversion of KE to PE in rising air and conversion of PE to KE in falling air.” You are ignoring the fact that every packet of air is “floating” in a sea of air of equal density. If I scuba dive with a weight belt that provides neutral buoyancy, no work done when I raise or lower my depth below the surface: An equal weight of water moves in the opposite direction I move. In water, I only need to overcome friction to change my “altitude”. The potential energy associated with my altitude is irrelevant. In the atmosphere, the same situation exists, plus there is almost no friction. A packet of air can rise without any work being done because an equal weight of air is falling. The change that develops when air rises doesn’t involve potential energy (and equal weight of air falls elsewhere), it is the PdV work done by the (adiabatic) expansion under the lower pressure at higher altitude. That work comes from the internal energy of the gas, lowering its temperature and kinetic energy. (The gas that falls is warmed by adiabatic compression.) After expanding and cooling, the density of the risen air will be greater than that of the surrounding air and it will sink – unless the temperature has dropped fast enough with increasing altitude. All of this, of course, produces the classical formulas associated with adiabatic expansion and derivation of the adiabatic lapse rate (-g/Cp). You presumably can get the correct answer by dealing with the potential energy of the rising and falling air separately, but your calculations need to include both. Frank, At any given moment half the atmosphere is rising and half is falling. None of it ever just ‘floats’ for any length of time. The average surface pressure is about 1000mb. Anything less is rising air and anything more is falling air. Quite simply, you do have to treat the potential energy in rising and falling air separately so one must apply the opposite sign to each so that they cancel out to zero. No more complex calculation required. ”At any given moment half the atmosphere is rising and half is falling. None of it ever just ‘floats’ for any length of time.” Nonsense, only in your faulty imagination Stephen. Earth atm. IS “floating”, calm most of the time at the neutral buoyancy line of the natural lapse rate meaning as Stephen often writes in hydrostatic equilibrium, the static therein MEANS static. This is what Lorenz 1954 is trying to tell Stephen but it is way beyond his comprehension. You waste our time imagining things Stephen, try learning reality: Lorenz 1954 “Consider first an atmosphere whose density stratification is everywhere horizontal. In this case, although total potential energy is plentiful, none at all is available for conversion into kinetic energy.” Lorenz does not claim that to be the baseline condition of any atmosphere. Lorenz is just simplifying the scenario in order to make a point about how PE can be converted to KE by introducing a vertical component. He doesn’t suggest that any real atmospheres are everywhere horizontal. It cannot happen. All low pressure cells contain rising air and all high pressure cells contain falling air and together they make up the entire atmosphere. Overall hydrostatic equilibrium does not require the bulk of an atmosphere to float along the lapse rate slope. All it requires is for ascents to balance descents. Convection is caused by surface heating and conduction to the air above and results in the entire atmosphere being constantly involved in convective overturning. Dr. Lorenz does claim that to be the baseline condition of Earth atm. as Stephen could learn by actually reading/absorbing the 1954 science paper i linked for him instead of just imagining things. Less than 1% of abundant Earth atm. PE is available to upset hydrostatic conditions, allowing for stormy conditions per Dr. Lorenz calculations not 50%. If Stephen did not have such a shallow understanding of meteorology, he would not need to actually contradict himself: “balance between KE and PE in the atmosphere so that hydrostatic equilibrium can be retained.” or contradict Dr. Lorenz writing in 1954 who is way…WAY more accomplished in the science of meteorology since as soundings show hydrostatic conditions generally prevail on Earth in those observations & as calculated: “Hence less than one per cent of the total potential energy is generally available for conversion into kinetic energy.” Not the 50% of total PE Stephen imagines showing his ignorance of atm. radiation fields and available PE. There is a difference between the small local imbalances that give rise to local storminess and the broader process of creation of PE from KE during ascent plus creation of KE from PE in descent. It is Trick’s regular ‘trick’ to obfuscate in such ways and mix it in with insults. I made no mention of the proportion of PE available for storms. The 50/50 figure relates to total atmospheric volume engaged in ascent and descent at any given time which is a different matter. Even the stratosphere has a large slow convective overturning cycle known as the Brewer Dobson Circulation and most likey the higher layers too to some extent. Convective overturning is ubiquitous in the troposphere. No point engaging with Trick any further. ”He doesn’t suggest that any real atmospheres are everywhere horizontal. It cannot happen.” Dr. Lorenz only calculates 99% Stephen not 100% as you imagine or there would be no storms observed. Try to stick to that ~1% small percentage of available PE, not 50/50. I predict you will not be able. ”I made no mention of the proportion of PE available for storms. The 50/50 figure relates to total atmospheric volume engaged in ascent and descent at any given time which is a different matter.” Dr. Lorenz calculated in 1954 that 99/1 available for ascent/descent which means the atm. is mostly in hydrostatic equilibrium, 50/50 figure is only in Stephen’s imagination not observed in the real world. Stephen even agreed with Dr. Lorenz 1:03pm: “because indisputably the atmosphere is in hydrostatic equilibrium.” then contradicts himself with the 50/50. ”It is Trick’s regular ‘trick’ to obfuscate in such ways and mix it in with insults.” No obfuscation, I use Dr. Lorenz’ words exactly clipped for the interested reader to find in the paper I linked & and only after Stephen’s initial fashion: 1/15 12:45am: “I think Trick is wasting my time and that of general readers.” No need to engage with me, but to further Stephen’s understanding of meteorology it would be a good idea for him to engage with Dr. Lorenz. And a good meteorological text book to understand the correct basic science. “Much of the dynamic range in your data comes for polar regions” This is incorrect. Each of the larger dots is the 3 decade average for each 2.5 degree slice of latitude and as you can see, these are uniformly spaced across the SB curve and most surprisingly, mostly independent of hemispheric asymmetries (N hemisphere 2.5 degree slices align on top of S hemisphere slices). Most of the data represents the mid latitudes. There are 2 deviations from the ideal curve. One is around 273K (0C) where water vapor is becoming more important and I’ve been able to characterize and quantify this deviation. This leads to the fact that the only effect incremental CO2 has is to slightly decrease the EFFECTIVE emissivity of surface emissions relative to emissions leaving the planet. It’s this slight decrease applied to all 240 W/m^2 that results in the 3.7 W/m^2 of EQUIVALENT forcing from doubling CO2. The other deviation is at the equator, but if you look carefully, one hemisphere has a slightly higher emissivity which is offset by a lower emissivity in the other. As far as I can tell, this seems to be an anomaly with how AU normalized solar input was applied to the model by GISS, but in any event, seems to cancel. George, what you are seeing at toa, is my WV regulating outgoing, but at high absolute humidity, there’s less dynamic room. The high rate will reduce as absolute water vapor increases, so the difference between the two speeds will be less. Thus would be manifest as the slope you found as absolute humidity drops moving towards the poles, increasing the regulation ability, the gap between high and low cooling rates go up. Does the hitch at 0C have an energy commiserate with water vapor changing state? “Does the hitch at 0C have an energy commiserate with water vapor changing state?” No. Because of the integration time being longed than the lifetime of atmospheric water, the energy of the state changes from evaporation and condensation effectively offset each other, as RW pointed out. The way I was able to quantify it was via equation 3 which relates atmospheric absorption (the emissivity of the atmosphere itself) to the EQUIVALENT emissivity of the system comprised of an approximately BB surface and an approximately gray body atmosphere. The absorption can be calculated with line by line simulations quantifying the increase in water vapor and the increase in absorption was consistent with the decrease in EQUIVALENT emissivity of the system. But you have two curves, you need say 20% to 100% rel humidity over a wide range of absolute humidity (say Antarctica and rainforest) you’ll get a contour map showing ir interacting with both water and co2. As someone who has designed cpu’s you should recognize this. This making a single assumption for an interconnect model for every gate in a cpu, without modeling length, parallel traces, driver device parameters. An average might be a place to start, but it won’t get you fabricated chips that work. micro6500, In CPU design there are 2 basic kinds of simulations. One is a purely logical simulation with unit delays and the other is a full timing simulation with parasitics back annotated and rather than unit delay per gate, gates have a variable delay based on drive and loading. The gray body model is analogous to a logical simulation, while a GCM is analogous to a full timing simulation. Both get the same ultimate answers (as long as timing parameters are not violated) and logical simulations are often used to cross check the timing simulation. George, I was an Application Eng for both Agile and Viewlogic as the simulation expert on the east coast for 14 years. GCM are broken, their evaporation parameterization is wrong. But as I’ve shown, we are not limited to that. My point is that Modtran, Hitran, when used with a generic profile is useless for the questions at hand. Too much of the actual dynamics are erased throwing away so much knowledge. Though it is a big task, that I don’t know how to do. micro6500, “GCM are broken …” “My point is that Modtran, Hitran, when used with a generic profile is useless for the questions at hand.” While I wholeheartedly agree that GCM’s are broken for many reasons, I don’t necessarily agree with your assertion about the applicability of a radiative transfer analysis based on aggregate values. BTW, Hitran is not a program, but a database quantifying absorption lines of various gases and is an input to Modtran and to my code that does the same thing. While there are definitely differences between a full blown dynamic analysis and an analysis based on aggregate values, the differences are too small to worry about, especially given that the full blown analysis requires many orders of magnitude more CPU time to process than an aggregate analysis. It seems to me that there’s also a lot more room for error when doing a detailed dynamic analysis since there are many more unknowns and attributes that must be tracked and/or fit to the results. Given that this what GCM’s attempt to do, it’s not surprising that they are so broken, Simpler is better because there’s less room for error, even if the results aren’t 100% accurate because not all of the higher order influences are accounted for. The reason for the relatively small difference is superposition in the energy domain since all of the analysis I do is in the energy domain and any reported temperatures are based on an equivalent ideal BB applied to the energy fluxes that the analysis produces. Conversely, any analysis that emphasises temperatures will necessarily be wrong in the aggregate. Then I’m not sure you understand how water vapor is regulating cooling, because a point snapshot isn’t going to detect it, and it’s only the current average of the current conditions during the dynamic cooling across the planet. micro6500, “because a point snapshot isn’t going to detect it” There’s no reliance on point snapshots, but of averages in the energy domain of from 1 month to 3 decades. Even the temperatures reported in Figure 3 are average emissions, spatially and temporally integrated and converted to an EQUIVALENT temperature. The averages smooth out the effects of water vapor and other factors. Certainly, monthly average do not perfectly smooth out the effects and this is evident by the spread of red dots around the mean, but as the length of the average increases, these deviations are minimized and the average converges to the mean. Even considering single year averages, there’s not much deviation from the mean. The nightly effect is dynamic, that snapshot is just what it’s been, which I guess is what it was, but you can’t extrapolate it, that is meaningless. Stephen wrote: “At any given moment half the atmosphere is rising and half is falling. None of it ever just ‘floats’ for any length of time. The average surface pressure is about 1000mb. Anything less is rising air and anything more is falling air. Yes. The surface pressure under the descending air is about 1-2% higher than average and the pressure underneath rising air is normally about 1-2% lower. The descending air is drier and therefore heavier and needs more pressure to support its weight. To a solid first approximate, it is floating and we can ignore the potential energy change associated with the rise and fall of air. You can only ignore the PE from simple rising and falling which is trivial You cannot ignore the PE from reducing the distance between molecules which is substantial. That is the PE that gives heating when compression occurs. However, PdV work is already accounted for when you calculate an adiabatic lapse rate (moist or dry). If you assume a lapse rate created by gravity alone and then add terms for PE or PdV, you are double-counting these phenomena. Gases are uniformly dispersed in the troposphere (and stratosphere) without regard to molecular weight. This proves that convection – not potential energy being converted to kinetic energy – is responsible for the lapse rate in the troposphere. Gravity’s influence is felt through the atmospheric pressure it produces. Buoyancy ensures that potential energy changes in one location are offset by changes in another. Sounds rather confused. There is no double counting because PE is just a term for the work done by mass against gravity during the decompression process involved in uplift and which is quantified in the PdV formula. Work done raising an atmosphere up against gravity is then reversed when work is done by an atmosphere falling with gravity so it is indeed correct that PE changes in one location are offset by changes in another. Convection IS the conversion of KE to PE in ascent AND of PE to KE in descent so you have your concepts horribly jumbled, hence your failure to understand. Brilliant! George: Before applying the S-B equation, you should ask some fundamental questions about emissivity: Do gases have an emissivity? What is emissivity? The radiation inside solids and liquids has usually come into equilibrium with the temperature of the solid or liquid that emits thermal radiation. If so, it has a blackbody spectrum when it arrives at the surface, where some is reflected inward. This produces an emissivity less than unity. The same fraction of incoming radiation is reflected (or scattered) outward at the surface; accounting for the fact that emissivity equals absorptivity at any given wavelength. In this case, emissivity/absorptivity is an intrinsic property of material that is independent of mass. What happens with a gas, which has to surface to create emissivity? Intuitively, gases should have an emissivity of unity. The problem is that a layer of gas may not be thick enough for the radiation that leaves its surface to have come into equilibrium with the gas molecules in the layer. Here scientists talk about “optically-thick” layers of atmosphere that are assumed to emit blackbody radiation and “optically thin” layers of atmosphere whose: 1) emissivity and absorptivity is proportional to the density of gas molecules inside the layer and their absorption cross-section whose emissivity varies with B(lambda,T), but whose absorptivity is independent of T. One runs into exactly the same problem thinking about layers of solids and liquids that are thin enough to be partially transparent. Emissivity is no longer an intrinsic property. The fundamental problem with this approach to the atmosphere is that the S-B is totally inappropriate for analyzing radiation transfer through an atmosphere with temperature ranging from 200-300 K, and which is not in equilibrium with the radiation passing through it. For that you need the Schwarzschild eqn. dI = emission – absorption dI = n*o*B(lambda,T)*dz – n*o*I*dz where dI is the change in spectral intensity, passing an incremental distance through a gas with density n, absorption cross-section o, and temperature T, and I is the spectral intensity of radiation entering the segment dz. Notice these limiting cases: a) When I is produced by a tungsten filament at several thousand K in the laboratory, we can ignore the emission term and obtain Beer’s Law for absorption. b) When dI is zero because absorption and emission have reached equilibrium (in which case Planck’s Law applies), I = B(lambda,T). (:)) When dealing with partially-transparent thin films of solids and liquids, one needs the Schwarzschild equation, not the S-B eqn. When an equation such as the S-B or Schwarzchild is at the center of attention of a group of people there is the possibility that the thinking of these people is corrupted by an application of the reification fallacy. Under this fallacy, an abstract object is treated as if it were a concrete object. In this case, the abstract object is an Earth that is abstracted from enough of its features to make it obey one of the two equations exactly. This thinking leads to the dubious conclusion that the concrete Earth on which we live has a “climate sensitivity” that has a constant but uncertain numerical value. Actually it is a certain kind of abstract Earth that has a climate sensitivity. Terry: From Wikipedia: ).[10] Thus, if properly understood and empirically corroborated, the “reification fallacy” applied to scientific constructs is not a fallacy at all; it is one part of theory creation and evaluation in normal science.” Thermal infrared radiation is a tangible quantity that can be measured with instruments. It’s interactions with GHGs have been studied in the laboratory and in the atmosphere itself: Instruments measure OLR from space and DLR measured at the surface. These are concrete measurements, not abstractions. A simple blackbody near 255 K has a “climate sensitivity”. For every degK its temperature rises, it emits 3.7 W/m2 or 3.7 W/m2/K. (Try it.) In climate science, we take the reciprocal and multiply by 3.7 W/m2/doubing to get 1.0 K/doubling. 3.8 W/m2/K is equivalent and simple to understand. There is nothing abstract about it. The earth also emits (and reflects) a certain number of W/m2 to space for each degK of rise in surface temperature. Because humidity, lapse rate, clouds, and surface albedo change with surface temperature (feedbacks), the Earth doesn’t emit like blackbody at 255 K. However, some quantity (in W/m2) does represent the average increase with TOA OLR and reflected SWR with a rise in surface temperature. That quantity is equivalent to climate sensitivity. Frank: In brief, that reification is a fallacy is proved by its negation of the principle of entropy maximization. If interested in a more long winded and revealing proof please ask. Frank, “Do gases have an emissivity?” “Intuitively, gases should have an emissivity of unity.” The O2 and N2 in the atmosphere has an emissivity close to 0, not unity, as these molecules are mostly transparent to both visible light input and LWIR output. Most of the radiation emitted by the atmosphere comes from clouds, which are classic gray bodies. Most of the rest comes from GHG’s returning the the ground state by emitting a photon. The surface directly emits energy into space that passes through the transparent regions of the spectrum and this is added to the contribution by the atmosphere to arrive at the 240 W/m^2 of planetary emissions. Even GHG emissions can be considered EQUIVALENT to a BB or gray body, just as the 240 W/m^2 of emissions by the planet are considered EQUIVALENT to a temperature of 255K. EQUIVALENT being the operative word. Again, I will to emphasize that the model is only modelling the behavior at the boundaries and makes no attempt to model what happens within. Since emissivity less than unity is produced by reflection at the interface between solids and liquids and since gases have no surface to reflect, I reasoned that that would have unit emissivity. N2 and O2 are totally transparent to thermal IR. The S-B equation doesn’t work for materials that are semi-transparent and (you are correct that) my explanation fails for totally transparent. The Schwarzschild equation does just fine. o = 0 dI = 0. The presence of clouds doesn’t interfere with may rational why Doug should not be applying the S-B eqn to the Earth. The Schwarzschild equation works just fine if you convert clouds to a radiating surface with a temperature and emissivity. The tricky part of applying this equation is that you need to know the temperature (and density and humidity) at all altitudes. In the troposphere, temperature is controlled by lapse rate and surface temperature. (In the stratosphere, by radiative equilibrium, which can be used to calculate temperature.) When you observe OLR from space, you see nothing that looks like a black or gray body with any particular temperature and emissivity. If you look at dW/dT = 4*e*o*T^3 or 4*e*o*T^3 + oT^4*(de/dT), you get even more nonsense. The S-B equation is a ridiculous model to apply to our planet. Doug is applying an equation that isn’t appropriate for our planet. Frank, “The S-B equation doesn’t work for materials that are semi-transparent” Sure it does. This is what defines a gray body and that which isn’t absorbed is passed through. The wikipedia definition of a gray body is one that doesn’t absorb all of the incident energy. What isn’t absorbed is either reflected, passed through or performs work that is not heating the body, although the definition is not specific, nor should it be, about what happens to this unabsorbed energy. The gray body model of O2 and N2 has an effective emissivity very close to zero. Frank wrote: The S-B equation doesn’t work for materials that are semi-transparent” co2isnotevil replied: “Sure it does. This is what defines a gray body and that which isn’t absorbed is passed through.” Frank continues: However the radiation emitted by a layer of semi-transparent material depends on what lies behind the semi-transparent material: a light bulb, the sun, or empty space. Emission (or emissivity) from semi-tansparent materials depends on more that the just the composition of the material: It depends on its thickness and what lies behind. The S-B eqn has no terms for thickness or radiation incoming from behind. S-B tells you that outgoing radiation depends only on two factors. Temperature and emissivity (which is a constant). Some people change the definition of emissivity for optically thin layers so that it is proportional to density and thickness. However, that definition has problems too, because emission can grow without limit if the layer is thick enough or the density is high enough. Then they switch the definition for emissivity back to being a constant and say that the material is optically thick. Frank. “Frank continues: However the radiation emitted by a layer of semi-transparent material depends on what lies behind the semi-transparent material” For the gray body EQUIVALENT model of Earth, the emitting surface in thermal equilibrium with the Sun (the ocean surface and bits of land poking through) is what lies behind the semi-transparent atmosphere. The way to think about it is that without an atmosphere, the Earth would be close to an ideal BB. Adding an atmosphere changes this, but can not change the T^4 dependence between the surface temperature and emissions or the SB constant, so what else is there to change? Whether the emissions are attenuated uniformly or in a spectrally specific manner, it’s a proportional attenuation quantifiable by a scalar emissivity. Frank, “The tricky part of applying this equation is that you need to know the temperature (and density and humidity) at all altitudes.” I agree with what you are saying, and this is a key. You can regard a gas as a S-B type emitter, even without surface, provided its temperature is uniform. That is the T you would use in the formula. A corollary toi this is that you have to have space, or a 0K black body, behind, unless it is so optically thick that negligible radiation can get through. For the atmosphere, there are frequencies where it is optically thin, but backed by surface. Then you see the surface. And there are frequencies where it is optically thick. Then you see (S-B wise) TOA. And in between, you see in between. Notions of grey body and aggregation over frequency just don’t work. Nick said: You can regard a gas as a S-B type emitter, even without surface, provided its temperature is uniform. Not quite. For black or gray bodies. the amount of material is irrelevant. If I take one sheet of aluminum foil (without oxidation), its emissivity is 0.03. If I layer 10 or 100 sheets of aluminum foil on top of each other or fuse them into a single sheet, it emissivity will still be 0.03. This isn’t true for a gas. Consider DLR starting it trip from space to the surface. For a while, doubling the distance traveled (or doubling the number of molecules passed, if the density changes) doubles the DLR flux because there is so little flux that absorption is negligible. However, by the time one reaches an altitude where the intensity of the DLR flux at that wavelength is approaching blackbody intensity for that wavelength and altitude/temperature, most of the emission is compensated for by absorption. If you look at the mathematics of the Schwarzschild eqn., it says that the incoming spectral intensity is shifted an amount dI in the direction of blackbody intensity (B(lambda,T)) and the rate at which blackbody intensity is approached is proportional to the density of the gas (n) and its cross-section (o). The only time spectral intensity doesn’t change with distance traveled is when it has reached blackbody intensity (or n or o are zero). When radiation has traveled far enough through a (non-transparent) homogeneous material at constant temperature, radiation of blackbody intensity will emerge. This is why most solids and liquids emit blackbody radiation – with a correction for scattering at the surface (ie emissivity). And this surface scattering the same from both directions – emissivity equals absorptivity. Frank, “This is why most solids and liquids emit blackbody radiation” As I understand it, a Planck spectrum is the degenerate case of line emission occurring as the electron shells of molecules merge, which happens in liquids and solids, but not gases. As molecules start sharing electrons, there are more degrees of freedom and the absorption and emission lines of a molecules electrons morphs into broad band absorption and emission of a shared electron cloud. The Planck distribution arises as probabilistic distribution of energies. Frank,, “Notions of grey body and aggregation over frequency just don’t work.” If you are looking at an LWIR spectrum from afar, yet you do not know with high precision how far away you are, how would you determine the equivalent temperature of its radiating surface? HINT: Wein’s Displacement What is the temperature of Earth based on Wein’s Displacement and its emitted spectrum? HINT: It’s not 255K In both cases, you can derate the relative power by the spectral gaps. This results in a temperature lower than the color temperature (from Wein’s Displacement) after you apply SB to arrive at the EQUIVALENT temperature of an ideal BB that would emit the same amount of power, however; the peak in the radiation will be at a lower energy than the peak that was measured because the equivalent BB has no spectral gaps. I expect that you accept that 255K is the EQUIVALENT temperature of the 240 W/m^2 of emissions by the planet, even though these emissions are not a pure Planck spectrum.. Relative to gray bodies, the O2 and N2 in the atmosphere is inert since it’s mostly transparent to both visible and LWIR energy. Atmospheric emissions come from clouds and particulates (gray bodies) and GHG emissions. While GHG emissions are not BB as such, the omnidirectional nature of their emissions is one this that this analysis depends on. The T^4 relationship between temperature and power is another and this is immutable, independent of the spectrum, and drives the low sensitivity. Consensus climate science doesn’t understand the significance of the power of 4. Referring back Figure 3, its clear that the IPCC sensitivity (blue line) is linear approximation, but rather than being the slope of a T^4 relationship, its a slope passing through 0. The gray body nature of the Earth system is an EQUIVALENT model, that is, it’s an abstraction that accurately models the measured behavior. It’s good that you understand what an EQUIVALENT model is by knowing Thevenin’s Theorem, so why is it hard to understand that the gray body model is an EQUIVALENT model? If predicting the measured behavior isn’t good enough to demonstrate equivalence, what is? What else does a model do, but predict behavior? Given that the gray body model accurately predicts limits on the relationship between forcing and the surface temperature (the 240 W/m^2 of solar input is the ONLY energy forcing the system) why do you believe that this does not quantify the sensitivity, which is specifically the relationship between forcing and temperature? The gray body model predicts a sensitivity of about 0.3C per W/m^2 and which is confirmed by measurements (the slope of the averages in Figure 3). What physics connects the dots between the sensitivity per this model and the sensitivity of about 0.8C per W/m^2 asserted by the IPCC? co2isnotevil January 8, 2017 at 8:53 pm. You have this completely wrong. The emissions from CO2 arise from kinetic energy in the rotational and vibrational modes, not translational! What is required to remove that energy collisionally is to remove the ro/vib energy not stop the translation. A CO2 molecule that absorbs in the 15 micron band is excited vibrationally with rotational fine structure, in the time it takes to emit a photon CO2 molecules in the lower atmosphere collide with neighboring molecules millions of times so that the predominant mode of energy loss there is collisional deactivation. It is only high up in the atmosphere that emission becomes the predominant mode due to the lower collision frequency.. Nick wrote:: I think you are missing much of the physics described by the Schwarzschild eqn where S-B emissivity would appear to be greater than 1. Those situations arise when the radiation (at a given wavelength or integrated over all wavelengths) entering in a layer of atmosphere has a spectral intensity greater than B(lambda,T). Let’s imagine both a solid shell and a layer of atmosphere at the tropopause where T = 200 K. The solid shell emits eo(T=200)^4. The layer of atmosphere emits far more than o(T=200)^4 and it has no surface to create a need for an emissivity less than 1. All right, let’s cheat and then assign a different emissivity to the layer of atmosphere and fix the problem. Now I leave the tropopause at the same temperature and change the lapse rate to the surface which changes emission from the top of the layer. Remember emissivity is emission/B(lambda,T). If you think the correct temperature for considering upwelling radiation is the surface at 288 K, not 200 K, let’s consider DLR which originates at 3 K. Now what is emissivity? Or take another extreme, a laboratory spectrophotometer. My sample is 298 K, but the light reaching the detector is orders of magnitude more intense than blackbody radiation. Application of the S-B equation to semi-tansparent objects and objects too thin for absorption and emission to equilibrate inside leads to absurd answers. It is far simpler to say that the intensity of radiation passing through ANYTHING changes towards BB intensity (B(lambda,T)) for the local temperature at a rate (per unit distance) that depends on the density of molecules encountered and the strength of their interaction with radiation of that wavelength (absorption cross-section). If the rate of absorption becomes effectively equal to the rate of emission (which is temperature-dependent), radiation of BB intensity will emerge from the object – minus any scattering at the interface. The same fraction of radiation will be scattered when radiation travels in the opposite direction. Look up any semi-classical derivation of Planck’s Law: Step 1. Assume radiation in equilibrium with some sort of quantized oscillator. Remember Planck was thinking about the radiation in a hot black cavity designed to produce such an equilibrium (with a pinhole to sample the radiation). Don’t apply Planck’s Law and its derivative when this assumption isn’t true. With gases and liquids, we can easily see that the absorption cross-section at some wavelengths is different than others. Does this (as well as scattering) produce emissivity less than 1? Not if you think of emissivity as an intrinsic property of a material that is independent of quantity. Emissivity is dimensionless, it doesn’t have units of kg-1. co2isnotevil January 9, 2017 at 10:32 am. I think you don’t understand the meaning of the term ‘spontaneous emission’, in fact CO2 has a mean emission time of order millisec and consequently endures millions of collisions during that time. The collisions do not induce emission of a photon they cause a transfer of kinetic energy to the colliding partner and a corresponding deactivation to a lower energy level (not necessarily the ground state). A short discussion about EQUIVALENCE seems to be in order. In electronics, we have things called Thevenin and Norton equivalent circuits. If you have a 3 terminal system with a million nodes and resistors between the 3 terminals (in, out and ground), it can be distilled down to one of these equivalent circuits, each of which is only 3 resistors (series/parallel and parallel/series combinations). In principle, these equivalent circuits can be derived using only Ohms Law and the property of superposition. The point being that if you measure the behavior of the terminals, a 3 resistor network can duplicate the terminal behavior exactly, but clearly is not modeling the millions of nodes and millions of resistors that the physical circuit is comprised of. In fact, there’s an infinite variety of combinations of resistors that will have the same behavior, but the equivalent circuit doesn’t care and simply models the behavior at the terminals. I consider the SB relationship to be analogous to Ohms Law, where power is current, temperature is voltage and emissivity is resistance, but owing to superposition in the energy domain, that is 1 Joule can to X amount of work, 2 Joules can to twice the work and heating the surface takes work, the same kinds of equivalences are valid. I don’t know much about electronic circuitry and simple analogies can be misleading. Suppose I have components whose response depend on frequency. Don’t you need a separate equivalent circuit for each frequency? Aren’t real components non-linear if you put too much power through you circuit? If radiation of a given wavelength entering a layer of atmosphere doesn’t already have blackbody intensity for the temperature of that layer (absorption and emission are in equilibrium), the S-B equation can not tell your how much energy will come out the other side. It is as simple as that. Wrong is wrong. It was derived assuming the existence of such an equilibrium. Look up any derivation. Frank, “Don’t you need a separate equivalent circuit for each frequency?” No. The kinds of components that have a frequency dependence are inductors and capacitors. The way that the analysis is performed is to apply a Laplace transform converting to the S domain which makes capacitors and inductors look like resistors and equivalence still applies although now, resistance has a frequency dependent imaginary component called reactance. Impedance is the magnitude of a real resistance and imaginary reactance. ” Don’t you need a separate equivalent circuit for each frequency?” It’s a good point. As George says, you can assign a complex impedance to each element, and combine them as if they were resistances. But then the Thevenin equivalent is a messy rational function of jω. If you want to use the black box approach to assigning an equivalent impedance, you really do have to do a separate observation and assign an impedance for each frequency. If it’s a simple circuit, you might after a while be able to infer the rational function. Nick, “you really do have to do a separate observation and assign an impedance for each frequency.” Yes, this is the case, but it’s still only one model. Some models are one or more equations, or passive circuit that defines the transfer function for that device, some are piece wise linear approximations others parallel. And the Thevinian equivalence is just that from the 3 terminals you can’t tell how complex the interior is as long as the 3 terminals behave the same. Op Amps are modeled as a transfer function and input and output passive components to define the terminals impedance. We use to call what I did as stump the chump (think stump the trunk), whenever we did customer demos some of the engineers present would try to find every difficult circuit they worked on and give it to me to see if we could simulate it, and then they’d try to find a problem with the results. And basically if we were able to get or create models, or alternative parts we were always able to simulate it and explain the results, even when they appeared wrong. I don’t ever really remember bad sim results that wasn’t an application of the proper tool in the proper settings problem. I did this for 14 years. Short answers: No Most no, there are active devices with various nonlinear transfer functions.. “In electronics, we have things called Thevenin and Norton equivalent circuits.” Yes. But you also have a Thevenin theorem, which tells you mathematically that a combination of impedances really will behave as its equivalent. For the situations you are looking at, you don’t have that. Nick, “Thevenin theorem” Yes, but underlying this theorem is the property of superposition and relative to the climate, superposition applies in the energy domain (but not in the temperature domain). Nick Stokes, No response to my response to you here? Sorry, I didn’t see it. But I find it very hard to respond to you and George. There is such a torrent of words, and so little properly written out maths. Could you please write out the actual calculation? The actual calculation for what? not complicated, it’s just arithmetic. In equations, the balance works out like this: Ps -> surface radiant emissions Pa = Ps*A -> surface emissions absorbed by the atmosphere (0 < A surface emissions passing through the transparent window in the atmopshere Pa*K -> fraction of Pa returned to the surface (0 < K fraction of Pa leaving the planet Pi -> input power from the Sun (after reflection) Po = Ps*(1-A) + Pa*(1-K) -> power leaving the planet Px = Pi + Pa*K -> power entering the surface In LTE, Ps = Px = 385 W/m^2 Pi = Po = 240 W/m^2 If A ~ .75, the only value of K that works is 1/2. Pick a value for one of A or K and the other is determined. Lets look at the limits, A == 0 -> K is irrelevant because Pa == 0 and Pi = Po = Ps = Px as would be expected if the atmosphere absorbed no energy A == 1 -> Ps == Ps and the transparent window is 0. (1 – K) = 240/385 = 0.62, therefore K = 0.38 and only 38% of the absorbed energy must be returned to the surface to offset its emissions. A ~ 0.75 -> K ~ 0.5 to meet the boundary conditions. If A > 0.75, K < 0.5 and less than half of the absorption will be returned to the surface. if A 0.5 and more than half of what is absorbed must be returned to the surface. Note that at least 145 W/m^2 must be absorbed by the atmosphere to be added to 240 and result in the 385 W/m^2 of emissions which requires K == 1. Therefore, A must be > 145/385, or A > 0.37. Any value of A between 0.37 and 1 will balance, providing the proper value of K is selected. George, It doesn’t get to CS. But laid out propoerly makes the flaw more obvious “If A ~ .75, the only value of K that works is 1/2” Circularity. You say that we observe Ps = 385, that means A=0.75, and so K=.5. But how do you then know that K will be .5 if say A changes. It’s just an observed value in one set of circumstances. “A == 1 -> Ps == Ps and the transparent window is 0. (1 – K) = 240/385 = 0.62” Some typo there. But it seems completely wrong. If A==1, you don’t know that Px=385. It’s very unlikely. With no window, the surface would be very hot. Nick, “If A==1, you don’t know that Px=385. It’s very unlikely.” The measured Px is 385 W/m^2 (or 390 W/m^2 per Trenberth), and you are absolutely correct that A == 1 is very unlikely. For the measured steady state where Px = Ps = 385 W/m^2 and Pi = Po = 240 W/m^2, A and K are codependent. If you accept that K = 1/2 is a geometrical consideration, then you can determine what A must be based on what is more easily quantified. If you do line by line simulations of a standard atmosphere with nominal clouds, you can calculate A and then K can be determined. When I calculate A in that manner, I get a value of about 0.74 which is well within the margin of error of being 0.75. I can’t say what A and K are exactly, but I can say that their averages will be close to 0.75 and 0.5 respectively. I’ve also developed a proxy for K based on ISCCP data and it shows monthly K varyies between 0.47 and 0.51 with an average of 0.495, which is an average of 1/2 within the margin of error. “If you accept that K = 1/2 is a geometrical consideration” I don’t. You have deduced it here for particular circumstances. I still don’t see how you are getting to sensitivity. “You have deduced it here for particular circumstances. I still don’t see how you are getting to sensitivity.” The value of 1/2 emerges from measured data and a bottoms up calculation of A. I’ve also been able to quantify this ratio from the variables supplied in the ISCCP data set and it is measured to be about 1/2 (average of .49 for the S hemisphere and .50 for the N hemisphere). Sensitivity is the relationship between incremental input energy and incremental surface temperature. Figure 3 shows the measured and predicted relationship between output power and surface temperature where in LTE, output power == input power, thus is a proxy for the relationship between the input power and the surface temperature. The slope of this relationship is the sensitivity (delta T / delta P). The measurements are of the sensitivity to variable amounts of solar power (this is different for each 2.5 degree slice). The 3.7 W/m^2 of ‘forcing’ attributed to doubling CO2 means that doubling CO2 is EQUIVALENT to keeping the system (CO2 concentrations) constant and increasing the solar input by 3.7 W/m^2, at least this is what the IPCC’s definition of forcing infers. Nick, just a foundational starting point to work from and further discuss all of this. It means 293 W/m^2 goes into the black box and 92 W/m^2 passes through the entirety of the box (the same as if the box, i.e. the atmosphere, wasn’t even there). Remember with black box system analysis, i.e. modeling the atmosphere as a black box, we are not modeling the actual behavior or actual physics occurring. The derived equivalent model from the black box is only an abstraction, or the simplest construct that gives the same average behavior. What is so counter intuitive about equivalent black box modeling is that what you’re looking at in the model is not what is actually happening, it’s only that the flow of energy in and out of the whole system *would be the same* if it were what was happening. Keep this in mind. ” Figure 3 shows the measured …” It doesn’t show me anything. I can’t read it. That’s why I’d still like to see the math written out. Nick, “It doesn’t show me anything. I can’t read it. That’s why I’d still like to see the math written out.” Are you kidding? The red dots are data (no math required) and the green line is the SB relationship with an emissivity of 0.62, that’s the math. How much simpler can it get? Don’t be confused because it’s so simple. Nick, The black box the model is not an arbitrary model that happens to give the same average behavior (from the same ‘T’ and ‘A’). Critical to the validity of what the model actually quantifies is that it’s based on clear and well defined boundaries where the top level constraint of COE can be applied to the boundaries; moreover, the manifested boundary fluxes themselves are the net result of the all of the effects, known and unknown. Thus there is nothing missing from the whole of all the physics mixed together, radiant and non-radiant, that are manifesting the energy balance. (*This is why the model accurately quantifies the aggregate dynamics of the steady-state and subsequently a linear increase in adaption of those aggregate dynamics, even though it’s not modeling the actual behavior). The critical concept behind equivalent systems analysis and equivalent modeling derived from black box analysis is that there are infinite number of equivalent states the have the same average, or there are an infinite number of physical manifestations that can have the same average. The key thing to see is 1) the conditions and equations he uses in the div2 analysis bound the box model to the same end point as the real system, i.e. it must have 385 W/m^2 added to its surface while 239 W/m^2 enters from the Sun and 239 W/m^2 leaves at the TOA, and 2) whether you operate as though what’s depicted in the box model is what’s occurring or to whatever degree you can successfully model the actual physics of the steady-state atmosphere to reach that same end point, the final flow of energy in and out of the whole system must be the same. You can even devise a model with more and more micro complexity, but it is still none the less bound to the same end point when you run it, otherwise the model is wrong. This is an extremely powerful level of analysis, because you’re stripping any and all heuristics out and only constraining the final output to satisfy COE — nothing more. That is, for the rates of joules going in to equal the rates of joules going out (of the atmosphere). In physics, there is thought to be nothing closer to definitive than COE; hence, the immense analysis power of this approach. Again, in the end, with the div2 equivalent box model you’re showing and saying balance would be equal at the surface and the TOA — if half were radiated up and half were radiated down as depicted, and from that (and only from that!), you’re deducing that only about half of the power absorbed by the atmosphere from the surface is acting to ultimately warm the surface (or acting to warm the surface the same as post albedo solar power entering the system); and that if the thermodynamic path that is manifesting the energy balance, in all its complexity and non-linearity, adapts linearly to +3.7 W/m^2 of GHG absorption, where the same rules of linearity are applied as they are for post albedo solar power entering the system, per the box model it would only take about 0.55C of surface warming to restore balance at the TOA (and not the 1.1C ubiquitously cited). Also, for the box model exercise you are considering on EM radiation because the entire energy budge, save for infinitesimal amount for geothermal, is all EM radiation (from the Sun), EM radiation is all that can pass across the system’s boundary between the atmosphere and space, and the surface (with an emissivity of about 1) radiates back up into the atmosphere the same amount of flux its gaining as a result of all the physical processes in the system, radiant and non-radiant, know and unknown. “Are you kidding?” It’s a visibility issue. The colors are faint and the print is small. And the organisation is not good. Nick, “It’s a visibility issue.” Click on figure 3 and a high resolution version will pop up. George, “high resolution version will pop up” That helps. But there is no substitute for just writing out the maths properly in a logical sequence. All I’m seeing from Fig 3 in terms of sensitivity is a black body curve with derivatives. But that is a ridiculous way to compute earth sensitivity. It ignores how the Earth got to 287K with a 240W/m2 input. It’s because of back radiation. Suppose at your 385 W/m2 point, you increase forcing by 1 W/m2. What rise in T would it take to radiate that to space? You have used the BB formula with no air. T rises by just 0.19K. But then back radiation rises by 0.8 W/m2 (say). You haven’t got rid of the forcing at all. Nick, “It ignores how the Earth got to 287K with a 240W/m2 input. It’s because of back radiation.” How did you conclude this? It should be very clear that I’m not ignoring this. In fact, the back radiation and equivalent emissivity are tightly coupled through the absorption of surface emissions. “You have used the BB formula with no air. T rises by just 0.19K. But then back radiation rises by 0.8 W/m2 (say). You haven’t got rid of the forcing at all.” Back radiation does not increase by 0.8C (you really mean 4.3 W/m^2 to offset a 0.8C increase). You also need to understand that the only thing that actually forces the system is the Sun. The IPCC definition of forcing is highly flawed and obfuscated to produce confusion and ambiguity. CO2 changes the system, not the forcing and the idea that doubling CO2 generates 3.7 W/m^2 of forcing is incorrect and what this really means is that doubling CO2 is EQUIVALENT to 3.7 W/m^2 more solar forcing, keeping the system (CO2 concentrations) constant. ” It should be very clear that I’m not ignoring this.” Nothing is clear until you just write out the maths. ” what this really means is that doubling CO2 is EQUIVALENT to 3.7 W/m^2 more solar forcing” Yes. That is what the IPCC would say too. The point is that the rise in T in response to 3.7 W/m2 is whatever it takes to get that heat off the planet. You calculate it simply on the basis of what it takes to emit it from the surface, ignoring the fact that most of it comes back again through back radiation. When you are in space looking down, or on the surface looking up at radiation, that’s all baked in already. I’ve Ive shown what it changes dynamically throughout the day. Nick, “You calculate it simply on the basis of what it takes to emit it from the surface,” I calculate this based on what the last W/m^2 of forcing did, which was to increase surface emissions by 1.6 W/m^2 affecting about a 0.3C rise. It’s impossible for the next W/m^2 to increase surface emissions by the 4.3 W/m^2 required to affect a 0.8C temperature increase. Nick, Here’s another way to look at it. Starting from 0K, the first W/m^2 of forcing will increase the temperature to about 65K for a sensitivity of 65K per W/m^2. The next W/m^2 increases the temperature to 77K for an incremental sensitivity of about 12K per W/m^2. The next one increases it to 85K for a sensitivity of 8C per W/m^2 and so on and so forth as both the incremental and average sensitivity decreasing with each additional W/m^2 of input as expressed per a temperature, while the energy based sensitivity is a constant 1 W/m^2 of surface emissions per W/m^2 of forcing. CO2 and most other GHG’s are not a gas below about 195K where the accumulated input forcing has risen to about 82 W/m^2 and the sensitivity has monotonically decreased to about 1.1K per W/m^2. There are 158 W/m^2 more forcing to get to the 240 W/m^2 we are at now and about 93C more warmth to come, meanwhile; GHG’s start to come in to play as well as clouds as water vapor becomes prevalent. Even accounting for a subsequent linear relationship between forcing and temperature, which is clearly a wild over-estimation that fails to account for the T^-3 dependence of the sensitivity on forcing, 93/158 is about 0.6 C per W/m^2 and we are already well below the nominal sensitivity claimed by the IPCC. This is but one of the many falsification tests of a high sensitivity that I’ve developed. . Fig. 2, reasonably calculated verified against observation of the real surface & atm. system, not pushed too far, can be very instructive to learn who has made correct basic science statements in this thread vs. those that are confused about the basic science. Fig. 2 is at best an analogue, useful for helping one understand some basic physics, possibly to frame testable hypotheses, even to estimate relative changes if used judiciously. Some examples: 1) mass, gravity, insolation did not change in Fig. 2 when the CO2 et. al. replaced N2/O2 up to current atm. composition, yet the BB temperature increased to that observed! 2) No conduction, no convection, no LH entered Fig.2 , yet the BB temperature increased to that observed! No change in rain, no change in evaporation entered either. No energy was harmed or created. Entropy increased and Planck law & S-B were unmolested, no gray or BB body def. was harmed. Wein displacement was unneeded. Values of Fig. 2 A were used as measured in the literature not speculated. 3) No Schwarzschild equation was used, no discussion of KE or PE quantum energy transfer among air molecules, no lines, no effective emission level, no discussion of which frequencies deeply penetrate ocean water, no distribution of clouds other than fixed albedo, no lapse rate, no first convective cycle, no loss of atm. or hydrostatic discussion, no differentials yet Fig. 2 analogue works demonstrably well according to observations. Decent starting point. 4) Fig 2 demonstrates if emissivity of the atmosphere is increasing because of increased amounts of infrared-active gases, this suggests that temperatures in the lower atmosphere could increase net of all the other variables. Demonstrates the basic science for interpreting global warming as the result of “closing the window”. As the transmissivity of the (analogue) atmosphere decreases, the radiative equilibrium temperature T increases. Same basis for interpreting global warming as the result of increased emission. As the gray body emissivity increases, so does the radiative equilibrium temperature. No conduction, no convection, no lapse rate was harmed or needed to obtain observed global temperature from Fig. 2. 5) Since many like to posit their own thought experiment, to further bolster the emission interpretation, consider this experiment. Quickly paint the entire Fig. 2 BB on the left with a highly conducting smooth silvery metallic paint, thereby reducing its emissivity to near zero. Because the BB no longer emits much terrestrial radiation, little can be “trapped” by the gray body atmosphere. Yet the atmosphere keeps radiating as before, oblivious to the absence of radiation from the left (at least initially; as the temperature of the gray body atmosphere drops, its emission rate drops). Of course, if this metallic surface doesn’t emit as much radiation but continues to absorb SW radiation, the surface temperature rises and no equilibrium is possible until the left surface terrestrial emission (LW) spectrum shifts to regions for which the emissivity is not so near zero & steady state balance obtained. IMO, dead nuts understanding Fig. 2 will set you on the straight and narrow basic science, additional complexities can then be built on top, added – like basic sensitivity. Fig. 3 is unneeded, build a case for sensitivity from complete understanding of Fig. 2. . More later. Trick, “If equivalent to a forcing then it’s a forcing! If NOT a forcing, then it is not equivalent.” An ounce of gold is equivalent to about $1200. Are they the same? There’s a subtle difference between a change in stimulus and a change to the system, although either change can have an effect equivalent to a specific change in the other. The IPCC’s blind conflation of changes in stimulus with changes to the system is part of the problem and contributes to the widespread failure of consensus climate science. It gives the impression that rather then being EQUIVALENT, they are exactly the same. If the Sun stopped shining, the temperature would drop to zero, independent of CO2 concentrations or any change thereof. Would you still consider doubling CO2 a forcing influence if it has no effect? “I calculate this based on what the last W/m^2 of forcing did” Again, it’s very frustrating that you won’t just write down the maths. In Fig 3, your gray curve is just Po=σT^4. S-B for a black-body surface, where Po is flux from the surface, and T is surface T. You have differentiated (dT/dP) this at T=287K and called that the sensitivity 0.19. That is just the rise in temp that is expected if Po rises by 1 W/m2 and radiates into space. It has nothing to do with the rise where there is an atmosphere that radiates back, and you still have to radiate 1W/m2 to space. Nick, “it’s very frustrating that you won’t just write down the maths.” Equation 2) is all the math you need which expresses the sensitivity of a gray body at some emissivity as a function of temperature. This is nothing but the slope of the green line predictor (equation 1) of the measured data. What other equations do you need? Remember that I’m making no attempt to model what’s going on within the atmosphere and my hypothesis is that the planet must obey basic first principles physical laws at the boundaries of the atmosphere. To the extent that I can accurately predict the measured behavior at these boundaries, and it is undeniable that I can, it’s unnecessary to describe in equations what’s happening within the atmosphere. Doing so only makes the problem far more complex than it needs to be. “It has nothing to do with the rise where there is an atmosphere that radiates back, and you still have to radiate 1W/m2 to space.” Are you trying to say that the Earth’s climate system, as measured by weather satellites, is not already accounting for this? Not only does it, it accounts for everything, including that which is unknown. This is the problem with the IPCC’s pedantic reasoning; they assume that all change is due to CO2 and that all the unknowns are making the sensitivity larger. Each slice of latitude receives a different amount of total forcing from the Sun, thus the difference between slices along the X axis of figure 3 and the difference in temperature between slices along the Y axis represents the effect that incremental input power (solar forcing) has on the temperature, at least as long as input power is approximately equal to the output in the steady state, which of course, it must be. Even the piddly imbalance often claimed is deep in the noise and insignificant relative to the magnitude and precision of the data. I think it’s time for you to show me some math. 1) Do you agree that the sensitivity is a decreasing function of temperature going as 1/T^3? If not, show me the math that supersedes my equation 2. 2) Do you agree that the time constant is similarly a decreasing function of temperature with the same 1/T^3 dependence? If not show me the math that says otherwise. My math on this was in a previous response where I derived, Pi = Po + dE/dt as equivalent to Pi = E/tau + dE/dt and since T is linear to E and Po goes as T^4, tau must go as 1/T^3. Not only does the sensitivity have a strong negative temperature coefficient, the time it takes to reach equilibrium does as well. 3) Do you agree that each of the 240 W/m^2 of energy from the Sun has the same contribution relative to the 385 W/m^2 of surface emissions which means that on average, 1.6 W/m^2 of surface emissions results from each W/m^2 of solar input. If not, show me the math that says the next W/m^2 will result in 4.3 W/m^2 to affect the 0.8C temperature increase ASSUMED by the IPCC. Hey George, do you have surface data for those bands? I can get you surface data by band? micro6500, The ISCCP temperature record was calibrated against surface measurements on a grid basis, but there are a lot of issues with the calibration. A better data set I can use to calibrate it myself would be appreciated, although my preferred method of calibration is to pick several grid points whose temperatures are well documented and not subject to massive adjustments. I’m not so much concerned about absolute values, just relative values, which seem to track much better, at least until a satellite changes and the cross calibration changes. Mine is better described as an average of the stations in some area. Min, max day to day average change, plus a ton of stuff. It’s based on ncdc gsod dataset. Look on the source forge page, reports, very 3 beta, get that zip. Then we can discuss what some of the stuff is, and then what you want for area per report. Can you supply me a link? I probably won’t have too much time to work on this until the snow melts. I’ll be relocating to Squaw Valley in a day or 2, depending on the weather. I need to get my 100+ days on the slopes in and I only have about a 15 days so far (the commute from the Bay area sucks). BTW, once my relocation occurs, my response time will get a lot slower, but I do have wifi at my ski house will try to get to my email at least once a day. I can also be reached by email at my handle @ one of my domains, one of which serves the plots I post. “An ounce of gold is equivalent to about $1200. Are they the same?” Not the same. They are equivalent. Both will buy an equal amount of goods and services like skiing. Just as a solar forcing being equivalent to a certain CO2 increase will buy an equal amount of surface kelvins. This was supposed to say: “Also, for the box model exercise you are considering only EM radiation….” Nick, The most important thing to understand is black box modeling is not in any way attempting to model or emulate the actual thermodynamics, i.e. the actual thermodynamic path manifesting the energy balance. Based on your repeated objections, that seemed to be what you thought it was trying to do somehow. It surely cannot do that. The foundation is based on the simple principle that in the steady-state, for COE to be satisfied, the number of joules going in, i.e. entering does not exist. The surface at a steady-state temperature of about 287K (and a surface emissivity of 1), radiates about 385 W/m^2, which universal physical law dictates must somehow be replaced, otherwise the surface will cool and radiate less or warm and radiate more. For this to occur, 385 W/m^2, independent of how it’s physically manifested, must somehow exit the atmosphere and be added to the surface. This 385 W/m^2 is what comes out of the black box at the surface/atmosphere boundary to replace the 385 W/m^2 radiated away from the surface as consequence of its temperature of 287K. Emphasis that the black box only considers the net of 385 W/m^2 gained at the surface to actually be exiting at its bottom boundary, i.e. actually leaving the atmosphere and being added to the surface. That there is significant non-radiant flux in addition the flux radiated from the surface (mostly in the form of latent heat) — is certainly true, but an amount equal to the non-radiant flux leaving the surface must be being offset by flux flowing into the surface in excess of the 385 W/m^2 radiated from the surface, otherwise a condition of steady-state doesn’t exist. The fundamental point relative to the black box, is joules in excess of 385 W/m^2 flowing into or away from the surface are not adding or taking away joules from the surface, nor are they adding or taking away joules from the atmosphere. That is, they are not joules entering or leaving the black box (however, they none the less must all be conserved). With regard to latent heat, evaporation cools the surface water from which it evaporated and as it condenses, transfers that heat to the water droplets the vapor condenses upon, and is the main. Keep in mind that the non-radiant flux leaving surface and all its effects on the energy balance (which are no doubt huge) have already had their influence on the manifestation of the surface energy balance, i.e. the net of 385 W/m^2 added to the surface. In fact, all of the effects have, radiant and non-radiant, known and unknown. Also, the black box and its subsequent model does not imply that the non-radiant flux from the surface does not act to accelerate surface cooling or accelerate the transport of surface energy to space (i.e. make the surface cooler than it would otherwise be). COE is considered separately for the radiant parts of the energy balance (because the entire energy budget is all EM radiation), but this doesn’t mean there is no cross exchange or cross conversion of non-EM flux from the surface to EM flux out to space or vice versa. There also seems to be some misunderstanding in physical terms. ‘F’ is the free variable in the analysis that can be anywhere from 0-1.0 and quantifies the equivalent fraction of surface radiative power captured by the atmosphere (quantified by ‘A’) that is *effectively* gained back by the surface in the steady-state. Because the black box considers only 385 W/m^2 to be actually coming out at its bottom and being added the surface, and the surface radiates the same amount (385 W/m^2) back up into the box, COE dictates that the sum total of 624 W/m^2 (385+239 = 624) must be continuously exiting the box at both ends (385 at the surface and 239 at the TOA), otherwise COE of all the radiant and non-radiant fluxes from both boundaries going into the box is not being satisfied (or there is not a condition of steady-state and heating or cooling is occurring). What is not transmitted straight through by the surface into space (292 W/m^2), must be being added to the energy stored by the atmosphere, and whatever amount of the 239 W/m^2 of post albedo solar power entering the system that doesn’t pass straight to the surface must be going into the atmosphere, adding those joules to the energy stored by the atmosphere as well. While we perhaps can’t quantify the latter as well as we can quantify the transmittance of the power radiated from the surface (quantified by ‘T’), the COE constraint still applies just the same, because an amount equal to the 239 W/m^2 entering the system from the Sun has to be exiting the box none the less. From all of this, since flux exits the atmosphere over 2x the area it enters from, i.e. the area of the surface and TOA are virtually equal to one another, it means that the radiative cooling resistance of the atmosphere as a whole is no greater than what would be predicted or required by the raw emitting properties of the photons themselves, i.e. radiant boundary fluxes and isotropic emission on a photonic level. Or that an ‘F’ value of 0.5 is the same IR opacity through a radiating medium that would *independently* be required by a black body emitting over twice the area it absorbs from. The black box and its subsequently derived equivalent model is only attempting to show that the final flow of energy in and out of the whole system is equal to the flow it’s depicting, independent of the highly complex and non-linear thermodynamic path actually. Absolutely nothing more. The bottom line is the flow of energy in and out of the whole system is a net of 385 W/m^2 gained by the surface, while 239 W/m^2 enters from the Sun and 239 W/m^2 leaves at the TOA, and the box equivalent model matches this final flow (while fully conserving all joules being moved around to manifest it). Really only 385 385 W/m^2 added to the surface, are physically manifested. The black box isn’t interested in or doesn’t care about the how, but only what amount of flux actually comes out at its boundaries, relative to how much flux enters from its boundaries. Sorry, RW, I can’t go on with this until you make the effort to write out the maths. Just too many fuzzy words. Nick, “Sorry, RW, I can’t go on with this until you make the effort to write out the maths. Just too many fuzzy words.” OK, let’s start with this formula: dTs = (Ts/4)*(dE/E), where Ts is equal to the surface temperature and dE is the change in emissivity (or change in OLR) and E is the emissivity of the planet (or total OLR). OLR = Outgoing Longwave Radiation. Plugging in 3.7 W/m^2 for 2xCO2 for the change in OLR, we get dTs = (287K/4) * (3.7/239) = 1.11K This is how the field of CS is arriving at the 1.1C of so-called ‘no-feedback’ at the surface, right? This is supposed to be the *intrinsic* surface warming ability of +3.7 W/m^2 of GHG absorption, right? “OK, let’s start with this formula:”. As to the status of 1.1C, that is something people use as a teaching aid. You’d have to check the source as to what they mean. Nick, .” Why in relation to what? We’re assuming a steady-state condition and an instantaneous change, i.e. an instantaneous reduction in OLR. I’m not saying this says anything about the feedback in response or the thermodynamic path in response. It doesn’t. We need to take this one step a time. “As to the status of 1.1C, that is something people use as a teaching aid. You’d have to check the source as to what they mean.” It’s the amount CS quantifies as ‘no-feedback’ at the surface, right? What is this supposed to be a measure of if not the *intrinsic* surface warming ability of +3.7 W/m^2 of GHG absorption? I think that’s a big assumption. 1.1C is the temp rise of a doubling of co2 at our atm’s concentrate. I’m not sure if they are suppose to be the same or not. micro6500, “1.1C is the temp rise of a doubling of co2 at our atm’s concentrate” This comes from the bogus feedback quantification that assumes that .3C per W/m^2 is the pre-feedback response, moreover; it assumes that feedback amplifies the sensitivity, while in the Bode linear amplifier feedback model climate feedback is based on, feedback affects the gain which amplifies the stimulus. “Why in relation to what?” ???. You said we start with a formula. I said it was wrong. You left out dε/ε and need to justify it. That is the maddening lack of elementary math reasoning here. “It’s the amount CS quantifies as ‘no-feedback’ at the surface, right?”. Nick, “???. You said we start with a formula. I said it was wrong. You left out dε/ε and need to justify it. That is the maddening lack of elementary math reasoning here.”.” Yes, I know. All the models are doing though is applying a linear amount of surface/atmosphere warming according to the lapse rate. The T^4 ratio between the surface (385 W/m^2) and the TOA (239) quantifies the lapse rate, and is why that formula I laid out gets the same exact answer as the models. And, yes I’m well aware that the 1.1C is only a theoretical conceptual value. Nick, The so-called ‘zero-feedback’ Planck response for 2xCO2 is 3.7 W/m^2 at TOA per 1.1C of surface warming. It’s just linear warming according to the lapse rate, as I said. From a baseline of 287K, +1.1C is about 6 W/m^2 of additional net surface gain, and 385/239 = 1.6, and 6/3.7 = 1.6. Nick, .” What I mean here is I’m not making any assumption regarding any dynamic thermodynamic response to the imbalance and its effect on the change in energy in the system. I’m just saying that *if* the surface and atmosphere are linearly warmed according to the lapse rate, 1.1 C at the surface will restore balance at the TOA and that this is the origin of the claimed ‘no-feedback’ surface temperature increase for 2xCO2. I accept the point that the atmosphere is more complicated than the great bodies used to validate radiation radiative heat transfer and the black body/ gray body theory. But at the end of the day we evaluate models based on how well they match real world data. If the data fit the gray body model of atmosphere best, it’s the best model. All models wrong some models are useful right? The unavoidable conclusion is the gray body model of the atmosphere is much more useful than the general circulation models. I checked with Occam and he agrees. bitsandatomsblog: The best model is the one that optimizes the entropies of the inferences that are made by the model. This model is not the result of a fit and is not wrong. botsanddatamblog, “I checked with Occam and he agrees.” Yes, Occam is far more relevant to this discussion than Trenberth, Hansen, Schlesinger, the IPCC, etc. yes and when we speak about global circulation models the man we need to get in tough with goes by the name of Murphy 😉 gray bodies not great bodies! This data shows that various feedback to CO2 warming must work in a way that makes the atmosphere behave like a gray body. “This data shows that various feedback to CO2 warming must work in a way that makes the atmosphere behave like a gray body.” The operative word being MUST. George, Is power out measured directly by satellite? I am understanding it is not. Can you share links to input data? Thanks for this post and your comments. bits…, The power output is not directly measured by satellites, but was reconstructed based on surface and clouds temperatures integrated across the planet’s surface combined with line by line radiative transfer codes. The origin of temperature and cloud data was the ISCCP data set supplied by GISS. It’s ironic that their data undermines their conclusions by such a wide margin. The results were cross checked against arriving energy, which is more directly measured as reflection and solar input power, again integrated across the planets surface. When their difference is integrated over 30 years of 4 hour global measurements, the result is close to zero. CO2isnotevil (and I suspect the author of this post) say: “[Climate] Sensitivity is the relationship between incremental input energy and incremental surface temperature. Figure 3 shows the measured and predicted relationship between output power and surface temperature where in LTE, output power == input power, thus is a proxy for the relationship between the input power and the surface temperature. However, the power output travels thorough a different atmosphere from the surface at 250 K to space than traveling from a surface at 300 K to space. The relationship between temperature and power out seen on this graph is caused partially by how the atmosphere changes from location to location on the planet – and not solely by how power is transmitted to space as surface temperature rises.. Well yeah I found that outgoing radiation was being regulated by water vapor by noticing the rate curve did not match the measurement I took. And it will take hardly cause any increase in temperature because the cooling rate will automatically increase the time at the high cooling mode, before later reducing the rate to the lower rate. micro6500 wrote: “Well yeah I found that outgoing radiation was being regulated by water vapor by noticing the rate curve did not match the measurement I took.” Good. Where was this work published? I most reliable information I’ve seen comes the paper linked below, which looks at the planetary response as measured by satellites to seasonal warming every year. That is 3.5 K warming, the net result of larger warming in summer in the NH (with less land and a shallower mixed layer) than in the NH. (The process of taking temperature anomalies makes this warming disappear from typical records.) You can clearly see that outgoing LWR increases about 2.2 W/m2/K, unambiguously less than expected for a simple BB without feedbacks. The change is similar for all skies and clear skies (where only water vapor and lapse rate feedbacks operate). This feedback alone would make ECS 1.6 K/doubling. You can also see feedbacks in the SWR channel that could further increase ECS. The linear fit is worse and interpretation of these feedbacks is problematic (especially through clear skies). Seasonal warming (NH warming/SH cooling) not an ideal model for global warming. Neither is the much smaller El Nino warming used by Lindzen. However, both of these analyses involve planetary warming, not moving to a different location – with a different atmosphere overhead – to create a temperature difference. And most of the temperature range in this post comes from polar regions. The data is scattered across 70 W/m2 in the tropics, which cover half of the planet. The paper also shows how badly climate models fail to reproduce the changes seen from space during seasonal warming. They disagree with each other and with observations. Frank, Did you read my post to you here: If George has a valid care here, this is largely why you’re missing it and/or can’t see it. You’ve accepted the way the field has framed the feedback question, and it is dubious this framing of it is correct. It’s certainly at least arguably not physically logical for the reasons I state. .” A big complaint from George is climate science does not use the standard way of quantifying sensitivity of the system to some forcing. In control theory and standard systems analysis, the sensitivity in response to some stimuli or forcing is always quantified as just output/input and is a dimensionless ratio of flux density to flux density of the same units of measure, i.e. W/m^2/W/m^2. The metric used in climate science of degrees C per W/m^2 of forcing has the same exact quantitative physical meaning. As a simple example, for the climate system, the absolute gain of the system is about 1.6, i.e. 385/239 = 1.61, or 239 W/m^2 of absorbed solar flux (the input) is converted into 385 W/m^2 of radiant flux emitted from the surface (the output). An incremental gain in response to some forcing greater than the absolute gain of 1.6 indicates net positive feedback in response, and an incremental gain below the absolute gain of 1.6 indicates net negative feedback in response. The absolute gain of 1.6 quantifies what would be equivalent to the so-called ‘no-feedback’ starting point used in climate science, i.e. per 1C of surface warming there would be about +3.3 W/m^2 of emitted through the TOA, and +1C equals about 5.3 W/m^2 of net surface gain and surface emission and 5.3/1.6 = 3.3. A sensitivity of +3.3C (the IPCC’s best estimate) requires about +18 W/m^2 of net surface gain, which requires an incremental gain of 4.8 from a claimed ‘forcing’ of 3.7 W/m^2, i.e. 18/3.7 = 4.8, which is 3x greater than the absolute gain (or ‘zero-feedback’ gain) of 1.6, indicating net positive feedback of about 300%. What you would be observing at the TOA so far as radiation flux if the net feedback is positive or negative (assuming the flux change is actually a feedback response, which it largely isn’t and a big part of the problem with all of this), can be directly quantified from the ratio of output (surface emission)/input (post albedo solar flux). If this isn’t clear and fully understood, the framework where George is coming from on all of this would be extremely difficult, if not nearly impossible to see. We can get into why the ratio of 1.6 is already giving us a rough measure of the net effect of all feedback operating in the system, but we can’t go there unless this is fully understood first. Frank, As I suggested to George, a more appropriate title of this essay might be ‘Physically logical constraints on Climate Sensitivity’. It’s not being claimed that the physical law itself, in and of itself, constrains the sensitivity within the bounds being claimed. But rather given the observed and measured dynamic response of the system in the context of the physical law, it’s illogical to conclude the incremental response to an imposed new imbalance, like from added GHGs, will different than already observed measured response. That’s really all. just was looking at ISCCP data. Is formula for power out something like this? Po = (Tsurface^4) εσ (1 – %Cloud) + (Tcloud^4) εσ ( %Cloud) Or do you make a factor for each type of cloud that is recorded by ISCCP? bits… Your equation is close, but the power under cloudy skies has 2 parts based on the emissivity of clouds (inferred by the reported optical depth) where some fraction of surface energy also passes through. Po = (Tsurface^4) (εs)σ (1 – %Cloud) + ((Tcloud^4) (εc) + (TSurface)^4) (1 – εc) (εs)) (σ %Cloud) where εs is the emissivity relative to the surface for clear skies and εc is the emissivity of clouds. It a little more complicated than this, but this is representative. Does the are weighted sum of the Power out in each 2.5 degree latitude ring equal the Trenberth energy balance diagram? Would be interesting to see that trend as a time series, I think! Both for the global number and the time series for all bands. There is a lot more area in the equatorial bands on the right than the polar bands on the left, right? bits… “Does the are weighted sum of the Power out in each 2.5 degree latitude ring equal the Trenberth energy balance diagram?” The weighted sum does balance. There are slight deviation plus or minus from year to year, but over the long term, the balance is good. One difference with Trenberth’s balance is that he significantly underestimates the transparent window and I suspect that this is because he fails to account for surface energy passing through clouds and/or cloud emissions that pass through the transparent window. Another difference is that Trenberth handles the zero sum influence on the balance of energy transported by matter by lumping its return as part of what he improperly calls back ‘radiation’. One thing to keep in mind is that each little red dot is not weighted equally. 2.5 degree slices of latitude towards the poles are weighted less than 2.5 degree slices of latitude towards the equator. Slices are weighted based on the area covered by that slice. This plot shows the relationship between the input power (Pi) and output emissions (Po). The magenta line represents Pi == Po. The ’tilt’ in the relationship is the consequence of energy being transported from the equator to the poles. The cross represents the average. Excellent article and responses George. Your clarity and grasp of the subject are exceptional. Martin, Thanks. A lot of my time, effort and personal fortune has gone into this research and it’s encouraging that it’s appreciated. I could study this entire post for a year and still probably not glean all the wisdom from it. Hence, my next comment might show my lack of study, but, hey, no guts no glory, so I am going forth with the comment anyway, knowing that my ignorance might be blasted (which is okay — critique creates consistency): First, I am already uncomfortable with the concept of “global average temperature”. Second, I am now aware of another average called “average height of emission”. Third, I seem to be detecting (in THIS post) yet another average, denoting something like an “average emissivity”. I think that I am seeing a Stefan Boltzmann law originally derived on the idea of a SURFACE, now modified to squish a real-world VOLUME into such an idealized surface of that original law, where, in this real-world volume, there are many other considerations that seem to be at high risk of being sanitized out by all this averaging. We have what appears to be an average height of emission facilitating this idea of an ideal black-body surface acting to derive (in backwards fashion) a temperature increase demanded by a revamped S-B law, as if commanding a horse to walk backwards to push a cart, in a modern, deep-physics-justified version of the “greenhouse effect”. Two words: hocus pocus … and for my next act, I will require a volunteer from the audience. I NOT speaking directly to the derivation of this post, but to the more conventional (I suppose) application of S-B in the explanation that says emission at top of atmosphere demands a certain temperature, which seems like an unreal temperature that cannot be derived FIRST … BEFORE … the emission that seemingly demands it. Robert, The idea of an ’emission surface’ at 255K is an abstraction that doesn’t correspond to reality. No such surface actually exists. While we can identify 4 surfaces between the surface and space whose temperature is 255K (google ‘earth emission spectrum’), these are kinetic temperatures related to molecules in motion and have nothing to do with the radiant emissions. In the context of this article, the global average temperature is the EQUIVALENT temperature of the global average surface emissions. The climate system is mostly linear to energy. While temperature is linear to stored energy, the energy required to sustain a temperature is proportional to T^4, hence the accumulated forcing required to maintain a specific temperature increases as T^4. Conventional climate science seems to ignore the non linearity regarding emissions. Otherwise, it would be clear that the incremental effect of 1 W/m^2 of forcing must be less then the average effect for all W/m^2 that preceded, which for the Earth is 1.6 W/m^2 of surface emissions per W/m^2 of accumulated forcing. Climate science obfuscates this by presenting sensitivity as strictly incremental and expressing it in the temperature (non linear) domain rather than in the energy (linear) domain. It’s absolutely absurd that if the last W/m^2 of forcing from the Sun increases surface emissions by only 1.6 W/m^2 that the next W/m^2 of forcing will increase surface emissions by 4.3 W/m^2 (required for a 0.8C increase). George, Wouldn’t a more appropriate title for this article be “Physically Logical Constraints on the Climate Sensivitity? Based on the title, I think a lot of people interpreting you as saying the physical law itself, in and of itself, constrains sensitivity to such bounds. Maybe this is an important communicative point to make. I don’t know. RW, “Wouldn’t a more appropriate title for this article be “Physically Logical Constraints on the Climate Sensivitity [sic]?” Perhaps, but logical arguments don’t work very well when trying to counter an illogical position. Perhaps, but a lot of people are going to take it to mean the physical law itself, in and of itself, is what constrains the sensitivity, and use it as a means to dismiss the whole thing as nonsensical. I guess this is my reasoning for what would be maybe a more appropriate or accurate title. Can’t stop thinking about this. If we take the time series of global Pin – Pout and integrate to Joules / m^2, it should line up with or lead global temperature time series, right? “Can’t stop thinking about this. If we take the time series of global Pin – Pout and integrate to Joules / m^2, it should line up with or lead global temperature time series, right?” Yes, since temperature is linear to stored energy they will line up. More interesting though is that the seasonal difference is over 100 W/m^2 p-p, centered roughly around zero and that this also lines up with seasonal temperature variability. Because of the finite time constant Pout always lags Pin per hemisphere. Globally, its gets tricky because the N hemisphere response is significantly larger than the S hemisphere response because the S has a larger time constant owing to a larger fraction of water and when they are added, they do not cancel and the global response has the signature of the N hemisphere. I have a lot of plots that show this for hemispheres, parts of hemispheres and globally based on averages across the entire ISCCP data set. The variable called Energy Flux is the difference between Pi and Po. Note that the seasonal response shown is cancelled out of the data for anomaly plots, otherwise, the p-p temp variability would be so large, trends, present or not, would be invisible. Well you can use the seasonal slopes of solar and temp, and calculate sensitivity? No need to throw it away without using it. I use the whole pig. micro6500, “Well you can use the seasonal slopes of solar and temp, and calculate sensitivity?” More or less, but because the time constants are on the order of the period (1-year), the calculated sensitivity will be significantly less than the equilibrium sensitivity which is the final result after at least 5 time constants have passed. What’s your basis for a 5 year period? micro6500, “What’s your basis for a 5 year period?” Because after 5 time constants, > 99% of the effect it can have will have manifested itself. (1 – e^-N) is the formula quantifying how much of the equilibrium effect will have been manifested in N time constants. And where did this come from? micro6500, “And where did this come from?” One of the solutions to the differential equation, Pi = E/tau + dE/dt, is a decaying exponential of the form e^kt since if E=e^x, dT/dx is also e^x. Other solutions are of the form e^jwt which are sinusoids. If you google TIME CONSTANT and look at the wikipedia page, it should explain the math as it also asserts that the input (in this case Pi) is the forcing function which is the proper quantification of what forcing is. Why decaying exponential? While it’s been decades, I was pretty handy with rich circuits and could simulate about anything except vlsi models I didn’t have access to. “Why decaying exponential? ” Because the derivative of e^x is e^x and the DE is the sum of a function and its derivative. You realize nightly temp fall is a decaying exponential. And it’s period would be 5 days. Also, what I discovered is why nightly temps fall is a decaying exponential. micro6500, “You realize nightly temp fall is a decaying exponential. And it’s period would be 5 days. Also, what I discovered is why nightly temps fall is a decaying exponential.” Yes, I’m aware of this and the reason is that it’s a solution to the DE quantifying the energy fluxes in and out of the planet. But you’re talking about the time constant of the land, which actually varies over a relative wide range (desert, forest, grassland, concrete, etc.), while overnight, ocean temperatures barely change at all. Even on a seasonal basis, the average ocean temps vary by only a few degrees. That the average global ocean temperature changes at all on a seasonal basis is evidence that the planet responds much more quickly to change than required to support the idea of a massive amount of warming yet to manifest itself. At the most, perhaps a little more than half of the effect of the CO2 emitted in the last 12 months has yet to manifest. The time constant of land is significantly shorter than that of the oceans and is why the time constant of the S hemisphere is significantly longer than that for the N hemisphere. Once again, the property of superposition allows spatially and temporarily averaging time constants, which is another metric related to energy and its flux. Part of the reason for the shorter than expected time constant (at least to those who support the IPCC) for the oceans is that they store energy as a temperature difference between the deep ocean cold and warm surface waters and this can change far quicker than moving the entire thermal mass of the oceans. As a simple analogy, you can consider the thermocline to be the dielectric of a capacitor which is manifested by ‘insulating’ the warm surface waters from the deep ocean cold. If you examine the temperature profile of the ocean, the thermocline has the clear signature of an insulating wall. Air temps over land, and ocean air temps, not ocean water temps. micro6500, “Air temps over land, and ocean air temps, not ocean water temps.” That explains why it’s so short. Same as air over land. I don’t think this is correct at all. First I show that only a small fraction even remains over a single night, in the spring that residual(as the days grow longer) is why it warms in the spring, and for the same reason as soon as the length of days starts to shorten, the day to day change responds within days to start the process of losing more energy than it receives each day. This is the average of the weather stations for each hemisphere. Units are degrees F/day change. Here I added calculated solar, by lat bands This last one shows the step in temp after the 97-98 El Nino. micro6500, “I don’t think this is correct at all. First I show that only a small fraction even remains over a single night” Yes, diurnal change could appear this way, except that its the mean that slowly adjusts to incremental CO2, not the p-p daily variability which is variability around that mean. Of course, half of the effect from CO2 emissions over the last 12 months is an imperceptible small fraction of a degree and in grand scheme of things is so deeply buried in the noise of natural variability it can’t be measured. bits… One other thing to notice is that the form of the response is exactly as expected from the DE, Pi(t) = Po(t) + dE(t)/dt where the EnergyFlux variable is dE/dt Bits, if you haven’t yet, follow my name here, and read through the stuff there. It fits nicely with you question. And I have a ton of surface reports at the sourceforge link. thanks well look at source forge! George I would love to work with these data sets to validate the model (or not, right?) I looked at a bunch of the plots at which is you, I assume. bits…, Yes, those are my plots. They’re a bit out of date (generated back in 2013) where since then, I’ve refined some of the derived variables including the time constants and added more data as it becomes available from ISCCP, but since the results aren’t noticeably different, I haven’t bothered to update the site. The D2 data from ISCCP does a lot of the monthly aggregation for you, is available on-line via the ISCCP web site and is a relatively small data set. It’s also reasonably well documented on the site (several papers by Rossow et. all). I’ve also obtained the DX data to do the aggregation myself after correcting the satellite cross calibration issues, but this is almost 1 Tb of data and hard to work with. Even with Google’s high speed Internet connections (back when I worked for them), it took me quite a while to download all of the data. I have observed that the D2 aggregation is relatively accurate, so I would suggest starting there. One cannot cross validate the model as no statistical population underlies this model and a statistical population is required for cross validation. terry, “One cannot cross validate the model as no statistical population underlies this model and a statistical population is required for cross validation.” 3 decades of data sampled at 4 hour intervals, which for the most part is measured by 2 or more satellites, spanning the entire surface of the Earth at no more than a 30 km resolution is not enough of a statistical population? The entity that you describe is a time series rather than a statistical population. Using a time series one can conduct an IPCC-style “evaluation.” One cannot conduct a cross validation as to so requires a statistical population and there isn’t one. Ten or more years ago, IPCC climatologists routinely confused “evaluation” with “cross validation.” The majority of journalists and university press agents still do so but today most professional climatologists make the distinction. To make the distinction is important because models that can be cross validated and models that can be evaluated differ in fundamental ways. Models that can be cross validated make predictions but models that can be evaluated make projections. Models that make predictions supply us with information about the outcomes of events but models that make projections supply us with no information. Models that make predictions are falsifiable but models that make projections are not falsifiable. A model that makes predictions is potentially usable in regulating Earth’s climate but not a model that makes projections. Professional climatologists should be building models that make predictions but they persist in building models that make projections for reasons that are unknown to me. Perhaps, like many amateurs, they are prone to confusing a time series with a statistical population. Terry, A statistical population is necessary when dealing with sparse measurements homogenized and extrapolated to the whole, as is the case with nearly all analysis done by consensus climate science. In fact a predicate to homogenization is a normal population of sites, which is never actually true (cherry picked sites are not a normal distribution). I’m dealing with the the antitheses of sparse, cherry picked data, moreover; more than a dozen different satellites with different sensors have accumulated data with overlapping measurements from at least 2 satellites looking a the same points on the surface at nearly the same time in just about all cases. Most measurements are redundant across 3 different satellites and many are redundant across 4 (two overlapping geosynchronous satellites and 2 polar orbiters at a much lower altitude). If you’re talking about a statistical population being the analysis of the climate on many different worlds, we can point to the Moon and Mars as obeying the same laws, which they do. Venus is a little different due to the runaway cloud coverage condition dictating a completely different class of topology, none the less, it must still obeys the same laws of physics. If neither of these is the case, you need to be much clearer about what in your mind constitutes a statistical population and why is this necessary to validate conformance to physical laws? I’m not aware of past research in the field of global warming climatology. If you know of one please provide a citation. “I’m not aware of past research in the field of global warming climatology. If you know of one please provide a citation.” Hansen Lebedeff Homogenization GISStemp and any other temperature reconstruction that claims to support a high sensitivity or extraordinary warming trends. co2isnotevil: Thank you for positively responding to my request for a citation to a paper that made reference to a statistical population. In response to the pair of citations with which you responded, I searched the text of the paper that was cited first for terms that made reference to a statistical population. This paper was authored by the noted climatologist James Hansen. The terms on which I searched were: statistical, population, sample, probability, frequency, relative frequency and temperature. “Statistical” produced no hits. “Population produced six hits, all of which were to populations of humans. “Sample” produced one hit which was, however, not to a collection of the elements of a statistical population.” “Probability” produced no hits. “Frequency” produced no hits. “Relative frequency” produced no hits. “Temperature” produced about 250 hits. Hansen’s focus was not upon a statistical population but rather was upon a temperature time series. Terry, The reference for the requirement for a normal distribution of sites is specific to Hansen Lebedeff homogenization. The second reference relies on this technique to generate the time series as do all other reconstructions based on surface measurements. My point was that the requirement for a normal distribution of sites is materially absent from the site selection used to reconstruct the time series in the second paper. The term ‘statistical population’ is an overly general term, especially since statistical analysis underlies nearly everything about climate science, except the analysis of satellite data. Perhaps you can be more specific about how you define this term and offer an example as it relates to a field you are more familiar with. co2isnotevil: I agree with you regarding the over generality of the term “statistical population.” By “statistical population” I mean a defined set of concrete objects aka sampling units each of which is in a one-to-one relationship with a statistically independent unit event. For global warming climatology an element in this set can be defined by associating with the concrete Earth an element of a partition of the time line. Thus, under one of the many possible partitions, an element of this population is the concrete Earth in the period between Jan 1, 1900 at 0:00 hours GMT and Jan 1, 1930 at 0:00 hours GMT. Dating back to the beginning of the global temperature record in the year 1850 there are between 5 and 6 such sampling units. This number is too few by a factor of at least 30 for conclusions to be reached regarding the causes of rises in global temperatures over periods of 30 years. I disagree with you when you state that “statistical analysis underlies nearly everything about climate science, except the analysis of satellite data.” I would replace “statistical” by “pseudostatistical” and “science” by “pseudoscience.” Terry, ‘I would replace “statistical” by “pseudostatistical” and “science” by “pseudoscience.”’ Fair enough. So your point is that we don’t have enough history to ascertain trends, especially since there’s long term periodicity that’s not understood, and on that I agree which is why I make no attempt to establish the existence or non existence of trends. The analysis I’ve done is to determine the sensitivity by extracting a transfer function through quantifying the systems response to solar input from satellite measurements. The transfer function varies little from year to year, in fact, almost not at all, even day to day. It’s relatively static nature means that an extracted average will be statistically significant, especially since the number of specific samples is over 80K, where each sample is comprised of millions of individual measurements. A key insight here is that dPower/dLattitude is well known from optics and geometry. dTemperature/dLattitude is also well known. To get dTemp/dPower just divide. [The mods note that dPowerofdePope/dAltitude is likely to be directly proportional to the Qty_Angels/Distance. However, dTemperature/dAltitude seems to be inversely proportinal to depth as one gets hotter the further you are from dAngels. .mod] Oops. I meant to say “I’m not aware of past research in the field of global warming climatology that was based upon a statistical population. If you know of one please provide a citation. George / co2isnotevil You have argued successfully in my view for the presence of a regulatory mechanism within the atmosphere which provides a physical constraint on climate sensitivity to internal thermal changes such as that from CO2 However, you seem to accept that the regulatory process fails to some degree such that CO2 retains a thermal effect albeit less than that proposed by the IPCC. You have not explained how the regulatory mechanism could fail nor have you considered the logical consequences of such failure. I have provided a mass based mechanism which purports to eliminate climate thermal sensitivity from CO2 or any other internal processes altogether but which acknowledges that as a trade off there must be some degree of internal circulation change that alters the balance between KE and PE in the atmosphere so that hydrostatic equilibrium can be retained. That mechanism appears to be consistent with your findings. If climate sensitivity to CO2 is not entirely eliminated then surface temperature must rise but then one has more energy at the surface than is required to both achieve radiative equilibrium with space AND provide the upward pressure gradient force that keeps the mass of the atmosphere suspended off the surface against the downward force of gravity yet not allowed to drift off to space. The atmosphere must expand upwards to rebalance but that puts the new top layer in a position where the upward pressure gradient force exceeds the force of gravity so that top layer will be lost to space. That reduces the mass and weight of the atmosphere so the higher surface temperature can again push the atmosphere higher to create another layer above the critical height so that the second new higher layer is lost as well. And so it continues until there is no atmosphere. The regulatory process that you have identified cannot be permitted to fail if the atmosphere is to be retained. The gap between your red and green lines is the surface temperature enhancement created by conduction and convection. The closeness of the curves of the red and green lines shows the regulatory process working perfectly with no failure. George: In Figure 2, if A equals 0.75 – which makes OLR 240 W/m2 – then DLR is 144 W/m2 (Ps*A/2). This doesn’t agree with observations. Therefore your model is wrong. (Some people believe DLR doesn’t exist or isn’t measured properly by the same kind of instruments used to measure TOA OLR. If DLR is in doubt, so is TOA OLR – in which case the whole post is meaningless.) Frank, Ps*A/2 is NOT a quantification of DLR, i.e the total amount of IR the atmosphere as a whole passes to the surface, but rather it’s the equivalent fraction of ‘A’ that is *effectively* gained back by the surface in the steady-state. Or it’s such that the flow of energy in and out of the whole system, i.e. the rates of joules gained and lost at the surface and TOA, would be the same. Nothing more. It’s not a model or emulation of the actual thermodynamics and thermodynamic path manifesting the energy balance, for it would surely be spectacularly wrong if it were claimed to be.. George, Maybe we can clear this up. What does your RT simulation calculate for actual DLR at the surface? It’s roughly 300 W/m^2…maybe like 290 W/m^2 or something, right? RW, “Maybe we can clear this up. What does your RT simulation calculate for actual DLR at the surface?”. Note that this is the case, even if some of the return of non radiant energy was actually returned as non radiant energy transformed into photons. However; there seems to be enough non radiant return (rain, weather, downdrafts, etc.) to account for the non radiant energy entering the atmosphere, most of which is latent heat. When you only account for the return of surface emissions absorbed by GHG’s and clouds, the DLR returned to the surface is about 145 W/m^2. Remember that there are 240 W/m^2 coming from the Sun warming the system. In LTE, all of this can be considered to all affect the surface temperature owing to the short lifetime of atmospheric water which is the only significant component of the atmosphere that absorbs any solar input. Given that the surface and by extension the surface water temporarily lifted into the atmosphere absorbs 240 W/m^2, only 145 W/m^2 of DLR is REQUIRED to offset the 385 W/m^2 of surface emissions consequential to its temperature. If there was more actual DLR this, the surface temperature would be far higher then it is.. Frank, . If Ps*(A/2) were a model of the actual physics, i.e. actual thermodynamics occurring, then yes it would be spectacularly wrong (or BS as you say). But it’s only an abstraction or an *equivalent* derived black box model so far as quantifying the aggregate behavior of the thermodynamics and thermodynamic path manifesting the balance. Let me ask you this question. What does DLR at the surface tell us so far as how much of A (from the surface) ultimately contributes to or is ultimately driving enhanced surface warming? It doesn’t tell us anything at all, much less quantify its effect on ultimate surface warming among all the other physics occurring all around it. Right?. “..then DLR is 144 W/m2 (Ps*A/2). This doesn’t agree with observations. Therefore your model is wrong.” No, Fig. 2 model is a correct text book physics analogue. What is wrong is setting A=0.75 too “dry” when global observations show A is closer to = 0.8 which calculates Fig. 2 global atm. gray block DLR of 158 (not 144 which is too low). Then after TFK09 superposing thermals (17) and evapotranspiration (80) with solar SW absorbed (78) by the real atm. results 158+17+80+78=333 all sky emission to surface over the ~4 years 2000-2004. Trick, The value A can be anythin and if it is 0.8 then more than 50% of absorption must be emitted into space and less than half is required to be returned to the surface. I’m not saying that this is impossible, but goes counter to the idea that more tha half must be returned to the surface. Keep in mind that the non radiant fluxes are not a component of A or of net surface emissions. ”BTW, the only way to accurately measure DLR is with a LWIR specific sensor placed at the bottom of a tall vacuum bottle (tube) pointed up…. you must measure ONLY those photons directed perpendicular to the surface“ No, the flux through the bottom of the atm. unit area column arrives from a hemisphere of directions looking up and down. The NOAA surface and CERES at TOA radiometers admit viewed radiation from all the angles observed. “Keep in mind that the non radiant fluxes are not a component of A or of net surface emissions” They are in the real world from which global A=0.8 is measured and calculated to 290.7K global surface temperature using your Fig. 2 analogue. “if it is 0.8 then more than 50% of absorption must be emitted into space and less than half is required to be returned to the surface.” The 0.8 global measured atm. Fig. 2 A emissivity returns (emits) half (158) to the surface and emits half (158) to space as in TFK09 balance real world observed Mar 2000 to May 2004: 158+78+80+17=333 all sky emission to surface and 158+41+40= 239 all-sky to space + 1 absorbed = 240, rounded. Your A=0.75 does not balance to real global world observed, though it might be result of a local RT balance as you write. Phil, “The collisions do not induce emission of a photon they cause a transfer of kinetic energy to the colliding partner …” The mechanism of collisional broadening, which supports the exchange between state energy and translational kinetic energy, converts only small amounts at a time and in roughly equal amounts on either side of resonance and in both directions. The kinetic energy of an O2/N2 molecule in motion is about the same as the energy of a typical LWIR photon. The velocity of the colliding molecule can not drop down to or below zero to energize a GHG, nor will its kinetic energy double upon a collision. Beside, any net conversion of GHG absorption to the translational kinetic energy of N2/O2 is no longer available to contribute to the radiative balance of the planet as molecules in motion do not radiate significant energy, unless they are GHG active. When we observe the emitted spectrum of the planet from space, there’s far too much energy in the absorption bands to support your hypothesis. Emissions are only attenuated by only about 50%, where if GHG absorption was ‘thermalized’ in the manner you suggest, it would be redistributed across the rest of the spectrum and we would not only see far less in the absorption bands, we would see more in the transparent window and the relative attenuation would be an order of magnitude or more. co2isnotevil said: 1) ” Excellent. In other words just what I have been saying about the non radiative energy tied up in convective overturning. The thing is that such zero sum non radiative energy MUST be treated as a separate entity from the radiative exchange with space yet it nonetheless contributes to surface heating which is why we have a surface temperature 33K above S-B. Since trhose non radiative elements within the system are derived from the entire mass of the atmosphere the consequence of any radiative imbalance from GHGs is too trivial to consider and in any event can be neutralised by circulation adjustments within the mass of the atmosphere. AGW proponents simply ignore such energy being returned to the surface and therefore have to propose DWIR of the same power to balance the energy budget. In reality such DWIR as there is has already been taken into account in arriving at the observed surface temperature so adding an additional amount (in place of the correct energy value of non radiative returning energy) is double counting. 2) “All non radiant flux does is to temporarily reorganize surface energy and while if may affect the specific origin of emitted energy it has no influence on the requirements for what that emitted energy must be” Absolutely. The non radiative flux can affect the balance of energy emitted from the surface relative to emissions from within the atmosphere and it is variable convective overturning that can swap emissions between the two origins so as to maintain hydrostatic equilbrium. The medium of exchange is KE to PE in ascent and PE to KE in descent. The non radiative flux itself has no influence on the requirement for what that emitted energy must be BUT it does provide a means whereby the requirement can be consistently met even in the face of imbalances created by GHGs. George is so close to having it all figured out. CO2isnotevil writes above and is endorsed by Wilde: . ” This is ridiculous. Let’s take the lapse rate, which is produced by convection. It controls the temperature (and humidity) within the troposphere, where most photons that escape to space are emitted (even in your Figure 2). Let’s pick a layer of atmosphere 5 km above the surface at 288 K where the current lapse rate (-6.5 K/km, technically I shouldn’t use the minus sign) means the temperate is 255 K. If we change the lapse rate to DALR (-10 K/km) or to 0 K/m – to make extreme changes to illustrate my point – the temperature will be 238 K or 288 K. Emission from 5 km above the surface, which varies with temperature, is going to be very different if the lapse rate changes. If you think terms of T^4, which is an oversimplification, 238 K is about 28% reduction in emission and 288 K is about 50% increase in emission. At 10 km, these differences will be twice as big. And this WILL change how much radiation comes out of the TOA. Absorption is fairly independent of temperature, so A in Figure 2 won’t change much. By removing these internal transfers of heat, you disconnect surface temperature from the temperature of the GHGs that are responsible for emitting photons that escape to space – that radiatively cool the earth. However, their absorption is independent of temperature. You think TOA OLR is the result of an emissivity that can be calculated from absorption. Emission/emissivity is controlled by temperature variation within the atmosphere, not absorption or surface temperature. If our atmosphere didn’t have a lapse rate, the GHE would be zero! In the stratosphere, where temperature increase with altitude, increasing CO2 increases radiative cooling to space and cools the stratosphere. Unfortunately, the change is small because few photons escaping to space originate there. CO2isnotevil writes: “When you only account for the return of surface emissions absorbed by GHG’s and clouds, the DLR returned to the surface is about 145 W/m^2. Remember that there are 240 W/m^2 coming from the Sun warming the system.” Partially correct. The atmosphere can emit an average of 333 W/m2 of DLR, not 145 W/m2 as calculated, because it receives about 100 W/m2 of latent and sensible heat from convection and absorbs about 80 W/m2 of SWR (that isn’t reflected to space and doesn’t reach the surface). Surface temperature is also the net result of all incoming and outgoing fluxes. ALL fluxes are important – you end up with nonsense by ignoring some and paying attention to others. CO2isnotevil writes; “In LTE, all of this can be considered to all affect the surface temperature owing to the short lifetime of atmospheric water which is the only significant component of the atmosphere that absorbs any solar input.” Read Grant Petty’s book for meteorologists, “A First Course in Atmospheric Radiation” and learn what LTE means. The atmosphere is not in thermodynamic equilibrium with the radiation passing through it. If it were, we would observe a smooth blackbody spectrum of emission intensity, perhaps uniformly reduced by emissivity. However, we observe a jagged spectrum with very different amounts of radiation arriving at adjacent wavelengths (where the absorption of GHGs differs). LTE means that the emission by GHGs in the atmosphere depends only on the local temperature (through B(lambda,T)) – and not equilibrium with the local radiation field. It means that excited states are created and relaxed by collisions much faster than by absorption or emission of photons – that a Boltzmann distribution of excited and ground states exists. See CO2isnotevil writes: .” In that case, all measurements of radiation are suspect. All detectors have a “viewing angle”, including those on CERES which measure TOA OLR and those here on Earth which measure emission of thermal infrared. We live and make measurements of thermal IR surrounded by a sea of thermal infrared photons. Either we know how to deal with the problem correctly and can calibrate one instrument using another or we know nothing and are wasting our time. DLR has been measured with instruments that record the whole spectrum in addition to pyrometers. I’ll refer you to figure’s in Grant Petty’s book showing the spectrum of DLR. You can’t have it both ways. You can’t cherry-pick TOA OLR and say that value is useful and at DLR and say that value may be way off. That reflects your confirmation bias in favor or a model that can’t explain what we observe. .” Right, but the RT simulations don’t rely on sensors to calculate DLR. Doesn’t your RT simulation get around 300 W/m^2? The 3.6 W/m^2 of net absorption increase per 2xCO2 — you’re RT simulations quantify this the same as everyone else in the field, i.e. the difference between the reduced IR intensity at the TOA and the increased IR intensity at the surface (calculated via the Schwartzchild eqn. the same way everyone else does). This result is not possible without the manifestation of a lapse rate, i.e. decreasing IR emission with height. You need to clarify that your claimed Ps*A/2 is only an abstraction, i.e. only an equivalent quantification of DLR after you’ve subtracted the 240 W/m^2 entering from the Sun from the required net flux gained at the surface required to replace the 385 W/m^2 radiated away a consequent of its temperature. And that it’s only equivalent so far as quantifying the aggregate behavior of the system, i.e. the rates of joules gained and lost at the TOA. People like Frank here: are getting totally faked out by all of this, i.e. what you’re doing. You need to explain and clarify that what you’re modeling here isn’t the actual thermodynamics and thermodynamic path manifesting the energy balance. Frank (and many others I’m sure) think that’s what you’re doing here. When you’re talking equivalence, it would be helpful to stipulate it, because it’s not second nature to everyone as it is for you. George, My understanding is your equivalent model isn’t saying anything at all about the actual amount of DLR, i.e. the actual amount of IR flux the atmosphere as a whole passes to the surface. It’s not attempting to quantify the actual downward IR intensity at the surface/atmosphere boundary. Most everyone, including especially Frank, think that’s what you’re claiming with your model. It isn’t, and you need to explain and clarify this.. The transport of energy by matter is an orthogonal transport path to the photon transport related to the sensitivity, and the ONLY purpose of this analysis was to quantify the relationship between the surface whose temperature we care about (the surface emitting 385 W/m^2) and the outer boundary of the planet which is emitting 240 W/m^2. The IPCC defines the incremental relationship between these two values as the sensitivity. My analysis quantifies this relationship with a model and compares the model to the data that the model is representing. Since the LTE data matches the extracted transfer function (SB with an emissivity of 0.62), the sensitivity of the model represents the sensitivity measured by the data so closely, the minor differences are hardly worth talking about. The claim for the requirement of 333 W/m^2 of DLR comes from Trenberth’s unrepresentative energy balance, but this has NEVER been measured properly, even locally, as far as I can tell, and nowhere is there any kind of justification, other then Trenberth’s misleading balance, that 333 W/m^2 is a global average. The nominal value of A=0.75 is within experimental error of what you get from line by line analysis of the standard atmosphere with nominal clouds (clouds being the most important consideration). Half of this is required both to achieve balance at the top boundary and to achieve balance at the bottom boundary. The real problem is that too many people are bamboozled by all the excess complexity added to make to climate system seem more complex than it needs to be. The problem is that the region between the 2 characterized boundaries is very complex and full of unknowns and you will never get any simulation or rationalization about its behavior correct until you understand how it MUST behave at its boundaries. Frank The lapse rate is NOT set by convection. It is set by gravity sorting molecules into a density gradient such that the gas laws dictate a lower temperature for a lower density. Therefore, however much conduction occurs at the surface there will always be a lapse rate and an isothermal atmosphere cannot arise even with no GHGs at all. Convection is a consequence of the lapse rate when uneven heating occurs via conduction (a non radiative process) at the surface beneath. The uneven surface warmimg makes parcels of gas in contact with the surface lighter than adjoining parcels so that they rise upward adiabatically in an attempt to match the density of the warmer parcel with the density of the colder air higher up..No radiative gases required. Convective overturning is a zero sum closed loop as far as the adiabatic component (most of it in our largely non radiative atmosphere) is concerned. Radiative imbalances are neutralised by convective adjustments within an atmosphere in hydrostatic equilibrium. “Radiative equilibrium profile could be unstable; convection restores it to stability (or neutrality)” George, I don’t know why you’re invoking DLR at the surface as some sort of means of explaining your derived equivalent model. It’s causing massive confusion and misunderstanding (see Frank’s latest post). To me, the entire point the model is ultimately making is DLR at the surface has no clear connection to A’s aggregate ability to ultimately drive and manifest enhanced surface warming, i.e. no clear connection to the underlying driving physics of the GHE via the absorption and (non-directional) re-radiation of surface emitted IR by GHGs amongst all the other effects, radiant and non-radiant, known and unknown, that are manifesting the energy balance. I’m perplexed why you think Ps*A/2 is attempting saying anything about DLR at the surface. To me, the whole point is it’s not. It’s instead quantifying something else entirely. Let’s be clear that what I (and I presume Frank) are referring to by DLR at the surface is the total amount of IR flux emitted from the atmosphere (as a whole mass) that *passes* to and is absorbed by the surface. Not saying it’s all necessarily added to the net flux gained the surface. Is this clear? You’ve kind of lost me a little here with these last few posts of yours. And that only about half of ‘A’ ultimately contributes to the overall downward IR push made in the atmosphere that drives and ultimately leads to enhanced surface warming (via the GHE). The point being it’s the downward IR push within or the divergence of upwelling surface IR captured and re-emitted back towards (and not necessarily back to the surface) that is the fundamental underlying driving mechanism slowing down the upward IR cooling that ultimately leads to enhanced warming of the surface — not DLR at the surface. If this is not correct, then I don’t understand your model (as I thought I did). RW, Your description of how absorbed energy per A is redistributed is correct. OK, I’m relieved. Your atmospheric RT simulator must calculate and have a value for downward IR intensity at the surface. I recall you’ve said its about 300 W/m^2 (or maybe 290 W/m^2 or something). I don’t know why you’re going the route of surface DLR to explain your model. It seems to be causing massive confusion on an epic scale. George, As clearly evidenced by this post here: Frank has absolutely no clue what you’re doing here with this whole thing. He’s totally and completely faked out. There’s got be a better way to step everyone through what you’re doing here with this exercise and derived equivalent model. I know it’s second nature to you what you’re doing with all of this (since you’ve successfully applied these techniques to a zillion different systems over the years), but most everyone else has no clue from what foundation all of this is coming from. They think this is spectacular nonsense, and it surely would be if what you’re actually doing and claiming with it is what they think it is.. Some fail to grasp the purpose because they deny the consequences. Others are bamboozled by excess complexity, others don’t understand the difference between photons and molecules in motion and still others are misdirected by their own specific idea of how things work. For example some think that the lapse rate sets the surface temperature. Nothing could be further from the truth since the lapse RATE is independent of the surface temperature, moreover; the atmospheric temperature profile is only linear to a lapse rate for a small fraction of its height. BTW, my responses going forward will be fewer and farther between since I intend to get some serious skiing in over the next few months. I finally got to Tahoe, Squaw has been closed for days and the top has as much as 15′ of fresh powder. George, .” I understand all of this, but others like Frank clearly don’t and are totally faked out. He has no clue what you’re doing with all of this. For one, you need to make it clear that your derived equivalent model only accounts for EM radiation, because the entire energy budget is all EM radiation, EM radiation is all that can pass across the system’s boundary between the atmosphere and space, and the surface emits EM radiation back up into the atmosphere at the same rate its gaining joules as a result of all the physical processes in the system, radiant and non-radiant, known and unknown. This is why your model doesn’t include or quantify non-radiant fluxes. They fundamentally don’t understand that your model is just the simplest construct that gives the same average behavior, i.e the same rates of joules gained and lost at the surface and TOA, while fully conserving all joules, radiant and non-radiant, being moved around to physically manifest it. And that the model is *only* a quantification of aggregate, already physically manifested, behavior. Or only a quantification of the aggregate behavior of the complex, high non-linear thermodynamic path manifesting the energy balance. They think your model is trying to model or emulate the actual thermodynamics and thermodynamic path manifesting the energy balance, as evidenced by Frank’s latest post. “Validate” is the wrong word. One cannot “validate” a model absent the underlying statistical population. “Evaluate” is the IPCC-blessed word for the cockeyed way in which global warming models are tested. Terry, OK. How about attempting to falsify my hypothesis which didn’t fail. BTW, I think I have and adequate sample space. I’m not attempting to identify trends from a time series, but using each of millions of individual measurements spanning all possible conditions found on the planet as representative of the transfer function quantifying the relationship between the radiant emissions of the surface consequential to its temperature and the emissions of the planet. co2: Contrary to how the phrase sounds, the “sample space” is not the entity from which a sample is drawn. Instead it is the “sampling frame” from which a sample is drawn. The “sample space” is the complete set of the possible outcomes of events. The elements of the sampling frame are the “sampling units.” The complete set of sampling units is the “statistical population.” For global warming climatology there is no statistical population or sampling frame. There are no sampling units. Thus there are no samples.There are, however, a number of different temperature time series. Many bloggers confuse a temperature time series with a statistical population thus reaching the conclusion that a model can be validated when it cannot. To attempt scientific research absent the statistical population is the worst blunder that a researcher can make as it assures that the resulting model will generate no information. I agree with you, but you can’t just try finding statistical significance between different measured values thinking that will give you insight. And too much of this seems, like is what is going on, lot of computing power available in most pc’s to do all sorts of things with statistics. But you won’t find it until you know the topic well enough to spot the areas that have seams, and roughness spots that need examined, and then you have to keep digging until you figure it out. Terry, “:Many bloggers confuse a temperature time series with a statistical population thus reaching the conclusion that a model can be validated when it cannot. ” Yes, when trying to predict the future based on a short time series of the past. There’s just too much long to medium term periodicity of unknown origin to extrapolate a linear trend from a short term time series. My point is that I have millions of samples of behavior from more than a dozen different satellites covering all possible surface and atmospheric condition whose average response is most definitely statistically significant. Not to extrapolate a trend, but to quantify the response to change, Terry, your attempt at obscuring the definitions of things makes you look ridiculous. A specific element in any given time series is an n-tuple of a) geographical coordinates, b) date/time stamp and c) a measured value. The “sample space is the set of ALL n-tuples. An element of a time series is called a sample drawn from the above mentioned sample space. Your use of the word “frame” is not applicable to what co2isnotevil is talking about. If you wish to introduce new terms to this discussion, please define them rigorously, or don’t use them. The whole point here, if I’m understanding this all correctly, is the radiative physics of the GHE that ultimately leads to enhanced surface warming are *applied* physics within the physics of atmospheric radiative transfer. The physics of atmospheric radiative transfer are NOT by themselves the physics of the GHE, or more specifically NOT the underlying driving physics of the GHE. This is a somewhat subtle, but crucial fundamental point relative to what you’re doing and modeling here that needs to be grasped and understood by everyone from the outset. DLR at the surface is the ultimate manifestation of the downward IR intensity through the whole of the atmosphere predicted by the Schartzchild eqn. at the surface/atmosphere boundary. This physical manifestation, however, is not the underlying physics of the GHE (or more specifically the underlying physics driving the GHE). Moreover perhaps, its manifestation at the surface has no clear relationship to absorptance A’s ability to drive the ultimate manifestation of enhanced surface warming, i.e. greenhouse warming of the surface via the absorption of surface IR by GHGs and the subsequent (non-directional) re-radiation of that absorbed surface IR energy among all of the other effects that manifest the energy balance (radiant and non-radiant). RW, and you can see the applied physics in this I’ll be on vacation and out of touch until Monday, Jan 16. Please defer responses until then. co2isnotevil said: . ” This is a point I made here some time ago about the Trnberth energy budget which shows latent heat and thermals going up but not returning to the surface in a zero sum adiabatic/convective loop. Instead Trenberth racked up DWIR to the surface by an identical amount and I pointed that out as a mistake. Many didn’t get it then and are not getting it now. George’s work, if correctly interpreted, shows that any DWIR from the atmosphere is already included in the S-B surface temperature with no additional surface temperature enhancement necessary or required. The reason being that at S-B surface temperature (beneath an atmosphere) WITH NO NON RADIATIVE PROCESSES GOING ON radiation to space from within the atmosphere would be matched by a reduction of radiation to space from the surface for a zero net effect. If one then adds convection as a non radiative process and acknowledge that convection up and down requires a separate closed energy loop then it follows that the surface temperature rises above S-B as a result of the non radiative processes alone George’s work appears to validate that since to get emission to space at 255k one needs a surface temperature of 33K higher than S-B to accommodate the additional surface energy tied up in non radiative processes. Trenberth et al have failed to account for the return of non radiative energy towards the surface via the PE to KE exchange in descending air. I don’t think your assessment of George’s work is correct. He agrees that added GHGs will enhance the GHE and ultimately lead to some surface warming (to restore balance at the TOA). He’s disputing the magnitude of surface warming that will occur. RW, I think George hasn’t yet realised the implications of his work. Maybe he will comment himself shortly. I suggested higher up the thread that for added GHGs to enhance the GHE it would have to cause the red curve to fail to follow the green curve but he seems to be saying that doesn’t happen. I’m pretty sure (I don’t want to put words in his mouth) he is, very similar to what Anthony and Willis just published, and it’s the TOA view of what I’ve found looking up. What is shows is the surface temp follows water vapor, and water vapor is so ubiquitous it’s affect completely (>90%) overwhelms the ghg effect of co2 on temperature. In this case George has shown this effect looks identical to an e=.62. micro6500 Water vapour certainly does make it far easier for the necessary convective adjustments to be made so as to neutralise the effect of non condensing GHGs such as CO2. The phase changes are very powerful. Water vapour causes the lapse rate slope to skew to the warm side so it is less steep. A less steep lapse rate slope slows down convection which allows humidity to rise. When humidity rises the dew point changes so that the vapour can condense out at a lower warmer height which then causes more radiation to space from clouds at the lower warmer height. That offsets the potential warming effect of CO2 and that is the mechanism which I suggested to David Evans when he was developing his hypothesis about multiple variable ‘pipes’ for radiative loss to space. The water vapour pipe increases to compensate for any reduction in the GHG (or CO2) pipe. But in the end, even without water vapour, convection would neutralise the radiative imbalance derived from non condensing GHGs and even if it does not do so the effect of GHGs is reduced to near zero anyway because the main cause of the GHE is convection within atmospheric mass as explained above. “When humidity rises the dew point changes” only if the air mass carries additional water in, but the conditions I’ve been discussing that is not part of the process, absolute humidity changes slowly as fronts move in. Rel humidity swings with temp, so changes significantly over a day, regardless of a weather change. To be absolutely clear, I do not dispute the fact that GHG’s and clouds warm the surface beyond what it would be without them and that both influences are purely radiative. But again, demonstrating this either way is not the purpose of this analysis which was focused on the sensitivity. The purpose was to separate the radiation out, model how it should behave by extracting the transfer function between surface temperature and planet emissions, test the resulting model with data measuring what is being predicted and if the model correctly describes the relationship between the surface temperature to the planets emissions into space, it also must quantify the sensitivity, which the IPCC defines as the incremental relationship between these two factors. This whole exercise is nothing more than an application of the scientific method to ascertain a quantitative measure of the sensitivity which to date has never been done. My original hypothesis was that the radiation fluxes MUST obey physical laws at the boundaries of the atmosphere and the best candidate for a law to follow was SB. The reason is that without an atmosphere, the planet is perfectly quantified as a BB (neglecting reflection as ‘grayness’) and the only way to modify this behavior is with a non unit emissivity, which the atmosphere provides, relative to the surface. This is the only possible way to ‘connect the dots’ between BB behavior and the observed behavior. Subsequent to this, I began to understand why this must be the case which is that a system with sufficient degrees of freedom will self organize itself towards ideal behavior as the goal of minimizing changes in entropy. If you look here under ‘Demonstrations of Control’, I’m considering writing another piece explaining how these plots arise as consequence of this hypothesis. co2isnotevil said this: “I do not dispute the fact that GHG’s and clouds warm the surface beyond what it would be without them and that both influences are purely radiative” Well, if you have radiative material within an atmosphere which is radiating out to space but not radiating to the surface then the surface would cool below S-B. But if that radiative material is also radiating down to the surface then the surface will indeed be warmed beyond what it otherwise would be but not to beyond the S-B expectation, only up to it. So, do GHGs radiate out to space at a different rate to the rate at which they radiate down to the surface or not ? The atmosphere is indefinitely maintained in hydrostatic equilibrium with no net radiative imbalances overall and so the balance MUST be equal once hydrostatic equilibrium has been attained. For CO2 molecules the idea is that they block outgoing at a certain wavelength so presumably they are supposed to radiate downward more powerfully than they radiate to space. Yet George shows that for the system as a whole the surface temperature curve follows the S-B curve in his diagram and he concludes that the system always moves successfully back to the ‘ideal’. That being the case, how can one reserve a residual RADIATIVE surface warming effect beyond S-B for any component of the atmosphere? I suggest that in so far as CO2 blocks outgoing radiation the water vapour ‘pipe’ counters any potential warming effect and even if there were no water vapour then other radiative material within the atmosphere operates to the same effect just as well. For example, stronger winds would kick up more dust which is radiative material and convection would ensure that radiation from such material would go out to space from the correct height along the lapse rate slope to ensure maintenance of hydrostatic equilibrium. Mars is a good example. I aver that the planet wide dust storms on Mars arise when the surface temperature rises too high for hydrostatic equilibrium so that winds increase, dust is kicked up and radiation to space from that dust increases until equilibrium is restored. Only a NON RADIATIVE surface warming effect fits the bill in every respect and that is identifiable not in the similarity between the slopes of the red and green curves but rather in the distance between the red and green curves. Water is the current main working fluid, where our planet is about in the middle of it’s 3 states temperature. But this is the actual net surface radiation with temp and rel humidity. This is 5 days, mostly clear, a few cumulus clouds on the middle two days afternoon. Then zoomed in so you can see the net outgoing radiation When this is going on at night, the switching between water open and water closed, it is visibly clear out. So as air temps near dew points, the water window closes to ir (but not visible), and the outgoing clear calm skies drops by about 2/3rds. This is where the e=.62 comes from. The temp globally does this. Co2 is ineffective at affecting temps, at least with all of the water vapor. Co2 does impact both rates by the 2 whatever watts, but some rel humidity is a temperature effect, it will stay in the high rate longer, until any excess temperature energy in the surface system (in relationship to dew point) is radiated away, the net rad measurement shows this. It does all of this with no measurable convection. Maybe 1,000 feet, but dead calm at the surface, and the first graph explains what surface temps are doing. Notice that there is almost no measured increase in max temperature? only min. And when you look at min alone, it jumps with dew point during the 97 El Nino, that is all that has happened, the oceans changed where the water vapor went. micro6500, I consider water to be the refrigerant in a heat pump implementing what we call weather. Follow the water and its a classic Carnot cycle. It’s certainly true that Co2 is a far less effective GHG than water vapor, moreover; water vapor is dynamic and provides the raw materials for the radiative balance control system. The volume of clouds is roughly proportional to atmospheric water content, but the ratio between cloud height and cloud area is one of those degrees of freedom I mentioned that drives the system towards an idealized steady state consistent with it’s goal to minimize changes in entropy in response to perturbations to the system or its stimulus. Then you are not understanding the chart I keep showing. What it’s showing is a temperature regulated switch that turns off 70% or so of the outgoing radiation from the surface once the set temp is reached. The set point temperature follows humidity levels. This process regulates morning minimum temperature everywhere rel humidity reaches 100% at night under clear calm skies. Yes, but you can’t directly measure that in your own backyard to whatever suitable accuracy to satisfy that co2 is not doing anything. I mean really glad you did this, it’s been needed for a long time. But it doesn’t kill their argument. Actually a test, I think you would say e will change as ghg increase forcing, at 62% or so. If what I discovered works like I think it will be more like less than 5 or 10%. And I think if you look at the temp record, you’d see it can’t be 62%. “So, do GHGs radiate out to space at a different rate to the rate at which they radiate down to the surface or not ?” If geometry matters, its equal. Stephen, “So, do GHGs radiate out to space at a different rate to the rate at which they radiate down to the surface or not ?” I would say, yes they do; however, this is a function of emission rate decreasing with height and NOT because the probability of photon emitted within is greater downwards than upwards. This is a key distinction that relates to all of this that many seem to be missing. With regard to what George is quantifying as ‘A’, the re-emission of ‘A’ is by and large equal in any direction regardless of the emitting rate where any portion of ‘A’ is actually absorbed. Even clouds are made up of small droplets that themselves radiate (individually) roughly equally in all directions, though of course the top of the clouds generally emit less IR up than the bottom of clouds emit IR downward. RW, I would go with George on this. Although temperature declines with height and the emission rate declines accordingly a cloud at any given height will radiate equally in all directions based on its temperature at that height. The depth of the cloud would be dealt with in the average emissions from the entire cloud. Micro6500 Your graphs relate to emissions from the surface but I was considering emissions from clouds to space. At a lower height along the lapse rate slope a cloud is warmer and radiates more to space. CO2 causes the cloud heights to drop. That is a mechanism whereby the ‘blocking’ of radiation to space by CO2 can be neutralised. Maybe it can, but it does not interfere with the decaying rate of cooling under clear skies that I have discovered that is from 2 cooling rates controlled by water vapor. The global average of min temp following dew points shows it is a global mechanism. How is that relevant to the point I made? Because I don’t think the two are associated, I don’t see how cloud top emissions can counter how wv closes the path for a significant amount of energy to space under clear skies. So, maybe I misunderstood your comment relating to this clear sky effect. I didn’t say that cloud top emissions counter how water vapour closes such a path. I was referring to the outgoing wavelengths blocked by CO2. CO2 absorbs those wavelenghs and prevents their emission to space. That distorts the lapse rate to the warm side, the rate of convection drops, humidity builds up at lower levels and clouds form at a lower warmer height because greater humidity causes clouds to form at a higher temperature and lower height for example 100% humidity allows fog to condense out at surface ambient temperature. I think this is ~33% mixture, and it doesn’t completely block 15u. Now, I can be pedantic, so if that’s all it is, okay, sorry 🙂 Stole this from Frank Exactly. The diffraction at the surface is because the speed of light in the material changes compared to a vacuum, or the medium those photons come from (ie different types of glass used in a pair of lens that are physically in contact with each other). The reason it’s a different speed is the atoms interact with that wavelength of photon, but it can still be transparent, like glass. Maybe these help explain my thoughts on this. Micro, I see that I made a typo which has misled you. Sorry. I typed ‘water vapour’ instead of ‘CO2’ in my post at 9.40 am. It is the distortion of the lapse rate by CO2 that I was intending to talk about. micro6500 January 13, 2017 at 6:54 am “CO2 absorbs those wavelenghs and prevents their emission to space.” I think this is ~33% mixture, and it doesn’t completely block 15u. Now, I can be pedantic, so if that’s all it is, okay, sorry 🙂 And a path length of only 10 cm. A high res spectra under those conditions shows complete absorption in the Q-branch but of course our atmosphere is a lot thicker than 10cm. At 400ppm the atmosphere will show a similar high res spectra at 10m. All true, but not blocked to space? Right? And Phil, I’d like your thoughts on this if you can take a look. Since we’ve talked a lot of this sort of thing. “However, if an object doesn’t emit at some wavelength, then it doesn’t absorb at that wavelength either and it is semi-transparent.” This is inconsistent with Planck law which demonstrates any massive object with positive radii and diameter much larger than wavelength of interest (semi-transparent or opaque) emits at all wavelengths at all temperatures, and a given angle of incidence and polarization, emissivity = absorptivity. Plancks Law is more relevant to liquids and solids. Gases emit and absorb specific wavelengths and its really not until it’s heated into a plasma that it will emit radiation conforming to Plancks Law. O2/N2 at 1 ATM neither absorbs or emits any measurable amount of radiation in the LWIR spectrum that the Earth emits. i.e. emissivity = absorption = 0 “..its really not until it’s heated into a plasma that it will emit radiation conforming to Plancks Law.” Not correct, gases radiate according to Planck law at all temperatures, all wavelengths including N2 and O2. Emissivity over the spectrum, in a hemisphere of directions would be very low for N2/O2 atm. but nonzero as shown by Planck law & measured gas emissivity over the spectrum. Envisage a radiative atmosphere in hydrostatic equilibrium with no non radiative processes going on. For the atmosphere to remain in hydrostatic equilibrium energy out must equal energy in for the combined surface / atmosphere system. If the atmosphere is radiative then energy goes out to space from within the atmosphere so that less must go out to space from the surface. Less energy going out to space from the surface requires a cooler surface so would the surface drop below S=B ? No it would not because the atmosphere would be radiating to the surface at the same rate as it radiates to space and the S-B surface temperature would be maintained. Thus S-B must apply to a radiative atmosphere just as much as to a surface with no atmosphere and DWIR is already accounted for in the S-B equation. If one then adds non radiative processes then they will require their own independent energy source and the surface temperature must rise above S-B The radiative theorists have mistakenly tried to accommodate the energy requirement of non radiative processes into the purely radiative energy budget. Quite a farrago has resulted. Instead of trying to envisage a non radiative atmosphere it turns out that the key is to envisage an atmosphere with no non radiative processes 🙂 ”Envisage a radiative atmosphere in hydrostatic equilibrium with no non radiative processes going on.” Ok, this is Fig. 2 gray body. ”For the atmosphere to remain in hydrostatic equilibrium energy out must equal energy in for the combined surface / atmosphere system.” There must also be no free energy, along with radiative equilibrium of Fig. 2. When there is free energy, get stormy weather. ”If the atmosphere is radiative then energy goes out to space from within the atmosphere so that less must go out to space from the surface.” There is MORE energy from the surface, not less. See Fig 2. See the arrow to the left into the surface? The arrow is correct. As A reduces from 0.8 to say 0.7 emissivity (dryer, and/or less GHG) THEN “less must go out to space from the surface”, the global T reduces still at S-B. “Less energy going out to space from the surface requires a cooler surface so would the surface drop below S=B ?” No, the surface is always at S-B, by law from many tests in radiative equilibrium of Fig. 2 as A varies over time. “If one then adds non radiative processes then they will require their own independent energy source and the surface temperature must rise above S-B” No, the sun is the only energy source burning a fuel that is needed. No mistake by radiative theorists only Stephen. Trick, By ‘independent energy source’ I simply mean the solar energy diverted by conduction and convection into the separate non radiative energy loop during the first cycle of convective overturning. No mistake by me there. I agree that absent of convective overturning the surface would remain at S-B because DWIR from atmosphere to surface offsets the potential cooling of the surface below S-B when the atmosphere also radiates to space. You cannot have MORE energy from the surface to space PLUS radiation to space from within the atmosphere without having more going out than coming in. There is no ‘free’ energy’. Energy in from the sun flows straight through the system giving radiative balance with space and energy in the convective overturning cycle is locked into the system permanently in a zero sum up and down loop. “I simply mean the solar energy diverted by conduction and convection” There is no such “diversion”, the system as shown in Fig 2 does not need any such “diversion” when the hydrological cycle is superposed. If there is no free energy in the column, there would not be storms, hydrostatic would prevail everywhere, but there are storms (non-hydrostatic) so Stephen is wrong about no ‘free energy’. Storms do not indicate free energy. They are merely a consequence of local imbalances and weather worldwide is the stabilising process in action. In the end, the atmosphere remains indefinitely in hydrostatic equilibrium because there is no net energy transfer between the radiative and non radiative energy loops once equilibrium has been attained. Stephen demonstrates his shallow understanding of meteorology in 8:20am comment. What is truly embarrassing for Stephen is that he makes no effort over the years to deepen his understanding through study of past work when his errors of imagination are pointed out. “Storms do not indicate free energy. They are merely a consequence of local imbalances..” Local imbalances IMPLY free energy Stephen as is shown in stormy weather which is NOT hydrostatic. Stephen could deepen his understanding by reading this paper but his lack of accomplishment in math (and especially in calculus involving rates of change i.e. derivatives & integrals) prevents his understanding of the basics. This is only one very famous 1954 paper in meteorology Stephen can’t comprehend: Hydrostatic per the paper: “Consider first an atmosphere whose density stratification is everywhere horizontal. In this case, although total potential energy is plentiful, none at all is available for conversion into kinetic energy.” Fig. 2 above in top post, shows no up down movements of PE to KE delivering 33K to the surface as Stephen always imagines as it is hydrostatic. Radiation is shown to deliver the increase in global surface temperature in Fig. 2 simply by increasing A above N2/O2. —– Stormy: “Next suppose that a horizontally stratified atmosphere becomes heated in a restricted region. This heating adds total potential energy to the system, and also disturbs the stratification, thus creating horizontal pressure forces which may convert total potential energy into kinetic energy.” Dr. Lorenz then goes on to develop the math, way, way…WAY beyond Stephen’s ability. But not beyond Trenberth’s ability, note Dr. Lorenz’ Doctoral student: The imbalances leading to storms might misleadingly be referred to as indicating ‘free energy’ locally but taking the atmosphere as a whole there is no free energy because storms are simply the process whereby imbalances are neutralised. Excess energy in one place is matched by a deficit elsewhere. Overall, every atmosphere remains in hydrostatic equilinbrium indefinitely. Obviously, a horizontally stratified atmosphere that is immobile in the vertical plane cannot make use of its potential energy.Thast is why the convective overturning cycle is so important. That is what shifts KE to PE in ascent and PE to KE in descent. Lorenz confirms that introducing a vertical component by disturbing the stratification converts PE to KE. I think Trick is wasting my time and that of general readers. ”Thast is why the convective overturning cycle is so important.” There is no surface convective overturning in your horizontally stratified atmosphere Stephen, every day is becalmed at the surface as in Fig. 2, again: Hydrostatic per the paper: “Consider first an atmosphere whose density stratification is everywhere horizontal. In this case, although total potential energy is plentiful, none at all is available for conversion into kinetic energy.” Lorenz confirms that introducing a vertical component by disturbing the stratification converts PE to KE, agree due introduction of imbalances in local heating (or cooling). It is Stephen’s imagination unconstrained by basic physics wasting time with known unphysical comments, making no or little progress over the years. Steve writes: “No it would not because the atmosphere would be radiating to the surface at the same rate as it radiates to space and the S-B surface temperature would be maintained.” I believe this is wrong. If we go to Venus, the atmosphere the flux from the atmosphere to the surface is not the same has it is from the atmosphere to space. The same is true on Earth (DLR 333 W/m2; TOA OLR 240 W/m2 if you trusted the numbers). However, it is much easier to see that this isn’t true when you think about Venus. In a non-convective gray atmosphere (ie radiative equilibrium) with no SWR being absorbed by the atmosphere, the difference between the upward flux and downward flux is always equal to the SWR flux being absorbed by the surface. That controls TOA OLR. DLR is depends on the optical thickness of the gray atmosphere at the surface. The mathematics of this is describe here: Frank, Separating the radiative and non radiative energy transfers into two separate ‘loops’ with no net transfer of energy between the two loops solves all those problems. Frank. Radiation from an atmosphere taken as a single complete unit must be emitted in all directions equally. That means radiation down must equal radiation up otherwise the atmosphere can never attain hydrostatic equilibrium. More going down than up means that the upward pressure gradient will always exceed the power of gravity and more going up than down means that the upward pressure gradient will always fall short of the power of gravity. One of the problems with working with averages, the surface all emits different from equator to pole, from east to west. I’m not sure I completely buy the numbers, but I can see them not being the same, and being different depending where you are.
https://wattsupwiththat.com/2017/01/05/physical-constraints-on-the-climate-sensitivity/
CC-MAIN-2020-10
refinedweb
70,470
60.45
See also: IRC log Alexey: I call you about the organization chart ... I want to project the IANA slide that I think was skipped yesterday (setting up projector) (IETF and IANA is projected) Alexey: IANA manages registries, and there are multiple entities that affect what IANA does ... If IETF adopts a procedure or defines a policy, IANA is required to follow it ... IANA does give input on what the policy should be ... IANA follows what IETF says in RFCs ... the other entity that affects IANA is the IAB (Internet Architecture Board) - talks to IANA about policy decisions like licensing ... IESG approves RFCs and so defines the formats, IAB controls the policy experts ... If people are unhappy with IANA policies they should not blame IANA - except in the case where IANA is slow in updating something AVK: can blame them about format, URL persistence Alexey: there is a document, RFC5226 which defines standard procedures for registries ... IETF can make any format that it wants, but there is a typical format for registries ... registries can have different policies, templates, levels of restrictiveness ... most permissive level is first come first serve ... examples include vendor names ... on the other end of the spectrum, the strictest ones require a standards track RFC ... in the middle is a procedure called "specification required" ... requires a stable specification from an IETF-recognized standards organization HS: Is there an official definition of what is a recognized standards organization? there are different opinions Alexey: no, it's not defined; people don't want to fix the list ... general criteria are: long established, stable document HS: why is stability a requirement? if the software moves faster than the registry, then the registry is out of date Alexey: depends on the registry - many registries are for developers ... for example, as a developer you may want to find all the link relations AVK: but as a developer, I find current IANA registries useless ... wikipedia is a better reference for URI schemes than IANA is ... vetting by experts makes registries incomplete and inaccurate HS: you said not just software implementors or others ... for years, image/svg+xml wasn't in the registry ... when Apple shipped MPEG-4, the type wasn't in the registry ... I can't think of any constituency for whom the registry says all that they want to know, or even close AVK: apart from pedants, maybe Alexey: a couple of comments on this ... different registries have different policies ... at the time when the registry was established, there was IETF consensus that this was the desired policy ... as time goes on, it may be that reality shows that a particular policy was too strict (or too permissive) ... maybe part of the answer is to revise the policy HS: in the days of classic MacOS when Carbon was still used a lot, and you needed four char type and creator codes, it seemd that the value for those codes was smaller than the space for MIME types ... so you'd think you'd have a greater need than for MIME types to limit who can get what, but Apple operated a registry on first-come first-serve basis and nothing bad came out <anne> MJS: you mentioned that it is possible to change the policy <anne> ... assuming that some of the folks here are interested in a much more permissive policy <anne> ... what would be the process to get the IETF to change <anne> Alexey: talk to the AD and talk to other people to initiate discussion <anne> Alexey: I'm happy to help with the progress Alexey: the other half of the answer ... there is a reason there are expert reviews for some of the registries, like MIME types ... people do make stupid mistakes in MIME types, so there is an opportunities to fix this HS: one of the supposed mistakes is using the text/* subtree for a lot of stuff, and there I would claim the mistake on the IETF side AVK: what proportion of MIME types are not in use when they are registered? it seems like most of them already are deployed by the time you go to register them, so it might be too late to fix Alexey: in the ideal world, people should ask experts up front <Julian> ! Alexey: one example is that you can't use UTF-16 of textual types HS: that's bogus AVK: still insisting the case now is misguided JR: one thing that Anne mentioned - some registries have a provisional system ... but not MIME types Alexey: vendor prefix ones are first-come first-server JR: other question -regarding the media type registration RFC, Larry has started discussing revising it in the TAG ... for example, people sniff for types - we could make that more robust HS: I want to complain more about CR/LF ... the history of CR/LF restriction and the fact that text/* defaults to US-ASCII in the absence of charsets... ... this is an artifact of a leaky abstraction from SMTP ... US-ASCII default is a theoretical most prudent default from the time when in email there wasn't an obvious default ... but neither of those considerations apply to HTTP ... HTTP can send text that has line breaks that are not CR/LF ... in fact for HTML, LF-only is preferred ... it makes no sense to say that all these types like HTML, JavaScript and CSS are "wrong" ... instead it would make more sense to say that CR/LF does not apply to HTTP ... for some types, for historical reasons we need to default to Windows -1252 or UTF-8 ... pretending these need to be registered under the application/* subtree doesn't help anyone ... it only serves the RFC canon that HTTP and SMTP match, but that doesn't help authors or implementors ... line breaks should be based on transport protocol ... types themselves should be able to define their default charset JR: if you look at the thing that Larry brought to the TAG about MIME on the Web... ... he mentions all these problems ... line break thing doesn't make sense on the Web ... HTTP appears to use MIME, but doesn't, and doesn't need to ... charset is also an issue for HTTP ... conflict between MIME, HTTP and XML types on text/* HS: I actually implement RFC2023 ... I have a checkbox for saying ignore it <anne> (There's a t-shirt saying "I support RFC 3023") HS: if I shipped the validator without the "ignore it" box, people couldn't use the validator JR: what's the default? HS: defaults to supporting it Alexey: comment on Web vs email - this needs to be discussed in IETF ... if Web requires modified version of MIME, let's do it ... there is a new WG in applications area <anne> APPSAWG <weinig> HS: it feels frustrating to actually have to discuss this ... that people don't believe what they see on the web AVK: the feeling is that the IETF is so much behind, and then we have to get in and tell the old timers what the new world looks like ... we're not sure it is worth our time ... we have moved on Alexey: it is occasionally helpful to talk to people who designed the original ... especially when it comes to character set - I think there is agreement from the original author AVK: I talked about some of the discussion about moving away from text/plain drafts, and people there express fear of Unicode.... ... W3C is kind of slow too, but at least we think HTML and Unicode are ok HS: well, W3C isn't ready to publish HTML5 as HTML5 yet JR: IETF thinks HTML and Unicode are fine, just not for their documents Alexey: there is provisional registration AVK: for header fields, you need spec even for provisional ... person guarding the header field registry was too conservative JR: does header name registry have a public mailing list ... registry lists should be public Alexey: can you draw cases like this to my attention? it might be implementation of process failures AVK: but if we look at URI schemes.. Alexey: it's hard for me to defend the people who designed the procedure ... there was a discussion about relaxing registration of certain types of URIs ... so we could register things like skype or yahoo IM AVK: we are trying to register about: - there should be some registration pointing to the draft ... and for many headers, browsers have to know about them even if they are unregistered ... difficulty of using registry causes incentive to use X- names and just not registry JR: one thing we should look at is accountability - there needs to be a public mailing list for header registration ... also Larry will join us to talk about IRI AVK: I would rather just get rid of IANA and have a W3C registry, with a community-managed wiki HS: to consider how the XHTML2 WG was doing things - at some point it was obvious that just giving feedback wasn't going to change the way they did things ... so instead of trying to change the way they did things, another group did something else, and that became the group people paid more attention to ... there is a feeling that fixing IANA is so difficult that it would just be easier to set up a wiki AVK: we could just compete Alexey: this is not helpful AVK: I would like a registry that would tell me X-Frame-Options exists ... I don't think this will ever fly at IANA HS: I have no experience of registration, but the language tag registry is a very positive role model Alexey: when I talk to IANA, they listen AVK: I think the problem is the process Alexey: I can help you initiate changing the process AVK: not sure I am interested in helping to fix the process if there is an easier path HS: we should mention willful violations of the charset registry ... it would be useful for the main charset registry to be the place to go to find out what you need to implement ... the thing is that ISO-Latin1 should actually be interpreted as Windows-1252 ... another example is that instead of Shift-JS you need to use the Microsoft tables not the ISO tables LM: I note that my draft covers many of these issues HS: not in this much detail; I will give feedback <Julian> LM: I hope in the cases where there are willful violations, that the right thing to do is to fix the registry AVK: in the case of the charset registry, there might be a need for separate registries for Web clients vs other clients HS: for example the Java platform uses the IANA names for charsets with their real meaning ... it would not be good to change Java, so the registry should include both sets of info ... JAva could add an API for Web content decoders LM: I think this is a three-phase process ... (1) identify the problem ... (2) identify which things need to change (w/o being explicit about how) ... (3) then there needs to be action on the change ... I would like to identify the problem and the kinds of changes first ... only then decide whether to make a wiki, change the process, etc AVK: if you are already working on this, then that's great LM: I would be happy to have co-authors Alexey: at minimum we should talk LM: I think we should bring it into a working group or take it up as an action item ... MIME is a part of the Web architecture that we have adopted without adopting it JR: we talked earlier about text/html and encoding LM: again I think we should describe the problem first ... same thing might be said for URI schemes HS: given last call schedule (1H2010), how realistic is it that changes of these magnitude could go through the IETF ... seems unlikely LM: my view is that a W3C document entering LC can make reference to documents at similar or behind level of maturity ... they don't need to be final until you go to REC MS: (explains W3C process) HS: one reason I'm skeptical about the rate of change at IETF is the URL thing ... we had rules in the HTML5 spec abut transforming href values to IRIs ... it was argued that IRIbis was supposed to solve it ... I remember there was a schedule LM: it's quite off HS: at the date when there was supposed to be a deliverable, they haven't even started ... we shouldn't send things to the IETF to die ... I was really annoyed when I wanted to fix a bug relating to URL handling in Firefox and the spec did not have what was needed ... I think that for URLs the process has had it chance and din't deliver RI: the original schedule was very aggressive and we never really expected meeting it LM: it was wildly optimistic ... the problem with most standards activities is that there's nobody home except for people who showed up ... if you look at the archives, there was really a fallow period, but since then it is picking up ... meeting next week in beijing ... people who care about URLs in HTML should show up online HS: there is also the problem that if people are already showing up in some venue, then moving the work to a different venue and then complaining that people didn't show up in the other venue is not productive LM: the problem really is that what was in the HTML document before was wrong ... unfortunately there is complexity due to need to coordinate with IDNA and bidirectional IRIs HS: you need something that takes a base IRI, a relative reference as UTF-16, and a charset, and you get a URI/IRI back ... my point is that the HTML spec doesn't need to deal with rendering any kind of address ... it just cares about resolution / parsing ... nothing about how to render an IRI ... what is required is someone writing down the real-world algorithm for this resolution thing ... and it needs to be somewhere that you can reference it RI: if it were in the IRI specification would it be ok for you HS: what I am annoyed about is that we had something that was right or fixable, was removed or delegated, and now we have to rewrite it ... I am now betting on Adam delivering it JR: I would like to say one thing ... we need to find the right separation between things that are just part of the attribute and things that are part of the the resolving algorithm ... I think whitespace discarding is not part of the resolutions ... there might be a step before resolving that is part of extracting from an attribute AVK: in the running code, whitespace stripping happens at the resolving end LM: it would be nice if you could copy from the location bar into other apps HS: we are not talking about the location bar JR: what about space-separated lists of URLs AVK: this is a different case LM: motivation for trying to start the work in the IETF was to make sure that URLs in HTML and in other apps weren't different ... it is true that the work has been delayed, but activity has been restarted Alexey: you need to open bugs LM: Adam was at the last meeting ... there is an IETF document of how to do IETF document HS: it's great the kinds of URLs that the web uses were the same as what other things use it, that would be great ... but the Web is constrained JR: this was very useful, which I'm not sure was expected; we have another point about link relations, which is on the agenda ack MS: in the future, we shouldn't delete things until the replacement is ready LM: chairs from IRI working group are prepared to add an additional charter item AVK: Adam is a bit reluctant to go back to the IETF <anne> (that was my impression) RI: it seems like there are discussions coming up in beijing where we need to be talking between the HTML WG and IETF LM: editors will be remote, so remote participation might be good ... how about file: URLs HS: they are not really on the Web ... best thing to do for USB key is relative URLs <r12a> whether it's beijing or not, i think we need to find a way to pursue this dialog with HTML5 folks and chairs/editors of the IRI spec RI: is something gonna happen ... action items? LM: don't be skeptical - if you believe it will work <scribe> ACTION: Henri to give feedback to Larry on MIME etc draft [recorded in] <scribe> ACTION: Anne to give Alexey info about registry problems [recorded in] <MikeSmith> started lunch break? MikeSmith, we're about it <MikeSmith> k er, about to session adjuourned <anne> fwiw, testing was half an hour delayed <anne> not sure if anyone is actually in the other room yet <anne> but since you just signed in... <Julian> isn't testing at 5pm (50 mins from now?) <anne> no <anne> it's a double block <Julian> oh <anne> yes <anne> we are setting up <anne> dbaron, ^^ <hsivonen> dbaron, we are in Rhone 3b <hendry> scribenick hendry <oedipus> scribenick: hendry me: to find the connection type, it's not slow or rather blocking is it? it's a fast operation Andrei: yes, we fire online when the type changes type just caches last seen connection type [ scribe apologies for pasting in wrong buffer ] maciej: how to particpate in tasks tf, testing framework <plh> kk: and goals for LC kk: the TF meet every two weeks ... there is a wiki with schedule, there is a server with hg ... philippe has mirrored that work at <plh> --> HTML test suite repository kk: same content on both servers <plh> --> HTML Testing Area kk: asking what to test ... localstorage, x-domain messaging, doing spec analysis ... looking at features which are shipping ... submitted some canvas tests <plh> --> Canvas test suite kk: getElementsByClassname tests from Opera ... distinction between approved and un-approved tests <plh> --> s/Philipp Taylor/Philip Taylor/ kk: bugzilla to process the test <plh> --> Test harness jonias: what is the harness ? anne: same as XHR kk: tests run automatically ... video tests is hard to automate ... self-describing test ... some exceptions that you can't poke in the OM and you can't test it hsivonen: can you do some REFerence tests ? jonas: yes, there are some things kk: there are some things you can't test with REF tests, for e.g. Audio hsivonen: multi-testing question plh: some tests are manual and some tests are automatic kk: existing tests not using the testharness, it might not be worth re-writing them plh: it's a bug, it shows the buttons, though its automatic kk: waits for 5 seconds before going to next test maciej: this UI is broken kk: can we get all the requirements up front ? ... esp we need a plan with REF tests maciej: propsed categories; script driven, ref test, manual test ... too awkward with 100k tests ... takes too long to run plh: the test can indicate itself, if it's manual or automatic anne: if the test loads the test harness, we know it's an automatic test ( no need to categorise ) hsivonen: just have 3 directories dbaron: you can harness the harness kk: we should do it in one file hsivonen: the easier way is to use directories jonas: i don't care maciej: text file is harder to maintain than a directory, not big deal either way <plh> scripts/ <plh> reftests/ anne: we want directories for *types* of tests <plh> manuals/ dbaron: painful to use dirs as metadata, as you may need to move them around kk: maybe we will come up with a new dir in some months time, prefers a text file as it wont change location jonas: bigger problem to have a function call when the test finishes so we don't have to wait 5 seconds after each one loads anne: there is logic in the harness to handle this & async tests hsivonen: [ didn't quite understand your implicit mochi test comment ] <dbaron> plh: need a way to copy all the additional files that tests depend on <hsivonen> I find that I almost always have to use the explicit finish function for scripted tests, so it's not a win to finish tests implicitly on onload jonas: we need to somehow markup dependencies sweinig: in the common case there will be no deps hsivonen: should we decide whether to allow data URLs ? anne: common resources makes sense hsivonen: you want to use data URLs for near 0 load times [ why does jonas use data URLs? didn't get his argument ] kk: ie9 supports dataURIs ... might be a problem that browsers do not support dataURIs jonas: we need to list our deps and assumptions ... can we assume browsers have ES5, foreach is nice maceij: we should not use ES5 until it's widely implemented jonas: queryselector test cases were held up by WebIDL kk: e.g. of WebIDL false positive in canvas read only thing jonas: do we have any existing docs of assumptions? kk: there is just the source code ... can someone take an action to document them? anne: read the XHR tests :-) <krisk> testing wiki jonas: these tests are already in directories kk: suggests documenting the tests in the wiki hsivonen: ... something about re-writing the "mochi tests" ?? anne: i'm fine with re-writing / using another harness kk: first anchor test is very simple, it's not hard to migrate to james's harness jonas: make some requirements for making the tests portable between harnesses [ IIUC ] hsivonen: something about integration layer, which allows reporting into your own system (thanks anne) <plh> --> mercurial plh: you can commit a test if you have a W3C account dbaron: might need to be aware with hg's push caveats [ to plh ] <plh> ACTION: plh to work with systeam to make sure we keep track of hg push [recorded in] maciej: not great security, since hg trusts the client's config WRT who wrote the patch dbaron: you might want logs ... Mozilla have a tool called push-log for this problem jonas: i can see now the tests are seperated by directory <dbaron> The source for pushlog is in this hg repository: jonas: is there a description file ? <anne> <anne> kk: see ... we will add extra info jonas: remove domain so it's not server specific ... we have a test file per dir ... i want to walk this from the cmdline ... i want relative paths kk: we might need some absolute stuff jonas: i'm pulling via hg kk: there is no absolute need for absolute urls hsivonen: mochi-tests point to localhost jonas: something clearly identifiable for a search & replace to get the tests working ... you can get different types of relative paths ... it's important that we can accomodate them in a "search & replace" ... we need to scale ... it's not workable to ban absolute paths hsivonen: we need to document the "clearly identifiable" bit, like test.w3.org and test2.w3.org jonas: we have to say it's OK to use abs paths hsivonen: worried about some dir namespace collision ... get rid of prefixes jonas: OK <krisk> That is fine kk: how to delimit the file ? jonas: i don't care ... though, since it's hand-written, make it easy & little to type sam: is there a preferred lengthmicroformats.org with CSS tests there was a wide range ... bad = long test & lots of permutations hsivonen: we know a bad test when we see it maceij: there is a fuzzy boundary jonas: io bound if we have a million tests ... we need to keep it somewhat reasonable sam: there are examples of tests that can be merged adrian: there is a review process kk: you could file a bug, raise issues adrian: of course if it's approved, it doesn't mean it can't change again sam: if all the tests pass, then the bugs are in the specs kk: tests do content negotiation (canPlayTypepermanence) WRT choosing a codec the runtime supportS hsivonen: mochi tests that we (mozilla) use, requires server side javascript plh: was a lot of trouble already to support PHP for security reasons sam: we have tests that use python, php, curl for certain load tests <dom> (we evoked this in WebApps the other day; we can probably consider more server-side stuff at some point, but we need to need to have requirements documented earlier rather than ater) <dom> (and please consider limiting the number of needed languages/platforms as much as possible) jonas: we can generalise "slow load tests" so it doesn't neccessarily require PHP ... some security concerns here plh: we need to review PHP files before they become live jonas: we need it one the same server for same origin type cases <dom> if same server == test.w3.org, that's part of the plan hsivonen: we need a mechanism to load things slowly for example <dom> (use a DTD for that) hsivonen: avoid echo, we should return existing (approved) files jonas: is there sensitive data WRT XSS-ing plh: should be fine <anne> safest might be w3test.org or some such kk: what happens if 10 million tests are in the Q to be approved dbaron: biggest risk is a test that claims to test something, but doesn't actually test it sam: we should only accept tests that use the new harness ... the tests here are about testing regressions kk: worried about approval rate, esp. if only he does it plh: if a subset of tests are passed by everyone, they are probably good anne: 1) is it good enough hsivonen 2) ... [ didn't get that ] maceij: lets do a cost benefit analysis <adam> Accidentally testing something that is not a requirement at all maceij: 1st category testing undefined behaviour ... 2nd -- testing something contrary to a requirement ... -- at least one browser will fail this [ can someone write what maceij said pls ? ] scribe: 3rd cat testing something where it doesn't actually test it ... review should catch them all ... almost certain something will be wrong ... how much time should be spent on review versus benefit ... test approved == matches what the spec says dbaron: from exp within CSS, review is more work than writing to test... so its not worth doing for an existing contributor s/writing to test/writing the test/ dbaron: figure out why the test is failing sooner than later ... imp report: 1) run all tests 2) bug in test suite or in browser (v. time consuming) ... figure out WHY tests are failing hsivonen: we should flag tests that fail in all browsers ... we can't assume the spec is neccessarily 100% correct <hsivonen> we should flag tests that fail in 3 engines maceij: low skilled tests don't need to be approved, better if everyone is just running them [ IIUC ] anne: we should distribute the testing maceij: don't have ref test when you could have a script test ... distributed test is more likely to succeed hsivonen: do we have any way to feed the test info to the WHATWG HTML5 section info box things kk: could be an admin problem if links change <krisk> see for an example of a script based test <freedom> nobody in 3B yet? there will be an EPUB related meeting right? <oedipus> according to the agenda, EPUB discussion in 3B starting 8:30 french time <mgylling> Reads 09:00 to me <mgylling> To anybody who is physically there: does 3B have call-in facilities? <oedipus> guess the first half hour will be spent in common again then breakout to 3B <freedom> seems not <freedom> I am in 3B physically now <mgylling> freedom, thanks. <MichaelC> scribe: Julian ms: markus to give overview mgylling: (remotely) <mgylling> mgylling: epub standard for ebooks, around for several years, expanding in popularity, large adoption ... idpf.org ... based on xhtml, subsets defined ... current ebpub 2.0 ... uses XHTML1.1 mod ... is a fileset, ZIP container, different document types ... container called OCF <freedom> mgylling: some of the formats in epub defined by w3c ... some of the metadata formats owned by epub itself ... is undergoing rev to 3.0 ...charter: update & alignment with modern web standard s use HTML5 as grammar is not allowed by current specs but already happening need to formalize & stabilize on HTML5 vs XHTML5: epub decided to use X* based on requirement for existing reading systems to be upgradeable MS: asks about design philosophies ... drive spec based on what current UAs already can do? mg: docs used to be static ... <script> SHOULD/MUST be ignored ... but scripting is going to be added ... problems with legacy readers ... and non-browser-based impls ... it's clear that this will be needed in the future MS: devices coming to market with have full browser engines Julian: usability of spec for being referenced ? mg: not a problem yet ... we're not forking ... defining profiles and extensions, follow the HTML5 style Julian: how does ext work for you? mg: XHTML5 is supposed to allowed namespace-based extensibility ms: feedback on this is ... epub I18N requirements -> CSS WG -> vertical text support ... does not seem to affect HTML though ... is there something the HTML WG need to do? mg: books / ebooks slightly different domain ... missing semantics for books ... distinguish node references and nodes ... skippability page breaks have looked at role attributes for extensibility mjs: extending role not recommended because owned by aria ... needs coordination with PFWG ... maybe dedicated elements or attributes what affects rendering should be in HTML mg: book semantics, chicago manual of style in transcript, replace "node" by "note" MC: asks about roles MG: uses custom attributes <MichaelC> Role attribute extensibility: MG: fastest way for now (own NS) MC: role module *does* allow extensibility <MikeSmith> RRSAgent:, make minutes MC: PF and HTML need to coordinate on r@ole @role <Zakim> MichaelC, you wanted to discuss role extensions, future aria, etc. MG: ownership of @role mjs: HTML defines @role by refererence to ARIA spec MC: aria defines on HTML to define @role <MichaelC> s/aria defines/aria depends/ mg: request to clarify the HTML spec wrt role extensibility <fantasai> RRSAgent: make minutes mg: on metadata in epub ... NCX doesn't have metadata at all anymore <MichaelC> ARIA on host language role attribute mg: core metadata will continue to come from outside HTML/head <mjs> -> role attribute in HTML5: mg: reading systems need to get the metadata from the package file HS: on role attribute <fantasai> hsivonen: ARIA spec defines aria- attributes, but does not define role attributes <fantasai> hsivonen: requires that a host language define a role attribute with certain characteristics <fantasai> hsivonen: HTML5 tries to do this <fantasai> hsivonen says something about tricky wordsmithing <fantasai> hsivonen: Way forward would be to figure out roles that current AT vendors need (?) and define tokens for them, and have ARIA promise not to conflict <fantasai> hsivonen: The role module spec relies on CURIEs for extensibility <fantasai> hsivonen: ... not good for EPUB <fantasai> hsivonen: I don't expect web engines to support CURIEs, relies on namespace stuff ... lookup DOM L3 <fantasai> hsivonen: Best way forward is to ask PF to set aside the names that you expect to use <fantasai> hsivonen: Doesn't make sense to pretend different groups dont' know about each other <fantasai> hsivonen: We're communicating, so let's coordinate. <MichaelC> ARIA taxonomy <fantasai> ?: I'm ok with approach Henri is suggesting, but coordination with PF is important sooner rather than later <fantasai> MichaelC: Everything would have to fit into our taxonomy <fantasai> hsivonen: Implementations don't care about the taxonomy, that's only to help out with spec design <fantasai> hsivonen: If PF promises that this set of names is not going to be used, and picks different names if it decides to expand in that area, then we don't have to worry about all this extensibility stuff <mjs> ack q+ <fantasai> MichaelC: For author understanding, we want to pick tokens that match the most appropriate terminology <Zakim> MichaelC, you wanted to say if you want to follow the approach Henri suggests, should coordinate with PFWG sooner than later and to say ARIA roles are part of a taxonomy <fantasai> hsivonen: They're just tokens, it doesn't really matter <fantasai> mjs: Instead of debating in the abstract, let's just send the list of suggested roles to PF asap <hsivonen> DOM 3 namespace lookup doesn't work for CURIEs in text/html DOMs, so don't expect browsers to implement CURIEs <fantasai> mjs: If they don't like the tokens proposed, then they can respond about that. <fantasai> mjs: I don't think this meta-conversation is getting us anywhere <Zakim> Julian, you wanted to let Mike speak <fantasai> hsivonen: I'd like to add a note about why CURIEs are bad idea in this space <fantasai> hsivonen: So, frex, how Gecko exposes roles to interface to JAWS, Gecko picks the first role it recognizes and exposes that as the MSAA role <hsivonen> IAccessible2 <fantasai> hsivonen: And then exposes the entire value of the role attribute as the xml-roles property in the iAccessible2 interface <fantasai> hsivonen: It follows that the namespace mapping context of the CURIE binding context is not exposed at all <MichaelC> scribe: fantasai hsivonen: If you wanted to do something with CURIE, you wouldn't do CURIE processing. ... You would wind up exposing to JAWS the prefix and local name <freedom> IAccessible2, hsivonen: Therefore I advise against relying on the mapping context, because the existing ... doesn't expose the mapping to IAccessible2 and therefore to JAWS markus: Does Gecko expose the roles regardless of whether it recognizes it? hsivonen: Yes. All the data is passed through, in case JAWS wants to violate ARIA and look at things itself. ... Gecko doesn't police whether JAWS follows ARIA spec MikeSmith: I just wanted to state where things stand. ... It's not inconceivalbe that the language features you need for EPUB could be considered as native elements and attriutes to be added to HTML5 itself. It's not too late for that. ... It's not too late to ask, anyway. ... I'm sure we're going to get LC comments asking for new elements and attributes. ... There will be a lot of people who haven't looked at the spec yet, or want opportunity to have their request considered. ... Proper way to change the spec is file a bug against the spec. ... Cutoff for pre-LC was Oct1. Everything after that date will be considered an LC comment. ... I don't think that you should self-censor, and just assume there's no chance of getting any new language feature requests for native elements and attriutes considered. ... That's not what we want ... I don't want to say you have nothing to lose, because there's cost in time to everyone ... But something for EPUB to consider, whether you want to make requests for new elements/attributes. <hsivonen> Gecko exposes the value of the role attribute to JAWS but not any kind of CURIE prefix mapping context, which mean using CURIEs wouldn't really work with the URL and you'd end up hard-coding a known prefix and the resolution to an absolute URI would be fiction MikeSmith: Not mutually exclusive: could also pursue extensible approach, too <hsivonen> thus bad idea to use CURIEs MikeSmith: It's a good idea, although some things we need are likely to be considered out-of-scope for HTML5 Markus says something about e.g. notes fantasai asks if that wouldn't be <aside> mjs: Just want to reinforce Mike's comment that we would definitely like to hear all the requests, even though we are late in the game and probably aren't going to add major new feature. ... But requests that are modest in scope and important for a particular use case will be considered ... We're not 100% frozen yet, but in a few months we will be. So better to get those requests in now rather than later. ... Any other comments? fantasai: Wouldn't notes be an <aside>? Markus: Notes would be a subclass of <aside> Markus says something about an href role mjs: Talking about footnotes and end notes? Markus: Yes. Need to distinguish those for formatting MikeSmith: Don't we have a bug open on having more roles for <a>? mjs: If particular semantic of linking to footnote or endnote might be more appropriate as a rel value hsivonen: Maybe have a CSS pseudo-class detecting the note type from what the <a> points to instead of requiring author to specify Markus: Reponse from EPUB authors say that overall, it's really good. There are a number of additions from XHML1 that we love. ... We're already very close to having it work for books, only a few minor concerns. ... So not looking for any major surgery here. fantasai: I think they should define a microformat for subclassing notes. hsivonen: Håkon and Bert already defined a microformat for books, although I don't think they addressed notes. Bert: yes. A lot of that has been added to HTML5, though: <article>, <section>, etc. mjs: HTML5 just recommends a plain <a>, with no distinguishing markup hsivonen: footnotes are a thorny issue in CSS. Prince supports something, but it's not optimal ... I was reading Dante's Inferno in HTML5. It doesn't make any sense to read it without footnotes. mjs: Yeah, I read a Terry Pratchett book that was supposed to have footnotes, but they were all endnotes and it didn't work so well <Bert> Boom! (BOOk Microformat) hsivonen: I think we should figure out the CSS layout model first, then fit the markup to that. ... If we come up with markup first, and it doesn't fit the CSS layout model, making it work in layout could become very complicated, involving many pseudo-classes, etc. meeting closed? <Bert> (Contrary to what I remembered, BOOM *does* have footnotes, not just sidenotes: <span class=footnote>) discussion of role attributes mjs: You need centralized extensibility for accessibility, so the a11y technology understands the roles hsivonen: If you're on Windows, what FF can do is more than with the AS api on Mac <MikeSmith> hsivonen: So maybe it's a bad idea to design stuff with the assumption that you have IAccessibible2 on Windows ... Alternatively, could consider it a bug that AS doesn't have this feature <hsivonen> s/AS/AX/ anne: The only case you'd notice it is JAWS was updated before voiceover hsivonen: I'm guessing the upgrade rate of JAWS is a non-issue in practice <MikeSmith> Julian: You might not believe how backwards some people are in upgrading their browser hsivonen: Big parts of ARIA have been designed with the assumption of an enterprise stuck with IE7 for years after ARIA has been deployed in JAWS <MikeSmith> hsivonen: Design decisions make assumptions about which part of the system will be upgraded first. Might not have been the best design decisions. <MikeSmith> fantasai: So is EPUB subsetting HTML5? MikeSmith: not sure mjs: Engines are unlikely to enforce any subsetting fantasai: True, but such content could be non-conformant for EPUB 3. ... Not all EPUB implementations are based on browser engines ?: Are there many that are not? fantasai: I know of at least two ... and I haven't actually looked into the issue <kennyluck> fantasai: When I was at Tokyo, I found a EPUB implementation that implements CSS but not based on browser <kennyluck> .. I also found one EPUB implementation that's not based on browser at all <kennyluck> ... yet it renders vertical text quite nicely <kennyluck> ... (It does not support CSS) fantasai: uses effectively a UA stylesheet only hsivonen: Are the CSS implementatiosn any good/ fantasai: Don't know, haven't done any testing discussion of converting HTML5 to EPUB would need to split into multiple files for EPUB impl's tiny brains :) <mgylling> Yes, splitting files is done a lot due to memory constraints in certain handhelds <mgylling> A popular one has a 300k limit IIRC <MikeSmith> 12 minutes to caffeine <freedom> which means EPUB doesn't encourage authors to write long chapters? <mgylling> hehe, yes, need to keep it short ;) <mgylling> I expect these max file size recommendations to be gone soon, just another generation shift needed in devices <freedom> mg: do it, my iPhone 4 has 512MB now <mgylling> freedom, right. Note that this is not spec restrictions; these are conventions that has arisen in the ecosystem <freedom> OK, bad implementation, not bad spec <scribe> ScribeNick: fantasai mjs: Subtopics include ... Idea of using microformats ... another is that we have a number of specific issues <mjs> <mjs> <mjs> <mjs> mjs summarizes the open issues mjs: Does anyone else have other subtopics? <adam> *u must be dozing off* <anne> no kidding <Zakim> MikeSmith, you wanted to show XPointer registry and to discuss potential need for a role registry similar to need for a rel registry MikeSmith: Somehow I ended up the one responsible for registering all link relations for HTML5 ... So, I guess I can put some kind of report on that? What should I be doing. Julian: Let's start with a description of .. right now ... I'll summarize where IETF is right now. ... It all started with realization that HTTP has a Link header that's supposed to be equivalent to Link element in HTML ... And that there are documents on the web which are not HTML and for which it would be useful to expose linking ... Lots of people think it would be a good way of expressing link semantics independently of HTML ... So Mark Nottingham started on the work of writing a new def of Link in HTTP ... And establishing a registry that could be used in HTML as well, but would not necessarily be used in HTML ... The IANA registry also includes the link relations registry that was established for the Atom feed format, which is similar but not identical to HTML. ... So there are overlapps, but it included syndication-related things and not everything that HTML has ... So there was lots of discussion on procedural things, and licensing of the registry. ... Can talk about that later. ... Took a long time for spec to come out, but has finally been published. <Julian> Julian: That's a very old style: you send an email to an IETF list, and a group of designated experts to register that or ask questions. <Julian> Julian: Mark has started making this more modern by, first of all, providing a web page explaining how to register, has a template to help with you write the registration and submit for you to the mailing list <Julian> Julian: The designated experts now also has an issue tracker ... So people can watch where there registration requests are progressing ... Makes the IANA process a bit more pleasant <Julian> Julian: Here's the registry riht now ... This contains link relations defined in Atom, Atom extensions, and HTML4 ... and some parts for HTML5 <Julian> hsivonen: ? has been recognized as an entity that has reasonable ? measures in place ... It seems that the domain name is owned by ??? ... as an individual ... And whatwg.org is also owned by an individual Julian: I'm not sure how that affects our impression of whether microformats.org is stable or not <MikeSmith> s/???/Rohit Khare/ mjs: My biggest disappointment about the RFC is that it doesn't have provisions for individual registrations ... It would be useful to have a central repository where all of these can be listed so people know what's in use, even if it doesn't have a formal spec ... I think Mark should make a provisional registry. ... Mark said the registry would be so lightweight it wouldn't be necessary ... But that has not proven to be true. <hsivonen> moreover, even proven to be false Julian: We have provisional registries in other IANA things, and nobody's used them. <MikeSmith> mjs: I think if you find something that's almost never used, then creating something that has higher barrier to entry, then creating something with a higher barrier to entry isn't going to increase use Julian: People don't use provisional registries because they don't care enough. mjs: microformats.org list has even lower barrier to entry, and it is used Julian: One difference between IANA registry and wiki page is that wiki is completely HTML focused ... So they don't consider relations among other formats other than HTML ... They don't think about use on PDF or video mjs: Most people invent link relations for HTML. I don't think it makes sense to force them to address these abstract link uses that may or may not be practical. ... It makes more sense to me to provisionally register the link relations, and then encourage them to think about generalizing to other formats. hsivonen: It might be not about people not caring, but about provisional registration being dysfunctional ... I also agree with mjs that in some cases people don't care about nonHTML use cases. In that case we should just do HTML. Julian: we talked about ... provisional registry [that hsivonen mentioned] yesterday, and I totally agree this problem needs to be investigated. ... I think we try. ... I think we should try to encourage people to think of link relations applied to non-HTML content mjs: I think encouragement is fine. But if encouragement fails, what happens? Should the link relation then be undocumented because encouragement was unsuccessful? Julian: ... nobody's mailed a link relation and asked designated experts to help make the link relation more generic mjs: You've raised the barrier by tring to make it generic, the person doesn't care about making it generic, so it ends up being unregistered anne: You don't need that to get it in the registry, but to get it endorsed hsivonen relates hixie's experience with trying to register a link relation hsivonen: If what hixie wrote wasn't enough, then I think we have a problem. Julian: My point of view was that he didn't seriously try. He wanted to prove it didn't work. ... I don't think it will be productive to continue on this path. mjs: When I looked at the original templates hixie submitted and compared them to what the RFC said, I couldn't see any mechanical procedure that determined they failed to qualify ... So it seems anyone trying to register would require multiple email go-around ... Same problems result in failure to register MIME types and URL schemes MikeSmith: I have been going through the process of making requests using the mandated procedures <MikeSmith> MikeSmith: You can see there the discussions about the registry ... It does take multiple go-arounds in email for these. ... One is for some of the link relation names or types, they are already being used in other contexts ... One of those was 'search'. ... If you look at that, it was specified somewhere else. ... Regardless of how you do this, there has to be some discussion about what this description should say ... I don't see any way to get around that, if you have multiple ppl want to define the same thing. ... Other issues were with how it's defined in the spec itself. ... 'up' is one of those. Had to go back to WG and get a resolution for it ... .. Maciej... having to change the description of the link relation so that it's more generic, and less about HTML ... I'm not thrilled with that. ... Don't really care about doing that at this point in the procedure. <hsivonen> (one of the top Google hits for the metaphor is from one of our co-chairs: ) MikeSmith: I think many ppl are not going to be thrilled about changing what they think is a perfectly reasonable discription of their use case to handle some speculative use cases ... That's alwasy going to be a troublesome thing for someone to do s/disc/desc/ MikeSmith: In the spirit of going through the procedure and taking it to the end to see if it ends up being something it works or not ... But I do think we have to keep open the possibility that we decide that it doesn't work. ... I don't think it's a given that just because it's an RFC and the registry exists, that we've commited to this is how we do it. <MikeSmith> MikeSmith: I think it's still a possibility that this isn't working the way we would like it to work, let's try something else. ... There is something else, plh asked me to point out. ... Is the xpointer registry. <anne> +1 to W3C doing web registires MikeSmith: This is another way of registering something that is similar <anne> s/registires/registries/ MikeSmith: I think the biggest ... difference between things that have been successfully regsitered ... and those that are still being reviewed ... i.e. provisionally registered ... All you need to do to request a provisional registration, you just start by typing in a name of some kind it gives you a form asking for a description, and optionally a spec URL MikeSmith: This is a middle ground between a wiki page and <hsivonen> This looks good to me MikeSmith: At least it's got a form-driven interface ... I think this is a good middle ground ... If the IANA registry provided a way of doing this, I think that would be something we could agree on Julian: IANA registry has something very similar ... The only thing is that instead of being automatically registered, it gets sent to the email list ... If we made a provisional registration out of the sumission, that would be the same. <Julian> <anne> The requirements for XPointer are first-come-first-serve Julian: and then someone on the mailing list to the tracker page <anne> This is not at all the case for the link registry <anne> well, the one the IETF/IANA uses hsivonen: How do you know the tracker issue is filed and where that is? Julian: You don't ?: Why can't you do a web-based form? Julian: Can't do that in IANA. IANA doesn't have web-based forms. Lives in last century. ... The form that posts to email is a compromise. hsivonen: So why does HTMLWG/W3C want to deal with an organization that lives in the last century <weinig> s/?:/Sam hsivonen: Instead of using xpointer registry code? Julian: It depends on whether you think the link relations should be synced with other formats or not sicking: Why couldn't you let W3C do the syncing to IANA? MikeSmith: Before ? pointed out xpointer, I didn't know we did registries mjs: Sounds like building a registry along the lines of xpointer would be a great idea <MikeSmith> s/?/PLH/ mjs: Any volunteers to do that? ... write it up as a Change Proposal? ... It's a little past deadline, but since we have new info on the W3C registry option, would be a good thing to do MikeSmith: Guess I should talk to plh about this. hsivonen volunteers MikeSmith: plh asked me to point out the open issue about Role ... We talked about it this morning. Similar potential need to have a role registry ... plh isn't sure xpointer way is the right way to go, but wanted us to be aware that it exists anne: I think we should do role more centralized, because it affects implementations directly. hsivonen: In last meeting I asked EPUB to ask PF to set aside some tokens for them once getting commitments from AT vendors that they will support these roles mjs: Other things in HTML5 might benefit from this ... e.g. <meta> names ... There was a third thing Julian: canvas context? mjs: Seems more like role, in that it has implementation implications and should therefore be centralized hsivonen: Yes. for role, e.g. you need coordination among AT vendors and browsers etc. ... Not good to have a registry. Rare to make a new role. ... PF should be able to set that aside without a formal process. anne: Other one is meta http-equiv, which has a different namespace than meta name ... And canvas context, you do sorta need a place that says which are the contexts and which are compatible with which. ... Currently all are incompatbile, so not an issue now, but might change. hsivonen: New canvas context is even rare r ?: Still need a list of them ??: No, could just be defined by the specs that define them hsivonen: I don't see this as being a problem right now. <kennyluck> s/??/mjs/ hsivonen: There are three canvas contexts in the world, and one is proprietary anne: we're removing them, 'cuz features have been added to 2d ... Might want a variant of WebGL that is compatible with 2D ... But still it's very limited mjs: There's probably only a single-digit number of these, and should all go through HTMLWG anyways fantasai: For link relations, seems like the idea is to have a provisional xpointer registry ... What about if someone wants to port a provisionally registered link rel to IANA, for more general use? discussion hsivonen: Dont't think we want to hijack Atom registrations Julian: If we decide not to go with IANA registry, need to decide whether we want to continue with registration of HTML5 link relations in IANA mjs: I think registering HTML5 link rels in IANA is unrelated to progress of HTML5 ... It's not a requirement for us. It just makes the IANA registry more complete. mjs expresses that he doesn't care whether MikeSmith finishes the registration since it's not required for HTML5 MikeSmith: It's not a lot of work, think it makes sense to finish offf. mjs: what about the ones where the designated experts require changes to the definitions MikeSmith: filed issues on that mjs: For us, the importance of a registry is as an extension point. sicking: Seems to me that the best caretakers of the link registry so far has been the microformats people ... So I want whatever solution we choose here to work for them. mjs: Idea of using page on microformats wiki was proposed, but nobody's written up a change proposal for that either. ... Anyone want to volunteer to write that up? sicking: Ok, I'll do it. mjs: So post to the mailing list and say how long it will take you? ... I think we should make an exception here, because we have new information that will help us make a better decision Julian: Microformats.org is not a new idea sicking: New information is our experience with IANA Julian: Half have gone through. A number are held on bugs being fixed in HTML ... Then we have to review the updated spec. mjs: If the spec isn't updated, what happense? Julian: We'd probably accept the registration anyway. mjs: So why is the registration being held up? Julian: If the description is updated at HTMl5, then the IANA registration would have to be updated multiple times. hsivonen: Why is updating IANA registry multiple times a problem? Julian: I don't think it makes a big difference either way fantasai: Then I suggest you ask the IANA registers to finish the registration for any link relations that will be registered with the current text, and then update the registry when the problems they've pointed out have been addressed with updated text. <scribe> ACTION: Julian to Ask the IANA designated experts if this would be an acceptable model [recorded in] <Julian> ISSUE-127 Julian: ... Means in theory the semantic of the link relation can change depending on whether it's on <link> or <a> <MikeSmith> trackbot, associate this channel with #html-wg <trackbot> Sorry... I don't know anything about this channel <trackbot> If you want to associate this channel with an existing Tracker, please say 'trackbot, associate this channel with #channel' (where #channel is the name of default channel for the group) <MikeSmith> issue-127 <MikeSmith> issue-127? <trackbot> Sorry... I don't know anything about this channel Julian: I think the link relation should be defined the same for both, and the usage affect details like scope ... I think the section should be revised to not imply that rel values on <link> and <a> could be substantially different ... The IANA registry has an extension point so that each registration can have multiple columns <MikeSmith> issue-127? <trackbot> Sorry... I don't know anything about this channel <kennyluck> trackbot, associate this channel with #html-wg <trackbot> Associating this channel with #html-wg... Julian: That was requested by Ian <MikeSmith> issue-127? <trackbot> ISSUE-127 -- Simplify characterization of link types -- raised <trackbot> Julian: E.g. to have a column that says whether the linked resource is required to be loaded, or just informational relation <MikeSmith> ACTION: Julian to Ask the IANA designated experts if this would be an acceptable model [recorded in] <trackbot> Created ACTION-196 - Ask the IANA designated experts if this would be an acceptable model [on Julian Reschke - due 2010-11-12]. mjs: It seems that in practice the spec does what's requested, so it's more an editorial issue Julian: This distinction applies both to the spec and also to the registry ... I don't think having the distinction in the registry is a good idea. ... We don't seem to have any good cases for that. ... The observation is, we currently have a table in the spec that has columns for effect on <link> and effect on <a> and <area> ... In this table, both are exactly the same ... except for two values, which in one column it's listed they're not allowed ... And in these case there are bugs on whether that distinction is a good idea. fantasai: Setting stylesheet on <a> doesn't make sense to me mjs: 'stylesheet' and 'icon' would have no effect outside <a>, even if we add them Julian: ... ... We'll have to make a decision on that no matter where we put the registry. Defining things such that it's possible for relations to have a different deifnition on different elements is a bad idea. mjs: ok <kennyluck> s/<a>/<link>/ <Julian> Julian: This is about the 'up' relation. ... Someone thought it would be nice to change the definition to allow repetition of 'up' ... to e.g. have 'up up' mean grandparent mjs: That wouldn't work very well given the DOM api for rel, which lists unique tokens fwiw, I agree this seems like an ill-fitted idea... <Julian> <anne> HTML5 says something different from HTML4? <Julian> this is about navigational link relations that changed in HTML5, potentially changing existing content hsivonen: fwiw, I think we should get rid of the up up up thing. ... It won't be supported in UI very well anyway Julian: The use case given was to build a navigation tree in the UA ... But I think there are better ways to address that use case hsivonen: When a browser user experience team wants to implement something, and asks for syntax for it, then we should conside rit. ... but at this point it just seems a theoretical idea ... So I would propose to just drop it Julian: I'd like to ask the chairs to bundle the timing for these issues so they don't get too spread out mjs: Could put them all together ... have been staggering them so you don't have to write proposals all at once meeting closed RRSAgent: make minutes RRSAgent: make logs public <anne> scribe: anne MJS: Lets make a testcase in this session and submit it ... in the later half of this session JS: I am willing to coming up with a format for tests ... and write a harness <mjs> ACTION: sicking to design a file format for describing tests, and to write a harness that will run the automated tests [recorded in] <trackbot> Sorry, couldn't find user - sicking <mjs> ACTION: Sicking to design a file format for describing tests, and to write a harness that will run the automated tests [recorded in] <trackbot> Sorry, couldn't find user - Sicking trackbot, this is HTML WG <trackbot> Sorry, anne, I don't understand 'trackbot, this is HTML WG'. Please refer to for help <dbaron> trackbot, status <trackbot> This channel is not configured KK: I can update the wiki <MikeSmith> trackbot, associate this channel with #html-wg <trackbot> Associating this channel with #html-wg... <scribe> ACTION: kris to update the wiki [recorded in] <trackbot> Created ACTION-199 - Update the wiki [on Kris Krueger - due 2010-11-12]. <scribe> ACTION: Sicking to design a file format for describing tests, and to write a harness that will run the automated tests [recorded in] <trackbot> Sorry, couldn't find user - Sicking <scribe> ACTION: jonas to design a file format for describing tests, and to write a harness that will run the automated tests [recorded in] <trackbot> Sorry, couldn't find user - jonas <sicking> gaah, i don't exist <sicking> i irc, therefor i exist KK: What about XSS issues? PLH: I agree we cannot solve the XSS issues ... My goal is that we do not set up services on these domains ... so there is no problem, effectively AVK: as long as w3.org does not document.domain we are fine, otherwise it might be safer to use w3test.org MJS: There might be a problem in the future; everything should be safe if we do not use a subdomain JS: I have an idea for non-automatible tests, but we can discuss that later ... The way I would like us to do new things is write tests in the new format if it is compatible with our features MJS: We have a requirement for landing new features and we could require them to be written in the HTML format AvK: We have used this format successfully already ... e.g. for server-sent events and XMLHttpRequest MJS: one thing we might need to do is identify features in the specification which are not new but still need tests ... there is an HTML4 test suite AvK: I do not think we should start from that [people agree] HS: How does updating work? JS: We will have to figure it out HS: for html5lib WebKit first lands in WebKit, I land first in html5lib [HS implements for Gecko] SW: We are not opposed to change AvK: I think if the test contributor is known the tests should just get in JS: I do not agree, I think we should have a staging area KK: I think so too MJS: I think it makes more sense that the testing in browsers happens later and that tests should get automatically in [scribe misses out on discussing Mozilla specifics] KK: Basically you have a set of tests, and wait for them to be approved MJS: What do you want the approver to actually do? KK: cursory review AB: I think it might be worth having almost automatic approval process ... for tests that pass in multiple user agents MJS: why does there need to be this approval step? it will happen in distributed form anyway AB: to increase the level of quality MJS: it does not seem to happen now AvK: agreed DB: I am not sure that a approval process is good for known contributors MJS: It seems like a waste of time of people to require people to manually run the tests in every browser before it is approved ... there will also be cases that fail in all browsers DB: it seems you want a staging area because you want a known good set of tests ... an alternative approach is to ship a release, rather than delay on trunk HS: not having a lot of process helped html5lib to move forward faster MJS: with a release you know it does not get worse KK: the idea of approved is that is done AvK: so far that has not worked I think MJS: I think you will always get more tests and with releases you know the delta and can review whether that is ok as you already know the previous release was ok [something about multiple vendors contributing tests being awesome] MJS: problematic tests can be removed from the release <hsivonen> fantasai: Microsoft testa a lot of value combinations. Mozilla tests tricky edge cases. <fantasai> fantasai: Different vendors take different approaches to testing, and thereby cover different aspects of the features. <fantasai> fantasai: By putting them together you get a more comprehensive test suite JS: if the release process does not work we can revise it KK: i like to lock things done DB: if browsers import the tests they will report the problems more quickly KK: in the current model the test can be pulled right away [mercurial haz magic] JS: If I find something wrong should I fix the test and mail the list KK: currently mail the list ... and open a bug MJS: I think people who report the bug should be allowed to fix the test AvK: you want to optimize for the case that is most common, and most common the bug reporter will be correct I think DB: you should notify the person who wrote the test JS: I am fine with attaching patches to bugs <plh> --> Mercurial server <dbaron> hg clone is an example of a test following the non-written guidelines <dbaron> default-push = https://[USERNAME]@dvcs.w3.org/hg/html/ <dbaron> is a line that you'd want to add to .hg/hgrc after: <dbaron> [paths] <dbaron> default = <hsivonen> let's make one of these: <hsivonen> that is, we should have a tool like that for the W3C harness <krisk> see <hsivonen> I'm already annoyed by having to wrap stuff in test() <hsivonen> so I can't do ok(false, "FAIL!"); in scripts that aren't supposed to run <plh> ACTION: Kris to add reftest handling in the test harness [recorded in] <trackbot> Created ACTION-200 - Add reftest handling in the test harness [on Kris Krueger - due 2010-11-12]. <krisk> uses a relative path <hsivonen> <hsivonen> you'll really want to use MQ Media Queries ftw <krisk> <weinig> sicking: <plh> a reftest: <dbaron> trackbot, associate this channel with #html-wg <trackbot> Associating this channel with #html-wg...
http://www.w3.org/2010/11/04-html-wg2-minutes.html
CC-MAIN-2018-17
refinedweb
11,137
65.25
DEVFS has gone through a bunch of debug and fix passes since the initial integration and is now ready for even wider testing on master. * Now works properly with X. * Now works properly with mono and other applications. * Now probes disklabels in GPT slices, and properly probes GPT slice 0. * Misc namespace issues fixed on reprobe (there were problems with iscsi and VN). * Auto-reprobe is now synchronous from the point of view of fdisk, gpt, and disklabel, or any program that is setting up slices and partitions. (there were races against the creation of the actual sub-devices in /dev before that are now fixed). * mount-by-serialnumber is possible via /dev/serno/. Example fstab: serno/L41JAB0G.s1d / hammer rw 1 1 And example vfs.root.mountfrom line in /boot/loader.conf: vfs.root.mountfrom="hammer:serno/L41JAB0G.s1d" * /etc/devtab integration is complete but not yet documented. An example /etc/devtab entry would be something like: driveA serno L41JAB0G And in /etc/fstab: driveA.s1d / hammer rw 1 1 devfs will be fully operational for the release. Vinum is still non-operational. -- iscsi via the iscsi-initiator.ko kernel module and the /sbin/iscontrol program is now in an alpha-test state. It works, but it isn't pretty. There is a pkgsrc package for userland target implementation called /usr/pkgsrc/devel/netbsd-iscsi-target. Iscsi is definitely in an alpha state. iscsi support will be generally working for the release but we are unlikely to have root support for it by the release. -Matt Matthew Dillon <dillon@backplane.com>
http://leaf.dragonflybsd.org/mailarchive/users/2009-08/msg00025.html
CC-MAIN-2015-06
refinedweb
262
60.92
Local variables Function parameters, as well as variables defined inside the function body, are called local variables (as opposed to global variables, which we’ll discuss in a future chapter). In this lesson, we’ll take a look at some properties of local variables in more detail. Local variable lifetime In lesson 1.3 -- Introduction to variables, we discussed how a variable definition such as int x; causes the variable to be instantiated (created) when this statement is executed. Function parameters are created and initialized when the function is entered, and variables within the function body are created and initialized at the point of definition. int x; For example: The natural follow-up question is, “so when is an instantiated variable destroyed?”. Local variables are destroyed in the opposite order of creation at the end of the set of curly braces in which it is defined (or for a function parameter, at the end of the function). Much like a person’s lifetime is defined to be the time between their birth and death, an object’s lifetime is defined to be the time between its creation and destruction. Note that variable creation and destruction happen when the program is running (called runtime), not at compile time. Therefore, lifetime is a runtime property. For advanced readers The above rules around creation, initialization, and destruction are guarantees. That is, objects must be created and initialized no later than the point of definition, and destroyed no earlier than the end of the set of the curly braces in which they are defined (or, for function parameters, at the end of the function). In actuality, the C++ specification gives compilers a lot of flexibility to determine when local variables are created and destroyed. Objects may be created earlier, or destroyed later for optimization purposes. Most often, local variables are created when the function is entered, and destroyed in the opposite order of creation when the function is exited. We’ll discuss this in more detail in a future lesson, when we talk about the call stack. Here’s a slightly more complex program demonstrating the lifetime of a variable named x: In the above program, x’s lifetime runs from the point of definition to the end of function main. This includes the time spent during the execution of function doSomething. Local scope An identifier’s scope determines where the identifier can be accessed within the source code. When an identifier can be accessed, we say it is in scope. When an identifier can not be accessed, we say it is out of scope. Scope is a compile-time property, and trying to use an identifier when it is not in scope will result in a compile error. A local variable’s scope begins at the point of variable definition, and stops at the end of the set of curly braces in which they are defined (or for function parameters, at the end of the function). This ensures variables can not be used before the point of definition (even if the compiler opts to create them before then). Here’s a program demonstrating the scope of a variable named x: In the above program, variable x enters scope at the point of definition and goes out of scope at the end of the main function. Note that variable x is not in scope anywhere inside of function doSomething. The fact that function main calls function doSomething is irrelevant in this context. Note that local variables have the same definitions for scope and lifetime. For local variables, scope and lifetime are linked -- that is, a variable’s lifetime starts when it enters scope, and ends when it goes out of scope. Another example Here’s a slightly more complex example. Remember, lifetime is a runtime property, and scope is a compile-time property, so although we are talking about both in the same program, they are enforced at different points. Parameters x and y are created when the add function is called, can only be seen/used within function add, and are destroyed at the end of add. Variables a and b are created within function main, can only be seen/used within function main, and are destroyed at the end of main. To enhance your understanding of how all this fits together, let’s trace through this program in a little more detail. The following happens, in order: And we’re done. Note that if function add were to be called twice, parameters x and y would be created and destroyed twice -- once for each call. In a program with lots of functions and function calls, variables are created and destroyed often. Functional separation In the above example, it’s easy to see that variables a and b are different variables from x and y. Now consider the following similar program: In this example, all we’ve done is change the names of variables a and b inside of function main to x and y. This program compiles and runs identically, even though functions main and add both have variables named x and y. Why does this work? First, we need to recognize that even though functions main and add both have variables named x and y, these variables are distinct. The x and y in function main have nothing to do with the x and y in function add -- they just happen to share the same names. Second, when inside of function main, the names x and y refer to main. Because the scopes don’t overlap, it’s always clear to the compiler which x and y are being referred to at any time. Key insight Names used for function parameters or variables declared in a function body are only visible within the function that declares them. This means local variables within a function can be named without regard for the names of variables in other functions. This helps keep functions independent. We’ll talk more about local scope, and other kinds of scope, in a future chapter. Where to define local variables Local variables inside the function body should be defined as close to their first use as reasonable: In the above example, each variable is defined just before it is first used. There’s no need to be strict about this -- if you prefer to swap lines 5 and 6, that’s fine. Best practice Define your local variables as close to their first use as reasonable. Quiz time Question #1 What does the following program print? Show Solution main: x = 1 y = 2 doIt: x = 1 y = 4 doIt: x = 3 y = 4 main: x = 1 y = 2 Here’s what happens in this program: Note that even though doIt‘s variables x and y had their values initialized or assigned to something different than main‘s, main‘s x and y were unaffected because they are different variables. Hello I learned a few basics on programming some years ago, I'm trying to learn properly, but I remember my teacher telling me to declare local variables at the beginning of the function, not necessarily near its first use, why is it better practice to declare them just before using them compared to declaring them at the beginning? I remember being easier to identify my variables so I didn't repeat them within the same function Your teacher might be a C programmer, where declaring the functions at the start was necessary. That's not how it goes in C++. If you have so many variables that you're starting to repeat them, your functions are too long. The benefits of declaring variables at the first use are - They're never uninitialized. If you declare them at the start, you might not know which value they should have. - Initialization is faster and safer than default-initialization+assignment. - You're not creating variables and then not using them. If a function exits early, only the variables that were required up to that point have been created (Exceptions apply to fundamental types). "Local variables are destroyed in the opposite order of creation at the end of the set of curly braces in which it is defined (or for a function parameter, at the end of the function)" Que--> what do you mean by "at the end of function", isn't the function ended when closing curly braces are encountered. Is end of curly braces not the end of function? Or this is a thing which we will learn later in this tutorial. Please explain.. Yes, the function is ended when the closing curly braces are encountered. I make the distinction because technically function parameters aren't defined inside a set of curly braces. Thanks for the clarification Alex, and I greatly appreciate your tutorials and the help & support you are providing. Thanks again.. Very well done explanation! I work a lot with 3D software and it's very comforting to see some concept of 3D programs and actual programming match. It's like a hierarchy. If you'd view a program like one above from a zoomed out total view there would be (fake global) variables visible like: funktion add → x and y funktion print → x and y funktion main → x and y So they are all at all times distinguishable. I guess the major difference is that they really are local, and have a lifetime, really get created and destoyed very so in runtime other then exist all the time. That's probably way better to handle for computation. Is that the reason or a major reason why there are local and global variables? I hope I got that right. Thank you very much for this lessons! Void returns nothing, then in quiz time question 1 how void dolt return X, y values to the main function? It doesn't, which is why main's x and y aren't changed. Hi, " Define your local variables as close to their first use as reasonable. " Why we use should from this? Thanks. i think avoid memory last I think it will be helpful when you are going through your code you will easily find out when an variable created and to find out it's scope At starting we are more prone to make silly and small misakes and defining local variables as close to their first use will help us tackle them easily for e.g -> In the below code the program will compile and work fine but don't give the desired output In this e.g when you will compile it you will get an compilation error " error: ‘y’ was not declared in this scope " which means "y" is not declared till now (in the "main" for this e.g) So this will prevent you to do mistake and get frustrated more often [code] #include <iostream> using namespace std; int main() { int x{}; cout<<"Enter x\n"; cin>>y; // It will throw an error because we are assigning the value to the variable which //is still not declared int y{}; cout<<"Enter y\n"; cin>>x; cout<<"x-y"<<x-y; return 0; } [\code] :- There may be more benefits of using this practice. This is what I think of :) Hi, There are many reasons for this. Amongst other, it is easier to read and understand the code if variables are defined as close as possible to where they are used (or else you might have forgotten the definition when you get to where they are used) Hi am a beginner I do appreciate this tutorial. I read this page a couple times well trying too get it. And this is the best I could do it works for me. #include <iostream> void doPrint(int n) { std::cout << "Character # for 'n' is: " << n << '\n'; } void doPrint2(int y) { std::cout << "And 'y' is the #: " << y << '\n'; } int main() { char x{'n'}; char y{'y'}; std::cout << "And i.e of passing a character value. " << '\n'; doPrint(x); std::cout << ("It's that ez, ") << '\n'; doPrint2('y'); return 0; } If I read this correctly, you get the character number of a char by returning it's int-value via a function, is that correct? In that case, you probably can optimize a bit. You could do something like this: But it can be even simpler, if you only want to output the ASCII number of a character. You don't even need a function. You can simply convert the char into an int by explicitly casting it into an int for the output. (casting, put simply, is forcing a data type to convert into another. It is not possible to cast every type into every other type, they have to be compatible). The variable itself will still hold the char afterwards. All you would need to do is this: Of course you can also go and optimize that as well. But these are other possibilities to accomplish what you want.</iostream></iostream> A little correction, since I messed up and can't edit: In regards to the second example I gave you, ignore that and instead use this (fixes a little error I did because I'm stupid and makes the code actually compile). Name (required) Website Save my name, email, and website in this browser for the next time I comment.
https://www.learncpp.com/cpp-tutorial/introduction-to-local-scope/
CC-MAIN-2021-17
refinedweb
2,223
67.49
Reliable Messaging in WSIT Milestone 2 Other than the usual bug fixes and minor adjustments needed to adapt to changes between WCF versions, the new work for the milestone consists of implementing some configuration settings that may affect performance. First some background: The WS-RM spec defines a SOAP-based protocol used by middleware components that exchange messages called the sender and receiver. It defines a way for a sender of to ask the receiver which messages have arrived and a way for the receiver to answer. The specification says little about how to use the protocol. Of course, the main use is that it allows the sender to ensure that all messages have arrived by periodically asking the receiver which ones have been delivered and resending the ones that haven't. A few variables affect the efficiency and performance of a system that does this. Namely how often the sender asks the receiver to account for the received messages and how often the sender resends the un-accounted-for messages. Imagine a scenario where almost no messages are lost. "Correct" values for these variables depend very much on the frequency of lost messages and the length of time it takes to deliver a message. We are exposing these variables as configuration settings, so end-users will be able to adjust them. The configurations are client configurations, unlike the other RM configurations that only affect the endpoint. As of Milestone 2, the settings are not exposed in the Netbeans UI, but it is possible to try them by manually editing the configuration files. Each setting uses a proprietary PolicyAssertion. They are: <sun:ResendInterval <sun:AckRequestInterval where sun==. The namespace will change in the next release. By default, retries happen every 2000 ms. The Resend setting in the Policy assertion will be used if it is larger than 2000, unless the system detects an abnormal build-up of unacknowledged messages, in which it will revert to the default value. By default the client requests acknowledgements on every application message. If a positive value is specified for AckRequestInterval in the policy assertion, the system will always wait for that interval between requests for acknowledgements, unless the system detects an abnormal build-up of unacknowledged messages, in which case it will refert to the default behavior. As of Milestone 2, the settings are not exposed in the Netbeans UI. However it is possible to experiment with the settings by manually adding them to the PolicyAssertion for an endpoint's wsdl:binding. Obviously, this is a temporary arrangement, since the settings ultimately need to be part of the client's configuration. However, at the moment, a client will use these settings if it finds them in the WSDL for the endpoint it is communicating with. - Login or register to post comments - Printer-friendly version - mikeg's blog - 1617 reads
https://weblogs.java.net/blog/mikeg/archive/2006/09/reliable_messag.html
CC-MAIN-2015-14
refinedweb
475
51.99
Generates a vtkTable based on an SQL query. More... #include <vtkSQLDatabaseTableSource.h> Generates a vtkTable based on an SQL query. This class combines vtkSQLDatabase, vtkSQLQuery, and vtkQueryToTable to provide a convenience class for generating tables from databases. Also this class can be easily wrapped and used within ParaView / OverView. Definition at line 39 of file vtkSQLDatabaseTableSource.h. Definition at line 43 of file vtkSQLDatabaseTableSource. The name of the array for generating or assigning pedigree ids (default "id"). If on (default), generates pedigree ids automatically. If off, assign one of the arrays to be the pedigree id. This is called by the superclass. This is the method you should override. Reimplemented from vtkTableAlgorithm.
https://vtk.org/doc/nightly/html/classvtkSQLDatabaseTableSource.html
CC-MAIN-2021-17
refinedweb
112
53.68
Linear programming with a strange result I have encoutered a strange behavior in solving a linear program. def sol_zero_sum_game(M=matrix([[1,-1], [-1,1]]),code=1) : dim = M.nrows() U=ones_matrix(dim,dim) zsg=MixedIntegerLinearProgram(maximization=False, solver="GLPK") x=zsg.new_variable(real=True,nonnegative=True, indices=[0..dim-1]) minM=min(min(M)) Id= identity_matrix(dim) M1=(abs(minM)+1)*U+M Bzsgl=M1*x zsg.set_objective(sum(x[i] for i in range(dim))) zsg.solve() xx=zsg.get_values(x) #show(xx) for i in range(0,dim) : zsg.add_constraint(Bzsgl[i]>=1) if code==1 : return zsg.show() if code==2 : return xx The above code can display the program for the following G matrix : G=matrix([[1,-1], [-1,1]]) sol_zero_sum_game(G,1) The solution is given by : G=matrix([[1,-1], [-1,1]]) sol_zero_sum_game(G,2) which gives {0: 0.0, 1: 0.0} as a solution. But this is obviously false since after substitution of $(0,0)$ in the constrains it doesn't work. As this is the formalisation of a game I know that the solution is $(0.25, 0.25)$ as confirmed by an other software LIPS (see the screen capture below). I have certainly made a mistake but I can't see where. Thanks for help.
https://ask.sagemath.org/question/55430/linear-programming-with-a-strange-result/?sort=oldest
CC-MAIN-2021-39
refinedweb
216
53.37
On Wed, Jul 26, 2006 at 09:30:44AM -0700, Simon Baxter wrote: > I've had this problem for ages and never found any resolution to it as it's > really only a problem with DVDs that have a lot of chapters. > > When it get towards the end of a chapter, the sound disappears. The video > is still fine, but the sound stops. > > Are there any debugs I can run to get more info out of dvd-plugin? I also had this problem until a applied the following workaround. It seems the call to 'm_xineLib.execFuncResetAudio();' causes a short audio dropout. But it appears the call is not neccessary anyway. At least I noticed no bad sideeffects yet after skipping the call. --------------------------------------------- --- ../../vdr.org/VDR/PLUGINS/src/xine-0.7.9/xineDevice.c 2006-04-17 20:36:01.000000000 +0200 +++ xine-0.7.9/xineDevice.c 2006-08-26 07:27:39.000000000 +0200 @@ -2969,7 +2969,11 @@ // np = true; } +#if 0 m_xineLib.execFuncResetAudio(); +#else + xfprintf(stderr, "skipping execFuncResetAudio()\n"); +#endif if (f) m_xineLib.execFuncSetSpeed(0.0); ---------------------------------------------
http://www.linuxtv.org/pipermail/vdr/2007-January/011920.html
CC-MAIN-2014-52
refinedweb
178
61.43
It looks like you're new here. If you want to get involved, click one of these buttons! Hello, I'm trying to make an ASCII code type box (the ones you see on GameFAQs of famous logos and such) since its part of the Chapter 6 Lab for Programming Arcade Games with Pygame. Here is what I have so far. The trouble I'm having is to determine how to space out the middle line. I know this depends on what n equals and I've been trying to tweak it to match any number of 'n'. The line I am having trouble with is Line 39 I'm trying to calculate how many spaces I actually need by multiplying it by n*2 to get the max number across plus a little bit for one extra space. This only works for n = 4, so I think I messed up pretty badly sadly enough. I do this because you can't just add a number to a space character and I believe you can only multiply or divide a space by a number. I was even thinking about rethinking this and converting whatever I obtain into int using int(), but I might be overthinking things. Here's my code: # n = row number # Try 3 rows: n = 4 for i in range(n): for j in range(n*2): # Add if statements to limit the cases to draw a line of o's # on the top and bottom lines for the box but spaces # with an o on each side for all the lines in between: # Top row: if i == 0: print("o", end = " ") # Middle cases (maintain first row to last row, but limit to 1st column) if i > 0 and i < n-1 and j < 1: print("o", end = " ") # Maintain first row to last row, but limit to last column based on n*2: if i > 0 and i < n-1 and j == (n): # 13 spaces needed for n = 4 print(" " * ((n*2)+(n-1)), "o", end = " ") # Bottom row: if i == n-1: print("o", end = " ") print() # Fix the middle case # Chapter 6 Lab Link: # This is my output so far (ignore the first case since it was in an unrelated part of the lab *** Python 3.3.3 (v3.3.3:c3896275c0f6, Nov 18 2013, 21:18:40) [MSC v.1600 32 bit (Intel)] on win32. *** *** Remote Python engine is active *** >>> *** Remote Interpreter Reinitialized *** >>> 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 o o o o o o o o o o o o o o o o o o o o >>> Depending on the text in the terminal/CLI the space usually is the same, in Python 2 you can just multiply the characters out to make length e.g.: def print_squarebox_lines(n,l): print "x" * n for i in range(l): print "x" + " " * (n-2) + "x" print "x" * n `` print_squarebox_lines(3,3): xxx x x x x x x xxx Not sure how python 3 stacks up
http://programmersheaven.com/discussion/434379/need-help-with-ascii-type-box-in-python
CC-MAIN-2017-09
refinedweb
538
71.52
When I update the firmware, I get an error and cannot connect to the port.Who can help me? I’ve also finally had some time to pull out my Chip Wisher Lite and start looking at doing the tutorials available via the notebooks. As I was going through the tutorials I hit the problem of not being able to update the firmware. dmesg output: [93941.615885] usb 1-1: New USB device found, idVendor=2b3e, idProduct=ace2, bcdDevice= 1.00 [93941.615890] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [93941.615894] usb 1-1: Product: ChipWhisperer Lite [93941.615897] usb 1-1: Manufacturer: NewAE Technology Inc. [93941.615900] usb 1-1: SerialNumber: 53313120313436373230362039303030 kernel version (Ubuntu 20): Linux 5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux based on the upgrade guide at python output: import chipwhisperer as cw scope = cw.scope() Traceback (most recent call last): File “/home/fivedown/workspace/chipwhisperer/software/chipwhisperer/hardware/naeusb/naeusb.py”, line 296, in txrx response = self.usbdev().ctrl_transfer(payload[0], payload[1], payload[2], payload[3], payload[4], timeout=self._timeout) File “/home/fivedown/.local/lib/python3.8/site-packages/usb/core.py”, line 1070, in ctrl_transfer ret = self._ctx.backend.ctrl_transfer( File “/home/fivedown/.local/lib/python3.8/site-packages/usb/backend/libusb1.py”, line 893, in ctrl_transfer ret = _check(self.lib.libusb_control_transfer( File “/home/fivedown/.local/lib/python3.8/site-packages/usb/backend/libusb1.py”, line 604, in _check raise USBError(_strerror(ret), ret, _libusb_errno[ret]) usb.core.USBError: [Errno 32] Pipe error WARNING:root:Your firmware is outdated - latest is 0.20. Suggested to update firmware, as you may experience errors programmer = cw.SAMFWLoader(scope=scope) When I disconnect and re-connect the device it dosen’t come back up as a serial device i.e. /dev/ttyACM0. I’ve also tried to erase before the upload but the same issue. programmer.enter_bootloader(really_enter=True) Entering bootloader mode… Please wait until the ChipWhisperer shows up as a serial port. Once it has, call the program(COMPORT, FWPATH) to program the ChipWhisperer Default firmware can be found at chipwhisperer/hardware/capture/chipwhisperer-lite/sam3u_fw/SAM3U_VendorExample/Debug/SAM3U_CW1173.bin Traceback (most recent call last): File “”, line 1, in File “/home/fivedown/workspace/chipwhisperer/software/chipwhisperer/capture/scopes/cwhardware/ChipWhispererSAM3Update.py”, line 152, in enter_bootloader self.usb.enterBootloader(True) AttributeError: ‘SAMFWLoader’ object has no attribute ‘usb’ Hopefully the following output helps, please let me know if there is anything else I can post to help debug. Hi, It looks like there’s some USB issues preventing the scope from actually connecting. We recently added a new USB command and I’m guessing my code to avoid that command on older firmware isn’t completely working. As a quick fix, you should be able to short the erase pins on the Lite, after which you can proceed with step 4 in the firmware update instructions. @biyuanqiao you should be able to follow the same steps to update your firmware as well. Alex My problem is the same as yours Thanks for your help that worked great. I just shorted erase jumper (JP2) on the cwlite with a jumper, powered on. Removed the jumper and it came up as /dev/ttyACM0 and I was able to program the cwlite as per the doco. Then rebooting the cwlite it came back and now I’m going through the jupyter notebooks!
https://forum.newae.com/t/chipwhisperer-lite-fireware-update/2103
CC-MAIN-2020-45
refinedweb
584
50.63
5919/what-are-the-differences-between-type-and-isinstance What are the differences between these two code fragments? Using type(): import types if type(a) is types.DictType: do_something() if type(b) in types.StringTypes: do_something_else() Using isinstance(): if isinstance(a, dict): do_something() if isinstance(b, str) or isinstance(b, unicode): do_something_else() To summarize the contents of other (already good!) answers, isinstance caters for inheritance (an instance of a derived class is an instance of a base class, too), while checking for equality of typedoes not (it demands identity of types and rejects instances of subtypes, AKA subclasses).: return treatasscalar(x) (see here). (see here. Normally, in Python, you want your code ...READ MORE There are a lot of pressing topics ...READ MORE The key differences include changes in the ...READ MORE What are each's advantages and drawbacks? I've noticed ...READ MORE Lists are mutable(values can be changed) whereas ...READ MORE Classes and Labels both are almost same things ...READ MORE There are few differences between Python and ...READ MORE The theoritical approach can be this way, re.match is ...READ MORE down voteaccepted ++ is not an operator. It is ...READ MORE import re a = " this is a ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/5919/what-are-the-differences-between-type-and-isinstance?show=5920
CC-MAIN-2022-40
refinedweb
227
67.65
unshortenit 0.2.0 Unshortens adf.ly, adfoc.us, lnx.lu, linkbucks, sh.st, and any 301 redirected shortener urls Unshortens ad-based urls and 301 redirects. Supports adf.ly, lnx.lu, linkbucks.com, sh.st, and adfoc.us Features - Supports unshortening the following ad-based shortners: - Adf.ly and related subdomains - Custom adf.ly domains by passing the type=’adfly’ parameter - Lnx.lu - Linkbucks.com and related subdomains (Selenium library with PhantomJS required) - Adfoc.us - Sh.st Supports any 301 redirected urls Python 2.7 and 3.3 support Usage import unshortenit unshortened_uri,status = unshortenit.unshorten(‘’) > unshortenit.unshorten will return a tuple (unshortened_uri,status) > unshortened_uri will contain the unshortened uri. If you pass in a non-shortener url it will return the original url. > status will contain the status code or any error messages Installation pip install unshortenit In order to enable linkbucks.com support you will need to install selenium along with PhantomJS. History 0.1.0 (2013-10-08) - First release. 0.1.1 (2013-10-11) - Added support for custom adf.ly domains via the type=’adfly’ variable. 0.1.2 (2013-10-11) - Fixed bug with t.co not working. 0.1.3 (2013-10-11) - Added a timeout parameter 0.1.4 (2013-10-12) - Added support for p.ost.im. - Fixed blocking issue with direct links to file downloads 0.1.6 (2014-02-01) - Fixed adfoc.us issues resulting from changes to their site - Fixed linkbucks.com issues resulting from changes to their site 0.1.7 (2014-02-03) - Fixed linkbucks.com issues resulting from additional changes to their site 0.1.8 (2014-02-04) - Fixed linkbucks.com issues resulting from additional changes to their site 0.1.9 (2014-02-08) - Switched linkbucks.com to use selenium PhantomJS driver due to ongoing challenges with their site 0.2.0 (2014-02-25) - Removed PyV8 requirement for adf.ly - Added ay.gy domain for adf.ly regex - Added sh.st support - Author: Jeff Kehler - Keywords: unshortener adf.ly linkbucks lnx.lu adfoc.us sh.st shortener - License: MIT - Categories - Package Index Owner: DevKeh - DOAP record: unshortenit-0.2.0.xml
https://pypi.python.org/pypi/unshortenit/0.2.0
CC-MAIN-2016-50
refinedweb
360
56.01
Andrew Morton a écrit :> Andi Kleen <ak@muc.de> wrote:>> On Thursday 09 February 2006 19:04, Andrew Morton wrote:>>> Ashok Raj <ashok.raj@intel.com> wrote:>>>> The problem was with ACPI just simply looking at the namespace doesnt>>>> exactly give us an idea of how many processors are possible in this platform.>>> We need to fix this asap - the performance penalty for HOTPLUG_CPU=y,>>> NR_CPUS=lots will be appreciable.>> What is this performance penalty exactly? > > All those for_each_cpu() loops will hit NR_CPUS cachelines instead of> hweight(cpu_possible_map) cachelines.You mean NR_CPUS bits, mostly all included in a single cacheline, and even in a single long word :) for most cases (NR_CPUS <= 32 or 64)-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/2/10/84
CC-MAIN-2013-48
refinedweb
142
55.34
K-nearest neighbors. Many machine learning techniques involve building a model that is capable of representing the data and then finding the optimal parameters for the model to minimize error. K-nearest neighbors, however, is an example of instance-based learning where we instead simply store the training data and use it to make new predictions. In general, instance-based techniques such as k-nearest neighbors are lazy learners, as compared to model-based techniques which are eager learners. A lazy approach will only "learn" from the data (to make a prediction) when a new query is made while an eager learner will learn from the data right away and build a generalized model capable of predicting any value. Thus, lazy learners are fast to train and slower to query, while eager learners are slower to train but can make new predictions very quickly. K-nearest neighbors is based on the assumption of locality in your data, that nearby points have similar values. Thus, what's a good way to predict a new data point? Just look for similar observations in your training data and develop your best guess. More specifically, use the k-nearest observations to formulate your prediction. For regression, this prediction is some measure of the average value (ie. mean) of the surrounding neighbors. For classification, this prediction is a result of some voting mechanism (ie. mode) of the surrounding neighbors. In the example above, we selected $k=3$ to show an example of k-nearest neighbors for both regression and classification. Finding similar observations (measures of distance) In order to find the k-nearest neighbors, we must first define some measure of "closeness" (in order words, distance). Remember, our observations exist within an n-dimensional featurespace; as it turns out, there are many ways to measure the distance between two points in space. In sklearn's implementation for k-nearest neighbors, you can use any of the available methods found in the DistanceMetric class. Remember, we're using distance as a proxy for measuring similarity between points. One important note to make is that since we're using distance as a measure of similarity, we're implicitly constraining our model to weight all features equally; distance in the $x _1$ dimension is on the same scale as distance in the $x _n$ dimension. The best way to deal with data containing features of varying importance is to simply feed the KNN algorithm more data; given sufficient training data (sometimes this might mean hundreds of thousands of examples), the detrimental effect of implicitly weighting all features equally will diminish. Choosing k After selecting a metric to be used for finding the observations closest to the query, you must determine how many of the nearest neighbors you'd like to take into account. By default, sklearn.neighbors.KNeighborsRegressor and sklearn.neighbors.KNeighborsClassifier use 5 as the default value for n_neighbors (otherwise known as $k$), but this can easily be optimized using something like K-fold cross validation to try out different values for $k$ and determine the best choice. A common rule of thumb seems to be that $\sqrt n$, where $n$ is the number of samples in your training set, seems to often perform well. Consider the case where we set $k=n$. Wherever our query point is, we'd end up using the entire dataset to predict the value. If we're using a simple average, the resulting predictor will return a constant value regardless of the input features. For classification, you'll always end up classifying new data points as whichever class is most prevalent in the training dataset. For regression, you'll always end up return the mean value of the entire training dataset. However, if we instead use a weighted average for prediction, where data points closer to the query take on higher weights, how would the results look? Let's take a dive into a Jupyter notebook and explore this. First, let's load an example dataset and prepare the data for training and testing. from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split boston = load_boston() features_train, features_test, labels_train, labels_test = train_test_split(boston.data, boston.target, test_size=0.4, random_state=0) import pandas as pd features_train = pd.DataFrame(features_train, columns = boston.feature_names) features_test = pd.DataFrame(features_test, columns = boston.feature_names) # Select rooms column for univariate regression example features_train = features_train.iloc[:, 5] features_test = features_test.iloc[:, 5] labels_train = pd.DataFrame(labels_train, columns = ['Price']) labels_test = pd.DataFrame(labels_test, columns = ['Price']) Next, I'll just take a peek at the data to see what we're working with. import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_train, labels_train) Lastly, I'll train six KNN models with varying weights and values for $k$. from sklearn.neighbors import KNeighborsRegressor import numpy as np n = features_train.shape[0] params = [{'weights': 'uniform', 'n_neighbors': 3}, {'weights': 'distance', 'n_neighbors': 3}, {'weights': 'uniform', 'n_neighbors': np.sqrt(n).astype(int)}, {'weights': 'distance', 'n_neighbors': np.sqrt(n).astype(int)}, {'weights': 'uniform', 'n_neighbors': n}, {'weights': 'distance', 'n_neighbors': n}] plot_range = np.arange(min(features_train), max(features_train), 0.01) for i, param in enumerate(params): model = KNeighborsRegressor(**params[i]) model.fit(features_train.values.reshape(-1, 1), labels_train) pred = model.predict(plot_range.reshape(-1, 1)) # Sort values for plotting new_X, new_y = zip(*sorted(zip(plot_range.reshape(-1, 1), pred))) plt.subplot(3, 2, i + 1) plt.scatter(features_train.values.reshape(-1, 1), labels_train, c='k', label='data') plt.plot(new_X, new_y, c='g', label='prediction') plt.legend() plt.title("KNeighborsRegressor (k = %i, weights = '%s')" % (params[i]['n_neighbors'], params[i]['weights'])) plt.ylabel('Price') plt.xlabel('Number of rooms') plt.rcParams["figure.figsize"] = [24, 14] plt.show() The result is shown below. I kind of jumped ahead of myself, as we haven't finished discussing the considerations of the KNN algorithm, but I wanted to go ahead and show you the effect of $k$. Note: The rule of thumb $k = \sqrt n$ seems to be our best model. Tying things together (selecting neighbors) Now that we've established a metric for measuring similarity and picked how many neighbors we'd like to include in the prediction, we need a method to search through our stored data to find the k-nearest neighbors. A brute force search, simply calculating the distance of our query from every point in our dataset, will work fairly well with small datasets but becomes undesirably slow at larger scales. Tree-based approaches can bring greater efficiencies to the search process by inferring distances. For example, if $A$ is very far from $B$ and $C$ is very close to $B$, we can infer that $A$ is also far from $C$ without explicitly measuring the distance. The sklearn documentation covers the K-D Tree and Ball Tree search algorithms, as well as some practical advice on choosing a specific search algorithm. Using stored data for predictions (measures of the average) After we've found the k-nearest neighbors, we can use their information to predict a value for the query. Classification So you have a new observation with features $x _i$ and you'd like to predict the class label, $y _i$. You've queried the KNeighborsClassifier and found the $k$ nearest neighbors. Now you're tasked with using the information present among the $k$ neighbors to predict $y _i$. The simplest method is to conduct a uniform vote where each neighbor essentially casts one vote to say "the query shares the same class as me". In the event that your vote ends in a tie, choose the label which is more common across the entire dataset. However, we can perform a more accurate vote if we decide to weight each vote based on how close it is to the query point. This is accomplished by weighting each vote by the inverse of its distance from the query so that closer points have stronger votes. Regression Now let's suppose you have a new observation with features $x _i$ and you'd like to predict a continuous target output, $y _i$. You've queried the KNeighborsRegressor and found the $k$ nearest neighbors. Now you're tasked with using the information present among the $k$ neighbors to predict $y _i$. The approach is largely the same as discussed above for classifiers, but rather than establishing a voting mechanism we'll simply calculate the mean of the neighbor's $y$ values. We can still use uniform or distance weighting. Fun fact: You can combine k-nearest neighbors with linear regression to build a collection of linear models as a predictor. Read more here. Summary K-nearest neighbors is an example of instance-based learning where we store the training data and use it directly to generate a prediction, rather than attempted to build a generalized model. The three main things you must define for a KNN algorithm is a way to measure distance, how many neighbors ($k$) to use in your predictions, and how to average the information from your neighbors to generate a prediction. Get the latest posts delivered right to your inbox
https://www.jeremyjordan.me/k-nearest-neighbors/
CC-MAIN-2018-22
refinedweb
1,504
55.54
Agenda See also: IRC log chris: long agenda today ... paul suggested yesterday that today we focus on WSDL element id first ... after that we do what Fabian asked for , after that LC issues ... after that primer issues. That's rough schedule for today, comments? Paul: one suggestion: Paul: I have to leave at three o clock today, I'd like to talk about CR before, could we do that for lunch? Chris: sure ... after lunch 30-45 minutes Paul: Chris will chair the rest of today and tomorrow <Ashok> pl post link to document on IRC Paul: the document is now a "editors team" document ... but still David's name should remain first on the publication ... great thanks to Dave for his work on the document! chris: Dave, is this the right version? Dave: yes Paul: all AIs related to element identifiers (agenda item 1)) done ... only need bug numer for AI-188 (Chris goes through the AIs related to element identifiers) Chris: we start with 4127, Fabian's issue Fabian: WSDL 1.1. allows several operations with the same name on the endpoint ... original proposal does not take that into account ... dave has listed the 5 options to resolve the issue <cferris> Chris: above is the issue, Dave added that information to the issue Dave describes Chris: who is fine with no 1? 10 people Chris: who is fine with no 2? 5 people Chris: who is fine with no 3? 9 people Chris: who is fine with no 4? Prasad: question on no 4: does WSDL 1.1 require input names to be unique? Chris: you mean the name of the element? Dave: the value of the @name attribute at the element Prasad: is that enough? Can you still have conflicts if the output is different? The combination of both, "input, output" ... I thought the combination needs to be unique Dave: do you have an example? Prasad: Not 100% sure, looking it up Chris: who is fine with no 4? 5 people Chris: who is fine with no 5? 5 people Chris: "who can not life question": Monica: Fabian cannot life with 2 and 3 <Ashok> I have a preference for 5 Chris: preferences? Ashok: preference for 5 Chris: 4 have preference for 5 Dave: I prefer no. 3 ... can life with 5, it covers a scenarios, but it's a bunch of work for little gain Glen: Dave, how would the URI be for 3 and 5? ... how would the difference be? Dave: the example is in the bugzilla entry Chris: preference for 3? 3 people <Yakov> +1 to 3 <prasad> Chris: let's knock out 2 and 4 ... now only discuss 1, 3 and 5 <dorchard> <op name="foo"><input></op> <dorchard> <op name="bar"><input/><output/></op> <dorchard> <op name="foo"><input/><output/></op> Dave: my example does not address Fabian's concerns, I think now ... 5 does not really address the problem <prasad> That is the issue I was raising, <input> element alone is not enough to distinguish between two operations, with the "same name" Chris: now about 3, who still cannot live with it? Fabian: 3 would be a real problem if you use more than one operation with the same name Dave: understand, if there is overloading, you could not specify the second or third choice Glen: you need to specify them somehow, e.g. by cardinal ordering Dave: if you do overloading: are they disambiguated by input / output children? Or also differentiate them by names? Glen: operation overloading was tied to RPC load, wanted to be able to map a Java object to WSDL ... in document literal world, now you have different elements Dave: so combination of input / output with name attribute is sufficient? Glen: no, you need the name ... the disambiguation happens on the qname of the input element. The output does not really matter, you can't overload s.t. with the same arguments chris: is this an academic exercise? ... i.e.: the name of the operation is just the name attribute on an element Glen: the name got used in code mapping. If I have a Java object, that gets mapped to a WSDL in some cases Chris: how many people do this? <dorchard> So does adding the message attribute qname value disambiguate? <dorchard> ie, wsdl11.portTypeOperation(TicketAgent/listFlights(tns:listFlightsRequest)) <prasad> Slightly old but addresses this aspect in depth: Glen: I do that Chris: o.k., different question: Do you have a real-world use case for two different operations with the same name, where you look in the input message parameters to distinguish? Glen: it's unlikely Chris: so I come back to option 1 or 3 <dorchard> The other option is based upon the name of the input, ie wsdl11.portTypeOperation(TicketAgent/listFlights(foo)) <dorchard> when <input name="foo" message="tns:listFlightsRequest"/> Chris: how about option 6 "the identifier resolves to all operations"? Fabian: don't think it's perfect, but I could live with it ... looking for a solution that will not create real-world difficulties Chris: new option 6 - who can live with it? 14 people Symon: a comment: use case of a different policy. You need to define which part to sign by the namespace. Imagine there is a change of namespace. ... you might need to have a new policy, just because of the namespace change Glen: that does not apply here Chris: who cannot live with 6? WG agreed on direction on resolution: 6. Dave will update Bugzilla so before lunch Chris: O.K., next one is (scoping of WSDL element identifiers spec) Chris: proposal is to be constraint to be consistent with attachment points defined in the attachment spec ... my mail on the issue, finishing AI 173 Paul: the document discusses element identifiers in general ... the WSDL WG let us go with it so far ... we should rename the document "WS Policy WSDL 1.1. element identifiers" ... in our spec, we should make explicit which identifiers are valid attachment points ... the WG said they wanted to have an informative reference to the document ... under Chris proposal, I'm not sure how we can do that ... we can't have a normative reference to a note ... in summary: I agree with Chris proposal ... my counter proposal is: don't use the word "policy" in the document ... if we delete many rows, the WSDL WG might say "you did not do what you promised" ... so we need to find in the attachment spec where the valid attachment points are ... Asir said: the non-normative reference can be high up ... so two decisions: do we agree with the sentiment to your proposal ... if we do that, we should structure that in the way so we can use it ... and we need to be able to reference that from the attachment spec ... the Key we have is the first column of the table in the element identiers draft ... we could say "the valid attachment points for WSDL 1.1. are the same as the elements which are listed " Chris: so: anybody who would disagree with the general "theme" to say: the set of external attachment points are the same as the set of internal attachment points? Felix: need to make sure WSDL WG agrees Paul: not necessary, they would have given us a LC comment already Felix: only necessary to get agreement if we want to have a normative document on WSDL 1.1. element identifiers Paul: sure, but that's not the question here Chris: we had the document out for LC, we had no comments on this question ... so everybody fine with constraining the element idenfifiers? Prasad: How about 4127? Will we leave the element ident spec as it is? Chris: yes Dave: what you do in the WSDL 1.1 element ident spec: you say you don't disambiguate ... in the attachment spec you say: operations apply to all elements Paul: agree, element identifiers is a generic sub routine Tony: I don't agree with first sentence in sec. 4 of attachment spec Paul: that will change, editors have an AI to change that, related to WSDL element ID spec Asir: decided to add a reference to WSDL 1.1., it is an outstanding action waiting for element ID draft ... one thing: waiting for a concrete proposal Chris: agree, now I'm looking only for a direction Paul: so we have a consensus that we need a restricted scope. <Ashok> Agree with PaulC Paul: my recommendation is : we should leave the WSDL 1.1. document as a generic document, and make the restricted scope in the attachment spec ... Chris said "delete the rows we don't use", I say: we need new text in the attachment spec Chris: so we don't change the scope of sec. 4 of attachment spec <dorchard> previous issue proposal, to add at the end of 3.4.1. Chris: and say: the element identfifiers which are scope of attachment need to be described in attachmetn spec <dorchard> "When a URI domain expression does not uniquely identify resources (such as WSDL 1.1 operation name overloading), the Policy applies to all the resources that are identified." Dave agrees to flesh out a proposal for attachment spec Asir: I don't understand Tony's comment on first sentence of sec. 4 in attachment spec Maryann explains Tony's comment <scribe> ACTION: David to make a formal proposal for WSDL 1.1. element identifiers referenced by / within attachment spec [recorded in] <trackbot> Created ACTION-197 - Make a formal proposal for WSDL 1.1. element identifiers referenced by / within attachment spec [on David Orchard - due 2007-01-24]. Chris: now issue 4208, see Dave: we should use element names "input" and "output", change "In" and "Out" for message references Chris: for this issue there are two proposals: Dave and Ashok ... Ashok's proposal has no issue number, but is related Dave: we could say: "let's do this". Ashok could say: "I don't like 'input' and 'output' at the end" <dorchard> I think we should adopt mine, and then look at Ashok's. Chris: sounds reasonable Ashok: like the word in the message name rather than the qualifiers at the end Chris: an example? Umit: both for Dave and Ashok? <dorchard> wsdl11.portTypeMessageReference(TicketAgent/listFlights/input) <dorchard> Ashok's pref: wsdl11.portTypeinput(TicketAgent/listFlights) Ashok: that's my proposal, the word "input" or "output" in the "method" name Chris: WSDL WG said: "In" and "Out" are inappropriate ... can we assure it is input and output for both and close 4208? Ashok: fine by me Chris: everybody fine with that? <prasad> Should it be wsdl11.portType.input(TicketAgent/listFlights) and wsdl11.portType.output(TicketAgent/listFlights)? <prasad> I.e. not munge like "portTypeinput" RESOLUTION: 4208 is resolved by <prasad> I hope that was a typo <umit> +1, that is my point as well. <dorchard> My guess is that there should 3 options: Chris: Ashok, could you open an issue about your mail? <dorchard> 1) as-is Ashok: let's discuss this briefly <dorchard> 2) wsdl11.portTypeinput(TicketAgent/listFlights) <dorchard> 3) wsdl11.portType.input(TicketAgent/listFlights) <not-cferris> here is the note to the thread to which Ashok is referring: Paul: we are not sure about the answer of WSDL WG on that <prasad> What is the argument for option (2) over (3)? Dave: we need to align the people as closely as possible ... we are not far enough to judge if it matters Chris: I think it's just syntax Umit: don't think it's just syntax <umit> there are two languages, WSDL + Messages Chris: question again to Ashok: could you raise an issue on WSDL element identifiers, with a link to mail thread from Jonathan ... that would be the remaining open issue <umit> option 3 is clearer on the boundary. Chris: question: if we resolve this issue, could we publish the WSDL 1..1 spec? Ashok: yes <prasad> +1 to option 3 <scribe> ACTION: Ashok to open issue on WSDL 1.1. spec with a link to mail thread from Jonathan [recorded in] <trackbot> Created ACTION-198 - Open issue on WSDL 1.1. spec with a link to mail thread from Jonathan [on Ashok Malhotra - due 2007-01-24]. Paul: Ashok, when you open the mail thread, the bug will identify the three alternatives in IRC? 1) as-is , 2) wsdl11.portTypeinput(TicketAgent/listFlights) , 3) wsdl11portTypeinput(TicketAgent/listFlights) Chris: WSDL WG said they don't have an opinion on this <umit> 3 is wsdl11.portType.input(TicketAgent/listFlights) <umit> the "." is significant <umit> the "." is significant thanks, Umit <prasad> Say it again? Chris: preference for 1)? 2 people. Chris: preference for 3) (2 is dropped)? 4 people Chris: who needs time? 5 people <Yakov> +1 for more time Paul: who does not care? 3 people Chris: who cannot live with 1)? 4 people Paul: please look at wsdl11.portType.input(TicketAgent/listFlights) and check if it works Ashok: I'll do my AI on this within an hour Chris: now break, after that v.next ... coming back at 10:55 <Fabian> pong Fabian: on Chris: everybody understands the issue? (no questions) Chris: everybody fine with closing this as v.next now? Prasad: what does marking as "v.next" mean now? <dorchard> Proposal for scoping of wsdl, bug 4045, updated at Chris: at Proposed Rec stage, we look at them Paul: keyword is "futureConsideration" <asir> use the following URI <asir> <prasad> Keywords URI: Paul: so we close the issues as "won't fixed" and use the keyword "futureConsideration" Chris: so everybody fine to close 4045 with "won't fixed" and "futureConsideration"? RESOLUTION: everybody fine to close 4178 with "won't fix" and "futureConsideration" Tony: can we make new proposals later? Paul: yes, this is a candidate list (fix 4045>4178 in the minutes later) Chris: now 4179 Fabian on Chris: discussion? Objections to close this like 4178? RESOLUTION: everybody fine to close 4179 with "won't fixed" and "futureConsideration" <not-cferris> ping Chris: now 4206 <not-cferris> (Fabian explains the mail) Chris: so proposal is to add sec. 3.2 Monica: right, and to make explanation in sec. 4.5 Frederick: is this the issue "you don't know howt to intersect if you have parameters"? Monica: yes Frederick: important and good issue Fabian: look at sec. 4.5 ... the end of the example , there is an explanation , in the context of the example Chris: proposal in the original bugzilla entry should be replaced? Monica: we could separate the other parts of the original proposal, to allow to close the LC issue Chris: new proposal replaces the original proposal by replacing one sentence in sec. 4.5 , to refer to sec. 3.2 ... that would be enough to close 4206 ... and we could work on the other aspects of 4206 later Monica: yes, we just need an AI for these aspects Frederick: does that really resolve the issue? Umit: the point was to have a default Frederick: the issue was: what to do if there is a conflict, what are the options? ... why don't we describe the options Asir: they took the security token as an example Frederick: there was a class of issues Asir: not aware of *class* of issues, just security token Frederick: can't describe the class now, but have the feeling there was something Fabian: original proposal was to suggest as a default to check all assertion parameters for compatability or exact matches ... we realized in the meantime that this is s.t. you can't do with an XML infoset ... to establish equality of two XML infosets ... the framework suggests only QName top level matching. We think now that this is the right thing to do Chris: again: revised proposal is instead of changing the algorithm is to add the reference to sec. 3.2 Fabian: in XML there is no reliable way of canonicalizing two XML infoset Frederick: so a domain could have one more than one representation for a value Asir: yes, e.g. different order of values Frederick: what is the error condition? Asir: there is none, it is an undefined behavior Umit: QName is fine. But if you have parameters and there is no idea about domian specific processing ... you could default to fail Chris: no, the framework will say "if the QNames are the same, they are compatible", there is no failure Dan: parameters are the payload of the assertion, they are not relevant for compatability Chris: so latest proposal is to add the reference to sec. 3.2 and to have an AI against primer and guidlines for other changes ... proposal in ... fine with closing the issue with that resolution? RESOLUTION: 4206 closed with proposal in Chris: next issue is 4196 <not-cferris> Chris: schedule changed, now 4198 <monica> For 4198: updated Fabian describes 4198 Chris reads proposal at Chris: sounds good to me, fine to close 4198 with that? Tony: WS-PolicyAttachment defines only certain mechanisms, there could be others ... should be clarified that this is particular to the mechanisms that WS-PolicyAttachment defines ... so is this be scoped to what is definded in WS-PolicyAttachment? Fabian: why should this be restriced? Tony: other domains may not want to buy this Fabian: it's guidelines, so no normative text Chris: essence of the point is: assertion "foo" means "foo", no matter what attachment I use Tony: unless it has parameters :) Dan: you should not tie semantics into the attachment mechanism Tony: don't agree, In security policy, we say what valid subjects you can attach the policy to Dan: agree with what you say, but the *way* you attach the subject, e.g. external versus inline, does not matter ... no matter what mechanism you use, the semantics should be the same (Chris types a proposal) <not-cferris> Although a policy assertion may be tailored for or constrained to a specific set of <not-cferris> take 2 <not-cferris>. <not-cferris> Dan suggests s/are not/should not/ Chris: better "should not be" Tony: what brought up the issue? Umit: we have now various ways of attachment ... the problem is to have a "sub routine" to make sure that the semantics are the same Tony: I could have an assertion attached to WSDL, I could do the same for a message ... how do you determine the subject? Maryann: we don't have examples for that Umit: given a domain, people should not have a fixed set of policy subjects? Tony: it may not be the case, the assertion may be dynamic Fabian: don't think it's an issue here ... only question is if I should recommend which mechanism to use, or leave it up to the implementers? Umit: the statements says "a policy assertion may be tailored for a specific set of policy subjects by design" Dan: attachement spec says: the operations you are supposed to do (merging) are also independent of the attachment mechanism Tony: willing to go along with that change, but needs to be looked at in v.next Chris: other discussion? RESOLUTION: closed with proposal in as amanded by Chris here: ." break now, resume at 1 p.m. pacific time after lunch, discussion on getting out of LC > going into CR <scribe> scribe: prasad <not-cferris> we're starting up again Paul: No detail on the agenda item ... it is important that we all have same understanding ... we need directors approval to go from LC to CR Paul walks through the document at a draft schedule Goal to make March f2f interop testing meeting a) W3C process Substantive changes like adding ignorable in LC would require another LC Not in our case paul walks through (a) 1-7 scribe: clarifies consensus and formal / minority objection walk throgh of item (b) If we close all LC issus this week, we can get ready for directors call with chairs in Feb sometime Paul; It is not required to show that a technical report has two independent and interoperable implementations scribe: as part of the director's request. However, the WG should include a report of present and expected implementations as part of the request After gathering implementation experience, the WG may declare certain features as being "at risk" scribe: AC reps may appeal the decision to advance Paul: That is the summary of how to get out of LC and what to do in LC Felix: It is not 100% necessary that the chairs attend the call with the director. The Director may rely on W3C's rep's recommendation Paul: Item (c) ... Sometime back IBM and MS submitted scenarios doc Asir: Explains color coding in the scenarios doc Paul: As soon as we exit last call we will start getting volunteers to work on available scenarios in the document ... W3C does not limit two two participants in the WG anymore ... the chairs welcome WG companies bringing new resources ... If we don't get volunteers, we will schedule time on regular calls to work on the scenarios (e) Draft schedule Umit: Are you going to publish what IBM and MS are working on? Paul / Chris: Yes, see the next items in (e) (1) IBM and MS contribute the updated scenarios pack may be end of next week..? say Jan 26th (e-2) Close Last Call issues .. Jan 18 (e-3) Co-chairs establish a plan to complete the remaining scenarios work Jan 26 (e-4) Editors deliver CR drafts for publication - Jan 26 (e-5) WG members review of candidate CR drafts Jan 31 (e-6) hairs prepare disposition of comments and other LC evidence - Jan 31 (e-7) Co-chairs' CR conference call with the Director and other W3C staff - 2nd/3rd wk of Feb (e-8) Editors deliver Scenarios for publication - Prior to call with the Director (e-9) Remaining scenarios are due - TBD (e-10) Director's decision and CFI announcement -- 3rd week of Feb (e-11) CR publication, WG publishes the First Public Working Draft of Scenarios - 3rd week of Feb (e-12) Editors deliver updated Scenarios for publication - TBD (e-13) WG publishes the Second Public Working Draft of Scenarios -- TBD (e-14) Mar 13-15 - WG F2F meeting in SFO, co-chairs invite implementers to attend (e-15) 2nd Interop scenarios added in 2nd WD scenarios -- TBD Asir: Publish scenarios in Word? Paul: I am happy with PDF Felix: Where do the scenarios docs get published? In WS-Policy WG space or W3C public Rec space? Paul: We should put it on WG page not the TR page Chris: Issue 4045 Paul: Explains David's proposal for 4045 and 4127 combined with Editors AI 112 <asir> +1 to Dave's proposal <scribe> ... New text: When a URI domain expression identifies multiple resources, ie WSDL 1.1 supports multiple operations with the same name (sometimes called operation name overloading), the Policy applies to all the resources that are identified. IRI References for WSDL 2.0 components are defined in Appendix C of the Web scribe: IRI References for WSDL 1.1 elements are defined in WSDL 1.1 Element Identifiers [ref]. The scope of URI domain expressions for WSDL 2.0 components or WSDL 1.1 elements is limited to the subjects defined in this specification at (ref to Attaching Policies Using WSDL 1.1 and WS-Policy Attachment for WSDL 2.0). The above change as now shown in the 4045 bug updated text, goes in 3.4.1: Chris: Comments, Qs? Discussion? ... any objections to closing 4045 and 4127? Monica: Allow fabian to comment? Paul: He said he won't be here tomorrow morning and this is the sentiment we agreed to this morning Chris: Tony, what to do with 1st sentence in section 4 Attaching Policies in WSDL 1.1 of the WSDL element identifiers doc? Asir; What is chris' concern? Chris: The paragraphs in section 4 do not talk about external attachment Paul: section 5 for WSDL 2.0 does not say anything about preferences. Should we similar thing for WSDL 11 and get rid of recommended completely Maryann: Not sure how it fits with calculating effective policy Chris: What if there is one inline and also attached Dan: Effective policy applies to both. does not matter where you get it from Maryann: I need to read through that section but, ok Asir: This recommendation is only in section 4 applies to that section only Maryann: Then you have two references to doing things with WSDL 1.1, section 3.5 and this one Paul: I have an updated proposal to address this ... discussion and attempts to refines paul's proposed changes to section 4 1st few paragraps of Attachment spec Asir: suggests copying the corresponding text for WSDL 2.0 and replacing component with construct and WSLL2.0 with WSDL 11 etc. Maryann: we need symmetry in the section title also ... further wordsmithing.. <scribe> New proposal in This is to resolve 4045 and 4127 and Editors AI 112 Review the text during break >>>>>BREAK<<<< Reconvene in 15 mins at.. 2:45pm <not-cferris> we are about to resume Chairs: Opening the floor for the discussion of the proposal Asir: I need more time Chris: I would rather not context switch again.. Asir: ready to go ... we can go as proposed. No changes to the proposal Chris: Are we good to go for this as combined proposal to close 4045 & 4127 <not-cferris> <not-cferris> RESOLUTION: Close 4045 and 4127 with text as provided in and respectively Chris: Issue 4196 proposed change for Framework, Section 2.2: Extensions that are Child Element Information Items added to Policy operators wsp:Policy, wsp:All and wsp:ExactlyOne MUST NOT use the policy language XML namespace name Daveo: See section 2.1 in FWK doc The ellipses characters are used to indicate a point of extensibility that allows other Element or Attribute Information Items .. etc. scribe: we have 2 different ways of talking of extensibility; short hand ... form ... and specific ones with {any} and @{any} in each of the sections applicable We have general model in section 2.1 and 2.2 and specific ones in the actual sections of the spec Daveo: so, it does not make sense to add this change to section 2.2 ... it is in the guidelines doc we should say "don't put extensions in policy namespace" So, none of this belongs in the FWK doc <PaulC> Paul is leaving the F2F now. <PaulC> I hope to join by phone for parts of tomorrow (Thu). scribe: discussion on daves asserition that this does not belong here .. Chris: The text "If an Element Information Item is not recognized, it MUST be treated as a policy assertion ..." is problematic ... if they are recognized, what to do? it does not say Glen: We don't have something like SOAP processing model for policy Monica: regardsless of where we place the text, noone said, what we proposed is not what we want <umit> +1 to Monica Daveo: Not to use ws-policy namespace for extensions is implied by the references we have Umit: It is implicit not stated explicitly Daveo: The spec covers it. We don't repeat things maryann/umit: where does the spec state it? Daveo: section 2.3 -- "All information items defined by this specification are identified by the XML namespace URI" Chris: We have an open issue 4238 - we don't have normative description of compact form except in schema <dorchard> In a "compact form" section, you'd have something like /Policy/{any} - allows elements from other namespace dan/asir: sounds great <asir> 4238 Chris: keep 4196 pending ... Issue 4238 ... Section 2.1 says "Normative text within this specification takes precedence over normative outlines, which in turn take precedence over the XML Schema [XML Schema Structures] descriptions." But outline for compact form points to normative form. Which means compact form is not allowed Asir: The best way to move forward is for someone to make a formal proposal. I can take thye action if we set the criteria ... one criteria - outline for compact form should be in sync with schema ... another criteria, address Monica's concern <dorchard> <not-cferris> wsp:Policy/{any} (line break) an extensibility point that allows inclusion of elements. Such elements MUST NOT have the policy language XML namespace name. <not-cferris> this should be taken as input requirements to the proposal for closing 4238 and 4196 <scribe> ACTION: Asir to make a proposal based on the guidelines to close issues 4238 and 4196 - due by tomorrow morning [recorded in] <trackbot> Created ACTION-199 - make a proposal based on the guidelines above or criteria above to close issues 4238 and 4196 [on Asir Vedamuthu - due 2007-01-17]. Issue 4138 see: Asir: Explains the proposal - joint from Asir, Umit and Dan Chris: Comments? Concerns? ... Objection to closing 4138 with the above? <not-cferris> RESOLUTION: 4138 closed with proposal in Now issue 4240 Spec says: Distributing wsp:All over an empty wsp:ExactlyOne is equivalent to no alternatives. Discussion between Dan and Umit how to explain this with other rules, viz. distribution etc. Asir's resonse to issue: <not-cferris> <not-cferris> Distributing wsp:All over an empty wsp:ExactlyOne is equivalent to no alternatives. *** reslution in msg 173 goes here *** For example, In section 4.3.3 put the base case described in prior to current example identifiedin the issue RESOLUTION: Close issue 4240 with "In section 4.3.3 put the base case described in prior to current example identified in the issue" Now issue 4235 <cferris> Chris: Describes the issue and proposed resolution Glen: Did we check what the policy attachment spec says? ... I beleive the intent of the attachment spec is spelled out. We don't need anything more <cferris> RESOLUTION: Close issue 4235 with proposal outlined in Chris: Reviews open LC issues at this point <fsasaki> <fsasaki> 4196 and 4238 pending proposal from Asir Issue 4251 discussed already Now issue 4254 <fsasaki> <fsasaki> (will be the new policy ns) RESOLUTION: close issue 4254 with adoption of the new namespace Issue: 4041 Asir: likes to defer to tomorrow Issue: 4103 Questionable use of Contoso <umit> +1 to company A Chris: describes the current status on the discussion and research on this, by Felix and others ... Company A seems non-controversial Asir: Company A is a registered trademark Frederick: How about Felix's suggestion of "Example company .." Prasad: How about Fake-Company-A? <fsasaki> example from web arch: <fsasaki> "Dirk would like to add a link from his Web site to the Oaxaca weather site. He uses the URI" Chris: Change the domain name in examples as subdomain of example.com ... Instead of using a named company, replace with a generic, "the company"? Asir: If we say "A company", we will lose context when we talk about it in the examples <umit> Here is some kind of text we use; <umit> Let us look at a fictitious scenario used in this document to illustrate the features of the policy language. A Web service developer is building a client application that retrieves real time stock quote information from Contoso, Ltd. Contoso supplies real time data using Web services. The developer has Contoso’s advertised WSDL description of these Web services. Contoso requires the use of addressing headers for messaging. Just the WSDL description is not s <umit> The SOAP message in the example above includes security timestamps that express creation and expiration times of this message. Contoso requires the use of security timestamps and transport-level security - such as HTTPS – for protecting messages. (The prefixes wss and wsu are used here to denote the Web Services Security and Utility namespaces.) <umit> Similar to the use of addressing, Contoso indicates the use of transport-level security using a policy expression. The example below illustrates a policy expression that requires the use of addressing and transport-level security for securing messages. Asir: If we say "A company", we will lose context when we talk about it in the examples Moving on.. Issues: 4212, 4213 Maryann has action 192 related to this already Chris: We are done with the agenda for today. We have AIs for pending LC issues Check on peoples' availability / early departures tomorrow Tony and William plan to leave early Asir: Owners for remaining scenarios? Chris: We can assign tomorrow ... Tomorrow's agenda - Going until 3pm. Remaining LC issue, Primer and Guidelines issues and Interop scenarios Editors meeting at 3pm in this room Claps for the progress made on the LC issues. Congrats to WG from Felix <cferris> recessed
http://www.w3.org/2007/01/17-ws-policy-minutes.html
crawl-002
refinedweb
5,431
61.77
A friendly place for programming greenhorns! Big Moose Saloon Search | Java FAQ | Recent Topics Register / Login JavaRanch » Java Forums » Java » Beginning Java Author recursion find number of times int appears in array... Johnny Steele Greenhorn Joined: Nov 12, 2010 Posts: 12 posted Mar 29, 2011 04:35:55 0 Hi, Who loves recursion?!?!?!?! So, given an array of ints I need to recursively find the number of times a particular integer appears in that array. Like so: public int numOf(int number, int[]a) { int index = 0; int count = 0; if(x.length > 0) { if(index < x.length) { if(number == x[index]) { count+= +1; index+= +1; return count + index + numOf(number, a); }else{ index += +1; return count + index + numOf(number, a); } }else{ return count; } }else{ return 0; } This code is creating a huge stackOverFlow error when I unit test it. I am at a loss. I am allowed to use private helper methods although I am a bit confused as to how that would help in general or particular to this case. Regardless, any suggestions? Thank you much el Duderino Wouter Oet Saloon Keeper Joined: Oct 25, 2008 Posts: 2700 I like... posted Mar 29, 2011 05:09:23 0 Why do you want to solve this problem with recursion? "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." --- Martin Fowler Please correct my English. Johnny Steele Greenhorn Joined: Nov 12, 2010 Posts: 12 posted Mar 29, 2011 05:11:25 0 my assignment requires it to be witten using recursion. anirudh jagithyala Ranch Hand Joined: Dec 07, 2010 Posts: 41 posted Mar 29, 2011 05:51:17 0 Hey Johnny, Check out the below code.....This would work as far as i understood your requirement. public int numOf(int number, int[]a) { int count=0; if(a.length > 0) {int value=a[0]; int a1[]=new int [a.length-1]; if(number==value) count=1; for(int i=1;i<a.length;i++){ a1[i-1]=a[i]; } return count+numOf(number,a1); } else return 0; } Johnny Steele Greenhorn Joined: Nov 12, 2010 Posts: 12 posted Mar 29, 2011 06:02:32 0 that seems to work!!! thank you very much!!! I did not think to use a for loop as in the past we've been told to avoid loops but not with this one. thanks a lot!!! Matthew Brown Bartender Joined: Apr 06, 2010 Posts: 3786 1 I like... posted Mar 29, 2011 06:23:23 0 The reason your original version didn't work is because you kept calling the method with the same arguments. Which means you got into an infinite loop - and eventually the stack runs out of memory and falls over. If you're using recursion, you've got to make sure that the recursion is guaranteed to end at some point. Anirudh's version works because the array is one element shorter every time the method is called, so eventually you hit a zero length and the recursion stops. Carey Brown Ranch Hand Joined: Nov 19, 2001 Posts: 159 I like... posted Mar 29, 2011 07:22:53 0 Your problem is a linear problem, recursion tends to be used on tree structures. This code uses recursion to subdivide an array a-la binary search, thereby artificially treating the problem as a tree. import java.util.Random; public class TryRecursion { private static final int SIZE = 42; private static final int FIND = 7; private static final boolean DEBUG = false; public static void main( String[] args ) { int[] ary = new int[ SIZE ]; int countRecursive, countLinear; Random rand = new Random(); for( int i = 0 ; i < ary.length ; i++ ) { ary[ i ] = rand.nextInt( 10 ); if( DEBUG ) System.out.println( ary[i] ); } countRecursive = recursive( ary, 0, ary.length, FIND ); countLinear = linear( ary, 0, ary.length, FIND ); if( countRecursive == countLinear ) System.out.println( "Success, count = " + countRecursive ); else System.out.println( "Failed, recursive = " + countRecursive + " linear = " + countLinear ); } private static int recursive( int[] ary, int start, int end, int find ) { int len = end - start; if( DEBUG ) System.out.format( "start=%2d end=%2d\n", start, end ); if( len <= 0 ) return 0; if( len == 1 ) return ary[ start ] == find ? 1 : 0; int len2 = len / 2; return recursive( ary, start, start + len2, find ) + recursive( ary, start+len2, end, find ); } private static int linear( int[] ary, int start, int end, int find ) { int count = 0; for( int i=start ; i < end ; i++ ) if( ary[i] == find ) count++; return count; } } I agree. Here's the link: subject: recursion find number of times int appears in array... Similar Threads find word occurence in string Passing Vector elements to an array problem with recurssion Recent interview question - Find a pair in array whose sum is x. Recursive Problem. All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/532483/java/java/recursion-find-number-times-int
CC-MAIN-2013-20
refinedweb
806
72.36
Currently when you build a module and it depends on another module your module can "see" non-api classes from the other module. This can lead to clean compile but failure during runtime (NoClassDefFoundErrors). The build script should put only API packages on "compile-classpath". This can be (hopefully) easily achieved by putting the API packages in separate .jar file and putting only this API jar on classpath when building dependant modules. This idea comes from project AWA (sorry it is in czech):, especially from the arch description (again in czech): and here They were inspired by Eclipse if I am not mistaken ... I've thought of it before too. Might be a performance problem to create the API-only JARs; not sure. I guess the algorithm for each classpath entry would be 1. If it comes from an impl dep, leave it alone. 2. If it comes from a spec dep, check if the dep has any public packages (taking into account OpenIDE-Module-Friends). 2a. If not, halt the build (since you should not have been using such a dep). 2b. If so, create a separate JAR (where??) containing only the public packages, and replace the classpath entry with this JAR. Can cache such JARs so long as the original has not changed. I don't think that performance is of any big concern during build time. I agree that keeping the API jars somewhere is a good idea since they can be used when building one module only (e.g. from the IDE). If the API jars are put to some special folder the behaviour of the resulting program will not be affected (only the build process). BTW when looking at the mentioned framework they can live with much simpler module system even during runtime since they keep the jars separated (and thus they have no need for any complicated run-time checking of the dependencies). They have only simple dependencies between their jar files. Yeah, the question for me is where to put the temp JARs - it's OK for the netbeans.org build to use e.g. nbbuild/build/apijars/ but what about builds of external modules? Need to find another spot. Not a big problem, but will take a bit of work. I guess I would be the one to work on it. Re. splitting API from impl even at runtime: neat idea, though I think for NB it would not be worthwhile at this point; would require too many things to be modified. Totally agree that it is not relevant for NB as it would be lot of work for no gain (actually slower startup because there would be more jars) - it was just BTW. I like the idea of creating two JARs during build time and caching one in nbbuild/build somewhere (if it can work). Other possibility would be to change the compiler to honor OpenIDE-Module-Public-Packages, Hrebejk could be of some help here, but I doubt this would get enough priority on his list. Btw. From the proposal it seemed to me that when a JAR is gonna be created, also its api JAR will be put somewhere. Maybe, for the benefit of external modules, the common.xml could create the cache of api jars somewhere when compiling a module that depends on them. This might solve the problem with cache. It would always be near the target directory (of a suite). Maybe this directory could also be driven by a property to specify the cache dir and enable or disable this compile time improvement? *** Issue 62038 has been marked as a duplicate of this issue. *** it would be even better if we don't list such private classes in auto-complete popup during java programming in a module project. Re. excluding also from code completion - not directly supported by Java infrastructure in IDE, but could perhaps be faked by putting the API-only JARs into the IDE's effective classpath. ########Adding to aid others trying to work through this################ I found some documentation that explained it better to me. For anyone else looking look at this URL: and search the page (include quotes in your search) "back door" ######################################################################## I still think there needs to be another method however. Couldn't we just have an explicit means of saying....I don't care if errors arise...I'll deal with them when they do... something like: OpenIDE-Module-Module-Dependencies: org.netbeans.modules.project.libraries/1 = *,org.netbeans.modules.form/2 = * And this could allow us to not worry about the specific version number, yet we are aware our module might have issues that could arise from a shifty API. The reason I say that is that sometimes a version number is going to change, but none of the private code a developer was accessing will change, but a bunch of other stuff will. That or a simple bug fix will go in. The user updates from auto-update with some hot-fix, and now your module is broken. This way while you're getting something to work you can still deploy your module and work with the developers of another module to get some good use cases and public API's worked out of some existing code they have. Does this not sound like a good open compromise? I say this because it's hard enough to find extra time to work on some ideas for NB not including having to worry about how you're not going to replicate a bunch of existing code sitting there you know you can use if you were able to just access the API at runtime. This way we know we're opening up a can of worms when we use the *, but until we can work around it through mediation with another project it will help get a lot of other plug-ins going a little faster. If nothing else help get some proof of concepts going. For instance, External Libraries doesn't even have an implementation version, and I can't seem to figure out what the default value is if one even exists. So, now I'm like...ok...I don't want to rewrite the LibraryCustomizer just to have the same features because I don't have much time...I could always change the manifest for External Libraries and rebuild it, but then that only addresses my machine. I was hoping to bounce this back and fourth some with some other people without it being such a long drawn out sucker of my time, but currently I can't not without copying out much if not all of the External Libraries code into my own module. Wade - the last comment was off topic for this issue report. This issue is not proposing any changes to the module system. It is solely about enforcing the existing behavior better at compile time, to preempt runtime errors. No problem, I just noticed some issues were still being aired out and thought I would put this in there with them because it made me think about it. On to a comment relevent to the discussion. If you don't show the API's you still need to show them and include them in the classpath when an implementation dependency is being used in the manifest right? So, I thought I would mention that it would also be nice to have a more open way of saying that...show me the APIs and let me access it all. Still the issue: Basically if you split this out into two separate jars you'll have to include the jar on the classpath if an implentation dependency is definied. Yes, with issue #68631 and #68716 the algorithm for constructing a classpath from each dep marked as a <compile-prerequisite/> would be something like 1. Is target module in platform (minus declared excludes)? If no, error. 2. Is target module a matching version? If no, error. 3. Is it an impl dep? If yes, use original target module JAR in CP. 4. Does target module export packages only to friends, but source module is not a friend? If so, error. 5. Else, create separate API-only JAR somewhere and add to CP. Another note: AFAIK it is not currently possible to make code completion, Fix Imports, Open Class, etc. take into account public package restrictions while still providing popup Javadoc and other features, unless issue #70220 is fixed. However, attempted uses of unavailable classes would still appear as errors in the editor's error stripe, even before you tried to build the module, so this wouldn't be so bad I think. Will not attempt to do anything special for the in-IDE classpath, used for code completion, background compilation, etc. If you use a nonpublic class you will get a compiler error from Ant; probably enough. Would require either issue #70220 or issue #49371 to be implemented to solve this fully and without unpleasant side effects; #49371 would be preferable as it would not require a special SourceForBinaryImpl, and would not require the NBM project type to physically create the split JARs when working with an unbuilt source tree. Created attachment 28103 [details] First draft Done, with issue #68631 and issue #68716. Had to make a number of fixes in netbeans.org modules first, since there were in fact numerous compile-time violations of public package lists. A note: public package restrictions are *not* enforced for unit tests in a module - only main sources. committed * Up-To-Date 1.29 apisupport/harness/release/README committed * Up-To-Date 1.10 apisupport/harness/release/build.xml committed * Up-To-Date 1.11 apisupport/project/src/org/netbeans/modules/apisupport/project/ui/customizer/ModuleProperties.java committed * Up-To-Date 1.26 apisupport/project/src/org/netbeans/modules/apisupport/project/ui/customizer/SuiteProperties.java committed * Up-To-Date 1.9 nbbuild/antsrc/org/netbeans/nbbuild/JarWithModuleAttributes.java committed * Up-To-Date 1.28 nbbuild/antsrc/org/netbeans/nbbuild/ModuleListParser.java committed * Up-To-Date 1.32 nbbuild/antsrc/org/netbeans/nbbuild/ParseProjectXml.java committed * Up-To-Date 1.31 nbbuild/templates/common.xml committed * Up-To-Date 1.8 nbbuild/templates/emma.xml committed * Up-To-Date 1.64 nbbuild/templates/projectized.xml committed * Up-To-Date 1.14 nbbuild/templates/xtest-unit.xml
https://netbeans.org/bugzilla/show_bug.cgi?format=multiple&id=59792
CC-MAIN-2016-36
refinedweb
1,728
64
divert— #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> int socket(AF_INET, SOCK_RAW, IPPROTO_DIVERT); int socket(AF_INET6, SOCK_RAW, IPPROTO_DIVERT); A divert socket must be bound to a divert port through bind(2), which only the superuser can do. Divert ports have their own number space, completely separated from tcp(4) and udp(4). When pf(4) processes a packet that matches a rule with the divert-packet parameter (see pf.conf(5) for details) it is sent to the divert socket listening on the divert port specified in the rule. Note that divert-packet should not be confused with divert-to or divert-reply, which do not use divert sockets. If there are no divert sockets listening, the packets are dropped. Packets can be read via read(2), recv(2), or recvfrom(2) from the divert socket. The application that is processing the packets can then reinject them into the kernel. With recvfrom(2), an interface IP address is passed if it is an inbound packet. Outbound packets provide the unspecified address. When reinjecting, use this address as argument to sendto(2). This allows the kernel to guess the original incoming interface and and process it as an incoming packet. If no interface IP address is given, the reinjected packet is treated as an outgoing packet. Since the userspace application could have modified the packets, upon reinjection basic sanity checks are done to ensure that the packets are still valid. The packets' IPv4 and protocol checksums (TCP, UDP, ICMP, and ICMPv6) are also recalculated. Writing to a divert socket can be achieved using sendto(2) and it will skip pf(4) filters to avoid loops. Note that this means that a reinjected inbound packet will also not run through the pf out rules after being forwarded. A diverted packet that is not reinjected into the kernel stack is lost. Receive and send divert socket buffer space can be tuned through sysctl(8). netstat(1) shows information relevant to divert sockets. Note that the default is 64k and too short to handle full sized UDP packets. The IP_DIVERTFL socket option on the IPPROTO_IP level controls whether both inbound and outbound packets are diverted (the default) or only packets travelling in one direction. It cannot be reset once set. Valid values are IPPROTO_DIVERT_INIT for the direction of the initial packet of a flow, and IPPROTO_DIVERT_RESP for the direction of the response packets. The direction is relative to the packet direction. So for pf out rules, it is the other way around. If one filter is active, it specifies which packets should not be diverted. Both directions can be combined as bit fields, but then the traffic is not filtered; not using the P_DIVERTFL option has the same effect. pass out on em0 inet proto tcp to port 80 divert-packet port 700 The following program reads packets on divert port 700 and reinjects them back into the kernel. This program does not perform any processing of the packets, apart from discarding invalid IP packets. #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netinet/ip.h> #include <netinet/tcp.h> #include <arpa/inet.h> #include <stdio.h> #include <string.h> #include <err.h> #define DIVERT_PORT 700 int main(int argc, char *argv[]) { int fd, s; struct sockaddr_in sin; socklen_t sin_len; fd = socket(AF_INET, SOCK_RAW, IPPROTO_DIVERT); if (fd == -1) err(1, "socket"); memset(&sin, 0, sizeof(sin)); sin.sin_family = AF_INET; sin.sin_port = htons(DIVERT_PORT); sin.sin_addr.s_addr = 0; sin_len = sizeof(struct sockaddr_in); s = bind(fd, (struct sockaddr *) &sin, sin_len); if (s == -1) err(1, "bind"); for (;;) { ssize_t n; char packet[IP_MAXPACKET]; struct ip *ip; struct tcphdr *th; int hlen; char src[48], dst[48]; memset(packet, 0, sizeof(packet)); n = recvfrom(fd, packet, sizeof(packet), 0, (struct sockaddr *) &sin, &sin_len); if (n == -1) { warn("recvfrom"); continue; } if (n < sizeof(struct ip)) { warnx("packet is too short"); continue; } ip = (struct ip *) packet; hlen = ip->ip_hl << 2; if (hlen < sizeof(struct ip) || ntohs(ip->ip_len) < hlen || n < ntohs(ip->ip_len)) { warnx("invalid IPv4 packet"); continue; } th = (struct tcphdr *) (packet + hlen); if (inet_ntop(AF_INET, &ip->ip_src, src, sizeof(src)) == NULL) (void)strlcpy(src, "?", sizeof(src)); if (inet_ntop(AF_INET, &ip->ip_dst, dst, sizeof(dst)) == NULL) (void)strlcpy(dst, "?", sizeof(dst)); printf("%s:%u -> %s:%u\n", src, ntohs(th->th_sport), dst, ntohs(th->th_dport) ); n = sendto(fd, packet, n, 0, (struct sockaddr *) &sin, sin_len); if (n == -1) warn("sendto"); } return 0; } divertprotocol first appeared in OpenBSD 4.7.
https://man.openbsd.org/OpenBSD-6.2/divert.4
CC-MAIN-2019-22
refinedweb
749
57.37
Once that ambiguity occurs, there are only two possible options to resolve it, since the effect of a "using namespace" directive can't be undone by any means except removing the directive. The first approach to resolving such an ambiguity is to fully qualifying all names (i.e. prefixing all names with their namespace). That negates the point of the existing "using namespace" directive for all names where ambiguity has been introduced. The second approach is to remove the "using namespace" directive, which then makes it necessary to ensure all names (except, optionally, those in the global namespace) are fully qualified. Practically, for anything except the tiniest code samples, I find the effort of resolving such an ambiguity upon encountering it exceeds the effort of avoiding it in the first place (i.e. not employing any "using namespace" directives, and fully qualifying all names except - optionally - those that are local to a file or in the global namespace). And since I generally write code with a view to maintaining it, using it in a larger system, or extending it ...... I find it better to simply avoid "using namespace" directives in real code. The only time I employ a "using namespace" directive is actually in code samples I post in forums. I'm not fussed if people choose to employ a "using namespace" directive in their own code. They wear the impact. But a "using namespace" directive in a header file is a no-no, since it forces me to manage impacts whether I wish to or not. If I encounter a "using namespace" directive in a header file for a library, I do not use that library. Period.
http://cboard.cprogramming.com/cplusplus-programming/149762-simple-check-2-print.html
CC-MAIN-2014-15
refinedweb
278
51.68
STYLE(9) Midnight). * * * $FreeBSD: src/share/man/man9/style.9,v 1.123 2007/01/28 20:51:04 joel: /*- * * Long, boring license goes here, but redacted for brevity */ An automatic script collects license information from the tree for all comments that start in the first column with ‘‘/*-’’. ‘‘#if defined(LIBC_SCCS)’’), enclose both in ‘‘#if 0 ... #endif’’ to hide any uncompilable bits and to keep the IDs out of object files. Only add ‘‘From: ’’ in front of foreign VCS IDs if the file is renamed. #if 0 #ifndef lint #endif /* not lint */ #endif #include <sys/cdefs.h> __FBSDID("$FreeBSD: src/share/man/man9/style.9,v 1.123 2007/01/28 20:51:04 joel Exp $"); Leave another blank line before the header files. Kernel include files (i.e. sys/*.h) come first; normally, include <sys/types.h> OR <sys/param.h>, but not both. <sys/types.h> includes <sys/cdefs.h>, and it is okay to depend on that. program go in "pathnames.h" in the local directory. #include <paths.h> Leave another blank line before the user include files. Do not #define or declare names in the implementation namespace except for implementing application interfaces. The names of ‘‘unsafe’’ macros (ones that have side effects), and the names of macros for manifest constants, are all in uppercase. The expansions of expression-like macros are either a single token or have outer parentheses. Put a single tab character between the #define and the macro name. If a macro is an inline expansion of a function, the function name is all in lowercase and the macro has the same name all in uppercase.. } corresponding #if or #ifdef. The comment for #else and #elif should match the inverse of the expression(s) used in the preceding #if and/or #elif statements. In the comments, the subexpression ‘‘defined(FOO)’’ is abbreviated as ‘‘FOO’’. For the purposes of comments, ‘‘#ifndef FOO’’ is treated as ‘‘ */ The project is slowly moving to use the ISO/IEC 9899:1999 (‘‘ISO;. When declaring variables in structures, declare them sorted by use, then by size (largest to smallest), only if it suffices to align at least 90% { }; Use queue(3) macros rather than rolling your own lists, whenever possible. Thus, the previous example would be better written: #include <sys/queue.h> struct foo { };. When convention requires a typedef, make its name match the struct tag. Avoid typedefs ending in ‘‘_t’’, except as specified in Standard C or by POSIX. /* Make the structure name match the typedef. */ } BAR; All functions are prototyped somewhere. Function prototypes for private functions (i.e., functions not used. In general code can be considered ‘‘new code’’ when it makes up about 50% or more of the file(s) involved. This is enough to break precedents in the existing code and use the current style guidelines. The kernel has a name associated with parameter types, e.g., in the kernel use: In header files visible to userland applications, prototypes that are visible must use either ‘‘protected’’ names (ones beginning with an underscore) or no names with the types. It is preferable to use protected names. E.g., use: or: Prototypes may have an extra space after a tab to enable function names to line up: /* * All major routines should have a comment briefly describing what * they do. The comment before the "main" routine should describe * what the program does. */ int main(int argc, char *argv[]) { comment. Space after keywords (if, while, for, return, switch). No braces (‘{’ and ‘}’) are used for control statements with zero or only a single statement unless that statement is more than a single line in which case they are permitted. Forever loops are done with for’s, not while’s. Parts of a for loop may be left empty. Do not put declarations inside blocks unless the routine is unusually complicated. Indentation is an 8 character tab. Second level indents are four spaces. If you have to wrap a long statement, put the operator at the end of the line.. No spaces after function names. Commas have a space after them. No spaces after ‘(’ or ‘[’ or preceding ‘]’ or ‘)’ characters. Unary operators do not require spaces, binary operators do. Do not use parentheses unless they are required for precedence or unless the statement is confusing without them. Remember that other people may confuse easier than you. Do YOU understand the following? Exits should be 0 on success, or according to the predefined values in sysexits(3). } variables in the declarations. Use this feature only thoughtfully. DO NOT use function calls in initializers. struct foo one, *two; Do not declare functions inside other functions; ANSI C says that such declarations have file scope regardless of the nesting of the declaration. necessary. Values in return statements should be enclosed in parentheses. Use err(3) or warn(3), do not roll your own. } Old-style function declarations look like this: static char * function(a1, a2, fl, a4) { Use ANSI function declarations unless you explicitly need K&R compatibility. Long parameter lists are wrapped with a normal four space indent. Variable numbers of arguments should look like this: #include <stdarg.h> void vaf(const char *fmt, ...) { } static void usage() { (‘[’ and ‘]’). (‘|’) separates ‘‘either-or’’ options/arguments, and multiple options/arguments which are specified together are placed in a single set of brackets. "usage: f [-aDde] [-b b_arg] [-m m_arg] req1 req2 [opt1 [opt2]]\n" "usage: f [-a | -b] [-c [-dEe] [-n number]]\n" } −Wall) and produce minimal warnings. SEE ALSO indent(1), lint(1), err(3), sysexits(3), warn(3), style.Makefile(5) HISTORY This manual page is largely based on the src/admin/style/style file from the 4.4BSD−Lite2 release, with occasional updates to reflect the current practice and desire of the FreeBSD project. MidnightBSD 0.3 February 10, 2005 MidnightBSD 0.3
http://www.midnightbsd.org/documentation/man/style.9.html
CC-MAIN-2014-15
refinedweb
968
67.96
Crystals and band structure¶ In this tutorial we calculate properties of crystals. Setting up bulk structures¶ ASE provides three frameworks for setting up bulk structures: - ase.build.bulk(). Knows lattice types and lattice constants for elemental bulk structures and a few compounds, but with limited customization. - ase.spacegroup.crystal(). Creates atoms from typical crystallographic information such as spacegroup, lattice parameters, and basis. - ase.lattice. Creates atoms explicitly from lattice and basis. Let’s run a simple bulk calculation. Exercise Use ase.build.bulk() to get a primitive cell of silver, then visualize it. Silver is known to form an FCC structure, so presumably the function returned a primitive FCC cell. But it’s always nice to be sure what we have in front of us. Can you recognize it as FCC? You can e.g. use the ASE GUI to repeat the structure and recognize the A-B-C stacking. ASE should also be able to verify that it really is a primitive FCC cell and tell us what lattice constant was chosen: print(atoms.cell.get_bravais_lattice()) Periodic structures in ASE are represented using atoms.cell and atoms.pbc. The cell is a Cell object which represents the crystal lattice with three vectors. pbc is an array of three booleans indicating whether the system is periodic in each direction. Bulk calculation¶ For periodic DFT calculations we should generally use a number of k-points which properly samples the Brillouin zone. Many calculators including GPAW and Aims accept the kpts keyword which can be a tuple such as (4, 4, 4). In GPAW, the planewave mode is very well suited for smaller periodic systems. Using the planewave mode, we should also set a planewave cutoff (in eV): from gpaw import GPAW, PW calc = GPAW(mode=PW(600), kpts=(8, 8, 8), setups={'Ag': '11'}, ...) Here we have used the setups keyword to specify that we want the 11-electron PAW dataset instead of the default which has 17 electrons, making the calculation faster. (In principle, we should be sure to converge both kpoint sampling and planewave cutoff – I.e., write a loop and try different samplings so we know both are good enough to accurately describe the quantity we want.) Exercise Run a single-point calculation of bulk silver with GPAW. Save the ground-state in GPAW’s own format using calc.write('Ag.gpw'). Density of states¶ Having saved the ground-state, we can reload it for ASE to extract the density of states: import matplotlib.pyplot as plt from ase.dft.dos import DOS from gpaw import GPAW calc = GPAW('groundstate.gpw') dos = DOS(calc, npts=500, width=0) energies = dos.get_energies() weights = dos.get_dos() plt.plot(energies, weights) plt.show() Calling the DOS class with width=0 means ASE well calculate the DOS using the linear tetrahedron interpolation method, which takes time but gives a nicer representation. We could also have given it a nonzero width such as the default value of 0.1 (eV). In that case it would have used a simple Gaussian smearing with that width, but we would need more k-points to get a plot of the same quality. Note that the zero point of the energy axis is the Fermi energy. Exercise Plot the DOS. You probably recall that an Ag atom has 10 d electrons and one s electron. Which parts of the spectrum do you think originate (mostly) from s electrons? And which parts (mostly) from d electrons? Time for analysis. As we probably know, the d-orbitals in a transition metal atom are localized close to the nucleus while the s-electron is much more delocalized. In bulk systems, the s-states overlap a lot and therefore split into a very broad band over a wide energy range. d-states overlap much less and therefore also split less: They form a narrow band with a very high DOS. Very high indeed because there are 10 times as many d electrons as there are s electrons. So to answer the question, the d-band accounts for most of the states forming the big, narrow chunk between -6.2 eV to -2.6 eV. Anything outside that interval is due to the much broader s band. The DOS above the Fermi level may not be correct, since the SCF convergence criterion (in this calculation) only tracks the convergenece of occupied states. Hence, the energies over the Fermi level 0 are probably wrong. What characterises the noble metals Cu, Ag, and Au, is that the d-band is fully occupied. I.e.: The whole d-band lies below the Fermi level (energy=0). If we had calculated any other transition metal, the Fermi level would lie somewhere within the d-band. Note We could calculate the s, p, and d-projected DOS to see more conclusively which states have what character. In that case we should look up the GPAW documentation, or other calculator-specific documentation. So let’s not do that now. Band structure¶ Let’s calculate the band structure of silver. First we need to set up a band path. Our favourite image search engine can show us some reference graphs. We might find band structures from both Exciting and GPAW with Brillouin-zone path \(\mathrm{W L \Gamma X W K}\). Luckily ASE knows these letters and can also help us visualize the reciprocal cell: lat = atoms.cell.get_bravais_lattice() print(lat.description()) lat.plot_bz(show=True) In general, the ase.lattice module provides BravaisLattice classes used to represent each of the 14 + 5 Bravais lattices in 3D and 2D, respectively. These classes know about the high-symmetry k-points and standard Brillouin-zone paths (using the AFlow conventions). Exercise Build a band path for \(\mathrm{W L \Gamma X W K}\). You can use path = atoms.cell.bandpath(...) — see the Cell documentation for which parameters to supply. This gives us a BandPath object. You can print() the band path object to see some basic information about it, or use its write() method to save the band path to a json file such as path.json. Then visualize it using the command: $ ase reciprocal path.json Once we are sure we have a good path with a reasonable number of k-points, we can run the band structure calculation. How to trigger a band structure calculation depends on which calculator we are using, so we would typically consult the documentation for that calculator (ASE will one day provide shortcuts to make this easier with common calculators): calc = GPAW('groundstate.gpw') atoms = calc.get_atoms() path = atoms.cell.bandpath(<...>) calc.set(kpts=path, symmetry='off', fixdensity=True) We have here told GPAW to use our bandpath for k-points, not to perform symmetry-reduction of the k-points, and to fix the electron density. Then we trigger a new calculation, which will be non-selfconsistent, and extract and save the band structure: atoms.get_potential_energy() bs = calc.band_structure() bs.write('bs.json') Again, the ASE command-line tool offers a helpful command to plot the band structure from a file: $ ase band-structure bs.json Exercise Calculate, save, and plot the band structure of silver for the path \(\mathrm{W L \Gamma X W K}\). You may need to zoom around a bit to see the whole thing at once. The plot will show the Fermi level as a dotted line (but does not define it as zero like the DOS plot before). Looking at the band structure, we see the complex tangle of what must be mostly d-states from before, as well as the few states with lower energy (at the \(\Gamma\) point) and higher energy (crossing the Fermi level) attributed to s. Equation of state¶ We can find the optimal lattice parameter and calculate the bulk modulus by doing an equation-of-state calculation. This means sampling the energy and lattice constant over a range of values to get the minimum as well as the curvature, which gives us the bulk modulus. The online ASE docs already provide a tutorial on how to do this, using the empirical EMT potential: Exercise Run the EOS tutorial. Complex crystals and cell optimisation¶ (If time is scarce, please consider skipping ahead to do the remaining exercises before returning here.) For the simple FCC structure we only have a single parameter, a, and the EOS fit tells us everything there is to know. For more complex structures we first of all need a more advanced framework to build the atoms, such as the ase.spacegroup.crystal() function. The documentation helpfully tells us how to build a rutile structure, saving us the trouble of looking up the atomic basis and other crystallographic information. Rutile is a common mineral form of TiO2 Exercise Build and visualize a rutile structure. Let’s uptimise the structure. In addition to the positions, we must optimise the unit cell which, being tetragonal, is characterised by the two lengths a and c. Optimising the cell requires the energy derivatives with respect to the cell parameters accessible through the stress tensor. atoms.get_stress() calculates and returns the stress as a vector of the 6 unique components (Voigt form). Using it requires that the attached calculator supports the stress tensor. GPAW’s planewave mode does this. The ase.constraints.ExpCellFilter allows us to optimise cell and positions simultaneously. It does this by exposing the degrees of freedom to the optimiser as if they were additional positions — hence acting as a kind of filter. We use it by wrapping it around the atoms: from ase.optimize import BFGS from ase.constraints import ExpCellFilter opt = BFGS(ExpCellFilter(atoms), ...) opt.run(fmax=0.05) Exercise Use GPAW’s planewave mode to optimize the rutile unit cell. You will probably need a planewave cutoff of at least 500 eV. What are the optimised lattice constants a and c? Exercise Calculate the band structure of rutile. Does it agree with your favourite internet search engine? Solutions¶ Ag ground state: from ase.build import bulk from gpaw import GPAW, PW atoms = bulk('Ag') calc = GPAW(mode=PW(350), kpts=[8, 8, 8], txt='gpaw.bulk.Ag.txt', setups={'Ag': '11'}) atoms.calc = calc atoms.get_potential_energy() calc.write('bulk.Ag.gpw') Ag DOS: import matplotlib.pyplot as plt from gpaw import GPAW from ase.dft.dos import DOS calc = GPAW('bulk.Ag.gpw') #energies, weights = calc.get_dos(npts=800, width=0) dos = DOS(calc, npts=800, width=0) energies = dos.get_energies() weights = dos.get_dos() ax = plt.gca() ax.plot(energies, weights) ax.set_xlabel('Energy [eV]') ax.set_ylabel('DOS [1/eV]') plt.savefig('dos.png') plt.show() Ag band structure: from gpaw import GPAW calc = GPAW('bulk.Ag.gpw') atoms = calc.get_atoms() path = atoms.cell.bandpath('WLGXWK', density=10) path.write('path.json') calc.set(kpts=path, fixdensity=True, symmetry='off') atoms.get_potential_energy() bs = calc.band_structure() bs.write('bs.json') Rutile cell optimisation: from ase.constraints import ExpCellFilter from ase.io import write from ase.optimize import BFGS from ase.spacegroup import crystal from gpaw import GPAW, PW a = 4.6 c = 2.95 # Rutile TiO2: atoms = crystal(['Ti', 'O'], basis=[(0, 0, 0), (0.3, 0.3, 0.0)], spacegroup=136, cellpar=[a, a, c, 90, 90, 90]) write('rutile.traj', atoms) calc = GPAW(mode=PW(800), kpts=[2, 2, 3], txt='gpaw.rutile.txt') atoms.calc = calc opt = BFGS(ExpCellFilter(atoms), trajectory='opt.rutile.traj') opt.run(fmax=0.05) calc.write('groundstate.rutile.gpw') print('Final lattice:') print(atoms.cell.get_bravais_lattice()) Rutile band structure: from gpaw import GPAW calc = GPAW('groundstate.rutile.gpw') atoms = calc.get_atoms() path = atoms.cell.bandpath(density=7) path.write('path.rutile.json') calc.set(kpts=path, fixdensity=True, symmetry='off') atoms.get_potential_energy() bs = calc.band_structure() bs.write('bs.rutile.json')
https://wiki.fysik.dtu.dk/ase/gettingstarted/tut04_bulk/bulk.html
CC-MAIN-2020-16
refinedweb
1,957
59.4
Represents a connected chain of edges. More... #include <vcl_iosfwd.h> #include <vcl_vector.h> #include <vtol/vtol_chain.h> #include <vtol/vtol_edge_2d_sptr.h> #include <vtol/vtol_face_2d_sptr.h> Go to the source code of this file. Represents a connected chain of edges. The vtol_one_chain class is used to represent a set of edges on a topological structure. A vtol_one_chain consists of its inferior edges and the superiors on which it lies. A vtol_one_chain may or may not be an ordered cycle. If the chain of edges encloses an area, then the vtol_one_chain may be used as the boundary of a topological face in a 3D structure. Modifications: JLM Dec 1995, Added timeStamp (Touch) to operations which affect bounds. JLM Dec 1995, no local method for ComputeBoundingBox Should use edge geometry recursively to be proper. Currently reverts to bounds on vertices from TopologyObject::ComputeBoundingBox() JLM Jan 1998 Added method to get direction of an edge JLM Feb 1999 Added correct method for ComputeBoundingBox() PTU May 2000 ported to vxl JLM Nov 2002 Modified the compute_bounding_box method to use box::grow_minmax_bounds for uniformity and to avoid dependence on dimension. Old method was strictly 2-d. Dec. 2002, Peter Vanroose -interface change: vtol objects -> smart pointers Definition in file vtol_one_chain.h.
http://public.kitware.com/vxl/doc/release/contrib/gel/vtol/html/vtol__one__chain_8h.html
crawl-003
refinedweb
205
59.19
third in a series about D’s BetterC mode D as BetterC (a.k.a. DasBetterC) is a way to upgrade existing C projects to D in an incremental manner. This article shows a step-by-step process of converting a non-trivial C project to D and deals with common issues that crop up. While the dmd D compiler front end has already been converted to D, it’s such a large project that it can be hard to see just what was involved. I needed to find a smaller, more modest project that can be easily understood in its entirety, yet is not a contrived example. The old make program I wrote for the Datalight C compiler in the early 1980’s came to mind. It’s a real implementation of the classic make program that’s been in constant use since the early 80’s. It’s written in pre-Standard C, has been ported from system to system, and is a remarkably compact 1961 lines of code, including comments. It is still in regular use today. Here’s the make manual, and the source code. The executable size for make.exe is 49,692 bytes and the last modification date was Aug 19, 2012. The Evil Plan is: - Minimize diffs between the C and D versions. This is so that if the programs behave differently, it is far easier to figure out the source of the difference. - No attempt will be made to fix or improve the C code during translation. This is also in the service of (1). - No attempt will be made to refactor the code. Again, see (1). - Duplicate the behavior of the C program as exactly and as much as possible, bugs and all. - Do whatever is necessary as needed in the service of (4). Once that is completed, only then is it time to fix, refactor, clean up, etc. Spoiler Alert! The completed conversion. The resulting executable is 52,252 bytes (quite comparable to the original 49,692). I haven’t analyzed the increment in size, but it is likely due to instantiations of the NEWOBJ template (a macro in the C version), and changes in the DMC runtime library since 2012. Step By Step Here are the differences between the C and D versions. It’s 664 out of 1961 lines, about a third, which looks like a lot, but I hope to convince you that nearly all of it is trivial. The #include files are replaced by corresponding D imports, such as replacing #include <stdio.h> with import core.stdc.stdio;. Unfortunately, some of the #include files are specific to Digital Mars C, and D versions do not exist (I need to fix that). To not let that stop the project, I simply included the relevant declarations in lines 29 to 64. (See the documentation for the import declaration.) #if _WIN32 is replaced with version (Windows). (See the documentation for the version condition and predefined versions.) extern (C): marks the remainder of the declarations in the file as compatible with C. (See the documentation for the linkage attribute.) A global search/replace changes uses of the debug1, debug2 and debug3 macros to debug printf. In general, #ifdef DEBUG preprocessor directives are replaced with debug conditional compilation. (See the documentation for the debug statement.) /* Delete these old C macro definitions... #ifdef DEBUG -#define debug1(a) printf(a) -#define debug2(a,b) printf(a,b) -#define debug3(a,b,c) printf(a,b,c) -#else -#define debug1(a) -#define debug2(a,b) -#define debug3(a,b,c) -#endif */ // And replace their usage with the debug statement // debug2("Returning x%lx\n",datetime); debug printf("Returning x%lx\n",datetime); The TRUE, FALSE and NULL macros are search/replaced with true, false, and null. The ESC macro is replaced by a manifest constant. (See the documentation for manifest constants.) // #define ESC '!' enum ESC = '!'; The NEWOBJ macro is replaced with a template function. // #define NEWOBJ(type) ((type *) mem_calloc(sizeof(type))) type* NEWOBJ(type)() { return cast(type*) mem_calloc(type.sizeof); } The filenamecmp macro is replaced with a function. Support for obsolete platforms is removed. Global variables in D are placed by default into thread-local storage (TLS). But since make is a single-threaded program, they can be inserted into global storage with the __gshared storage class. (See the documentation for the __gshared attribute.) // int CMDLINELEN; __gshared int CMDLINELEN D doesn’t have a separate struct tag name space, so the typedefs are not necessary. An alias can be used instead. (See the documentation for alias declarations.) Also, struct is omitted from variable declarations. /* typedef struct FILENODE { char *name,genext[EXTMAX+1]; char dblcln; char expanding; time_t time; filelist *dep; struct RULE *frule; struct FILENODE *next; } filenode; */ struct FILENODE { char *name; char[EXTMAX1] genext; char dblcln; char expanding; time_t time; filelist *dep; RULE *frule; FILENODE *next; } alias filenode = FILENODE; macro is a keyword in D, so we’ll just use MACRO instead. Grouping together multiple pointer declarations is not allowed in D, use this instead: // char *name,*text; // In D, the * is part of the type and // applies to each symbol in the declaration. char* name, text; C array declarations are transformed to D array declarations. (See the documentation for D’s declaration syntax.) // char *name,genext[EXTMAX+1]; char *name; char[EXTMAX+1] genext; static has no meaning at module scope in D. static globals in C are equivalent to private module-scope variables in D, but that doesn’t really matter when the module is never imported anywhere. They still need to be __gshared and that can be applied to an entire block of declarations. (See the documentation for the static attribute) /* static ignore_errors = FALSE; static execute = TRUE; static gag = FALSE; static touchem = FALSE; static debug = FALSE; static list_lines = FALSE; static usebuiltin = TRUE; static print = FALSE; ... */ __gshared { bool ignore_errors = false; bool execute = true; bool gag = false; bool touchem = false; bool xdebug = false; bool list_lines = false; bool usebuiltin = true; bool print = false; ... } Forward reference declarations for functions are not necessary in D. Functions defined in a module can be called at any point in the same module, before or after their definition. Wildcard expansion doesn’t have much meaning to a make program. Function parameters declared with array syntax are pointers in reality, and are declared as pointers in D. // int cdecl main(int argc,char *argv[]) int main(int argc,char** argv) mem_init() expands to nothing and we previously removed the macro. C code can play fast and loose with arguments to functions, D demands that function prototypes be respected. void cmderr(const char* format, const char* arg) {...} // cmderr("can't expand response file\n"); cmderr("can't expand response file\n", null); Global search/replace C’s arrow operator ( ->) with the dot operator ( .), as member access in D is uniform. Replace conditional compilation directives with D’s version. /* #if TERMCODE ... #endif */ version (TERMCODE) { ... } The lack of function prototypes shows the age of this code. D requires proper prototypes. // doswitch(p) // char *p; void doswitch(char* p) debug is a D keyword. Rename it to xdebug. The \n\ line endings for C multiline string literals are not necessary in D. Comment out unused code using D’s /+ +/ nesting block comments. (See the documentation for line, block and nesting block comments.) static if can replace many uses of #if. (See the documentation for the static if condition.) Decay of arrays to pointers is not automatic in D, use .ptr. // utime(name,timep); utime(name,timep.ptr); Use const for C-style strings derived from string literals in D, because D won’t allow taking mutable pointers to string literals. (See the documentation for const and immutable.) // linelist **readmakefile(char *makefile,linelist **rl) linelist **readmakefile(const char *makefile,linelist **rl) void* cannot be implicitly cast to char*. Make it explicit. // buf = mem_realloc(buf,bufmax); buf = cast(char*)mem_realloc(buf,bufmax); Replace unsigned with uint. inout can be used to transfer the “const-ness” of a function from its argument to its return value. If the parameter is const, so will be the return value. If the parameter is not const, neither will be the return value. (See the documentation for inout functions.) // char *skipspace(p) {...} inout(char) *skipspace(inout(char)* p) {...} arraysize can be replaced with the .length property of arrays. (See the documentation for array properties.) // useCOMMAND |= inarray(p,builtin,arraysize(builtin)); useCOMMAND |= inarray(p,builtin.ptr,builtin.length) String literals are immutable, so it is necessary to replace mutable ones with a stack allocated array. (See the documentation for string literals.) // static char envname[] = "@_CMDLINE"; char[10] envname = "@_CMDLINE"; .sizeof replaces C’s sizeof(). (See the documentation for the .sizeof property). // q = (char *) mem_calloc(sizeof(envname) + len); q = cast(char *) mem_calloc(envname.sizeof + len); Don’t care about old versions of Windows. Replace ancient C usage of char * with void*. And that wraps up the changes! See, not so bad. I didn’t set a timer, but I doubt this took more than an hour, including debugging a couple errors I made in the process. This leaves the file man.c, which is used to open the browser on the make manual page when the -man switch is given. Fortunately, this was already ported to D, so we can just copy that code. Building make is so easy it doesn’t even need a makefile: \dmd2.079\windows\bin\dmd make.d dman.d -O -release -betterC -I. -I\dmd2.079\src\druntime\import\ shell32.lib Summary We’ve stuck to the Evil Plan of translating a non-trivial old school C program to D, and thereby were able to do it quickly and get it working correctly. An equivalent executable was generated. The issues encountered are typical and easily dealt with: - Replacement of #includewith import - Lack of D versions of #includefiles - Global search/replace of things like -> - Replacement of preprocessor macros with: - manifest constants - simple templates - functions - version declarations - debug declarations - Handling identifiers that are D keywords - Replacement of C style declarations of pointers and arrays - Unnecessary forward references - More stringent typing enforcement - Array handling - Replacing C basic types with D types None of the following was necessary: - Reorganizing the code - Changing data or control structures - Changing the flow of the program - Changing how the program works - Changing memory management Future Now that it is in DasBetterC, there are lots of modern programming features available to improve the code: - modules! - memory safety (including buffer overflow checking) - metaprogramming - RAII - Unicode - nested functions - member functions - operator overloading - documentation generation - functional programming support - Compile Time Function Execution - etc. Action Let us know over at the D Forum how your DasBetterC project is coming along! 4 thoughts on “DasBetterC: Converting make.c to D” Link to discussion on Dlang to save you some seconds: Incredibly clear and pragmatic explanations on how any C programmer can progressively convert its C legacy software to the modern D language. Probably the best “D tutorial for C programmers” to date !!! Pardon the suggestion, but I think minised would be a good tool to convert. (I halfway wanted to convert it to Turbo Pascal [FPC] myself!) It’s BSD-licensed, has a good test suite, and is a standard POSIX tool. I often find sed very useful (even if I’m not really *nix savvy). I started on Dash (that is, Bash) as a training exercise. The C code is surprisingly clean. It does, however, use YACC, which I would replace with a Pratt parser solution in the Future.
https://dlang.org/blog/2018/06/11/dasbetterc-converting-make-c-to-d/
CC-MAIN-2019-51
refinedweb
1,919
57.47
This page uses content from Wikipedia and is licensed under CC BY-SA. of sections (including subsections) should be unique on a page. Using the same heading more than once on a page causes problems: For registered users who use Preferences → Appearance → Auto-number headings, sections are numbered in the table of contents and at beginning of each section heading. For the ordering of (appendix & footer) sections, see: Wikipedia:Manual of Style/Layout § Order of article elements. For each page with at least four headings, a table of contents (TOC) is automatically generated from the section headings unless the magic word __NOTOC__ (with two underscores on either side of the word) is added to the article's wikitext.: {{TOC limit}}can be used to reduce the length of the TOC by hiding nested subsections, rather than a floating TOC.. By default, the TOC includes all the headings in the page, whatever their level. When an article or project page has a very large number of subsections, ===sub-sections=== but hide ====sub-sub-sections====). The limit=n parameter can also be given to {{TOC left}} or {{TOC right}} the same way. The TOC is automatically generated with HTML id="toc". You can make a link to it with [[#toc]]: [[#toc|Contents]] [[Help:Wiki markup#toc|Contents]] The auto-generated TOC is not maximally appropriate or useful in all article types, such as long list articles and glossaries, so there are numerous replacement templates. To use one, put __NOTOC__ at the top of the article, and place the alternative TOC template, such as {{Compact ToC}} (which can be customized for many list styles) where needed.. See for example: Legality of cannabis by country. See also: Legality of cannabis by U.S. jurisdiction. It has 2 TOCs; a short vertical one, and a long horizontal one.: <span class="mw-headline" id="Section_linking">Section linking</span> A link to this section (Section linking) looks like this: [[Help:Section#Section linking|Section linking]] (NB section links are case sensitive, including the first character (Help:Link).). If a section has a blank space as heading, it results in a link in the TOC that does not work. For a similar effect see NS:0. To create an anchor target without a section heading, you can use: [[#section| ]]-> [[page#section| ]]-> [[namespace:page#section| ]]-> For linking to an arbitrary position in a page see Section linking (anchors).:P:Categorizing redirects. Sections can be separately edited by clicking special edit links labeled "[edit]" by the heading or by right clicking on the section heading. This is called "section editing". The section editing feature will take you to an edit page by a URL such. (But if one does need the article during a section edit, you could open the section "edit" link in a new window, or during section editing, open the article or page in a different window). Section editing alleviates some problems of large pages by making it slightly faster and much easier to find the text that you want to change.[1] It also may help when the full page is just too large for the browser to handle all-at-once in the editor. Adding the code __NOEDITSECTION__ anywhere on the page will remove the edit links. It will not disable section editing itself; the URL can still be accessed manually. Inserting a section can be done by editing either the section before or after it. An editor can merge one section with the previous section by deleting the heading. Note that in these cases the preloaded section name in the edit summary is not correct, and has to be changed or deleted. Navigation on pages from "talk" namespaces provides a special link labeled "New section", "+", or "Add topic" used to create a new section to the end of the page.[2] Pages having the code __NEWSECTIONLINK__ in wikitext also display this link, regardless of the namespace. The URL for such an action looks like: In this case, a text box having as title "Subject/headline", will appear and the content you type in it will become the name of the new section heading. There is no input box for the edit summary; it is automatically created. Do not edit the last existing section to create a new one, because it will result in a misleading edit summary which will conceal creation of the section and may confuse other users. By default, there is no link to edit the lead section of a page, so the entire page must be edited. Lead section editing can be enabled through Preferences → Gadgets → Appearance → Add an [edit] link for the lead section of a page.. The edit page shows the list of templates used on the whole page, i.e. also the templates used in other sections.. To edit a footnote rendered in a section containing the code <references />, edit the section with the footnote mark referring to it (see Help:Footnotes). Previewing the section will show a preview of the footnote. If a page has very large sections, or is very large and has no division into sections, and one's browser or connection does not allow editing of such a large section, then one can still:. When conditionally (using a parser function) transcluding a template with sections, the "edit" links of this and subsequent sections will edit the wrong section or give the error message m:Template:void (backlinks edit) for the template to transclude to produce nothing... Note that a parameter value appearing in a template, for example "{{. Advantages of separate pages: Advantages of one combined page with sections:. Template-style section transclusion (TST) is an older method of transcluding sections. Mark off sections in the text using this markup: <onlyinclude>{{#ifeq:{{{transcludesection|}}}|chapter1| content }}</onlyinclude> Use a template call to transclude the section. For example, to transclude a section called chapter1 from a page called pageX: {{:pageX|transcludesection=chapter1}} The target page defines the location of the section. This section is linked to from #Section linking.
https://readtiger.com/wkp/en/Help:Section
CC-MAIN-2018-34
refinedweb
1,002
60.14
Working with React In this chapter, the following recipes will be covered: - Introduction - Working with the latest JS features in React - What's new in React? - Using React on Windows Introduction React is a JavaScript library (MIT License) made by Facebook to create interactive UIs. It's used to create dynamic and reusable components. The most powerful thing about React is that can be used in the client, server, mobile applications, and even VR applications. In the modern web, we need to manipulate the DOM constantly; the problem is that doing this a lot may affect the performance of our application seriously. React uses a Virtual DOM, which means that all updates occur in memory (this is faster than manipulating the real DOM directly). The learning curve of React is short in comparison with other JavaScript frameworks such as Angular, Vue, or Backbone, mainly because the React code is mostly written with modern JavaScript (classes, arrow functions, string templates, and so on) and does not have too many patterns used to write code, like Dependency Injection, or a template system, like in Angular. Companies such as Airbnb, Microsoft, Netflix, Disney, Dropbox, Twitter, PayPal, Salesforce, Tesla, and Uber are extensively using React in their projects. In this book, you will learn how to develop your React applications in the way they do, using best practices. Working with the latest JS features in React As I said in the introduction, React is mainly written with modern JavaScript (ES6, ES7, and ES8). If you want to take advantage of React, there are some modern JS features that you should master to get the best results for your React applications. In this first recipe, we are going to cover the essential JS features so you are ready and can start working on your first React application. How to do it... In this section, we will see how to use the most important JS features in React: - let and const: The new way to declare variables in JavaScript is by using let or const. You can use let to declare variables that can change their value but in block scope. The difference between let and var is that let is a block scoped variable that cannot be global, and with var, you can declare a global variable, for example: var name = 'Carlos Santana'; let age = 30; console.log(window.name); // Carlos Santana console.log(window.age); // undefined - The best way to understand "block scope" is by declaring a for loop with var and let. First, let's use var and see its behavior: for (var i = 1 ; i <= 10; i++) { console.log(i); // 1, 2, 3, 4... 10 } console.log(i); // Will print the last value of i: 10 - If we write the same code, but with let, this will happen: for (let i = 1 ; i <= 10; i++) { console.log(i); // 1, 2, 3, 4... 10 } console.log(i); // Uncaught ReferenceError: i is not defined - With const, we can declare constants, which means the value can't be changed (except for arrays and objects): const pi = 3.1416; pi = 5; // Uncaught TypeError: Assignment to constant variable. - If we declare an array with const, we can manipulate the array elements (add, remove, or modify elements): const cryptoCurrencies = ['BTC', 'ETH', 'XRP']; // Adding ERT: ['BTC', 'ETH', 'XRP', 'ERT']; cryptoCurrencies.push('ERT'); // Will remove the first element: ['ETH', 'XRP', 'ERT']; cryptoCurrencies.shift(); // Modifying an element cryptoCurrencies[1] = 'LTC'; // ['ETH', 'LTC', 'ERT']; - Also, using objects, we can add, remove, or modify the nodes: const person = { name: 'Carlos Santana', age: 30, }; // Adding a new node... person.website = ''; // Removing a node... delete person.email; // Updating a node... person.age = 29; - Spread operator: The spread operator (...) splits an iterable object into individual values. In React, it can be used to push values into another array, for example when we want to add a new item to a Todo list by utilizing setState (this will be explained in the next chapter): this.setState({ items: [ ...this.state.items, // Here we are spreading the current items { task: 'My new task', // This will be a new task in our Todo list. } ] }); - Also, the Spread operator can be used in React to spread attributes (props) in JSX: render() { const props = {}; props.name = 'Carlos Santana'; props.age = 30; props.email = '[email protected]'; return <Person {...props} />; } - Rest parameter: The rest parameter is also represented by .... The last parameter in a function prefixed with ... is called the rest parameter. The rest parameter is an array that will contain the rest of the parameters of a function when the number of arguments exceeds the number of named parameters: function setNumbers(param1, param2, ...args) { // param1 = 1 // param2 = 2 // args = [3, 4, 5, 6]; console.log(param1, param2, ...args); // Log: 1, 2, 3, 4, 5, 6 } setNumbers(1, 2, 3, 4, 5, 6); - Destructuring: The destructuring assignment feature is the most used in React. It is an expression that allows us to assign the values or properties of an iterable object to variables. Generally, with this we can convert our component props into variables (or constants): // Imagine we are on our <Person> component and we are // receiving the props (in this.props): name, age and email. render() { // Our props are: // { name: 'Carlos Santana', age: 30, email: '[email protected]' } console.log(this.props); const { name, age, email } = this.props; // Now we can use the nodes as constants... console.log(name, age, email); return ( <ul> <li>Name: {name}</li> <li>Age: {age}</li> <li>Email: {email}</li> </ul> ); } // Also the destructuring can be used on function parameters const Person = ({ name, age, email }) => ( <ul> <li>Name: {name}</li> <li>Age: {age}</li> <li>Email: {email}</li> </ul> ); - Arrow functions: ES6 provides a new way to create functions using the => operator. These functions are called arrow functions. This new method has a shorter syntax, and the arrow functions are anonymous functions. In React, arrow functions are used as a way to bind the this object in our methods instead of binding it in the constructor: class Person extends Component { showProps = () => { console.log(this.props); // { name, age, email... } } render() { return ( <div> Consoling props: {this.showProps()} </div> ); } } - Template literals: The template literal is a new way to create a string using backticks (` `) instead of single quotes (' ') or double quotes (" "). React use template literals to concatenate class names or to render a string using a ternary operator: render() { const { theme } = this.props; return ( <div className={`base ${theme === 'dark' ? 'darkMode' : 'lightMode'}`} > Some content here... </div> ); } - Map: The map() method returns a new array with the results of calling a provided function on each element in the calling array. Map use is widespread in React, and is mainly used to render multiple elements inside a React component; for example, it can be used to render a list of tasks: render() { const tasks = [ { task: 'Task 1' }, { task: 'Task 2' }, { task: 'Task 3' } ]; return ( <ul> {tasks.map((item, key) => <li key={key}>{item.task}</li>} </ul> ); } - Object.assign(): The Object.assign() method is used to copy the values of all enumerable own properties from one or more source objects to a target object. It will return the target object. This method is used mainly with Redux to create immutable objects and return a new state to the reducers (Redux will be covered in Chapter 5, Mastering Redux): export default function coinsReducer(state = initialState, action) { switch (action.type) { case FETCH_COINS_SUCCESS: { const { payload: coins } = action; return Object.assign({}, state, { coins }); } default: return state; } }; - Classes: JavaScript classes, introduced in ES6, are mainly a new syntax for the existing prototype-based inheritance. Classes are functions and are not hoisted. React uses classes to create class Components: import React, { Component } from 'react'; class Home extends Component { render() { return <h1>I'm Home Component</h1>; } } export default Home; - Static methods: Static methods are not called on instances of the class. Instead, they're called on the class itself. These are often utility functions, such as functions to create or clone objects. In React, they can be used to define the PropTypes in a component: import React, { Component } from 'react'; import PropTypes from 'prop-types'; import logo from '../../images/logo.svg'; class Header extends Component { static propTypes = { title: PropTypes.string.isRequired, url: PropTypes.string }; render() { const { title = 'Welcome to React', url = '' } = this.props; return ( <header className="App-header"> <a href={url}> <img src={logo} </a> <h1 className="App-title">{title}</h1> </header> ); } } export default Header; - Promises: The Promise object represents the eventual completion (or failure) of an asynchronous operation and its resulting value. We will use promises in React to handle requests by using axios or fetch; also, we are going to use Promises to implement the server-side rendering (this will be covered in Chapter 11, Implementing Server-Side Rendering). - async/await: The async function declaration defines an asynchronous function, which returns an AsyncFunction object. This also can be used to perform a server request, for example using axios: Index.getInitialProps = async () => { const url = ''; const res = await axios.get(url); return { coins: res.data }; }; What's new in React? This paragraph was written on August 14, 2018, and the latest version of React was 16.4.2. The React 16 version has a new core architecture named Fiber. In this recipe, we will see the most important updates in this version that you should be aware of to get the most out of React. How to do it... Let's see the new updates: - Components can now return arrays and strings from render: Before, React forced you to return an element wrapped with a <div> or any other tag; now it is possible to return an array or string directly: // Example 1: Returning an array of elements. render() { // Now you don't need to wrap list items in an extra element return [ <li key="1">First item</li>, <li key="2">Second item</li>, <li key="3">Third item</li>, ]; } // Example 2: Returning a string render() { return 'Hello World!'; } - Also, React now has a new feature called Fragment, which also works as a special wrapper for elements. It can be specified with empty tags (<></>) or directly using React.Fragment: // Example 1: Using empty tags <></> render() { return ( <> <ComponentA /> <ComponentB /> <ComponentC /> </> ); } // Example 2: Using React.Fragment render() { return ( <React.Fragment> <h1>An h1 heading</h1> Some text here. <h2>An h2 heading</h2> More text here. Even more text here. </React.Fragment> ); } // Example 3: Importing Fragment import React, { Fragment } from 'react'; ... render() { return ( <Fragment> <h1>An h1 heading</h1> Some text here. <h2>An h2 heading</h2> More text here. Even more text here. </Fragment> ); } - Error boundaries with from the official website:: render() { <ErrorBoundary> <MyComponent /> </ErrorBoundary> } - Better server-side rendering with from the official site: - Reduced file size with from the official site: "Despite all these additions, React 16 is actually smaller compared to 15.6.1. -) That amounts to a combined 32% size decrease compared to the previous version (30% post-gzip)." If you want to check the latest updates on React, you can visit the official React blog:. Using React on Windows I'm not a big fan of Windows for development since it's kind of problematic to configure sometimes. I will always prefer Linux or Mac, but I'm aware that a lot of people who are reading this book will use Windows. In this recipe, I'll show you the most common problems you may have when you try to follow the recipes in this book using Windows. How to do it... We'll now see the most common problems using Windows for development: - Terminal: The first problem you will face is to use the Windows terminal (CMD) because it does not support Unix commands (like Linux or Mac). The solution is to install a Unix Terminal; the most highly recommended is to use the Git Bash Terminal, which is included with Git when you install it (), and the second option is to install Cygwin, which is a Linux Terminal in Windows (). - Environment variables: Another common problem using Windows is to set environment variables. Generally, when we write npm scripts, we set environment variables such as NODE_ENV=production or BABEL_ENV=development, but to set those variables in Windows, you use the SET command, which means you need to do SET NODE_ENV=production or SET BABEL_ENV=development. The problem with this is that if you are working with other people that use Linux or Mac, they will have problems with the SET command, and probably you will need to ignore this file and modify it only for your local environment. This can be tedious. The solution to this problem is to use a package called cross-env; you can install it by doing npm install cross-env, and this will work in Windows, Mac, and Linux: "scripts": { "start": "cross-env NODE_ENV=development webpack-dev-server -- mode development --open", "start-production": "cross-env NODE_ENV=production webpack-dev- server --mode production" } - Case-sensitive files or directories: In reality, this also happens on Linux, but sometimes it is very difficult to identify this problem, for example, if you create a component in the components/home/Home.jsx directory but in your code you're trying to import the component like this: import Home from './components/Home/Home'; - Paths: Windows uses a backslash (\) to define a path, while in Mac or Linux they use a forward slash (/). This is problematic because sometimes we need to define a path (in Node.js mostly) and we need to do something like this: // In Mac or Linux app.use( stylus.middleware({ src: __dirname + '/stylus', dest: __dirname + '/public/css', compile: (str, path) => { return stylus(str) .set('filename', path) .set('compress', true); } }) ); // In Windows app.use( stylus.middleware({ src: __dirname + '\stylus', dest: __dirname + '\public\css', compile: (str, path) => { return stylus(str) .set('filename', path) .set('compress', true); } }) ); // This can be fixed by using path import path from 'path'; // path.join will generate a valid path for Windows or Linux and Mac app.use( stylus.middleware({ src: path.join(__dirname, 'stylus'), dest: path.join(__dirname, 'public', 'css'), compile: (str, path) => { return stylus(str) .set('filename', path) .set('compress', config().html.css.compress); } }) );
https://www.packtpub.com/product/react-cookbook/9781783980727
CC-MAIN-2020-50
refinedweb
2,343
54.22
Giphy is the largest library providing one of the most popular forms of media widely used for chatting – GIFs or Graphics Interchange Format and stickers. The most popular social media apps such as WhatsApp, Instagram, Slack, Skype and Twitter (to mention a few) use Giphy’s technology to provide GIF content and Stickers for their chat users to improve the chatting experience. At Instamobile, we’ve added Giphy integration into all of our chat apps, so we’re going to describe our experience of integrating the Giphy API into any React Native app. In this article, we’re going to dive into integrating Giphy API in React Native in four simple and quick steps. 1. Get an API key Head over to the developer page and create an account on a chrome browser. Your dashboard should look like this. Click on the ‘Create an App’ button to create a new API. You will be prompted to select an option between API or SDK. For this article we are focusing on the API so click on the API option. Fill out your app name and app description, then create app. Your dashboard should be well set up with your API key on it. 2. Fetch Data from Giphy API Additionally, we’ll create states to hold the gif data and the term we search for. const [gifs, setGifs] = useState([]); const [term, updateTerm] = useState(''); In your App.js, create a fuction fetchGifs() in your App component. async function fetchGifs() { try { const API_KEY = <API_KEY>; const BASE_URL = ''; const resJson = await fetch(`${BASE_URL}?api_key=${API_KEY}&q=${term}`); const res = await resJson.json(); setGifs(res.data); } catch (error) { console.warn(error); } } Feel free to use this method into any of your React Native apps to save 20 minutes of coding, testing and debugging. 3. Display the GIFs in the React Native UI Let’s create an image list component to hold the GIFs in an image format. To achieve this, we wrote the code below: import React, {useState} from 'react'; import {View, TextInput, StyleSheet, FlatList, Image} from 'react-native'; // do not forget to add fresco animation to build.gradle export default function App() { const [gifs, setGifs] = useState([]); const [term, updateTerm] = useState(''); async function fetchGifs() { try { const API_KEY = <API_KEY>; const BASE_URL = ''; const resJson = await fetch(`${BASE_URL}?api_key=${API_KEY}&q=${term}`); const res = await resJson.json(); setGifs(res.data); } catch (error) { console.warn(error); } } /// add facebook fresco function onEdit(newTerm) { updateTerm(newTerm); fetchGifs(); } return ( <View style={styles.view}> <TextInput placeholder="Search Giphy" placeholderTextColor='#fff' style={styles.textInput} onChangeText={(text) => onEdit(text)} /> <FlatList data={gifs} renderItem={({item}) => ( <Image resizeMode='contain' style={styles.image} source={{uri: item.images.original.url}} /> )} /> </View> ); } const styles = StyleSheet.create({ view: { flex: 1, alignItems: 'center', padding: 10, backgroundColor: 'darkblue' }, textInput: { width: '100%', height: 50, color: 'white' }, image: { width: 300, height: 150, borderWidth: 3, marginBottom: 5 }, }); Important: For you to make GIFs appear on your Android device you have to add the following to the list of dependencies in your android/app/build.gradle. implementation 'com.facebook.fresco:fresco:2.0.0' implementation 'com.facebook.fresco:animated-gif:2.0.0' Now simply run the app, and you’ll see something like this on the main screen of you React Native app: 4. Display advanced Giphy units such as stickers, trending GIFs and best matches The video above show how the Giphy API React Native integration looks like with GIFs. You can easily search for Giphy stickers by replacing the BASE_URL This is what Giphy API stickers look like in your React Native app: Giphy API provides developers with two more easy but powerful endpoints trending GIPHY Trending returns a list of the most relevant and engaging content each and every day. Our feed of trending content is continuously updated, so you always have the latest and greatest at your fingertips. GIPHY Translate converts words and phrases to the perfect GIF or Sticker using GIPHY’s special sauce algorithm. This feature is best exhibited in GIPHY’s Slack integration. Conclusion As we learned, adding GIFs support to any React Native app is extremely straightforward with the Giphy API integration in React Native. The generic REST endpoints provided by Giphy are extremely simple to call from React Native. If you want to see it in action, check out one of the demos of our social apps, all of which have React Native Giphy API integration..
https://www.instamobile.io/mobile-development/giphy-react-native/?ref=hackernoon.com
CC-MAIN-2021-10
refinedweb
728
54.32
Dataset Reference: McCauley, T. (2014). Dimuon event information derived from the Run2010B public Mu dataset. CERN Open Data Portal. DOI: 10.7483/OPENDATA.CMS.CB8H.MFFA. import ROOT Welcome to JupyROOT 6.07/07 A little extra: JavaScript visualisation. This command will become a magic very soon. %jsroot on inputFileName = 'MuRun2010B.csv' import os if not os.path.exists(inputFileName): import urllib2 response = urllib2.urlopen('') filecontent = response.read() with open(inputFileName,"w") as f_out: f_out.write(filecontent) dimuons = ROOT.TTree("MuonPairs","MuonPairs") dimuons.ReadFile(inputFileName) 100000L Now we create an histogram to hold the invariant mass values. In order to loop on the TTree rows, we use the TTree::Draw method: this is the most straightforward way in which you can loop on a N-tuple in ROOT. Notice that the plot is an interactive JavaScript based visualisation: you can zoom on the resonances to better inspect the result. invMass = ROOT.TH1F("invMass","CMS Opendata: #mu#mu mass;#mu#mu mass [GeV];Events",512, 2, 110)> invMass",cut,"hist") c.SetLogx() c.SetLogy() c.Draw() That might have been too fast. We now make the analysis above more explicit producing a plot also for the J/Psi particle. from math import sqrt invMass = ROOT.TH1F("Spectrum","Subset of CMS Run 2010B;#mu#mu mass [GeV];Events",1024, 2, 110) jpsiLow = 2.95 jspiHigh = 3.25 jpsi = ROOT.TH1F("jpsi","Subset of CMS Run 2010B: J/#psi window;#mu#mu mass [GeV];Events",128, jpsiLow, jspiHigh) for e in dimuons: # a loop on the events if e.Q1 * e.Q2 != -1: continue m2 = (e.E1 + e.E2)**2 - ((e.px1 + e.px2)**2 + (e.py1 + e.py2)**2 + (e.pz1 + e.pz2)**2) m = sqrt(m2) invMass.Fill(m) if m < jspiHigh and m > jpsiLow: jpsi.Fill(m) Now time to draw our plot: this time we will inline an image in the notebook. We will plot on the same canvas the full spectrum and the zoom in the J/psi particle. dualCanvas = ROOT.TCanvas("DualCanvas","DualCanvas",800,512) dualCanvas.Divide(2,1) leftPad = dualCanvas.cd(1) leftPad.SetLogx() leftPad.SetLogy() invMass.Draw("Hist") dualCanvas.cd(2) jpsi.Draw("HistP") dualCanvas.Draw()
http://nbviewer.jupyter.org/github/dpiparo/swanExamples/blob/master/notebooks/CMSDimuon_py.ipynb
CC-MAIN-2018-13
refinedweb
361
53.68
As seen from various studies, there is a direct correlation between faster load times and higher conversion rates. When analyzed critically, the reason is simple, users want to get information quickly and when a website takes too long to provide that information, they move on to other alternatives. We can reduce the chances of users leaving our application by improving the page load times of navigation using link prefetching. Link prefetching is a technique that is used to fetch links in advance which speed up subsequent navigations. In this article, we’ll look at three libraries that can be used to prefetch links and explore the pros/cons of each one. Prefetch with link=prefetch Before we get into the libraries, I want to note that the browser has a built-in method for prefetching links. Some of the libraries discussed in this article use this method under-the-hood while others don’t. When the browser is done downloading critical resources for the page and not handling much user interactions, it has some idle time. This idle time is when links with <link=prefetch> are fetched and stored in cache. When the user navigates to the link, it’s fetched from the cache which speeds up navigation. Prefetching a link is as simple as adding: <link rel="prefetch" href="/journal" as="document"> as=document tells the browser the type of resource to prefetch so it sets the appropriate headers. Other options are style, script, font and more. When the user navigates to a prefetched page, in the network tab you’ll see prefetch cache under the size column as seen in the screenshot below. You’ll notice the load time is 10 milliseconds so the page appears instantly to the user. If you would rather not have third-party code, you can roll in your own custom solution using this as a starting point. One of the limitations of the browser mechanism of prefetching is that it works with only <link> tags. There’s also little you can do for customization and flexibility. For the rest of the article, we’ll look at three different libraries and the method used by each to prefetch links. InstantClick From the official documentation, InstantClick is a JavaScript library that dramatically speeds up your website, making navigation effectively instant in most cases. InstantClick works by prefetching links as soon as a link ( <a>)is hovered on (for mobile devices, on touchstart) so by the time the user actually clicks the link, the page is already downloaded. You can get started with InstantClick through a CDN or an unofficial package on npm. From the command line in your project directory, run the command: npm install --save instantclick Then use it in your project: import InstantClick from 'instantclick' InstantClick.init() If you are using the CDN, add <script> to your document. <script src=""></script> Then initialize it: <script data-no-instant>InstantClick.init();</script> You can also pass additional configuration parameters to determine when to start prefetching a link and how long to keep it in the cache: InstantClick.init({ preloadingMode: 50, // Mouseover duration before preload is triggered preloadCacheTimeLimit: 30000 // How long to cache preloaded pages }); That’s the basics of what you need to add InstantClick to your application. There are other things you can do which can be found in the documentation. quicklink Next, we’ll look at quicklink, which takes a different method to prefetch links. The method can be broken into four steps: - Check all the links currently in the viewport (links that are visible using IntersectionObserver) - Detect if the browser is not busy (using requestIdleCallback) - Check if the user is on a slow connection (with the Network Information API) - Prefetch the URLs to the links (using <link rel=prefetch>or XHR or fetch) Getting started is as simple as adding <script> with a CDN link to the bottom of your document. quicklink can also be installed via npm. To install via npm: npm install --save quicklink Or using cdn: <script src=""></script> Then initialize it like so: quicklink.listen(); There are other configuration options that can be passed during initialization. Some of them are: quicklink.listen({ timeout: 4000, // defaults to 2 seconds el: document.getElementById('carousel'), // DOM element to observe for viewport links origins: ['example.com'], // list of origins to allow to prefetch from, defaults to hostname priority: true // defaults to low-priority }); The entire library weighs less than < 1kb minified and gzipped so it’s quite lightweight. Guess.js Out of all the libraries covered, Guess.js requires the most overhead setup cost. This, in part, is due to the data-driven method used to determine the links to prefetch. Another important factor is the development environment, framework (Angular, Nuxt.js, Gatsby, Next.js) or static site? This second part is important as the development environment determines the setup. Let’s look at an example with the Nuxt.js framework. Nuxt.js transforms every *.vue file in the pages/ directory to a route. Assuming we have a structure like: pages/ ├── about.vue ├── example.vue ├── index.vue └── media.vue This generates the following routes: /about /example / /media To use guess.js with Nuxt.js, install guess-webpack as a devDependency: npm i -D guess-webpack Then inside nuxt.config.js, add this snippet: import { readFileSync } from 'fs' import { GuessPlugin } from 'guess-webpack' export default { ... build: { ... extend(config, ctx) { if (ctx.isClient) { config.plugins.push( new GuessPlugin({ reportProvider() { return Promise.resolve(JSON.parse(readFileSync('./routes.json'))) } }) ) } ... } }, // Nuxt > v2.4.0 router: { prefetchLinks: false } } Nuxt.js v2.4.0 uses quicklink by default so we override it with prefetchLinks: false. Create a file routes.js in the same directory as nuxt.config.js and add the following: { "/": { "/example": 80, "/about": 20 }, "/example": { "/": 20, "/media": 0, "/about": 80 }, "/about": { "/": 20, "/media": 80 }, "/media": { "/": 33, "/about": 33, "/example": 34 } } This file is a sample file which shows the number of times users have gone from one route to another. For example, if we look at the last object, we’ll see that from /media, there were 33 sessions in which users have visited /, another 33 sessions which users visited /about and 34 sessions which users visited /example. Guess.js takes this data and builds a model to predict which links to prefetch based on the probability of the user navigating to that page next. Guess.js also allows you to consume real-world data from analytics tools like Google Analytics. This real-world usage makes prefetching links more accurate and efficient since it’s based on real-world data. You can see how to configure Google Analytics with Guess.js and Nuxt.js here. Statistics and trends As can be seen from the graph above, quicklink and guess-webpack (guess.js) are the most downloaded libraries in the last 6 months with quicklink overtaking guess.js around May this year. InstantClick has the lowest downloads on npm and this may be attributed to the fact that it’s not an official package. The GitHub statistics are closer as can be seen from the table above. quicklink has 8,433 stars (the most) and 28 issues (the least) at of this time of writing. It’s also the smallest in terms of size (< 1kb). Guess-webpack – the npm package for guess.js – is the largest in terms of size (1.2mb). InstantClick has the most issues on GitHub (50) and looking at the last time it was updated, it seems it’s no longer actively maintained. Developer experience The table below gives insight into some factors to consider before deciding which one to pick: Conclusion In this article, we’ve covered three libraries that can be used to prefetch links as well as looking at the methods they use to determine which links to prefetch. We also looked at the inbuilt method of prefetching links. The library you use comes down to the project you are working on. We’ve seen the pros/cons of each library so you can decide which best suits the project you are working on. Whichever library you choose to use, it will ensure that your links are prefetched which will improve the speed of navigation for your users..
http://blog.logrocket.com/faster-page-load-times-with-link-prefetching/
CC-MAIN-2020-40
refinedweb
1,369
63.59
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project. >This patch adds your recommendation almost verbatim (except for the >concluding "it's a pity...", want it in?) and with a little extra context >added at the beginning. Looks good as-is, thanks. >I've added it to the "Extensions and Backward >Compatibility" section of the FAQ, since this is referenced at the top >of the ext/howto.html page. I'm not sure if that's the best place for >it. Seems fine. >Should there be separate tests for 3.0 and 3.1, IIRC the extensions were >still in the std namespace until 3.1? This is correct, and should at least be noted as a comment. thanks again! benjamin
http://gcc.gnu.org/ml/libstdc++/2002-11/msg00310.html
crawl-001
refinedweb
129
78.55
activating SHIELD after BOOST is not able to affect BOOST your X,Y values are not able neither to be the same like your opponent nor to even be in 400 units from him - u both have radius of 400, so your radiuses are summed up Can someone help me How can I calculate the nextCheckpointAngle in Gold league? Look up a trig function called ATan2. Depending on the language you're using, it might be included in the standard math package. It will give the angle between a point and the origin. If you subtract your position from the checkpoint's position (or maybe vice-versa, I get mixed up) and use ATan2 on that, it'll be the angle from you to the checkpoint instead. Sorry guys for the question but i can't use BOOST too... In C# I wrote :Console.WriteLine(nextCheckpointX + " " + nextCheckpointY + " " + BOOST);I simply replaced the Thurst value by the keyword BOOST, but it always says me that BOOST doesn't exists in the current context... Why if it's a keyword ? Console.WriteLine(nextCheckpointX + " " + nextCheckpointY + " BOOST"); Thank you, i didn't understood to put it as a string Hello, After I have submitted twice my code. It is impossible to join the Bot Programming "Coders strike back".The error message is:"An error occurred (#-1): "internal error". Please contact coders@codingame.com". Can you help ? try to refresh? Some bugs have been fixed in Gold/Legend (inputs, ranking) Thank you that's great, what has been fixed besides the angle bug? Hello, I'm now stuck on the 3rd level with a task "Pods will now collide with each other. Pods will forcibly bounce off each other when they come into contact. Additionally, extra maps are now available for racing." I do not know, how to change program so that it will work. Can you please help me? Game-wise, you now have to figure out how to calculate collisions and either avoid them, or use them to your advantage. If your code is crashing, we'll need to see your errors to help. how do i use the boost keyword This question has been asked and answered a dozen times in this thread. Actually, there was a misunderstanding. Nothing has changed on the ranking. Sorry for the wrong announcement and late fix.We could have been more proactive on this one. Don't hesitate to chase us here or me on chat for this kind of issue. We could have been more proactive on this one. Don't hesitate to chase us here or me on chat for this kind of issue. Good job on the fix, but this has been known for over 9 months and reported numerous times - in general chat, in french chat, personally to multiple admins in private messages and of course on the forums. I don't know what finally triggered the fix, but it's definitely not just a matter of "chasing" you. I didn't say nor imply it was. I just think it doesn't hurt. Sorry again for this being fixed so late. Can someone please help me with my code? I'm in wood 2 but I cant seem to get it running. Im programming in python import sys import math # game loop while True: x, y, next_checkpoint_x, next_checkpoint_y, next_checkpoint_dist, next_checkpoint_angle = [int(i) for i in input().split()] opponent_x, opponent_y = [int(i) for i in input().split()] #distance if next_checkpoint_dist < 15: thrust =10 else: thrust = 100 #angle if nextCheckpointAngle > 90 or nextCheckpointAngle < -90 then: thrust -> 0 else: thrust -> 100 end if print x y thrust print(str(next_checkpoint_x) + " " + str(next_checkpoint_y) + "100") you need to put it as a string thrust = "BOOST" Hello, some tips:the distance should be like 1500 instead of 15 ^^ the radius' of the checkpoints themselfs are 600 ^^also you could make the angle where you thrust with 100 a little sharper like 50 and -50one idea is also to turn to the middle of the map like 1k from the checkpoint away already so that you will be able to boost to the next one more efficiently Crysaac
http://forum.codingame.com/t/coders-strike-back-puzzle-discussion/1833?page=7
CC-MAIN-2018-30
refinedweb
690
80.72
In the main bitmap graphics in Silverlight are second class citizens because Silverlight is based on WPF which emphasises vector graphics. However, in practice bitmaps in Silverlight are very important for reasons of efficiency since Silverlight doesn't have access to the GPU. Find out how to work with dynamic bitmaps. In the main bitmap graphics in Silverlight are a second class citizens because Silverlight is based on WPF which emphasises vector graphics. However in practice bitmaps in Silverlight are very important. The reason is that while WPF can get away with what at first sight appear to be inefficient implementations of graphical methods it gets away with it because it has access to graphics hardware acceleration. Silverlight for reasons of security ignores the GPU and so has no hardware graphics acceleration and as such occasionally you need to use bitmaps, and dynamically generated bitmaps, to make things efficient. The good news Silverlight can do dynamic bitmaps with the WriteableBitmap object introduced in Silverlight 3. It can even do bitmap manipulation which opens up a whole new range of possibilities by way of dynamic bitmaps generated by code rather than loaded from a URI. The only problem is that the facilities provided within Silverlight aren’t as complete as the full WPF implementation. On the other hand there are aspects of Silverlight bitmaps that are arguably better than their WPF analogs. What is important however is that you know what the Silverlight bitmap objects will do and how they differ from the WPF implementation of supposedly the same objects. So let’s take a look at WriteableBitmap, how it works and how to use it to do dynamic things. To use WriteableBitmap we need to add: using System.Windows.Media.Imaging; which allows us to reference all of the bitmap facilities we need without having to write them out in full. WriteableBitmap wbmap = new WriteableBitmap(100, 100); Notice that this constructor isn’t the same as the full WPF constructor, which includes pixel formats and other specifies. The reason is that a Silverlight bitmap has a fixed pixel format. Each pixel, a 32-bit int, uses a four-byte ARGB – that is the high byte is the alpha channel followed by bytes for the Red, Green and Blue channels. Once created you can access the individual pixels using the Pixels property. The only problem is that they are stored in a one-dimensional array without any differentiation into row or column. In fact they are stored in row order starting with the top left-hand pixel. Each pixel is stored as a 32 bit int and the color data is packed into the four bytes as already explained. This has to be contrasted to WPFs way of doing the same job. The WPF WriteableBitmap uses a byte array with four bytes per pixel. Not only is the format different but the WPF WriteableBitmap needs an external byte array which is constructed or manipulated and then transferred as a block to the bitmap's pixels. To make use of the Pixels property we have to construct a storage mapping function to map the x and y co-ordinates into the one-dimensional array. That is, the pixel at x,y is stored in Pixels[x+w*y] where w is the width of the bitmap in pixels. It’s a mystery why WriteableBitmap doesn’t have any methods that implement this mapping but it doesn’t. It is easy enough to add however by way of two extension methods. First we need a static class that can be used as the container for the extension methods: public static class bitmapextensions{ The setPixel method first checks that the x and y co-ordinates are in the correct range: public static void setPixel( this WriteableBitmap wbm, int x, int y, Color c){ if (y > wbm.PixelHeight - 1 || x > wbm.PixelWidth - 1) return; if (y < 0 || x < 0) return; Then it computes the storage mapping function to access the pixel at that x,y location and stores the specified colour: wbm.Pixels[y * wbm.PixelWidth + x] = c.A << 24 | c.R << 16 | c.G << 8 | c.B;} The only complication is the use of left shift operator << to place the bytes in the correct position within the 32 bit integer.]); }} Notice the use of the BitConverter object to convert the four bytes that make up the Int32 pixel value to a four-byte array. BitConverter is a much overlooked object which provides a link between high level data types and their representations in bits.. The complete class is: public static class bitmapextensions{ public static void setPixel( this WriteableBitmap wbm, int x, int y, Color c) { if (y > wbm.PixelHeight - 1 || x > wbm.PixelWidth - 1) return; if (y < 0 || x < 0) return; wbm.Pixels[y * wbm.PixelWidth + x] = c.A << 24 | c.R << 16 | c.G << 8 | c.B; }]); }} <ASIN:0470534044> <ASIN:1430224258 > <ASIN:0672330628 > <ASIN:0470524650 > <ASIN:1430229888 >
http://www.i-programmer.info/programming/silverlight/1145-writeablebitmap-in-silverlight.html
CC-MAIN-2015-35
refinedweb
824
63.19
Who Are We My name is Orlando Hoilett and I am a PhD student in Biomedical Engineering at Purdue University. This team is composed of myself and my friend Raj Patel, a senior Biomedical Engineering student at Purdue University. We are interested in this problem because it provides us an opportunity to put our engineering skills to the test in solving a real-world problem. Here, we propose our idea for a multi-sensor system to analyze various parameters of Lake Erie. The sensors will consist of a water temperature, nitrogen, humidity, temperature, pH, spectrophotometer, color, conductivity, phosphorus and turbidity sensor. We will demonstrate our different sensor capabilities. We propose that these sensors can be placed on a platform such as an aerial or aquatic drone to monitor Lake Erie and other relevant bodies of water for algal growth. IOT and GPS IOT is changing the way information can be used. With the LinkIt One, we are able send sensor data to a server where it can be interpreted. Normal Arduino boards cannot solely do the vigorous computations and store the data. IOT would allow the tracking of trends via algorithms. The LinkIt One board module will be used to track location of each sample taken. The GPS module is critical because without it the data on the sensors would not specific to a certain region. Shown here is the output of our Adafruit GPS module when sitting outside Neil Armstrong Hall of Engineering at Purdue University. Google Maps data confirms the GPS module data. We used Adafruit's "Direct Connect" code example to collect this data. Please follow their tutorial here. As of now, we are proving the concept of our sensors in the following sections. The IOT and GPS portion will be further developed in phase 2. Water Temperature Water temperature is a vital parameter in enhancing algae growth. We have sourced a low cost digital water thermometer for this purpose. Toxic algae like blue-green algae grows in warmer water [1]. The warm water creates conditions which cause the algae to clump causing faster growth [1]. The algae blooms will create a positive feedback loop due to algae absorbing more sunlight causing to water to get warmer. The solution to this is measure and monitor the temperature at regions of Lake Erie with a waterproof DS18B20 digital temperature sensor. DS18B20 can detect temperatures in a range of -55 to 125°C with an accuracy of ±.5°C. This DS18B20 and a GPS module can track temperature data based on location. With that data notifications can be sent for regions at risk for algae blooms. Circuit Hookup: Code Below is sample code provided by Konstantin Dimitrov. You can find links to the DallasTemperature.h and OneWire.h files on Konstantin's page. #include <OneWire.h> #include <DallasTemperature.h> /********************************************************************/ // Data wire is plugged into pin 2 on the Arduino #define ONE_WIRE_BUS); /********************************************************************/ void setup(void) { // start serial port Serial.begin(9600); // Start up the library sensors.begin(); } void loop(void) { // call sensors.requestTemperatures() to issue a global temperature // request to all devices on the bus /********************************************************************/ sensors.requestTemperatures(); // Send the command to get temperature readings /********************************************************************/ Serial.println(sensors.getTempCByIndex(0)); // Why "byIndex"? // You can have more than one DS18B20 on the same bus. // 0 refers to the first IC on the wire delay(1000); } Testing: The sensor was tested by placing it from room temperature to a colder environment and then back to room temperature. The data is displayed in the graph below. The graph shows at a decrease in temperature when the DS18B20 is placed at a colder temperature and then increases back to room temperature showing the temperature works and is calibrated. Nitrogen Sensor Nitrogen is reported to support the production of algal blooms [2]. We found a low cost gas sensor (MQ-135) that’s able to detect nitrogen oxide. MikroElektronika sells a very convenient breakout board for the sensor which makes connecting the gas sensor to a microcontroller seamless. We were able to demonstrate gathering data from the sensor. Unfortunately, the calibration step was a bit more difficult due to the lack available standard calibrations for nitrogen gas available to us and also the lack of specificity of the sensor. Instead, we calibrated the sensor in standard air and also in air with "wafted" 30% isopropyl alcohol. Our data shows that the sensor responded to the change in air composition with the addition of the rubbing alcohol vapor. We propose that with further funding, we will be able to better calibrate the sensor and create calibration standards for use in the field for determining nitrogen gas content around Lake Erie. The breakout board from MikroElektronika has standard connections. GND was connected to the GND of the Arduino, 5V to 5V, and OUT was connected to Analog input 0. Then we implemented a straightforward code to read the voltage output of the sensor. void setup() { Serial.begin(9600); while(!Serial); } void loop() { double x = analogRead(A0); double volts = (x/1023.0)*5.0; Serial.println(volts); delay(1000); } Air Temperature and Humidity Temperature and humidity are important metrics for environmental health. We utilized the DHT22, a common air temperature and humidity sensor, to demonstrate our capabilities. The DHT22 was tested using code provided by Adafruit Industries. Please visit their GitHub repository for the DHT22 sensor. You will also need to download their Sensors.h file. Our data compares the results from the DHT22 sensor with a low-cost indoor air humidity monitor. Hookup instructions for the DHT22 sensor. (From left to right) pin 1 is Vcc and should be connected to 5V. Pin is 2 is data and should be connected to 5V through a 10k pull-up resistor. Then connect Pin 2 to Digital Pin 10 on the Arduino. Pin 3 of the DHT22 sensor is unused, leave it floating. Finally, connect pin 4 to ground. Circuit Hookup: Testing: We tested the output of the DHT22 sensor alongside an Acurite indoor home temperature and humidity monitor. Results indicate that we have reasonable agreement between the two sensors. Further calibration and tuning can improve the accuracy of our system. pH Sensor pH is a known indicator of water quality. Algal growth is optimal around pH 6-9 which makes pH sensing important for assessing algal blooms [3]. As a proof of concept, we utilized the pH sensor from DFRobot which is well-suited for prototyping purposes. The pH sensor was wired according to the schematic below. Circuit Hookup: We used example code provided by DFRobot. To demonstrate the activity of the pH sensor, we placed the pH sensor in vinegar which is reported to have a pH of about 2.3 [4], milk which has a pH of 6.6 [5], and baking soda which has a pH of 8.3 [5]. Our results indicate that our sensor was successful in measuring the pH of these solutions. /* #; } Spectrophotometer A spectrophotometer allows us to have more quantitative analysis of water samples [6]. A spectrophotometer is a very useful lab instrument that allows us to measure the amount of light that is absorbed by a given sample. By analyzing the sample at different wavelengths of light, we can determine the concentration of different amounts of analytes in the samples. This could be oxygen content, carbon dioxide or other nutrients. Spectrophotometers range from a few hundred dollars to a few tens of thousands. Luckily, a spectrophotometer can be easily approximated using a light source and a light sensor. We have chosen to use a simple RGB LED and a PD333-3C/H0/L2 to demonstrate how we would utilize the spectrophotometer to analyzing samples from Lake Erie. For our concept, we used measured the amount red and green light absorbed by different concentrations of blue and red food coloring, respectively. We positioned our photodiode directly across from our light source. The sample was placed in between the photodiode and light source. The absorbance of each solution was plotted against "normalized" concentration. If we now had an "unknown" sample and measured its absorbance, we could calculate its concentration using the data we collected. Again, this could be of different analytes including gases and nutrients, etc. making spectrophotometry a really useful technique in assessing water quality. We developed a simple Arduino code to control the output of the RGB LED to switch between red, green, and blue. We also created a simple photodiode preamplifier signal to process the signal coming from the photodiode. Our simple spectrophotometer can be scaled up to analyze samples at many different colors by using different colored LEDs, or even using a white LED and breaking up the spectra with a diffraction grating similarly as others have done. /* FILENAME: monochromater.ino AUTHOR: Orlando S. Hoilett DATE: Sunday, September 7, 2014 UPDATES: Version 0.0.0 09/07/2014:1930> Reads value from detector circuit using one of the analog in pins on the Arduino. 09/08/2014:1253> Added redON(), greenON(), and blueON() functions to easily select the wavelength of light we want the RGB LED to emit. DESCRIPTION This program reads voltages from a photodetector circuit for a simple monochromator. This example was created as a demonstration of pulse oximetry for first-year engineering students at Vanderbilt University. */ //declare variables const int sensorPin = A0; //Analog In Pin 0 int ADCvalue = 0; double sensorVoltage = 0; //define pin locations int redPin = 3; //digitalPin 3 int greenPin = 4; //digitalPin 4 int bluePin = 5; //digitalPin 5 //initialize helper methods double convertToVoltage(int ADCvalue); void initializeLEDPins(int redPin, int greenPin, int bluePin); void redON(); void greenON(); void blueON(); void setup() { //begin communication with Arduino and computer Serial.begin(9600); initializeLEDPins(redPin, greenPin, bluePin); //you should write the function to turn on the correct color //here please remember to type the semicolon ";" at the end of //the function like you see with all the other commands I have //written redON(); //did you add the semicolon ";" at the end? } void loop() { //reads value from sensor using Arduino ADCvalue = analogRead(sensorPin); //convert ADC value to a voltage using the conversion //(ADCvalue / 1023) * 5 sensorVoltage = convertToVoltage(ADCvalue); //print the value to the "COM" port Serial.println(sensorVoltage); //wait for a bit to print the data //this makes it easier to read in the "COM" port delay(350); } double convertToVoltage(int ADCvalue) { return (ADCvalue / 1023.0) * 5.0; } void initializeLEDPins(int redPin, int greenPin, int bluePin) { pinMode(redPin, OUTPUT); pinMode(greenPin, OUTPUT); pinMode(bluePin, OUTPUT); } void redON () { digitalWrite(redPin, LOW); digitalWrite(greenPin, HIGH); digitalWrite(bluePin, HIGH); } void greenON () { digitalWrite(redPin, HIGH); digitalWrite(greenPin, LOW); digitalWrite(bluePin, HIGH); } void blueON () { digitalWrite(redPin, HIGH); digitalWrite(greenPin, HIGH); digitalWrite(bluePin, LOW); } Circuit Hookup: Color and Light Sensor We thought that color would be a really simple qualitative test to assess algal growth at different locations around Lake Eerie. For this test, we utilized Adafruit’s TCS34725 RGB Color Sensor Breakout. The TCS34725 is a convenient sensor that allows us to measure the red-green-blue (RGB) values of a nearby object. We envision that the sensor could be mounted on a drone or boat and scan the RGB values of different areas of the lake. The breakout contains a neutral LED that shines a light on an object. The reflected light hits the TCS34725 and the TCS34725 is able to calculate the RGB values of the nearby object. We placed a red, green, and blue object in front of the sensor and observed the sensors output. It appears that the sensor output agrees with the dominant color of the object. Additionally, sunlight is reported to affect algal blooms as well. The color sensor breakout also has the ability to measure luminous intensity which can be used to measure how much light the algal blooms are getting [8]. Circuit Hookup: A Few Future Directions Turbidity Turbidity is the quantitative method for measuring how clear a liquid is. Turbidity can affect the conditions that algae grows. Low turbidity can be caused by water moving slowing. That leads to clearer water which is easier for light to penetrate creating better conditions for algae to grow [7]. The calculation of turbidity will be calculated with a turbidometer. The boat will have a device that will pick up a water sample from Lake Erie. Then a sample of light will be running through the sample where it would reach a light to frequency converter to measure the absorbance. Then with Bear’s law the concentration of particles can be calculated. Conductivity Sensor Salinity is the concentration of dissolved salts in a liquid. Salinity can affect the conditions that algae is grown in. Varying algae types need different conditions to thrive. Blue-green algae is the harmful algae in Lake Erie and needs lower salinity to grow. Blue-green algae is common in many bodies of water, and in 1999 Australia had an issue with blue-green algae. Their solution was to add salinity sensors. If levels fell below 2 ms/cm, it providing an early warning system. Our solution is to add a conductivity sensor which can convert the data to salinity. This along with the IOT system would give early warning detection to prevent algae outbreaks from happening. Phosphorus sensor In addition to nitrogen, phosphorus levels increase dramatically with algal blooms [8]. We have identified a source for a low cost phosphorus sensor that can be implemented with our solution [9]. Conclusions Thank you for considering our project. We have demonstrated the various sensor capabilities that we have and have also provided paths for scaling up our design. We have demonstrated the use of various sensors for measuring important parameters in monitoring algal blooms. These sensors can easily be mounted on boat or on a drone for real-time analysis of Lake Erie. Citations [1] [2] [3] [4] [5] [6] [7] [8] [9]
https://mediateklabs.hackster.io/calvary-engineering-llc/hisens-lake-monitoring-system-e0f4f7
CC-MAIN-2018-47
refinedweb
2,298
55.64
Logging Other languages: français | ... Problem: You want to control logging for default HTTPServer. Solution: With the built-in webserver you can control logging by using wsgilog and passing it to your app as middleware. This code works for wsgilog version 0.2 import sys, logging from wsgilog import WsgiLog import config class Log(WsgiLog): def __init__(self, application): WsgiLog.__init__( self, application, logformat = '%(message)s', tofile = True, toprint = True, file = config.log_file, interval = config.log_interval, backups = config.log_backups ) Then when you run your app, you pass a reference to to the class e.g. (if the above was part of the module 'mylog') from mylog import Log application = web.application(urls, globals()) application.run(Log)
http://webpy.org/cookbook/logging
CC-MAIN-2017-17
refinedweb
115
53.68
The journey on the unsinkable — what AI can learn from the disaster Have you ever thought of being one of the passengers on the Titanic in 1912? Would you survive? Would your children survive? What would happen if you were in the upper deck? We are often told that women and children were given priority to the lifeboats, but I was not satisfied with this short, qualitative answer. With the advent of machine learning, we are able to dig deeper. We can spot patterns and infer what would have happened to you and your family should you all were onboard. Curiosity is the wick in the candle of learning. — William Arthur Ward TL;DR I developed a machine learning model that does so, and deployed it as a Web App. Check out and complete the quiz to find out your chance of survival. Mmm… borderline for me… Table of Contents Originally published at edenau.github.io. Introduction Motivation For data scientists, Titanic Kaggle dataset is arguably one of the most widely used datasets in the field of machine learning, along with MNIST hand-written digit, Iris flower etc.. For some non-data scientists, machine learning is a black box that does magic; but for some sceptics who ‘learn from the past’ (and inadvertently human-trained a classification model in their head), they foresee that AI is the new dot-com. Photo by Marc Sendra Martorell on Unsplash One of my professors argued that machine learning-driven time-series forecasting model would not perform with flying colours, as it is just looking in the rearview mirror. However, the omnipresence of machine learning models proves their values to our society, and the fact that they have already brought a lot of convenience to us is undeniable — Google auto-completes email sentences which really saves me a lot of time. I hope I can share my experience, help spread the knowledge, and demonstrate how new technologies can achieve things that we could not in the last century. AI is the new electricity. — Andrew Ng PS: machine learning is a subset of artificial intelligence, in case you are confused. What is Kaggle? Kaggle is a platform for data scientists to share data, exchange thoughts, and compete in predictive analytics problems. There are many high-quality datasets that are freely accessible on Kaggle. Some Kaggle competitions even offer prize money, and they attract a lot of famous machine learning practitioners to participate. People often think that Kaggle is not for beginners, or it has a very steep learning curve. They are not wrong. But they do offer challenges for people who are ‘getting started’. As a (junior) data scientist, I could not resist searching for interesting datasets to start my journey on Kaggle. And I bumped into the Titanic. Kaggle is an Airbnb for data scientists — this is where they spend their nights and weekends. — Zeeshan-ul-hassan Usmani Overview Here goes the overview of the technical bit. I first found the dataset on Kaggle and decided to work on it and analyze it with Python. I used it to develop and train an ensemble of classifiers using scikit-learn that would predict one’s chances of survival. I then saved it by pickle and deployed it as a Web App on localhost using Flask. Finally, I leveraged AWS free tier (available for 12 months) to cloud-host it. There are many tutorials online that focus on how to code and develop a machine learning model in Python and other languages. Therefore, I will only explain my work in a relatively qualitative manner with graphs and results, instead of bombarding you with codes. If you really want to go through my codes, they are available on GitHub. Tools I decided to use Python since it is the most popular programming language for machine learning with numerous libraries. And I don’t know R. And no one uses MATLAB for machine learning. Instead of my local machine, I went for Google Colab so that I could work cross-platform without any hassle. Sit tight, here we go! Data Data Inspection Let’s import the data into a DataFrame. It consists of passenger ID, survival, ticket class, name, sex, age, number of siblings and spouses onboard, number of parents and children onboard, ticket number, passenger fare, cabin number, and port of embarkation. First 5 rows of data What immediately came to my mind was the following points: PassengerIdis key (unique), Survivedis the target that we would like to infer, Namemight not help but their titles might, Ticketis a mess, and - there are missing data labelled as NaN. I decided to drop variable Ticket for now for simplicity. It could possibly be holding useful information, but it would require extensive feature engineering to extract them. We should start with the easiest, and take it from there. The ratio of missing data On the other hand, let’s take a closer look at the missing data. There are a few missing entries in variables Embarked and Fare, which should be able to be inferred by other variables. Around 20% passenger ages were not recorded. This might pose a problem to us, since Age is likely to be one of the key predictors in the dataset. ‘Women and children first’ was a code of conduct back then, and reports suggested that they were indeed saved first. There are >77% of missing entries in Cabin, which is unlikely to be very helpful, and let’s drop it for now. Data Visualization Pair plot (not shown) is usually my go-to at the beginning of a data visualization task, as it is usually helpful, and it has a high information-to-lines-of-code ratio. One single line of seaborn.pairplot() gives you n² plots (technically n(n+1)/2 distinct plots), where n represents the number of variables. It gives you a basic understanding of the relationship between every pair of variables, and the distribution of each variable itself. Let’s dive into different variables. We first inspect the relationship of target variable Survived with each predictor one by one. By a simple count plot seaborn.countplot(), we found that most people belonged to the third class, which wasn’t surprising; and in general they had a lower probability of survival. Even with this single predictor, given everything else unknown, we could infer that a first-class passenger would be more likely to survive, while this would be unlikely for a third-class passenger. Meanwhile, women and children were more likely to survive, which aligned with the aforementioned theory of ‘women and children first’. First-class young female passengers would now be the ones with the highest chance of survival, if we only examine variable Pclass, Sex, and Age. Nevertheless, it might be harder to interpret the density plot seaborn.kdeplot() of passenger fare. For both ‘survived’ and ‘not survived’ classes, they span over a wide range of fare, with the ‘not survived’ class having a smaller mean and variance. Note that there is a funny tail in the ‘survived’ class, which corresponds to three people getting their first-class tickets with $512 each (no idea what currency the dataset was referring to). They all got onboard at the port of Cherbourg, and all of them survived. On the other hand, the port of embarkation seems to also play a role in determining who would survive. Most people embarked at the port of Southampton — the first stop of the journey, and they had the lowest survival rate. Maybe they were assigned to cabins further away from exits, or spending more time on a cruise would make people relaxed or tired. No one knows. Or maybe it’s just indirectly caused by some third variable — say maybe there were fewer women/children/first-class passengers that got onboard at the first port. This plot does not provide such information. Further investigation is required, and is left as an exercise for the reader. (Nani?) If you are a fan of tables instead of plots, we can also visualize the data by pandas.DataFrame.groupby() and take the mean for each class. However, I don’t think there is a clear pattern shown in the table of Parch below. Correlation matrix generated by seaborn.heatmap() illustrates the strength of correlation between any two variables. As you can see, Sex has the highest magnitude of correlation with Survived, whereas guess what, Fare and Pclass are highly correlated. SibSp and Parch do not seem to play a big role in predicting one’s survival chance, although instinct suggests otherwise. Family members onboard might be your helping hands in escaping from the sinking ship, or they could also be a burden (not from my personal experience, in case you are reading). More on that later in Feature Engineering. Missing Data Imputation We have found earlier in Data Inspection that there were missing data entries. For instance, we seem to have no clue how much did a 60-year-old Thomas Storey pay for his ticket. Instinct tells us that ticket fare hugely depends on ticket class and port of embarkation (sex might also be a factor in the early 20th century), We can cross-check with the correlation matrix above. Therefore, we will just take the mean (or median if you want) of third-class fare at Southampton. This is just an educated guess and is probably wrong, but it is good enough. Bear in mind that it is impossible to have noiseless data, and machine learning models are (to different extents) robust against noise. Go digging in historical archives is not worth it. There were also two women whom we had no idea where they got on the ship. This should be strongly correlated with ticket class and fare. As they both paid 80 dollars for a first-class seat, I would bet my money on Cherbourg (C in the plots). If there are only a few missing entries in a particular variable, we can use the tricks above to make educated guesses by essentially taking the maximum likelihood value. Nonetheless, it would be really dangerous to do the same thing if we have more missing data, like in Age where about 20% of them are missing. We can no longer make educated guesses by inspection. Since we dropped variable Cabin, and all other missing entries are filled in, we can leverage all other variables to infer the missing Age by random forest regressor. There are 80% of ‘training’ data to infer the remaining 20%. Machine Learning Feature Engineering As suggested in Data Inspection, passengers’ name would probably not helpful in our case, since they are all distinct and, you know what, being called Eden wouldn’t make me less likely to survive. But we could extract the titles of their names. While most of them had titles of ‘Mr’, ‘Mrs’, and ‘Miss’, there were quite a number of less frequent titles — ‘Dr’, ‘The Reverend’, ‘Colonel’ etc., some of them only appeared once, such as ‘Lady’, ‘Doña’, ‘Captain’ etc.. Their rare appearance would not help much in model training. In order to find patterns with data science, you need data. One datum point has no patterns whatsoever. Let’s just categorize all those relatively rare titles as ‘Rare’. Categorical data requires extra care before model training. Classifiers simply cannot process string inputs like ‘Mr’, ‘Southampton’ etc.. While we can map them to integers, say (‘Mr’, ‘Miss’, ‘Mrs’, ‘Rare’) → (1, 2, 3, 4), there should be no concept of ordering amongst titles. Being a Dr does not make you superior. In order not to mislead machines and accidentally construct a sexist AI, we should one-hot-encode them. They become: ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) ) On the other hand, I decided to add two more variables — FamilySize and IsAlone. Adding FamilySize = SibSp + Parch + 1 makes more sense since the whole family would have stayed together on the cruise. You wouldn’t have a moment with your partner but abandon your parents, would you? Besides, being alone might be one of the crucial factors. You could be more likely to make reckless decisions, or you could be more flexible without taking care of your family. Experiments (adding variables one at a time) suggested that their addition did improve the overall predictability. Model Evaluation Codes are available on GitHub. Before picking and tuning any classifiers, it is VERY important to standardize the data. Variables measured at various scales would screw things up. Fare covers a range between 0 and 512, whereas Sex is binary (in our dataset) — 0 or 1. You wouldn’t want to weigh **_Fare_** more than **_Sex_**. I have some thoughts (and guesses) on which classifiers would perform well enough, often by experience. I personally prefer Random Forest as it usually guarantees good enough results. But let’s just try out all the classifiers we know — SVM, KNN, AdaBoost, you name it, and they were all tuned by grid search. XGBoost stands out eventually with an 87% test accuracy, but that does not mean it would perform the best in inferring unknown data subsets. To increase the robustness of our classifier, an ensemble of classifiers with different natures was trained and final results were obtained by majority voting. It is vital to embed models with different strengths into the ensemble, otherwise there is no point building an ensemble model at the expense of computation time. Finally, I submitted it to Kaggle and achieved around 80% accuracy. Not bad. There is always room for improvement. For instance, there is surely some useful information hidden in Cabin and Ticket, but we dropped them for simplicity. We could also create more features, e.g. a binary class Underage that is 1 if Age < 18 or 0 otherwise. But we will move on for now. Saving and Loading Model I am not satisfied with just a trained machine learning model. I want it to be accessible by everyone (sorry for people who don’t have internet access). Therefore, we have to save the model and deploy it elsewhere, and this can be done by pickle library. Parameters 'wb' and 'rb' in open() function represent write access and read-only in binary mode respectively. pickle.dump(<model>, open(<file_path>, 'wb'))pickle.load(open(<file_path>, 'rb')) Photo by Jonathan Pielmayer on Unsplash Web App Deployment Web Framework Flask is an easy-to-use web framework in Python. I only had little prior experience in building websites (HTML in primary school, GitHub pages last year, Wix doesn’t count), and I found it straightforward and simple. The simplest thing you can do is: from flask import Flaskapp = Flask(__name__) @app.route("/")def hello():return "<h1>Write something here.</h1>" app.run(host='0.0.0.0', port=60000) And voilà! You can browse it in your localhost. What else do we need? We want people to fill in a form to collect the data required and pass it to the machine learning model (and not sell it, Mark). The model would have an output, which we will redirect users to that page. We will use WTForms to build a form in Python, and a single form is defined by a class, which looks like the following: from wtforms import Form, TextField, validators, SubmitField, DecimalField, IntegerField, SelectField class ReusableForm(Form): sex = SelectField('Sex:',choices=[('1', 'Male'), ('0', 'Female') ],validators=[validators.InputRequired()]) fare = DecimalField('Passenger Fare:',default=33,places=1,validators=[validators.InputRequired(),validators.NumberRange(min=0,max=512, message='Fare must be between 0 and 512')\]) submit = SubmitField('Predict') I found an HTML template from Will Koehrsen and built on top of it. You look at the people who have been there before you, the people who have dealt with this before, the people who have built the spaghetti code, and you thank them very much for making it open source, and you give them credit, and you take what they have made and you put it in your program, and you never ever look at it again. — Tom Scott No way I delve into a CSS spaghetti. Cloud Hosting Now that the webpage can be viewed via my localhost, and everything works fine. The last step would be hosting it online. Unfortunately, I only used one site hosting service before, GitHub Pages, which only hosts static sites. There are 3 major cloud hosting services now — AWS, GCP, and Azure. AWS is by far the most popular one, so I went for its 12-month free tier. They had lots of tutorials and documentation (and customer services), and it is quite easy to follow. Photo by Nathan Anderson on Unsplash I connected to the Linux server instance with my private key, migrated my repository to the server, ran my script, and it worked! The only problem was system timeout when it was idle for too long. By some googling, turns out it can be fixed by starting a screen session, which we run the scripts inside the session. When you want to check the status next time, use screen -r to resume your screen session. You can exit the session by Ctrl+A followed by D. And now, it is running 24/7 non-stop! Check it out at. The Takeaway Endnotes To sceptics: AI is not false hope. They are backed by sophisticated statistical theories. They are hard works. They are keys to revolutionize industries. To new learners: Data science is not just about training and tuning neural networks. It requires curiosity to drive us, knowledge to kick-start, energy to clean data, experience (i.e. trial and error) to engineer features, patience to train models, maturity to handle failure, and wisdom to explain. To technology enthusiasts: AI is no magic. There is nothing new under the sun. Photo by Antonio Lainez on Unsplash To future me: Doing data science is one thing, writing it is a different story. — current me Related Articles If you want to start a machine learning project now, check out the article below for starters: Quick guide to run your Python scripts on Google Colaboratory_Start training your neural networks with free GPUs today_towardsdatascience.com If you want to check out what else we can do using Python, these articles might help: Visualizing bike mobility in London using interactive maps and animations_Exploring data visualization tools in Python_towardsdatascience.com Handling NetCDF files using XArray for absolute beginners_Exploring climate-related data manipulation tools in Python_towardsdatascience.com Remarks If you want to contribute in any means, feel free to drop me an email or find me on Twitter. As you can see, this project would definitely benefit from getting a domain name! Originally published at edenau.github.io.
https://hackernoon.com/would-you-survive-the-titanic-aa4ae2e86e9?source=post_page-----3b9f55ccb59----------------------
CC-MAIN-2022-40
refinedweb
3,127
63.39
David Ebbo's blog - The Ebb and Flow of ASP.NET If you’ve ever written any non-trivial ASP.NET control, you’re probably familiar with the concept of a Control Builder. Basically, it’s a class that you associate with your control and that affects the way your control gets processed at parse time. While ControlBuilder has been around since the ASP.NET 1.0 days, a very powerful new feature was added to it in 3.5 (i.e. VS 2008). Unfortunately, we never had a chance to tell people about it, and a web search reveals that essentially no one knows about it! Pretty unfortunate, and obviously, the point of this post is to change that. :-) So what is this super cool feature? Simply put, it lets the ControlBuilder party on the CodeDom tree used for code generation of the page. That means a ControlBuilder can inspect what’s being generated, and make arbitrary changes to it. Warning: this post assumes some basic knowledge of CodeDom. If you are not familiar with it, you may want to get a basic introduction to it on MSDN or elsewhere before continuing. To use this feature, all you have to do is override the new ProcessGeneratedCode() method on ControlBuilder. here is what this method looks like: // // Summary: // Enables custom control builders to access the generated Code Document Object // Model (CodeDom) and insert and modify code during the process of parsing // and building controls. // // Parameters: // codeCompileUnit: // The root container of a CodeDOM graph of the control that is being built. // // baseType: // The base type of the page or user control that contains the control that // is being built. // // derivedType: // The derived type of the page or user control that contains the control that // is being built. // // buildMethod: // The code that is used to build the control. // // dataBindingMethod: // The code that is used to build the data-binding method of the control. public virtual void ProcessGeneratedCode(CodeCompileUnit codeCompileUnit, CodeTypeDeclaration baseType, CodeTypeDeclaration derivedType, CodeMemberMethod buildMethod, CodeMemberMethod dataBindingMethod); So basically you get passed a bunch of CodeDom objects and you get to party on them. It may seem a bit confusing at first to get passed so many different things, but they all make sense in various scenarios. Tip to make more sense of all that stuff: a great way to learn more about the code ASP.NET generate is simply to look at it! To do this, add debug=”true” on your page, add a compilation error in there (e.g. <% BAD %>) and request the page. In the browser error page, you’ll be able to look at all the generated code. Let’s take a look at a trivial sample that uses this. It doesn't do anything super useful but does demonstrate the feature. First, let’s write a little control that uses a ControlBuilder: [ControlBuilder(typeof(MyGeneratingControlBuilder))] public class MyGeneratingControl : Control { // Control doesn't do anything other than generate code via its ControlBuilder } Now in the ControlBuilder, let’s implement ProcessGeneratedCode so that it spits out a little test property: // Spit out a property that looks like: //protected virtual string CtrlID_SomeCoolProp { // get { // return "Hello!"; // } //} var prop = new CodeMemberProperty() { Attributes = MemberAttributes.Family, Name = ID + "_SomeCoolProp", Type = new CodeTypeReference(typeof(string)) }; prop.GetStatements.Add(new CodeMethodReturnStatement(new CodePrimitiveExpression("Hello!"))); baseType.Members.Add(prop); So it just generates a string property with a name derived from the control ID. Now let’s look at the page: <test:MyGeneratingControl And finally, let’s use the generated property in code. The simple presence of the this tag allows me to write: Label1.Text = Foo_SomeCoolProp; And the really cool things is that Visual Studio picks this up, giving you full intellisense on the code generated by your ControlBuilder. How cool is that! :) Full runnable sample is attached to this post. At first glance, it may seem like this feature gives too much power to ControlBuilders, letting them inject arbitrary code into the page that’s about to run. The reality is that it really doesn’t let an evil control do anything that it could have done before. Consider those two cases: ProcessGeneratedCode is a pretty powerful feature, giving your ControlBuilders full control over the code generation. It’s also a pretty advanced feature, and you can certainly shoot yourself in the foot with it if you’re not careful. So be careful! David has an excellent post about a pretty cool ASP.NET feature that you almost certainly don’t know Please post some practical examples. Nice Feature! But does it work at a web application project? At a website it works fine! John, this will work in a Web Application as well. However, in that case note that the code you generate will only be available to code you write inline within the aspx page (e.g. <% %> blocks), and not from the code behind class. This is because the code behind is built by VS way before the ControlBuilder even comes into play. Hello David! That's really neat. Now I have a question. I built a tast-app, wrote a ControlBuilder and applied it to the Page-type and... nothing. Did the same to a regular control and it worked. So, is there a way for me to modify the page's own CodeDom-tree? Specifically, I'd like to change the type of a control to a derived type while the page is being generated. Thanks, Michael Hi Michael, Try using a FileLevelControlBuilderAttribute instead. e.g. [FileLevelControlBuilder(typeof(YourPageControlBuilder))] Thanks, David! It worked like a charm and for what I need it looks like I could even only need to overwrite GetChildControlType. Michael :) One user on my previous post on ProcessGenerateCode asked how he could associate a ControlBuilder not Michael, I just wrote a short post about this. Please make sure you read the part about using the right base class for your builder. A Little Holiday Love From The ASP.NET MVC Team Please post corrections/new submissions to the Dynamic Data Forum . Put FAQ Submission/Correction in Earlier this week, I wrote a post on using a BuildProvider to create ActionLink helpers .  That
http://blogs.msdn.com/davidebb/archive/2008/11/19/a-hidden-gem-for-control-builder-writers.aspx
crawl-002
refinedweb
1,024
56.66
Instead of either of these options, we can create the applet with a BorderLayout and put the TextArea in its center. Then we create a Panel, set the LayoutManager of the Panel to FlowLayout, add the button to the panel, and then add the panel to the south part of the applet. Indeed that's exactly what was done to produce the above applet. Here's the code: import java.applet.*; import java.awt.*; public class PanelExample extends Applet { public void init() { this.setLayout(new BorderLayout()); this.add(BorderLayout.CENTER, new TextArea()); Panel p = new Panel(); p.setLayout(new FlowLayout(FlowLayout.CENTER)); p.add(new Button("OK")); this.add(BorderLayout.SOUTH, p); } } It's important in this example to distinguish between adding to the applet ( add(...) or this.add(...)) and adding to the panel ( p.add(...) ). On the other hand it doesn't matter whether you add the panel to the applet and then add the button to the panel, or first add the button to the panel and then add the panel to the applet. Another common use for a panel is to align a series of checkboxes in a GridLayout with one column.
http://www.cafeaulait.org/course/week8/39.html
CC-MAIN-2020-29
refinedweb
195
56.96
Preventing a .NET module from being loaded by AutoCAD This is an interesting one that came up recently during an internal discussion: During my module's Initialize() function, I want to decide that the module should not actually be loaded. How can I accomplish that? The answer is surprisingly simple: if you throw an exception during the function, AutoCAD's NETLOAD mechanism will stop loading the application. For an example, see this C# code: using Autodesk.AutoCAD.ApplicationServices; using Autodesk.AutoCAD.EditorInput; using Autodesk.AutoCAD.Runtime; namespace PreventLoad { public class Commands : IExtensionApplication { public void Initialize() { // This will prevent the application from loading throw new Exception(); } public void Terminate(){} [CommandMethod("TEST")] static public void TestCommand() { Document doc = Application.DocumentManager.MdiActiveDocument; doc.Editor.WriteMessage( "\nThis should not get called." ); } } } Here's what happens when we attempt to load the application and then run the TEST command: Command: FILEDIA Enter new value for FILEDIA <1>: 0 Command: NETLOAD Assembly file name: c:\MyApplication.dll Command: TEST Unknown command "TEST". Press F1 for help. That's a useful tip, thanks Kean. One thing though - have you travelled forward in time by a couple of days? My browser is saying this was posted on the 12th and yet today is only the 8th.. :) Posted by: AlexF | September 08, 2008 at 10:34 AM OK, time to come clean: time travel ended up being the only way for me to keep up with my blog posting schedule. :-) Joking aside - I had been planning to post this on the 12th (I tagged it in TypePad to publish then), but I then decided to promote it by a few days and publish it today. So this must be a quirk of TypePad - I'll see if I can edit the post to fix it. Thanks, Alex! Kean Posted by: Kean | September 08, 2008 at 10:43 AM Kean, So if we extrapolate from your post, we can come up with a scenario that uses a decision branch that will tell the .NET module to finish loading or not. In plain and simple terms so that I can really understand this, if some condition is not met, then stop. My question is this, is there a way (I've tried the conventional way to no avail) to load that module if the condition is then satisfied? For example, if we NETLOAD the module and Initialize function sees that the OSMODE is set to something other than 0, 383 for instance, we tell the user that the value of OSMODE must be 0 in order to load the module. So the user sets OSMODE to 0 and tries to NETLOAD the module again. What I keep seeing is that nothing happens. NETLOAD already thinks the module is loaded in its entirety even though it really wasn't because of the Throw. Is there a way around that? Does this even make sense? Posted by: Jon Szewczak | September 09, 2008 at 04:35 AM John, I didn't put the condition in there explicitly, but that was my point. You wouldn't always stop it from loading (that would be pretty pointless :-). I see what you mean about the module no longer being loadable in that session. I'll check on whether it's possible to subsequently load without restarting AutoCAD. Kean Posted by: Kean | September 09, 2008 at 08:52 AM Kean. I have another .NET loading question for you. Is there a .NET equivalent of the ObjectARX AcadAppInfo class that allows you to easily manipulate the demand loading registry keys? I had a quick search on the ADN and couldn't find anything obvious but I thought I'd ask you before embarking on cloning the AcadAppInfo functionality myself! Obviously if there is an ADN solution just let me know and I'll add the issue in DevHelpOnline :) Chris Posted by: Chris Bray | September 09, 2008 at 12:32 PM Chris, I don't know - I haven't used one, myself. I suggest posting your question through DevHelp Online. :-) Will I see you at AU, this year? Kean Posted by: Kean | September 09, 2008 at 01:53 PM John, It turns out it's the .NET loader that stops the module from being loaded on subsequent attempts - we have no control over this. So it appears this is a reasonable mechanism to implement some kind of security feature, i.e. "only allow my module to be loaded if this is a legal installation of my software", rather than preventing load under certain configuration options (where it would be better to temporarily disable your application's functionality or - with the user's permission - change the problematic config options in AutoCAD). Regards, Kean Posted by: Kean | September 09, 2008 at 06:11 PM Hi Kean. I suppose this is one of those perception things. By the time the Initialize() method of an IExtensionApplication is called, the module/assembly that houses it has already been loaded, and can't be unloaded. Throwing the exception in Initialize() only prevents the loader from defining commands, and that's about it. Also, the fact that you see no message whatsoever when you throw the exception, is actually a bug. AutoCAD is supposed to display the exception stack trace when that happens (and did in earlier releases, but in AutoCAD 2007 or later it doesn't). That has been a constant source of frustration for myself and many others because when commands do not work, it is almost always because an exception is being thrown by code called from Initialize(), and the loader is supressing the error message and as a result, the only way the programmer can find out it happens is to run the app in the debugger. My standard advice is to wrap the entire Initialize() method's body in a try/catch block, and if an exception is thrown, you can display a message yourself from the catch block. Posted by: Tony Tanzillo | September 10, 2008 at 01:52 AM "Is there a .NET equivalent of the ObjectARX AcadAppInfo class that allows you to easily manipulate the demand loading registry keys?" In the file at the link below, you will find a class called ExtensionApplicationInfo. Posted by: Tony Tanzillo | September 10, 2008 at 02:51 AM Tony you're a star, that looks like just the ticket, thanks very much :D Kean, as much as I'd love to attend AU08 is looking doubtful, I've got an application to release by the end of the year and going to AU will remind me of all the cool stuff I don't have time to implement! I've already learned my lesson and re-scheduled Q4 2009 to make sure I have time to come next year though! I'll do my best to be at the London DevDay, are you attending the "tour"? Maybe I'll catch up with you there? Chris Posted by: Chris Bray | September 10, 2008 at 04:34 PM I'll certainly be in Paris, but haven't thought much about the other dates, as yet. Kean Posted by: Kean | September 10, 2008 at 04:38 PM
http://through-the-interface.typepad.com/through_the_interface/2008/09/preventing-a-ne.html
crawl-002
refinedweb
1,190
61.16
We've learned how to use Apache Camel from Groovy code. In this post we learn how to use Apache Camel in a Grails application. To include Apache Camel in a Grails application is so easy, thanks to the plugin system of Grails. We can install Camel with the Camel plugin. This plugin will add the Camel classes to our application, the possbility to send and route messages directly from services and controllers and a new Grails artifact: routes. We are going to install the plugin and create a new route which will poll a Gmail account and save the attachment to a specified directory. $ grails install-plugin camel $ grails create-route GetMail In the directory grails-app/routes we have a new file GetMailRoute.groovy. The great thing is we can use closures for the filter, when and process methods from the Java DSL. The following code fragment shows how we can poll for e-mails and save the attachment to a directory. We use values from the Grails configuration in our code for mail username and password and the directory we want the files to be saved.: import org.codehaus.groovy.grails.commons.* class GetMailRoute { def configure = { def config = ConfigurationHolder.config from("imaps://imap.gmail.com?username=" + config.camel.route.gmail.username + "&password=" + config.camel.route.gmail.password + "&folderName=GoogleAnalytics" + "&consumer.delay=" + config.camel.route.gmail.pollInterval) .filter { it.in.headers.subject.contains('Analytics') } .process{ exchange -> exchange.in.attachments.each { attachment -> def datahandler = attachment.value def xml = exchange.context.typeConverter.convertTo(String.class, datahandler.inputStream) def file = new File(config.camel.route.save.dir, datahandler.name) << xml log.info "Saved " + file.name } } } } That's it. We now have created our own route and when we start the application the route is executed. The only thing we need to do is to include the JavaMail libraries, because they are used by the Camel mail component. Therefore we copy activation.jar and mail.jar to the lib of the Grails application.
https://mrhaki.blogspot.com/2009/04/use-apache-camel-plugin-in-grails.html
CC-MAIN-2020-16
refinedweb
331
51.85
The fundamental concept of React.js React is a flexible, efficient, open-source Javascript library. It was developed by Facebook (2013) for building user interfaces. It allows us to create a complex UI by making components. components are reuseble 1. JSX in React Usually JSXjust syntactic sugar for React.createElement(component, props, ...children) The JSX code : <div>My Name is Hossain</div> Compile into : React.createElement("div", null, "My Name is Hossain"); 2. JavaScript Expressions as Props We can pass any javascript expression in props with {} . Flowing the example below : <Component sum={5 + 9 + 1} /> For Component , the have value props.sum will be 15 . because the expression is 5 + 6 + 1 . 3. String Literals We can pass a string as a prop. there two JSX expressions are equivalent. it the same as the HTML attribute. <Component name="Hossain" /> <Component name={'Hossain'} /> 4. Spread Attributes We can already have as props as an object, and we went to pass props in JSX. we can use ... “spread” operator to pass a props objects. const App = () => { const props = {name: 'Hossain', age: 20}; return <Person {...props} /> } 5. Children in JSX You can pass more JSX as children props.children or Display nested component as same as HTML. <Container> <App1 /> <App2 /> </Container> 6. defaultProps defaultProps can be defined property in class component. it set default props in the class . This is used for undefind but not use null class Container extends React.Component { // ... }Container.defaultProps = { color: 'red' }; 7. Use the Production Build If you are not sure your build process set up currently, you can install React Developer Tools for Chrome. if you visit your site with React production mood, you can see the React Developer Tools background color is dark. but your site is development mood, you can see the React Developer Tools background color is red. if you build your site with create-react-app , you can be flowing this commend: npm run build or yarn build 8. State Until now, we can use static data, but when data is changing for state change, we can use useState the form React hooks . flowing the example blew: useState is a function. Its default value of the count value. if setCount() to change state value. count value is changing. const App = () => { [count, setCount] = useState(0); const handleClick = () => { setCount(count + 1) } return ( <div> <h3>{count}</h3> <button onClick={handleClick}>Click Me</button> </div> ); } 9. Conditional Rendering: In JSX possible to use the ternary operator for conditional rendering. <div>{name ? name : 'What is your name?'}</div> 10. Handling Events Handling events to react element is very similar to handle events in the DOM element. there are some syntax deferent - React element name use camelCase. - With JSX you can pass a function as an event handler, rather than a string. <button onClick={() => console.log('Clicked me')}> Click me </button>
https://rimawahid3.medium.com/the-fundamental-concept-of-react-js-a8644ee69a56?source=user_profile---------1-------------------------------
CC-MAIN-2022-33
refinedweb
472
69.68
Hiber calender working in struts - Struts calender working in struts when i execute the following code ,that is working properly if i convert to struts html tags that code is not working please help me to rectify the problem m mad m mad pratiksha says - how to code the project?? :P Datagrid not working Datagrid not working The code here is working fine, apart from the fact that that I'm using netbeans 6.5 and the servlet v2.5 and struts 1.1.... working. please help me out Dynamic-update not working in Hibernate. am giving you a small block of code for updating of employee detail- public...Dynamic-update not working in Hibernate. Why is dynamic update not working in hibernate? Dynamic-update is not working. It means when StandardServiceRegistryBuilder not working in Hibernate 4.3.1 working with the Hibernate 4.3.1 and it is giving following error message: Access...StandardServiceRegistryBuilder not working in Hibernate 4.3.1 Hi, It seems that the StandardServiceRegistryBuilder not working in Hibernate 4.3.1. Mine Problem Mine Problem how to view a row from a table created in mysql in a java swing application What is Struts Framework? : Latest Version of Struts Framework. Apache Software foundation is working...What is Struts Framework? Learn about Struts Framework This article is discussing the Struts Framework. It is discussing the main points of Struts framework Ajax not working in jsp Ajax not working in jsp I'm using Netbean, Ajax validation in the following code is not working the code succesfully run for first UID validation..."); Pattern test =Pattern.compile ("[0-9]{12}$"); Matcher m=test.matcher validation problem in struts - Struts validation problem in struts hi friends... m working on one... , bt whn m trying to authenticate a user , d authentication is nt working. i... 1st user d authentication condition is nt working . My action calss code code not working properly code not working properly protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException...); } } in the above code if i enter a valid username and password i am Why is this code working Why is this code working Looking at this piece of code: #include <stdio.h> #include <stdlib.h> #include <string.h> typedef struct _point{ int x; int y; } point; int main (void) { point *ptr; point obj Why this is not working...? Why this is not working...? import java.util.*; public class Family { int size_of_family=0; public Person[] members=new Person[size_of_family...); f.printOutFamily(); } } Post the code of Student.java calender in struts - Struts calender in struts when i execute the following code ,that is working properly if i convert to struts html tags that code is not working please help me to rectify the problem Struts When Submit a Form and while submit is working ,press the Refresh , what will happen in Struts html dropdown not working firefox html dropdown not working firefox I am writing a Dropdown code in HTML which is not working in firefox. What could be the reason as it's perfectly working in IE and Crome. file uploading - Struts Struts file uploading Hi all, My application I am uploading files using Struts FormFile. Below is the code. NewDocumentForm... can again download the same file in future. It is working fine when I myJSF,Hibernate and Spring integration code is not working. - Hibernate myJSF,Hibernate and Spring integration code is not working. the code given in this url : i have tried but it does not work. when i write Struts Articles ; Strecks is built on the existing Struts 1.2 code base, adding a range of productivity... can be implemented in many ways using Struts and having many developers working... Source Apache Struts running on Jakarta Tomcat. With only a few lines of code Java - Struts Java hello friends, i am using struts, in that i am using tiles framework. here i wrote the following code in tiles-def.xml in struts-config file i wrote the following action tag php <? ?> tag not working php tag not working why PHP tags not working in my application? This might happen when your shortopentag is turned off. So, you...("shortopen_tag", 1); in your code; add the following line to your .htaccess file struts(DWR) - Ajax into servlet. i.e, i am giving my sample code here! plz help me , my.jsp... using xhr.send(); please correct the code and help me i have been working...struts(DWR) i want to pass a combo box value from my jsp page window.open() not working - Ajax is that the pop up window is not at all working.. Once I click the button, the page... I went wrong, in the above piece of code.. Regards, Ragavendran.R   struts struts how to generate bar code for continuous numbers in a loop using struts ... or atleast for a given number working of a div tag in html !DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "...; This is a simple code. but after executing it i am not getting the sentence STRUTS STRUTS 1) Difference between Action form and DynaActionForm? 2) How the Client request was mapped to the Action file? Write the code and explain Struts - Struts Struts Hi, I m getting Error when runing struts application. i have already define path in web.xml i m sending -- ActionServlet... /WEB-INF/struts-config.xml 1 struts the checkbox.i want code in struts stuts java script validation is not working. stuts java script validation is not working. --of jsp page-- Enter name: Enter pass: <html:submit...: Your hibernet tutorial is not working - Hibernate location it working... code. Thanks Implementation code inside interfaces Implementation code inside interfaces 2001-01-25 The Java Specialists' Newsletter [Issue 006] - Implementation code inside interfaces Author: Dr. Heinz M. Kabutz If you are reading this, and have not subscribed, please consider Code Problem - Struts Code Problem Sir, am using struts for my application.i want to uses session variable value in action class to send that values as parameter to function.ex.when i login into page,i will welcome with corresponding user homepage Working with CSS(CSS Manipulation) Working with CSS(CSS Manipulation) Working with CSS(CSS Manipulation...;); }); </script> Above code display Tapestry - Struts Tapestry I want to use Tooltip in my project, in which i m using... respond me fast thanks Hi friend, Code to help in solving...; } //end of copied code struts - Struts struts Hi, I am new to struts.Please send the sample code for login and registration sample code with backend as mysql database.Please send the code immediately. Please its urgent. Regards, Valarmathi Hi Friend function MyClass() { var m_data = 15; var m_text = "indian...; this.ShowText = DisplayText;** function DisplayData() { alert( m_data ); } function DisplayText() { alert( m_text ); return; } function SetData( myVal Struts - Struts Struts Hello I like to make a registration form in struts inwhich... compelete code. thanks Hi friend, Please give details with full source code to solve the problem. Mention the technology you have used .jar file keeps giving me " could not find the main class". Program will exit. .jar file keeps giving me " could not find the main class". Program will exit... this code. The content of this method is always * regenerated by the Form..." desc="Generated Code"> private void initComponents() { jLabel1 = new - Struts struts Hi, I need the example programs for shopping cart using struts with my sql. Please send the examples code as soon as possible. please send it immediately. Regards, Valarmathi Hi Friend, Please code for login fom - Struts code for login fom we have a login form with fields USERNAME: In this admin can login and also narmal uses can log I'm using struts2,i'm asking about the radio button validation <s:radio> I'm using struts2,i'm asking about the radio button validation i used this javascript code but it didnt work function valider(){ frm...; } } i tested with other components this javascript code and it worked well code problem - Struts code problem hi friends i have 1 doubt regarding how to write the code of opensheet in action class i have done it as the action class code...(); System.out.println(conn); //Code to generate new sheet i.e login time IS NULL Java runtime not working - Java Beginners of SwingFrame() therefore you have got that error.Anyways, here is your code 2 Programmer Java Struts 2 Programmer Position Vacant: Java Struts 2 Programmer Job Description You will be working on challenging Financial application - Struts the form tag... but it is giving error.. can you please give me solution print the content of file in 2d matrix having same dimension as given in file(n*m). how to print the content of file in 2d matrix having same dimension as given in file(n*m). here is code: import java.io.File; import... of any dimensions in the form of 2d n*m matrix. pls help me
http://www.roseindia.net/tutorialhelp/comment/1212
CC-MAIN-2014-23
refinedweb
1,486
66.74
Just like with any program, you will want to handle errors in your web application. By default, browsers handle your errors, but their default handling can be exceptionally ugly. Besides general ugliness, the errors leave the user with fairly bad information, and, most importantly, no further call to action. Take a 404, for example. When you get a 404, it means the page you were trying to load just simply doesn't exist. The default 404 error gives no navigation back to where the user came from, and it is just a very unfriendly looking error. It almost certainly ruins the mood. You've probably used a few websites and noticed that many websites appear to have custom errors like 404s that have various pictures. Try Google's 404 error by going to something like google.com/sfassfaa. A nice subtle robot that helps to lighten the mood when someone finds themselves on an error. Flask is no exception to allowing for custom errors. Not only can we handle things like HTTP 500, or 404 errors, but we can still also use the typical try/except syntax to handle other errors logically. First, let's consider a simple 404, or "page not found" error. A 404 is very specific, and luckily handling this error is built into the Flask framework. First you just need a 404 function in the init file, then you can make a 404 template. You can have that 404 template extending your header, this way all of your typical CSS remains, your navbar will be there, and the general "feel" of the error will be a bit better. from flask import Flask, render_template, flash from content_management import Content TOPIC_DICT = Content() app = Flask(__name__) @app.route('/') def homepage(): return render_template("main.html") @app.route('/dashboard/') def dashboard(): return render_template("dashboard.html", TOPIC_DICT = TOPIC_DICT) @app.errorhandler(404) def page_not_found(e): return render_template("404.html") if __name__ == "__main__": app.run() File: __init__.py, server location: /var/www/PythonProgramming/PythonProgramming/__init__.py Here, the only change is the page_not_found function. Notice how this function is called by the wrapper above it. app.errorhandler is part of Flask, then the 404 is the specific error that we're looking to handle with whatever function we're wrapping. In our case, we wrap the page_not_found function, using the actual error, e, as the parameter. Now, we just use a typical render_template function to render a 404.html template. That template doesn't exist yet, so let's make that. {% extends "header.html" %} {% block body %} <p>Woops, that page doesn't exist! (404)</p> {% endblock %} File: 404.html, server location: /var/www/PythonProgramming/PythonProgramming/templates/404.html Simple example, but even this tiny bit of code is a vast improvement over the default. How about other errors? Let's cause what would normally be an internal server error and see if we can handle that. from flask import Flask, render_template from content_management import Content TOPIC_DICT = Content() app = Flask(__name__) @app.route('/') def homepage(): return render_template("main.html") @app.route('/dashboard/') def dashboard(): return render_template("dashboard.html", TOPIC_DICT = TOPIC_DICT) @app.route('/slashboard/') def slashboard(): return render_template("dashboard.html", TOPIC_DICT = shamwow) @app.errorhandler(404) def page_not_found(e): return render_template("404.html") if __name__ == "__main__": app.run() File: __init__.py, server location: /var/www/PythonProgramming/PythonProgramming/__init__.py Now we've got a new function that we're calling "slashboard." We just copy and pasted the dashboard function, changing the url and the function name. Don't forget to change the function name, or your internal server error will be for a totally different reason. It is easy to copy and paste functions to save time, but to also forget to actually change the function's name. Instead of TOPIC_DICT = TOPIC_DICT, which is acceptable, we're going to say TOPIC_DICT = shamwow, where "shamwow" is a non-existent variable! This could get ugly! Save the code, restart apache, and you should find that you're getting a nasty error. Boo. So, what we can do is modify this slashboard function a bit: @app.route('/slashboard/') def slashboard(): try: return render_template("dashboard.html", TOPIC_DICT = shamwow) except Exception as e: return(str(e)) So here we're using the traditional try/except logic that is a part of Python. If the try doesn't work, then we're catching all errors to the except statement, where we are returning the error in plain text. Great, we caught the error, only we just replaced a very ugly message with a slightly less ugly, but still ugly, message. Simple enough though, you can probably already surmise the solution: @app.route('/slashboard/') def slashboard(): try: return render_template("dashboard.html", TOPIC_DICT = shamwow) except Exception as e: return render_template("500.html", error = str(e)) So now we have a new file that we're referencing, 500.html. We pass one variable through, and that is the error. Now we just need this 500.html page: {% extends "header.html" %} {% block body %} <p>O dear, we got an error: {{ error }}</p> {% endblock %} The 500.html page is basically the same as the 404 page in simplicity, we just have the specific error that occurred as well. Alright, I think we have had our fill of errors. Let's move on to a new topic: Flask Flashing!
https://pythonprogramming.net/flask-error-handling-basics/
CC-MAIN-2019-26
refinedweb
880
60.01
Apache OpenOffice (AOO) Bugzilla – Issue 62492 Incorrect opening of XLS files using Excel 2003 XML format Last modified: 2013-08-07 15:13:10 UTC If you try to open some XLS files created in Excel 2003 format (the files are plain XML files with 3 more bytes before first tag - EF BB BF) the suite will open them in Writer or open them in Calc with the tab separated filter dialog. The problem seamn to be inside the component that decide who will open the document because opening the same file with scalc.exe -o filename.xls will work opening the XLS file without problems (only if you have JRE installed - it seamns that this format is parsed by a filter writen in java) I would put a P5 on it but some will yell. I think that is maybe the most important bug of 2.0 release. Just wonder how hot are support lines with users calling that they received a XLS document that is no longer opening and the previous version 1.x work just working. I think that this sould be fixed before 2.0.2 release. One workaround with no visible effects was to change the association from soffice.exe to scalc.exe for xls files like below: [code] [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\OpenOffice.org.xls\shell\open\command] @="\"C:\\Program Files\\OpenOffice.org 2.0\\program\\scalc.exe\" -o \"%1\"" [/code] I don't know if this is a side effect or another bug but the file is still locked even the close of the XML document. Quiting quick starter is unlocking the file but this should not be normal behaviour. More information about the submitted bug can be found on Created attachment 34417 [details] Sample XLS 2003 file that is not opening well by default Something for you? Please have a look. Hi, the described behaviour is not a bug. The file attached to the Issue isn't a valid Excel Binary file as the file extension sugests. So Calc is looking into the file and foinds that it is a text file and now switches over to writer which import the file correctly. Using the correct filter or extension solves the problem. The problem with the file locking has to be examined by the framework team. Frank File locking: issue 21747 *** This issue has been marked as a duplicate of 21747 *** dupe This clearly not a duplicate of issue 21747 as anyone can read it's description and it's not resolved. Microsoft Office is writing XLS files in both formats: binary and XML and if OpenOffice.org is supposed to load XLS files It must be able to open them. This is why I think this issue must have a high priority because it fully make the opening of some XLS files imposible for normal user (not talking about guru kind). Normal user expects that an XLS file it's a spreadsheet and it doesn't have to know what kind of spreadsheets it contains - all it wants to do is doble click on the XLS in order to open the file in his spreadsheet editor. The number of this kind of files (XLS in XML format) is keep growing and we give support to hundreds of users. Please read carefully: File locking is duplicate to issue 21747. The attached file is invalid as fst has described. For upcomming issue you would like to write: Please describe only one problem in one issue. *** This issue has been marked as a duplicate of 21747 *** reclosed I think I read at least what I wrote - the description of the issue has nothing to do with the file-locking issue. If you read more carefully you'll see that the first comment was just referring to the filelocking bug so THIS ISSUE IS NOT THE DUPLICATE of the file locking bug. Sorry for this open-close ....maybe somebody is making pression on issue closing :) So if this issue has nothing to do with the file locking problem someone of the spreadsheet team should decide if it would make sense to have this registry key: [code] [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\OpenOffice.org.xls\shell\open\command] @="\"C:\\Program Files\\OpenOffice.org 2.0\\program\\scalc.exe\" -o \"%1\"" [/code] This was my suggested workaround but I don't know if changing the installer in order to associate XLS directly to scalc.exe it's the correct solution. I think that this bug must be treated with special care: it can be hidden including my workaround in the installer OR by chanigng the eay the suite is detecting the file type. I think the best solution would be to do both of them. This way we'll be sure that any .XLS file would be open by Calc. So I close this Issue as invalid. The file extension XLS is for Excel binary files and therefore a Excel XML file isn't a valid xls file. Changing the registry code will work in this case but not in all cases. The filter detection is looking into the file and did not find a Excel binary header, so the filter dfetection looks further and detects a text file. So it opens it as text in writer. If you use the correct file extension .xml for Excel 2003 XML files, the document will allways open in Calc. So this Issue isn't a bug and therrefore closed invalid. Frank closed invalid Let me be more specific: here are the real facts: The submited file it's 100% valid Microsoft Office Spreadsheet in XML format WITH and additional 3 bytes header before the first XML tag. We know that this file was generated by MS Office. Microsoft office not only open this file but also keep it's header (0xEF 0xBb 0xBF). This is working with ANY EXTENSION - even if you use XLS or XML. Now let's investigate OpenOffice.org behavior: Case 1: File > Open > test.XML in SCALC.EXE works. Case 2: File > Open > test.XLS in SCALC.EXE does not work. So I think there is a bug in filter detection algoritm. It should work like this: if recognize_binary_format() { ... open with binary filter } else /* not known binary */ { if (recognize_XML_format()) { ... open using detected filter } else if is_text_only() { ... open as text in writer } else { ... message: unrecognized binary format } XML file extension has not any meaning for the normal user - the file can be a HTML page, spreadsheet, configuration or any king of file. People must need to know what application is going to open their files in order to act. Normal people is not opening formated files (like XML) in order to edit them as plain text. They expect to edit the data from them not the file. So OOo must (not should) detect corectly the file format. As any power user who blamed MS for hiding the file extension from files any application must be able to open all suported files without requesting a specific extension for them. Extension is only informative and It can't guarantee you that the data inside it's valid. Currently hundreds (maybe more) developers are generating spreadsheet reports in microsoft XML format because it's very simple and it's just plain uncompressed xml. They are generating those files with XLS extension in order to make users to open them in a spreadsheet application BUT how nice OpenOffice.org it's smarter It can't accept opening an spreadsheet file (in XML format) stored in a file with XLS extension. If all file formats used were XML how do you expect for a user to make his computer open those files with a specific aapplication. File extension association it's a user configurable option, but recognizind the file format it the job of the application. The problem should stay open and must be assigned to the filter detection. I have to agree. If it is a major objective of OOo to provide file exchange with microsoft formats, then no matter how stupid and cavelier microsoft may be in the allocation of extensions then OOo should do what the user expects Hi, Excel is a single Application and therefore handels such file by opening them into itself. OOo Calc is part of an integrated Office suite sharing a lot of code and especially the file open dialogs. So the decission is made by the filterdetection how a file is opened. The first point to look is the extension and based on that the header of the file. So the filterdetection has no choice as to define the XML file named as XLS as text. This is how it works. Please have a look at Issue 8967 which basically is a double to this one. Frank *** This issue has been marked as a duplicate of 8967 *** closed double I agree with that it's a good thing that OOo share the opendialog code for all applications. You are wrong about this "...filterdetection has no choice as to define the XML file named as XLS as text. This is how it works.". If the filter detection parse the header it can parse XML header too in order to detect correct file type. Because filterdetection can receive any kind of files it must be able to do a good decision. The case of Excel XML saved with XLS extension it's one of the cases that can be resolved without breaking current behavior for current files. The "linked to" issue it's a very close relative of this but have a big difference: it's about CSV (tab separated) and plain TXT -> it can't be resolved in order to satisfy all. Should I reopen this issue? Based on the comments on the issue 18228 I conclude that this is a bug in filterdetection. So the issue must be opened. In the issue 18228 it's clearly written that the filter detection it's detecting the file reading his header and the file extension it's not important. A new better description of this bug is: The filterdetection it's not detecting corectly Excel XML file format if the extension of the file it's XLS or other. Sorry - but these are the problems of this file: a) It's not xls - because it's not a binary file. b) It's not XML - because it contains 3 binary signs. c) If MS generate such files realy - might be we have to react and try to workaround this "bug" of MS. The only bug I would accept here is: if these 3 bytes was removed from the file and the extension was not set right then this file is loaded into the writer a plaint text everytime. Thats why our txt filter has no real detection and cant differen between ASCII - all UniCode formated files - and even binaries. This cant be solved realy because nobody cant write such detection because it has to check all 64 K CodePages existing of this worl. Which by the way will affect detection time and will increase it up to "never ending" .-) The only workaround for this: ask the text filter explicitly at the end of the detection process so it can overrule any other detection service. That would help to solve other detection problems as well. I will file such issue to myself and try to fix it for OOo 2.0.3. (number of this new issue will be set as depedency to this isseu here later). Hello intersol, let us gather some facts concerning this file format of your example document. 1) Although it has the suffix XLS it is clearly no binary XLS 2) When you take a look into the W3C XML specification You see that a well-formed XML document starts with a prolog [1] document ::= prolog element Misc* [22] prolog ::= XMLDecl? Misc* (doctypedecl Misc*)? [23] XMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>' As you see for a well-formed XML document, there are no arbitrary prefix before the prolog. A document that has these bytes is therefore not well-formed and thereby can not be valid at all. Although aside of this three bytes the document would be well-formed and most likely valid. Therefore you have bring up an interesting bug of Microsoft Office, that we should take into account to ease the life of StarOffice / OpenOffice.org users. Finally some really positive feedback. Here it's what if discovered right now: The header it's something very common: it's a byte order mask so this file it's a valid XML. Take a look at From what i know this mask it's supported by XML specification. PS. I've tested with and without masks and Excel it's accepting the both files. The result of my investigation with Ooo (same file with and without 3-byte mask): test_with_mask.xml -> opens corectly in OOo 2.0 (opens calc) test_without_mask.xml -> opens corectly in OOo 2.0 (opens calc) test_with_mask.xls -> opens the ASCII Filter Options dialog (BAD) test_without_mask.xls -> opens as plain text in Writer (BAD) The correct beahaviour would be the same (first). The mask it's optional and it's telling the text encoding of the file (so it's text). I think the bug is that after detecting that the file is text it doesn't test for XML format. Something it's clear: OOo it behaving different on different file extensions even if the content it's the same. Here it's something interesting too related to encodings and XML: Hi, regarding the three bytes , I've saved a file from within Excel 2003 as Excel XML and these file clearly does not have these bytes. So the question is how does these bytes get into your file ??? Frank I'm testing on Microsoft Office Proffesional 2003 - Romanian Localized version so this could vary by the locale. Excel is accepting both kind of files and is keeping the mask on save. I must admit, I was wrong and he is completly right. The so called Byte-Order-Mark (BOM) is part of the Unicode specification and might occure: "The BOM is not considered part of the content of the text." Therefore it is still well-formed XML. His link gave a good summary of the possible entities. Thanks for giving us this hint. So there two different issues here one related to the opening of Excel XML files when they are not having XML extension (current issue) and a new one: "Incorrect recognizing of XML files with BOM (Byte-order-mask)" - issue #63077 I've created a new issue #63077 for the BOM problem. This issue must remain on Excel XML opening that will not work even it the BOM is resolved. Hi Andreas, please have a look at this Issue again. Proceed as needed and close if nothing can be done. Frank . The current bug it's very important because it's bloking the transition to OOo 2.0 for many user. The number of files received by email in new microsoft formats is growing. Just to make ti more complicated few days ago I received another .doc generated with Microsoft Office 2003 that is not going to open in OOo in any way. Looking at the file it seamns to be an archived version of the XML format. I will attach it too. Created attachment 36401 [details] DOC saved from Office 2003 - file format it's a sort of archived xml AS->SUS: Please make sure that such BOM does not disturb detection/loading of such excel 2003 xml files. THX. JA: the Excel 2003 file attached can be loaded in Calc if you manually select the filter "Microsoft Excel 2003 (*.xml). I don't see an issue that needs to be fixed. IBTD. Though I agree that the techincal arguments of fst and others are correct this is not a technical but a usability issue. If people want to replace MSO by OOo they have some expectations we need to take into consideration. IMHO the format problems explained here are even more annoying than the UI differences we tried to remove in OOo2.0. Especially as in the current case where it should be easy to fix: If it is correct that "scalc.exe filename.xls" opens the file correctly the only thing that we need to do is registerung scalc.exe as application for "xls" files instead soffice.exe as we do now. This works with OOo2.0.2, IIRC it doesn't work in earlier versions. So this could be solved as a registration issue. @intersol: please try to verify this. Does it help to change the registration of "xls" files from soffice.exe to scalc.exe? I confirm that setting associations of XLS to scalc.exe solve the problem reported. Also I will copy here an email received by me from Hoffmann Gisbert. Maybe some of you already received a copy. He has a point in it! Personally I'm about to loose other client because of this issue. -- begin cite I write to you because I do not want to create an issue for the same problen again (and because I would need a training before I could create an issue). Fact is that certain xls-files are correctly opened in Excel while with OO 2.x and SO8 they are opened with Writer. Details are described in the issues. Obviously this is a problem discussed since years (Issue 8967). The consequences seem to be not clear to you. The arguments of the developers are that MS does not show the correct behavior, but OO 2.x or SO 8 do. OO/SO should show the same behavior like MS, even if its wrong. No matter what the W3C XML specifications are. Your will not force MS and the provider of reporting tools (see below) to act as your want and as it would be correct in your opinion. We (and you should also) want to replace MS. All reporting-tools in the BI-market (so do Crystal Reports, Cognos ReportNet, Information Builders etc.) provide xls-output in the format described in the issues. This output opens Excel automatically and correct, so the user can further manipulate the output as needed. We can manage, that these tools start OO/SO instead of Excel. But then its opened in Writer or Writer/Web respectively. The user can try to cut the output in Writer, start a Calc-document and paste it into Calc, headers and footers seperate from the main page, and adust all cells. No user will accept this crippled way to work. It worked correct until SO 6/OO 1.x. For your information: I fighted more than 2 years now to replace MS with OO/SO. The result of your useless discussions for years now is that I lost the run and our company has to update more than 500 MS-Licences. Thats it. Good night OO/SO. --end of cite Created attachment 37016 [details] sample document generated by common web reporting tools. The new file i've attached is generated using one of the well known reporting tools available. In this case the document will not open corectly with scalc.exe but it will open with soffice.exe as html. Opening it with scalc.exe will open an empty spreadsheet. If you get an empty document from an HTML document by loading it in scalc.exe it is a bug in Calc. IIRC Calc by default uses the "WebQuery" filter, but that should make only minor differences. We should create a separate issue for this as it needs to be fixed in Calc. I will work on the other necessary changes. As we meanwhile changed the "xls" registration from soffice.exe to scalc.exe this issue is fixed. Can you please verify this in a recent version of OOo?
https://bz.apache.org/ooo/show_bug.cgi?id=62492
CC-MAIN-2020-40
refinedweb
3,321
73.27
Discussion Board for collaboration on Qlik NPrinting. Hi Qlik Experts, I am importing the recipients user task with Filter and groups,It succeed when no. of user upto 3 ,but after updation of excel file of import recipients with increased number of recipients,the same task getting failed and showing the error, " ERROR message : importing USER : Can not create the new user because their email is already in use. ERROR message : The import failed due to some invalid entries. No data was saved." Give me solution for resolving this problem,ASAP; @lech_miszkiewiczlech_miszkiewicz, @ Daniel Jenkins Thanks and Regards, RANJIT KAKADE. Hi Ranjit. Well - it worked for me. I was able to import whole list without issues. I think you have some legacy stuff there. I loaded all users without any issues and then re-run imports couple times changing things like time zones, filters etc Couple things i have noticed: I am not sure what else we could do. Works for me - I am running NPrinting 17.3.1 cheers Lech Hi, this indicates that you have the same email address assigned to multiple users. Think of e-mail column as of Primary key in recipient table - it cannot have duplicates!!! that is all said in your log Can not create the new user because their email is already in use. cheers Lech Hello Lech, First Thanks for your immediate reply, FYI, I have been not the same email address assigned to multiple users. I am just reloading the same task with following actions: Are you able to share your recipient file? did you add manually users to NPrinting console? Are you then trying to update them? Hi , Yes, I have been added some User manually in the Nprinting Console and trying to update the list of recipients to the specific groups.,But I dont want to delete the existing user to be delete,just want update on the same if already present in the Nprinting Console,even I already tried the same ,For E.g. The User Name "Kamineni Preveen" is already present in the Nprinting console,but after reloading of task its status become "Update", but in the excel file if the number of recipients changed than existing then task getting failed,PFB for Log file when The User Name "Kamineni Preveen" is already present in the Nprinting console,but after reloading of task its status become "Updated and PFA for List of Recipients.,1. existing list 2. Updated List of Recipients. 3. Log file details" Hello Guys, Im waiting for your important reply about my above query. I am testing scenarios - will let you know once i am done with it. I am thinking that only some of user related items can be actualy updated via import - this beeing filters, groups (not sure about the name though). cheers Hello Lech , No ,Name also updating after reloading the Import Task, PFB Details for the same: how are you going with recipients import? I am testing it - and it works for me. I do not know what is a problem with your import. Are there any unusual characters in your file? I had this issue when once i tried to import data with polish characters - load was failing. cheers Lech Hi Lech, Kindly find the attached Excelsheet file which contains list of User recipient.,Give me acknowledgement once when you download it, I have been referred the following weblink article for importing Users Recipients with filters and groups task,
https://community.qlik.com/t5/Qlik-NPrinting-Discussions/Import-Task-for-recipient-User-getting-failed/m-p/1315658
CC-MAIN-2022-21
refinedweb
579
62.58
Today I again faced an annoying SharePoint BCS deployment bug. I encountered it before in the beta but figured it would be solved in the release version so I forgot about it. Today it bit me in the rear again. As soon as you deploy a BCS solution from VS2010 and changed the model to much from the previous version you deployed, you get all sorts of errors. If you look closely to them, you'll notice they are caused by SharePoint trying to compile previous versions. If you however retract your solution and view Central Admin there are no models defined at all! Somehow these previous versions are stored somewhere we can't view or delete them! Sometimes the solution can be very simple. Don't trust the UI, don't trust VS2010 but try the Object Model. After I ran the following code I could find a lot of left over models and entities.); } Console.WriteLine("\nEntities:"); var entities = catalog.GetEntities("*", "*", false); foreach (var entity in entities) { Console.WriteLine("\t{0}", entity.DefaultDisplayName); } } Console.ReadLine(); After verrifying all models and entities were indeed left overs I adjusted the code a little bit to delete them.); model.Delete(); } Console.WriteLine("\nEntities:"); var entities = catalog.GetEntities("*", "*", false); foreach (var entity in entities) { Console.WriteLine("\t{0}", entity.DefaultDisplayName); entity.Delete(); } } Console.ReadLine(); After running the code I could deploy my solution again and all worked fine! Cheers, Wes. Every now and then you run into unexpected SharePoint behavior. The Title field of an item is a 'single line of text' field. In my specific case however this Title field contained a file number. Not a problem at all, BUT you get a really strange sort order when you start to sort numbers as strings. File number 1000 comes before file number 9. This is however expected behavior. The client wanted to be able to sort the items in a numeric way so I simply created a calculated columns of type number with this expression "=[Title]". Easy peasy right? And that's where the unexpected behavior came along. The field indeed got the value of the Title field, but if I sorted the list by this field, it behaved exactly as if the field contained a single line of text. So again, 1000 comes before 9. Checked if I indeed set the calculated field to be a number and well, I did. The value simply wasn't treated as a number but as text. Strange... So I started digging in my development experience and I decided to make a little change to the expression. I changed the expression to "=[Title] + 0". And what do you know? The calculated field now indeed returned a number and was treated like a number. Sorting by the field now produced the expected behavior. File number 9 now comes before file number 10000. Google Analytics is great out of the box already, but you can do much more than just registering your page loads. Especially with all these “Web 2.0” sites it can be convenient to not register page loads, but events! In this blog post I’ll show you how you can use jQuery in combination with Google Analytics to get a great insight on what actually happens on your website while you’re not looking! With Google Analytics you can track custom events if you like. This gives you the power to register f.e.: The Google Analytics method to register these events is: pageTracker._trackEvent(category, eventType, optional_label, optional_value) optional_label is useful for filtering and optional_value is somewhat special. If you pass an integer to the value parameter, Google analytics will aggregate the value for you. So if you have an ad campaign, you can keep track of how many people clicked the advertisements AND you can see right away how much money you’ve earned with that. There’s a nice explanation on the _trackEvent method over here: It’s quit cumbersome however to attach an event handler to all these items on your page. So I’ve created a jQuery plugin to make life easier for you and in it’s simplest form you can use it like this: $("selector").trackEvent(optional_options); What this does is that it will call _trackEvent if someone clicks the selected elements once per element. It will use the element nodeName as category variable and the href, value, id or text (first non empty in that order) as label variable. For the value variable it uses 1. So if my html looks like this: <script type="text/javascript"> $(function(){ $("a").trackEvent(); }); </script> <a href=""> ASP.Net iDeal library</a> <a href=""> Download Library</a> Than if someone clicks the first hyperlink, the event gets tracked like this: pageTracker._trackEvent("A", "click", "", 1); If someone clicks the second link, it will be tracked like this: pageTracker._trackEvent("A", "click", "", 1); These events will be tracked only once per element click per page load. This of course can be very helpful in registering how many people navigate away from your web site through external links. So I’ve added an extra filter expression to select all external links on the page. You can use it like this: $(function(){ $("a:external").trackEvent(); }); With this in place, all hyperlinks on your web site that point to external web sites get the event tracking behavior. Although the defaults are fine for me, they might not be for you, so the plugin is flexible enough to adjust it to your likings. The optional options variable looks like this: var settings = { eventType : string, once : bool, category : string or function, action : string or function, label : string or function, value : int or function, }; Lets say you would like to track every click on your jQuery.UI.Tab, with: Your jQuery load function would look somewhat like this: var eventTrackingOptions = { once : false, category: "Tab", label : function(event){ return $(this).text(); } }; $(function(){ $("#tabContainer li").trackEvent(eventTrackingOptions) }); Lets say you have a nice picture of your boy- or girlfriend on your page and would like to track every mouseenter AND click on this lovely picture. var eventTrackingOptions = { eventType: "mouseenter click", once : false, category : "Predators", label : "My sweet hearth" }; $(function(){ $("img.myLoverPic").trackEvent(eventTrackingOptions) }); Lets say you have some advertisements on your page, and you get payed 2 cents on each click. You can now keep track yourself on how much much money you are making like this. <script type="text/javascript"> var eventTrackingOptions = { category: "Advertisements", label : function(event){ return $(this).attr("rel"); }, value : 0.02 }; $(function(){ $(".advertisementLink").trackEvent(eventTrackingOptions) }); </script> <!-- some html --> <ul> <li> <a class="advertisementLink" rel="CompanyOne" href="">Buy here!</a></li> <li> <a class="advertisementLink" rel="CompanyTwo" href="">Or here!</a></li> </ul> I guess you’re all bored by now and curious about the source… so here it is: /** * Version 1.0 * March 27, 2010 * * Licensed under the GPL licenses. * **/ (function($) { var methods = { getOptionValue: function(value, elem, event) { if ($.isFunction(value)) { value = value.call(elem, event); } return value; }, getCategory: function() { return this.nodeName; }, getAction: function(event) { return event.type; }, getLabel: function() { var self = $(this); if (self.is("a")) { return self.attr("href"); } else if (self.is("input")) { return self.val(); } else if (self.attr("id")) { return self.attr("id"); } else { return self.text(); } } }; $.expr[':'].external = function(elem) { return (elem.host && elem.host !== location.host) === true; }; $.fn.trackEvent = function(options) { var settings = { eventType : "click", once : true, category : methods.getCategory, action : methods.getAction, label : methods.getLabel, value : 1 }; if (options) $.extend(settings, options); this.each(function(i) { var eventHandler = function(event) { var category = methods.getOptionValue(settings.category, this, event); var action = methods.getOptionValue(settings.action , this, event); var label = methods.getOptionValue(settings.label , this, event); var value = methods.getOptionValue(settings.value , this, event); //alert(category + "||" + action + "||" + label + "||" + value); pageTracker._trackEvent(category, action, label, value); }; if (settings.once) { $(this).one(settings.eventType, eventHandler); } else { $(this).bind(settings.eventType, eventHandler); } }); return this; }; })(jQuery); With this straight forward jQuery plugin it’s a piece of cake to keep track of what’s happening on your page while you’re not there. I do have pet projects in which I try to get every single nitty gritty detail right. And then it bothers me that by default the ASP.NET Image control adds a style="border-width:0px" to the rendered image tag even though I never asked for it. Not even does it add the style attribe without asking, it doesn't offer a way to get rid of it! You can get rid of it though! Fortunately ASP.NET does come with control adapters, so we can do something about it. Lets first create a new .browser file in the App_Browser directory to tie up our control adapter to the rendering of the Image control. The contents should like lik this: <?xml version="1.0" encoding="utf-8" ?> <browsers> <browser refID="Default"> <controlAdapters> <adapter controlType="System.Web.UI.WebControls.Image" adapterType="WMB.VirtueleKassa.WebControls.ImageAdapter" /> </controlAdapters> </browser> </browsers> Next thing to do is to create the WMB.VirtueleKassa.WebControls.ImageAdapter. Create a new class file in the App_Code directory. The code for the ImageAdapter should look like this: !! have a look at the comment of RichardD! He proposed a better solution !! using System; using System.IO; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.Adapters; namespace WMB.VirtueleKassa.WebControls { public class ImageAdapter : WebControlAdapter { public ImageAdapter() { } protected override void RenderBeginTag(HtmlTextWriter writer) { Image img = this.Control as Image; if (img.BorderWidth.IsEmpty) { string origTag = string.Empty; using (StringWriter sw = new StringWriter()) using (HtmlTextWriter hw = new HtmlTextWriter(sw)) { base.RenderBeginTag(hw); hw.Flush(); hw.Close(); origTag = sw.ToString(); } string newTag = origTag.Replace("border-width:0px;", ""); newTag = newTag.Replace(" style=\"\"", ""); writer.Write(newTag); } else { base.RenderBeginTag(writer); } } protected override void RenderEndTag(HtmlTextWriter writer) { } protected override void RenderContents(HtmlTextWriter writer) { } } } And thats it! If you do apply a width yourself, the tag will be saved, and so will any other style tags. Only if, after the deletion of the border-width:0px; the style attribute remains empty, the complete style attribute will be removed as well. Have a look at the source of this page to see the result. Regards, In this pet project of mine I have two large background images. Unfortunately this means that IE will display a nasty page flicker on each request. I used to get rid of this with the a metatag that looks like this: <!--[if IE]> <meta http- <![endif]--> For some reason however, this doesn't work in IE8 with cached pages. Use this tag instead and it all works fine again. <!--[if IE]> <meta http- <![endif]--> Triggered by a blog post of Scott Hanselman I was wondering if I could create a generic extension method to asynchronously retrieve objects from a DataContext. And well I could. I ended up with two classes. The ExecuteAsyncState: 1: using System; 2: using System.Collections.Generic; 3: using System.Data.Linq; 4: using System.Data.SqlClient; 5: using System.Threading; 6: 7: namespace AsyncDataAccess { 8: public class ExecuteAsyncState<T> { 9: public ExecuteAsyncState(DataContext dataContext, SqlCommand sqlCommand, Action<IEnumerable<T>> onReady) { 10: this.DataContext = dataContext; 11: this.SqlCommand = sqlCommand; 12: this.OnReady = onReady; 13: this.WaitHandle = new AutoResetEvent(false); 14: } 15: 16: public DataContext DataContext { get; private set; } 17: public SqlCommand SqlCommand { get; private set; } 18: public Action<IEnumerable<T>> OnReady { get; private set; } 19: public AutoResetEvent WaitHandle { get; private set; } 20: } 21: } And the DataContextUtility: 5: using System.Linq; 8: public static class DataContextUtility { 9: public static IAsyncResult ExecuteAsync<T>(this DataContext dataContext, IQueryable<T> query, Action<IEnumerable<T>> onReady) { 10: SqlCommand sqlCommand = dataContext.GetCommand(query) as SqlCommand; 11: 12: ExecuteAsyncState<T> asynchState = new ExecuteAsyncState<T>(dataContext, sqlCommand, onReady); 13: 14: AsyncCallback callback = new AsyncCallback(EndExecuteAsync<T>); 15: return sqlCommand.BeginExecuteReader(callback, asynchState); 16: } 17: 18: private static void EndExecuteAsync<T>(IAsyncResult result) { 19: ExecuteAsyncState<T> asynchState = result.AsyncState as ExecuteAsyncState<T>; 20: 21: DataContext dataContext = asynchState.DataContext; 22: SqlCommand sqlCommand = asynchState.SqlCommand; 23: Action<IEnumerable<T>> onReady = asynchState.OnReady; 24: 25: SqlDataReader reader = sqlCommand.EndExecuteReader(result); 26: var resultData = from item in dataContext.Translate<T>(reader) 27: select item; 28: 29: try { 30: onReady.Invoke(resultData); 31: } 32: finally { 33: reader.Close(); 34: asynchState.WaitHandle.Set(); 35: } 36: } 37: } 38: } These two classes enable you to retrieve your object asynchronously. Like so: 3: using System.Linq; 4: using System.Text; 8: class Program { 9: static void Main(string[] args) { 10: using (SampleDbContextDataContext context = new SampleDbContextDataContext()) { 11: context.Connection.Open(); 12: 13: var customerQuery = from Customer c in context.Customers 14: select c; 16: IAsyncResult customerResult = 17: context.ExecuteAsync<Customer>(customerQuery, (customers) => { 18: foreach (var c in customers) { 19: Console.WriteLine(c.ToString()); 20: } 21: }); 22: 23: var productQuery = from Product p in context.Products 24: select p; 25: 26: IAsyncResult productResult = 27: context.ExecuteAsync<Product>(productQuery, (products) => { 28: foreach (var p in products) { 29: Console.WriteLine(p.ToString()); 30: } 31: }); 32: 33: Console.WriteLine("Before the queries are returned:"); 34: 35: ExecuteAsyncState<Customer> customerState = customerResult.AsyncState as ExecuteAsyncState<Customer>; 36: ExecuteAsyncState<Product> productState = productResult.AsyncState as ExecuteAsyncState<Product>; 37: 38: WaitHandle[] waitHandles = new[] { customerState.WaitHandle, productState.WaitHandle }; 39: WaitHandle.WaitAll(waitHandles); 40: 41: Console.WriteLine("After the queries are returned:"); 42: 43: Console.ReadLine(); 44: } 45: 46: } 47: } 48: } And that results in a screen like this: Where you’ll see it prints my name first, then a product name, then the rest of the customer names to finish of with the rest of the product names. It was very easy to get this result and you could of course add some more methods that simply return a single object instead of an IEnumerable<T>. The code is attached so you can try it yourself. Last Januari I was presenting at the SharePoint Connections 2010 in Amsterdam. As a non Microsoft speaker they do not record your presentation so I decided to create a screencast on the SharePoint 2010 Sandboxed Solutions subject myself. I'm not very good at screencasts but it does contain some useful information and some nice code samples. The code samples and slidedeck are attached. Wesley Bakker For quit some time now, I have had the BetterImageProcessor on-line at. I received an awful lot of feedback on it and it was downloaded thousands of times. About half a year ago I decided I should port it to .Net 3.5 and maybe place it on CodePlex. Well today is the day. The Aplha version is released on CodePlex! Have a look at and just let me know if you digg it.. I really love the cross-browser compatibility jQuery gives me out of the box and the ease of use. So it was time to write a jQuery plug-in. Let’s not bother you any longer with an introduction and show you the code for such a simple plug-in. (function($) { $.fn.extend({ hoverImage: function(options) { var defaults = { src: "-hover", preload: true, replaceEnd: "" }; options = $.extend(defaults, options); var append = options.src.indexOf(".") == -1; var splitter; if (append) { splitter = options.replaceEnd + "."; } return this.each(function() { var obj = $(this); var img = obj.is("[src]") ? obj : obj.children("[src]:first").eq(0); if (!img.is("[src]")) { return true; } var oSrc = img.attr("src"); img.data("oSrc", oSrc); var hSrc = options.src; if (append) { hSrc = oSrc.split(splitter).join(hSrc + "."); } img.data("hSrc", hSrc); if (options.preload) { new Image().src = hSrc; } obj.hover(function() { img.attr("src", img.data("hSrc")); }, function() { img.attr("src", img.data("oSrc")); }); }); } }); })(jQuery); It is a default plug-in with some settings. If the object on which this plug-in is called does not have a “src” attribute, it wil try to get the first child that has. The hover effect will still take place on hovering the parent, but the child its image src will be changed. Here’s a sample of html in which the plug-in is used. <html xmlns=""> <head runat="server"> <title>hoverImage test page</title> <script src="/ClientScript/jquery-1.3.2.min.js" type="text/javascript"></script> <script src="/ClientScript/jquery.hoverImage.js" type="text/javascript"></script> <script type="text/javascript"> $(function() { $(".standardImage").hoverImage({replaceEnd: "-standard"}); $(".hoverImage").hoverImage(); }); </script> </head> <body> <form id="form1" runat="server"> <div> <img class="standardImage" src="Imgs/button-standard.gif" /> <img class="hoverImage" src="Imgs/button.gif" /> <a class="hoverImage" href="#" style="display:block; width:100%;border:solid 1px black;"> <img src="Imgs/button.gif" /> </a> </div> </form> </body> </html> Works like a charm! Cheers and have fun!
http://weblogs.asp.net/wesleybakker/default.aspx?PageIndex=2
CC-MAIN-2014-15
refinedweb
2,747
52.05
(This article was first published on Adventures in Statistical Computing, and kindly contributed to R-bloggers) So where did we mess up? In the calculation of returns for the market cap weighted portfolio and the portfolio optimization portfolio, we simply took the starting weights (W0) and multiplied them by the relevant series of returns. resEqual = as.matrix(returns) %*% t(ret) and subRes = as.matrix(subRes) %*% t(ret) To correct this, we have 2 options. - Recalculate the weight at each time point assuming a starting weight. Multiply those weights to each day’s returns to produce the series - Assume a starting monetary value of the portfolio (1 is convenient), and apply the return series to the position values. At each period, calculate the daily portfolio return. 1 does not equal 2. Why? Remember that we are using log returns. #1 is a weighted arithmetic average. #2 is the log of a sum of functions. If we say that prices are final and weights are initial – that is that we observe the price at the end of time t and the weight at the beginning (as a function of prior period prices), then we can rewrite #2 as The literature uses #1 because it makes the math easier. The numbers are approximately equal because log(1+r) ~= r (or exp(r) ~= 1+r, if you prefer). However, they are different and the difference can compound with time. Your brokerage account works according to #2. We will use #1 as that is the convention. If I was analyzing something for real, and I was given a log returns data set to use, I would use #2. #1 may be more compact and make the math nice. #2 will more closely reflect your account balance. How do we calculate wi,,t given wi,t-1? We apply the returns to the weights and re-standardize the numbers. Because we want to reuse this methodology for cap weighted and optimized portfolios, we should just create a function that will take a series of returns and a vector of weights, and give back the return series. reweight = function(returns, startWeight){ n = nrow(returns) lastWeight = as.vector(startWeight) outReturn = data.frame() for(i in seq(1,n)){ rts = as.vector(exp(returns[i,])) w = lastWeight * rts sumW = sum(w) w = w/sumW r = as.matrix(returns[i,]) %*% w lastWeight = w outReturn = rbind(outReturn,r) } return (outReturn) } Substituting the function call in place of the lines quoted above, the new Annualized Returns table looks like this: Here we can see the capitalization weighted portfolio performs much better than previously calculated. This is expected. The Portfolio Optimization portfolio also does better and has a slight advantage over the cap weight portfolio in the Sharpe Ratio. The correlation chart is: The cap weight portfolio and the equal weight portfolio are nearly perfectly correlated. The optimized portfolio is less correlated, but still has a high degree of correlation. NOTE: we can put a Beta on the optimized portfolio of 0.70 (.87 * .1775 / .2199) using the calculations above. The cumulative return graph is: This really just shows us what we already know. The cap and equal weights are highly correlated. The cap weight pulls away after the 2009 bottom. The optimized portfolio shows less volatility than the cap weight portfolio after the bottom. It has a steady march up while the cap weighted portfolio whips up and down. To leave a comment for the author, please follow the link and comment on...
http://www.r-bloggers.com/portfolio-optimization-in-r-part-4-redeux-2/
CC-MAIN-2015-40
refinedweb
581
57.27
Angular 5 To-Do List App - Part 2 In the previous article we created a very simple To-Do list with Angular that is almost featureless. The code of the previous article can be found in article itself and the complete project can be found in the Github repository Learn Angular With Sabuj. Code from the first 10 articles can be found by commits, but from the 11th article I have decided to put the codes under branches. So, to get the full project code of the previous article, browse to the branch named todo-app-part-1. To get the full project of the current article browse the branch named todo-app-part-2. In this article we will make our application more functional and usable. Making a Nicer Interface In our previous article we only could display a list of to-dos statically from the property of an array of the component object—we had no way of adding them dynamically. Now, we will add an input field and a button to input tasks to the to-do list. We already have Bootstrap added, so we will use Bootstrap classes to style them nicely. Open the app.component.html file. Our current markup looks like this. <div class="container"> <div class="row"> > We want to add the input field and the button above the To-Do list table. The input markup will look like below: <input type="text" class="form-control" name="task" placeholder="Task"> The button markup will look like below: <button class="btn btn-info">Add</button> To make better alignment and to go with Bootstrap we need to put them inside columns and wrap them inside a row (where the row will also be wrapped inside a container). So, our markup for the input and the button should look like below: <div class="row"> <div class="col-xs-8"> <input type="text" class="form-control" name="task" placeholder="Task"> </div> <div class="col-xs-4"> <button class="btn btn-info">Add</button> </div> </div> We do not have any noticeable title for our application. We want to put a big title at the top of our page inside an h1 tag. Again, to go with Bootstrap and its grid system we need to create a container (or use an existing container), create a row inside of it, create a column, and put the h1 tag with the application title inside of it. Our markup for that will look like below: <div class="container"> <div class="row"> <div class="col-xs-12"> <h1> To-Do Application </h1> </div> </div> </div> So, the final markup will look like below: <div class="container"> <div class="row"> <div class="col-xs-12"> <h1> To-Do Application </h1> </div> </div> </div> <div class="container"> <div class="row"> <div class="col-xs-8"> <input type="text" class="form-control" name="task" placeholder="Task"> </div> <div class="col-xs-4"> <button class="btn btn-info">Add</button> </div> </div> <div class="row"> <div class="col-xs-12"> > </div> Open up the browser to see a relatively nicer to-do application as the following screenshot. Adding Tasks into To-Do The above part was just presentational; we need to make it functional now. We need to add a click handler to the Add button like below. <button class="btn btn-info" (click)="addTask()">Add</button> We haven't yet created the addTask() method inside the component class. Before creating that we need to make some more changes. We do not want to find the button element and get the input value from it. Instead, we want to pass the value through the click handler method. So, we need to create a template reference variable for the input field like below: <input type="text" class="form-control" name="task" placeholder="Task" #taskInput> We can pass the input value to addTask() with the following syntax: <button class="btn btn-info" (click)="addTask(taskInput.value)">Add</button> Now, it's time to create the addTask() method inside the AppComponent class. import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { todo_array: Array<string> = [ "Go home", "Take a nap", "Start learning Angular with Sabuj" ] clearToDo(){ this.todo_array.splice(0); } addTask(value: string){ this.todo_array.push(value); } } Go to the browser, input a task into the input field, click the add button to see that the to-do list is getting updated automatically. Awesome! We have created a functional to-do application with Angular 5. So, now we have no need to keep that static array of tasks. You can remove it now, but I am keeping it in place so that when the app initially loads, our application does not look empty. Numbering the Tasks Imagine that we have a huge list of tasks that cannot fit in one page. So, we need a numbering before every task. To do so, we need to use the index of the *ngFor. *ngFor="let todo of todo_array; let idx = index" Indexes start at zero, so we need to add 1 with them to make it more readable. {{ idx + 1 }}. {{todo}} So, our final code for the loop should look like below: <tr * {{ idx + 1 }}. {{todo}} </tr> Go to the browser to see a nicely numbered to do application. Add new tasks and you will see that they are properly numbered. Conclusion We are advancing in development of our To-Do application in a good speed. Try to write every line of code to keep pace with the articles. If you’re having in trouble, go to the Github repository for the full project and if you have any questions, ask them as issues on that Github repository. Keep practicing with Angular 5 until the next article.. Recent Stories Top DiscoverSDK Experts Compare Products Select up to three two products to compare by clicking on the compare icon () of each product.{{compareToolModel.Error}} Your Comment
http://www.discoversdk.com/blog/angular-5-to-do-list-app-part-2
CC-MAIN-2019-13
refinedweb
995
60.14
JavaScript Usage in the GoLive Environment - JavaScript Objects in the GoLive Environment - Scope of Variables and Functions - Handling Events - Sharing Data - Delays and Timeouts This chapter discusses JavaScript and ECMAScript/ExtendScript concepts and usage in GoLive, including: - JavaScript Objects in the GoLive Environment - Scope of Variables and Functions - Handling Events - Sharing Data - Delays and Timeouts JavaScript Objects in the GoLive Environment GoLive provides access to data and objects in a way that JavaScript programmers will find familiar. Those new to JavaScript will discover that the GoLive environment usually provides multiple ways to access data and objects. This section describes various ways an extension’s JavaScript code can access data and objects in the GoLive environment. Objects, elements, and properties GoLive JavaScript objects can represent parts of a document, or GoLive components: - When GoLive loads a markup document, it generates objects to represent the markup tree of a document. Collectively, the objects that represent portions of the markup document are known as markup objects. Markup objects can represent HTML markup elements (as defined by a tag and its attributes), and also comments, text blocks, and other types of markup such as entities or CDATA sections. - When GoLive loads an extension definition file (that is, the Main.html file for your extension), it creates JavaScript objects to represent the GoLive components you define, such as windows, UI controls, and menu items. For example, a <jsxdialog> element in your extension definition results in a window Object (page 313). There are various ways to obtain a reference to a JavaScript object, depending on the object’s type. - You can retrieve most component objects by name from global properties, such as the menus and dialogs collections. - Objects that represent HTML page content are available from the markup tree; you get the root object from the document Object (page 138) for the page (document.documentElement), and that object’s properties and methods allow you to navigate the tree. - GoLive passes relevant objects as event-object property values to event-handling functions. For information on retrieving a particular object, see that object’s description in the GoLive CS2 SDK Programmer’s Reference. Accessing attribute values The attributes of an element (whether it is a document markup element or an element in your extension definition) appear as the properties of the corresponding JavaScript object. For example, the name attribute of the element becomes the name property of the object. Access to JavaScript properties is case-sensitive; that is, the Thing attribute creates the Thing property, not the thing property. When writing JavaScript code, observe case accordingly. JavaScript uses the symbol undefined to indicate a null state. When a property exists, but no value has been explicitly set, that property has a value of undefined. If a property has never been defined, its state (rather than its value) is undefined. To test whether a property exists in JavaScript, you must test the state (not the value), by checking whether the name has a defined type; for example: // correct test if (typeof (myProperty) != undefined) // do something Do not use the following test. This tests the property’s value, rather than its state, and results in a run-time error if the property does not exist: // incorrect test if (myProperty != undefined) // if myProperty does not exist, an error occurs When you must test a property’s value with a case-sensitive comparison, you can use the toLowerCase method of the JavaScript String object. For example, this tests an element object’s tagName property, disregarding the value’s case: if (currElt.tagName.toLowerCase()) == (tagToGet.toLowerCase()) For element Objects (page 156), attributes are also represented by objects, which are themselves nodes in the markup tree. Use an element object’s getAttributeNode and setAttributeNode functions (page 157) to access the attribute object, rather than accessing the attribute directly by name, as a property of the element object. By using these methods, you avoid potential problems with referencing names that contain special characters, such as hyphens. Naming objects and attributes The value of an element’s name attribute must follow JavaScript naming conventions. If more than one element or object uses the same name, the results of name-based object retrieval are unpredictable, so you must take care to ensure that your names are unique. One way to do this is to use a unique prefix or postfix in all of your extension’s names. For example, the following element definitions begin the value of each element’s name attribute with the letters ADBE. <jsxmenubar> // opens definition of all menus <jsxmenu name="Hello" title="Hello, GoLive!"> //Hello menu <jsxitem name="This" title="Do Something"> // menu item <jsxitem name="That" title="Do Something Else" > menu item </jsxmenu> // closes definition of Hello menu </jsxmenubar> // closes definition of all menus When the SDK loads an extension containing these elements, it creates a menu object that appears in the JavaScript global menus collection. You can use the name to retrieve this menu from the collection: var myMenu = menubar["ADBEHello"]; JavaScript object collections The SDK makes commonly used objects available as the elements of array-like structures that all extensions can access. GoLive updates the contents of these structures dynamically as these objects are created and deleted. The SDK implements many of these structures as collection Objects (page 72). This is like an array that provides access to its elements by name or index; however, collections are not actually arrays; not every collection provides numeric access to its elements, as an array object does. Each of these global properties contains a collection object that GoLive updates dynamically: Using the global object arrays These examples use the menus array to illustrate how you retrieve objects from global arrays. These arraya provide access to all of the menus and menu items added to GoLive by extensions. Most of the arrays work the same way; exceptions are noted in the Programmer’s Reference, Part 2 of this book. The following JavaScript defines a menu Sample, with one item, MyItem. The SDK creates a menu object named sample and a menuitem object named item1: <jsxmenu name="sample" title="Sample" ...> <jsxitem name="item1" title="MyItem" ...> </jsxmenu> The following retrieves the Sample menu from the menuCollection object in the menubar global variable, and stores the retrieved menu object in the sampleMenu variable: var sampleMenu = menubar["sample"] In this case, “Sample” is the title of the menu, as displayed to the user, while “sample” is the name of the menu object, which you use to access it programmatically. The following retrieves the menu item by name from the collection in the items property the sample menu: menubar["sample"].items["item1"] Alternatively, you can retrieve the menu item directly, using its name as a property name of the sample menu: menubar["sample"].item1 GoLive also makes each menu available as a JavaScript object in the global namespace. Thus, the following simple line of JavaScript retrieves the menu item from the sample menu. sample.item1 Many collections can be accessed by numeric index as well as by name. For example, if item1 is the first menu item: menubar["sample"].items[0] // 0-based index of first item This is only reliable for the items, not for menus; because other extensions can also add menus, you cannot rely on the order. Some collections, like the controls collection, do not support numeric access at all. Most of the time, an object’s unique name property provides the most reliable way to retrieve it. Comparing objects To ascertain an object’s identity, you can compare the value of its name property to a known string, or you can compare object references directly. For example, you can test the name of a menu item in any of the following ways: if (item.name == "item1") // compare object name to known string value if (item == menubar["sample"].items["item1"]) // compare objects if (item == sample.item1) // another object comparison example Updating references to objects GoLive generates objects to represent the markup tree of a document when it loads that document. It regenerates these objects if the document changes; this is know as reparsing the document. If you save a reference to an object that GoLive generated as the result of interpreting a markup tag, you must update that reference any time the document containing the tag changes. For details, see the full version of the Programmer’s Guide on the product CD.
http://www.peachpit.com/articles/article.aspx?p=433765&amp;seqNum=2
CC-MAIN-2017-04
refinedweb
1,396
50.26
Subscribe to these release notes. The Maps JavaScript API team regularly updates the API with new features, bug fixes, and performance improvements. You can indicate which version of the API to load within your application by specifying it in the v parameter of the Maps JavaScript API bootstrap request. Read more about versioning. This changelog lists releases by date and version number, along with associated changes. To receive updates on new Maps API versions, please subscribe to the google-maps-js-api-v3-notify group. 3.46.5 16 September, 2021 Bug fixes: - Fixed a bug with a color contrast ratio in the Place autocomplete widget. 3.46.3 02 September, 2021 Changes: - Scroll instructions are now displayed above open InfoWindows and CustomOverlays. 3.46.2 26 August, 2021 Changes: - Beta Channel: Telemetry information is now reported. You may need to update your Content Security Policy to ensure these calls are not blocked at browser-level. For more information, please see the FAQ. 3.46 18 August, 2021 Version 3.46 of the Maps JavaScript API is now available. See the Versioning guide. Changes: - Added two new fields to PlaceResult, icon_mask_base_uriand icon_background_color. - The weekly channel was updated to version 3.46. - The quarterly channel was updated to version 3.45. - Version 3.44 is still available when requested by number. - Version 3.43 was deleted, and can no longer be used. Deprecations: - Support for Internet Explorer is being deprecated. Beginning in August 2021 with Maps JavaScript API v3.46, Internet Explorer 11 users will see a warning message at the top of maps. The last version of the Maps JavaScript API to support Internet Explorer 11 is v3.47. Support for Internet Explorer 11 will be entirely discontinued in August 2022. Bug fixes: - Fixed a bug where a polygon's vertex was shifted at certain zoom levels. - Fixed a bug where content of multiple Info Windows overlapped each other. 3.45.8 29 July, 2021 Changes: - Embed API: On embedded maps, the zoom level is retained after being redirected to the directions page. 3.45.7 22 July, 2021 Changes: - Support for IE11 is deprecated. Maps in IE11 will now display a banner in the beta channel. - Updated the "Get Directions" link in embedded maps to be more accurate. Bug fixes: - Fixed an issue where the getFeatureById()method could not retrieve features with an id of 0. - Fixed an issue where an UNKNOWN ERROR/SERVICE BACKGROUND ERRORis thrown if the ComponentRestrictionvalue is undefined or null. 3.45.6a 15 July, 2021 Changes: - When opened, focus is now managed to the InfoWindow container when the first focusable control is not in the InfoWindow viewport or when disableAutoPan=true. Bug fixes: - Fixed a bug where unfinished poly drawing is completed automatically when drawing manager is removed and re-added to the map. - Fixed a bug where the string 'Directions' is truncated in 'Embed a map' pop-up in some languages. 3.45.5 01 July, 2021 Changes: - Updated URL for Maps Studio in typings and JS API. Bug fixes: - Fixed an unhandled Promise rejection when providing callback. 3.45.4 24 June, 2021 Changes: - Fixed a bug that caused the keyboard shortcuts dialog to open on form submit. - Fixed a bug where the "Keyboard shortcuts" button was out of alignment. - Removed objects accidentally added to window(Spherical, PolylineCodec, PolyGeometry). These objects should be accessed at their fully qualified namespace instead. 3.45.3 16 June, 2021 Bug fixes: - Updated InfoWindow, so that focus does not move when open()is called within the same run loop as map instantiation. Changes: - Updated the InfoWindowOpenOptionsAPI reference section with more detailed information. - Fixed focus rings when using keyboard navigation (Tab + Option) in Safari. 3.45.2 10 June, 2021 Bug fixes: - Fixed an issue where the Map Type control was not properly overlaid by the keyboard shortcut dialog's background in Embed API. - Fixed an issue that could cause an error when loading font CSS styles. - Fixed a bug where flickering could occur when panning the Map with an open InfoWindow. Changes: - Added the InfoWindowOpenOptions.mapproperty. 3.45.1 3 June, 2021 Changes: - Added documentation for animations, icons, label, and opacity properties to the Data.StyleOptions interface. - Redesigned the copyright dialog for better accessibility support. - InfoWindows will now automatically manage focus when opened. This represents the new default behavior. - Added the InfoWindowOpenOptions API, allowing developers to control how focus is managed when opening InfoWindows. - InfoWindows can now be closed by pressing the ESC key. - InfoWindows are now announced as a "dialog" when using screen readers. - Polyfill isolation is now enabled; this change prevents the Maps API internal polyfills from being installed on the host page. Bug fixes: - Fixed an issue where a map focus ring would appear when switching browser tabs. - Fixed an issue where the focus ring for map controls, marker elements, and map type control submenu items, incorrectly appeared with mouse interaction in some browsers. New features: - Enabled support for Promises in Directions, Distance Matrix, Elevation, Geocoder, Maximum Zoom Imagery, StreetView, and AutcompleteService.getPlacePredictions()services. - Added keyboard shortcuts control and dialog on the map to improve discoverability of keyboard shortcuts. - Vector maps are now available in the weekly channel (basic features only; WebGL features are available in the beta channel). 3.44.14 13 May, 2021 Changes: - A warning is now logged to the console when InfoWindow.open()is called without an associated Map or StreetView instance. 3.44.12 28 April, 2021 Bug fixes: - Fixed an issue where removed or hidden optimized markers are added back to the map after changing to a new map type. Local Context: - Added support for arrays that don't have an @@iteratormethod defined as a valid placeTypePreferencesvalue. 3.44.11a 20 April, 2021 Changes: - Improved performance for creating custom markers with Icon objects. - Prevent focus from moving to the map type dropdown menus when hovering over a button. Bug fixes: - Fixed a bug where an error could occur when loading static markers prior to initializing the base map. 3.44.9 8 April, 2021 Bug fixes: - Fixed a bug where Map controls were keyboard accessible when Street View is enabled. 3.44.8 1 April, 2021 Changes: - Local Context Library: Moved the Google logo in the Place Details View to the bottom of the contents. 3.44.7 25 March, 2021 Changes: - Keyboard focus now returns to the appropriate element when the InfoWindow is closed. 3.44.4 4 March, 2021 Changes: - 45° imagery is now available in a wider range of zoom levels, and the rotation control now includes both clockwise and anti-clockwise buttons. Bug fixes: - Fixed a memory leak that could happen when rendering the map repeatedly. - Fixed a memory leak that could happen when adding or removing circles or rectangles on the map. 3.44.2 25 February, 2021 Changes: - Fixed a bug where the Places Autocomplete getPlacePredictionsfunction call incorrectly points to the wrong endpoint in the beta channel. 3.44.1 18 February, 2021 Changes: - LocalContext Place Chooser buttons are now disabled when reloading the places shown. - Non-optimized markers are now focusable and keyboard accessible. - Beta Channel: Removed objects accidentally added to window( Spherical, PolylineCodec, PolyGeometry). These objects should be accessed at their fully qualified namespace instead. - Turned on Polyfill Isolation in Beta channel. This means that the Maps JavaScript API will no longer install polyfills into the host page. 3.43.8a 25 February, 2021 Only the quarterly channel was updated. Changes: - Fixed a memory leak issue when rendering a map multiple times 3.43.8 5 February, 2021 Changes: Bug fixes: - Fixed an issue where Markerswith labels intercepted click events, even when clickablewas set to false. 3.43.7a 28 January, 2021 Bug fixes: - Fixed various issues related to the drawing of polylines. 3.43.6 21 January, 2021 Bug fixes: - Fixed a bug where geojson polygon holes were sometimes being filled in. - Fixed a bug that caused the overlay disappear when switching between map and street view. 3.43.5 14 January, 2021 Changes: - Changed checkboxes in menus to be more accessible with screen readers. - Increased the size of the floors buttons for indoor Street View panoramas. - Fixed the position of mobile motion tracking controls in Street View. Bug fixes: - Keyboard menu button now triggers the contextmenuevent. - Fixed a bug where the fullscreen control did not work properly when switching between the Map and Street View in some situations. - Fixed an issue with screen readers not being able to navigate to markers within the map. - Fixed a bug that caused the place icon to disappear in LocalContext's place marker. 3.43.3 9 December, 2020 Changes: - Renamed the google.maps.MouseEventinterface to google.maps.MapMouseEventand added a domEventproperty, providing direct access to the underlying event from the DOM. - Improved memory behavior of polygon overlays. - Canvas memory is now explicitly released to avoid Safari memory issues. Bug fixes: - Fixed a bug where the Fullscreen control stopped working when switching between Map and Street View in some situations. - Fixed a bug where an error was logged in console on marker's clickevent. - Fixed a bug where the size of a marker's clickable area was drawn too large. - Fixed a bug with polyfill conflicts that affected Symbols on IE11. 3.43.2 1 December, 2020 Changes: - Adds the contextmenuevent, as a better alternative to the rightclickevent. The contextmenuevent adds the capability to respond to ctrl-click on macOS. - Adds accessibility attributes to non-optimized markers when titleor labelare provided. - Adds more ARIA labels and roles for MapTypecontrols, for an improved screen reader experience. - Improved screen reader support for MapTypecontrols. When the user navigates away from the control, the dropdown menu automatically closes. - Corrects the number of items reported by screen readers in the dropdown menu; this prevents the line separator from being counted as a list item. - Adds a focus ring within the mapelement, to indicate keyboard focus when focused via keyboard interaction. - Adds additional keyboard controls to the MapTypecontrols. The dropdown options now automatically close when the control loses focus. The up and down arrow keys open the dropdown, and the Escape button closes it. Home and End keys move to the first and last items in the dropdown, respectively. Bug fixes: - Fixed a bug where canceling a marker animation could, under some circumstances, cause that particular marker to terminate future animations early. 3.43.1a 20 November, 2020 Changes: - Adds a classNameproperty to the MarkerLabelinterface to set the CSS class of the labelelement. - Adds beta support for Promises in the Maximum Zoom Imagery Service. - Add beta support for Promises in the StreetView service. - Added accessibility text to non-optimized markers when titleor labelare provided. - Removed the aria-pressedlabel from the MapTypecontrol dropdown menu button, to improve accessibility. - Converted DropdownMenuand DropdownMenuItemto sematic elements to improve accessibility. - Changed copyright element text to meet color contrast standards. - Dropdown menus can now be opened and closed by using the enter key or the space bar. - Dropdown menu items can now be focused. - Local Context Library: Carousel control buttons are now disabled when they would have no effect, and no longer overlap the first or last item in the list. Bug fixes: - Fixed a bug that could happen when GroundOverlayscross the 180 degree meridian. - Keyboard shortcuts are no longer disabled by default when disableDefaultUIis set to true. - Fixed a bug where screen reader text was incorrect for map toggle buttons. - Adds accessibility name and type to Map. - Fixed interface documentation that incorrectly showed some optional properties as required. - Fixed a bug where the noWrap LatLngconstructor param, was ignored when passed a LatLngLiteral. 3.43 18 November 2020 Version 3.43 of the Maps JavaScript API is now available. See the Versioning guide. - The weekly channel was updated to version 3.43. - The quarterly channel was updated to version 3.42. - Version 3.41 is still available when requested by number. - Version 3.40 was deleted, and can no longer be used. 3.42.9 15 October, 2020 Changes: - Fixed a bug where a wrong checkbox's state was communicated by a screen reader. - Fixed an issue with Map controls, where the Tab/Shift+Tab was in the wrong order. - Converted map buttons from divto native button, for improved accessibility. - Tilt is now restricted depending on zoom level for WebGL maps. - Fixed fullscreen button partially disappearing on Internet Explorer when controlSizeis less than 27. 3.42.8 7 October, 2020 Changes: - Added beta support for promises in the Directions service. - Geocoder componentRestrictionsnow performs validation checks for empty strings. - Fixed a bug where the marker label was covered by the custom marker symbol on Safari. Support for updated Place icons 1 October, 2020 The icons returned with Place Details and Place Search requests have been updated to use new icon glyphs. No action is required, the new glyphs will display automatically. 3.42.7 29 September, 2020 Changes: - Fixed a bug where the word order was wrong for RTL (right-to-left) languages on the "Report a map error" control tooltip. - Centered marker label for RTL text direction. 3.42.6 21 September, 2020 Changes: - Added beta support for promises in the Distance Matrix service. 3.42.5 16 September, 2020 Changes: - Updated LatLngBounds.unionmethod to handle cases when two bounds are over 180 degrees. 3.42.4 8 September 2020 Changes: - An error is now logged to the console when an invalid Date.now()implementation is detected. - Added beta support for Promises in Elevation service methods. - Introduced a new InfoWindow.minWidthproperty for specifying the minimum width for an InfoWindow. - Fixed a bug where directions routes became blurred after changing the destination. 3.42 19 August 2020 Version 3.42 of the Maps JavaScript API is now available. See the Versioning guide. Changes: - InfoWindows now have a default max width of 648px which can be overridden by setting the InfoWindow maxWidth property. The width of an InfoWindow can now exceed 648px but will still be limited by the width of the map. (Previously, Infowindows were always restricted to the lesser of 648px or the map width.) - Prevent flickering of the default-styled render while rendering large GeoJson data sets. - The weekly channel was updated to version 3.42. - The quarterly channel was updated to version 3.41. - Version 3.40 is still available when requested by number. - Version 3.39 was deleted, and can no longer be used. 3.41.7 22 July 2020 Fixed: - Fixed a bug where setting the clickableIconsproperty to falsehad no effect when using Cloud Styling. 3.41.5 6 July 2020 Changes: - Fixes trusted types violation. 3.41.2 27 May 2020 Changes: - The Places field permanently_closedin the Places Library, Maps JavaScript API is deprecated. 3.41 20 May 2020 Version 3.41 of the Maps JavaScript API is now available. See the Versioning guide. - The weekly channel was updated to version 3.41. - The quarterly channel was updated to version 3.40. - Version 3.39 is still available when requested by number. - Version 3.38 was deleted, and can no longer be used. 3.40.11 28 April 2020 Changes: - Adds a new field, business_status, to Place Search and Place Details results. Use this field instead of permanently_closed. - Fixes an incorrect console warning when requesting (new) PlaceResult.utc_offset_minutes; a warning is now returned for PlaceResult.utc_offset, which has been deprecated. 3.40.9 14 April 2020 Changes: - Map now throws an easier to diagnose InvalidValueError if passed a mapDiv that is not an Element. - The Chrome browser autofilling an address into google.maps.places.Autocompleteshould no longer trigger an autocomplete request to the server (which will avoid billing). - Made the maxWidthproperty more accurate. Before this change, if you specified the maxWidthof an InfoWindowto be 100, the actual maximum width of the InfoWindowwould be 94px. After this change the maximum width would actually be 100px. - Fixes a memory leak issue with paint request builder when using markers and map bounds. 3.40.6 24 March 2020 Changes: - Fixed the truncated text in the travel time field for IE11. 3.40.4 10 March 2020 Changes: - Fixes bug where calling setTilt()twice would skew the Map. 3.40.2 23 February 2020 Changes: - Hidden iframe within Map removed from tab navigation. 3.40.1 18 February 2020 Changes: - Do not warn about InvalidVersion for v=beta. - Fixes a problem with scrolling when Street View is displayed. 3.40 11 February 2020 Version 3.40 of the Maps JavaScript API is now available. See the Versioning guide. - The weekly channel was updated to version 3.40. - The quarterly channel was updated to version 3.39. - Version 3.38 is still available when requested by number. - Version 3.37 was deleted, and can no longer be used. 3.39.6 08 January 2020 Version 3.39.6 of the Maps JavaScript API is now available. See the Versioning guide. Fixed: - For Directions requests, ZERO_RESULTSare now logged to the console, and no longer result in an error log. 3.39 20 November 2019 Version 3.39 of the Maps JavaScript API is now available. See the Versioning guide. Changes: - Internet Explorer 10 is no longer supported (3.38 was the last version to support it). AutocompletePredictionnow returns the straight-line distance to the selected place, from the specified origin lat/lng. Deprecations: - The Places fields open_nowand utc_offsetare deprecated as of November 20, 2019, and will be turned off on February 20, 2021. See Places Field Migration to learn more. - The weekly channel was updated to version 3.39. - The quarterly channel was updated to version 3.38. - Version 3.37 is still available when requested by number. - Version 3.36 was deleted, and can no longer be used. 3.38 20 August 2019 Version 3.38 of the Maps JavaScript API is now available. See the Versioning guide. - The weekly channel was updated to version 3.38. - The quarterly channel was updated to version 3.37. - Version 3.36 is still available when requested by number. - Version 3.35 was deleted, and can no longer be used. - Fusion tables can no longer be used (3.37 was the last version to support it). - Support for Internet Explorer 10 is now deprecated, and will end between November 2019 and May 2020 depending on the release channel or version number you use. 3.37 15 May 2019 Version 3.37 of the Maps JavaScript API is now available. See the Versioning guide. - The weekly channel was updated to version 3.37. - The quarterly channel was updated to version 3.36. - Version 3.35 is still available when requested by number. - Version 3.34 was deleted, and can no longer be used. - Internet Explorer 9 can no longer be used (3.34 was the last version to support it). 3.36 14 February 2019 Version 3.36 of the Maps JavaScript API is now available. See the Versioning guide. Changes: - New features added: - You can now change the size of map controls, using MapOption.controlSize. - You can now restrict map bounds, using MapOptions.restriction. - Infowindow has been improved. - The weekly channel was updated to version 3.36. - The quarterly channel was updated to version 3.35. - Version 3.34 is still available when requested by number. - Version 3.33 was deleted, and can no longer be used. 3.35 29 January 2019 Changes: - The API can now return the total number of reviews for each place. - Added support for Plus codes. Plus codes are short codes that provide an address for every location in the world, even in areas where traditional street addresses don't exist. - Internet Explorer 9 is no longer supported. Deprecations: - The placeIdOnlyparameter for Autocompleteis deprecated. - The Fusion Tables Layer in the Maps JavaScript API is deprecated as of December 3, 2018. The Fusion Tables Layer will be completely turned off on December 3, 2019, and will no longer be available after that date. Learn more. 13 November 2018 Version 3.35 of the Maps JavaScript API is now available. See the Versioning guide. Changes: - The weekly channel was updated to version 3.35. - The quarterly channel was updated to version 3.34. - Version 3.33 is still available when requested by number. - Version 3.32 was deleted, and can no longer be used. 3.34 14 August 2018 Version 3.34 of the Maps JavaScript API is now available. See the Versioning guide. Changes: - New version names have been implemented. You can now specify release channels or version numbers: - The weekly channel was updated to version 3.34. - The quarterly channel was updated to version 3.33. - Version 3.32 is still available. - Version 3.31 was deleted, and can no longer be used. Customers specifying versions 3.0 to 3.31 will receive their default channel instead, either the weekly channel or the quarterly channel (see the Versioning guide). - A larger control UI is now enabled. With the increase in touch operations on various devices, we adjusted the control UI to fit for both finger touches and mouse clicks. (It's possible to opt out by loading the API with v=quarterly, v=3.33, or v=3.32.) 3.33 11 June 2018 Changes: - Place Details requests now support using fields to specify the types of place data to return. - Two new Find Place requests are now available: - Place autocomplete now supports session-based billing. 16 May 2018 Version 3.33 of the Maps JavaScript API is now available as the experimental version. See the guide to API versioning. Changes: 3.32 13 February 2018 Changes: 3.31 13 February 2018 Changes: - Make the first 256 Markers DOM Markers by default, then make subsequent markers Tile Markers. Current default is all Tile Markers. - At high zoom levels (zoomed in) when dragging Pegman, prefer the NEAREST, rather than the Google-selected BESTpanorama. gestureHandling: nonenow works the same as draggable: falsewhen changed inside a mousedown handler (it now takes effect on mousedown). 21 November 2017 Version 3.31 of the Maps JavaScript API is now available as the experimental version. See the guide to API versioning. Changes: - The regionfield is now returned with Place Details requests. 3.30 16 August 2017 Version 3.30 of the Maps JavaScript API is now available as the experimental version. See the guide to API versioning. Changes: - The fullscreen button is now enabled by default on desktop. - This version introduces the gestureHandlingproperty for desktop applications that enable user interaction using a mouse scroll wheel or touchpad. To control how users interact with a map, it is recommended that you use the gestureHandlingproperty instead of the scrollwheel, disableDoubleClickZoom, and draggableproperties. 3.29 16 May 2017 Version 3.29 of the Maps JavaScript API is now available as the experimental version. See the guide to API versioning. Changes: - The format of the pano ID for user generated (custom) Street View panoramas has changed due to underlying infrastructure updates. This slightly increases the number of available panoramas. - Requests for user generated (custom) panoramas using the old pano ID in the Maps JavaScript API still work. If you try to find a panorama using the positionproperty of the StreetViewPanoramaOptionsobject, your result will contain the new pano ID. There is no requirement to map the old and new pano IDs, as both will remain valid. - If you depend on pano ID parsing and/or verification logic, note that the format of pano IDs may change. - You can report any issues using the issue tracker. - Updates on the fitBoundsmethod of the google.maps.Mapclass. - To change the viewport while a map is hidden, you can now set the map to visibility: hidden, thereby ensuring that the map div has an actual size. 3.28 18 April 2017 The draggable property of the MapOptions object is deprecated. To disable dragging of the map on desktop devices, use the gestureHandling property and set it to none. 15 February 2017 Version 3.28 of the Maps JavaScript API is now available as the experimental version. See the guide to API versioning. Changes: - Signed-in maps are no longer supported in version 3.28 and higher of the Maps JavaScript API. 3.27 2 February 2017 Fixed: - Issue 11331: text inside InfoWindow cannot be selected 10 January 2017 You can now restrict Autocomplete predictions to only surface from multiple countries. You can do this by specifying up to 5 countries in the componentRestrictions field of the AutocompleteOptions. 15 November 2016 Version 3.27 of the Maps JavaScript API is now available as the experimental version. (See the guide to API versioning.) Changes: - A new gestureHandlingoption in the MapOptionsobject helps you optimise your users' experience when interacting with the map on mobile devices. The available values are: greedy: The map always pans (up or down, left or right) when the user swipes (drags on) the screen. In other words, both a one-finger swipe and a two-finger swipe cause the map to pan. cooperative: The user must swipe with one finger to scroll the page and two fingers to pan the map. If the user swipes the map with one finger, an overlay appears on the map, with a prompt telling the user to use two fingers to move the map. View the sample above on a mobile device to see cooperative mode in action. none: The map is not pannable or pinchable. auto(default): The behavior is either cooperativeor greedy, depending on whether the page is scrollable or not. For more details and examples, see the developer's guide. - The fullscreen control is visible by default on mobile devices, so users can easily enlarge the map. When the map is in fullscreen mode, users can pan the map using one or two fingers. Note: iOS doesn't support the fullscreen feature. The fullscreen control is therefore not visible on iOS devices. Signed-in maps deprecated 6 October 2016 The signed-in feature is deprecated. Versions 3.27 and earlier of the Maps JavaScript API continue to support signed-in maps. A future version will no longer support signed-in maps, but will continue to support features that save a place to Google Maps using an info window or the SaveWidget. Read more about signed-in maps. Change in via waypoints in Directions service response 29 August 2016 The via_waypoints field in the Directions service response contains an array of waypoints that were not specified in the original request. The via_waypoints field will continue to appear in the draggable directions response, but is deprecated in the alternative route response. Version 3.27 will be the last version of the API that supports via_waypoints in alternative routes. The recommended approach is to request alternative routes, then display all routes as non-draggable plus the main route as draggable. Users can drag the main route until it matches an alternative route. The via_waypoints field is available on the resulting route (dragged by the user). 3.26 18 August 2016 Version 3.26 of the Maps JavaScript API is now available as the experimental version. (See the guide to API versioning.) Changes: - A new Street View renderer brings rendering improvements, including smoother transitions and animations, improved object modeling, better support for mobile, and clearer controls. See the details on the Google Geo Developers Blog. - The API now supports device orientation events in Street View, so users on mobile devices can look around by moving their phones. As a developer, you can enable or disable this feature. See the developer's guide for details. 3.25 25 May 2016 Version 3.25 of the Maps JavaScript API is now available as the experimental version. (See the guide to API versioning.) Internet Explorer 9 support ends 2 May 2016 As of April 30th, 2016, Internet Explorer 9 is no longer officially supported by the Maps JavaScript API. See the list of supported browsers. 3.24 14 April 2016 Changes: - You can now disable the clickability of map icons. A map icon represents a point of interest, also known as a POI. See the setClickableIconsmethod on google.maps.Map. 31 March 2016 Fixed: - Issue 9507: Links in Street View now work again in Safari. 28 March 2016 Fixed: - Issue 9394: Info windows automatically close when the user opens an info window for a base map icon, and vice versa. - Show a white Google logo when the base map is styled using the stylesproperty on the map (previously, the logo became white only when applying a style using a StyledMapType). 18 March 2016 Fixed: - Issue 9424: new LatLng({lat: 0, lng: 0}) - Fixed mouse panning with the new Street View renderer (with google.maps.streetViewViewer = 'photosphere'). 15 February 2016 Changes: - The ability to opt out of the new controls using google.maps.controlStyle = 'azteca'has been removed. 3.23 18 January 2016 Changes: - This release includes a new full-screen control for the map. Users can click the control to maximize the map so that it takes up the entire screen. By default, this control is turned off. You can enable it in MapOptions, and configure it using the FullscreenControlOptions. Its default position is RIGHT_TOP. - The full-screen control for Street View is enabled by default. You can disable it via StreetViewPanoramaOptionsand configure it using the FullscreenControlOptions. Its default position is now RIGHT_TOP. 4 January 2016 Fixed: - Issue 9009: When synthesizing mouse events from touch, use the left button instead of the middle button, for compatibility with jQuery. - Issue 4201: The API no longer makes use of eval(). Therefore, it is now possible to use the API without the unsafe-evalContent Security Policy directive. 21 December 2015 Changes: - Map Option to disable the sign in button for signed in maps (it will show the avatar for logged in users, it will still allow to sign in via signed in actions (e.g. starring) but it will no longer have the button to sign in on the map when this option is set). - The interface for text search requests has changed. The typesparameter is deprecated as of March 1, 2016, replaced by a new typeparameter which only supports one type per search request. Additionally, the establishment, food, and grocery_or_supermarkettypes will no longer be supported as search parameters (however these types may still be returned in the results of a search). Requests using the legacy typesparameter will be supported until March 1, 2017, after which all text searches must use the new implementation. 2 December 2015 Changes: - The Autocompleteconstructor verifies that it is given an input element. - Base map point of interest info windows show the same content in non-signed-in mode as signed-in mode. - Google Maps API externs for the Closure Compiler now specify a type (number or string) for enums. 25 November 2015 Changes: - Added .toJSON()methods to LatLngand LatLngBoundsobjects. These are intended to be used via JSON.stringify(). 19 November 2015 Changes: - White google logo for styled maps Fixed: - Issue 8674: Bug: Protect against img { max-width: 100%; } 3.22 7 January 2016 Fixed: - Issue 9009: When synthesizing mouse events from touch, use the left button instead of the middle button, for compatibility with jQuery. 10 November 2015 Changes: - The Directions service and the Distance Matrix service now return the predicted time in traffic (in response field duration_in_traffic) when the travel mode is driving. To receive predicted travel times, include a drivingOptionsobject literal in the request, specifying a current or future departureTime. You can also specify a trafficModelof optimistic, pessimistic, or best guess (default), to influence the assumptions used when calculating the travel time. For details, see the developer's guide for the Directions service and the Distance Matrix service. Note: The duration_in_trafficis available only to Google Maps Platform Premium Plan customers. Deprecated: - The durationInTrafficrequest field is now deprecated. It was previously the recommended way for Google Maps Platform Premium Plan customers to specify whether the result should include a duration that takes into account current traffic conditions. You should now use the drivingOptionsfield instead. 5 November 2015 Deprecated: - The AdSense library has been deprecated since May 2015, and is no longer available in the experimental version of the Maps JavaScript API. The library will be removed from the release and frozen versions of the API soon. An alternative solution is Google AdSense. See the guide to creating an AdSense ad unit. 22 September 2015 Changes: - Added support for place IDs when making directions and distance matrix requests: DirectionsRequest.origin, DirectionsRequest.destinationand DirectionsWaypoint.locationnow accept Place objects, and DistanceMatrixRequest.originsand DistanceMatrixRequest.destinationsnow accept an array of Place objects. 15 September 2015 Changes: - The default position and appearance has changed for many of the controls on the map and on Street View panoramas. The user experience is now consistent regardless of whether a map is using signed-in mode mode or not, and is also more consistent with the Google Maps website. If you want to continue using the earlier set of controls for a while, you can set google.maps.controlStyle = 'azteca'in v3.22. - The new Full Screen control in Street View allows the user to open the Street View panorama in fullscreen mode. Deprecated: - The Overview Map control is no longer available. - The Pan control on the map is no longer available. To pan the view, users click and drag, or swipe, the map. (Note that the Pan control in Street View remains available.) - The Zoom control is available in only one style, and google.maps.ZoomControlStyleis therefore no longer available. 1 September 2015 Changes: - Added LatLngBounds literals - Fixed issue with overly broad CSS classes - Improved tile loading after the map is resized Internet Explorer 8 support ends 31 August 2015 As of August 31st, 2015, Internet Explorer 8 is no longer officially supported by the Maps JavaScript API. See the list of supported browsers. For information on Microsoft's browser support policy, see the IEBlog post of August 7, 2014. 3.21 5 August 2015 Changes: - Performance improvement: only load visible tiles - Numerous docs improvements 21 July 2015 Changes: - Markers with Labels launched - Fixed: iOS7 Out of Memory Error for poly on very high definition screens - Touch Event Fixes on IE10+ - Error verification on Developer provided inputs now output to the console rather than throwing an error. 6 July 2015 Fixed: - Issue 8159: Bug: incorrect rendering of StrokePosition.OUTSIDE 17 June 2015 Changes: - Fixed: Issue 6321: Bug: "Uncaught TypeError: Cannot read property 'x' of undefined" only in Android/iOs browsers 2 June 2015 Changes: - Deprecated: CloudLayer, PanoramioLayer - Fixed: Issue 8098: Bug: Weighted Heatmap does not render correctly with one point 2 June 2015 Changes: - adds ability to Geocode a placeId to an address/latlng - returns placeIds via the Geocoding API 28 May 2015 Changes: - Fixed: Issue 6358: SVG Path Notation does not render correctly on HDPI devices 19 May 2015 Fixed: - Issue 7673: Controls loose position after map type dropdown used - Issue 7589: Pegman jumps when the map is resized. 3.20 24 April 2015 Fixed: - Increase in terrain max zoom and loading high dpi tiles even at max zoom. 13 April 2015 Fixed: - Issue 7820: Cursors not working on pages loaded from file:// - Issue 7591: Bug: StreetViewService.getPanoramaByLocation fails when radius argument is not an integer 25 March 2015 Fixed: - Issue 7733: Bug: KML Ground/Images- Overlays are Suddenly Very Low Resolution - Save Widget text better aligned with star icon 17 March 2015 Fixed: - Issue 7756: Bug: Safari 8 performance regression - Removed demographics layer - Improvements to InfoWindow chrome 17 February 2015 The current Maps JavaScript API experimental version (3.19) will become the release version. Version 3.17 will be removed. Requests for 3.17 or any prior version will now be served version 3.18. Versioning documentation is available at: Available versions after rollover: Experimental: 3.20 Release: 3.19 Frozen: 3.18 3.19 24 April 2015 Fixed: - Cursors in signed-in mode. 19 March 2015 Fixed: - Issue 7756: Bug: Safari 8 performance regression 20 January 2015 Fixed: - Issue 7475: Bug: phantomjs TypeError: Unable to delete property 13 on January 2015 Fixed: - tiles are now hidden from screen readers 17 December 2014 Fixed: - Issue 6917: Bug: Shapes don't respect map's 'draggable' property - Issue 7445: Bug: Presentation faults when using the signed-in feature of the v3 Maps API 02 December 2014 Fixed: - Issue 7390: Bug: weather.com hyperlinks not working - Issue 7376: Bug: WebGL has been turned off (now re-enabled) - ES6 naming clash with Symbol 25 November 2014 Fixed: - Issue 7333: Bug: caret of infowindow is broken in IE9 3.18 04 December 2014 Fixed: - Issue 7390: Bug: weather.com hyperlinks not working 18 September 2014 Fixed: - Issue 7136: Multiple marker titles not working in Firefox 09 September 2014 Fixed: - Issue 7098: Setting streetview POV heading throws an error 26 August 2014 3.18 released to experimental. 3.17 is now stable. 3.17 20 August 2014 Fixed: - Issue 6937: Regression in 3.17: Cannot read property "remove" of undefined (in Places) 12 August 2014 Fixed: - Issue 6968: Bug: Keyboard arrow keys not working with v=3.exp 31 July 2014 Added: - Map Panes given explicit documentation for how DOM events propagate through them. overlayMouseTargetpane added. 07 July 2014 Added: toGeoJsonadded to Data Layers and individual Data Layer Features, allowing export of geometry to GeoJSON. 24 June 2014 Added: place_id, a unique identifier for a place, added to the Places Library for Autocomplete and Place Details. overview_pathadded to DirectionsRoute, providing an encoded polyline representing the entire course of the route. 26 May 2014 Added: sensorparameter is no longer required in the Maps API URL. 20 May 2014 3.17 released to experimental. 3.16 15 April 2014 Fixed: - Markers now have opacity that matches other geometry types - 08 April 2014 Added: - Map pans on mouse move while drawing. Fixed: - Accept LatLngLiteral in more locations. - InfoWindow resizes itself when Roboto has finished loading ( Issue 5713) 31 March 2014 Fixed: - Semi-transparent KML layers no longer transparent on IE 8 ( Issue 6540) 26 March 2014 Fixed: - Removed event.returnValue calls in Chrome to prevent console warnings. - Pinch-to-zoom does not work in IE11 ( Issue 5747) 18 March 2014 Added: - Data Layer launched - 12 March 2014 Added: - LatLngLiteral support in most places where google.maps.LatLng is accepted - 24 February 2014 Added: - Support for ferries in Distance Matrix and Directions services. 17 February 2014 3.16 released to experimental. 3.15 03 March 2014 Fixed: - Reenable hardware acceleration in Chrome Windows and Linux now that Chrome bug is fixed () 10 February 2014 Fixed: - Disable all tile hardware acceleration on Chrome/Linux to work around larger Chrome hardware acceleration issue. 03 February 2014 Fixed: - Scroll wheel does not work in IE 11 ( Issue 5944) 29 January 2014 Fixed: - Disable all tile hardware acceleration on Chrome/Windows to work around larger Chrome hardware acceleration issue. ( Issue 6219) 22 January 2014 Fixed: - Temporarily disable hardware acceleration on Chrome/Windows when the drawing manager is loaded to work around Chrome issue: ( Issue 6224) 16 January 2014 Fixed: - Visual Refresh CSS made less specific to override fewer user-set styles. 27 November 2013 Fixed: - Creating marker after instantiating map throws 'contains' undefined error ( Issue 5798) 19 November 2013 Fixed: - Directions panel maneuver icons are not properly displayed in Firefox 3.14 10 September 2013 Fixed: - Links in official Google info windows do not open in new tabs/windows ( Issue 5794) 15 August 2013 - Visual refresh becomes the default map rendering mode in release version of the API. - DynamicMapsEngineLayer: feature reporting for vector, imagery, and KML layers 3.13 25 June 2013 Added: - DynamicMapsEngineLayer 11 June 2013 Added: - 'disableDefaultUI' option to StreetViewPanoramaOptions Fixed: - Bug: Custom Street View panoramas and 90 degrees down ( Issue 4875) 3.12 04 June 2013 Fixed: - Infowindow domready doesn't fire when visualRefresh=true ( Issue 5415) - Bug: visualRefresh info windows on iOS ( Issue 5396) 15 May 2013 Added: - Google Maps visual refresh: 29 April 2013 Fixed: - Removed markers stay on the map on Android and Dolphin browsers 3.11 19 February 2013 Fixed: - Changing DrawingMode while drawing causes error - Clicking on steps in the directions panel changes zoom 12 February 2013 Fixed: - Undraggable polygon can be dragged through a draggable polygon with touch input ( Issue 4868) - Transit icons incorrect in Route Alternatives Panel when travelMode switched ( Issue 4581) - panTo(latLng) does not always center map exactly at latLng under some conditions 29 January 2013 Added: - added StreetViewCoverageLayer for adding the coverage layer programmatically - Exposed StreetViewPov for StreetViewPanoramas Fixed: - Increased memory usage when using V3.8 of Google Maps JavaScript leads to a crash (observed in IE 9, Chrome, etc.) ( Issue 4162 ) - Drop-down (select) menu in InfoWindow won't follow map pan on Firefox 22 January 2013 Added: - draggable option to polylines, polygons, circles, rectangles - price_level field in PlaceResult 15 January 2013 Fixed: - Repeating Polyline icons are drawing incorrectly for some polylines ( Issue 4333) 07 January 2013 Fixed: - Scrolling the map scrolls the page too ( Issue 1605, Issue 3652) 17 December 2012 Added: - New Languages: Urdu & Icelandic Fixed: - blurred/hazy maps in IE9 after navigation ( Issue 3875) 17 December 2012 Added: - New Languages: Urdu & Icelandic Fixed: - blurred/hazy maps in IE9 after navigation ( Issue 3875) 10 December 2012 Added: - ability to load images with the crossorigin attribute set Fixed: - issues showing markers with remote images in closeups ( Issue 4616) - JS error when reshowing symbols on polylines on IE 7/8 03 December 2012 Fixed: - Markermanager library not working with the Maps API JS v3 ( Issue 4543) 27 November 2012 Added: - using High DPI canvas on High DPI devices for optimized markers Fixed: - MapTypeControl did not clear styles ( Issue 4588) 3.10 04 February 2013 Fixed: - Increased memory usage when using V3.8 of Google Maps JavaScript? leads to a crash (observed in IE 9, Chrome, etc.) ( Issue 4162 ) 25 September 2012 Added: - Indoor Street View - fixedRotation option to IconSequence - "Time in Current Traffic" to Directions 11 September, 2012 New: - Added StrokePosition to polygons, rectangles, circles - computeOffsetOrigin to geometry library - Four new languages to the API: Afrikaans, Amharic, Swahili, Zulu 28 August, 2012 Added: - types to Places API textSearch Fixed: - Symbol object cannot be passed to MarkerOptions under GWT - Regression: Pinch to zoom on iOS 5 when page scrolled ( Issue 4046) 14 August, 2012 Noticeable changes: - Modify the interface to KmlLayer to accept url as an MVC property, rather than a constructor argument. 3.9 10 September 2012 Fixed: - Symbol object cannot be passed to MarkerOptions under GWT - Regression: Pinch to zoom on iOS 5 when page scrolled ( Issue 4046) 9 August, 2012 Fixed: - Syntax error on Android 3.x 31 July, 2012 Noticeable changes: - Hide Street View overlay on custom projections Fixed: - Marker symbols do not fire events on safari when the scale is > 35 - click event not raised for markers on a custom map (map type + projection) 25 July, 2012 Fixed: - componentRestrictions on Autocomplete ( Issue 4302) 17 July, 2012 Fixed: - Text box is not clickable in InfoWindow on IE - bounds_changed should fire before zoom_changed ( Issue 1399) - Map option backgroundColor not preserved after Streetview invoked - Switching from Styled map to satellite unnecessarily loads the map tiles - Map draggable/scrollwheel properties ignored in drawing mode ( Issue 4012) - LatLngBounds should return a full longitude range when more than one copy of the world is shown - Don't draw empty shape when double clicking ( Issue 3964) - Superfluous marker events on click ( Issue 3911) Noticeable changes: - Renamed search and query endpoints to nearbySearch and textSearch 10 July, 2012 Noticeable changes: - Make google.maps.event.removeListener() accept null as argument 27 June, 2012 Added: - TRANSIT DirectionsMode: - Pagination, Reviews, textSearch to Places API: Noticeable changes: - Recognize 'transparent' as a color. 23 June, 2012 Added: - Symbols - HeatmapLayer - New styler options "weight" and "color": 15 May, 2012 Added: - Country restriction for Autocomplete (AutocompleteOptions.componentRestrictions) ( Issue 3899) - Regions and Cities type filters 3.8 1 May, 2012 Fixed: - Deleted markers sometimes reappear ( Issue 4087) - Marker shadows sometimes do not render ( Issue 3993) 23 April, 2012 Fixed: - Pegman stays on map with custom map type after closing Street View ( Issue 4076) - Removed animated markers cannot be re-added to the map ( Issue 4052) 18 April, 2012 Fixed: - Weather info window always shows raining icon on Firefox ( Issue 4063) Added: - orderBy, limit and offset for FusionTablesLayer ( Issue 3557) 10 April, 2012 Fixed: - Aerial imagery shown even if aerial not available for whole viewport ( Issue 3913) 3 April, 2012 Added: - WeatherLayer and CloudLayer ( Issue 3555) - DemographicsLayer Fixed: - Tile boundaries visible for polys on IE 9 Quirks - Markers jumping around when panning in Canvas Street View 28 March, 2012 Fixed: - Use lowercase tag names to be XHTML compliant ( Issue 3868) - Changed the zooming behavior for Apple trackpads to make it less sensitive ( Issue 2416) 20 March, 2012 Fixed: - Creating a marker with a shadow image that doesn't exist causes errors ( Issue 4014) Added: - Click-to-go/Click-to-zoom in Street View ( Issue 2447) Noticeable changes: - Disabled double-click to zoom by default in Street View 6 March, 2012 Fixed: - Error in OverviewMapControl when zooming in very quickly ( Issue 3882) - Error in IE when map div removed from page ( Issue 3971) - Scaled markers rendering incorrectly on IE < 9 ( Issue 3912, Issue 3908) Added: - opacity to GroundOverlay ( Issue 2767) - utc_offset and opening_hours to PlaceResult ( Issue 2431) - clickToGo option to StreetViewPanoramaOptions ( Issue 2447) 21 February, 2012 Fixed: - Blurry maps on IE 9 ( Issue 3875) - Polyline consisting of collinear edges in LatLng space incorrectly simplified ( Issue 3739) Added: - google.maps.geometry.poly.containsLocation() and isLocationOnEdge() ( Issue 1978) February 15, 2012 Fixed: - Regression: Scaled markers rendered incorrectly with invalid "size" parameter ( Issue 3908) - Map stuck in editing mode when setEditable(false) called while user is dragging control point ( Issue 3842) Noticeable changes: - Cross-fade between Street View panoramas. 3.7 February 7, 2012 Fixed: - Marker flickers at final position before drop animation ( Issue 3608) - Opening InfoWindow and setting zoom at the same time results in incorrect map center ( Issue 3738) - Ignore right click when dragging markers ( Issue 3237) - Marker title sometimes does not appear on Firefox ( Issue 3773) Noticeable changes: - Scaled markers now rendered in Canvas where available - setOpacity() for ImageMapType ( Issue 3125) - setOpacity() for GroundOverlay ( Issue 2767) - "tilesLoaded" event for ImageMapType ( Issue 1744) - stop() to MouseEvent, which stops event propagation ( Issue 2172) January 31, 2012 Fixed: - Fixed: Wrong Korean tiles after panning around the world ( Issue 2722) - Fixed: Cannot drop pegman accurately to display StreetView ( Issue 3861) - Added "visible" property for polys ( Issue 2861) - panTo and panBy animated even when viewports far apart (Regression) January 24, 2012 Fixed: - Mouse events bubble through an InfoWindow ( Issue 3573) - Enabled CSS transforms on IE9 - Added imageDateControl to StreetViewPanorama - Enabled Canvas Street View on IE 9, Opera and Safari/Windows January 16, 2012 Fixed: - Pegman should not be draggable in drawing mode - Correctly fire events, respecting zIndex of polys and other layers - Add KmlLayer "status" property ( Issue 3015) January 10, 2012 Fixed: - Show pegman if disableDefaultUI is true and streetViewControl is true December 7, 2011 Fixed: - Controlled access highways can be styled separately from highways (road.highway.controlled_access) - place_changed fired when user presses "Enter" on Autocomplete ( Issue 3407) November 28, 1011 Fixed: - Reposition Autocomplete when window resized. "resize" event can be triggered on Autocomplete object. November 22, 2011 Fixed: - InfoWindow content size now computed taking into account cascading styles - Aerial map rotation control doesn't match the heading on map creation November 7, 2011 Noticeable changes: - New visual style of default controls - Editable shapes (polygons, polylines, circles, rectangles) - New DrawingManager for adding new overlays - New PlaceResult fields: website and international_phone_number - New ElevationResult field: resolution Fixed: - Start and end icons in directions results now render with transparent background in IE 7+ 3.6 October 31, 2011 Fixed: - Error when Maps API used with Prototype Library on IE7 October 25, 2011 Fixed: - Errors when KmlLayer map changed before layer finished loading - Memory leak in IE when adding and removing polys October 18, 2011 Noticeable changes: - Added keyword field to PlaceSearchRequest - Removed road lines from Street View October 10, 2011 Fixed: - Address is too long in the Street View preview September 27, 2011 Fixed: - Overview map control updates center and zoom together if zoom has changed September 27, 2011 Fixed: - Double-fetch of initial tiles in Chrome - Memory leaks while panning - Don't drop Street View pegman when panning to area out of coverage - Regression: GroundOverlays that cross -180 longitude disappearing - Regression: Map center incorrect when opening info window while panning and zooming Noticeable changes: - Show Street View previews while pegman is dragged September 20, 2011 Fixed: - Memory leak in Chrome/Windows when rendering markers using Canvas - Marker rendering for aerial view with heading of 90 or 270 degrees Noticeable changes: - Default Google map types cannot be accessed through the map type registry (prevents access to map tiles) - Rotation animations when leaving and entering aerial imagery September 12, 2011 Resolved issues: - Marker stuck in raised position after tooltip appears on Firefox 4+ ( Issue 3334) Noticeable changes: - GeocoderResult.formatted_address not documented September 6, 2011 Fixed: - Select element (drop down) info window does not follow map pan in Firefox - Tiles in Korea on some mobile devices August 29, 2011 Noticeable changes: - Allow custom controls to have a higher z-index than API controls - Links take users to correct language version of maps.google.com - Do not open an InfoWindow over a KML feature if there is no info window html, name, or description. - Fixed Regression: high DPI tiles broken August 24, 2011 Resolved issues: - Hardware acceleration disabled for Chrome/Mac: fixes marker rendering issue and overlay clicking ( Issue 3544, Issue 3551) - Pinch-to-zoom fixed for Android when the page has been scrolled ( Issue 3373) Noticeable changes: - Business icons are now on by default. August 17, 2011 Noticeable changes: - Clickable map icons for points of interests. - Styles can be set in MapOptions and applied across all default map types. - Pegman appears on custom map types unless explicitly disabled. 3.5 August 10, 2011 Fixed: - Markers stuck in drag up position when dragged to horizon in Street View - Street View not resizing when map or window resized - Street View with a shared InfoWindow crashes browser August 2, 2011 Resolved issues: - Hyperlinks in info window on IE don't work ( Issue 3503) - Scale control does not print ( Issue 2966) - Regression: Error when using OverviewMapControl with styled maps ( Issue 3489) Noticeable changes: - Context menu on most controls now disabled on right click. - Street View Panorama/Map inside an InfoWindow does not pan when mousing over InfoWindow - Now possible to scroll an InfoWindow on iPad July 18, 2011 Resolved issues: - Event LatLng incorrect when page is scrolled on iOS >= 4.1 ( Issue 3373) July 11, 2011 Resolved issues: - Fixed: <select>not clickable within infowindow on touch device ( Issue 3232) - Fixed: Click not fired on map after right click if MapOptions.draggable is false ( Issue 3071) Noticeable changes: - Markers with same z-index are now ordered consistently across tile boundaries - Now possible to to scroll in infowindow on iOS - Markers and polys are now repainted when the map’s projection changes July 5, 2011 Resolved issues: - Fixed: Marker cursor not displayed when map is not draggable ( Issue 3120) Noticeable changes: - Added ability to style the Places Autocomplete control and dropdown - Places Autocomplete widget preserves description returned by server after user selects a suggestion June 30, 2011 Resolved issues: - Fixed: Initial map tiles would be double fetched - Fixed: maptypeid_changed event was fired multiple times when the map was created after 3.4 ( Issue 3051) Noticeable changes: - Places autocomplete was changed to append to the body rather than the inputs parent - BOTTOM_RIGHT now positions correctly June 8, 2011 Resolved issues: - Fixed: Zoom no longer animated if change in zoom level greater than 2 ( Issue 3033) Noticeable changes: - Fixed: OverlayView.set('map', foo) is now the same as OverlayView.setMap - Removed GeocoderRequest’s "language" option May 17, 2011 Resolved issues: - Fixed: Streetview rendering issue in IE7 ( Issue 3272) Noticeable changes: - Enabled fade transitions for map tiles when loading and changing map type. May 7, 2011 No noticeable changes. 3.4 May 7, 2011 No noticeable changes. May 6, 2011 Resolved issues: - Fixed: Street view panorama does not display in IE7 ( Issue 3272) - Fixed: Semi-transparent PNG with ImageMapType loses transparency in IE7 and IE8 ( Issue 3275) Noticeable changes: - Distance Matrix Service April 14, 2011 Resolved issues: - Fixed: Support named CSS colors for poly strokeColor and fillColor - Fixed: Polygon not visible if the strokeOpacity is set to 0.0 ( Issue 3241) - Fixed: Errors in IE8 upon panning with AdUnit visible ( Issue 3159) - Allow Terrain and Hybrid map without Map and Satellite in the map type control ( Issue 3089) - High DPI tiles are loaded for high DPI screens ( Issue 2614) Noticeable changes: - Aerial tilt defaults to 45 degrees when aerial imagery is enabled and available - Pinch behavior has been improved on the iPad - Renamed DirectionsTravelMode and DirectionsUnitSystem to TravelMode and UnitSystem (old names remain backwards compatible) April 14, 2011 Resolved issues: - Fixed: Marker icon and shadow no longer transparent on IE6 - Fixed: Markers no longer flicker on zoom - Fixed: Rendering of polygons across tile boundaries near the north/south poles Noticeable changes: - Polylines and Polygons now rendered in Canvas where possible. - LatLngs for events on polylines and polygon borders now snap to the nearest point on the line. April 5, 2011 Resolved issues: - panTo doesn't animate on touch devices ( Issue 3066) - Marker.MAX_ZINDEX is undefined ( Issue 3184) March 28, 2011 Resolved issues: - Can't change heading in Street View when in satellite mode ( Issue 3174) - Map broken when VML disabled ( Issue 3119) Noticeable changes: - Street view road overlay now shows in obliques mode March 22, 2011 Resolved issues: - Pegman shows in custom map types ( Issue 3154) Noticeable changes: - Fixed: Giant markers are clipped at tile boundaries - Fixed: Street view overlay in obliques mode - Fixed: Overview map control shows in print mode March 17, 2011 Resolved issues: - 45 Degree imagery ( Issue 2412) - Overview map control ( Issue 1470) - Support animated gifs - 'optimized' option added ( Issue 3095) - Marker.getVisible() returns undefined ( Issue 3114) Noticeable changes: - Fixed: Circles and rectangles ignore zIndex - Fixed: Mouse events trigger events twice when Marker is animating - Fixed: Styled maps do not use styles if map type added to registry after map type id set - Fixed: Non-styled map types adopt the style of a Styled Map Type - Markers now fire MouseEvents, not DOM events March 11, 2011 Resolved issues: - Double clicking on a marker no longer zooms the map ( Issue 3090) - Anchor point with custom marker shadow now works ( Issue 3112) Noticeable changes: - Panoramio Layer - Directions marker z-indexing fixed - Default shadow position fixed March 2, 2011 Noticeable changes: - Markers now rendered in Canvas/VML where available - Bigger click targets for polylines, polygons, KML on touch-enabled devices February 22, 2011 Resolved issues: - InfoWindow anchor exposed via 'anchorPoint' MVC property ( Issue 2860) - Fixes Hybrid at zoom level 0 and 1 ( Issue 3062) - Circle/Rectangle fixed if added/removed quickly to map ( Issue 3052) - draggable: false fixed on touch devices ( Issue 3044) February 18, 2011 Resolved issues: - Markers in Street View are only shrunk, not enlarged ( Issue 2969) - Draggable directions now work when routeIndex is not 0 ( Issue 2995) - Conflict between Flash and map dragging fixed ( Issue 2956) - When mapTypeId is changed in maptypeid_changed listener, the map type control is now consistent Noticeable changes: - Labels are now on by default when Satellite mode clicked from map type control February 8, 2011 Noticeable changes: - New hierarchical map type controls - they're touch-friendly! - New logo (Issue 2574) - MarkerShape.coord renamed to MarkerShape.coords to match HTML <area>coords attribute 3.3 March 17, 2011 Resolved issues: - Polygon not rendering February 8, 2011 Resolved issues: - Tile requests not being cancelled in Chrome ( Issue 3011) - rightclick event not being fired in FF/Mac ( Issue 2920) - Text rendering issues in Safari/Mac ( Issue 3024) - Directions maps not printing in IE correctly January 21, 2011 Noticeable changes: - MVCArrays are now accepted in spherical geometry library - Fix cross showing under a marker while being dragged in IE6 - z-index is no longer set on the map container div January 17, 2011 Resolved issues: - Fixed an issue where incorrect timing for zoom_changed caused incorrect results for fromLatLngToContainerPixel ( Issue 2539) January 6, 2011 Resolved issues: - Geometry library added - provides spherical geometry and polyline encoding utilities ( Issue 2540, Issue 2595, Issue 2246) - Ability to set the min/max zoom level on the map ( Issue 1624) Noticeable changes: - Zoom and pan controls have been separated (no more navigation control). These can be configured separately. - New touch-friendly zoom control on touch devices - Contents of an MVCArray can now be cleared by calling clear() - Fixed memory leak when adding and removing markers in IE8 - Faster rendering of polys with improved simplification algorithm December 23rd, 2010 Resolved issues: - Waypoint click handlers fixed in draggable directions ( Issue 2871) December 21st, 2010 Resolved issues: - Added momentum to the map when dragging ( Issue 2592) - Fixed CSS error in Street View ( Issue 2666) - Fixed JS error when showing a InfoWindow with a Map width of 0 in IE ( Issue 2536) Noticeable changes: - On touch devices, we will now display a touch-friendly zoom control whether ZOOM_PAN or SMALL navigation control is requested. If the device supports multi-touch in the browser, no zoom control will be displayed, as zooming is accomplished through pinching. December 9th, 2010 Resolved issues: - InfoWindows now print nicely in IE ( Issue 1343) - Fixed opacity in IE8 for ImageMapType Noticeable changes: - A cross will be displayed beneath Markers with a custom icon when dragged, and raiseOnDrag is enabled December 5th, 2010 Resolved issues: - Provide an interface for discovering the maximum zoom level at a given location for satellite imagery. ( Issue 2049) - Add an option (raiseOnDrag) to enable/disable animations when a marker is dragged. ( Issue 2910) Noticeable changes: - Markers now lift when dragged, and bounce when dropped. - Marker animations can be controlled programmatically with the setAnimation function. November 28th, 2010 Resolved issues: - maptypeid_changed no longer fires twice ( Issue 2449) - The "size" property of a MarkerImage object is now accessible ( Issue 2465) - Marker shape references the icon rather than the sprite ( Issue 2629) - Panning the map on marker drag has been improved for smaller maps ( Issue 2868) - Maps can now be printed without enabling printing of background images - Fixed bug where draggable direction markers were draggable when 'draggable' was set to false November 16th, 2010 Changed issues: - Issue 2076: Provide a way to give a InfoWindow to the DirectionsRenderer ( Issue 2076) - Issue 2524: Implement streetViewControlOptions - Issue 2557: Add disable zoom to Street view Noticeable changes: - Fixed bug that caused a new window to open in FF when a marker if shift-clicked. - Letter marker icons were lost when markerOptions were specified with the DirectionsRenderer 3.2 November 11th, 2010 Noticeable changes: - Fixed bug where polygons were clipped/truncated with RTL on IE7/IE8 - Fixed bug that caused checkboxes to be hidden on Safari 5 because of 3d transformations - Geodesic polylines that spanned the equator lacked detail - Added control positions for LEFT_CENTER, LEFT_BOTTOM, RIGHT_CENTER, RIGHT_BOTTOM. - Renamed control positions LEFT to LEFT_TOP, RIGHT to RIGHT_TOP, TOP to TOP_CENTER and BOTTOM to BOTTOM_CENTER October 11th, 2010 Changed issues: - Issue 2478: Streetview - Compass Misalignment/Missing in some browsers - Issue 2528: ImageMapTypeOptions opacity broken in IE8 - Issue 2661: Infowindow - Right click on an input field does not display context menu - Issue 2741: Marker placement not working on iOS 4 following map pan Noticeable changes: - Street View is enabled by default - Fixed bug where 'this' wasn't being passed to .getTileUrl - InfoWindow domready event is now triggered after the window is visible September 28th, 2010 Changed issues: - Issue 2712: Memory Leaks (add/remove markers, show/hide markers, zoom/pan map) Noticeable changes: - V2 and V3 maps work better when both are on the same page - Fixed error in HTML5 Street View when dragged quickly downwards September 16th, 2010 Changed issues: - Issue 2701: Initial Street View Navigator Control Heading Doesn't Follow POV September 14th, 2010 Changed issues: - Issue 157: Support draggable driving directions - Issue 1852: 'rightclick' event on a google.maps.Marker is fired up without an argument - Issue 2673: Pegman disappears after position change Noticeable changes: - Marker performance has been improved August 31st, 2010 Changed issues: - Issue 2658: Tall Info Windows Noticeable changes: - Geodesics have been improved for higher zooms August 24th, 2010 Changed issues: - Issue 2648: Trigger map resize event causes error in Firefox in V3.2.1 Noticeable changes: - When zooming in or out repeatedly (such as when using a scroll wheel), we now load fewer tiles from the intermediate zoom levels. August 16th, 2010 Changed issues: - Issue 2416: Apple Magic Mouse Panning and Zooming too Sensitive - Issue 2606: Setting draggable: false on a map disables links - Issue 2640: Memory not cleared with browser refreshes / onunload (IE) Noticeable changes: - StreetView markers are scaled according to their distance - Zoom slider updates on pan August 9th, 2010 Noticeable changes: - Deprecated properties KMLMouseEvent.position and FusionTablesMouseEvent.position have been removed. Use .latLng instead - Deprecated property StreetViewService.getNearestPanorama has been removed. Use .getPanoramaByLocation instead 3.1 September 28th, 2010 Noticeable changes: - Fixed issue where directions with the same origin and destination threw a JS error August 9th, 2010 Noticeable changes: - Changing an OverlayView's Map has been fixed - Calling GroundOverlay.setMap(null) is fixed - IE no longer leaks memory zooming/panning August 5th, 2010 Changed issues: - Issue 2588: Calling setVisible(false) on Panorama object with a listener attached causes a JS error in IE Noticeable changes: - Markers disappearing in IE6 on zoom change has been fixed Jul 29, 2010 Changed issues: - Issue 2337: Lost Polyline - Issue 2497: Clickable option is not honored for Circle Noticeable changes: - Custom panorama 'originHeading' has been deprecated in favour of 'centerHeading' - Korean hybrid tiles now display roads - Clicks now pass though non-clickable polygons on the map Jul 22, 2010 Changed issues: - Issue 1856: Support polygon rendering in Opera - Issue 2159: Dragend event is triggered after zoom using the scrollwheel - Issue 2385: At deeper zoom levels, GroundOverlay goes black in Internet Explorer - Issue 2337: Lost Polyline - Issue 2427: Dragging with an info window open on auto-pan causes "hanging" markers - Issue 2493: Markers aren't correctly cleared in IE7 - Issue 2500: Cropped MarkerImage When Using !Marker.setIcon(<scaled MarkerImage>) for Existing Marker - Issue 2549: CSS for Google's dropdown menu generates warning Noticeable changes: - A click event is no longer fired when a polygon is dragged - Clicking on a Form select element that expands outside of a InfoWindow no longer fires a map click - Clicking on a KML overlay no longer fires a map click event - Streetview is now automatically panned to fit an InfoWindow on screen - KML and FusionTables MouseEvent LatLng changed from 'position' to 'latLng' - Android zoom controls no longer pass click to the map Jun 17, 2010 Changed issues: - Issue 2346: Option to disable smooth animation Noticeable changes: - Fixed marker flicker bug - InfoWindow domready event triggering has been improved - DirectionsRoute.bounds is now exposed Jun 11, 2010 Changed issues: - Issue 2389: Street View doesn't work in IE7 - Issue 2460: Bug in pegman positioning - Fixed marker memory leak - First geocode latency has been improved - Provided access to the StreetViewService 3.0 May 18, 2010 Changed issues: - Issue 2037: GPolylineOptions geodesic - Fixed bug to correctly Google copyright on custom map type - Added geodesic property to Polygons and Polylines - Added clickable option to Polygons and Polylines - Added clickable option to GroundOverlay May 13, 2010 - Issue 1724: Incorrect infoWindow size/margins when setting the content through an element, rather than string Noticeable changes: - Fixed bug where ground overlays were cropped prematurely when crossing the dateline - Marker setIcon now works with \ in the url - Polygon and Polyline mouseout event triggers in IE - Changing a marker icon no longer flickers May 7, 2010 - Issue 1458: Feature request: KML support in Google API v3 - Issue 1658: Add Traffic Overlay - Issue 2209: Stack overflow - Issue 2254: Multiple calls "setMap(gMap)" and "setMap(null)" on Circle object changes it's stroke and fill opacity Noticeable changes: - Added KML and GeoRSS Layers - Added Ground Overlays - Added new layers: Traffic and Bicycling - Added "suppressBicyclingLayer" property against DirectionsRendererOptions - Fixed bug to ensure zoom layer is correctly referenced when MapType changes - Renamed DirectionsResult property "start/end_point" to "start/end_location" - Renamed DirectionsLeg property "start/end_geocode" with "start/end_address" - Renamed DirectionsRoute "optimized_waypoint_order" property with "waypoint_order" - Removed support for old directions property names (setTripIndex, getTripIndex, hideTripList, provideTripAlternatives) and continue logging warnings. Also removes conversion of routes to legs to steps and trips to routes to steps. - Updated GeocoderGeometry.latLng to GeocoderGeometry.location April 30, 2010 - Issue 2230: Map initializes without intended custom projection Noticeable changes: - Clicking on the map now focuses the keyboard - iPad pinch-to-zoom is now supported April 26, 2010 - Issue 1826: Add mouseover and mouseout events on Polygons and Polylines - Issue 2177: map.setZoom(z) not working properly during the 'maptypeid_changed' event - Issue 2247: hideRouteList option on DirectionsRenderer doesn't work as expected Noticeable changes: - Fixed bug where custom icons disappear off the bottom of the map during pan April 13, 2010 - Issue 2275: MarkerImage cannot be reused - Issue 2181: When you add a google.maps.Marker and then use your mouse scroll wheel to zoom in or out, the marker is hidden April 11, 2010 No noticeable changes or changed issues. April 5, 2010 - Issue 1976: Custom icon & draggable marker issues - Issue 2107: Draggable marker vanishes when dragged off the map - Issue 2181: The projection property of the basemaps is not present Noticeable changes: - Exposed the directions overview polyline in DirectionsRoute as overview_path. - Exposed the Map's current projection as a read only property. Mar 23, 2010 Noticeable changes: Mar 15, 2010 Noticeable changes: - Polygons now correctly repaint when styles are changed. - Deprecated warning messages via console.log are given when old style 'Directions' are used. Mar 10, 2010 - Issue 1801: Polyline/polygon zIndex - Issue 2144: DirectionsRequest should provide avoidHighways option - Issue 2207: Bug: Polyline gets filled in IE - Issue 2113: Polylines broken in FF on high zoom levels after Jan release Noticeable changes: - Added bicycling directions! - Added new DirectionsRequest options: avoidHighways avoidTolls optimizeWaypoints - Improved Polygon/Polyline rendering speeds - Renamed the following Directions objects. Old names remain supported. - DirectionsRoute to DirectionsLeg - DirectionsTrip to DirectionsRoute Mar 3, 2010 - Issue 2136: Obfuscate Properties in google.maps.MarkerImage which should not be referenced Noticeable changes: - Map jump-jump bug fixed. - Zooming twice in succession has been improved. - Marker drag event .latLng is no longer obfuscated. Feb 26, 2010 - Issue 1651: mousemove / mouseover / mouseout for map canvas - Issue 2142: DirectionsRendererOptions should allow users to suppress markers entirely - Issue 2148: event.latLng Missing! - Issue 2109: Bug: NavigationControlStyle.ZOOM_PAN doesn't display correctly in IE8 - Issue 2153: MVCArray.push() does not return new length Noticeable changes: - Add opacity to ImageMapType. - Fixed the bug where rightclick on a rectangle/circle was not being fired. - Info window's content events are no longer being removed on hide. Feb 8, 2010 - Issue 2135: Bug: If you reuse a Polygon's MVCArray in a Polyline, the Polyline is closed. Noticeable changes: - Added new Rectangle class - Added new Circle class - Fixed memory leak when creating then removing a marker. - Stopped annotating the MVCArray of LatLngs to close Polygons, as that causes Polylines which share the same MVCArray to be closed too (see Issue 2135) - Fixed a bug which sometimes hid onscreen markers if the map's zoom was set to its current value. - Fixed ImageMapType to display correctly on Android. - Changed the polygon clipping scheme to allow polygons which contain the north or south pole. - Increased the latitude range of MercatorProjection to the maximum possible subject to floating point precision. January 28, 2010 - Issue 1367: Feature Request: Expose LayoutManager for developers to place DIVs in the "control flow" - Issue 1916: Feature Request: Add ability to scale MarkerImage - Issue 1443: extend() and union() should return the LatLngBounds object - Issue 1997: Documentation of 'size' MapOption - Issue 2074: Map doesn't render when the world map fits the exact dimensions of the map container Noticeable changes: - Added support for Indic languages: - bn, gu, kn, ml, mr, ta, te - Added new static methods to the event namespace: - addListenerOnce - addDomListenerOnce - Added new 'encoded_lat_lngs' property to the DirectionsStep object to expose the set of latlngs in compressed ASCII format - Improved performance by removing offscreen marker DOM elements - Fixed panning bug in Google Chrome - Fixed pinch-zooming bug on the iPhone January 19, 2010 - Issue 1422: Feature Request: Let developers create custom map types - Issue 1523: Feature request: fromContainerPixelToLatLng (and vice versa) - Issue 1443: extend() and union() should return the LatLngBounds object - Issue 1960: bug: incomplete information using provideTripAlternatives - Issue 1675: Tutorial Documentation Error - Issue 1676: Tutorial Documentation Error - Control Options - Issue 1856: Polygons not rendering in Opera!!! - Issue 1954: The Bulgarian language localization is not correct. - Issue 1976: Bug: Custom icon & draggable marker issues - Issue 2063: Variable Name Collisions when Minifying OverlayView Subclasses Noticeable changes: - Released support for custom map types, including base map types, overlay map types, and projection: - New ImageMapType object to support custom map tiles December 17, 2009 Noticeable changes: - Fixed initial jerk occurring before a map panning animation begins. - Fixed map jumping to different location when zooming past the max zoom level using Scrollwheel or DoubleClick. - Copyright, MapType and Navigation controls resizes to suit map size. - Old style getters, setters, and event names are officially deprecated and no longer defined. December 10, 2009 - Issue 1820: Zoom in with scroll wheel seems to zoom beyond max zoom level and "skip/jump" the map's position - Issue 1743: Scroll zooming causes the map to move to a completely different location Other noticeable changes: - Added new method panToBounds. - Added new map animation. Affects dragging, panning, zooming, and calls to setCenter/setZoom in all browsers. - Added a console log warning message if deprecated methods are being used. November 25, 2009 - Issue 1696: Feature Request: map control placement - Issue 1909: getBounds corruption after map center changed - Issue 1938: map.setOptions fails to recognise control options Other noticeable changes: - Added RTL support to enable the following languages: Arabic, Farsi, Hebrew - Exposed lat_lngs property for DirectionsResult steps. November 11, 2009 - Issue 1742: Custom icon marker always appears on top of default marker Other noticeable changes: - Added support for three new languages. - Basque - Galician - Tagalog October 29, 2009 - Issue 1421: Feature Request: Add a Directions class to API v3 Other noticeable changes: - Reference documentation updated with DirectionsRenderer and DirectionsService October 26, 2009 - Issue 1647: Feature Request: Provide an event for infowindow dom ready - Issue 1710: ability to cancel user zoom event on double click Other noticeable changes: - JS Error is thrown when invalid arguments are passed into new google.maps.LatLng() - Fixed bug: static map was loading twice on map load October 15, 2009 - Issue 1525: get_bounds error at low zoom levels - Issue 1757: fitBounds() doesn't work across the 180 meridian - Issue 1790: map.setOptions cannot set the cursor - Issue 1767: BugProblem with event propagation Other noticeable changes: - Documentation updates: - New method exposed: LatLngBounds.isEmpty() - Sorted all methods, events, properties, and constants by name - Fixed incorrect types in polyline and polygon option properties September 28, 2009 - Improvements to poly rendering performance. - Fixed issue with JS warning for SVGView. September 22, 2009 - Issue 1420: Feature: Add Polyline class to API - Issue 1371: map.bounds_changed event fires repeatedly when the map is moving - Issue 1700: Incorrect location in click after zoom out in Firefox 3.5 - Issue 1702: Incorrect latLng reported in click & dblclick events when there is a scroll offset in a parent element - Issue 1723: Map jumps when dragging on iPhone Other noticeable changes: - Launched polylines and polygons! These allow you to draw lines or filled regions on the map, specify stroke and fill styles, and support most mouse events (i.e. no mouseover yet). They work in all supported browsers (IE6.0+, Firefox 2.0+, Safari 3.1+, Chrome), which includes supported mobile devices. - Added two new sections to the developer guide for Polylines and Polygons. - Added two new classes to the API Reference for Polylines and Polygons. - Added new Map event "idle", fired when the map hasn't moved for a bit. Resolves Issue 1371. - Fixed incorrect LatLng values being returned from the click events. - Fixed pinch zoom bug on the iPhone. See Issue 1723. September 10, 2009 - Issue 1659: Incorrect latLng reported in click & dblclick events after panning, Firefox 3.5 - Issue 1621: getting wrong location after click on map in ie8 with dooctype - Issue 1642: InfoWindow overflow:auto - Issue 1531: height of infoWindow grows with each open Other noticeable changes: - Syntax modified for get/set methods and event names as specified below. Old syntax remains supported to stay backwards compatible. For example: - set_funBoat() => setFunBoat() - get_funBoat() => getFunBoat() - funBoat_changed => funboat_changed - Added new method "onAdd" to the OverlayView interface, which gets called when panes and projection are first initialized. This addresses Issue 1377. - OverlayView interface's "remove" method has been renamed to "onRemove". Old name remains supported to stay backwards compatible. September 2, 2009 - Issue 1525: get_bounds error at low zoom levels - Issue 1596: Panning past the northern or southern edge of the world returns an error - Issue 1643: Bug: Map scroll wheels unnecessarily - Issue 1379: I can't see Korea map data in V3 Other noticeable changes: - Enabled continuous scrollwheel and double-click zoom on Chrome, Safari 4, and Firefox 3.5 - Improved map dragging performance - Double-click now centers the map after zooming August 24, 2009 - Issue 1567: map.set_center to a nearby location does not work. - Issue 1605: Scrolling the map scrolls the page too - Issue 1467: Pan Map Function + Animation Other noticeable changes: - Enabled scrollwheel zoom by default. To disable it, set Map option's scrollwheel property to false. - Documentation updated to include panTo and panBy functions. August 14, 2009 - Issue 1575: Bug in draggable markers method set_draggable() Other noticeable changes: - Scrollwheel zoom has been enabled. - Fixed issue affecting iPhones where map jumps occur after drag. August 4, 2009 - Issue 1393: Allow draggable markers - Issue 1448: Bug: API v3 needs a checkResize() function (or equivalent) - Issue 1404: Error with cursor in Opera - Issue 1514: MapType select arrow displayed incorrectly with HTML 4.01 strict - Issue 1426: InfoWindow z-index control Other noticeable changes: - Pinching and dragging on the iPhone should be more robust. - Added zIndex setters and getters to InfoWindow objects. Documentation changes: - Marker get_draggable and set_draggable methods added - Marker drag, dragstart, dragend, draggable_changed events added - Marker draggable property added - Info Window get_zIndex and set_zIndex methods added - Info Window zIndex_changed event added - Info Window zIndex property added July 13, 2009 - Issue 1415: infowindow content: selectable true/ false - Issue 1432: Mouseout event doesn't trigger after set_icon is called - Issue 1365: Map Type Controls render incorrectly with strict doctype Other noticeable changes: - Developers no longer need to specify size for a MarkerImage, the API will detect it when not provided. On a related note, the size, anchor, and origin arguments for MarkerImage are all optional. - Developers no longer need to call OverlayView.call(this) in an OverlayView subclass's constructor. - The OverlayView "changed" methods were removed from the interface. This should not affect developer's code, as these methods weren't actually used before. - The partialmatch option was removed from Geocoder Request objects. If a developer continues to pass it, it will have no effect on the query. June 12, 2009 Changed issues: - Issue 1363: Bug: Map click events are not dispatched on the iPhone Other noticeable changes: - Large zoom control is clickable in all browsers - Infowindow "clears" the large zoom control, positioning itself fully inside the map and controls - Mobile copyright uses pretty images - Tiles load from the center instead of the top left - Users can no longer select the text of the map type buttons, or any of the control images - The main library is smaller by ~1.9 KB
https://developers.google.com/maps/documentation/javascript/releases?hl=lv
CC-MAIN-2021-39
refinedweb
12,586
51.99
import "text/scanner" Package.: } const src = "%var1 var2%" var s scanner.Scanner s.Init(strings.NewReader(src)) s.Filename = "default" for tok := s.Scan(); tok != scanner.EOF; tok = s.Scan() { fmt.Printf("%s: %s\n", s.Position, s.TokenText()) } fmt.Println() s.Init(strings.NewReader(src)) s.Filename = "percent" // treat leading '%' as part of an identifier s.IsIdentRune = func(ch rune, i int) bool { return ch == '%' && i == 0 || unicode.IsLetter(ch) || unicode.IsDigit(ch) && i > 0 } for tok := s.Scan(); tok != scanner.EOF; tok = s.Scan() { fmt.Printf("%s: %s\n", s.Position, s.TokenText()) } Output: default:1:1: % default:1:2: var1 default:1:7: var2 default:1:11: % percent:1:1: %var1 percent:1:7: var2 percent:1:11: % const src = ` // Comment begins at column 5. This line should not be included in the output. /* This multiline comment should be extracted in its entirety. */ ` var s scanner.Scanner s.Init(strings.NewReader(src)) s.Filename = "comments" s.Mode ^= scanner.SkipComments // don't skip comments for tok := s.Scan(); tok != scanner.EOF; tok = s.Scan() { txt := s.TokenText() if strings.HasPrefix(txt, "//") || strings.HasPrefix(txt, "/*") { fmt.Printf("%s: %s\n", s.Position, txt) } } Output: comments:2:5: // Comment begins at column 5. comments:6:1: /* This multiline comment should be extracted in its entirety. */ // tab-separated values const src = `aa ab ac ad ba bb bc bd ca cb cc cd da db dc dd` var ( col, row int s scanner.Scanner tsv [4][4]string // large enough for example above ) s.Init(strings.NewReader(src)) s.Whitespace ^= 1<<'\t' | 1<<'\n' // don't skip tabs and new lines for tok := s.Scan(); tok != scanner.EOF; tok = s.Scan() { switch tok { case '\n': row++ col = 0 case '\t': col++ default: tsv[row][col] = s.TokenText() } } fmt.Print(tsv) Output: [[aa ab ac ad] [ba bb bc bd] [ca cb cc cd] [da db dc dd]] const ( ScanIdents = 1 << -Ident ScanInts = 1 << -Int ScanFloats = 1 << -Float // includes Ints and hexadecimal floats '"'. Use GoTokens to configure the Scanner such that it accepts all Go literal tokens including Go identifiers. Comments will be skipped. The result of Scan is one of these tokens or a Unicode character. GoWhitespace is the default value for the Scanner's Whitespace field. Its value selects Go's white space characters. TokenString returns a printable string for a token or Unicode character. type Position struct { Filename string // filename, if any Offset int // byte offset, starting at 0 Line int // line number, starting at 1 Column int // column number, starting at 1 (character count per line) } A source position is represented by a Position value. A position is valid if Line > 0. IsValid reports whether the position is valid., or to obtain the position immediately // after the most recently scanned token. Position // contains filtered or unexported fields } A Scanner implements reading of Unicode characters and tokens from an io.Reader. Init initializes a Scanner with a new source and returns s. Error is set to nil, ErrorCount is set to 0, Mode is set to GoTokens, and Whitespace is set to GoWhitespace.. Peek returns the next Unicode character in the source without advancing the scanner. It returns EOF if the scanner's position is at the last character of the source. Pos returns the position of the character immediately after the character or token returned by the last call to Next or Scan. Use the Scanner's Position field for the start position of the most recently scanned token.. TokenText returns the string corresponding to the most recently scanned token. Valid after calling Scan and in calls of Scanner.Error. Package scanner imports 6 packages (graph) and is imported by 1332 packages. Updated 2020-06-02. Refresh now. Tools for package owners.
https://godoc.org/text/scanner
CC-MAIN-2020-29
refinedweb
624
60.61
Introduction: Learning Java: Your First Program! Computer Programming is a fun, interesting pastime that everybody should learn. With computer programming, or coding, people give a computer basic, simple commands which add up to create a complex, useful or entertaining program that can accomplish anything from math problems to playing a high-definition video. This coding is accomplished through a programming language. In this Instructable, I am guiding you through the process of downloading one of the most popular free programming engines: Java! In addition, I will teach you how to code your first program. Let's get started, shall we? Step 1: What You Will Need: 1) A Windows Running PC (Any Windows Software Will Work: Slight Adaptations Might be Needed for Software other than XP. See links below.) 2) An Internet Connection 3) Administration Abilities Step 2: Finding the Java Development Kit In order to begin programming, we first need to download the Java Development Kit, or JDK, and the Java software. Your computer might already have the Java Runtime Environment; this is more commonly known as the JRE, or simply Java. This lets you RUN Java programs, but not code them. In order to download the JDK, which lets you program Java software, you first need to go to the following website:. You should be at the page shown. Step 3: Determining What "Bit" Your Computer Is On this page, it is necessary for you to identify your computer's processing power (it is either 32-bits or 64-bits.) In order to do this, click the start button. Next, click on accessories. Go to system tools, and then system information. Find wherever the computer displays system type. If it displays X86-based PC, then your computer is a 32-bit. If it shows X64-based PC, then your computer is a 64-bit. In the case of the picture shown, I am running a 32-bit, as is displayed in the system information bar. Step 4: Downloading the Java Development Kit Finally, we will begin to download the JDK. Scroll down the page; make sure to accept the User's License Agreement. Next, click on the correct version of the JDK download for your computer's bit (either X-86 or X-64.) I have highlighted the Windows downloads in the picture for easier reference. Step 5: Installing JDK Part One After the download is complete, a window should automatically pop up. Click next, and then next again. After the installation process is complete, click close. This has completed the factory installation! However, there are still some settings that need to be changed for Windows computers to code at the full potential. Open the start menu and right click on the "Computer" or "My Computer" button. Next, click the "Properties" button in the popup menu. The image should more or less show what appears. Step 6: Installing JDK Part Two Click on the advanced tab on this popup menu. Near the bottom of the advanced menu is the environmental variables menu. Click on this button. In the middle menu, scroll to the Path variable, highlight it, and click the edit button near the bottom of the page, as is shown in the first image. A long list of computer words will pop up. Scroll to the beginning, and insert "bin;" in the string of characters so that it reads "C:\Program Files\Java\jdk1.7.0\bin;." This is shown in the second image. Finally, click OK until all the menus are exited out of. Now, we can finally begin true coding. Step 7: Preparing for Coding This is a simple step: open up "My Documents". Create a new folder called "Java Coding". Save it. Now we can finally code! Step 8: Typing the Program Open up the start menu. Scroll to the Accessories tab, and then open up a notepad. Type in the following words EXACTLY (Capitalization and all) except for one thing. There will be two quotes side by side below in the transcript. In between these quotes, insert any text you want. public class FirstApp { public static void main(String[ ] args) { System.out.println(" "); } } Save this file as FirstApp.java in the Java Coding folder. I will show another copy of the program in the picture. Step 9: Running the Program Open a command prompt. To do this, open the start menu. In the bottom corner, there should be a button marked run. Click on this and type in "cmd". Hit enter. A black box should pop up with white text. Type in the following: "cd My Documents\Java Coding". Next, type in "javac FirstApp.java". Finally, type in "java FirstApp". If the directions were followed correctly, the text you entered should pop up on the next line of the command prompt. Congratulations, you have successfully coded your first program in Java! My program is shown below. Step 10: But What Now...? This program represents just one tiny fraction of the vastness of accomplishments available through programming on Java. For more tutorials on how to code in this language, be sure to take a look at the following website:. Also, most bookstores have several books on how to code in a vast assortment of programming languages; these are also very helpful in learning these languages. Thanks for viewing this instructable, and I hope you continue to be successful in your programming ventures. Recommendations We have a be nice policy. Please be positive and constructive. 2 Comments I agree with you in the beginning that programming is fun and that everyone should learn (especially in today's computer driven modern world). Awesome! I'm going to use this to hack into the Matrix. Thanks! JF
http://www.instructables.com/id/Learning-Java-Your-First-Program/
CC-MAIN-2018-26
refinedweb
950
67.45
Providing local data storage in an application is a real problem faced by many a C++ programmer. To avoid getting oneself confused with low-level file handling routines and chores like data indexing, most programmers tend to use commercial database systems even for minimum data handling purposes. If your application doesn't need the capabilities of a complete RDBMS server, then a small and efficient database library that plugs into your source code will seem an interesting solution. DarkSide SQL Mini is an effort to create such a library. You can take this as a beta release and I want developers to test this code and report bugs before the stable release. You may find even this beta release useful in many of your projects. The best thing about DarkSide SQL Mini is that, unlike other embedded database libraries, you don't have to learn a new set of APIs. It provides a subset of SQL that you can use to define schemas and manipulate data. All you have to learn to use are two classes (Database and ResultSet) and two member functions (execute() and executeQuery())!. Everything else is plain SQL. Database ResultSet execute() executeQuery() DarkSide SQL Mini is a source code library. Copy all CPP files in the \dsqlm_1\cpp folder to your projects working directory, add the \dsqlm_1\include directory to your include path, link your object code with dsqlm_1\libdb41s.lib and you are done. You have a nice database system embedded in your application. Now on to some SQL lessons... First include dsqlm.h in your CPP file: #include "dsqlm.h" using namespace dsqlm; Next create a database object: Database db("zoo"); This will create the folder zoo, if it does not exist. To create a table, call the CREATE TABLE command. CREATE TABLE db.execute("CREATE TABLE animals(name varchar(40) indexed,age int,dob date)"); Data is inserted using the INSERT command: INSERT db.execute("INSERT INTO animals VALUES('Joe',2,'2001-2-20')"); SELECT command is used to search and retrieve data: ResultSet rslt = db.executeQuery("SELECT * FROM animals WHERE age > 1"); while(rslt.next()) { cout << rslt.getString(1) << rslt.getString(3) << endl; } The above code will print the name and date of birth of all animals whose age is above 1. In addition to these commands, DarkSide SQL Mini supports DELETE, DROP TABLE, DROP DATABASE and OPTIMIZE commands. The installation contains detailed documentation on the library. DELETE DROP TABLE DROP DATABASE OPTIMIZE In the demo code, you will find a complete working program that demonstrates the use of various DarkSide SQL commands. Follow the instructions in DarkSide SQL Mini Help files to compile this code. If you need a complete RDBMS server, you can download DarkSide SQL server for.
https://www.codeproject.com/Articles/5548/DarkSide-SQL-Mini-Version-The-embedded-database?msg=2990959
CC-MAIN-2017-43
refinedweb
457
57.06
The week before last. This was going to be a discussion on the libraries (namely MFC, WTL, ATL etc) but I wanted to talk a little bit more about the compiler. Personally, I would hate to be working on the Visual C++ compiler. The guys who wrote the C# compiler must, I'm sure, wander around to the dark murky depths of the Visual C++ Compiler Developer's cage and lounge around and talk at length on how nice it is to be able to develop a compiler from scratch for a language that is so compiler friendly (well, compared to VC++). Because of the design of the C# language the C# compiler can process up to 3 million lines of code a minute. I'm also guessing there are no C# compliance hassles for the C# compiler team - though once ECMA gets through with the language there is, of course, no gaurantee that non-compliance may sneak in. A standard is merely a standard, and anyone who chooses to implement a compiler is free to change their implementation to whatever suits them (or their market) best. This is the main problem with the Visual C++ compiler. It's up to something like its 13th incarnation, and each version is building on previous versions, with each new version being backwards compatible with those previous versions. So if someone, way back in version 2.0 made a bad call on a particular feature, then too bad - future generations are stuck with it or are forced to make horrible workarounds to cope with it. Fixing these non-compliance issues is often non-trivial but the guys at Microsoft are at pains to emphasise that when dealing with non-compliance they work at fixing the most used features first. Obviously there are economic (and time) considerations as well. Microsoft has in the past worked towards imposed deadlines (whether or not they ever met those deadlines is another story) and as the deadlines loomed certain features that were deemed, well, optional, were tossed overboard. They no longer want to work this way, and if it means release dates are strung out a little then that's the way it will be. The focus is on quality, not timeliness. Compliance was always on their minds but never as much as some would like, but they do recognise this and are trying to remedy the situation. Let me digress and pose a question. Templates seem to be an area where religious fanaticism on compliance is particularly rife, but is this really an issue to most developers? Templates can certainly make life easier for a developer, but what about the developer who takes over the project when the original developer leaves? In the face of increased labor shortages is it really wise to be developing apps using specialised techniques for which you or your company will have serious problems finding developers who can maintain your code? Even if you do find such a developer, that person may charge a premium on their specialised services, and/or may cost more in the long run because of the extra time needed to get fully acquainted (and work) with the templatised code. Maybe I'm just a wuss and like taking the easy route to code writing, so I'd be interested in hearing your points of view. I asked Ronald about the compiler internals and he said that the compiler gets full rewrites only occasionally. There was a rumour that the compiler code was so spaghetti-like in it's internals that no one - no one - wanted to touch the thing. He said it wasn't quite that bad. The last rewrite for the compiler was for Visual C++ version 4. Each version after this has had various revisions, additions, bug fixes, 'feature' additions (you guys know what I'm talking about) and maybe a #pragma or two to keep things interesting. All in all though, the versions of the C++ compiler we've had since Windows 95 development came into vogue are refinements on the original VC4 compiler. As with each previous incarnation the latest Visual C++ compiler has even undergone further optimisation. Given that this is around the 13th review of the compiler, with each version gaining incremental, and smaller performance increases over the last version, it is amazing that they have tweaked a further 5% performance out of the compiler. I've already mentioned some of the newest improvements to the Visual C++ compiler: a new crash recovery feature that allows an application to be restored to its state immediately prior to crashing allowing post mortems to be carried out; 'Edit and Continue' has been improved, and there is now public access to the debug info file On top of these, there is also now a new /GL switch that allows the compiler to perform global optimisations across all files in a project instead of being confined to per-file optimisations. Given varying applications and files there is expected to be a gain of around 5%-10% speed increase in a typical app. Note that this switch is only a 'final-build' option, since it significantly increases compile time. The compiler also (with the appropriate switch) now inserts buffer overrun checking code into apps that stops the possibility of buffer overrun attacks. This is a major source of attacks in applications such as internet applications and is the subject of many millions of dollars of investment by companies searching for solutions. By simply recompiling your app in VC7 (with said switch) your application will be protected. If you suspect buffer overruns may occur you can simply add your own buffer overrun event handler as a last-ditch 'I've tried everything' resort. Other problems such as the use of uninitialised variables are treated in a similar manner, and you can include your own handler in your code to take care of any uncaught instances of these. Finally, there are new Processor Packs that you can add to the environment that allow you to target specific processors, such as Pentium III's, processors with 3D Now! etc. Gone are the days of convoluted inline assembly to target these cases. As more processors come out, more processor Packs will be distributed. Any thoughts on why Microsoft would be adding the ability to target different processors so easily? I'm sure they are just looking after us . The Libraries. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) I stick with MS tools and Windows and recommend them to others because I find I am most productive working with them. However, I get really frustrated about the lack of concern for templates, namespaces, etc. on MS's part. I write a lot of heavily numeric code. I use C++ because it makes it easier to write and maintain my code than working in FORTRAN, but there are some real performance hits. Some libraries such as Blitz++ claim to be able to remove a lot of the performance hits by making fancy use of templates, including a LOT of partial specialization. Many compilers such as GCC, KAI C++, Metrowerks Codewarrior, Borland C++ Builder, and IBM Visual Age C++ can work with Blitz++, but Visual C++ cannot. Let me reiterate with emphasis: Templates do not only let you do more with less code. They can also seriously speed up your code at runtime. Anyone working with large arrays (image processing, DSP, scientific and engineering computing, statistical analysis) and who cares about run-time performance should be pushing hard for better template support. I don't pretend to know all the ins and outs of partial template specialization, but the problem for me is that there are major libraries that would potentially help my productivity and the performance of my code, but I can't use them because of MS's lack of attention to standards compliance. I would also note that there is a bit of a chicken-and-egg problem here. The way most programmers learn about language features is to try them out and play with them. There is a good book about templates and generic programming (Generic Programming and the Stl : Using and Extending the C++ Standard Template Library by Matthew H. Austern (Addison-Wesley, 1998)), but a poor MSVC developer can't play with the new ideas from the book and learn about where they would and would not be useful because MSVC won't compile the code. Thus, as long as MS won't support many advanced template features, developers who need these features will buy other compilers and those who stick with MSVC are likely to be unfamiliar with the advantages they would gain from features that MS does not support. To make matters worse, MFC is actively hostile to many pieces of best-practices C++ coding. As we get into more and more complicated designs, use of C++ namespaces becomes essential to prevent collisions between different libraries' namespaces. However I challenge anyone to derive a class from CObject within a namespace and then use any of the basic MFC features such as DECLARE_xxx/IMPLEMENT_xxx. Similarly deriving abstract classes or template classes from CObject breaks most of the nice functionality that MS put in for simple classes. The library's hostility towards advanced C++ features is pushing me away from MFC in more and more of my coding, and I am sorry because there are many nice things about that library if MS were to maintain it more actively.
http://www.codeproject.com/Articles/788/A-Visit-to-Redmond-Part-5?fid=1365&df=90&mpp=10&noise=3&prof=True&sort=Position&view=Quick&fr=11
CC-MAIN-2016-36
refinedweb
1,594
55.68
I visited with a group of 12 people mid-week. The restaurant was really quiet (in fact nearly empty) but the atmosphere was still good. The menu selection was really extensive & the food was freshly cooked and delicious. It was served in good time & the portion sizes were reasonable. We didn't know about this place until it was recommended to us - I'm happy to pass on the recommendation & hope we can still get a table when we return (which we definitely will !!) If you own or manage Papaya Indian Restaurant, register now for free tools to enhance your listing, attract new reviews, and respond to reviewers.
http://www.tripadvisor.com/ShowUserReviews-g186418-d1223733-r158309244-Papaya_Indian_Restaurant-Maidenhead_Windsor_and_Maidenhead_Berkshire_England.html
CC-MAIN-2014-52
refinedweb
108
62.17
Libraries for interacting with the Crunchbase 2.0 API Project Description crunchbase This is a Python Library for the Crunchbase 2.0 API. API Documentation The CrunchBase API provides a RESTful interface to the data found on CrunchBase. The response is in JSON format. Follow the steps below to start using the CrunchBase API: Setup pip install git+git://github.com/anglinb/python-crunchbase Up & Running Import Crunchbase then intialize the Crunchbase object with your api key. from crunchbase import CrunchBase cb = CrunchBase('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') Here is an example of searching for an organization. cb.getOrganizations('Dropbox', page='1', order=...) This returns the result of an organization search query in JSON format. The keyword arguments (page, order, ... ) will be translated into GET variables and passed along with the request. Check the documentation to find which arguments are availible for which API endpoint. Now you are ready to perform any of the following queries against the Crunchbase 2.0 API getOrganizations(query) #This returns result of an organization search query in JSON format. getOrganization(path) #This returns result of an organization search query in JSON format. getPeople() #This returns result of people in JSON format. getPerson(path) #This returns result of a single person getProducts() This returns result of products in JSON format. ... Check crunchbase/crunchbase.py for a list of all the of the possible functions. These methods are in order with the ones found in the Crunchbase API Documentation. API Usage Terms Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://test.pypi.org/project/python-crunchbase/
CC-MAIN-2018-09
refinedweb
263
59.7