text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
/**18 * Defines constants for query parameters names commonly used by services.19 * 20 * @author Howard M. Lewis Ship21 * @since 4.022 */23 public class ServiceConstants24 {25 /**26 * The name of the service responsible for processing the request.27 */28 public static final String SERVICE = "service";29 30 /**31 * The name of the page to activate when processing the service.32 */33 34 public static final String PAGE = "page";35 36 /**37 * The id path to the component within the page. By convention, this component is within the38 * {@link #PAGE}, unless {@link #CONTAINER_PAGE} is specified.39 */40 41 public static final String COMPONENT = "component";42 43 /**44 * The name of the page containing the component; this is only specified when the component is45 * contained by a page other than the activate page ({@link #PAGE}).46 */47 48 public static final String CONTAINER = "container";49 50 /**51 * A flag indicating whether a session was active when the link was rendered. If this is true,52 * but no session is active when the request is processed, the a service may at its discression53 * throw a {@iink org.apache.tapestry.StaleLinkException}54 */55 56 public static final String SESSION = "session";57 58 /**59 * Contains a number of additional strings meaningful to the application (the term service60 * parameters is something of an entrenched misnomer, a better term would have been application61 * parameters). These parameters are typically objects that have been squeezed into strings by62 * {@link org.apache.tapestry.services.DataSqueezer}.63 * <p>64 * The value is currently "sp" for vaguely historical reasons ("service parameter"), though it65 * would be better if it were "lp" (for "listener parameter"), or just "param" perhaps.66 */67 68 public static final String PARAMETER = "sp";69 70 /**71 * A list of all the constants defined by this class.72 * 73 * @see org.apache.tapestry.form.FormSupportImpl74 */75 public static final String [] RESERVED_IDS =76 { SERVICE, PAGE, COMPONENT, CONTAINER, SESSION, PARAMETER };77 } Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/tapestry/services/ServiceConstants.java.htm
CC-MAIN-2016-44
refinedweb
334
53.31
QML Syntax check Hello, I have searched the forum but found no result. We start to develop the GUI with QML and in this case a lot of syntax errors made by programmers. I use QtCreator 2.7.2 (Based on At 5.1.0). Normally a lot of syntax errors found by the editor or by Tools.QML/JS.Run Checks. But in the following case I can't found the error with the tools including the debugger @ import QtQuick 2.0 Rectangle { id: rectangle1 width: 360 height: 360 MidRectangle { id: midrectangle1 width: parent.width / 2 height: parent.heightt / 2 // <-- syntax error should be height } MouseArea { anchors.fill: parent onClicked: { Qt.quit() } } }@ MidRectangle is a simple Rectangle centered in the parent: @ import QtQuick 2.0 Rectangle { width: 50 height: 50 anchors.horizontalCenter: parent.horizontalCenter anchors.verticalCenter: parent.verticalCenter color: "#13caf3" border.color: "#ff0000" } @ There is no runtime error, why? And the result of the program is complete different from the expected. How can I find such syntax error automatically? In the past I used QScript and there are checkSyntax() and evaluate() functions to find such errors. Thanks - chrisadams The short answer is: that is not a syntax error. All property bindings are JavaScript expressions. JavaScript expressions can arbitrarily modify objects (add dynamic properties, instantiate new objects, and so forth) - and the halting problem states that until you run a bit of code, you cannot know with certainty what the outcome will be. So at build time, it's difficult to know whether the symbol "parent.heightt" will be resolvable at run time. (Well, of course, that's an oversimplification, since you can look for things which change the evaluation context (like eval() etc) and if none exist, you can assume that the runtime context "should be" identical to the context model that your tooling generates at build time - assuming that you have a full expression evaluator built into your tooling, of course). Now, there will be a runtime error (you should see something like "ReferenceError: parent.heightt not resolvable" at runtime, when the property bindings are initialized (evaluated for the first time)) which you could scrape for error detection. Alternatively, you could write a tool which parses all QML files and builds up a hash of static QML type information (type names and their property names), and then runs over your source tree and identifies any non-static property usages in other QML files. I think QtCreator could/should include something like this, if it does not already, as I agree that it's a common case which tooling should help solve (by warning / yellow squiggley at the very least). Cheers, Chris. Thanks, syntax or not:-) I need a way to detect such errors. You are right, with @ height: myValue @ I get a ReferenceError: myValue is not defined message, which is very helpfull. But not for the line @ height: parent.heightt @ I see no reason to ignore the second one except a bug. Best Steffen
https://forum.qt.io/topic/29470/qml-syntax-check
CC-MAIN-2018-09
refinedweb
496
64.61
OverlayItem in Mono for Android 4.0 is implementing System.Object rather than Java.Lang.Object, which the compiler complains about when trying to implement ItemizedOverlay. The CreateItem method in ItemizedOverlay needs to return a Java.Lang.Object. If you go to the definition of OverlayItem in VS2010, then you can see that there is a using statement: using Java.Lang; and public class OverlayItem : Object Object in this case is not Java.Lang.Object but System.Object. I am using a workaround in my ItemizedOverlay right now, which seems to work fine: protected override Java.Lang.Object CreateItem(int index) You can see a small test project here: It looks like it may be a generator problem. ItemizedOverlay has an abstract method "CreateItem(int)" and we don't seem to have that method in our assembly. monodroid.testoverlayitem.MyItemizedOverlay is not abstract and does not override abstract method createItem(int) in com.google.android.maps.ItemizedOverlay [C:\Users\Jonathan\Desktop\Cheesebaron-MonoDroid.TestOverlayItem-4a801fe\MonoDroid.TestOverlayItem\MonoDroid.TestOverlayItem.csproj] In Mono for Android 1.9.2 OverlayItem was explicitly implementing Java.Lang.Object. Looks like this if you browse the definition of the class: public class OverlayItem : Java.Lang.Object In 4.0 this changed to public class OverlayItem : Object But because of using System; using Java.Lang; in the definition the compiler gets confused. CreateItem(int) does indeed exist in ItemizedOverlay and expects Java.Lang.Object being returned. At least if you look at the definition of ItemizedOverlay you can find: [Register("createItem", "(I)Lcom/google/android/maps/OverlayItem;", "GetCreateItem_IHandler")] protected abstract Object CreateItem(int i); So if you ask me the problem is OverlayItem is implementing the wrong Object. There was a duplicate discussion on the mailing list. OverlayItem indeed derives from Java.Lang.Object. To avoid confusion between two Object types, see the mailing list discussion above. Maybe using Object = Java.Lang.Object is easier. (But also note that this does not affect our already-compiled Mono.Android.GoogleMaps.dll.)
https://xamarin.github.io/bugzilla-archives/23/2358/bug.html
CC-MAIN-2019-26
refinedweb
334
52.76
Unanswered: Conditional CSS in XTemplate In an XTemplate I have a boolean that determince if a css name should be present in a div or not. But I get the following error message [ERROR] [gxt3test] - SafeHtml used in a non-text context. Did you mean to use java.lang.String or SafeStyles instead? Java: Code: public class TemplTest { public interface Renderer extends XTemplates { @XTemplate(source = "template.html") public SafeHtml render(boolean css,Style style); } interface Resources extends ClientBundle { @Source("Test.css") Style css(); } interface Style extends CssResource { String test(); } @SuppressWarnings("unused") public TemplTest() { Renderer renderer = GWT.create(Renderer.class); final Resources resources = GWT.create(Resources.class); resources.css().ensureInjected(); Style style = resources.css(); SafeHtml html = renderer.render(true,style); } } Code: <div class='<tpl if='css'>{style.test}</tpl>'></div> Code: .test { position: relative; } I was having this issue with GWT this morning. Here is the deal: "<div style=\"{0}\" class=\"{2}\"><label debugId=\"{3}\">{1}</label></div>" * {0} is a style, so must be a SafeStyles type * {1} is the content of an HTML element, so must be SafeHtml type * {2} is the a class name, not HTML or a style, so must be String type * {3} is the a non-standard attribute, not HTML or a style, so must be String type Your error is being caused by the 2/3 cases. It is not documented well but should fix your issue. Seems odd to me, since you could do an injection attack here like {2}=' " /><evilHtml/><div ' Not sure why the don't let you escape the classnames. Sincerely, Joe - Join Date - Feb 2009 - Location - Minnesota - 2,736 - Vote Rating - 92 - Answers - 109 Olaus: This is one part a limitation with XTemplates, stemming from the fact that we build on GWT's SafeHtmlTemplates, and one part 'not valid xml' - you can't put a template tag within the body of another tag. Instead, a few other options: * Make the String parameter to your template optionally blank to leave out the class name * Make two divs, each with its own <tpl> parent, one for the positive case, one for the negative. * (not always possible) Add the css class after the template has rendered. twisted_pear: I think you'll find that won't actually work - If it is a String being inserted, GWT will make the necessary changes to prevent this from escaping the attribute. Along these same lines, I don't believe it will allow SafeHtml to be inserted within the tag body of another tag, only between two existing tags. Additionally, if there is a SafeHtml object with the contents Code: " /><evilHtml/><div
https://www.sencha.com/forum/showthread.php?178466-Conditional-CSS-in-XTemplate
CC-MAIN-2015-40
refinedweb
435
61.16
23 August 2012 12:25 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> RIL announced late on 22 August the increase in list prices for LLDPE injection and all LDPE grades except lamination by Rs1.00/kg, with effect from 23 August, the source said. The new list prices of LDPE film now stands at Rs95.00-95.50/kg “There has been a drop in import arrivals in recent months, particularly for general purpose LDPE film grades. Hence, prices are on the rise,” the source said. “The higher metallocene LLDPE (MLLDPE) prices led to a switch in blending compositions to the cheaper LDPE film. As a result, demand for LDPE film boomed,” a major film converter based in Mumbai said. Spot C6-MLLDPE prices were assessed at $1,480-1,500/tonne CFR (cost and freight) Mumbai on 22 August, according to ICIS. Buying ideas for LDPE film were at $1,330-1,350/tonne CFR Mumbai on 23 August, industry sources said. Domestic sales of LDPE film have consequently improved tremulously, a source from RIL, the sole local producer, said. “Over April to July, the overall demand in Besides the decline in LDPE imports, imports of LLDPE injections were very low, market players said. Exxonmobil and SABIC are the only two foreign suppliers to Lower production output of LLDPE injection was one of the reasons for the lower export volumes to LDPE film is commonly used in food and medical/pharmaceutical packaging, as well as in agricultural film and disposable nappies. Applications of LLDPE injection include house wares, rubbish bins, lids and large industrial containers and lids. ($1 = Rs55
http://www.icis.com/Articles/2012/08/23/9589567/indias-ril-targets-higher-pe-prices-on-short-supply-better-demand.html
CC-MAIN-2015-22
refinedweb
270
61.77
Introduction: Automatic Light Switch You’ve tried everything. You’ve tried throwing socks, you’ve tried reaching extra far, you’ve even tried activating your latent psychic powers, yet the light switch refuses to budge. Of course, you could just get up off your bed and walk all the way across the room, but that would be too much effort for a feeble college student such as yourself. No, what we need right now is a convoluted contraption to turn the switch off from across the room, and with a little bit of coding, you can finally turn that switch on and off from your bed. What we have come up with here is a mechanical switch that is easy to put together, easily removed, and does not require any modification of the wiring of the actual light switch, which makes it perfect for a college dorm room. Additionally, it can be made partially from parts found on eBay, and parts found in a typical dorm room. This means it does not require any additional tools such as a 3D printer or laser cutter, and therefore requires even less effort on your part! Although this may seem like it requires actual time and effort to put the whole contraption together, this will totally be totally worth it when with the push of a button or the flick of a switch, you can turn your lights off from across the room. Materials We were able to rent all of the electronic parts we needed for free from our university, but if you were to buy the exact parts we used in order to replicate this design, it would be quite expensive. There are much cheaper alternatives available for the cost-conscious college student. The prices listed are the average prices found after some online research. Electronic Parts: Arduino We used the Arduino Uno, which can be bought online for around $25. While there are cheaper Arduino boards available, such as the Pro mini for $10, the Uno is recommended as the best board for beginners with electronics and coding. If you decide to try a different board, keep in mind that they will likely have a different programming interface, instead of the standard USB connection. Servo We used the Hitec HS-645MG Servo which costs about $30. There are a wide range of servos available at a much cheaper price that will do the same job as the one we used. A couple of good websites to browse servos are hobbyking and servocity. You can find a similarly sized servo from Hitec for $8 (HS-311). The third picture above depicts a servo motor from four angles so you can see approximately what the motor you get should look like. Motor Shield (Not required, Cost: $20) The Motor Shield made it easy for us to safely connect the servo to the Arduino without any extra wiring. The Arduino Uno has a 5V regulator, so you can connect the servo directly to the Arduino as long as 5V is within the servo's operating range. If you choose not to buy the motor shield, you will need some jumper wires which can be bought in large packs for a couple of dollars. You can find many guides on how to connect a servo to an arduino with a quick google search. Long USB cable ($4) and USB Type A to Type B adapter ($2) This is what will be used to program and control the Arduino from your computer. The USB cable should be long enough to reach from your bed to the light switch. Other Materials: - Any kind of tape - Two-sided adhesive stickers - A popsicle stick - A computer A rough estimate of the cost for this project would be $40. This includes the Arduino Uno because it is the easiest to work with, but does not include the motor shield. The cost for the other materials was not included because many students likely already have these materials around their dorm. While the cost for this project may seem high, keep in mind that you can re-purpose all of the electronic parts for use in future projects once you no longer need them for this light switching device. Cost was a factor in every part of this design, as we wanted to make a DIY that was accessible to many college students. Step 1: Downloading the Arduino Program and Library to Your Computer The first thing you must do is download the Arduino program to your computer from here. The program is completely free as well as the code, which is why we chose to use the Arduino. It is very easy to learn how to use as there are many online resources available. Secondly, download the servo library from here. This library has to be put in a very specific place so that the Arduino program can locate it on your computer. - On an Apple or Windows computer - Go to Documents > Arduino > Libraries - If there is no Libraries folder, Right click > New Folder > Name folder “Libraries” - Libraries > Place the unzipped downloaded file in here, and rename it “Adafruit_MotorShield” - Go to /home/ - Sketchbook > Libraries - If there is no Libraries folder, right click > New Folder > Name Folder "Libraries" - Libraries > Place the unzipped downloaded file in here, and rename it “Adafruit_MotorShield” Step 2: Assembling and Programming Your Arduino Assembling the brains of the project, the Arduino and motor, is the first and most crucial step. As shown above, attach the motor shield to the Arduino. This will arrange all of the circuitry for you, in terms of powering your motor. Next, attach your servo motor to the Servo 2 pins on the corner of your motor shield, as shown in the second picture. Now, plug your Arduino into your computer using the cable provided. Next up is the coding aspect of this project, but don't fear, we've figured all the code out for you! (It's written in C++ if you are curious.) Before you can begin to code,you will be required to upload a program to your arduino that will cause the motor to turn a certain number of degrees every time you enter “1” into the console. Copy and paste this code into the Arduino window: #include <AFMotor.h> #include<Servo.h> Servo light; int angle = 0; int onoff = 0; void setup() { Serial.begin (9600); Serial.println ("Enter 1 to toggle on/off"); light.attach (9); delay (1000); } void loop() { if (angle <= 0) { angle = Serial.read(); } else { angle=0; if(onoff%2==0){ light.write (0); Serial.println ("off"); onoff++; } else { light.write (35); // this 35 value represents the angle the motor will turn, so you can change this Serial.println ("on"); onoff--; } } } Step 3: Assembling Your Light Switcher Now that you have finished setting up the Arduino, you may be relieved to find out that setting up this device is pretty simple! You have created the code to make the motor spin, all you need to do is make something that will hit your light switch. In order to do this, you simply need to extend the linear motor attachment you have been given, by taping half a popsicle stick to it. This ensures that the motor attachment will be able to push the switch in. Make sure you leave the hole in the middle of the motor attachment free of any tape so you can connect this piece to the servo. Connect the motor attachment to the servo motor as shown. Step 4: Attaching Your Set-up to the Wall For the push switch, you now need to attach both the servo motor and arduino board to the wall. We used these sticky strips to put each piece of equipment on the wall. There are several kinds of adhesive that you can use (two varieties are shown in the top picture). It doesn’t really matter where the arduino goes, as long as it’s close enough for the cord connecting it to the servo motor to reach. The servo, however, must be attached right next to the light switch so that the motor attachment lies directly overtop of the switch. In the bottom picture, we showed where we placed our sticky strips with respect to the light switch. Step 5: Putting It All Together Before you put the servo motor on the wall, make sure to run the code through it once, ensuring that your device has been set up properly and the popsicle stick is in the correct orientation. Finally, stick both your arduino and servo motor to the wall. One trick that helped us one was to put two little erasers in the space between the servo and the wall, to help counteract the force that the light switch exerts on the motor when it is being pressed down by the motor attachment. Any small object lying around your room will work here. Next, run your USB extender cord all the way around your room. In order to avoid the cord causing any problems, you could use hooks and two-sided adhesive, to run the cord along your walls to the switch, or you could run the cord along the floor around the outside of your room. Keeping your computer close to your bed, you can now control the device using the code for the Arduino to rotate the motor, turning the lights on and off. Now, anytime you want to turn the lights off from your bed, simply send the code from your computer and voilà! Step 6: Further Design Options Depending on the setup of your dorm room and how comfortable you are with programming and technology, there are other options for how you can control the light switch from your bed. One option is using a smartphone app called “Blynk” which controls the servo motor through your phone. Having and app on your phone means that you are not restricted to using the light switching device from your bed. As long as your phone is nearby and you are within range of the Arduino, you will be able to control the lights in your room. This option can be extremely convenient although it can take some time to set up, especially if you are unfamiliar with coding or with how Blynk works. Using Blynk requires an ethernet shield (similar in shape to the motor shield) to be plugged into the Arduino, allowing it to connect to the internet. This ethernet shield can cost somewhere around $20, so we decided against choosing this as our main option as it is less accessible. However, using Blynk eliminates the need for having wires running throughout your room, something that we think can be well worth the extra time it takes to set up the app on your phone. To guide you through this somewhat complicated task, there are many tutorials online, explaining how to use Blynk. A good one to follow is which will walk you through all the steps. Another way to modify this device is to repurpose a remote to control the Arduino and turn the motor. One of the advantages of this method is that it is less expensive and easier to set up than the app. Although using this method would also require programming and wiring to ensure the remote is connected to the Arduino properly. This method makes it easy to control the light switch from anywhere in the room by pushing the buttons on the remote, however, it requires extra parts for your Arduino such as the Raspberry Pi Infrared Remote Control Ir Receiver Module DIY Kit, which can be found on Amazon for about $9. It also requires a bit more circuitry, but again, since the Arduino is so commonly used, there are plenty of other tutorials to be found online. We hope this tutorial was helpful, and that you learned a little bit from it as well. Although setting up this device can take a little bit of time and money, just think of how nice it will be to turn off the lights from the comfort of your own bed. Happy light switching! Recommendations We have a be nice policy. Please be positive and constructive. 6 Comments Sorry to say this: But this instructable shows the complete lack of electronic knowledge. Putting a servo on top of a mechanical light switch? It looks ugly and can be done much easier: with pure electronics! If you use a bistable relay you can switch the light on and off with the light switch of the room AND you can switch the light on and off by Arduino. Without needing any servo. Have a look at the last chapter of this instructable: what if you don't want to cut up your wall? Cut up the wall? This is not necessary. You just have to add a cable to the switch. You can let it come out under the cover of the light switch. can i use a sg90 servo motor instead? Nice ! I was working exactly on the same design ! Using esp8266 for small node footprint Nice design. With enough space between the wall and the servo, you can have a gap between the rotor and the switch that lets you still use it manually. Clever.
http://www.instructables.com/id/Automatic-Light-Switch-2/
CC-MAIN-2018-26
refinedweb
2,221
67.49
Jupyter doesn't use the built-in MathJax I have two copies of MathJax. One in /usr/lib/sagemath/local/share/mathjax, another in /usr/share/javascript/mathjax. I open up the jupyter notebook: sage -n jupyter. I open a notebook and what I get is Math/LaTeX rendering will be disabled. If you have administrative access to the notebook server and a working internet connection, you can install a local copy of MathJax for offline use with the following command on the server at a Python or Jupyter prompt: from Jupyter.external import mathjax; mathjax.install_mathjax() But I don't want a third (sic!) copy of MathJax on my computer! How can I make Jupyter use the existing ones? I'm using Linux Mint 17.2, used the sagemath-upstream-binary package from the PPA.
https://ask.sagemath.org/question/31542/jupyter-doesnt-use-the-built-in-mathjax/?answer=31551
CC-MAIN-2019-39
refinedweb
136
60.61
Proposed features/moped access Rationale In Finland mopeds are defined as 2-wheeled very light motorcycles with a maximum speed of 45 km/h and maximum engine size of 50 cm³. A real motorcycle/car driving license isn't required to operate one, but in Finland young people have to take a short course to get a moped driving license (M-class). I imagine similar definitions are used in other countries as well? Anyway in Finland mopeds are sometimes allowed to drive on foot/cycleways, sometimes not. This is signaled with traffic signs and the data should be available on maps as well for routing purposes. Mopeds are never allowed to drive on motorways and they should use the shoulder of the road, if available, when driving on other roads. See Moped for some definitions of a moped. Applies to - Ways - A footway (or cycleway) where you can drive a moped: <tag k="highway" v="footway" /> <tag k="moped" v="yes" /> Default, if the tag is missing from a footway/cycleway, could be no access, or not known. Default for roads smaller than motorways should be yes. - Sounds good. It would join the car, foot, bicycle, bus=yes/no family. FYI, definition is similiar in the UK, up to 49 cc as I recall. The situation is similar in Sweden, mopeds seem to be able to ride on most footpath / cycleways unless specifically prohibited by signs. MikeCollinson 17:37, 23 September 2007 (BST) - In the Netherlands, there are so-called "bromfietspaden". Essentially, these are shared moped-cycleways. I've come across such a way during mapping and thought of "scooter=yes/no/.." but after a discussion on talk-nl I believe moped is the better option. --Benbono 23:29, 23 September 2007 (BST) - Sounds good. In Germany we have the "Autostraße" that are not marked in any map but where these 50ccm scooters cannot drive. We where already discussing this issue on the german map-features page without a conclusion yet. --MarcusWolschon 05:03, 24 September 2007 (BST) - I'd prefer to see this as part of the access: namespace proposal. --Hawke 21:47, 24 September 2007 (BST) - I like the namespace idea (didn't see it first because it isn't linked from the access-section of the parent wiki page). Access tags should definitely be in their own namespace. Anyway that's semantics, so I guess we could consider this page as a general yes/no discussion to the idea of having any kind of moped access tags. Tsok 05:20, 25 September 2007 (BST) - In Belgium, mopeds are divided in two different classes: A (maximum speed 25km/h, no driver's license needed) and B (maximum speed 45km/h, driver's license needed), with different rules for them (class B not always allowed to follow a cycleway for example like class A, or roads where class B isn't allowed and A is, or where B can't drive in the opposite direction in a one-way road and A can. Anyone an idea how that could be solved? --Eimai 12:44, 16 April 2008 (UTC) - In germany we have to classes Mofas (max 25km/h) and Mopeds (max 50km/h). Mopeds must always drive on a street, but Mofas can drive on e.g. cycleways when an extra sign allows this. --Marco.horstmann 16:37, 29 December 2008 (UTC) - I'd like to propose mofa=* for moped-light: "Mofa" in Germany, moped class A in Belgium, "Snorfiets" in The Netherlands. Such a verhicle is sometimes treated differently from a regular/"real" moped, so one access category is not enough. It is not a real bicycle either. For example in The Netherlands, mofas can (must) ride on regular cycleways where mopeds are not allowed, but they are not allowed on optional cycleways (unless the engine is turned off). --Cjw 21:52, 19 May 2009 (UTC) - usage in Belgium is "moped_A" and "moped_B", and "moped" includes those two categories. What's the relationship between "moped" and "mofa" in the Netherlands then? --Eimai 17:14, 23 May 2009 (UTC) - The classes seem to be the same in several countries, just the names are different. In the Netherlands a "snorfiets" (=belgian moped A, =german mofa) is treated like a bicycle in all cases except one. A "bromfiets" (=belgian moped B, = german moped/kleinkraftrad), however is often directed to car lanes in towns/cities. Is it different in Belgium? A moped category including both vehicle types could be nice for roads where neither is allowed, but is it needed? It also requires finding an additional name while I already have enough trouble finding an appropriate name for "mofas". AFAICT "moped" quite universally means a motorized two-wheeled vehicle that is allowed to go 40-50km/h. So my proposal would mean for Belgium moped_A=>mofa and moped_B=>moped, if some rule applies to both, 2 tags would be needed. Anyway, the important thing is to have the same tags everywhere. --Cjw 19:09, 25 May 2009 (UTC) - is that really needed? It will provoke a lot of confusion, since a "snorfiets" is a "bromfiets" (that's why we call it bromfiets klasse A here), i.e. it's both moped. So you can't go around the current usage and understanding of the tag. Secondly, it's not necessary to have the exact same tags in each country. Belgium doesn't have a "hgv" class for example in the traffic code, so only "goods" should be used in tagging (and hgv is a synonym for goods now) (not discussing different driver's licenses here). So, moped_A and moped_B will be much easier for Belgian mappers to understand than trying to tell moped_B will now be moped (people will forget that it's only the B class, and it's just wrong in a logical point of view), and introduce a word no-one ever heard of "mofa". So the situation works nicely as it is now in Belgium, and there's no need to change it. If it needs to be adapted in the Netherlands, then discuss that with the Dutch user base, but don't try to enforce that new rule to other countries where it doesn't make sense. --Eimai 12:46, 26 May 2009 (UTC)
http://wiki.openstreetmap.org/wiki/Proposed_features/moped_access
crawl-003
refinedweb
1,047
61.26
src/test/taylor.c Equilibrium of a droplet suspended in an electric field A droplet suspended in a fluid subjected to a uniform electric field deforms due to the competing effect of electrical forces and surface tension. If the electrification level is moderate an equilibrium shape is reached. It was observed experimentally that droplets deforms as prolate or oblate spheroids (i.e. larger elongation aligned with the external electric field or viceversa). The analytical analysis of O’Konski & Thacher (1953) unexpectedly predicted prolate forms whereas the experiments showed oblate ones (and viceversa). It was the genius of Geoffrey Taylor (1966) who shed light on the problem. The work of O’Konski & Thacher assumed that both fluids (inner and outer) were perfect dielectrics, given the low conductivity of the fluids involved. Taylor realized that the conductivity could be very low but it was not zero, so that the charges could migrate through the “leaky” media, thus accumulating at the fluid interface and altering radically the pattern of electrical forces. Moreover, approximating the electrostatic Maxwell equations with the Taylor–Melcher leaky dielectric model and assuming small deformation Taylor predicted recirculating velocities and provided analytical expressions for the radial and azimuthal velocities as functions of the dimensionless radius and the azimuthal coordinate . For this gives, and for , with where , and are the ratio of inner to outer conductivity, permittivity and viscosity, respectively. is the imposed electrid field, the droplet radius, is the outer permittivity and is the outer viscosity. The electrical forces induces recirculations in both (viscous) fluids. This test case is also discussed in Lopez-Herrera et al, 2011. The problem is assumed to be axisymmetric. #include "axi.h" #include "navier-stokes/centered.h" #include "ehd/implicit.h" #include "ehd/stress.h" #include "vof.h" #include "tension.h" We need to track the interface with the volume fraction field f. The viscosity is constant but the coefficients will vary due to the axisymmetric metric terms. scalar f[], * interfaces = {f}; face vector muv[]; The maximum level of resolution, LEVEL, will be varied from 8 to 10. int LEVEL = 8; #define Ef 1.34 // External electric field #define R0 0.1 // Radius of the droplet #define F 50. #define R 5.1 // Conductivity ratio #define Q 10.0 // permittivity ratio #define CMU 0.1 // Outer viscosity #define theta (M_PI/4.) #define LAM 1. // Viscosity ratio #define VC (sq(Ef)*R0/CMU) // characteristic velocity #define A (-9./10.*(R - Q)/sq(R + 2.)/(1. + LAM)) F is the conductivity of the outer medium. F has no influence on the steady solution but decreases the characteristic electrical relaxation time and consequently the electrical transient. #define cond(T) (F*((1. - (T)) + R*(T))) #define perm(T) ((1. - (T)) + Q*(T)) The electric potential is linear. φ[top] = dirichlet(Ef*x); φ[left] = dirichlet(Ef*x); φ[right] = dirichlet(Ef*x); We make sure there is no flow through the boundaries, otherwise the compatibility condition for the Poisson equation can be violated. uf.n[left] = 0.; uf.n[right] = 0.; uf.n[top] = 0.; uf.n[bottom] = 0.; The domain spans [0:2]. We will compute only a quarter of the droplet, making use of axisymmetry and right-left symmetry. The surface tension coefficient is unity. The viscosity coefficients are variable. int main() { L0 = 2; N = 1 << LEVEL; f.σ = 1.; μ = muv; run(); } event init (t = 0) { We initialize the volume fraction field corresponding to a circular interface of radius R0. fraction (f, sq(R0) - sq(x) - sq(y)); We initialize the electrical potential. foreach() φ[] = Ef*x; boundary ({φ}); } Permittivity and electrical conductivity are face values and also incorporate the metric factors. The viscosity is constant but the viscosity coefficients need to incorporate the metric factors. event properties (i++) { foreach_face() { double ff = (f[] + f[-1])/2.; ε.x[] = perm(ff)*fm.x[]; K.x[] = cond(ff)*fm.x[]; muv.x[] = CMU*fm.x[]; } boundary ((scalar *){ε, K, muv}); } Convergence We store the horizontal component of the velocity to check its convergence with time. scalar un[]; event init_un (i = 0) { foreach() un[] = u.x[]; } event error (i += 20; t <= 10.) { We monitor the variation in the horizontal component of the velocity and the convergence of the multigrid solvers every 20 timesteps. double du = change (u.x, un); fprintf (stdout, "%g %g %d %d %d %d %d %d %d %d\n", t, du, mgp.i, mgp.nrelax, mgpf.i, mgpf.nrelax, mgu.i, mgu.nrelax, mgphi.i, mgphi.nrelax); fflush (stdout); If the change is small enough (i.e. the solution has converged for this level of refinement), we increase the level of refinement. If the simulation has converged and the level of refinement is 10, we stop the simulation. if (i > 0 && du < 1e-5) return (LEVEL++ == 10); /* stop */ } Results At the end of the simulation we create two files: log (standard error) will contain the dimensionless radial and azimuthal velocities and their theoretical values as functions of the dimensionless radial coordinate along the line . vector.svg displays the velocity field, interface position and isopotential lines, as displayed by gfsview-batch. event result (t = end) { double h = 0.35*L0/(2*99); for (int i = 1; i <= 100; i++) { double x = i*h, y = i*h, r = sqrt(sq(x) + sq(y))/R0; double ux = interpolate (u.x, x, y)/VC; // dimensionless velocities double uy = interpolate (u.y, x, y)/VC; double vrt, vtt; // theoretical radial and azimuthal velocities; if (r < 1.) { vrt = A*r*(1. - sq(r))*(3.*sq(sin(θ)) - 1.); vtt = 3*A/2*r*(1. - 5./3.*sq(r))*sin(2.*θ); } else { vrt = A/sq(r)*(1/sq(r) - 1.)*(3.*sq(sin(θ)) - 1.); vtt = - A*1./sq(sq(r))*sin(2.*θ); } fprintf (stderr, "%g %g %g %g %g\n", r, (ux*x + uy*y)/(R0*r), vrt, (-uy*x + ux*y)/(R0*r), vtt); } FILE * fp = popen ("gfsview-batch2D taylor.gfv", "w"); output_gfs (fp); fprintf (fp, "Save vectors.svg { format = SVG }\n"); pclose (fp); } The mesh is adapted according to interpolation errors on the volume fraction, charge density and velocity fields. event adapt (i += 20) { adapt_wavelet ({f, rhoe, u.x, u.y}, (double[]){1e-3, 1, 2e-4, 2e-4}, maxlevel = LEVEL); } Set to one below to see the simulation on-the-fly. #if 0 event gfsview (i += 20) { static FILE * fp = popen ("gfsview2D taylor.gfv", "w"); output_gfs (fp); } #endif Radial profiles of radial azimuthal velocities compared to analytical results. Steady-state velocity vectors, interface position and equipotential lines. Bibliography Chester T. O’Konski, Henry C. Thacher Jr. “The Distortion of Aerosol Droplets by an Electric Field” J. Phys. Chem., 1953, 57 (9), pp 955–958. G. Taylor, “Studies in Electrohydrodynamics. I. The Circulation Produced in a Drop by an Electrical Field”, Proc Roy. Soc. Lond. Ser. A: Math. Phys. Sci., 1966, vol. 291, pp. 159-166.
http://basilisk.fr/src/test/taylor.c
CC-MAIN-2018-34
refinedweb
1,138
50.33
There are several ways to use Vue and D3 together[1]. Here's an attempt to use D3 for the DOM and Vue for the reactivity. I think it's a little different from other techniques I've seen. I'm using no watchers and I'm using no refs. It's based on my Vue + Canvas code[2]. color: radius: margin top: bottom: margin left: right: The code is below (also here). The main idea is that there's a reusable Vue component that produces a <g> element. You pass a function to that component to perform the one-time setup, and then return a function that should be called on update. // example code is under the CC0 license - No Rights Reserved /* This is a reusable component that produces a <g> element that lets d3 control the DOM inside of it. */ Vue.component('vue-d3', { props: ['draw'], template: `<g :</g>`, data() { return {update: () => null}; }, mounted() { this.update = this.draw(d3.select(this.$el)); }, }); function minmax(data) { return [d3.min(data), d3.max(data)]; } /* This is a chart (maybe you'd make it a component); put the data you depend on in props and data and computed, and provide a method that will get passed to the <vue-d3> */ new Vue({ el: "figure", data: { dataset: 'sine', fill: 'hsl(0,50%,50%)', radius: 3, outerWidth: 600, outerHeight: 300, margin: {left: 30, top: 10, right: 10, bottom: 20}, }, computed: { data() { switch(this.dataset) { case 'sine': return d3.range(0, 5, 0.05) .map(x => [x, Math.sin(x)]); case 'parabola': return d3.range(-10, 10, 0.2) .map(x => [x, x*x]); case 'hilly': return d3.range(-15, 15, 0.3) .map(x => [x, Math.abs(Math.sin(x/4) + 0.5*Math.cos(x/2) + 0.3*Math.sin(x))]); } return []; }, }, methods: { draw(parent) { // This function handles creation, console.log('draw init'); const x = d3.scaleLinear(); const y = d3.scaleLinear(); const xAxis = d3.axisBottom(x).ticks(10); const yAxis = d3.axisLeft(y).ticks(10); const root = parent.append('g'); const xAxisG = root.append('g'); const yAxisG = root.append('g'); const line = root.append('g'); // and returns a function that handles updates return () => { console.log('draw update'); root .attr('transform', `translate(${this.margin.left}, ${this.margin.top})`); const innerWidth = (this.outerWidth - this.margin.left - this.margin.right), innerHeight = (this.outerHeight - this.margin.top - this.margin.bottom); x .domain(minmax(this.data.map(d => d[0]))) .range([0, innerWidth]); y .domain(minmax(this.data.map(d => d[1]))) .range([innerHeight, 0]); xAxisG .attr('transform', `translate(0,${y(0)})`) .call(xAxis); yAxisG .attr('transform', `translate(${x(0)},0)`) .call(yAxis); let selection = line.selectAll('circle') .data(this.data); selection.exit().remove(); selection.enter().append('circle') .attr('stroke', "white") .attr('stroke-width', "0.5") .merge(selection) .attr('r', this.radius) .attr('fill', this.fill) .transition() .attr('cx', d => x(d[0])) .attr('cy', d => y(d[1])); }; } }, }); Look at the draw(parent). It's d3 code. It doesn't do any Vue-specific things. In theory, this will make it easier for you to reuse this code in another project that doesn't use Vue. Motivation: I try to avoid watchers. There are two errors I make: - (correctness) As I change the code, I might introduce a dependency that I forgot to watch. I change that value but the diagram doesn't update, even though it should. - (performance) As I change the code, I might no longer have a dependency that I have been watching. I change that value and the diagram updates, even though it shouldn't. By using Vue's automatic dependency tracking, these dependencies are always accurate, and I avoid both of these issues. Caveats: the code assumes that the element is created only once. This is probably fine for most cases, but you can't turn it on and off with v-if, or add a variable number of them with v-for, etc. Maybe :key would help; I'm not sure. TODO: I need to write a better example. Above, one function does all the drawing. But the same technique should work when you have different parts of the visualization, each depending on some subset of data, and then only those parts will redraw. For example, if I have a line that depends on x,y and a horizontal axis that depends on x and a vertical axis that depends on y, then when I change x, it will redraw the line and horizontal axis but not the vertical axis. Each would be a separate <vue-d3> component with its own draw function passed in. I haven't used this in a real project and there may be more caveats.
https://www.redblobgames.com/x/1842-vue-d3/
CC-MAIN-2020-40
refinedweb
780
60.31
I have a file README.TXT. If I issue the following command: README.TXT PS> Get-Item ReadMe.txt ...then it returns "ReadMe.txt". I want to find out the actual name of the file on disk, including case. How do I get it to return "README.TXT"? I ask because I'm trying to track down a problem with case-insensitive filenames on Windows versus case-sensitive files on a Unix box, and I want to get the actual case used on the Windows box. More detail: I have a list of files (stored in a .CSPROJ file) which are in a different case from those stored on disk. I want to be sure that they match. For example: if the .CSPROJ file says "ReadMe.txt", but the file on disk is "README.TXT", sometimes editing the file in Visual Studio rewrites the file as "ReadMe.txt", which then confuses Perforce, because it's case-sensitive, and the filename no longer has the case it was expecting. I want to write a script that spots the mismatched filenames, so that I can do something about them before it causes a problem. .CSPROJ Here's what I came up with: function Get-ActualFileName($file) { $parent = Split-Path $file $leaf = Split-Path -Leaf $file $result = Get-ChildItem $parent | where { $_ -like $leaf } $result.Name } The bulk of the file system work in PowerShell is done by some .NET classes in the System.IO namespace. If you know the path (like in your example, you've navigated to the file system location where your subject file is located), then you could use something like (get-item $pwd).GetFiles('readme.txt') Since the base .NET classes are case sensitive, the resulting output of your file names will be case sensitive as well. For reference, the in the example, you are calling the GetFiles method of the System.IO.DirectoryInfo object that represents the current working directory ($pwd). You can change that to point to the directory containing the file you need to confirm the casing on. The Name, BaseName, and FullName properties should all reflect the proper casing. More info on the GetFiles method. The fastest way I can think of is to replace all the slashes with *\ and then call Get-Item, and to make sure you get no dupes, pick the one that's case-insensitively equal to the one you specified. $Path = Get-Item "$($Path.TrimEnd('\/') -replace "(?<!:)(\\|/)", '*$1')*" | Where { $_.FullName -ieq $Path } | Convert-Path Rather than specifying the exact name, just use a wildcard: PS> Get-Item ReadMe.tx? this should return README.TXT By posting your answer, you agree to the privacy policy and terms of service. asked 2 years ago viewed 240 times active 1 year ago
http://serverfault.com/questions/431416/get-filename-in-the-case-stored-on-disk/431433
CC-MAIN-2015-18
refinedweb
460
75.81
loom 0.0.3. Install ------- $ sudo pip install loom Getting started --------------- First of all, you create `fabfile.py` and define your hosts: from fabric.api import * from loom import puppet from loom.tasks import * env.user = 'root' env.environment = 'prod' env.roledefs = { 'app': ['prod-app-1.example.com', 'prod-app OS support ---------- It's only been tested on Ubuntu 12.04. I would like to support more things. Send patches! API --- Look at the source for now. It's all Fabric tasks, and they're pretty easy to read. (Sorry.) - Author: Ben Firshman - Package Index Owner: bfirsh, loom - Package Index Maintainer: loom - DOAP record: loom-0.0.3.xml
https://pypi.python.org/pypi/loom/0.0.3
CC-MAIN-2017-30
refinedweb
109
72.53
How to: Draw Images Off-Screen You can reduce the flicker when drawing large images by using a Graphics object not associated with the form to create the image off-screen. Then draw the image on the screen using a Graphics object of the form. Example This example overrides the OnPaint method to create a large bitmap off-screen using a Graphics object derived from the bitmap. Then it draws the bitmap to the screen using the Graphics object returned from the Graphics property of the PaintEventArgs. After the form loads, it can take a few seconds for the image to appear. Protected Overrides Sub OnPaint(e As PaintEventArgs) Dim bmp As Bitmap Dim gOff As Graphics ' Create a bitmap the size of the form. bmp = New Bitmap(ClientRectangle.Width, ClientRectangle.Height) Dim BlueBrush As New SolidBrush(Color.Blue) Dim WhitePen. Dim z As Integer For z = 1 To 1000 ' Generate a random number with ' seeds from the system clock. Thread.Sleep(1) Dim rx As New Random() Thread.Sleep(1) Dim ry As New Random() ' Create rectangles in the inner area of the form. Dim rect As New Rectangle(rx.Next(10,200), ry.Next(10,200), 10, 10) gOff.DrawRectangle(WhitePen, rect) gOff.FillRectangle(BlueBrush, rect) Next z ' Use the Graphics object from ' PaintEventArgs to draw the bitmap onto the screen. e.Graphics.DrawImage(bmp, 0, 0, ClientRectangle, GraphicsUnit.Pixel) gOff.Dispose() End Sub protected override void OnPaint(PaintEventArgs e) { Bitmap bmp; Graphics gOff; // Create a bitmap the size of the form. bmp = new Bitmap(ClientRectangle.Width, ClientRectangle.Height); SolidBrush BlueBrush = new SolidBrush(Color.Blue); Pen WhiteP. for (int z = 1; z <= 1000; z++) { // Generate a random number with // seeds from the system clock. Thread.Sleep(1); Random rx = new Random(); Thread.Sleep(1); Random ry = new Random(); // Create rectangles in the inner area of the form. Rectangle rect = new Rectangle(rx.Next(10,200), ry.Next(10,200), 10, 10); gOff.DrawRectangle(WhitePen, rect); gOff.FillRectangle(BlueBrush, rect); } // Use the Graphics object from // PaintEventArgs to draw the bitmap onto the screen. e.Graphics.DrawImage(bmp, 0, 0, ClientRectangle, GraphicsUnit.Pixel); gOff.Dispose(); } Compiling the Code This example requires references to the following namespaces: Robust Programming Note that the Graphics object created for the off-screen drawing should be disposed. The Graphics object returned by the Graphics property of the PaintEventArgs object is destroyed by the garbage collector and does not need to be explicitly disposed. See Also Other Resources Graphics and Drawing in the .NET Compact Framework
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/ms172506%28v%3Dvs.90%29
CC-MAIN-2019-47
refinedweb
420
60.31
Richard Jones' Log: Something I'm working on... import withgui gui = withgui.Window() gui.image('') gui.run() I haven't seen any examples with multiple widgets yet. How is layout going to work? And will it support more than one top level window? How do you see this library being used? What is it you are working on? Is it a new GUI (cross platform I hope) library? Looks like it's a library built on top of TkInter. Looks like something similar to easygui!! @Panos: it's nowhere near public-ready yet. Seriously. And I'm a "release early, release often" kinda guy too. It also doesn't really have a name yet except "withgui" :) @Michael: I snuck some packing into the latest example. @Ronaldo: if I'm lucky I won't have to develop an actual GUI toolkit. Currently I've got partial implementations on top of Tkinter, PyQt4, kytten and simplui. @RB: different enough to easygui, I think. Stop teasing and share the code already! :D
http://www.mechanicalcat.net/richard/log/Python/Something_I_m_working_on.2
CC-MAIN-2018-43
refinedweb
169
70.6
Mech-Drickel,Nice to see Hi Mech-Drickel, Nice to see you back ! :D (CRAZY COOL AWESOME ROBOT - REALLY NICE !!!) Now, I got to ask the obvious .. cuz I can't resist ! Did you load the latest MRLComm.ino into your Arduino ? (Just making sure is all :) Hi Grog Hi GroG. I've loaded the MRLComm.ino into my Arduino. I've used the code of the Alessandruino's "Little Printed Dumb Robot" to control the hand of my robot, but only with pre-defined positions: hand open(100), wide open(150) and close(40). But what I want is the following: the program starts with a hand in rest position (100), when I press one button it increments the position in one degree (goes to 101), press again and increments one more degree (goes to 102), press other button and decrements it one degree (goes to 101) and etc.... Awsome project :D I guess the Awsome project :D I guess the problem is hand.moveTo(b) because in your script b is a float,but the method moveTo() needs an integer : moveTo(integer). So you can replace 2.0 with 2, and 1.0 with 1 ,or you can force b into an integer .. Let us know... Alessandro Thanks Alessandruino I still trying some things but no success. I'm very newbie with this.. Just the second day with Python/Jython... But I'll keep ahead!!! [[Joystick.robotHand.py]] Check your post...i edited it... now your script should be worky !!! it's in the repository now... You are a Python Master! This is great man, and as I said before: You are a Python Master!! :) Well below is the code with some adapt to close and open the hand. What I want now is to make some combos for, for example, when I press the RB + A buttons close the right hand. When press R B + X open right hand. When press LB + A close left hand, when press LB + X open left hand... and etc... Other thing I want is to when press the button (or buttons, depending on the case) the servo keeps increasing the steps till I release the button. The code for now: arduino = Runtime.createAndStart("arduino","Arduino") joystick = Runtime.createAndStart("joystick","Joystick") handR = Runtime.createAndStart("handR","Servo") arduino.setSerialDevice("COM3", 57600, 8, 1, 0) sleep(4) arduino.attach(handR.getName() , 2) a = 90 print a handR.moveTo(a) handRMaxPos = 130 handRMinPos = 40 def handRopen(): global a buttonA = msg_joystick_button0.data[0] print buttonA if (buttonA == 1) and (a >=handRMinPos): a -= 2 print a handR.moveTo(a) elif (buttonA == 0): print 'button not pressed' return def handRclose(): global a buttonX = msg_joystick_button2.data[0] print buttonX if (buttonX == 1) and (a <=handRMaxPos): a += 2 print a handR.moveTo(a) elif (buttonX == 0): print 'button not pressed' return joystick.addListener("button0", python.name, "handRopen") joystick.addListener("button2", python.name, "handRclose") Oh I forgot - if stuff does Oh I forgot - if stuff does not work - always send a NoWorky - it helps alot ! Looks like were going to BORG in a MAX chip :D LED display YOU WILL BE ASSIMILATED !!! Led matrix + MAX7219: Support info Arduino library: , good support info: , my post of facial expressions with led matrix and MAX7219 on LMR: NO worky combo! joystick = runtime.createAndStart("joystick","Joystick") sleep(4) def a(): a = msg_joystick_button0.data[0] print a rb = msg_joystick_button5.data[0] print rb if (( a == 1) and (rb == 1)): print 'Combo!' elif ( a == 1): print 'A pressed' elif ( rb == 1): print 'RB pressed' else: print 'Nothing pressed' #create a message route from joy to python so we can listen for button joystick.addListener("button0", python.name, "a") joystick.addListener("button5", python.name, "a") NO worky combo! def combo(): if (( buttonApressed) and (buttonRBpressed)): print 'Combo!' else: print 'Nothing pressed' One more of my "dumb" codes, but I think it can help someway... joystick = runtime.createAndStart("joystick","Joystick") sleep(4) a = 1 rb = 1 buttonApressed = 1 buttonRBpressed = 1 def buttonA(): global a buttonA = msg_joystick_button0.data[0] print buttonA if (buttonA == 1): a == buttonApressed print a elif (buttonA == 0): a == buttonAnotpressed print a def buttonRB(): global rb buttonRB = msg_joystick_button5.data[0] print buttonRB if (buttonRB == 1): rb == buttonRBpressed print rb elif (buttonRB == 0): rb == buttonRBnotpressed print rb def combo(): global buttonApressed global buttonRBpressed if ((buttonApressed) and (buttonRBpressed)): print "Combo!" elif (buttonApressed): print "Button A pressed" elif (buttonRBpressed): print "Button RB pressed" else: print "Nothing pressed" joystick.addListener("button0", python.name, "buttonA") joystick.addListener("button5", python.name, "buttonRB") edited your post...now there edited your post...now there is a Joystick.comboButton.py script Sphinx Issues In the past I have seen Sphinx not work because lack of memory so I supplied a alternate myrobotlab.bat (it was renamed myrobotlab.txt and attached to the post) I've downloaded and replaced my local myrobotlab.bat with this files and I can get the JVM to change the amount of memory it allocates. Here is the contents of the file - and the change I put in -Xmx128m for allocating max size of 128 megs Starting myrobotlab.bat then looking at the myrobotlab.log I can see the changes which have occured. Its not a full 128 megs because Xmx - is specifying the "maximum" size not the "initial" size, to change the initial size you could add -Xms128m
http://myrobotlab.org/content/way-make-robot-become-smarter
CC-MAIN-2017-51
refinedweb
886
68.47
If you have a requirement to save and serve files, then there are at least a couple options. - Save the file onto the server and serve it from there. - Mongo[^n] provide GridFS[^n] store that allows you not only to store files but also metadata related to the file. For example: you can store author, tags, group etc right with the file. You can provide this functionality via option 1 too, but you would need to make your own tables and link the files to the metadata information. Besides replication of data is in built in Mongo. Bottle You can upload and download mongo files using Bottle[^n] like so: import json from bottle import run, Bottle, request, response from gridfs import GridFS from pymongo import MongoClient FILE_API = Bottle() MONGO_CLIENT = MongoClient('mongodb://localhost:27017/') DB = MONGO_CLIENT['TestDB'] GRID_FS = GridFS(DB) @FILE_API.put('/upload/< file_name>') def upload(file_name): response.content_type = 'application/json' with GRID_FS.new_file(filename=file_name) as fp: fp.write(request.body) file_id = fp._id # If the file is found in the database then the save # was successful else an error occurred while saving. if GRID_FS.find_one(file_id) is not None: return json.dumps({'status': 'File saved successfully'}) else: response.status = 500 return json.dumps({'status': 'Error occurred while saving file.'}) @FILE_API.get('/download/< file_name>') def index(file_name): grid_fs_file = GRID_FS.find_one({'filename': file_name}) response.headers['Content-Type'] = 'application/octet-stream' response.headers["Content-Disposition"] = "attachment; filename={}".format(file_name) return grid_fs_file run(app=FILE_API, host='localhost', port=8080) And here's the break down of the code: Upload method: Line 12: Sets up upload method to recieve a PUT request for /upload/<file_name> url with file_name variable holding the value that user passed in. Line 15-17: Create a new GridFS file with name: file_name and get the content from request.body. request.body may be StringIO type or a File type because Python is smart enough to decipher the body type based on the content. Download method: Line 29: Find the GridFS file. Line 30-31: Set the response Content-Type as application-octet-stream and Content-Disposition to attachment; filename=<file_name> so the client can download the file. Line 33: Return the GridOut object. Based on Bottle documentation (below), we can return an object which has .read() method available and Bottle understands that to be a File object. Bottle handles return of File object(s) such that they can be downloaded. File objects Everything that has a .read() method is treated as a file or file-like object and passed to the wsgi.file_wrapper callable defined by the WSGI server framework. Some WSGI server implementations can make use of optimized system calls (sendfile) to transmit files more efficiently. In other cases this just iterates over chunks that fit into memory. That is as simple as it gets as far as Bottle is concerned. Now on to implementing the same functionality in Flask. Flask You can upload/download files using Flask[^n] like so: import json from gridfs import GridFS from pymongo import MongoClient from flask import Flask, make_response from flask import request __author__ = 'ravihasija' app = Flask(__name__) mongo_client = MongoClient('mongodb://localhost:27017/') db = mongo_client['TestDB'] grid_fs = GridFS(db) @app.route('/upload/ ', methods=['PUT']) def upload(file_name): with grid_fs.new_file(filename=file_name) as fp: fp.write(request.data) file_id = fp._id if grid_fs.find_one(file_id) is not None: return json.dumps({'status': 'File saved successfully'}), 200 else: return json.dumps({'status': 'Error occurred while saving file.'}), 500 @app.route('/download/ ') def index(file_name): grid_fs_file = grid_fs.find_one({'filename': file_name}) response = make_response(grid_fs_file.read()) response.headers['Content-Type'] = 'application/octet-stream' response.headers["Content-Disposition"] = "attachment; filename={}".format(file_name) return response app.run(host="localhost", port=8081) You might notice that the Flask upload and download method(s) are very similar to Bottle's. It differs only in a few places listed below: Line 14: Routing is configured differently in Flask. You mention the URL and the HTTP methods that apply for that URL. Line 17: Instead of request.body you use request.data to get the request content. Line 28-31: In Flask, if you want to add additional headers, one way to do so is to "make the response" with the file content and set up the appropriate headers. Finally, return the response object. Questions? Thoughts? Please feel free to leave me a comment below. Thank you for your time.___ **Github repo**: ___ #####References: [^n]: MongoDB: [^n]: GridFS: [^n]: Bottle: [^n]: Flask: [^n]: PyMongo GridFS doc [^n]: Get to know GridFS: ___
https://javawithravi.com/upload-and-download-file-from-mongo-using-bottle-and-flask/
CC-MAIN-2022-33
refinedweb
754
52.15
Introduction One of the first steps when looking to gain access to a host, system, or application is to enumerate usernames. Once usernames are guessed or enumerated targeted password based attacks can then be launched against those found usernames. In this blog post, we discuss common techniques that are used to enumerate usernames. While these techniques are not new, there is still some value in discussing them, as they are such an important part of the process of gaining access to systems. Previous Work This blog is a continuation from Ben Williams’ presentation [1] The L@m3ne55 of Passw0rds: Notes from the Field, in which he discusses password-based attacks, a sometimes-underused method of gaining access to a host or system. Username Enumeration Techniques A number of useful and often used techniques for enumerating valid usernames currently exist; they can be categorised into two broad categories, web application and infrastructure-based username enumeration, although others may exist. The following examples are by no means exhaustive; they serve as examples of the types of issues which provide consultants and threat actors alike the ability to enumerate usernames. Web Application Standard Authentication: In standard authentication, a user is required to enter a username and password into a form to gain access to the web application. When entering an invalid username along with a password, a generic message such as “incorrect password” is often returned, suggesting that the username does not exist. However, when entering a valid username and an incorrect password, we will often see a message such as “password incorrect for this user”, suggesting that the username is valid. A malicious user can use automated tools to gather a list of valid usernames using this method. Once valid usernames have been successfully enumerated, a brute-force attempt to retrieve passwords can be used against those usernames. Forgotten Password When using a recovery facility such as a forgotten password function, a vulnerable application might return a message that reveals if a username exists or not. Entering a valid username or email address may return something along the lines of: Your password has been successfully sent to the email address you registered with where we can assume that we have identified a valid username; however, an invalid username or email address would return something along the lines of: email address is not valid or the specified user was not found Predictable Username Formats In some cases user IDs are created with specific predictable sequences or formats. For example, we can view users with IDs created in sequential order: - CN000100 - CN000101 Armed with this information, an automated attack can take place by increment the value and using a technique such as the forgotten password functionality (discussed above) to determine if a username is valid. It is worth noting that that the above methods can be expanded to include more than simply the message that is returned; other factors should also be analysed when attempting to enumerate valid usernames [2], including: - The error code received on login pages for valid and non-valid credentials; - URLs and URL redirections for valid and non-valid credentials; - Web page titles for valid and non-valid credentials; WordPress Default Installation WordPress is a free and open-source content management system. [3] Under a non-hardened WordPress installation, it is possible to enumerate usernames. For example, wpscan [4] can be used to enumerate WordPress usernames. Below is an example of the type of output received from the enumerate users module (ruby ./wpscan.rb –url –enumerate u): +----+---------------+------+ | Id | Login | Name | +----+---------------+------+ | 1 | administrator | | | 2 | edward | | | 3 | gareth | | | 5 | dylan | | | 6 | dafydd | | | 7 | sarah | | +----+---------------+------+ Coupled with the ability to identify the /wp-admin/ admin page, we could then use usernames to perform a targeted brute-force attack. Bespoke WordPress Username Enumeration There are instances when standard tools, like wpscan do not work when looking to enumerate usernames. It is worth taking the time to look at the structure at the website and determine the possibility of a bespoke method of enumerating usernames. The following is an example, redacted, script that has been used in the past to enumerate valid usernames for WordPress sites when standard tools were not found to be successful: #!/usr/bin/python #EDW NCCGroup.trust #enumerate usernames from authors url from urllib2 import Request, urlopen, URLError, HTTPError import re import optparse p = optparse.OptionParser("usage: %prog -f file", version="%prog 0.1") p.add_option("-f", "--file", dest="file", type="str", help="name of file for usernames") (options,args) = p.parse_args() if len(args) != 0: parser.error("parser error") exit() file = options.file filename = file try: f = open(filename,'a') except: print "unable to open file" req = Request('') try: response = urlopen(req) except HTTPError as e: print 'The server couldn\'t fulfill the request.' print 'Error code: ', e.code except URLError as e: print 'We failed to reach a server.' print 'Reason: ', e.reason else: if response.getcode() == 200: for a in response: m = re.search("(.*)/\" title=", a) if m: f.write(m.group(1)+"\n") Infrastructure Based Enumeration Default/Common Usernames A number of common, well-known, usernames exist on default installations of operating systems and software. This information can be used to create a list of usernames. For example, it is reasonable to assume, unless there is evidence to the contrary, that a Windows host will have an account called administrator. The following is a sample list of common, well-known usernames: - administrator - root - guest - backup - test - Service accounts, for example: SophosManagement SophosUpdateMgr Username Enumeration through Port Identification When conducting an internal pen test, it is common practice to profile hosts; this is normally done through a port scan, as open ports can often lead themselves to username enumeration. For example, identifying TCP port 1521 on a host will, in more cases than not, indicate that the host has an oracle user. Similarly, an open TCP port of 5432 will often have a user named postgres. Guessable Usernames It is often the case that usernames are guessable, because they are all created using a common well-known format. After discovering the format, for example through open source intelligence (OSINT), it would be possible to generate a list of possible usernames. Figure 1: OSINT to Discover Username Format [5] Common examples of username formats include: RID Cycling Through a process of RID cycling it is possible to enumerate all domain users from a Windows 2003 domain controller. This method will work on Windows 2003 domain controllers, as the SID of the “domain users” group can then be enumerated; this was done to ensure a good level of compatibility and the same technique will not work on Windows 2008 domain controllers. With this information it is then possible to iterate through the RIDs to enumerate users. The following is an example of automated username enumeration using GetAcct [6] from a Windows 2003 domain controller: Figure 2: RID Cycling Results Kerberos Username Validation With RID cycling becoming less common, it is possible to elicit valid usernames from the Kerberos service of a domain controller. When an invalid username is requested, the server will respond using the Kerberos error code KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN, allowing us to determine that the user name was invalid. Valid user names will elicit either the TGT or an AS-REP response or the error KRB5KDC_ERR_PREAUTH_REQUIRED, signalling that the user is required to perform pre-authentication and that the user is valid [6]. Using the krb5-enum-users [6] nmap nse script, users can be enumerated from a domain controller, as shown in the following example: Figure 3: Kerberos Username Enumeration OpenSSH Username Enumeration Certain versions of OpenSSH suffer from a timing-based attack: if a valid username with a long password is given, the time taken to return is noticeably longer than for an invalid username with a long password. The following is an example of this, for which a custom script was written; as can be seen both the “root” and “ed” users have been enumerated. The figure on the right-hand side is the time it took to respond, and for the enumerated usernames the time is significantly greater than for non-enumerated users: Figure 4: OpenSSH Username Enumeration SMTP Username Enumeration Several methods exist that can be used to abuse SMTP to enumerate valid usernames and addresses; namely VRFY, EXPN, and RCPT TO. VRFY This command will request that the receiving SMTP server verify that a given email username is valid. The SMTP server will reply with the login name of the user. This feature can be turned off in sendmail, because allowing it can be a security hole. VRFY commands can be used to probe for login names on a system. An example of this using VRFY is given below, where the root user is enumerated: $ telnet 10.0.0.1 25 Trying 10.0.0.1... Connected to 10.0.0.1. Escape character is '^]'. 220 myhost ESMTP Sendmail 8.9.3 HELO 501 HELO requires domain address HELO x 250 myhost Hello [10.0.0.99], pleased to meet you VRFY root 250 Super-User <root@myhost> VRFY blah 550 blah... User unknown EXPN EXPN is similar to VRFY, except that when used with a distribution list, it will list all users on that list. This can be a bigger problem than the “VRFY” command since sites often have an alias such as “all”. An example of this using EXPN is given below: $ telnet 10.0.10.1 25 Trying 10.0.10.1... Connected to 10.0.10.1. Escape character is '^]'. 220 myhost ESMTP Sendmail 8.9.3 HELO 501 HELO requires domain address HELO x EXPN test 550 5.1.1 test... User unknown EXPN root 250 2.1.5 <ed.williams@myhost> EXPN sshd 250 2.1.5 sshd privsep <sshd@mail2> RCPT TO This identifies the recipient of the email message. This command can be repeated multiple times for a given message in order to deliver a single message to multiple recipients. The RCPT TO: technique is extremely effective at enumerating local user accounts on most Sendmail servers. An example of using the RCPT TO method is given next with the ed user enumerated: $ telnet 10.0.10.1 25 Trying 10.0.10.1... Connected to 10.0.10.1. Escape character is '^]'. 220 myhost ESMTP Sendmail 8.9.3 HELO x 250 myhost Hello [10.0.0.99], pleased to meet you MAIL FROM:test@test.org 250 2.1.0 test@test.org... Sender ok RCPT TO:test 550 5.1.1 test... User unknown RCPT TO:admin 550 5.1.1 admin... User unknown RCPT TO:ed 250 2.1.5 ed... Recipient ok ACF2 ACF2 (Access Control Facility) is a commercial, discretionary, access control software security system developed for the MVS (z/OS), VSE (z/VSE) and VM (z/VM) IBM mainframe operating systems [7]. Through the responses received when connecting to the mainframe, AS400 in this instance, it is possible to enumerate valid usernames. The following is a PoC that was created to automate this process: #!/usr/bin/python # EDW NCCGroup.trust # ACF2 Username Enumeration import sys import time import optparse import re import signal from telnetlib import Telnet from socket import * p = optparse.OptionParser("usage: %prog host user port", version="%prog 0.1") p.add_option("-H", "--host", dest="host", type="string", help="specify hostname to run on") p.add_option("-u", "--userfile", dest="user", type="string", help="file of usernames") p.add_option("-p", "--port", dest="port", type="int", default=23, help="port number, default is 23") (options, args) = p.parse_args() host = options.host user = options.user port = options.port def main(): try: u = open(user).read().splitlines() except IOError as e: print "I/O error({0}): {1}".format(e.errno, e.strerror) sys.exit() for n in u: tn = Telnet(host, 23, 120) tn.write('test\r\n') tn.read_some() tn.write(n+'\r\n') tn.read_some() tn.write('pass\r\n') data = tn.read_until('test',1) if re.search (r"SUSPENDED BECAUSE OF PASSWORD VIOLATIONS",data): print ("user found: "+str(n)) elif re.search (r"PASSWORD NOT MATCHED",data): print ("user found: "+str(n)) tn.close() def signal_handler(signal, frame): print "\nCtrl+C pressed.. aborting..." exit()if __name__ == '__main__': signal.signal(signal.SIGINT, signal_handler) main() The results of running this are shown in figure 5: Figure 5: ACF2 Username Enumeration Generic Mitigations - For web applications, create a generic message, preventing the ability for users to elicit usernames. - When creating Active Directory usernames, consider an element of randomness; for example; - dylan.williams could be dylan.williams3280 and - dafydd.williams could be dafydd.williams6782 - For WordPress, there exist a number of plugins that can be used to stop the enumeration of usernames. - Restrict access to /wp-admin by means of IP restriction. - Implement two-factor authentication (Authy, Google) - Change easily guessable and default usernames to more complex, less guessable values. - Ensure all software is running at the latest, stable release. - Harden services such that null binds cannot be established and remote root authentication is not allowed. Summary While username enumeration is not new, it is still an important technique when looking to gain access to a host or system. Broad techniques for enumerating usernames from web applications, and from infrastructure including Windows, *nix, and mainframe environments, have been discussed. The importance of being able to enumerate usernames from a security consultant’s or threat actor’s perspective cannot be underestimated; while mitigating username enumeration is by no means a silver bullet, it should be included within an organisation’s risk assessment along with strong passwords, robust patching, and appropriate segregation. References [1] [2] [3] [5] [6] [7] [8] Published date: 10 June 2015 Written by: Information Security Expert
https://research.nccgroup.com/2015/06/10/username-enumeration-techniques-and-their-value/
CC-MAIN-2022-05
refinedweb
2,285
54.32
I wound up fooling around with the code and I figured out the issue. Broj1, your code wound up working flawlessly. I just need to alter a few things on my end. Thanks again! It does not seem to be appending to the query from the if statements. For some reason it is not picking up the if conditions, it is just displaying everything from the database. Thanks for the responses guys, I'll give them a try! Hey guys, So I've been devloping a TV Guide website that will allow you to fill out a variety of different fields, then searches the database for that information, and returns the results. The way I have it right now, the user has to fill out all of the fields in order for the search to work properly. I want to have it so even if the user only enters information for one field, it will ignore all of the empty fields, and still search the database. Here is the code for the form: <form method="POST" action="guideresult.php"> <table cellspacing="7" cellpadding="0" bgcolor="#ecedef" align="center"> <tr> <td>DATE</td> <td>TIME</td> <td>TYPE</td> <td>GENRE</td> <td>RATING</td> <td>PROGRAM NAME SEARCH</td> <td>Submit</td> </tr> <tr> <td> <input type="date" name="date"> </td> <td> <input type="time" name="startime"> </td> <td> <select name="type"> <option value="NULL"></option> <option value="TV Show">TV Show</option> <option value="Movie">Movie</option> </select> </td> <td> <select name="genre"> <option value="NULL"></option> <option value="Action">Action</option> <option value="Animated">Animated</option> <option value="Comedy">Comedy</option> <option value="Crime">Crime</option> <option value="Drama">Drama</option> <option value="Entertainment">Entertainment</option> <option value="Family">Family</option> <option value="Fantasy">Fantasy</option> <option value="Horror">Horror</option> <option value="Musical">Musical</option> <option value="Reality">Reality</option> <option value="Romance">Romance</option> <option value="ScFi">ScFi</option> <option value="Sports">Sports</option> <option value="Talk Show">Talk Show</option> </select> </td> <td> <select name="rating" class="rating"> <option value="NULL"></option> <option class="tvshow" value="TV-Y">TV-Y</option> <option class="tvshow" value="TV-Y7">TV-Y7</option> <option class="tvshow" value="TV-G">TV-G</option> <option class="tvshow" value="TV-PG">TV-PG</option> <option class="tvshow" value="TV-14">TV-14</option> <option class="tvshow" value="TV-MA">TV-MA</option> <option class="movie" value="G">G</option> <option class="movie" value="PG">PG</option> <option class="movie" value="PG-13">PG-13</option> <option class="movie" value="R">R</option> <option class="movie" value="NC-17">NC-17</option> </select> </td> <td><input type="text" name="programname" size="40"></td> <td><input type="submit" value=" Submit "></td> </tr> </table> </form> And the results page: <?php require_once 'connect.php'; $db_server = mysql_connect($db_hostname, $db_username, $db_password); if (!$db_server) die("Unable to connect to MySQL: " . ... Hello,(); I have corrected some of my minor errors, but now when I execute one of the options, I get the following error. S - Search For Address A - Add Address R - Remove Address E - Edit Address V - View Address Q - Quit a [color=red]./addressbook: line 27: fnPhoneAdd: command not found[/color] [/code] Anyone? This is what I currently have so far. The script needs to Add, Edit, Search, and View the address_file. Any help would be greatly appreciated. [code] ... Anyone? Thanks in advance Hello, I've created a simple menu that will execute some commands, and display them on the screen. However, when I try to execute them in the menu, nothing appears. I was not sure if I went about this the right way. Any help is appreciated. [code] [/code] [QUOTE=vmanes;1509146]Line 35 - what's 'i' doing there? I take it your loop at lines 34,35 should be displaying just the high temps and you will write a similar loop for the low temps? So, how are you pointing to the correct row of temp data?[/QUOTE] Oh wow, I see what you mean, I must have passed right over that. I had it in another loop that was dealing with 'i' and forgot to change it. Everything is printing out just fine now. Small bugs like that always kill me haha. Thanks. I'm trying to create a program where it reads in the high and low temperatures for each month of the year using a 2D array. However, I cannot seem to get it working correctly. It outputs a bunch of strange numbers. [code] using namespace std; const int MONTHS = 12; void getData(double [][2], int); int main() { double temperatures[MONTHS][2]; getData(temperatures, MONTHS); return 0; } void getData(double t[][2], int m) { int i, j, k; for(i=0; i < m; i++) { cout << "Enter higest temperature for the month " << i+1 << ": "; cin >> t[i][0]; cout << "Enter lowest temperature for the month " << i+1 << ": "; cin >> t[i][1]; } cout << "Jan" << setw(5) << "Feb" << setw(5) << "Mar" << setw(5) << "Apr" << setw(5) << "May" << setw(5) << "Jun" << setw(5); cout << "Jul" << setw(5) << "Aug" << setw(5) << "Sep" << setw(5) << "Oct" << setw(5) << "Nov" << setw(5) << "Dec" << setw(5) << endl; for(j=0; j<m; j++) cout << t[i][0] << setw(5); } [/code] Here is what the output is suppose to look like: [code] Month: Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec High: 43 45 51 57 64 70 75 75 69 59 49 45 Low: 34 34 38 42 49 54 58 57 52 46 39 36 [/code] And here's what my output currently looks like. [code] Jan Feb Mar Apr May Jun Jul Aug Sep ... You almost have the code for that... You have written: int winnerIndex(int votes[]) { int i; int maximum; maximum = votes[0]; for(i=0; i<5; i++) { if(votes[i] > maximum) maximum = votes[i]; } return maximum; } Now if you add something like this: int winnerIndex(int votes[]) // This function now returns the index in votes that is the maximum: { int index(0); int maximum(votes[0]); for(int i=1; i<5; i++) { if(votes[i] > maximum) { maximum = votes[i]; index=i; } } return index; } Note I have changed a few thing: (a) you don't need to loop through the i=0 index because you have effectively processed that in the initialization stage. (b) I have elected to maintain two variables, maximum and index, you could do it with just the index value but each comparison would need to be to votes[index]. (c) There is not code to deal with a tie. You might like to change this function to include an extra input parameter to the size of the array, at the moment it is hard coded to 5, that is really ugly. And a silent bug waiting to happen when you add/remove a candidate. Thanks man, I finally got it working how I needed it to work. Much appreciated with the tips at the bottom, they really helped. [QUOTE=BTW8892;1494521]Thank you so much! I can't believe I overlooked that, stupid mistake on my part haha.[/QUOTE] I just had one more question, how would I got about getting the name of the winner with the most votes? [QUOTE=dkalita;1494505]before solving your problem lets look at the following code snippet [CODE] int x=4; int y=8; float z=x/y; cout<<z; [/CODE] Can you try this code and see what output you get. You might be expecting the output to be: 0.5 right ? But have a second look at the code. You are dividing an integer by another integer, so the result will also be an integer. Hence the output of the above code snippet will be: 0 instead of 0.5. I hope you got the problem with your code. If you want to get the exact value after division, you will have to typecast at-least the number you are dividing or the divisor to float type. Hence the above code snippet should be corrected to [CODE] int x=4; int y=8; float z=((float)x)/y;/or float z=(float)x/y;/ cout<<z; [/CODE][/QUOTE] Thank you so much! I can't believe I overlooked that, stupid mistake on my part haha. [QUOTE=pseudorandom21;1494486]percent[i] = totalvote; that isn't correct. Try making it a percentage based on totalvote and the number of votes they actually got.[/QUOTE] Sorry, that was an error, I tried testing to see what was wrong with it before. Here is the actual line that outputs 0. [code]percent[i] = (votes[i] / totalvote) * 100;[/code] I'm creating a voting program in which you input the persons name and how many votes they have received. It than prints out the persons name, their amount of votes and the percentage of their votes from the total. I can't seem to get the percent of total to work correctly, everything else is working fine. I'm also aware that nothing lines up correctly, i'll be fixing that after I get it working. Any ideas? thanks. [code]#include <iostream> using namespace std; int sumVotes(int list[]); int winnerIndex(int list[]); int votes[20]; double percent[10]; int main() { string name[10]; int i; for(i=0; i<5; i++) { cout << "Enter the name of the candidate and the number of votes: "; cin >> name[i]; cin >> votes[i]; } int totalvote; totalvote = sumVotes(votes); int w; w = winnerIndex(votes); cout << "Name" << setw(25) << "Votes Receieved" << setw(25) << "% of Total" << endl; for(i=0; i<5; i++) { percent[i] = totalvote; cout << name[i] << setw(21) << votes[i] << setw(20) << percent[i] << endl; } cout << endl << setw(30) << "Total: " << totalvote; return 0; } int sumVotes(int votes[]) { int i; int total = 0; for(i=0; i<5; i++) total = total + votes[i]; return total; } int winnerIndex(int votes[]) { int i; int maximum; maximum = votes[0]; for(i=0; i<5; i++) { if(votes[i] > maximum) maximum = votes[i]; } return maximum; } [/code] [b]Output:[/b] [code]Enter the name of the candidate and the number of votes: Person1 5000 Enter the ... Hey, I want to start this new project for my own site, but I'm not really sure on how to start. Was wondering if anyone could help start me off, with some ways to organize and start it out. Basically I want to start a hockey league script, that contains 30 teams and it tracks stats for that team, and the players on those teams. I've created a script that can handle a single team site, now I just want to expand it to a league script that can handle multiple teams at once. If anyone knows how I would go about starting this, or if there are any pre-made scripts out there. If you have a script like this and are selling it, you can let me know, I'd be interested. Thanks again. Sorry about that, I completely forgot to post these two links above. Here you go. Basically i want it to check if a player_stats field is empty, and if it is empty, it won't post it in the database. [b]Entry Form:[/b] [url][/url] [b]Insert Code:[/b] [url][/url] Alrite, So i have a an entry page with a form on it, that simply enters the data into the specified database. I have a player stats entry page, and there are five lines for players. However, if not all 5 players play, I want it to check if that field is empty, and if it is, i don't want it to enter it into the database. Because when i enter a game, and i leave a player field blank, all it does is enter in all 0's for that player. Hey, I have yet another question, so here it is. I have a roster page, and i have multiple players on the page. The players each have a unique playerid stored in the database. Now what i want to do is link up their name on the roster list that has their playerid in the url. It will than link to another page, and post all the players information based on that specific id. I believe its call parameter passing in a URL, if im right. Any way, here is the code i currently have. CODE: [url][/url] hey coppercup, I did what you said, and i got the following error after. [img][/img] Thanks a ton! Very helpful, easy to understand and was very informative! [QUOTE=coppercup;1262134]First, each set of Player form fields will must have unique names. You can't have "playerid" for five different form fields on the same form. So you will need something like "playerid_1", "playerid_2", etc. Each of the fields for each player must be named in the same fashion, having a suffix of "_X" at the end. This is how your script will identify the fields for each individual player. Then, the simplest thing to do is a single insert for each set of fields – five inserts. You're done. We have also written scripts that can identify an infinite number of such fields by evaluating the field names (the key) to find out how many [players] have been submitted, then looping through each one and doing an insert. This takes more effort to work out, but has advantages. In your case, if you only plan to insert five players' info at a time, you can either do five independent inserts or a simple loop to insert five.[/QUOTE] Now for the five inserts, would i need to insert the desired name like playerid_1 or would i just input playerid when writing the code to insert it. So i have a system, and i was creating a page to input player stats. However there are 5 rows that have be inserted per game from a single form. I have the code for the form and the insert code. I was just wondering if someone could help me so it will insert all 5 rows in the form into a desired table when entered through the form. Thanks Form: [url][/url] Insert Code: [url][/url] Any other replies? Hey, i was wondering if it was possible to find the sum of all the values in certain row and than find the max value of all of them. I have the following code, but it only grabs the largest value from all the rows, is it possible to sum all them, and than grab the max based on the persons playerid? [code=php]$result = mysql_query("SELECT gamertag, goals FROM player_stats WHERE goals = (select Max(goals) from player_stats)GROUP BY playerid; ");[/code] Is it also possible to grab a profile picture from another table based on the code above? Alrite so i am trying insert new values into my table, and when i try to, i get the following error [code]#1062 - Duplicate entry 'Hit Em High' for key 'team_home' [/code] Any idea how to fix this? I've heard it could be a primary key error, but im not sure. Thanks! Hello, I am trying to link up a certain row i am outputting on a site. How would i go about putting multiple lines inside of one echo statement. [code=php]echo "<a href='profile.php?player_id=>'".$row['gamertag']."</a>";[/code] That is the current code i am using, but it is now, working, any idea on how to make it work?
https://www.daniweb.com/members/773712/btw8892/posts
CC-MAIN-2018-22
refinedweb
2,589
68.1
Götterdämmerung The label "epic Viking doom metal" is not far off the mark for these Swedes. It does make them seem a bit more unique than they really are. An unusual genre label is just good marketing, and this is coming from the same label who brought you Ahab's "Nautik doom metal." In truth, Ereb Altor's Gastrike sounds mostly like Watain, being heavier-than-average black metal, and with a very similar sound in terms of tone, riff-writing, and vocals. But there are key differences, mainly drawn from Bathory's Viking period. True Viking metal should be swathed in keyboards, but beyond the first song they don't play a big part here. The Viking tag is earned from generously-applied epic clean vocals. They tend to stay to the back of the mix, and remain a-lyrical, providing dramatic embellishment. In "I Djupet Så Svart" they do take center stage, telling us something in Swedish (possibly about being in deep darkness). The record is mostly mid-paced, sometimes faster, and the riffs are almost entirely black metal in character. They do stray outside of purity at times, such as the blackened death metal style riff in the middle of closer "Seven." The solos are raw, and sometimes very heavy metal, so that's a nice touch. The real question isn't what genre label is appropriate, though. The real question is whether it's any good. And it is. The synths of Viking metal can be a big turnoff for a lot of people, since they can sound extremely cheesy. By mostly abandoning the keyboards and sticking to epic clean vocals, they've maintained all the drama of Viking metal while losing most of the cheese. The songwriting, while not entirely original ("Boatmans Call" and parts of "Seven" sound extremely familiar), is memorable and interesting. And the record is produced in that sweet spot where it's clean and powerful, but not over-polished. Viking doom metal? Not quite, but Ereb Altor's Gastrike is a very good Viking metal album. It might even appeal to those who normally ignore the genre out of cheese aversion. The Verdict: 4 out of 5 stars Gastrike MP3 link to Amazon import on Amazon on Napalm Records (available in Europe June 29)
http://fullmetalattorney.blogspot.com/2012/06/ereb-altor-gastrike-2012.html
CC-MAIN-2017-43
refinedweb
383
72.26
i have a application that is giving me a bit of a migrene ... never mind the rest ... the bit that i have the problem with : in an applet i wanna list a form .. to get info from the users my steps are .. make a jFrame, add a panel to the frame (frame.setContentpane() .. etc ) and in the panel put the necessary components.. the problem -- i want my frame to be 600/500 .. but i need the panel to be something like 600/1000 .. and i can't get the frame ( or panel) to display in a scrollable manner ... boiled down to the essentials .. this is the code : public class ScroolTest { public void createFrame(JScrollPane contentPane) { jFrame frame = new jFrame(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setBackground(Color.RED); frame.setSize(new Dimension(600, 400)); frame.setResizable(false); frame.setContentPane(contentPane); frame.pack(); frame.setVisible(true); }// end Frame public void createJPanel() { JPanel panel = new JPanel(); panel.setOpaque(true); panel.setBackground(Color.LIGHT_GRAY); panel.setAutoscrolls(true); panel.setSize(new Dimension(1000, 1000)); JScrollPane scr = new JScrollPane(); scr.setPreferredSize(new Dimension(600, 300)); scr.add(panel); createFrame(scr); }// end createPanel public static void main(String arg[]) { ScroolTest st = new ScroolTest(); st.createJPanel(); }// end main i understand that JPanel doesn't implement "Scrollable" interface .. did that too .. still doesn't seem to work .. some help please .. or other sugestion of implementation .. thank's Last edited by Bogdan_1; 08-23-2006 at 08:14 AM. Reason: evrica momment Hi there, what you need to do is add a jpanel within another jpanel mand make the first jpanel scrollable. Here is an example snippet of a scrolling jpanel There are lots for snippets and java source downloads that might intrest you to. wow ... now that i think about it .. it make's perfect sence thank you very much thank you again. that solved my problem ... but still there is some head scratching i'll like to get rid of ... a piece of code from the example : Frame frame = new JFrame("Scrolling JPanel"); JPanel insidePane = new JPanel(new SpringLayout()); JPanel mainPane = new JPanel(); insidePane.setLayout(new GridLayout(20,1)); if i comment out " insidePane.setLayout(new GridLayout(20,1)); " the frame will stop displaying scoll's ... Why do i have to change the layout manager from Spring to Grid in order to get scollable area. my first thought was that implementing grid .. forced the pannel to expand to the set number of rows and beeing bigger than the frame is forced to have scoll's ... But even if a set the grid to the smallest number possible .. still the scroll appears. now, when i use Spring ( wich i like vry much ) i know ( or hope i know )that the panel is a certain size because i pinpoint the localtion on the panel .. can someone "Show me the light" thank' you I've posted code below; I think you are looking for this. I don't like the scrolling JPanel link; first of all, the ScrollingJPanel extends JFrame, and in main another JFrame gets created...and this is the one actually used. ScrollingJPanel which is a JFrame never gets touched! As for SpringLayout, there is no problem with using it. The reason why the SpringLayout won't work in the ScrollingJPanel example is b/c there was no call to SpringUtilities.makeCompactGrid( xxx ) As for the ScrollPane in your original examples; if you call pack() before you make your frame visible, all your components will be set to their preferredsize. This defeats the purpose of what I think you are trying to do -- since you are trying to use a ScrollPane, I believe you have predetermined that you want a frame of 600 x 1000, and that if your components total height grows bigger than 1000, you don't want to expand the frame but rather show scrollbars. Does this make sense? For example, if you have 120 JLabels w/ a preferred height = 10, then your min preferred size height will be @ least 1200 and if you call pack() your frame will resize to this. Below in my example, I have also set the scrollbar policy; obviously adjust this as you see fit. If you don't want your frame to expand, remove the pack() call. HTH, Stephen import javax.swing.*; import java.awt.*; /** * Created by IntelliJ IDEA. * User: Stephen Lum */ public class ScrollingJPanelLumExample { public static void main (String args[]){ JFrame frame = new JFrame("ScrollingJPanelLumExample"); JPanel insidePane = new JPanel(new SpringLayout()); insidePane.add(new JLabel("Label 1")); insidePane.add(new JLabel("Label 2")); insidePane.add(new JLabel("Label 3")); insidePane.add(new JLabel("Label 4")); insidePane.add(new JLabel("Label 5")); insidePane.add(new JLabel("Label 6")); insidePane.add(new JLabel("Label 7")); insidePane.add(new JLabel("Label 8")); insidePane.add(new JLabel("Label 9")); insidePane.add(new JLabel("Label 10")); insidePane.add(new JLabel("Label 11")); insidePane.add(new JLabel("Label 12")); insidePane.add(new JLabel("Label 13")); insidePane.add(new JLabel("Label 14")); insidePane.add(new JLabel("Label 15")); insidePane.add(new JLabel("Label 16")); insidePane.add(new JLabel("Label 17")); insidePane.add(new JLabel("Label 18")); insidePane.add(new JLabel("Label 19")); insidePane.add(new JLabel("Label 20")); insidePane.setBackground(new Color(0,0,255)); SpringUtilities.makeCompactGrid(insidePane, 20, 1, 0, 0, 0, 0); JScrollPane sp = new JScrollPane(insidePane, ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS, ScrollPaneConstants.HORIZONTAL_SCROLLBAR_ALWAYS); frame.getContentPane().add(sp); frame.setSize(200,200); frame.pack(); frame.setVisible(true); } } Btw - I left out a very very important point. You should wrap this code in the EDT - Event Dispatch Thread. I don't really want to get into Swing threading; if you're interested in swing then it's a must read -- but truthfully you probably don't need it to set a frame to visible. You will definitely need it if you are doing time consuming operations...long database writes / reads etc. or your gui will be painfully slow and blocking. Runnable runnable = new Runnable() { public void run() { xxx My Code xxx } } EventQueue.invokeLater(runnable); thank you, don't think i'd ever figured it out on my own .. Forum Rules
http://forums.devx.com/showthread.php?155529-Problems-with-piechart&goto=nextoldest
CC-MAIN-2013-48
refinedweb
1,012
60.92
Writing Text Files in C# .NET By Xav Hello, Xav here. There are many amazing things you can do with an application, but one of the most useful is to interact with the user's files. Most programs store data in files, and this can be used for numerous reasons. Here we are using the .NET Framework (I use v3.5). First, add the following statement to the top of the code file, next to the other "using" statements: using System.IO; The .NET Framework provides a namespace for working with the data - System.IO (short for Input/Output). Now - on to reading the most basic of files - the text file! Writing Data It is easier to write data to a file. Here's what you do: We use a StreamWriter class (in the System.IO namespace) to write the data: string path = @"C:\myfile.txt"; StreamWriter sw = new StreamWriter(path)The object is not static - you create a new object (which we here call "sw"), and use the constructor to choose a file. Replace the string "path" with your desired path - and don't forget to include the @ symbol, otherwise C# interprets the backslashes as escape sequences. Next, is the easy part: string bufferOne = "This is text 1."; string bufferTwo = "This is text 2."; sw.Write(bufferOne); sw.Write(bufferTwo); This writes the following text to the text file: This is text 1.This is text 2. If we wanted to have the text on separate lines, we could instead use: sw.WriteLine(bufferOne); sw.WriteLine(bufferTwo); This produces the following result in the text file: This is text 1. This is text 2. It depends on the situation as to which method to use. However, you must use whichever one is most appropriate for the situation. At the end, we must finish off by closing and unreferencing the StreamWriter object. If you do not call Close(), the code may not work: sw.Close(); sw.Dispose(); And that's all there is to it! Remember that you do not have to pass a string in to the Write() or WriteLine() - the method is overloaded, so you can use many data types, including char, byte, int and even boolean. Happy writing!
http://forum.codecall.net/topic/40878-c-tutorial-writing-text-files/
CC-MAIN-2020-45
refinedweb
371
84.27
Re: DNS v. 10 Preferred -- is all this trouble covert? - From: Nocantype <kami17@xxxxxxxxx> - Date: Sat, 25 Oct 2008 15:01:08 -0700 (PDT) Well my system just crashed again. -- went to blue screen. The e-mail I was working on, gone of course. I have to shut off my computer via it's hard button to go off and back on. What the heck, I'll go get the error messages (I wrote down win32k.sys somewhere along the error) C:\DOCUME~1\Owner\LOCALS~1\Temp\WERf0ba.dir00\Mini102508-01.dmp C:\DOCUME~1\Owner\LOCALS~1\Temp\WERf0ba.dir00\sysdata.xml ----------------- end of pasting------------------ JD said: Great new features do not improve basic functionality. Me: yeah, tell me about it! Maybe Internet Explorer is one of your problems. I haven't tried Firefox yet. I think I tried it with DNS 9 and something was... something I didn't like. But yes, I know I should try it. This thing: is what happened today.(Again)is what happened today.(Again)The other day I had to start my system over from a previous day (I was working on a Word file and my screen suddenly went blue) and I completely lost my Dragon and had to make a new user. Could not import my voice files. Sounds like you have significant system configuration problems, orNope, windows XP with the added service pack. It's been solid, afaik. you are using a pre-Windows XP operating system. Every time I close out of Dragon, I get "VoiceBar Damon... [something something]" and I click okay to let it close, but then have to click End Now. That doesn't look good. That's still happening. That's a basic problem you need to resolve. You should work on one problem at a time. That sounds like a good start. Okay, trying Firefox. Yes, I use history and "links" everyday. Maybe that's the same as bookmarks; I hope so, because my links are all lined up at the top of my Internet Explorer screen. Besides switching to Firefox, more important would be to find/hire aI know one really smart guy. But he's so... annoying... I do remember highly skilled techie friend that can clean up your system. Sounds like you have system problems. one time pricing folks, and actually getting on Craigslist and shopping, but so MANY techie people don't know about voice recognition, and a whole interaction, and what to clear, or what could be wrong. (My crazy full of himself friend does not voice recognition -- oh yes, I've thought of biting the bullet for a couple of hours. I know he's expensive though) I'm impressed by Dell's online support, but they are known for cluttering up the users system with all sorts of unnecessary software. If it were my computer, I would do a fresh uncluttered install of Windows XP. And from time to time I would make a known-good backup of my installation. Yes, but every time I ask someone about taking off clutter, they say you have to go down the list and look up each thing with its dot and 3 letters, and it's such a huge job! -- I interrupt you ---- I interrupt you -- I recall offering to help you personally. I think you are somewhat misled misled? All these problems started when I, after I loaded DNS preferred version 10 about the cause of your problems. NaturallySpeaking is a veryK complicated program that interacts heavily with your system, but I'm led to believe that they do it well ( . - Follow-Ups: - Re: DNS v. 10 Preferred -- is all this trouble covert? - From: John Doe - References: - DNS v. 10 Preferred -- is all this trouble covert? - From: Nocantype - Re: DNS v. 10 Preferred -- is all this trouble covert? - From: John Doe - Prev by Date: air jordan fusion shoes for kid ,women, men - nike - Next by Date: Re: DNS v. 10 Preferred -- is all this trouble covert? - Previous by thread: Re: DNS v. 10 Preferred -- is all this trouble covert? - Next by thread: Re: DNS v. 10 Preferred -- is all this trouble covert? - Index(es):
http://newsgroups.derkeiler.com/Archive/Comp/comp.speech.users/2008-10/msg00033.html
CC-MAIN-2016-50
refinedweb
698
85.59
IRC log of tagmem on 2003-09-15 Timestamps are in UTC. 18:47:57 [RRSAgent] RRSAgent has joined #tagmem 18:48:01 [Zakim] Zakim has joined #tagmem 18:50:09 [Ian_] Ian_ has joined #tagmem 18:50:17 [ian__] ian__ has joined #tagmem 18:50:49 [ian__] ian__ has changed the topic to: Agenda: 18:53:38 [timbl] timbl has joined #tagmem 18:53:56 [timbl] Zakim, who is on the phone? 18:53:56 [Zakim] sorry, timbl, I don't know what conference this is 18:53:57 [Zakim] On IRC I see timbl, ian__, Zakim, RRSAgent, Stuart, DanC, Norm 18:54:02 [timbl] Zakim, this is tag 18:54:02 [Zakim] ok, timbl 18:58:55 [Zakim] +Norm 18:59:03 [Norm] Zakim, who's on the phone? 18:59:03 [Zakim] On the phone I see TimBL, Norm 18:59:16 [ian__] zakim, call Ian-BOS 18:59:16 [Zakim] ok, ian__; the call is being made 18:59:17 [Zakim] +Ian 19:00:10 [Zakim] +??P26 19:00:24 [Zakim] +DanC 19:00:26 [Stuart] Zakim, ??P26 is me 19:00:26 [Zakim] +Stuart; got it 19:01:53 [Stuart] zakim, who is here 19:01:54 [Zakim] Stuart, you need to end that query with '?' 19:01:55 [Zakim] +Tim_Bray 19:01:58 [Stuart] zakim, who is here? 19:01:58 [Zakim] On the phone I see TimBL, Norm, Ian, Stuart, DanC, Tim_Bray 19:01:59 [Zakim] On IRC I see timbl, Ian, Zakim, RRSAgent, Stuart, DanC, Norm 19:02:20 [TBray] TBray has joined #tagmem 19:02:29 [Ian] Tim Bray, the RDDLer 19:02:33 [TBray] howdy 19:04:32 [Zakim] +David_Orchard 19:05:26 [Zakim] +Roy 19:05:55 [Ian] Roll call: TB, TBL, SW, NW, DC, RF, IJ, 19:06:02 [Ian] DO 19:06:03 [Ian] Regrets: CL 19:06:05 [Ian] Missing: PC 19:06:24 [Ian] Accept the minutes of the 8 Sep teleconf? 19:06:29 [Ian] SW, DC, TB: Ok 19:06:32 [Ian] Resolved to accept 19:06:37 [Ian] Accept this agenda? 19:06:44 [Ian] 19:07:49 [Ian] DC: I'm interested in talking about intro of arch doc 19:08:18 [DanC] (shoot; where is charmod on our agenda? oh well) 19:08:28 [Ian] Next meeting 22 Sep teleconf? 19:08:45 [Ian] Regrets: SW, DO (likely) 19:09:07 [Ian] SW: I will do a review of upcoming editor's draft by email, but can't attend mtg. 19:09:10 [Ian] NW: I will chair 22 Sep. 19:09:47 [Ian] NW: Reminder that people on last week's call agreed to review doc before 22 Sep. 19:10:45 [Ian] --- 19:10:57 [Ian] QUestion from PC about meeting 10 nov. 19:11:51 [Ian] Proposed: We will schedule call as normal, but understand if people absent due to travel. 19:11:58 [Ian] So resolved. 19:12:02 [Ian] (10 Nov teleconf) 19:12:04 [Ian] ==== 19:12:40 [Ian] SW: Anything going on in rdfURIMeaning-39? I urge TAG to sign up to relevant mailing list. 19:12:47 [Ian] DC: 8-9 people have introduced themselves. 19:12:58 [Ian] DC: There are a few considered mail messages per week, which is a good thing. 19:13:14 [Ian] DC: No call scheduled yet, but progress. 19:13:17 [Ian] === 19:13:29 [Ian] Upcoming events: 19:14:56 [Ian] IJ: Propose that two folks give ok to publish on TR page with some revision. 19:14:57 [Ian] ===== 19:15:06 [Ian] Status of work on namespaceDocument-8. 19:15:12 [Ian] 19:15:20 [Ian] Completed. From 21 July ftf meeting. 19:15:35 [Ian] 19:15:47 [Ian] This Version: September 15, 2003 19:16:15 [Ian] TBray: Produced status section. Haven't found canonical mapping to RDF yet. 19:16:33 [Ian] TBray: So in effect I added a status section, cleanup had already been done, didn't add mapping. 19:17:23 [Ian] DC: This action not done to my satisfaction until sent to www-tag. 19:17:39 [Norm] timbl, i thought you commented on RDDL, but I can't find that comment. Am I mistaken? If not, where did you send it? 19:17:52 [Ian] SW: What about statement about TAG consensus regarding suitability of RDDL as a format for ns docs? 19:19:00 [Ian] [TB sends email to www-tag] 19:20:37 [Ian] TBL: I think the TAG consensus part belongs in finding rather than in RDDL spec. 19:21:10 [Ian] (i.e., the statement in the status section of the RDDL draft) 19:21:28 [Norm] q+ 19:21:46 [Ian] TBray: I promise to do canonical mapping this week. 19:21:57 [Ian] TBL: Please put as a normative appendix in the RDDL spec. 19:22:15 [Ian] TBray: If you want to use it, it needs to exist (at least) independently. 19:22:19 [Ian] ack DanC 19:22:19 [Zakim] DanC, you wanted to ask for help finding "hello world" example 19:22:46 [Ian] TBray: Yes, DC asked for hello world example, and I agreed with him. 19:23:17 [Ian] Action TB: Add hello world example to a new draft this week. 19:24:05 [Stuart] q? 19:24:33 [Ian] NW to TB: Did you not also have to produce a DTD? 19:24:37 [Ian] TBray: [Big sigh] 19:24:52 [Ian] TBray: I'll do this after we're agreed to the content. 19:25:30 [Ian] Action TB: Produce schemaware once TAG has consensus on the syntax. 19:25:51 [Ian] SW: What about impact of RDDL on arch doc? 19:26:00 [Stuart] ack Norm 19:26:09 [Ian] TBray: PC is going to outline the finding. THat will include a sound bite for inclusion in arch doc. 19:26:17 [Ian] Action SW: Ping PC on status of his action. 19:27:06 [Ian] q? 19:27:15 [Ian] ==== 19:27:23 [Ian] Completed action IJ 2003/07/21: Update Deep linking finding (i.e., create a new revision) with references to German court decision regarding deep linking. No additional review required since just an external reference. (Done) 19:27:35 [Ian] 19:28:16 [DanC] policies section? 19:29:14 [Ian] TBray: How about "Public policy actions" instead? 19:29:54 [Ian] IJ: A summary blurb in English might also be useful. 19:31:08 [Ian] Action TB: Ask Lauren Wood to review German text to see if applicable. 19:31:27 [Ian] 19:32:35 [Ian] Action IJ: Take back to Comm Team publicity of this document. 19:32:46 [DanC] I seem to remember that discussion of publicity around this finding. 19:33:03 [Ian] ------ 19:33:12 [Ian] whenToUseGet-7: 9 July 2003 draft of URIs, Addressability, and the use of HTTP GET and POST 19:33:12 [Ian] * Action DO 2003/09/08: DO to send additional comments, due 12 Sep. 19:33:22 [Zakim] -David_Orchard 19:33:22 [Ian] 19:33:31 [Zakim] +DOrchard 19:33:42 [Ian] DO commetns: 19:35:42 [Ian] TBray: WSDL WG aware of the finding, right? 19:35:48 [Ian] DO: Yes. But they distinguish "GET" from safety. 19:36:04 [Ian] DC: Support for GET seems too low-level. 19:36:36 [Ian] DO: Customers are asking for a way to be able to mark an operation as safe. 19:36:57 [Ian] DC: Yes, mark as "safe" and use the appropriate binding at the protocol level (i.e., depends on the protocol). 19:37:58 [Ian] DC: Mark something at abstract level (e.g., "get stock quote") as safe; in protocol layer, it's bound to whatever the appropriate safe operations are. 19:39:05 [Ian] q+ 19:39:42 [Ian] TBray: In DO's note, are we still asking the WSDL WG to do something? 19:40:06 [Stuart] ack DanC 19:40:26 [Ian] DC: You can't currently say in WSDL that an operation is safe. 19:40:39 [Ian] SW: I agree with DC that marking safe should take place at abstract layer. 19:40:53 [Ian] TBray: We are also asking for buy-in from WS community that safe operations should be done with GET. 19:41:30 [Ian] DO: WSDL WG has accepted as a MUST that they have to accept the SOAP 1.2 binding. 19:41:43 [Ian] DO: But I didn't see the marking of operations as safe being required. 19:41:48 [Ian] TBray: I'm fine with our finding. 19:42:22 [Ian] TBray: I think we should ask the WS community to (1) investigate the possibility of building in a formalism to express the fact that an operation is safe and (2) encouraging, in specs, that developers implementing safe operations implement them with GET. 19:42:39 [Ian] DO: I'm comfortable with (1). 19:42:44 [Ian] q- 19:45:02 [Ian] TBray: I think we need to encourage people to use GET when they are using big globs of SOAP inefficiently. 19:45:30 [Ian] TBray: We have agreed with strong consensus that safe operations should be done with GET. 19:45:39 [Ian] TBray: If there are WGs that disagree, we need to explore this. 19:46:30 [Ian] DO: I am happy with first para of 6 w.r.t. to comments from Noah. 19:47:24 [Ian] TBray: In SOAP 1.1, I think it was wrong to only describe a POST binding and to ignore GET. 19:47:31 [Ian] TBray: SOAP 1.2 gives equal treatment to GET/POST. 19:47:35 [DanC] equal treatment? not for safe ops. 19:47:47 [Ian] TBray: I think that there are still a nubmer of cases where, even if you have a safe operation, you still want to do with POST. 19:47:57 [Ian] s/TB/DO for 3 previous assertions. 19:48:15 [Ian] DO: I think it's moving too far to say "You should only do it this particular way...." 19:48:24 [Ian] TBray: I agree with you for case of long URIs, etc. 19:48:30 [Stuart] q+ 19:49:01 [Ian] ack Stuart 19:49:17 [Ian] SW: I think that we are haggling over this statement: "However, to represent safety in a more straightforward manner, it should be a property of operations themselves, not just a feature of bindings." 19:49:23 [Ian] SW: SHould we give more rationale for our requset. 19:49:38 [Ian] DO: I think our last sentence is fine. But I hear TB saying he wants something stronger. 19:50:31 [TBray] 19:50:36 [Stuart] ack DanC 19:50:36 [Zakim] DanC, you wanted to respond re "only" GET 19:51:02 [Ian] DC: Our finding doesn't say "always use get"; it goes to great length. But the bottom line is it says "use get for safe operations". The SOAP spec doesn't give equal treatment for safe operations; it says use GET. 19:51:06 [DaveO] DaveO has joined #tagmem 19:51:19 [Ian] DC: Our position is pretty clear on this, it is "For safe operations, use GET.....except...." 19:51:33 [DanC] (SOAP 1.2 to wit ) 19:51:42 [Ian] Noah's comments since: 19:51:48 [Ian] 19:52:21 [Ian] Suggested replacement text: 19:52:25 [Ian] SSL can be used to protect information carried by either GET or POST 19:52:25 [Ian] operations. In situations where use of SSL or other connection level 19:52:25 [Ian] security is inappropriate, POST may be used to carry credentials or other 19:52:25 [Ian] information needed to authenticate an otherwise safe retrieval. Note too 19:52:25 [Ian] that access to an audited resource typically incurs an obligation, I.e. to 19:52:26 [Ian] have the access logged, and thus must be performed using POST. 19:53:19 [Ian] TBray: I'm inclined to accept this. 19:53:25 [Ian] DC: I think that's cost-effective. 19:53:53 [Ian] TBray: Add some of NM's language (e.g., when I need to authenticate all the way into some application) 19:54:06 [DanC] cost effective... thought I wish for more time to think about it 19:55:14 [Ian] Action IJ: Incorporate NM comments and publish revision. If nobody shouts "Stop!" then we consider the finding accepted by the TAG. 19:56:09 [Ian] TBray: Back to other point - our position on use of GET is pretty strong, so if there are WGs that are moving in the opposite direction, we should interact with them. 19:56:20 [Ian] DO: I monitor WSDL WG. 19:58:05 [Ian] DO: I'd like to point out next draft of finding to WSDL WG and say "I don't see this in your reqs. If WG does not intend to satisfy this requirement, please let's chat." 19:58:15 [TBray] q+ 19:58:34 [Ian] SW: I think that the WSDL's statement of their requirement is a misstatement of what we are saying. 19:59:05 [Ian] DO: First para of section 6 relates to SOAP 1.2. 20:00:04 [Ian] DO: As a result of changes to SOAP 1.2, WSDL says "support SOAP 1.2". There's a piece missing in WSDL, which I want to ask them about (concerning second para of section 6 in finding). I think that their issue is more related to SOAP 1.2, not an additional req. 20:00:27 [Ian] ack TBray 20:00:52 [timbl] q+ 20:00:59 [Ian] TBray: It would be great if someone gave us pointers into relevant specs where this issue is relevant. 20:01:17 [Ian] Action DC: Provide TAG with pointers into WS specs where issue of safe operations is manifest. 20:01:21 [Stuart] ack timbl 20:04:04 [Ian] [Discussion of TAG / WS liaison] 20:06:00 [Ian] DC: Have we told WSDL WG that we want their spec to look like this? 20:06:52 [Ian] DO: There has been lots of dialog. 20:07:25 [Ian] DO: This is a tough one since people have a certain mindset. I don't think that there's complete understanding of the issues on both sides yet. 20:08:40 [DaveO] R128 20:08:40 [DaveO] InterfaceBindings SHOULD provide for mapping Message content to WSDLService location URIs. (From DO. Last discussed 22 Jan 2003.) 20:08:54 [DaveO] 20:09:13 [Zakim] + +1.949.679.aaaa 20:09:16 [Zakim] -Roy 20:09:56 [Ian] Action DO: Ask WSDL WG to look at finding; ask them if marking operations as safe in WSDL is one of their requirements. 20:10:20 [DanC] coo. 20:10:22 [DanC] cool. 20:10:43 [Ian] ===== 20:10:52 [Ian] # contentTypeOverride-24: 9 July 2003 draft of Client handling of MIME headers 20:10:52 [Ian] 1. Comments from Roy on charset param 20:10:52 [Ian] 2. Comments from Philipp Hoschka about usability issues when user involved in error correction. Is there a new Voice spec out we can point to for example behavior? 20:10:52 [Ian] 3. Comments from Chris Lilley 20:10:53 [DanC] (so issue 7 will go into pending state?) 20:10:54 [Ian] 4. Change "MIME headers" to "server metadata" in title? 20:10:58 [Ian] yes 20:11:05 [Ian] Client handling of MIME headers 20:11:12 [Ian] 20:11:23 [Ian] RF comments: 20:11:28 [Ian] 20:12:15 [Roy] Roy has joined #tagmem 20:12:54 [Stuart] 20:13:14 [Ian] IJ to RF: Could you suggest replacement text for your points? 20:13:56 [Roy] I can suppy text 20:14:44 [Ian] TB reply to RF: 20:15:18 [Ian] TBray: I'm arguing that in the case of XML, it's actively harmful to provide a charset unless you are really certain you're right. 20:15:39 [Ian] RF: My problem has to so with some XML variations, such as XHTML. 20:16:34 [Ian] RF: E.g., if a system is set up to do a security check on content, they will tag it with appropriate charset for that document, whether they are certain whether the content in the document is really of that charset. Their instructions to the client is to ONLY use a particular charset. 20:17:02 [Ian] TBray: In what scenario is it desirable to tell the client to only use one charset. 20:17:22 [Ian] RF: There are security holes in some browsers that make them vulnerable when trying to do char code switching. 20:17:48 [Ian] RF: Server tells browser "Only interpret this data in the following way..." 20:19:01 [Ian] RF: Not all xml parsers are correct xml parsers. 20:19:14 [DanC] client vulnerability, but our advice to servers might make it worse 20:19:35 [Ian] TB summarizing: 20:19:40 [Ian] - I send xhtml text to browser 20:19:54 [Ian] - I send as text/html, so no problem sending charset. 20:20:03 [Ian] - If I send as application/xhtml+xml .... 20:20:07 [Ian] RF: No charset param. 20:20:12 [Ian] TBray: Or does it per 3023? 20:20:45 [Ian] TBray: If there's no charset param on application/xhtml+xml, then I'm fine. 20:21:46 [DanC] s/ascii/iso-latin-1/ 20:21:52 [Ian] [Diff is between sending charset for text/html and application application/xhtml+xml 20:21:54 [Ian] ] 20:22:27 [Ian] TBray: I think we agree that sending xml as text/* is a likely source of difficulties. 20:22:42 [timbl] q+ 20:22:47 [Ian] TBray: I think we are saying that if we send as application/* that it's a bad idea to send charset. 20:23:07 [Stuart] ack DanC 20:23:07 [Zakim] DanC, you wanted to ask if we've told the WSDL WG what we want from them and to and to check if he understood Roy 20:23:11 [Ian] RF: I am for removing charset parameter for application/* 20:23:21 [Ian] RFC 3023 : 20:24:19 [Ian] TBray: We could ask authors of 3023 to update it per the changes we are asking for. 20:24:32 [Ian] TBray: Section 3.2 of 3023: optional charset param. "Strongly recommended" 20:25:20 [Ian] RF: Ask on www-tag if we should remove charset from application/* types 20:25:56 [Ian] RF: Meanwhile, don't require the server to make a judgment call on the content type. Server doesn't have a brain. 20:26:57 [Ian] TBray: I note that in draft finding we already grumble about what 3023 says 20:27:43 [Ian] RF: We can ask authors of 3023 on www-tag why those types have charset param. 20:28:09 [Ian] Action TB: Draft a Note to authors of RFC 3023 cc'ing www-tag about concerns regarding charset asking about chances of getting this fixed. 20:28:23 [Ian] q+ 20:29:19 [Ian] IJ: How does this affect this sentence: "For this reason, servers should only supply a character encoding 20:29:19 [Ian] header when there is complete certainty as to the encoding in use. 20:29:19 [Ian] " 20:30:07 [Ian] RF: If you keep the sentence, it should say that server software should only supply charset when there's complete certainty about the character encoding used within the body. 20:31:04 [Ian] Action RF: Propose alternative text to other points in RF's original email. 20:31:13 [Ian] RF: I'll do this today. 20:31:23 [Ian] ADJOURNED 20:31:25 [Zakim] -Ian 20:31:26 [Zakim] -DOrchard 20:31:26 [Zakim] -Tim_Bray 20:31:27 [Ian] RRSAgent, stop
http://www.w3.org/2003/09/15-tagmem-irc.html
CC-MAIN-2016-36
refinedweb
3,345
80.72
Performance Tips This chapter contains performance tips for DeepSee. For more information on performance and troubleshooting options, see the InterSystems Developer Community. Also see the section “Placing the DeepSee Globals in a Separate Database,” earlier in this book. Result Caching and Cube Updates For any cube that uses more than 64,000 records (by default), DeepSee maintains and uses a result cache. When you update a cube in any way, parts of the result cache are considered invalid and are cleared. The details depend upon options in the cube definition (see “Cache Buckets and Fact Order,” later in this chapter). Therefore, it is not generally desirable to update the cubes constantly. The result cache works as follows: Each time a user executes a query (via the Analyzer for example), DeepSee caches the results for that query. The next time any user runs that query, DeepSee checks to see if the cache is still valid. If so, DeepSee then uses the cached values. Otherwise, DeepSee re-executes the query, uses the new values, and caches the new values. The net effect is that performance improves over time as more users run more queries. Specifying the Agent Count DeepSee sets up a pool of agents that execute queries. This pool consists of a set of agents with high priority and the same number of agents with low priority. You can control the number of agents, which are also used when cubes are built. For details, see “Specifying the Agent Count” in the chapter “Compiling and Building Cubes” in Defining DeepSee Models. Cache Buckets and Fact Order As noted earlier, for large data sets, DeepSee maintains and uses a result cache. In this case, it can be useful to control the order of rows in the fact table, because this affects how DeepSee creates and uses the cache. To do this, you can specify the Initial build order option for the cube; see “Other Cube Options” in Defining DeepSee Models. When users evaluate pivot tables, DeepSee computes and caches aggregate values that it later reuses whenever possible. To determine whether DeepSee can reuse a cache, DeepSee uses the following logic: It examines the IDs of the records used in a given scenario (for example, for a given pivot table cell). It checks the buckets to which those IDs belong. A bucket is a large number of contiguous records in the fact table (details given later). If the bucket has been updated (because there was a change for at least one ID in the bucket), DeepSee discards any corresponding cache associated with that bucket and regenerates the result. If the bucket has not been updated, DeepSee reuses the appropriate cache (if available) or generates the result (if not). In some scenarios, changes to the source records (and the corresponding updates to any cubes) occur primarily in the most recent source records. In such scenarios, it is useful to make sure that you build the fact table in order by age of the records, with the oldest records first. This approach means that the caches for the older rows would not be made invalid by changes to the data. (In contrast, if the older rows and newer rows were mixed throughout the fact table, all the caches would potentially become invalid when changes occurred to newer records.) For more information, see “How the DeepSee Query Engine Works,” later in this book. Removing Inactive Cache Buckets When a cache bucket is invalidated (as described in the previous section), it is marked as inactive but is not removed. To remove the inactive cache buckets, call the %PurgeObsoleteCache() method of %DeepSee.Utils. For example: d ##class(%DeepSee.Utils).%PurgeObsoleteCache("patients") Precomputing Cube Cells As noted earlier, when users evaluate pivot tables, DeepSee computes and caches aggregate values that it later reuses whenever possible. This caching means that the more users work with DeepSee, the more quickly it runs. (For details, see “How the DeepSee Query Engine Works,” later in this book.) To speed up initial performance as well, you can precompute and cache specific aggregate values that are used in your pivot tables, especially wherever performance is a concern. The feature works as follows: Within the cube class, you specify an additional XData block (CellCache) that specifies cube cells that should be precomputed and cached. For details, see the first subsection. You programmatically precompute these cube cells by using a utility method. See the second subsection. You must do this after building the cube. A simpler option is to simply run any queries ahead of time (that is, before any users work with them). Defining the Cell Cache Your cube class can contain an additional XData block (CellCache) that specifies cube cells that can be precomputed and cached, which speeds up the initial performance of DeepSee. The following shows an example: /// This xml document defines aggregates to be precomputed. XData CellCache [ <group name= "BS"> <item> <element >[Measures].[Big Sale Count]</element > </item> </group> <group name= "G1"> <item> <element >[UnitsPerTransaction].[H1].[UnitsSold]</ element> <element >[Measures].[Amount Sold]</element > </item> <item> <fact >DxUnitsSold</fact > <element >[Measures].[Amount Sold]</element > </item> </group> </cellCache > } The <cellCache> element is as follows: It must be in the namespace "" It contains zero or more <group> elements. Each <group> element is as follows: It has a name attribute, which you use later when specifying which groups of cells to precompute. It contains one or more <item> elements. Each <item> element represents a combination of cube indices and corresponds to the information returned by %SHOWPLAN. An <item> element consists of one or more <element> elements. An <element> can include one or more of either of the following structures, in any combination: <fact>fact_table_field_name</fact> Or: <element>mdx_member_expression</element > Where: fact_table_field_name is the field name in the fact table for a level or measure, as given by the factName attribute for that level or measure. mdx_member_expression is an MDX expression that evaluates to a member. This can be either a member of a level or it can be a measure name (each measure is a member of the special MEASURES dimension). This expression cannot be a calculated member. Each group defines a set of intersections. The number of intersections in a group affects the processing speed when you precompute the cube cells. Precomputing the Cube Cells To precompute the aggregate values specified by a <group>, use the %ComputeAggregateGroup() method of %DeepSee.Utils. This method is as follows: classmethod %ComputeAggregateGroup(pCubeName As %String, pGroupName As %String, pVerbose As %Boolean = 1) as %Status Where pCubeName is the name of the cube, pGroupName is the name of the cube, and pVerbose specifies whether to write progress information while the method is running. For pGroupName, you can use "*" to precompute all groups for this cube. If you use this method, you must first build the cube. The method processes each group by looping over the fact table and computing the intersections defined by the items within the group. Processing is faster with fewer intersections in a group. The processing is single-threaded, which allows querying in the foreground.
https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=D2IMP_ch_perf
CC-MAIN-2021-17
refinedweb
1,176
62.48
-- | A monad transformer for the [partial] consumption of 'List's. -- The interface closely mimics iterators in languages such as Python. -- -- It is often nicer to avoid using Consumer and to use -- folds and higher-order functions instead. module Control.Monad.Consumer ( ConsumerT, evalConsumerT, next, consumeRestM ) where import Control.Applicative (Applicative(..)) import Control.Monad (MonadPlus(..), ap) import Control.Monad.ListT (ListT(..), ListItem(..)) import Control.Monad.Maybe (MaybeT(..)) import Control.Monad.State (StateT, evalStateT, get, put) import Control.Monad.Trans (MonadTrans(..), MonadIO(..)) import Data.List.Class (List(..)) import Data.Maybe (fromMaybe) -- | A monad tranformer for consuming 'List's. newtype ConsumerT v m a = ConsumerT { runConsumerT :: StateT (Maybe (ListT m v)) m a } instance Monad m => Functor (ConsumerT v m) where fmap f = ConsumerT . fmap f . runConsumerT instance Monad m => Monad (ConsumerT v m) where return = ConsumerT . return fail = ConsumerT . fail a >>= b = ConsumerT $ runConsumerT a >>= runConsumerT . b instance Monad m => Applicative (ConsumerT v m) where pure = return (<*>) = ap instance MonadTrans (ConsumerT v) where lift = ConsumerT . lift instance MonadIO m => MonadIO (ConsumerT v m) where liftIO = lift . liftIO -- | Consume a 'ListT' evalConsumerT :: List l => ConsumerT v (ItemM l) a -> l v -> ItemM l a evalConsumerT (ConsumerT i) = evalStateT i . Just . toListT -- Consumer no longer has a producer left... putNoProducer :: List l => StateT (Maybe (l v)) (ItemM l) () putNoProducer = put Nothing -- | Consume/get the next value next :: Monad m => ConsumerT v m (Maybe v) next = ConsumerT . runMaybeT $ do list <- MaybeT get item <- lift . lift $ runListT list case item of Nil -> do lift putNoProducer mzero Cons x xs -> do putProducer xs return x where putProducer = put . Just -- | Return an instance of the underlying monad that will use the given 'ConsumerT' to consume the remaining values. -- After this action there are no more items to consume (they belong to the given ConsumerT now) consumeRestM :: Monad m => ConsumerT a m b -> ConsumerT a m (m b) consumeRestM consume = ConsumerT $ do mRest <- get let rest = fromMaybe mzero mRest putNoProducer return $ evalConsumerT consume rest
http://hackage.haskell.org/package/generator-0.5.2/docs/src/Control-Monad-Consumer.html
CC-MAIN-2014-10
refinedweb
325
55.95
Linux Scheduling and Kernel Synchronization For more information on Linux, visit our Linux Reference Guide or sign up for our Linux Newsletter In this chapter 7.1 Linux Scheduler 7.2 Preemption 7.3 Spinlocks and Semaphores 7.4 System Clock: Of Time and Timers Summary Exercises The Linux kernel is a multitasking kernel, which means that many processes can run as if they were the only process on the system. The way in which an operating system chooses which process at a given time has access to a system’s CPU(s) is controlled by a scheduler. The scheduler is responsible for swapping CPU access between different processes and for choosing the order in which processes obtain CPU access. Linux, like most operating systems, triggers the scheduler by using a timer interrupt. When this timer goes off, the kernel needs to decide whether to yield the CPU to a process different than the current process and, if a yield occurs, which process gets the CPU next. The amount of time between the timer interrupt is called a timeslice. System processes tend to fall into two types: interactive and non-interactive. Interactive processes are heavily dependent upon I/O and, as a result, do not usually use their entire timeslice and, instead, yield the CPU to another process. Non-interactive processes are heavily dependent on the CPU and typically use most, if not all, of their timeslice. The scheduler has to balance the requirements of these two types of processes and attempt to ensure every process gets enough time to accomplish its task without detrimentally affecting the execution of other processes. Linux, like some schedulers, distinguishes between one more type of process: a real-time process. Real-time processes must execute in real time. Linux has support for real-time processes, but those exist outside of the scheduler logic. Put simply, the Linux scheduler treats any process marked as real-time as a higher priority than any other process. It is up to the developer of the real-time processes to ensure that these processes do not hog the CPU and eventually yield. Schedulers typically use some type of process queue to manage the execution of processes on the system. In Linux, this process queue is called the run queue. The run queue is described fully in Chapter 3, "Processes: The Principal Model of Execution,"1 but let’s recap some of the fundamentals here because of the close tie between the scheduler and the run queue. In Linux, the run queue is composed of two priority arrays: Active. Stores processes that have not yet used up their timeslice Expired. Stores processes that have used up their timeslice From a high level, the scheduler’s job in Linux is to take the highest priority active processes, let them use the CPU to execute, and place them in the expired array when they use up their timeslice. With this high-level framework in mind, let’s closely look at how the Linux scheduler operates. 7.1 Linux Scheduler The 2.6 Linux kernel introduces a completely new scheduler that’s commonly referred to as the O(1) scheduler. The scheduler can perform the scheduling of a task in constant time.2 Chapter 3 addressed the basic structure of the scheduler and how a newly created process is initialized for it. This section describes how a task is executed on a single CPU system. There are some mentions of code for scheduling across multiple CPU (SMP) systems but, in general, the same scheduling process applies across CPUs. We then describe how the scheduler switches out the currently running process, performing what is called a context switch, and then we touch on the other significant change in the 2.6 kernel: preemption. From a high level, the scheduler is simply a grouping of functions that operate on given data structures. Nearly all the code implementing the scheduler can be found in kernel/sched.c and include/linux/sched.h. One important point to mention early on is how the scheduler code uses the terms "task" and "process" interchangeably. Occasionally, code comments also use "thread" to refer to a task or process. A task, or process, in the scheduler is a collection of data structures and flow of control. The scheduler code also refers to a task_struct, which is a data structure the Linux kernel uses to keep track of processes.3 7.1.1 Choosing the Next Task After a process has been initialized and placed on a run queue, at some time, it should have access to the CPU to execute. The two functions that are responsible for passing CPU control to different processes are schedule() and scheduler_tick(). scheduler_tick() is a system timer that the kernel periodically calls and marks processes as needing rescheduling. When a timer event occurs, the current process is put on hold and the Linux kernel itself takes control of the CPU. When the timer event finishes, the Linux kernel normally passes control back to the process that was put on hold. However, when the held process has been marked as needing rescheduling, the kernel calls schedule() to choose which process to activate instead of the process that was executing before the kernel took control. The process that was executing before the kernel took control is called the current process. To make things slightly more complicated, in certain situations, the kernel can take control from the kernel; this is called kernel preemption. In the following sections, we assume that the scheduler decides which of two user space processes gains CPU control. Figure 7.1 illustrates how the CPU is passed among different processes as time progresses. We see that Process A has control of the CPU and is executing. The system timer scheduler_tick() goes off, takes control of the CPU from A, and marks A as needing rescheduling. The Linux kernel calls schedule(), which chooses Process B and the control of the CPU is given to B. Figure 7.1 Scheduling Processes Process B executes for a while and then voluntarily yields the CPU. This commonly occurs when a process waits on some resource. B calls schedule(), which chooses Process C to execute next. Process C executes until scheduler_tick() occurs, which does not mark C as needing rescheduling. This results in schedule() not being called and C regains control of the CPU. Process C yields by calling schedule(), which determines that Process A should gain control of the CPU and A starts to execute again. We first examine schedule(), which is how the Linux kernel decides which process to execute next, and then we examine scheduler_tick(), which is how the kernel determines which processes need to yield the CPU. The combined effects of these functions demonstrate the flow of control within the scheduler: –---------------------------------------------------------------------- kernel/sched.c 2184 asmlinkage void schedule(void) 2185 { 2186 long *switch_count; 2187 task_t *prev, *next; 2188 runqueue_t *rq; 2189 prio_array_t *array; 2190 struct list_head *queue; 2191 unsigned long long now; 2192 unsigned long run_time; 2193 int idx; 2194 2195 /* 2196 * Test if we are atomic. Since do_exit() needs to call into 2197 * schedule() atomically, we ignore that path for now. 2198 * Otherwise, whine if we are scheduling when we should not be. 2199 */ 2200 if (likely(!(current->state & (TASK_DEAD | TASK_ZOMBIE)))) { 2201 if (unlikely(in_atomic())) { 2202 printk(KERN_ERR "bad: scheduling while atomic!\n "); 2203 dump_stack(); 2204 } 2205 } 2206 2207 need_resched: 2208 preempt_disable(); 2209 prev = current; 2210 rq = this_rq(); 2211 2212 release_kernel_lock(prev); 2213 now = sched_clock(); 2214 if (likely(now - prev->timestamp < NS_MAX_SLEEP_AVG)) 2215 run_time = now - prev->timestamp; 2216 else 2217 run_time = NS_MAX_SLEEP_AVG; 2218 2219 /* 2220 * Tasks with interactive credits get charged less run_time 2221 * at high sleep_avg to delay them losing their interactive 2222 * status 2223 */ 2224 if (HIGH_CREDIT(prev)) 2225 run_time /= (CURRENT_BONUS(prev) ? : 1); ----------------------------------------------------------------------- Lines 2213–2218 We calculate the length of time for which the process on the scheduler has been active. If the process has been active for longer than the average maximum sleep time (NS_MAX_SLEEP_AVG), we set its runtime to the average maximum sleep time. This is what the Linux kernel code calls a timeslice in other sections of the code. A timeslice refers to both the amount of time between scheduler interrupts and the length of time a process has spent using the CPU. If a process exhausts its timeslice, the process expires and is no longer active. The timestamp is an absolute value that determines for how long a process has used the CPU. The scheduler uses timestamps to decrement the timeslice of processes that have been using the CPU. For example, suppose Process A has a timeslice of 50 clock cycles. It uses the CPU for 5 clock cycles and then yields the CPU to another process. The kernel uses the timestamp to determine that Process A has 45 cycles left on its timeslice. Lines 2224–2225 Interactive processes are processes that spend much of their time waiting for input. A good example of an interactive process is the keyboard controller—most of the time the controller is waiting for input, but when it has a task to do, the user expects it to occur at a high priority. Interactive processes, those that have an interactive credit of more than 100 (default value), get their effective run_time divided by (sleep_avg/ max_sleep_avg * MAX_BONUS(10)):4 –---------------------------------------------------------------------- kernel/sched.c 2226 2227 spin_lock_irq(&rq->lock); 2228 2229 /* 2230 * if entering off of a kernel preemption go straight 2231 * to picking the next task. 2232 */ 2233 switch_count = &prev->nivcsw; 2234 if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) { 2235 switch_count = &prev->nvcsw; 2236 if (unlikely((prev->state & TASK_INTERRUPTIBLE) && 2237 unlikely(signal_pending(prev)))) 2238 prev->state = TASK_RUNNING; 2239 else 2240 deactivate_task(prev, rq); 2241 } ----------------------------------------------------------------------- Line 2227 The function obtains the run queue lock because we’re going to modify it. Lines 2233–2241 If we have entered schedule() with the previous process being a kernel preemption, we leave the previous process running if a signal is pending. This means that the kernel has preempted normal processing in quick succession; thus, the code is contained in two unlikely() statements.5 If there is no further preemption, we remove the preempted process from the run queue and continue to choose the next process to run. –---------------------------------------------------------------------- kernel/sched.c 2243 cpu = smp_processor_id(); 2244 if (unlikely(!rq->nr_running)) { 2245 idle_balance(cpu, rq); 2246 if (!rq->nr_running) { 2247 next = rq->idle; 2248 rq->expired_timestamp = 0; 2249 wake_sleeping_dependent(cpu, rq); 2250 goto switch_tasks; 2251 } 2252 } 2253 2254 array = rq->active; 2255 if (unlikely(!array->nr_active)) { 2256 /* 2257 * Switch the active and expired arrays. 2258 */ 2259 rq->active = rq->expired; 2260 rq->expired = array; 2261 array = rq->active; 2262 rq->expired_timestamp = 0; 2263 rq->best_expired_prio = MAX_PRIO; 2264 } ----------------------------------------------------------------------- Line 2243 We grab the current CPU identifier via smp_processor_id(). Lines 2244–2252 If the run queue has no processes on it, we set the next process to the idle process and reset the run queue’s expired timestamp to 0. On a multiprocessor system, we first check if any processes are running on other CPUs that this CPU can take. In effect, we load balance idle processes across all CPUs in the system. Only if no processes can be moved from the other CPUs do we set the run queue’s next process to idle and reset the expired timestamp. Lines 2255–2264 If the run queue’s active array is empty, we switch the active and expired array pointers before choosing a new process to run. –---------------------------------------------------------------------- kernel/sched.c 2266 idx = sched_find_first_bit(array->bitmap); 2267 queue = array->queue + idx; 2268 next = list_entry(queue->next, task_t, run_list); 2269 2270 if (dependent_sleeper(cpu, rq, next)) { 2271 next = rq->idle; 2272 goto switch_tasks; 2273 } 2274 2275 if (!rt_task(next) && next->activated > 0) { 2276 unsigned long long delta = now - next->timestamp; 2277 2278 if (next->activated == 1) 2279 delta = delta * (ON_RUNQUEUE_WEIGHT * 128 / 100) / 128; 2280 2281 array = next->array; 2282 dequeue_task(next, array); 2283 recalc_task_prio(next, next->timestamp + delta); 2284 enqueue_task(next, array); 2285 } next->activated = 0; ----------------------------------------------------------------------- Lines 2266–2268 The scheduler finds the highest priority process to run via sched_find_first_bit() and then sets up queue to point to the list held in the priority array at the specified location. next is initialized to the first process in queue. Lines 2270–2273 If the process to be activated is dependent on a sibling that is sleeping, we choose a new process to be activated and jump to switch_tasks to continue the scheduling function. Suppose that we have Process A that spawned Process B to read from a device and that Process A was waiting for Process B to finish before continuing. If the scheduler chooses Process A for activation, this section of code, dependent_sleeper(), determines that Process A is waiting on Process B and chooses an entirely new process to activate. Lines 2275–2285 If the process’ activated attribute is greater than 0, and the next process is not a real-time task, we remove it from queue, recalculate its priority, and enqueue it again. Line 2286 We set the process’ activated attribute to 0, and then run with it. –---------------------------------------------------------------------- kernel/sched.c 2287 switch_tasks: 2288 prefetch(next); 2289 clear_tsk_need_resched(prev); 2290 RCU_qsctr(task_cpu(prev))++; 2291 2292 prev->sleep_avg -= run_time; 2293 if ((long)prev->sleep_avg <= 0) { 2294 prev->sleep_avg = 0; 2295 if (!(HIGH_CREDIT(prev) || LOW_CREDIT(prev))) 2296 prev->interactive_credit--; 2297 } 2298 prev->timestamp = now; 2299 2300 if (likely(prev != next)) { 2301 next->timestamp = now; 2302 rq->nr_switches++; 2303 rq->curr = next; 2304 ++*switch_count; 2305 2306 prepare_arch_switch(rq, next); 2307 prev = context_switch(rq, prev, next); 2308 barrier(); 2309 2310 finish_task_switch(prev); 2311 } else 2312 spin_unlock_irq(&rq->lock); 2313 2314 reacquire_kernel_lock(current); 2315 preempt_enable_no_resched(); 2316 if (test_thread_flag(TIF_NEED_RESCHED)) 2317 goto need_resched; 2318 } ----------------------------------------------------------------------- Line 2288 We attempt to get the memory of the new process’ task structure into the CPU’s L1 cache. (See include/linux/prefetch.h for more information.) Line 2290 Because we’re going through a context switch, we need to inform the current CPU that we’re doing so. This allows a multi-CPU device to ensure data that is shared across CPUs is accessed exclusively. This process is called read-copy updating. For more information, see. Lines 2292–2298 We decrement the previous process’ sleep_avg attribute by the amount of time it ran, adjusting for negative values. If the process is neither interactive nor non-interactive, its interactive credit is between high and low, so we decrement its interactive credit because it had a low sleep average. We update its timestamp to the current time. This operation helps the scheduler keep track of how much time a given process has spent using the CPU and estimate how much time it will use the CPU in the future. Lines 2300–2304 If we haven’t chosen the same process, we set the new process’ timestamp, increment the run queue counters, and set the current process to the new process. Lines 2306–2308 These lines describe the assembly language context_switch(). Hold on for a few paragraphs as we delve into the explanation of context switching in the next section. Lines 2314–2318 We reacquire the kernel lock, enable preemption, and see if we need to reschedule immediately; if so, we go back to the top of schedule(). It’s possible that after we perform the context_switch(), we need to reschedule. Perhaps scheduler_tick() has marked the new process as needing rescheduling or, when we enable preemption, it gets marked. We keep rescheduling processes (and context switching them) until one is found that doesn’t need rescheduling. The process that leaves schedule() becomes the new process executing on this CPU. 7.1.2 Context Switch Called from schedule() in /kernel/sched.c, context_switch() does the machine-specific work of switching the memory environment and the processor state. In the abstract, context_switch swaps the current task with the next task. The function context_switch() begins executing the next task and returns a pointer to the task structure of the task that was running before the call: –---------------------------------------------------------------------- kernel/sched.c 1048 /* 1049 * context_switch - switch to the new MM and the new 1050 * thread’s register state. 1051 */ 1052 static inline 1053 task_t * context_switch(runqueue_t *rq, task_t *prev, task_t *next) 1054 { 1055 struct mm_struct *mm = next->mm; 1056 struct mm_struct *oldmm = prev->active_mm; ... 1063 switch_mm(oldmm, mm, next); ... 1072 switch_to(prev, next, prev); 1073 1074 return prev; 1075 } ----------------------------------------------------------------------- Here, we describe the two jobs of context_switch: one to switch the virtual memory mapping and one to switch the task/thread structure. The first job, which the function switch_mm() carries out, uses many of the hardware-dependent memory management structures and registers: –---------------------------------------------------------------------- /include/asm-i386/mmu_context.h 026 static inline void switch_mm(struct mm_struct *prev, 027 struct mm_struct *next, 028 struct task_struct *tsk) 029 { 030 int cpu = smp_processor_id(); 031 032 if (likely(prev != next)) { 033 /* stop flush ipis for the previous mm */ 034 cpu_clear(cpu, prev->cpu_vm_mask); 035 #ifdef CONFIG_SMP 036 cpu_tlbstate[cpu].state = TLBSTATE_OK; 037 cpu_tlbstate[cpu].active_mm = next; 038 #endif 039 cpu_set(cpu, next->cpu_vm_mask); 040 041 /* Re-load page tables */ 042 load_cr3(next->pgd); 043 044 /* 045 * load the LDT, if the LDT is different: 046 */ 047 if (unlikely(prev->context.ldt != next->context.ldt)) 048 load_LDT_nolock(&next->context, cpu); 049 } 050 #ifdef CONFIG_SMP 051 else { ----------------------------------------------------------------------- Line 39 Bind the new task to the current processor. Line 42 The code for switching the memory context utilizes the x86 hardware register cr3, which holds the base address of all paging operations for a given process. The new page global descriptor is loaded here from next->pgd. Line 47 Most processes share the same LDT. If another LDT is required by this process, it is loaded here from the new next->context structure. The other half of function context_switch() in /kernel/sched.c then calls the macro switch_to(), which calls the C function __switch_to(). The delineation of architecture independence to architecture dependence for both x86 and PPC is the switch_to() macro. 7.1.2.1 Following the x86 Trail of switch_to() The x86 code is more compact than PPC. The following is the architecture-dependent code for __switch_to(). task_struct (not thread_struct) is passed to __switch_to(). The code discussed next is inline assembler code for calling the C function __switch_to() (line 23) with the proper task_struct structures as parameters. The context_switch takes three task pointers: prev, next, and last. In addition, there is the current pointer. Let us now explain, at a high level, what occurs when switch_to() is called and how the task pointers change after a call to switch_to(). Figure 7.2 shows three switch_to() calls using three processes: A, B, and C. Figure 7.2 switch_to Calls We want to switch A and B. Before, the first call we have Current → A Prev → A, next → B After the first call: Current → B Last → A Now, we want to switch B and C. Before the second call, we have Current → B Prev → B, next → C After the second call: Current → C Last → B Returning from the second call, current now points to task (C) and last points to (B). The method continues with task (A) being swapped in once again, and so on. The inline assembly of the switch_to() function is an excellent example of assembly magic in the kernel. It is also a good example of the gcc C extensions. See Chapter 2, "Exploration Toolkit," for a tutorial featuring this function. Now, we carefully walk through this code block. –---------------------------------------------------------------------- ) ----------------------------------------------------------------------- Line 12 The FASTCALL macro resolves to __attribute__ regparm(3), which forces the parameters to be passed in registers rather than stack. Lines 15–16 The do {} while (0) construct allows (among other things) the macro to have local the variables esi and edi. Remember, these are just local variables with familiar names. Lines 17 and 30 The construct asm volatile ()6 encloses the inline assembly block and the volatile keyword assures that the compiler will not change (optimize) the routine in any way. Lines 17–18 Push the flags and ebp registers onto the stack. (Note: We are still using the stack associated with the prev task.) Line 19 This line saves the current stack pointer esp to the prev task structure. Line 20 Move the stack pointer from the next task structure to the current processor esp. We are now with a new kernel stack and thus, any reference to current is to the new (next) task structure. Line 21 Save the return address for prev into its task structure. This is where the prev task resumes when it is restarted. Line 22 Push the return address (from when we return from __switch_to()) onto the stack. This is the eip from next. The eip was saved into its task structure (on line 21) when it was stopped, or preempted the last time. Line 23 The next thread structure with the kernel stack pointer Thread local storage descriptor for this processor fs and gs for prev and next, if needed Debug registers, if needed I/O bitmaps, if needed __switch_to() then returns the updated prev task structure. Lines 24–25 Pop the base pointer and flags registers from the new (next task) kernel stack. Lines 26–29 These are the output and input parameters to the inline assembly routine. See the "Inline Assembly" section in Chapter 2 for more information on the constraints put on these parameters. Line 29 By way of assembler magic, prev is returned in eax, which is the third positional parameter. In other words, the input parameter prev is passed out of the switch_to() macro as the output parameter last. Because switch_to() is a macro, it was executed inline with the code that called it in context_switch(). It does not return as functions normally do. For the sake of clarity, remember that switch_to() passes back prev in the eax register, execution then continues in context_switch(), where the next instruction is return prev (line 1074 of kernel/sched.c). This allows context_switch() to pass back a pointer to the last task running. 7.1.2.2 Following the PPC context_switch() The PPC code for context_switch() has slightly more work to do for the same results. Unlike the cr3 register in x86 architecture, the PPC uses hash functions to point to context environments. The following code for switch_mm() touches on these functions, but Chapter 4, "Memory Management," offers a deeper discussion. Here is the routine for switch_mm() which, in turn, calls the routine set_context(). –---------------------------------------------------------------------- /include/asm-ppc/mmu_context.h 155 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,struct task_struct *tsk) 156 { 157 tsk->thread.pgdir = next->pgd; 158 get_mmu_context(next); 159 set_context(next->context, next->pgd); 160 } ----------------------------------------------------------------------- Line 157 The page global directory (segment register) for the new thread is made to point to the next->pgd pointer. Line 158 The context field of the mm_struct (next->context) passed into switch_mm() is updated to the value of the appropriate context. This information comes from a global reference to the variable context_map[], which contains a series of bitmap fields. Line 159 This is the call to the assembly routine set_context. Below is the code and discussion of this routine. Upon execution of the blr instruction on line 1468, the code returns to the switch_mm routine. –---------------------------------------------------------------------- /arch/ppc/kernel/head.S 1437 _GLOBAL(set_context) 1438 mulli r3,r3,897 /* multiply context by skew factor */ 1439 rlwinm r3,r3,4,8,27 /* VSID = (context & 0xfffff) << 4 */ 1440 addis r3,r3,0x6000 /* Set Ks, Ku bits */ 1441 li r0,NUM_USER_SEGMENTS 1442 mtctr r0 ... 1457 3: isync ... 1461 mtsrin r3,r4 1462 addi r3,r3,0x111 /* next VSID */ 1463 rlwinm r3,r3,0,8,3 /* clear out any overflow from VSID field */ 1464 addis r4,r4,0x1000 /* address of next segment */ 1465 bdnz 3b 1466 sync 1467 isync 1468 blr ------------------------------------------------------------------------ Lines 1437–1440 The context field of the mm_struct (next->context) passed into set_context() by way of r3, sets up the hash function for PPC segmentation. Lines 1461–1465 The pgd field of the mm_struct (next->pgd) passed into set_context() by way of r4, points to the segment registers. Segmentation is the basis of PPC memory management (refer to Chapter 4). Upon returning from set_context(), the mm_struct next is initialized to the proper memory regions and is returned to switch_mm(). 7.1.2.3 Following the PPC Trail of switch_to() The result of the PPC implementation of switch_to() is necessarily identical to the x86 call; it takes in the current and next task pointers and returns a pointer to the previously running task: –---------------------------------------------------------------------- include/asm-ppc/system.h 88 extern struct task_struct *__switch_to(struct task_struct *, 89 struct task_struct *); 90 #define switch_to(prev, next, last) ((last) = __switch_to((prev), (next))) 91 92 struct thread_struct; 93 extern struct task_struct *_switch(struct thread_struct *prev, 94 struct thread_struct *next); ----------------------------------------------------------------------- On line 88, __switch_to() takes its parameters as task_struct type and, at line 93, _switch() takes its parameters as thread_struct. This is because the thread entry within task_struct contains the architecture-dependent processor register information of interest for the given thread. Now, let us examine the implementation of __switch_to(): –---------------------------------------------------------------------- /arch/ppc/kernel/process.c 200 struct task_struct *__switch_to(struct task_struct *prev, struct task_struct *new) 201 { 202 struct thread_struct *new_thread, *old_thread; 203 unsigned long s; 204 struct task_struct *last; 205 local_irq_save(s); ... 247 new_thread = &new->thread; 248 old_thread = ¤t->thread; 249 last = _switch(old_thread, new_thread); 250 local_irq_restore(s); 251 return last; 252 } ----------------------------------------------------------------------- Line 205 Disable interrupts before the context switch. Lines 247–248 Still running under the context of the old thread, pass the pointers to the thread structure to the _switch() function. Line 249 _switch() is the assembly routine called to do the work of switching the two thread structures (see the following section). Line 250 Enable interrupts after the context switch. To better understand what needs to be swapped within a PPC thread, we need to examine the thread_struct passed in on line 249. Recall from the exploration of the x86 context switch that the switch does not officially occur until we are pointing to a new kernel stack. This happens in _switch(). Tracing the PPC Code for _switch() By convention, the parameters of a PPC C function (from left to right) are held in r3, r4, r5, ...r12. Upon entry into switch(), r3 points to the thread_struct for the current task and r4 points to the thread_struct for the new task: –---------------------------------------------------------------------- /arch/ppc/kernel/entry.S 437 _GLOBAL(_switch) 438 stwu r1,-INT_FRAME_SIZE(r1) 439 mflr r0 440 stw r0,INT_FRAME_SIZE+4(r1) 441 /* r3-r12 are caller saved -- Cort */ 442 SAVE_NVGPRS(r1) 443 stw r0,_NIP(r1) /* Return to switch caller */ 444 mfmsr r11 ... 458 1: stw r11,_MSR(r1) 459 mfcr r10 460 stw r10,_CCR(r1) 461 stw r1,KSP(r3) /* Set old stack pointer */ 462 463 tophys(r0,r4) 464 CLR_TOP32(r0) 465 mtspr SPRG3,r0/* Update current THREAD phys addr */ 466 lwz r1,KSP(r4) /* Load new stack pointer */ 467 /* save the old current ’last’ for return value */ 468 mr r3,r2 469 addi r2,r4,-THREAD /* Update current */ ... 478 lwz r0,_CCR(r1) 479 mtcrf 0xFF,r0 480 REST_NVGPRS(r1) 481 482 lwz r4,_NIP(r1) /* Return to _switch caller in new task */ 483 mtlr r4 484 addi r1,r1,INT_FRAME_SIZE 485 blr ----------------------------------------------------------------------- The byte-for-byte mechanics of swapping out the previous thread_struct for the new is left as an exercise for you. It is worth noting, however, the use of r1, r2, r3, SPRG3, and r4 in _switch() to see the basics of this operation. Lines 438–460 The environment is saved to the current stack with respect to the current stack pointer, r1. Line 461 The entire environment is then saved into the current thread_struct pointer passed in by way of r3. Lines 463–465 SPRG3 is updated to point to the thread structure of the new task. Line 466 KSP is the offset into the task structure (r4) of the new task’s kernel stack pointer. The stack pointer r1 is now updated with this value. (This is the point of the PPC context switch.) Line 468 The current pointer to the previous task is returned from _switch() in r3. This represents the last task. Line 469 The current pointer (r2) is updated with the pointer to the new task structure (r4). Lines 478–486 Restore the rest of the environment from the new stack and return to the caller with the previous task structure in r3. This concludes the explanation of context_switch(). At this point, the processor has swapped the two processes prev and next as called by context_switch in schedule(). –---------------------------------------------------------------------- kernel/sched.c 1709 prev = context_switch(rq, prev, next); ----------------------------------------------------------------------- prev now points to the process that we have just switched away from and next points to the current process. Now that we’ve discussed how tasks are scheduled in the Linux kernel, we can examine how tasks are told to be scheduled. Namely, what causes schedule() to be called and one process to yield the CPU to another process? 7.1.3 Yielding the CPU Processes can voluntarily yield the CPU by simply calling schedule(). This is most commonly used in kernel code and device drivers that want to sleep or wait for a signal to occur.7 Other tasks want to continually use the CPU and the system timer must tell them to yield. The Linux kernel periodically seizes the CPU, in so doing stopping the active process, and then does a number of timer-based tasks. One of these tasks, scheduler_tick(), is how the kernel forces a process to yield. If a process has been running for too long, the kernel does not return control to that process and instead chooses another one. We now examine how scheduler_tick()determines if the current process must yield the CPU: –---------------------------------------------------------------------- kernel/sched.c 1981 void scheduler_tick(int user_ticks, int sys_ticks) 1982 { 1983 int cpu = smp_processor_id(); 1984 struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat; 1985 runqueue_t *rq = this_rq(); 1986 task_t *p = current; 1987 1988 rq->timestamp_last_tick = sched_clock(); 1989 1990 if (rcu_pending(cpu)) 1991 rcu_check_callbacks(cpu, user_ticks); ----------------------------------------------------------------------- Lines 1981–1986 This code block initializes the data structures that the scheduler_tick() function needs. cpu, cpu_usage_stat, and rq are set to the processor ID, CPU stats and run queue of the current processor. p is a pointer to the current process executing on cpu. Line 1988 The run queue’s last tick is set to the current time in nanoseconds. Lines 1990–1991 On an SMP system, we need to check if there are any outstanding read-copy updates to perform (RCU). If so, we perform them via rcu_check_callback(). –---------------------------------------------------------------------- kernel/sched.c 1993 /* note: this timer irq context must be accounted for as well */ 1994 if (hardirq_count() - HARDIRQ_OFFSET) { 1995 cpustat->irq += sys_ticks; 1996 sys_ticks = 0; 1997 } else if (softirq_count()) { 1998 cpustat->softirq += sys_ticks; 1999 sys_ticks = 0; 2000 } 2001 2002 if (p == rq->idle) { 2003 if (atomic_read(&rq->nr_iowait) > 0) 2004 cpustat->iowait += sys_ticks; 2005 else 2006 cpustat->idle += sys_ticks; 2007 if (wake_priority_sleeper(rq)) 2008 goto out; 2009 rebalance_tick(cpu, rq, IDLE); 2010 return; 2011 } 2012 if (TASK_NICE(p) > 0) 2013 cpustat->nice += user_ticks; 2014 else 2015 cpustat->user += user_ticks; 2016 cpustat->system += sys_ticks; ----------------------------------------------------------------------- Lines 1994–2000 cpustat keeps track of kernel statistics, and we update the hardware and software interrupt statistics by the number of system ticks that have occurred. Lines 2002–2011 If there is no currently running process, we atomically check if any processes are waiting on I/O. If so, the CPU I/O wait statistic is incremented; otherwise, the CPU idle statistic is incremented. In a uniprocessor system, rebalance_tick() does nothing, but on a multiple processor system, rebalance_tick() attempts to load balance the current CPU because the CPU has nothing to do. Lines 2012–2016 More CPU statistics are gathered in this code block. If the current process was niced, we increment the CPU nice counter; otherwise, the user tick counter is incremented. Finally, we increment the CPU’s system tick counter. –---------------------------------------------------------------------- kernel/sched.c 2019 if (p->array != rq->active) { 2020 set_tsk_need_resched(p); 2021 goto out; 2022 } 2023 spin_lock(&rq->lock); ----------------------------------------------------------------------- Lines 2019–2022 Here, we see why we store a pointer to a priority array within the task_struct of the process. The scheduler checks the current process to see if it is no longer active. If the process has expired, the scheduler sets the process’ rescheduling flag and jumps to the end of the scheduler_tick() function. At that point (lines 2092–2093), the scheduler attempts to load balance the CPU because there is no active task yet. This case occurs when the scheduler grabbed CPU control before the current process was able to schedule itself or clean up from a successful run. Line 2023 At this point, we know that the current process was running and not expired or nonexistent. The scheduler now wants to yield CPU control to another process; the first thing it must do is take the run queue lock. –---------------------------------------------------------------------- kernel/sched.c 2024 /* 2025 * The task was running during this tick - update the 2026 * time slice counter. Note: we do not update a thread’s 2027 * priority until it either goes to sleep or uses up its 2028 * timeslice. This makes it possible for interactive tasks 2029 * to use up their timeslices at their highest priority levels. 2030 */ 2031 if (unlikely(rt_task(p))) { 2032 /* 2033 * RR tasks need a special form of timeslice management. 2034 * FIFO tasks have no timeslices. 2035 */ 2036 if ((p->policy == SCHED_RR) && !--p->time_slice) { 2037 p->time_slice = task_timeslice(p); 2038 p->first_time_slice = 0; 2039 set_tsk_need_resched(p); 2040 2041 /* put it at the end of the queue: */ 2042 dequeue_task(p, rq->active); 2043 enqueue_task(p, rq->active); 2044 } 2045 goto out_unlock; 2046 } ----------------------------------------------------------------------- Lines 2031–2046 The easiest case for the scheduler occurs when the current process is a real-time task. Real-time tasks always have a higher priority than any other tasks. If the task is a FIFO task and was running, it should continue its operation so we jump to the end of the function and release the run queue lock. If the current process is a round-robin real-time task, we decrement its timeslice. If the task has no more timeslice, it’s time to schedule another round-robin real-time task. The current task has its new timeslice calculated by task_timeslice(). Then the task has its first timeslice reset. The task is then marked as needing rescheduling and, finally, the task is put at the end of the round-robin real-time tasklist by removing it from the run queue’s active array and adding it back in. The scheduler then jumps to the end of the function and releases the run queue lock. –---------------------------------------------------------------------- kernel/sched.c 2047 if (!--p->time_slice) { 2048 dequeue_task(p, rq->active); 2049 set_tsk_need_resched(p); 2050 p->prio = effective_prio(p); 2051 p->time_slice = task_timeslice(p); 2052 p->first_time_slice = 0; 2053 2054 if (!rq->expired_timestamp) 2055 rq->expired_timestamp = jiffies; 2056 if (!TASK_INTERACTIVE(p) || EXPIRED_STARVING(rq)) { 2057 enqueue_task(p, rq->expired); 2058 if (p->static_prio < rq->best_expired_prio) 2059 rq->best_expired_prio = p->static_prio; 2060 } else 2061 enqueue_task(p, rq->active); 2062 } else { ----------------------------------------------------------------------- Lines 2047–2061 At this point, the scheduler knows that the current process is not a real-time process. It decrements the process’ timeslice and, in this section, the process’ timeslice has been exhausted and reached 0. The scheduler removes the task from the active array and sets the process’ rescheduling flag. The priority of the task is recalculated and its timeslice is reset. Both of these operations take into account prior process activity.8 If the run queue’s expired timestamp is 0, which usually occurs when there are no more processes on the run queue’s active array, we set it to jiffies. We normally favor interactive tasks by replacing them on the active priority array of the run queue; this is the else clause on line 2060. However, we don’t want to starve expired tasks. To determine if expired tasks have been waiting too long for CPU time, we use EXPIRED_STARVING() (see EXPIRED_STARVING on line 1968). The function returns true if the first expired task has been waiting an "unreasonable" amount of time or if the expired array contains a task that has a greater priority than the current process. The unreasonableness of waiting is load-dependent and the swapping of the active and expired arrays decrease with an increasing number of running tasks. If the task is not interactive or expired tasks are starving, the scheduler takes the current process and enqueues it onto the run queue’s expired priority array. If the current process’ static priority is higher than the expired run queue’s highest priority task, we update the run queue to reflect the fact that the expired array now has a higher priority than before. (Remember that high-priority tasks have low numbers in Linux, thus, the (<) in the code.) –---------------------------------------------------------------------- kernel/sched.c 2062 } else { 2063 /* 2064 * Prevent a too long timeslice allowing a task to monopolize 2065 * the CPU. We do this by splitting up the timeslice into 2066 * smaller pieces. 2067 * 2068 * Note: this does not mean the task’s timeslices expire or 2069 * get lost in any way, they just might be preempted by 2070 * another task of equal priority. (one with higher 2071 * priority would have preempted this task already.) We 2072 * requeue this task to the end of the list on this priority 2073 * level, which is in essence a round-robin of tasks with 2074 * equal priority. 2075 * 2076 * This only applies to tasks in the interactive 2077 * delta range with at least TIMESLICE_GRANULARITY to requeue. 2078 */ 2079 if (TASK_INTERACTIVE(p) && !((task_timeslice(p) - 2080 p->time_slice) % TIMESLICE_GRANULARITY(p)) && 2081 (p->time_slice >= TIMESLICE_GRANULARITY(p)) && 2082 (p->array == rq->active)) { 2083 2084 dequeue_task(p, rq->active); 2085 set_tsk_need_resched(p); 2086 p->prio = effective_prio(p); 2087 enqueue_task(p, rq->active); 2088 } 2089 } 2090 out_unlock: 2091 spin_unlock(&rq->lock); 2092 out: 2093 rebalance_tick(cpu, rq, NOT_IDLE); 2094 } ----------------------------------------------------------------------- Lines 2079–2089 The final case before the scheduler is that the current process was running and still has timeslices left to run. The scheduler needs to ensure that a process with a large timeslice doesn’t hog the CPU. If the task is interactive, has more timeslices than TIMESLICE_GRANULARITY, and was active, the scheduler removes it from the active queue. The task then has its reschedule flag set, its priority recalculated, and is placed back on the run queue’s active array. This ensures that a process at a certain priority with a large timeslice doesn’t starve another process of an equal priority. Lines 2090–2094 The scheduler has finished rearranging the run queue and unlocks it; if executing on an SMP system, it attempts to load balance. Combining how processes are marked to be rescheduled, via scheduler_tick() and how processes are scheduled, via schedule() illustrates how the scheduler operates in the 2.6 Linux kernel. We now delve into the details of what the scheduler means by "priority." 7.1.3.1 Dynamic Priority Calculation In previous sections, we glossed over the specifics of how a task’s dynamic priority is calculated. The priority of a task is based on its prior behavior, as well as its user-specified nice value. The function that determines a task’s new dynamic priority is recalc_task_prio(): –---------------------------------------------------------------------- kernel/sched.c 381 static void recalc_task_prio(task_t *p, unsigned long long now) 382 { 383 unsigned long long __sleep_time = now - p->timestamp; 384 unsigned long sleep_time; 385 386 if (__sleep_time > NS_MAX_SLEEP_AVG) 387 sleep_time = NS_MAX_SLEEP_AVG; 388 else 389 sleep_time = (unsigned long)__sleep_time; 390 391 if (likely(sleep_time > 0)) { 392 /* 393 * User tasks that sleep a long time are categorised as 394 * idle and will get just interactive status to stay active & 395 * prevent them suddenly becoming cpu hogs and starving 396 * other processes. 397 */ 398 if (p->mm && p->activated != -1 && 399 sleep_time > INTERACTIVE_SLEEP(p)) { 400 p->sleep_avg = JIFFIES_TO_NS(MAX_SLEEP_AVG - 401 AVG_TIMESLICE); 402 if (!HIGH_CREDIT(p)) 403 p->interactive_credit++; 404 } else { 405 /* 406 * The lower the sleep avg a task has the more 407 * rapidly it will rise with sleep time. 408 */ 409 sleep_time *= (MAX_BONUS - CURRENT_BONUS(p)) ? : 1; 410 411 /* 412 * Tasks with low interactive_credit are limited to 413 * one timeslice worth of sleep avg bonus. 414 */ 415 if (LOW_CREDIT(p) && 416 sleep_time > JIFFIES_TO_NS(task_timeslice(p))) 417 sleep_time = JIFFIES_TO_NS(task_timeslice(p)); 418 419 /* 420 * Non high_credit tasks waking from uninterruptible 421 * sleep are limited in their sleep_avg rise as they 422 * are likely to be cpu hogs waiting on I/O 423 */ 424 if (p->activated == -1 && !HIGH_CREDIT(p) && p->mm) { 425 if (p->sleep_avg >= INTERACTIVE_SLEEP(p)) 426 sleep_time = 0; 427 else if (p->sleep_avg + sleep_time >= 428 INTERACTIVE_SLEEP(p)) { 429 p->sleep_avg = INTERACTIVE_SLEEP(p); 430 sleep_time = 0; 431 } 432 } 433 434 /* 435 * This code gives a bonus to interactive tasks. 436 * 437 * The boost works by updating the ’average sleep time’ 438 * value here, based on ->timestamp. The more time a 439 * task spends sleeping, the higher the average gets - 440 * and the higher the priority boost gets as well. 441 */ 442 p->sleep_avg += sleep_time; 443 444 if (p->sleep_avg > NS_MAX_SLEEP_AVG) { 445 p->sleep_avg = NS_MAX_SLEEP_AVG; 446 if (!HIGH_CREDIT(p)) 447 p->interactive_credit++; 448 } 449 } 450 } 452 452 p->prio = effective_prio(p); 453 } ----------------------------------------------------------------------- Lines 386–389 Based on the time now, we calculate the length of time the process p has slept for and assign it to sleep_time with a maximum value of NS_MAX_SLEEP_AVG. (NS_MAX_SLEEP_AVG defaults to 10 milliseconds.) Lines 391–404 If process p has slept, we first check to see if it has slept enough to be classified as an interactive task. If it has, when sleep_time > INTERACTIVE_SLEEP(p), we adjust the process’ sleep average to a set value and, if p isn’t classified as interactive yet, we increment p’s interactive_credit. Lines 405–410 A task with a low sleep average gets a higher sleep time. Lines 411–418 If the task is CPU intensive, and thus classified as non-interactive, we restrict the process to having, at most, one more timeslice worth of a sleep average bonus. Lines 419–432 Tasks that are not yet classified as interactive (not HIGH_CREDIT) that awake from uninterruptible sleep are restricted to having a sleep average of INTERACTIVE(). Lines 434–450 We add our newly calculated sleep_time to the process’ sleep average, ensuring it doesn’t go over NS_MAX_SLEEP_AVG. If the processes are not considered interactive but have slept for the maximum time or longer, we increment its interactive credit. Line 452 Finally, the priority is set using effective_prio(), which takes into account the newly calculated sleep_avg field of p. It does this by scaling the sleep average of 0 .. MAX_SLEEP_AVG into the range of -5 to +5. Thus, a process that has a static priority of 70 can have a dynamic priority between 65 and 85, depending on its prior behavior. One final thing: A process that is not a real-time process has a range between 101 and 140. Processes that are operating at a very high priority, 105 or less, cannot cross the real-time boundary. Thus, a high priority, highly interactive process could never have a dynamic priority of lower than 101. (Real-time processes cover 0..100 in the default configuration.) 7.1.3.2 Deactivation We already discussed how a task gets inserted into the scheduler by forking and how tasks move from the active to expired priority arrays within the CPU’s run queue. But, how does a task ever get removed from a run queue? A task can be removed from the run queue in two major ways: The task is preempted by the kernel and its state is not running, and there is no signal pending for the task (see line 2240 in kernel/sched.c). On SMP machines, the task can be removed from a run queue and placed on another run queue (see line 3384 in kernel/sched.c). The first case normally occurs when schedule() gets called after a process puts itself to sleep on a wait queue. The task marks itself as non-running (TASK_INTERRUPTIBLE, TASK_UNINTERRUPTIBLE, TASK_STOPPED, and so on) and the kernel no longer considers it for CPU access by removing it from the run queue. The case in which the process is moved to another run queue is dealt with in the SMP section of the Linux kernel, which we do not explore here. We now trace how a process is removed from the run queue via deactivate_task(): –---------------------------------------------------------------------- kernel/sched.c 507 static void deactivate_task(struct task_struct *p, runqueue_t *rq) 508 { 509 rq->nr_running--; 510 if (p->state == TASK_UNINTERRUPTIBLE) 511 rq->nr_uninterruptible++; 512 dequeue_task(p, p->array); 513 p->array = NULL; 514 } ----------------------------------------------------------------------- Line 509 The scheduler first decrements its count of running processes because p is no longer running. Lines 510–511 If the task is uninterruptible, we increment the count of uninterruptible tasks on the run queue. The corresponding decrement operation occurs when an uninterruptible process wakes up (see kernel/sched.c line 824 in the function try_to_wake_up()). Line 512–513 Our run queue statistics are now updated so we actually remove the process from the run queue. The kernel uses the p->array field to test if a process is running and on a run queue. Because it no longer is either, we set it to NULL. There is still some run queue management to be done; let’s examine the specifics of dequeue_task(): –---------------------------------------------------------------------- kernel/sched.c 303 static void dequeue_task(struct task_struct *p, prio_array_t *array) 304 { 305 array->nr_active--; 306 list_del(&p->run_list); 307 if (list_empty(array->queue + p->prio)) 308 __clear_bit(p->prio, array->bitmap); 309 } ----------------------------------------------------------------------- Line 305 We adjust the number of active tasks on the priority array that process p is on—either the expired or the active array. Lines 306–308 We remove the process from the list of processes in the priority array at p’s priority. If the resulting list is empty, we need to clear the bit in the priority array’s bitmap to show there are no longer any processes at priority p->prio(). list_del() does all the removal in one step because p->run_list is a list_head structure and thus has pointers to the previous and next entries in the list. We have reached the point where the process is removed from the run queue and has thus been completely deactivated. If this process had a state of TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE, it could be awoken and placed back on a run queue. If the process had a state of TASK_STOPPED, TASK_ZOMBIE, or TASK_DEAD, it has all of its structures removed and
http://www.informit.com/articles/article.aspx?p=414983&amp;seqNum=6
CC-MAIN-2016-50
refinedweb
7,760
60.04
30 April 2013 15:53 [Source: ICIS news] (Clarifies and recasts seventh paragraph) HOUSTON (ICIS)-. But the question in ?xml:namespace> "Solution SBR has been around for a long time," said James L McGraw, managing director of the International Institute for Synthetic Rubber Producers (IISRP). "Historically, it has been used in very high performance tyres." But that is starting to change as European countries, In the near term, demand for US BD is actually expected to go down slightly because US tyre producers have lost so much of the low-end replacement-tyre market to Asian imports. But the ." "My thesis is that there's been a tonne of new capacity added in Some market sources think it's too early to forecast what US BD demand will look like in a few years. "There are a lot of 'ifs' in all of the scenarios that look at BD demand in "This is nothing new," said another market source. "We've seen the same thing happen in One market source said that US BD producers have nothing to worry about. Not only will US tyre makers gain back share in the replacement-tyre market, but " Indeed, several market sources said the But there's just one problem with the scenario in which In January, Michelin began rolling tires out of its new $1.50bn (€1.14bn) factory
http://www.icis.com/Articles/2013/04/30/9663891/global-trend-toward-s-sbr-brings-uncertainty-to-bd-markets.html
CC-MAIN-2014-41
refinedweb
227
64.04
Java, J2EE & SOA Certification Training - 37k Enrolled Learners - Weekend - Live Class Advent of Java took the programming world by storm and the major reason for that is the number features it brought along. In this article we would be discussing Constructor Overloading in Java. Following pointers will be discussed in this article, So let us get started then, A constructor is a block of code used to create object of a class. Every class has a constructor, be it normal class or abstract class. A constructor is just like a method but without return type. When there is no any constructor defined for a class, a default constructor is created by compiler. Example public class Student{ //no constructor private String name; private int age; private String std; // getters and setters public void display(){ System.out.println(this.getName() + " " + this.getAge() + " " + this.getStd()); } public static void main(String args[]){ // to use display method of Student class, create object of Student Student student = new Student(); // as we have not defined any constructor, compiler creates default constructor. so that student.display(); } } In above program, default constructor is created by compiler so that object is created. It is a must to have constructor. This brings us to the next it of this article on Constructor overloading In Java. In above example Student object can be created with default constructor only. Where all other attributes of student is not initialized. But there can be certain other constructors, which is used to initialize the state of an object. for e.g – public class Student{ //attributes String name; int age; String std; //Constructors public Student(String name){ // Constructor 1 this.name = name; } public Student(String name, String std){ // Constructor 2 this.name = name; this.std = std; } public Student(String name, String std, int age){ // Constructor 3 this.name = name; this.std = std; this.age = age; } public void display(){ System.out.println(this.getName() + " " + this.getAge() + " " + this.getStd()); } public static void main(String args[]){ Student stu1 = new Student("ABC"); stu1.display(); Student stu2 = new Student("DEF", "5-C"); stu2.display(); Student stu3 = new Student("GHI", "6-C", 12); stu3.display(); } } This brings us to the next it of this article on Constructor overloading In Java. this() reference can be used inside parameterized constructor to call default constructor implicitily. Please note, this() should be the first statement inside a constructor. Example public Student(){} // Constructor 4 public Student(String name, String std, int age){ // Constructor 3 this(); // will call the default constructor. *If it is not the first statement of constructor, ERROR will occur* this.name = name; this.std = std; this.age = age; Note Thus we have come to an end of this article on ‘Constructor overloading.
https://www.edureka.co/blog/constructor-overloading-in-java/
CC-MAIN-2020-16
refinedweb
447
58.89
Let’s face it: JavaScript objects sometimes can be both awesome and scary. They are super flexible, but they often lead to a big problem. Let’s take the following example and pretend that this data comes from a database: { user: { name: "John", surname: "Doe", birthday: "1995-01-29", contacts: { email: "foo@bar.com", phone: "000 00000" }, laguages: ["english", "italian"] } } We may want to access the user phone number… but what if he hasn’t compiled it yet? What if the object above has changed for some reason, and we don’t have the contatcs object anymore? const userData = { user: { name: "John", surname: "Doe", birthday: "1995-01-29", laguages: ["english", "italian"] } } const userPhone = userData.user.contacts.phone; // => Uncaught TypeError: Cannot read property 'phone' of undefined Holy guacamole! That is such a frequent error! And how many times did you come up with the following solution? const userData = { user: { name: "John", surname: "Doe", birthday: "1995-01-29", laguages: ["english", "italian"] } } const userPhone = userData.user && userData.user.contacts && userData.user.contacts.phone; Of course it works… but there are two problems with this code: Hard to maintain and ugly code. If contacts object does not exist, userPhone constant will be false. So imagine that we need to access a property hasPaid. I think that it should be return undefined rather than false if the property does not exist, doesn’t it? Quick and dirty solutions Great libraries like Ramda and Lodash provides their own way to handle this problem: Lodash: import _ from "lodash"; _.get(userData, "user.contacts.phone"); Ramda: import R from "ramda"; R.path(['user', 'contacts', 'phone'], userData); I also wrote a tiny ~300bytes library called **MJN** that can handle this problem! import maybe from "mjn"; maybe(userData, "user.contacts.phone", () => "Phone does not exist"); Well, as you can see, we can handle that problem in few different ways… but not natively (yet!). Other Languages As you may guess, that particular problem exist in other languages… but they handle it! Let’s see some examples: Groovy: def userPhone = userData?.user?.contacts?.phone Ruby: userPhone = userData.try(:user).try(:contacts).try(:phone) C#: string userPhone = userData?.user?.contacts?.phone; As you can see, Groovy and C# implements a similar control flow… and guess how does the EcmaScript Optional Chaining Proposal look? EcmaScript Proposal const userPhone = userData?.user?.contacts?.phone; Ta-da! EcmaScript proposal looks very similar to the C# and Groovy implementation! I love it! Let’s see how to use it today: First of all, install Babel in your project: npm install --save-dev @babel/core @babel/cli @babel/plugin-proposal-optional-chaining Then make sure that your package.json file looks like this: { "name": "my-project", "version": "0.0.1", "scripts": { "build": "babel index.js -d lib", "start": "node lib/index.js" }, "devDependencies": { "@babel/cli": "^7.0.0", "@babel/core": "^7.0.0", "@babel/plugin-proposal-optional-chaining": "^7.2.0" } } Last but not least, create a .babelrc file and fill it with the following JSON: { "plugins": [ ["@babel/plugin-proposal-optional-chaining", { "loose": false }] ] } Ok great! Now we’re ready to go! Getting Data We’re gonna start with the example above: how do we get nested data? const userData = { user: { name: "John", surname: "Doe", birthday: "1995-01-29", contacts: { email: "foo@bar.com", phone: "000 00000" }, laguages: ["english", "italian"] } } const phone = userData?.user?.contacts?.phone; console.log(phone); // => "000 00000" So easy! And what if we try to access a property that does not exist? const profilePicture = userData?.user?.images?.profile; console.log(profilePicture); // => undefined We got undefined! Great! Deleting Properties Deleting properties is even easier! delete userData?.user?.languages; delete userData?.user?.contacts; console.log(userData); /* user: { name: "John", surname: "Doe", birthday: "1995-01-29" } */ Imagine doing this without the Optional Chaining Operator... how many if statement would you write?
https://www.hackdoor.io/articles/0XN6AJd1/optional-chaining-proposal
CC-MAIN-2020-10
refinedweb
634
51.75
Mmap - uses mmap to map in a file as a Perl variable use Sys::Mmap; new Mmap $str, 8192, 'structtest2.pl' or die $!; new Mmap $var, 8192 or die $!; mmap($foo, 0, PROT_READ, MAP_SHARED, FILEHANDLE) or die "mmap: $!"; @tags = $foo =~ /<(.*?)>/g; munmap($foo) or die "munmap: $!"; mmap($bar, 8192, PROT_READ|PROT_WRITE, MAP_SHARED, FILEHANDLE); substr($bar, 1024, 11) = "Hello world"; mmap($baz, 8192, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANON, STDOUT); $addr = mmap($baz, 8192, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANON, STDOUT); Sys::Mmap::hardwire($qux, $addr, 8192);. Note that PerlIO now defines a :mmap tag and presents mmap'd files as regular files, if that is your cup of joe. Several processes may share one copy of the file or string, saving memory, and concurrently making changes to portions of the file or string. When not used with a file, it is an alternative to SysV shared memory. Unlike SysV shared memory, there are no arbitrary size limits on the shared memory area, and sparce memory usage is handled optimally on most modern UNIX implementations. Using the new() method provides a tie()'d interface to mmap() that allows you to use the variable as a normal variable. If a filename is provided, the file is opened and mapped in. If the file is smaller than the length provided, the file is grown to that length. If no filename is provided, anonymous shared inheritable memory is used. Assigning to the variable will replace a section in the file corresponding to the length of the variable, leaving the remainder of the file intact and unmodified. Using substr() allows you to access the file at an offset, and does not place any requirements on the length argument to substr() or the length of the variable being inserted, provided it does not exceed the length of the memory region. This protects you from the pathological cases involved in using mmap() directly, documented below. When calling mmap() or hardwire() directly, you need to be careful how you use the variable. Some programming constructs may create copies of a string which, while unimportant for smallish strings, are far less welcome if you're mapping in a file which is a few gigabytes big. If you use PROT_WRITE and attempt to write to the file via the variable you need to be even more careful. One of the few ways in which you can safely write to the string in-place is by using substr() as an lvalue and ensuring that the part of the string that you replace is exactly the same length. Other functions will allocate other storage for the variable, and it will no longer overlay the mapped in file. Maps LENGTH bytes of (the contents of) OPTIONALFILENAME if OPTINALFILENAME is provided, otherwise uses anonymous, shared inheritable memory. This memory region is inherited by any fork()ed children. VARIABLE will now refer to the contents of that file. Any change to VARIABLE will make an identical change to the file. If LENGTH is zero and a file is specified, the current length of the file will be used. If LENGTH is larger then the file, and OPTIONALFILENAME is provided, the file is grown to that length before being mapped. This is the preferred interface, as it requires much less caution in handling the variable. VARIABLE will be tied into the "Mmap" package, and mmap() will be called for you. Assigning to VARIABLE will overwrite the beginning of the file for a length of the value being assigned in. The rest of the file or memory region after that point will be left intact. You may use substr() to assign at a given position: substr(VARIABLE, POSITION, LENGTH) = NEWVALUE Maps LENGTH bytes of (the underlying contents of) FILEHANDLE into your address space, starting at offset OFFSET and makes VARIABLE refer to that memory. The OFFSET argument can be omitted in which case it defaults to zero. The LENGTH argument can be zero in which case a stat is done on FILEHANDLE and the size of the underlying file is used instead. The PROTECTION argument should be some ORed combination of the constants PROT_READ, PROT_WRITE and PROT_EXEC or else PROT_NONE. The constants PROT_EXEC and PROT_NONE are unlikely to be useful here but are included for completeness. The FLAGS argument must include either MAP_SHARED or MAP_PRIVATE (the latter is unlikely to be useful here). If your platform supports it, you may also use MAP_ANON or MAP_ANONYMOUS. If your platform supplies MAP_FILE as a non-zero constant (necessarily non-POSIX) then you should also include that in FLAGS. POSIX.1b does not specify MAP_FILE as a FLAG argument and most if not all versions of Unix have MAP_FILE as zero. mmap returns undef on failure, and the address in memory where the variable was mapped to on success. Unmaps the part of your address space which was previously mapped in with a call to mmap(VARIABLE, ...) and makes VARIABLE become undefined. munmap returns 1 on success and undef on failure. Specifies the address in memory of a variable, possibly within a region you've mmap()ed another variable to. You must use the same percaustions to keep the variable from being reallocated, and use substr() with an exact length. If you munmap() a region that a hardwire()ed variable lives in, the hardwire()ed variable will not automatically be undefed. You must do this manually. The Mmap module exports the following constants into your namespace MAP_SHARED MAP_PRIVATE MAP_ANON MAP_ANONYMOUS MAP_FILE PROT_EXEC PROT_NONE PROT_READ PROT_WRITE Of the constants beginning MAP_, only MAP_SHARED and MAP_PRIVATE are defined in POSIX.1b and only MAP_SHARED is likely to be useful. Scott Walters doesn't know XS, and is just winging it. There must be a better way to tell Perl not to reallocate a variable in memory... The tie() interface makes writing to a substring of the variable much less efficient. One user cited his application running 10-20 times slower when "new Mmap" is used than when mmap() is called directly. Malcolm Beattie has not reviewed Scott's work and is not responsible for any bugs, errors, omissions, stylistic failings, importabilities, or design flaws in this version of the code. There should be a tied interface to hardwire() as well. Scott Walter's spelling is awful. hardwire() will segfault Perl if the mmap() area it was refering to is munmap()'d out from under it. munmap() will segfault Perl if the variable was not successfully mmap()'d previously, or if it has since been reallocated by Perl. Malcolm Beattie, 21 June 1996. Updated for Perl 5.6.x, additions, Scott Walters, Feb 2002. Aaron Kaplan kindly contributed patches to make the C ANSI compliant and contributed documentation as well.
http://search.cpan.org/dist/Sys-Mmap/Mmap.pm
crawl-002
refinedweb
1,113
62.88
HashSet implements the Set interface. HashSet uses hash table, at the background, for storage of elements; uses a mechanism called hashing. No guarantee of returning the elements in the same order when called different times or insertion order. HashSet permits null element. But the set should contain only one null value as duplicates are not permitted. The methods of HashSet are not synchronized. But a synchronized version of HashSet can be obtained. HashSet hs1 = new HashSet(); Set mySet = Collections.synchronizedSet(hs1); Now mySet methods are synchronized. But still hs1 is not synchronized. Following is the class signature: public class HashSet extends AbstractSet implements Set, Cloneable, Serializable Following are the constructors - HashSet(): Creates a HashSet object without any elements initially. The initial capacity is 10 and load factor 0.75. - HashSet(int iniCap): Creates a HashSet object without any elements initially. The initial capacity is iniCap and load factor 0.75. This increases performance, but the required capacity should be known early. - HashSet(int iniCap, float lf): Creates a HashSet object without any elements initially. The initial capacity is iniCap with load factor lf. - HashSet(Collection col): Creates a HashSet object with the elements of col. That is, other collection class elements can be added at the time of object creation itself. The initial capacity is equal to the number of elements of col. The load factor is 0.75. Load factor is the increment value of hash table when the existing capacity is exhausted. It is 0.75 over the existing capacity. HashSet can make use of the methods of Collection interface as it is derived from Collection. It adds the following method. - Object clone(): Returns a cloned HashSet object containing the same elements of the original HashSet. Two programs are given on HashSet. ======================================================== 1. Why the methods of an interface should be overridden as public only? 2. There is a way to create an object without new keyword. Would you like to know? 3. Using instanceof Keyword One stop destination for all Java String and StringBuffer operations 4 Responses Hi Sir, I have some doubt in Set. In Set how system not allowing duplicate, I mean how system checking the set , whether it is having duplicate or not ? It is difficult to guess as it is Java designer implementation. Everyone gives in his own way the answer and he thinks it is correct. How we do in C/C++. Take the element, iterate through a for loop and find out the element is already existing or not. Java can do other way also by using hashcode. sir i have seen the default initial capacity for HashSet is 16 in java7 api docs but u gave it as 10.plz correct me. Can you please check Java 2 docs and let met know.
http://way2java.com/collections/hashset-tutorial/
CC-MAIN-2014-35
refinedweb
463
60.31
Let me start by saying that I know this is an Xmonad issue. I have Minecraft running perfectly in XFCE. Minecraft is one of those games which in order for it to function properly, it has to (for lack of a better word) "steal" the mouse. Xmonad won't let it do that. When I load a map (multiplayer or otherwise) it starts me out in the game options screen. Normally I click "return to game," the cursor disappears and I play normally. In Xmonad, it makes the sound for click confirmation but stays in the options. I've tried turning borders off, setting Minecraft to be floating automatically, and setting the WMName to "LG3D" (which works for the screen resizes but not the problem I'm having.) Currently my only workaround is to run Minecraft in fullscreen mode (F11). Anyone else ran into this problem and managed to fix it? Thanks in advance :3 Last edited by Cyphus (2011-12-16 21:13:13) Offline Found a solution on my own. I'll leave this information here for future searches: When you turn off borders, you must make sure that you TURN OFF borders. Don't set borderWidth to 0; instead you must import XMonad.Layout.NoBorders and then set noBorders somewhere in your layoutHook. Here's my implementation that handles noBorders nicely with a PerWorkspace layout: import XMonad import XMonad.Hooks.DynamicLog import XMonad.Layout.PerWorkspace import XMonad.Layout.Reflect import XMonad.Layout.Spacing import XMonad.Layout.NoBorders import XMonad.Hooks.ManageDocks import XMonad.Hooks.SetWMName import XMonad.Hooks.InsertPosition import XMonad.Util.Dmenu import XMonad.Util.Run(spawnPipe) import XMonad.Util.EZConfig(additionalKeys) import System.IO myManageHook = composeAll [ className =? "net-minecraft-LauncherFrame" --> doShift "4:media" , className =? "net-minecraft-LauncherFrame" --> doFloat , manageDocks] myWorkspaces = ["1:main", "2:social", "3:dev", "4:media", "5:monitor", "6", "7", "8"] defaultLayout = tiled ||| Mirror tiled ||| Full where tiled = spacing 5 $ Tall nmaster delta ratio nmaster = 1 ratio = 3/5 delta = 5/100 mediaLayout = noBorders $ Full myLayout = onWorkspace "4:media" mediaLayout $ defaultLayout main = do xmproc <- spawnPipe "/usr/bin/xmobar ~/.xmobarrc" xmonad $ defaultConfig { manageHook = insertPosition Below Newer <+> myManageHook , layoutHook = avoidStruts $ myLayout , logHook = dynamicLogWithPP xmobarPP { ppOutput = hPutStrLn xmproc , ppTitle = xmobarColor "green" "" . shorten 150 } , terminal = "urxvt" , modMask = mod4Mask , workspaces = myWorkspaces -- keeps Minecraft in line , startupHook = setWMName "LG3D" , borderWidth = 2 -- border colors set to match 256 color terminal PS1 , normalBorderColor = "#0087ff" , focusedBorderColor = "#5fd700" } `additionalKeys` [ ((mod4Mask .|. shiftMask, xK_l), spawn "xscreensaver-command -lock") , ((mod4Mask, xK_v), spawn "gvim") , ((mod4Mask, xK_p), spawn "dmenu_run") ] Offline Hmm, mine doesn't seem to work very well. I've added `className =? "net-minecraft-LauncherFrame" --> doFloat` to my manage hook, but that doesn't seem to have any effect. What do I need to add? Reading your xmonad.hs file it seems like this'll effect all applications on the 4th workspace, is that true? if so, will minecraft not work on the other workspaces? Thanks, end Offline I was having the same problem - it turns out the class name of the Minecraft window had changed. Using xprop to get the class name and following the OP's setup using "net-minecraft-MinecraftLauncher" worked as expected. This was quite an annoying issue! Offline
https://bbs.archlinux.org/viewtopic.php?pid=1029180
CC-MAIN-2016-30
refinedweb
526
51.55
Ensuring Python Type Safety Python is a dynamically-typed language with no static type checking. Because of the way Python’s type checking works, as well as the deferred nature of runner execution, developer productivity can easily become bottle-necked by time spent investigating type-related errors. The Apache Beam SDK for Python uses type hints during pipeline construction and runtime to try to emulate the correctness guarantees achieved by true static typing. Additionally, using type hints lays some groundwork that allows the backend service to perform efficient type deduction and registration of Coder objects. Python version 3.5 introduces a module called typing to provide hints for type validators in the language. The Beam SDK for Python implements a subset of PEP 484 and aims to follow it as closely as possible in its own typehints module. These flags control Beam type safety: --no_pipeline_type_check Disables type checking during pipeline construction. Default is to perform these checks. --runtime_type_check Enables runtime type checking of every element. This may affect pipeline performance, so the default is to skip these checks. --type_check_additional Enables additional type checks. These are no enabled by default to preserve backwards compatibility. This flag accepts a comma-separate list of options: all: Enable all additional checks. ptransform_fn: Enable type hint decorators when used with the @ptransform_fndecorator. Benefits of Type Hints When you use type hints, Beam raises exceptions during pipeline construction time, rather than runtime. For example, Beam generates an exception if it detects that your pipeline applies mismatched PTransforms (where the expected outputs of one transform do not match the expected inputs of the following transform). These exceptions are raised at pipeline construction time, regardless of where your pipeline will execute. Introducing type hints for the PTransforms you define allows you to catch potential bugs up front in the local runner, rather than after minutes of execution into a deep, complex pipeline. Consider the following example, in which numbers is a PCollection of str values: The code then applies a Filter transform to the numbers collection with a callable that retrieves the even numbers. When you call p.run(), this code generates an error when trying to execute this transform because Filter expects a PCollection of integers, but is given a PCollection of strings instead. With type hints, this error could have been caught at pipeline construction time, before the pipeline even started running. The Beam SDK for Python includes some automatic type hinting: for example, some PTransforms, such as Create and simple ParDo transforms, attempt to deduce their output type given their input. However, Beam cannot deduce types in all cases. Therefore, the recommendation is that you declare type hints to aid you in performing your own type checks. Declaring Type Hints You can declare type hints on callables, DoFns, or entire PTransforms. There are three ways to declare type hints: inline during pipeline construction, as properties of the DoFn or PTransform using decorators, or as Python 3 type annotations on certain functions. You can always declare type hints inline, but if you need them for code that is going to be reused, declare them as annotations or decorators. For example, if your DoFn requires an int input, it makes more sense to declare the type hint for the input as an annotation of the arguments to process (or a property of the DoFn) rather than inline. Using Annotations has the added benefit of allowing use of a static type checker (such as mypy) to additionally type check your code. If you already use a type checker, using annotations instead of decorators reduces code duplication. However, annotations do not cover all the use cases that decorators and inline declarations do. For instance, they do not work for lambda functions. Declaring Type Hints Using Type Annotations New in version 2.21.0. To specify type hints as annotations on certain functions, use them as usual and omit any decorator hints or inline hints. Annotations are currently supported on: process()methods on DoFnsubclasses. expand()methods on PTransformsubclasses. - Functions passed to: ParDo, Map, FlatMap, Filter. The following code declares an int input and a str output type hint on the to_id transform, using annotations on my_fn. The following code demonstrates how to use annotations on PTransform subclasses. A valid annotation is a PCollection that wraps an internal (nested) type, PBegin, PDone, or None. The following code declares typehints on a custom PTransform, that takes a PCollection[int] input and outputs a PCollection[str], using annotations. The following code declares int input and output type hints on filter_evens, using annotations on FilterEvensDoFn.process. Since process returns a generator, the output type for a DoFn producing a PCollection[int] is annotated as Iterable[int] ( Generator[int, None, None] would also work here). Beam will remove the outer iterable of the return type on the DoFn.process method and functions passed to FlatMap to deduce the element type of resulting PCollection . It is an error to have a non-iterable return type annotation for these functions. Other supported iterable types include: Iterator, Generator, Tuple, List. The following code declares int input and output type hints on double_evens, using annotations on FilterEvensDoubleDoFn.process. Since process returns a list or None, the output type is annotated as Optional[List[int]]. Beam will also remove the outer Optional and (as above) the outer iterable of the return type, only on the DoFn.process method and functions passed to FlatMap. Declaring Type Hints Inline To specify type hints inline, use the methods with_input_types and with_output_types. The following example code declares an input type hint inline: When you apply the Filter transform to the numbers collection in the example above, you’ll be able to catch the error during pipeline construction. Declaring Type Hints Using Decorators To specify type hints as properties of a DoFn or PTransform, use the decorators @with_input_types() and @with_output_types(). The following code declares an int type hint on FilterEvensDoFn, using the decorator @with_input_types(). Decorators receive an arbitrary number of positional and/or keyword arguments, typically interpreted in the context of the function they’re wrapping. Generally the first argument is a type hint for the main input, and additional arguments are type hints for side inputs. Disabling Annotations Use Since this style of type hint declaration is enabled by default, here are some ways to disable it. - Using the @beam.typehints.no_annotationsdecorator on the specific function you want Beam to ignore annotations for. - Declaring type hints using the decorator or inline methods above. These will take precedence over annotations. - Calling beam.typehints.disable_type_annotations()before pipeline creation. This will prevent Beam from looking at annotations on all functions. Defining Generic Types You can use type hint annotations to define generic types. The following code specifies an input type hint that asserts the generic type T, and an output type hint that asserts the type Tuple[int, T]. If the input to MyTransform is of type str, Beam will infer the output type to be Tuple[int, str]. Kinds of Type Hints You can use type hints with any class, including Python primitive types, container classes, and user-defined classes. All classes, such as int, float, and user-defined classes, can be used to define type hints, called simple type hints. Container types such as lists, tuples, and iterables, can also be used to define type hints and are called parameterized type hints. Finally, there are some special types that don’t correspond to any concrete Python classes, such as Any, Optional, and Union, that are also permitted as type hints. Beam defines its own internal type hint types, which are still available for use for backward compatibility. It also supports Python’s typing module types, which are internally converted to Beam internal types. For new code, it is recommended to use typing module types. Simple Type Hints Type hints can be of any class, from int and str, to user-defined classes. If you have a class as a type hint, you may want to define a coder for it. Parameterized Type Hints Parameterized type hints are useful for hinting the types of container-like Python objects, such as list. These type hints further refine the elements in those container objects. The parameters for parameterized type hints can be simple types, parameterized types, or type variables. Element types that are type variables, such as T, impose relationships between the inputs and outputs of an operation (for example, List[T] -> T). Type hints can be nested, allowing you to define type hints for complex types. For example, List[Tuple[int, int, str]]. In order to avoid conflicting with the namespace of the built-in container types, the first letter is capitalized. The following parameterized type hints are permitted: Tuple[T, U] Tuple[T, ...] List[T] KV[T, U] Dict[T, U] Set[T] FrozenSet[T] Iterable[T] Iterator[T] Generator[T] PCollection[T] Note: The Tuple[T, U] type hint is a tuple with a fixed number of heterogeneously typed elements, while the Tuple[T, ...] type hint is a tuple with a variable of homogeneously typed elements. Special Type Hints The following are special type hints that don’t correspond to a class, but rather to special types introduced in PEP 484. Any Union[T, U, V] Optional[T] Runtime Type Checking In addition to using type hints for type checking at pipeline construction, you can enable runtime type checking to check that actual elements satisfy the declared type constraints during pipeline execution. For example, the following pipeline emits elements of the wrong type. Depending on the runner implementation, its execution may or may not fail at runtime. However, if you enable runtime type checking, the code is guaranteed to fail at runtime. To enable runtime type checking, set the pipeline option runtime_type_check to True. Note that because runtime type checks are done for each PCollection element, enabling this feature may incur a significant performance penalty. It is therefore recommended that runtime type checks are disabled for production pipelines. See the following section for a quicker, production-friendly alternative. Faster Runtime Type Checking You can enable faster, sampling-based runtime type checking by setting the pipeline option performance_runtime_type_check to True. The is a Python 3 only feature that works by runtime type checking a small subset of values, called a sample, using optimized Cython code. Currently, this feature does not support runtime type checking for side inputs or combining operations. These are planned to be supported in a future release of Beam. Use of Type Hints in Coders When your pipeline reads, writes, or otherwise materializes its data, the elements in your PCollection need to be encoded and decoded to and from byte strings. Byte strings are used for intermediate storage, for comparing keys in GroupByKey operations, and for reading from sources and writing to sinks. The Beam SDK for Python uses Python’s native support for serializing objects of unknown type, a process called pickling. However, using the PickleCoder comes with several drawbacks: it is less efficient in time and space, and the encoding used is not deterministic, which hinders distributed partitioning, grouping, and state lookup. To avoid these drawbacks, you can define Coder classes for encoding and decoding types in a more efficient way. You can specify a Coder to describe how the elements of a given PCollection should be encoded and decoded. In order to be correct and efficient, a Coder needs type information and for PCollections to be associated with a specific type. Type hints are what make this type information available. The Beam SDK for Python provides built-in coders for the standard Python types such as int, float, str, bytes, and unicode. Deterministic Coders If you don’t define a Coder, the default is a coder that falls back to pickling for unknown types. In some cases, you must specify a deterministic Coder or else you will get a runtime error. For example, suppose you have a PCollection of key-value pairs whose keys are Player objects. If you apply a GroupByKey transform to such a collection, its key objects might be serialized differently on different machines when a nondeterministic coder, such as the default pickle coder, is used. Since GroupByKey uses this serialized representation to compare keys, this may result in incorrect behavior. To ensure that the elements are always encoded and decoded in the same way, you need to define a deterministic Coder for the Player class. The following code shows the example Player class and how to define a Coder for it. When you use type hints, Beam infers which Coders to use, using beam.coders.registry. The following code registers PlayerCoder as a coder for the Player class. In the example, the input type declared for CombinePerKey is Tuple[Player, int]. In this case, Beam infers that the Coder objects to use are TupleCoder, PlayerCoder, and IntCoder. from typing import Tuple class Player(object): def __init__(self, team, name): self.team = team self.name = name class PlayerCoder(beam.coders.Coder): def encode(self, player): return ('%s:%s' % (player.team, player.name)).encode('utf-8') def decode(self, s): return Player(*s.decode('utf-8').split(':')) def is_deterministic(self): return True beam.coders.registry.register_coder(Player, PlayerCoder) def parse_player_and_score(csv): name, team, score = csv.split(',') return Player(team, name), int(score) totals = ( lines | beam.Map(parse_player_and_score) | beam.CombinePerKey(sum).with_input_types(Tuple[Player, int]))
https://beam.apache.org/documentation/sdks/python-type-safety/
CC-MAIN-2022-27
refinedweb
2,226
54.83
React IntlReact Intl Internationalize React apps. This library provides React components and an API to format dates, numbers, and strings, including pluralization and handling translations. OverviewOverview React Intl is part of FormatJS. It provides bindings to React via its components and API. Slack: Join us on Slack at react-intl.slack.com for help, general conversation and more Documentation React Intl's docs are in this GitHub repo's Wiki, Get Started. There are also several runnable example apps which you can reference to learn how all the pieces fit together. (If you're looking for React Intl v1, you can find it here.) FeaturesFeatures - Display numbers with separators. - Display dates and times correctly. - Display dates relative to "now". - Pluralize labels in strings. - Support for 150+ languages. - Runs in the browser and Node.js. - Built on standards. ExampleExample There are several runnable examples in this Git repo, but here's a Hello World one: import React, {Component} from 'react'; import ReactDOM from 'react-dom'; import {IntlProvider, FormattedMessage} from 'react-intl'; class App extends Component { constructor(props) { super(props); this.state = { name : 'Eric', unreadCount: 1000, }; } render() { const {name, unreadCount} = this.state; return ( <p> <FormattedMessage id="welcome" defaultMessage={`Hello {name}, you have {unreadCount, number} {unreadCount, plural, one {message} other {messages} }`} values={{name: <b>{name}</b>, unreadCount}} /> </p> ); } } ReactDOM.render( <IntlProvider locale="en"> <App /> </IntlProvider>, document.getElementById('container') ); This example would render: "Hello Eric, you have 1,000 messages." into the container element on the page. Pluralization rules: In some languages you have more than one and other. For example in ru there are the following plural rules: one, few, many and other. Check out the official Unicode CLDR documentation. ContributeContribute Let's make React Intl and FormatJS better! If you're interested in helping, all contributions are welcome and appreciated. React Intl is just one of many packages that make up the FormatJS suite of packages, and you can contribute to any/all of them, including the Format JS website itself. LicenseLicense This software is free to use under the Yahoo Inc. BSD license. See the LICENSE file for license text and copyright information.
https://www.ctolib.com/react-intl.html
CC-MAIN-2019-04
refinedweb
352
51.65
Sending an image bytearray from mobile to java http server Hello... My handymidlet shoots a foto, this is saved in a bytearray and drawn for displaying it on the phonescreen. Now I want to send this array via http to a java http server for drawing it there and saving... I got it to send and receive a string. Because of headerinformation...I had to filter the inputstream to find the string. (with bufferedreader and an added keyword) How can i filter the byte array out of the inputstream? Hi... Thank you very much for your help. This way to solve the problem really seems to be much more easier. But I still have Problems. So my phone is configured how you described. I can't really get the content-length with the server. But I tested everything with a byte array length of 20. With System.out.... I give out the testarray before sending and after receiving. They are different!!!! :( Here is my Server-Code: import java.net.Socket; import java.net.ServerSocket; import java.io.InputStream; import java.io.PrintStream; public class Server { public static void main(String args[]) { try { ServerSocket server; PrintStream os; Socket connection; InputStream is; server = new ServerSocket(8081); byte testData[]=new byte[20]; System.out.println("Waiting ..."); while (true) { connection = server.accept(); is = connection.getInputStream(); is.read(aufnahmeData); is.skip(is.available()); System.out.println("TestData: "+testData); } } catch (Exception e) { // every fault ... System.out.println("Ouch!!!"); } } } Oh you are rolling your own socket server not using a Java http server. You will need to look at the HTTP protocol spec and learn the rules of how HTTP is sent to the server. OR.. just write a servlet and run it in jetty or tomcat. (much easier) :) But here is a simple http server code I found that might help you. and another at: -Shawn thanks for everything this link helped a lot... Hi Oldshoe, It sounds like you may be doing something a bit different than is needed, and thus causing more work for yourself. On the phone you want to setup a HttpConnection using Connector.open(url); You will also need to set the request type to POST, you will also want to set the content-type header property to "application/octet", and the content-length to the size of the bytearray Then open from the HttpConnection object an outputstream, writing your byte array. Then on the server once you have the connection from the device you will want to get the content-length from the headers, build an array of that length of bytes then open the inputstream and read the input into the byte array. You can not do this as a GET type request as the length of a GET querystring is at most 2k, but is platform dependent based on the web/app server. -Shawn
https://www.java.net/node/683462
CC-MAIN-2015-35
refinedweb
476
66.54
ctermid - generate a pathname for controlling terminal #include <stdio.h> char *ctermid(char *s); The ctermid() function generates interfaces, the ctermid() function must be called with a non-NULL parameter. If s is a null pointer, the string is generated in an area that may be static (and therefore may be overwritten by each call), the address of which is returned. Otherwise s is assumed to point to a character array of at least {L_ctermid} bytes; the string is placed in this array and the value of s is returned. The symbolic constant {L_ctermid} is defined in <stdio.h>, and will have a value greater than 0. The ctermid() function will return an empty string if the pathname that would refer to the controlling terminal cannot be determined, or if the function is unsuccessful. No errors are defined. None. The difference between ctermid() and ttyname() is that ttyname() must be handed a file descriptor and returns a path of the terminal associated with that file descriptor, while ctermid() returns a string (such as /dev/tty) that will refer to the current controlling terminal if used as a pathname. None. ttyname(), <stdio.h>. Derived from Issue 1 of the SVID.
http://www.opengroup.org/onlinepubs/007908799/xsh/ctermid.html
crawl-001
refinedweb
199
60.35
get_memory_usage returning negative numbers Is there a reason that get_memory_usage should ever return a negative number? I tried running a "toy" program (after get_memory_usage returned surprisingly large numbers on a more complicated program that I'm using in my research - I wanted to see what the standard memory usage should look like for a basic counter function) and here is what happened: def toy(n): c = 0 for k in xrange(0, n): c = c+1 return c get_memory_usage(toy(1000)) -464.0859375 I could be misunderstanding how get_memory_usage works, but it seems to me that the memory usage should never be negative! (In case this is relevant, I am using Sage 4.7.1 on my laptop.)
https://ask.sagemath.org/question/8355/get_memory_usage-returning-negative-numbers/?answer=12718
CC-MAIN-2019-22
refinedweb
118
57.81
#include "Wire.h"#include "I2Cdev.h"#include "MPU6050.h"MPU6050 accelgyro;int16_t ax, ay, az;int getTilt(){ // read raw accel measurements from device accelgyro.getAcceleration(&ax, &ay, &az); if (ax < -7000) { return 4; } if (ax > 7000) { return 3; } if (az < -7000) { return 2; } if (az > 7000) { return 1; } return 0;// Array Filter to cancel out sensor fluctuationsint getTiltAverage() {; } } //90% of getTilt() returns has to be identical to rule out any unwanted fluctuations from the sensor if ((ones/arrayLength) > 0.9) return 1; if ((twos/arrayLength) > 0.9) return 2; if ((threes/arrayLength) > 0.9) return 3; if ((fours/arrayLength) > 0.9) return 4; if ((zeros/arrayLength) > 0.9) return 0; return -1;}} If you want to smooth the returned values, you can just use an decaying average for each value you read from the sensor - that only needs one global/static variable for each value, and no re-reading.The second part of your problem is that you want to wait until the input returns to center before you accept another input. To do that I'd define an enumerated type with values for each possible position, and a global/static variable holding the current reported position. Each time your function to read the state is called, you will read the device and recalculate the smoothed values, determine the instantaneous position (as an enumerated value) by comparing the smoothed values against your thresholds, and compare the instantaneous position against the reported position to decide whether to update the reported position. Finally, return the current reported position.I'd implement that with one function to read the sensor and smooth the values, another function to do the threshold comparisons and return the position as an enumerated value, and finally the function which is called to update and return the currently reported position. All together you would be looking at about a dozen lines of code. Code: [Select] if ((ones/arrayLength) > 0.9) return 1; if ((ones/arrayLength) > 0.9) return 1; if (ones > (0.9*arrayLength)) return 1;; } } This is my code responsible for reading out and smoothing the sensor values: The array to filter out the fluctuations actually does its job.My problem is with the 'return to neutral first' part. I really hope someone is willing to give me a hand here, so that I can get to the next step in my project. I would like to reward the helper while I am on a student budget but we will figure something out surely . Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=106335.0;prev_next=next
CC-MAIN-2016-36
refinedweb
451
53.1
IUP::PPlot - [GUI element] canvas-like element for creating 2D plots Creates a PPlot-based plot. PPlot is a library for creating plots that is system independent. It is available at SourceForge. However the standard PPlot distribution source code was changed for iup_pplot library (to improve features and visual appearance). $pplot = IUP::PPlot->new( TITLE=>"Simple Data", GRID=>"YES" ); Returns: the identifier of the created element, or undef if an error occurs. NOTE: You can pass to new() other ATTRIBUTE=>'value' or CALLBACKNAME=>\&func pairs relevant to this element - see IUP::Manual::02_Elements. Each plot can contain 2 axis (X and Y), a title, a legend box, a grid, a dataset area and as many datasets you want. Each data set is added using the IUP::PPlotAdd function. All other plot parameters are configured by attributes. If no attribute is set, the default values were selected to best display the plot. When setting attributes the plot is NOT redrawn until the REDRAW attribute is set or a redraw event occurs. The dataset area is delimited by a margin. Data is only plotted inside the dataset area. Axis and main title are positioned independent of this margin. It is very important to set the margins when using axis automatic scaling or the axis maybe hidden. The legend box is a list of the dataset names, each one drawn with the same color of the correspondent dataset. The box is located in one of the four corners of the dataset area. The grid is automatically spaced accordingly the current axis displayed values. The title is always centered in the top of the plot. The axis are always positioned at the origin, except when CrossOrigin is disabled, then it is positioned at the left-bottom. If values are only positive then the origin will be placed in left bottom position. If values are negative and positive then origin will be placed inside the plot. The ticks in the axis are also automatically distributed. PPlot implementation demands that the MARGIN* attributes must be set so the plot is not cropped. Zoom in can be done selecting a region using the left mouse button. Zoom out is done with a single click of the left mouse button. If the Ctrl+X key combination is pressed the zoom selection is restricted to the X axis, the Y axis will be left unchanged. If the Ctrl+Y key combination is pressed the zoom selection is restricted to the Y axis, the X axis will be left unchanged. If the Ctrl+R key combination is pressed the zoom selection is restored to a free rectangle. Each zoom in operation is stacked, so each zoom out operation goes back the the previous zoom selection. Zoom operates on AXS_XMAX, AXS_XMIN, AXS_YMAX, AXS_YMIN even if AUTOMIN/MAX is enabled. The axis may be hidden depending on the selected rectangle. If you press the Ctrl+Shift key combination, while holding the left mouse button down, a cross hair cursor will be displayed for each dataset in the plot. The X coordinate will control the cursor, the Y coordinate will reflect each dataset correspondent value. Selection and editing of a dataset can be enabled using the DS_EDIT attribute. To select all the samples in the dataset press the Shift key while clicking with the left mouse button near a sample in the dataset. To deselect all samples press the Shift key while clicking with the left mouse button in the background. To select or deselect individual samples press the Ctrl key while clicking with the left mouse button near the sample in the dataset. After selecting samples use the Del key to remove the selected samples. Also use the arrow keys to move the Y coordinate of the selected samples. Press the Ctrl key to increase the step size when moving the Y coordinate. $plot->PlotBegin($dim); # $dim = 1 (for 1D) or 2 (for 2D) Prepares a dataset to receive samples. The dimension of the data can be 1, 2 or 3. $index = $plot->PlotEnd(); Adds a 2D dataset to the plot and returns the dataset index. The data set can be empty. Redraw is NOT done until the REDRAW attribute is set. Also it will change the current dataset index to the return value. You can only set attributes of a dataset AFTER you added the dataset. Can only be called if PlotBegin was called. Whenever you create a dataset all its "DS_*" attributes will be set to the default values. Notice that DS_MODE must be set before other "DS_*" attributes. $plot->PlotNewDataSet($dim); # $dim = 1 (for 1D) or 2 (for 2D) Creates an empty dataset to receive samples. The dimension of the data can be 1 or 2. $plot->PlotAdd1D($name, $y); #or $plot->PlotAdd1D(\@name, \@y); #or $plot->PlotAdd1D($y); #or $plot->PlotAdd1D(\@y); Adds a sample to the dataset. Can only be called if PlotBegin was called with dim=1. name is an optional string used for tick labels in the X axis, and it can be undef. Names are allowed only for the first dataset and when set ticks configuration for the X axis is ignored, all the names are shown. $plot->PlotAdd2D($x, $y); #or $plot->PlotAdd2D(\@x, \@y); $plot->PlotInsert1D($index, $sample_index, \@name, \@y); #or $plot->PlotInsert1D($index, $sample_index, \@y); See PlotInsert2D. $plot->PlotInsert2D($index, $sample_index, \@x, \@y); Inserts an array of samples in the dataset ds_index at the given sample_index. Can be used only after the dataset is added to the plot (after PlotBegin). sample_index can be after the last sample so data is appended to the array. Current data is shifted if necessary. ($ix, $iy) = $plot->PlotTransform($x, $y); Converts coordinates in plot units to pixels. It should be used in PREDRAW_CB and POSTDRAW_CB callbacks only. Output variables can be undef if not used. It can be used inside other callbacks, but make sure that the drawing after a resize is done. $plot->PlotPaintTo($canvas); Plots to the given canvas instead of the display canvas. Handy if you want to save the plot into IUP::Canvas::SVG or IUP::Canvas::EMF like this: my $cnv = IUP::Canvas::SVG->new(filename=>"output.svg", width=>300, height=>210, resolution=>4); $mainplot->PlotPaintTo($cnv); $cnv->cdKillCanvas(); For more info about concept of attributes (setting/getting values etc.) see IUP::Manual::03_Attributes. Attributes specific to this element: (write-only, non inheritable) Redraw the plot and update the display. Value is ignored. All other attributes will NOT update the display, so you can set many attributes without visual output. (non inheritable) Defines if the double buffer will use standard driver image (NO - faster) or an RGB image (YES - slower). Default: NO. The IMAGERGB driver has anti-aliasing which can improve the line drawing. [Windows Only] (non inheritable) Defines if the double buffer will use GDI+ (YES) for drawing or standard GDI (NO). Default: NO. The GDI+ driver has anti-aliasing which can improve the line drawing. The font used in all text elements of the plot: title, legend and labels. The background color. The default value is white "255 255 255". The title color. The default value is black "0 0 0". (non inheritable) The title. Located always at the top center area. (non inheritable) The title font size and style. The default values depends on the FONT attribute and the returned value is undef. Set to undef, to use the FONT attribute values. Style can be "PLAIN", "BOLD", "ITALIC" or "BOLDITALIC". (non inheritable) Margin of the dataset area. PPlot implementation demands that margins must be set so the plot is not cropped. Default: "15", "15", "30", "15". Shows or hides the legend box. Can be YES or NO. Default: NO. Legend box position. Can be: "TOPLEFT", "TOPRIGHT", "BOTTOMLEFT", or "BOTTOMRIGHT. Default: "TOPRIGHT". The legend box text font size and style. Line style of the grid. Can be: "CONTINUOUS", "DASHED", "DOTTED", "DASH_DOT", "DASH_DOT_DOT". Default is "CONTINUOUS". Grid color. Default: "200 200 200". Shows or hides the grid in both or a specific axis. Can be: YES (both), HORIZONTAL, VERTICAL or NO. Default: NO. (write-only) Removes a dataset given its index. (write-only) Removes all datasets. Value is ignored. [read-only] Total number of datasets. Current dataset index. Default is -1. When a dataset is added it becomes the current dataset. The index starts at 0. All "DS_*" attributes are dependent on this value. Legend text of the current dataset. Default is dynamically generated: "plot 0", "plot 1", "plot 2", ... Color of the current dataset and it legend text. Default is dynamically generated for the 6 first datasets, others are default to black "0 0 0". The first 6 are: 0="255 0 0", 1="0 0 255", 2="0 255 0", 3="0 255 255", 4="255 0 255", 5="255 255 0". Drawing mode of the current dataset. Can be: "LINE", "BAR", "MARK" or "MARKLINE". Default: "LINE". This must be set before other "DS_*" attributes. Line style of the current dataset. Can be: "CONTINUOUS", "DASHED", "DOTTED", "DASH_DOT", "DASH_DOT_DOT". Default is "CONTINUOUS". Line width of the current dataset. Default: 1. Mark style of the current dataset. Can be: "PLUS", "STAR", "CIRCLE", "X", "BOX", "DIAMOND", "HOLLOW_CIRCLE", "HOLLOW_BOX", "HOLLOW_DIAMOND". Default is "X". Mark size of the current dataset. Default: 7. Enable or disable the display of the values near each sample. Can be YES or NO. Default: NO. (write-only) Removes a sample from the current dataset given its index. Enable or disable the current dataset selection and editing. Can be YES or NO. Default: NO. Axis, ticks and label color. Default: "0 0 0". Minimum and maximum displayed values of the respective axis. Automatically calculated values when AUTOMIN or AUTOMAX are enabled. Configures the automatic scaling of the minimum and maximum display values. Can be YES or NO. Default: YES. Text label of the respective axis. Text label position at center (YES) or at top/right (NO). Default: YES. Reverse the axis direction. Can be YES or NO. Default: NO. Default is Y oriented bottom to top, and X oriented from left to right. Allow the axis to cross the origin and to be placed inside the dataset area. Can be YES or NO. Default: YES. Configures the scale of the respective axis. Can be: LIN (liner), LOG10 (base 10), LOG2 (base 2) and LOGN (base e). Default: LIN. Axis label text font size and style. See TITLEFONTSIZE and TITLEFONTSTYLE. Enable or disable the axis tick display. Can be YES or NO. Default: YES. Axis tick number C format string. Default: "%.0f". Axis tick number font size and style. See TITLEFONTSIZE and TITLEFONTSTYLE. Configures the automatic tick spacing. Can be YES or NO. Default: YES. The spacing between major ticks. Default is 1 when AUTOTICK is disabled. Number of ticks between each major tick. Default is 5 when AUTOTICK is disabled. Configures the automatic tick size. Can be YES or NO. Default: YES. Size of ticks in pixels. Default is 5 when AUTOTICKSIZE is disabled. Size of major ticks in pixels. Default is 8 when AUTOTICKSIZE is disabled. The following common attributes are also accepted: For more info about concept of callbacks (setting callback handlers etc.) see IUP::Manual::04_Callbacks. Callbacks specific to this element: Action generated when the Del key is pressed to removed a sample from a dataset. If multiple points are selected it is called once for each selected point. Callback handler prototype: sub delete_cb_handler { my ($self, $index, $sample_index, $x, $y) = @_; #... } $index: index of the dataset $sample_index: index of the sample in the dataset $x: X coordinate value of the sample $y: Y coordinate value of the sample Returns: If IUP_IGNORE then the sample is not deleted. Actions generated when a delete operation will begin or end. But they are called only if DELETE_CB is also defined. Callback handler prototype: sub deletebegin_cb_handler { my ($self) = @_; #... } Callback handler prototype: sub deleteend_cb_handler { my ($self) = @_; #... } Returns: If DELETEBEGIN_CB returns IUP_IGNORE the delete operation for all the selected samples are aborted. Action generated when a sample is selected. If multiple points are selected it is called once for each new selected point. It is called only if the selection state of the sample is about to be changed. Callback handler prototype: sub select_cb_handler { my ($self, $index, $sample_index, $x, $y, $select) = @_; #... } $index - index of the dataset $sample_index - index of the sample in the dataset $x: X coordinate value of the sample $y: Y coordinate value of the sample $select: boolean value that a non zero value indicates if the point is to be selected. Returns: If IUP_IGNORE then the sample is not selected. Actions generated when a selection operation will begin or end. But they are called only if SELECT_CB is also defined. Callback handler prototype: sub seletebegin_cb_handler { my ($self) = @_; #... } Callback handler prototype: sub selectend_cb_handler { my ($self) = @_; #... } Returns: If SELECTBEGIN_CB returns IUP_IGNORE the selection operation is aborted. Action generated when a sample is selected. If multiple points are selected it is called once for each new selected point. It is called only if the selection state of the sample is about to be changed. Callback handler prototype: sub edit_cb_handler { my ($self, $index, $sample_index, $x, $y) = @_; my $new_x, $new_y, $retval; #... return ($new_x, $new_y, $retval); } #ZZZ-test-this $index: index of the dataset $sample_index: index of the sample in the dataset $x: X coordinate value of the sample $y: Y coordinate value of the sample $new_x: (return value) the new X coordinate value of the sample $new_y: (return value) the new Y coordinate value of the sample $retval: (return value) If IUP_IGNORE then the sample is not edited. The application can changed the new value before it is edited. Actions generated when an editing operation will begin or end. But they are called only if EDIT_CB is also defined. Callback handler prototype: sub editbegin_cb_handler { my ($self) = @_; #... } Callback handler prototype: sub editend_cb_handler { my ($self) = @_; #... } Returns: If EDITBEGIN_CB returns IUP_IGNORE the editing operation is aborted. Actions generated before and after the redraw operation. Predraw can be used to draw a different background and Postdraw can be used to draw additional information in the plot. Predraw has no restrictions, but Postdraw is clipped to the dataset area. To position elements in plot units, use the PlotTransform function. Callback handler prototype: sub predraw_cb_handler { my ($self, $canvas) = @_; #... } Callback handler prototype: sub postdraw_cb_handler { my ($self, $canvas) = @_; #... } $cnv: reference to IUP::Canvas where the draw operation occurs. The following common callbacks are also accepted: The element IUP::PPlot is used in the following sample scripts: The original doc: iup_pplot.html
http://search.cpan.org/dist/IUP/lib/IUP/PPlot.pod
CC-MAIN-2015-32
refinedweb
2,420
68.36
New I/O, usually called NIO, is a collection of APIs that offer additional capabilities for intensive I/O operations. It was introduced with the Java 1.4 release by Sun Microsystems to complement an existing standard I/O. The extended NIO that offers further new file system APIs, called NIO2, was released with Java SE 7 (“Dolphin”). NIO related questions are very popular in java interviews now-a-days. NIO2 provides two major methods of reading a file: - Using buffer and channel classes - Using Path and Files classes In this post, I am showing a couple of ways to read a file from file system. So lets start them by first showing old famous approach first, so that we can see what really changed. Old famous I/O way This example shows how we have been reading a text file using old I/O library APIs. It uses a BufferedReader object for reading. Another way can be using InputStream implementation. package com.howtodoinjava.test.nio; import java.io.BufferedReader; import java.io.FileReader; import java.io.IOException; public class WithoutNIOExample { public static void main(String[] args) { BufferedReader br = null; String sCurrentLine = null; try { br = new BufferedReader( new FileReader("test.txt")); while ((sCurrentLine = br.readLine()) != null) { System.out.println(sCurrentLine); } } catch (IOException e) { e.printStackTrace(); } finally { try { if (br != null) br.close(); } catch (IOException ex) { ex.printStackTrace(); } } } } 1) Read a small file in buffer of file size package com.howtodoinjava.test.nio; import java.io.IOException; import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; public class ReadFileWithFileSizeBuffer { public static void main(String args[]) { try { RandomAccessFile aFile = new RandomAccessFile( "test.txt","r"); FileChannel inChannel = aFile.getChannel(); long fileSize = inChannel.size(); ByteBuffer buffer = ByteBuffer.allocate((int) fileSize); inChannel.read(buffer); //buffer.rewind(); buffer.flip(); for (int i = 0; i < fileSize; i++) { System.out.print((char) buffer.get()); } inChannel.close(); aFile.close(); } catch (IOException exc) { System.out.println(exc); System.exit(1); } } } 2) Read a large file in chunks with fixed size buffer package com.howtodoinjava.test.nio; import java.io.IOException; import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; public class ReadFileWithFixedSizeBuffer { public static void main(String[] args) throws IOException { RandomAccessFile aFile = new RandomAccessFile ("test.txt", "r"); FileChannel inChannel = aFile.getChannel(); ByteBuffer buffer = ByteBuffer.allocate(1024); while(inChannel.read(buffer) > 0) { buffer.flip(); for (int i = 0; i < buffer.limit(); i++) { System.out.print((char) buffer.get()); } buffer.clear(); // do something with the data and clear/compact it. } inChannel.close(); aFile.close(); } } 3) Faster file copy with mapped byte buffer package com.howtodoinjava.test.nio; import java.io.IOException; import java.io.RandomAccessFile; import java.nio.MappedByteBuffer; import java.nio.channels.FileChannel; public class ReadFileWithMappedByteBuffer { public static void main(String[] args) throws IOException { RandomAccessFile aFile = new RandomAccessFile ("test.txt", "r"); FileChannel inChannel = aFile.getChannel(); MappedByteBuffer buffer = inChannel.map(FileChannel.MapMode.READ_ONLY, 0, inChannel.size()); buffer.load(); for (int i = 0; i < buffer.limit(); i++) { System.out.print((char) buffer.get()); } buffer.clear(); // do something with the data and clear/compact it. inChannel.close(); aFile.close(); } } All above techniques will read the content of file and print it to console. You can do whatever you want once you have read it. Happy Learning !! MappedByteBuffer buffer = inChannel.map(FileChannel.MapMode.READ_ONLY, 0, inChannel.size()); —-> inChannel.size() it looks like size of the file. What will happen if file is having 8-10 GB. Will mappedbytebuffer is suggestible and how internally it will work without out of memory It is size of the region to be mapped. It is not buffer size. Nice examples. I just wonder that on your first example “1) Read a small file in buffer of file size” why did you not use finally{} block for aFile resource? When you close the reader it will close the file for you, theoretically you would miss the case when the reader throws an exception in the constructor after the file has opened, but I believe the only exceptions that could occur at that time would be things like out of memory and similar which would forcibly close down your JVM anyways. Hi Lokesh, I want to divide an xml file according to size using NIO. We can read char or byte only from buffer, so how can i check whether the xml file is proper or not. Animesh Brilliant question. And that means I do not know the answer 🙂 I will try to find a solution for this problem when time permits. I you get any solution, then please update me as well. Thanks !! Hi, Could you please explain why in the example “1) Read a small file in buffer of file size”, you write: 20 buffer.rewind(); 21 buffer.flip(); Wouldn’t be enough to use buffer.flip(), like in the example “2) Read a large file in chunks with fixed size buffer”? You are right. I must have forget to comment that line. Actually both methods are almost similar. In fact, you can use rewind( ) to go back and reread the data in a buffer that has already been flipped. How multiple threads could read single file parallely Hi Lokesh, thanks for the informative write-up. The key difference is that in NIO you are reading a buffer not a stream. And NIO is by design non-blocking. So the thread is not blocked. Just a question. I have to process huge text files (billions of records, each) line by line. Should I read them with IO, NIO, or NIO2? Which class? IO is not good solution. Use ReadFileWithFixedSizeBuffer example. In java 8, you can use Sorry but the old famous way do not the same – here is reading line by line – and with a Reader. The three other sources reading byte by byte – with a stream. No good! Dear friend, it’s not about lines or bytes or streams. It’s about how it was and how it is. very useful thanks
https://howtodoinjava.com/java7/nio/3-ways-to-read-files-using-java-nio/
CC-MAIN-2020-10
refinedweb
992
61.93
Pandas filter dates by month, hour, day and last N days & weeks We have dataframe with dates or timestamps columns and we would like to filter the rows by Month, Hour, day or by last n days from today’s date. Pandas has a dt accessor object for datetimelike properties of the series and can be used to access the properties from Timestamp or a collection of timestamps like a DatetimeIndex. if you need a refresher on working with datetime in pandas then check this out Here are some of these datetime Index properties that we will check out in this post: - week - weekday - day_name - is_month_end - is_month_start - is_year_end - is_year_start Let’s get started, we will first build a dataframe with date as an index Create Dataframe with DatetimeIndex We have first created a series of date range by using pd.date_range() that returns a fixed frequency DatetimeIndex and gives the range of equally spaced time points. The start date is 1-JAN-2020 and end date is 24-DEC-2020 13:27:00, with a period of 37 min, so the total length of date series was 13955, you can expect a dataframe of these many rows. The second column in dataframe is value and that has same length as the date column. We have set up the date column as the Datetimeindex of the dataframe, which is Immutable ndarray-like of datetime64 data. import pandas as pd import numpy as np dates = pd.date_range(start='01-JAN-2020', end='24-DEC-2020 13:27:00', freq='37 min') df=pd.DataFrame({'date':dates, 'value':np.random.uniform(-1,0,len(dates))}) df.set_index('date', inplace=True) df.sample(2) This is how our dataframe looks like Filter by time - Hour and Minutes We want to filter the above dataframe by a hour and minute condition, where hour is greater than or equal to 10 and less than equal to 14 and minute is greater than equal to 10 and less than equal to 30 We will use the DatetimeIndex property hour and minute to get the hour and minutes respectively from the date value or timestamp. Since the index is a DatetimeIndex we were using df.index.month, however if your date is a datetime column series then you can use the dt accessor df.date.dt.month to get the month df[((df.index.hour>=10) & (df.index.hour<=14)) & (df.index.minute>10) & (df.index.minute<30)].sort_index(ascending=True).sample(5) After the rows are filter by hour and minutes condition, I’m sorting it in ascending order and grabbing a sample set of 5. Filter by Month Here we want to filter by month and interested to find the dates where month integer value is in 2, 7,6 and 11 We will again use the DatetimeIndex month properties to get the month from the date df[df.index.month.isin([7, 6, 11, 2])].sort_index(ascending=True).sample(5) We want to filter the rows from dataframe by month but this time we will use the month names and not the integer values df[df.index.month_name().isin(['July', 'June', 'November', 'February'])].sort_index(ascending=True).sample(5) We can also filter it by abbreviated month name by taking the slice, first 3 letters of each month df[df.index.month_name().str[:3].isin(['Jul', 'Jun', 'Nov', 'Feb'])].sort_index(ascending=True).sample(5) Filter last 3 months We can also filter by last N months, Since there is no month argument for pd.Timedelta so the work around is to get the month of last date and subtract N to it. df[df.index.month >= (df.index.date[-1].month-3)] Alternatively, we can use pd.offsets() also but that throws a warning that comparison of Timestamp with datetime.date is deprecated and use pd.Timestamp(date) or ts.date() == date instead df[df.index.date > (df.index.date[-1]- pd.offsets.MonthBegin(3))] Filter by day of week We want to filter the dates by day of week and the attributes that can be used for this is day_name and weekday Here we will filter the dates by weekdays: Monday, Wednesday and Sunday from the dataframe df[df.index.day_name().isin(['Monday','Wednesday','Sunday'])].sort_index(ascending=True).head(5) We can also filter it by abbreviated weekday name by taking the slice, first 3 letters of each day df[df.index.day_name().str[:3].isin(['Mon','Wed','Sun'])].sort_index(ascending=True).head(5) Altenatively, you can also filter by index of the weekday and it starts from Monday i.e. 0, so for Mon, Wed and Sun it would be 0,2,4. df[df.index.weekday.isin([0,2,4])].sort_index(ascending=True).head(5) Filter by Month start and end Filter by month start and end can be done by using the properties is_month_start and is_month_end we want all the dates in dataframe which are either start of the month or end of the month df[(df.index.is_month_end)|(df.index.is_month_start)].sort_index(ascending=True).tail() Filter by Quarter start and end Similar to month start and end we can use property is_quarter_start and is_quarter_end So we want all the dates in dataframe which are either start of a Quarter or end of a Quarter Filter last N days, Weeks, hours, seconds We want to filter by last N days, weeks, hours and seconds, The pd.TimeDelta() represents a duration, the difference between two dates or times. So for last N days, we could get the date at the bottom of index, which is technically the last date in dataframe and then subtract the N days timedelta from it. Here we are subtracting 200 days and this will give us the last 200 days rows in the dataframe df[df.index.date > (df.index.date[-1]- pd.Timedelta(days=200))] Similarly, to filter rows by last N weeks, hours and seconds we can just construct a Timedelta from the passed arguments, Following are the allowed keywords: days, seconds, microseconds, milliseconds, minutes, hours, weeks Filter last 8 weeks: We want to filter last 8 weeks, so we construct a timedelta and subtracted it from the last date in dataframe df[df.index.date > (df.index.date[-1]- pd.Timedelta(weeks=8))]
https://kanoki.org/2022/07/16/pandas-filter-dates-by-month-hour-day-or-last-n-days-weeks/
CC-MAIN-2022-33
refinedweb
1,048
56.89
I don’t know how many times I have seen a method which I have been trying to debug with a whole stack of parameter checking code at the start of the method to ensure nothing weird happens with unexpected input. This subsequently caused the nicely abstracted methods to be less abstract than originally intended. Thankfully .Net 4.0 makes life for the developer, and subsequent debuggers a lot easier with Code Contracts. Code contracts come under the namespace System.Diagnostics.Contracts and the most used class in this is the Contract class. Pre-Conditions There are some different flavours of code contracts, the first is Pre-conditions. This will be similar to the checks done to insure your method hasn’t been passed some incorrect parameters. e.g. a simple divide example which requires that the divisor isn’t 0 public float Divide(float a, float b) { Contract.Requires(b != 0); // line to ensure code contract Return a/b; } if this method is called with b as 0, when compiling and doing static analysis in Visual Studio 2010 (depending on the code contract options set on the project options found in the standard project settings in Visual Studio) will either show as an error, therefore not allowing the project to build, or show an error. There is an override on the Requires method which allows you to put your own warning/error message if the contract is not met: Contract.Requires(b != 0, "Parameter b must not be 0 when calling divide"); this analysis is pretty clever and knows when you are trying to trick it 😉 Console.WriteLine(Divide(1, 1)); // works fine Console.WriteLine(Divide(1, 0)); var a = 1; a--; Console.WriteLine(Divide(1, a)); // finds this too 🙂 var b = 1; Random r = new Random(); b -= r.Next(2); if (b != 0) // without this, there would be an error saying that b could be 0 was unproven { Console.WriteLine(Divide(1, b)); } Post Conditions These conditions inform on the finishing state of a method, properties of the return types, objects which should not have been modified etc. e.g. public void AddToCollection(Object o) { // ensure that the collection is not changed if there is a null reference Contract.EnsuresOnThrow<NullReferenceException> (Contract.OldValue(collection.Count) == collection.Count); // ensure that the parameter passed in has not been changed Contract.Ensures(Contract.OldValue(o) == o); collection.Add(o); } now if a NullReferenceException is thrown the code contract will ensure that nothing has been added or removed from the collection. Invariants Invariants are rules for objects which should only be modified in a certain way throughout their scope. e.g. collection must not be empty, a value must not be null, a counter must be non-negative… For these type of rules you need to mark your method with the attribute: [ContractInvariantMethod] Now inside the method we can set up our rules. [ContractInvariantMethod] private void Validate() { Contract.Invariant(collection != null); Contract.Invariant(collection.Count > 0); Contract.Invariant(collection.First() != null); } this will ensure that the collection is initialized, always has at least 1 non-null entry. Code contracts seem extremely useful and I look forward to writing code and debugging using them.
https://blogs.msdn.microsoft.com/davethompson/2010/01/12/code-contracts-with-net-4-0/
CC-MAIN-2017-22
refinedweb
531
53.61
Thank you all for the feedback we got on the previous call! Here comes another round of changes and adjustments. Your opinions and use cases are welcome. Java Statics and Inheritance We are going to improve interoperability with Java statics in Kotlin by allowing Kotlin subclasses to access static members of their superclasses: we will now be able to use constants, nested classes and static utilities defined up the inheritance tree. Same for members of companion objects of supertypes. We will also allow calling Java statics on Java classes that inherit them. What won’t be allowed is calling an inherited static member on a Kotlin class: Lateinit val’s With the help of our users we found that we have previously missed an unpleasant hole in the design of lateinit val: backing fields for such properties were not final, and thus Java code could modify them freely. That makes the val-ness of such properties vanish, because no code can ever assume any immutability on them, so we decided to take this feature back: from now on, only vars can be marked lateinit. We’ll keep thinking on use cases and improvements of this feature. Backing fields and custom setters An addition to the previously announced change in backing field syntax: if a property has a custom setter or is open, the setter may be reading the field before writing it, so there’s no syntax for initializing this property for the first time, unless it’s done upon declaration. So, we now require initializers for such properties: If we really need to initialize such a property in the constructor, we’d have to introduce a backing property: Type parameter declarations Historically, Kotlin had two forms of syntax for type parameter declarations for functions: Two is too many, so we decided to keep only the first one, because it places the declaration of T before its usages, which is easier to read and code completion will be more helpful. Visibilities of subclasses and elements of declarations It’s a technical requirement, but rather intuitive: if something is public, it should not, for example, expose a private type: From now on, public classes can’t extend private ones, and public members can’t expose private types. More formally: a supertype or an element of a declaration must be (effectively) at least as visible as the declaration/class itself. Marking result expressions This one is not really decided, and very debatable indeed, but it can’t be added after 1.0, so we have been thinking about it for some time: As some of you rightfully observed, it may be difficult at times to see what expressions are used as results of blocks or lambdas. We are considering prefixing such result expressions with the ^ symbol (or maybe some other prefix) to make them visible, in the following cases: - expression is a result of a multi-line block or lambda, AND - its type is not Unit, nor Nothing. Example (from the Kotlin code base): or Note that simply highlighting result expressions in the IDE is not really a solution: - It does not prevent accidentally changing results of lambdas or block by adding a harmless-looking println()at the end, - It doesn’t work on GitHub, in Terminal or anywhere outside the IDE We did a quick experiment on the Kotlin codebase, and added all the necessary prefixes: The diff above will give you a sense of what such code might look like. To give you an idea of how often this will occur in the code: we got 493 lines changed out of about 230’000 lines of Kotlin code (0.21%), and 233 files were changed out of 2190 Kotlin files we have in our code base (every 10th file). = Backing Field Synthetic Name = I see the niceness of synthetic name of the field, like “this” or “it” but for some reason it feels strange to add another special case here. = Result Expressions = “^” is pretty subtle, does not match its use, and I am not seeing the intended symbolism (“Hey lookup?”). How about either a keyword like “yield” or “emit” or a symbol matching closures like “<-“. I hope this is optional, so I only have to use it if I think my code needs it, for safety or clarity. thisis very different from itand field, because thisa keyword, and the other two are simple automatically defined variables. ^symbolizes an arrow pointing up. Being optional, it’s a lot less useful than if mandatory Andrey, until you explain ^ as “arrow up” I think only about power or XOR). Why not use plain old returnin such cases? Restrict user to place it always in places of this type. IDE will help to eliminate discomfort from writing looong word. And as you mention – this is very rare case – there will be no big problem without IDE and good compiler error message. returnalready has a meaning of “stop the execution of the current function, and return a value”, which is not the intended meaning here. Why not? Isn’t it exactly how it reads: “Execution ends here and X is the result” – same as ^ symbol, if I got it right. I am looking at it from the perspective of Anonymous classes in Java, which are, despite being a different thing, was often used as substitution for lambdas. Imagine I have Java class (Java 7) which uses some functional interfaces. I would have to use return, since it’s Java. Now, when I will migrate such class to Kotlin it would totally makes sense for me if return statement will stay as is for multi-line blocks. I don’t think your analogy is correct: anonymous classes (both in Java and Kotlin) have explicit function declarations inside, so there’s no confusion as of where a returnis returning from. With lambdas that may look exactly like blocks, it is often unclear. yield or emit sounds good IMO, but why not simply “return”? After all, the expression does return something. returnmeans more than just “mark last expression”, it alters control flow ^ looks bad. yield or emit more more better. please don’t make it scala This is also a question whether ^ should affect control flow or not. I already see (in my head) code which uses several ^ expressions within a same file without stopping the execution, which then would be arguably harder to follow. Sorry, within a same block, not file +1 for the result expression marks being optional. Maybe IDE should just suggest marking a result expression explicitly if the construct becomes too complex and may be ambiguous. But if they will be mandatory, then IMO tooling must mark an expression as a result one automatically when having typed a code block and provide a way to move the mark to a new expression below in a few keystrokes without editing too much. Please don’t use the “^” prefix, but another one. On the German keyboard for example, pressing ^ on its own does nothing, pressing it twice results in “^^”. My condolences to those Germans who have to manipulate bits in C/C++ or write manuals with Mac OS X shortcuts in them Jokes aside, I see your point. How is <-on german keyboards? Press ^ and then a space bar Result Expressions I’m not in favour of this. If your code needs this isn’t it a sign that you should probably refactor it and move some of your logic out into a function? I find code that uses lambdas is much more readable when the lambdas are small. Other languages with similar semantics manage without this, e.g. Ruby, Scala. It’s also pretty ugly. It is not just about readability. It allows the compiler to check that you have not made an error, or have an accidental change you didn’t intend. It would be like implying all returnand hoping you didn’t make a mistake in your code that is only verified by readability. We would never drop returnyet we imply something similar by implying emitor ^or <-Why one, but not the other? I’m not actually bothered about returneither. In Ruby and Scala returnis optional in function definitions as well as in lambdas. I like that approach and was disappointed Kotlin didn’t adopt it. I’ve never had an issue with it producing code that was hard to read or prone to bugs. That’s why I’m not in favour of this change; it adds a bit of extra complexity and ugliness to the language to solve a problem that I don’t have. And if I did have it I’d solve it with some simple refactoring that would improve my code anyway. Agree with Chris. I also don’t see the need for “marking result expression”. I think it makes the code uglier and more complex. If the code is too complicated to understand, it deserves to be extracted to one or more separate methods. “More formally: a supertype or an element of a declaration must be (effectively) at least as visible as the declaration/class itself.” Why is this a technical requirement? Isn’t defining a public class that extends a private/protected class perfectly valid code in Java? What are you gonna do with the Java interoperability requirement for this cases then? “…from now on, only vars can be marked lateinit. We’ll keep thinking on use cases and improvements of this feature.” Are you planning to provide an immutable-like property delegate, so that we can use it with vars then? It is. I don’t see an issue here. You mean something like assign-once? Well, it’s an option. It is. Then why is it a technical requirement in Kotlin? (just wondering) I don’t see an issue here. How would you extend a nested protected abstract static class defined in Java from Kotlin? (not very common, but it happens) You mean something like assign-once? Well, it’s an option. Yes. Commenting further on the first subject, sometimes you simply don’t want to expose a base class that you only define internally for reusing purposes, so you might want it to be private (or protected). PS: I know the use cases for this are not so common, so you could probably reply me with some statistics from you code base, but the thing is they do exist and have a justified reason to do so. Hey, just to remark, when I said “How would you extend a nested protected abstract static class defined in Java from Kotlin?”, I meant to said “How would you extend a nested protected abstract static class defined in Java from Kotlin, exposing the subclass as public?”. Because it makes things easier to implement, check and enforce. I would just extend it. I can’t extend it with a public class, but a use case for such a situation is yet to be presented. If it turns out that this limitation actually makes people’s lives harder, we can lift it. C# people have been living with these limitations for ages, and no one seems to care. Because it makes things easier to implement, check and enforce. I see, I guess that’s a good reason for taking that decision then, everybody wants you to release 1.0 soon. === Java Statics and Inheritance === OK, Great 😉 === Lateinit val’s === I’m very glad 😀 === Backing fields and custom setters === OK === Type parameter declarations === I didn’t even know the second syntax existed :p === Visibilities of subclasses and elements of declarations === Makes sense 😉 What about an overriden method that returns a private subclass that extends the type returned by the original method ? === Marking result expressions === Great, it would make the code more readable. I agree with msgile when he says that ^ is not expressive enough. I don’t think yield is a great choice because it is not the semantic of yield in many other languages that uses this keyword. I do very much like <- . Of course, MalePeoples extends Peoples (sorry) : If the type is less visible than the method, it is not allowed. Why not simply use “=” ? “=” makes more sense than “^” for marking result expressions. Definitely don’t like the ^, it is giving me bad flash backs to lambdas in Objective-C Marking result expressions with ^ looks extremely ugly and unintuitive. Please stick to some smart keyword. ‘return’ is ok in this case. Backing property is a real sadness. I was hoping that in Kotlin we could avoid such ugly code (with different types for getter/setter & field). And I liked backing field syntax very much. I’ll miss it. +1 for the result expression marks being optional. I was wondering what is the reason for backing property dollar sign syntax removal other than string interpolation incompatibility? That one would be solved by just using $$propertyconstruct as it seems to me at first glance. Another reason is that it’s just too much syntax for use cases that are extremely rare: about 10 (ten) of them in a >1MLOC github corpus of Kotlin Marking result expressions: Very useful indeed! Why the choice of a prefixed symbol, instead of a separate symbol/word at the beginning of the line? Wouldn’t that (a separate symbol/word) make it easier to searching/editing those marks in the IDE? Marking result expressions: also if you make it mandatory, I see the point of all the people that complain about. I am in favor of making it mandatory, but you really need help from the IDE in order not to make this syntax annoying I really like the idea of marking result expressions, it would make code more readable, especially in big lambdas. Although I’m not sure about ^, <-would look more appropriate here. lateinit val While I understand the reason for this change I found lateinit valvery convenient because I knew that these fields were always set through Dagger dependency injection and I didn’t have to perform null-safety checks on them. Does this mean that I now have to write someDependency!!.doSomething()again for all lateinit varfields? Couldn’t you allow lateinit valwith a special compiler flag for people who know what they are doing? 😉 You can always fork the compiler if you want it really badly Otherwise our policy is: have a language feature for all users or don’t have it at all. Flags are for fine-tuning more low-level things. Yes, it does, and no, you couldn’t be sure it was necessarily Dagger changing them: any Java code could. Result expressions The ^ syntax is not very beautiful Please keep it as it is or at least make it optional. Just using the returnkeyword would also be ok for me. But regardless what keyword you use, please make this one optional. I agree with all but the last thing. For the expressions in a lambda, if it’s more than one line, the return statement should be there to make it clear. This is how java does their lambdas and I think it’s clearest. I’m not sure how it should be done in other expressions (if, when, try), but there’s no good reason to not use return after multiple lines in a lambda. There is: a lambda looks no different from a block in if/try/when, and there are many functions like withwhose semantics are block-like too; so, we’d have to require a returnin ifif we required it in a multiline lambda. Huh. Never thought about that… Well, if you’re going to use a symbol, I’d highly prefer = or :, if possible. I know they’re used elsewhere, so maybe it’s not possible without potentially confusing the compiler. “out” could also potentially be a decent keyword as it’s already reserved (at least for some uses), it’s short, and it lives up to the meaning pretty well. As what the issue with “^” is concerned I would prefer to simply go with “return”. As a former Smalltalk developer I like the Smalltalk-style “^” return character, but I would nevertheless prefer a solution that is transparent for the developer. Is there a need the developer needs to be aware that a sliglty different case applies? If not, free the developer from having to deal with it. I would suggest not to go with yield. It always reminds me of Thread.yield. Also in several other languages yield means to give up the CPU voluntarily. In that way I also found for..yield in Scala confusing at first sight. ^ feels obscure and not in line with Kotlin’s philosophy. If you think this feature is necessary, you should pay the (keyword) price. With returnbeing mandatory for multi-line methods, I wonder why a return(-like) keyword would be considered too heavy-weight for a multi-line lambda. Placing generic type arguments after “fun” is inconsistent with type declarations and function invocations. That’s why I always preferred placing them after the function name. Extension functions with generic receiver (as used in the code example) don’t seem like a good justification; I believe most users will never write such functions. Actually, such functions are the primary reason that I write extension functions. If you make a type that is contravariant but there’s some sort of composition function that requires covariance, the only way to gave that function is as a separate finction, whether extension or not. Usually the extension version is much more fluent. This example has come up in multiple libraries I’ve worked on and seen. Also placing it after fun is more congruent with how it’s done in Java. Obviously, we don’t necessarily want it to be like Java, but I was able to guess that syntax without ever having to look it up because it was natural to me. = Inheritance of statics= It’s important to note that Java’s handling of statics is inconsistent with itself. Static methods defined on classes can be invoked in subtypes without the class name. But static methods defined in interfaces (introduced in Java 8) must be always be qualified with the name of the defining interface: Is Kotlin going to inherit this inconsistency from Java? Or is it going to allow interface methods to be invoked without the class name? I think it would be better to require the class name when calling static methods form supertypes. If you want to invoke the method without using the class name it is possible to explicitly import the method anyway. We are currently discussing the options regarding superinterfaces. Our user feedback shows that requiring qualified usages of statics is an annoying obstacle for Java interop because of how some popular libraries are designed But can’t that easily be fixed with an import? I use a lot of static methods every day, e.g. Collectors.toList(), Assert.assertEquals()and Mockito.mock(). Those methods are almost always statically imported. IDEs make that extremely easy. What’s the problem with importing a static method you want to use without qualifying it? Is it worth adding complexity to the language to avoid a few imports? Then the rules for invoking static methods would be extremely simple and consistent. Judging by the user feedback, we decided that it is worth it. Just out of interest, which libraries are we talking about? Visibility of subclasses: Not sure what this is about and why it absolutely has to be prohibited. Shouldn’t it be possible to have a shared base class with internal functionality but return a number of derived public classes from functions? Result expressions: PLEASE PLEASE don’t use ^. <- is not really better. It would be really nice if Kotlin did not end up with Scala’s mess of random characters with magic meanings. Why not have a proper keyword like emit or supply for that. It could be optional for short statements (like if it’s the only statement in that scope). I never understood, why it is considered to be more readably to just write the name of a variable to return its value. Especially, as it only has that meaning if it really is the last statement in the control flow. Statics:/@Chris Kent: This is my main pain point in Kotlin because I’m doing a lot of TDD. Right now, it’s just not enjoyable to use popular libraries like AssertJ: There are assertThat methods that are in org.assertj.core.api.StrictAssertions, others come from org.assertj.core.api.Assertions (Assertions is derived from StrictAssertions). You can only import assertThat statically from one class. You always have to prefix the other one with the correct class. What makes it even worse: you have to know exactly which types are passed to the assertThat() static method. This is because the type of the first parameter determines whether you’re using assertThat from the base class or from the derived class. This is ugly and a pain while coding, especially because compiler errors for this aren’t really pointing you in the right direction. Please note: Even if that behavior stays that way, it would be a major relieve if the compiler searched in base classes for suitable static methods to print a helpful error message (something like “static method xy in inheritance hierarchy is not accessable through class abc in Kotlin.”). I know that static is a Java concept and not really a Kotlin concept. Even if you built a Kotlin facade for that one library, you would only have a solution for one particular case. Sounds like a lot of (unnecessary) busy work to me… really like what you guys are doing with Kotlin. The only two cents I’d like to offer is this. I remember watching a video of either Crockford or Stroustrup (can’t remember which) but they were saying how the biggest mistake Ritchie made when creating C was that he didn’t care whether a functions opening brace went at the end of the line or on the next line and so he allowed both. Thus he forever opened the door for opinions on the matter and devs have been going to war and complaining ever since. So please be decisive. Keep up the good work! I would love to live in the world where opinions on formatting are eliminated by compilers. However, we feel that building formatting style into a language would be too much of a threshold for adopters I think that ^ should be replaced by return. And anything after return should be error (unreachable code, or something like that).
https://blog.jetbrains.com/kotlin/2015/09/call-for-feedback-java-statics-result-expressions-and-more/
CC-MAIN-2018-43
refinedweb
3,781
61.87
Mysensor-ing a thermostatic valve Hi, I need to control several heaters in my house. I thought about z-wave thermostatic valves such as Danfoss or Stella-Z.... but, well... the price is a bitter pill. And while thinking about alternatives possible solution, i came across this page: The clevers guys who didi this hack really desserve warm congratulations. I am impressed. And i wonder... Do you think we could fit a nrf24 in the casing instead of the rfm12 ? And do you think we could flash a mysensor sketch in the atmega 169 ? I don't feel expert enough to start this adventure by myself... But if someone is willing to test with me, i'll be glad to help. - gregl Hero Member last edited by Some pretty cool hacking going on there! Q: How/where do you want to control/use the data? Why not just use the hack as they have done and bring the received data into the mysensors ( or directly into your end solution) i think to do via mysensors would cost more to do as its more complicated network thus needs more software mods I have ordered 2 HR25 valves. these valves are an update of the HR20 with a atmega329 instead of 169. My idea is the following: The easy part (widely documented). - connect an ISP connecto to the PCB. - connect an NRF24L01 to the PCB The harder part: - use the atmega329 with arduino boatloader and framework: Probably doable, but not natively supported. Might need some tweaking in the bootloader sources. - flash the atmega with a simple mysensor sketch as proof of concept. Then, the development step by step: - First a simple mysensor actuator wich will directly operate the valve motor. - Following step: get the internal temperature sensor working - Then: integrate PID control loop. -> The setting point would be given over the air by MySensor controller - Then use the internal LCD and buttons for local control. The LCD library of openHR (AVR C) is probably a good starting point. Why do you want to use the bootloader? For ota updating of the code? If not then the bootloader is not necessary. You have the ISP available and the tools to access it. That's right. I don't need the bootloader itself. edit: This a memo for myself : Atmega3290 is supported and is pretty close of atmega 329. Hey guys, Some news, I have received 2 HR25 TRV. I have tried to connect an ISP header to one of them. The good news is: MISO/MOSI/SCK/GND/VCC./and RESET are easily available on the upper part of the PCB. (SPI pins are used for the 3 buttons and power and reset are already available on the Jtag header. The bad news is that it is quite easy to damge the pcb while welding. I have had a problem with two coppers traces that went away of the PCB... probably because of overheat. Theses traces were so thin, that until now i hav not been able to weld a jumper to repair them. However , i will try again tonight. Regarding the software, i think i have been able to set-up the arduino IDE to compile for an atmega329p target. I have to test it to validate the process. Hi there, I have been able to make some progress. The cutted traces have been repaired with some thin wire. However, this board is definitely damaged. I think this one will remain my "guinea pig". after repairing the traces, i have tested the isp connection: sorg@samsung-ubuntu:~$ avrdude -c usbasp -p m329p avrdude: warning: cannot set sck period. please check for usbasp firmware update. avrdude: AVR device initialized and ready to accept instructions Reading | ################################################## | 100% 0.00s avrdude: Device signature = 0x1e950b avrdude: safemode: Fuses OK (E:FD, H:91, L:62) avrdude done. Thank you. No it's time to set-up the arduino IDE for compiling... in my Arduino/hardware/boards.txt, i have hadded the following lines: ############################################################## hr25.name=HoneyWell HR25 Thermostat/ programming with USBasp hr25.upload.using=USBasp hr25.upload.protocol=usb hr25.upload.maximum_size=30720 hr25.upload.speed=38400 hr25.build.mcu=atmega329p hr25.build.f_cpu=16000000L hr25.build.core=arduino hr25.build.variant=hr25 Then, i have created a new folder named "hr25" in Arduino/hardware/variants , and in this folder i create a file named pins_arduino.h and put the following inside: /* for ATmega329p as in the HR25 thermostat */ #ifndef Pins_Arduino_h #define Pins_Arduino_h #include <avr/pgmspace.h> #define NUM_DIGITAL_PINS 53 #define NUM_ANALOG_INPUTS 8 #define analogInputToDigitalPin(p) ((p < 8) ? (p) + 40 : -1) #define digitalPinHasPWM(p) ((p) == 12 || (p) == 14 || (p) == 15 || (p) == 13) static const uint8_t SS = 8; //PB0 static const uint8_t MOSI = 10; //PB2 static const uint8_t MISO = 11; //PB3 static const uint8_t SCK = 9; //PB1 static const uint8_t SDA = 37; //PE5 static const uint8_t SCL = 36; //PE4 // #define LED_BUILTIN 13 static const uint8_t A0 = 40; //PF0 static const uint8_t A1 = 41; //PF1 static const uint8_t A2 = 42; //PF2 static const uint8_t A3 = 43; //PF3 static const uint8_t A4 = 44; //PF4 static const uint8_t A5 = 45; //PF5 static const uint8_t A6 = 46; //PF6 static const uint8_t A7 = 47; //PF7 // Not sure of that... atmega168 use PCICR whereas the atmegaxx9 seems to use a EIMSK register... Will it work ? #define digitalPinToPCICR(p) ( (((p) >= 32) && ((p) <= 39)) || (((p) >= 8) && ((p) <= 15)) ? (&EIMSK) : \ ((uint8_t *)0) ) ) #define digitalPinToPCICRbit(p) ( (((p) >= 32) && ((p) <= 39)) ? 4 : \ ( (((p) >= 8) && ((p) <= 15)) ? 5 : \ ((uint8_t *)0) ) ) #define digitalPinToPCMSK(p) ( (((p) >= 32) && ((p) <= 39)) ? (&PCMSK0) : \ ( (((p) >= 8) && ((p) <= 15)) ? (&PCMSK1) : \ ((uint8_t *)0) ) ) #define digitalPinToPCMSKbit(p) ( (((p) >= 32) && ((p) <= 39)) ? ((p) - 32) : \ ( (((p) >= 8) && ((p) <= 15)) ? ((p) - 8) : \ ((uint8_t *)0) ) ) #define digitalPinToInterrupt(p) ((p) == 24 ? 0 : NOT_AN_INTERRUPT) , (uint16_t) &DDRC, (uint16_t) &DDRD, (uint16_t) &DDRE, (uint16_t) &DDRF, (uint16_t) &DDRG, }; const uint16_t PROGMEM port_to_output_PGM[] = { NOT_A_PORT, (uint16_t) &PORTA, (uint16_t) &PORTB, (uint16_t) &PORTC, (uint16_t) &PORTD, (uint16_t) &PORTE, (uint16_t) &PORTF, (uint16_t) &PORTG, }; const uint16_t PROGMEM port_to_input_PGM[] = { NOT_A_PORT, (uint16_t) &PINA, (uint16_t) &PINB, (uint16_t) &PINC, (uint16_t) &PIND, (uint16_t) &PINE, (uint16_t) &PINF, (uint16_t) &PING, }; const uint8_t PROGMEM digital_pin_to_port_PGM[] = { PA, // 0 // LCD_Com0 PA, // LCD_Com1 PA, // LCD_Com2 PA, PA, // LCD_Seg0 PA, // LCD_Seg1 PA, // LCD_Seg2 PA, // LCD_Seg3 PB, // 8 - NSS - PCINT8 // Contact - Thermostat installed on the valve PB, // 9 - SCK - PCINT9 // KEY °C PB, // 10 - MOSI - PCINT10 // KEY Prog PB, // 11 - MISO - PCINT11 // KEY Auto/Manu PB, // 12 - PWM - PCINT12 // H-Bridge 1 PB, // 13 - PWM - PCINT13 // Incremental coder A PB, // 14 - PWM - PCINT14 // Incremental coder B PB, // 15 - PWM - PCINT15 // H-Bridge 2 PC, // 16 // LCD_Seg12 PC, // 17 // LCD_Seg11 PC, // 18 // LCD_Seg10 PC, // 19 // LCD_Seg9 PC, // 20 // LCD_Seg8 PC, // 21 // LCD_Seg7 PC, // 22 // LCD_Seg6 PC, // 23 // LCD_Seg5 PD, // 24 - INT0 PD, // 25 // LCD_Seg21 PD, // 26 // LCD_Seg20 PD, // 27 // LCD_Seg19 PD, // 28 // LCD_Seg18 PD, // 29 // LCD_Seg17 PD, // 30 // LCD_Seg16 PD, // 31 // LCD_Seg15 PE, // 32 - RXD - PCINT0 // Header Pin7 PE, // 33 - TXD - PCINT1 // Header Pin6 PE, // 34 - PCINT2 // Header Pin2 PE, // 35 - PCINT3 // Activate reflective sensor PE, // 36 - SCL - PCINT4 // Reflective sensor output PE, // 37 - SDA - PCINT5 PE, // 38 - PCINT6 PE, // 39 - PCINT7 PF, // 40 - A0 PF, // 41 - A1 PF, // 42 - A2 // NTC measurement. PF, // 43 - A3 // Activate NTC divider bridge PF, // 44 - A4 - TCK // Header Pin4 PF, // 45 - A5 - TMS // Header Pin3 PF, // 46 - A6 - TDO // Header Pin5 PF, // 47 - A7 - TDI // Header Pin8 PG, // 48 // LCD_Seg14 PG, // 49 // LCD_Seg13 PG, // LCD_Seg4 PG, // H-Bridge 3 PG, // 52 // H-Bridge 4 NOT_A_PORT, NOT_A_PORT, NOT_A_PORT, }; const uint8_t PROGMEM digital_pin_to_bit_mask_PGM[] = { _BV(0), /* 0, port A */ _BV(1), _BV(2), _BV(3), _BV(4), _BV(5), _BV(6), _BV(7), _BV(0), /* 8, port B */ _BV(1), _BV(2), _BV(3), _BV(4), _BV(5), _BV(6), _BV(7), _BV(0), /* 16, port C */ _BV(1), _BV(2), _BV(3), _BV(4), _BV(5), _BV(6), _BV(7), _BV(0), /* 24, port D */ _BV(1), _BV(2), _BV(3), _BV(4), _BV(5), _BV(6), _BV(7), _BV(0), /* 32, port E */ _BV(1), _BV(2), _BV(3), _BV(4), _BV(5), _BV(6), _BV(7), _BV(0), /* 40, port F */ _BV(1), _BV(2), _BV(3), _BV(4), _BV(5), _BV(6), _BV(7), _BV(0), /* 48, port G */ _BV(1), _BV(2), _BV(3), _BV(4), NOT_A_PIN, NOT_A_PIN, NOT_A_PIN, }; const uint8_t PROGMEM digital_pin_to_timer_PGM[] = { NOT_ON_TIMER, /* 0 - PA0 */ NOT_ON_TIMER, /* 1 - PA1 */ NOT_ON_TIMER, /* 2 - PA2 */ NOT_ON_TIMER, /* 3 - PA3 */ NOT_ON_TIMER, /* 4 - PA4 */ NOT_ON_TIMER, /* 5 - PA5 */ NOT_ON_TIMER, /* 6 - PA6 */ NOT_ON_TIMER, /* 7 - PA7 */ NOT_ON_TIMER, /* 8 - PB0 */ NOT_ON_TIMER, /* 9 - PB1 */ NOT_ON_TIMER, /* 10 - PB2 */ NOT_ON_TIMER, /* 11 - PB3 */ TIMER0A, /* 12 - PB4 */ TIMER1A, /* 13 - PB5 */ TIMER1B, /* 14 - PB6 */ TIMER2A, /* 15 - PB7 */ NOT_ON_TIMER, /* 16 - PC0 */ NOT_ON_TIMER, /* 17 - PC1 */ NOT_ON_TIMER, /* 18 - PC2 */ NOT_ON_TIMER, /* 19 - PC3 */ NOT_ON_TIMER, /* 20 - PC4 */ NOT_ON_TIMER, /* 21 - PC5 */ NOT_ON_TIMER, /* 22 - PC6 */ NOT_ON_TIMER, /* 23 - PC7 */ NOT_ON_TIMER, /* 24 - PD0 */ NOT_ON_TIMER, /* 25 - PD1 */ NOT_ON_TIMER, /* 26 - PD2 */ NOT_ON_TIMER, /* 27 - PD3 */ NOT_ON_TIMER, /* 28 - PD4 */ NOT_ON_TIMER, /* 29 - PD5 */ NOT_ON_TIMER, /* 30 - PD6 */ NOT_ON_TIMER, /* 31 - PD7 */ NOT_ON_TIMER, /* 32 - PE0 */ NOT_ON_TIMER, /* 33 - PE1 */ NOT_ON_TIMER, /* 34 - PE2 */ NOT_ON_TIMER, /* 35 - PE3 */ NOT_ON_TIMER, /* 36 - PE4 */ NOT_ON_TIMER, /* 37 - PE5 */ NOT_ON_TIMER, /* 38 - PE6 */ NOT_ON_TIMER, /* 39 - PE7 */ NOT_ON_TIMER, /* 40 - PF0 */ NOT_ON_TIMER, /* 41 - PF1 */ NOT_ON_TIMER, /* 42 - PF2 */ NOT_ON_TIMER, /* 43 - PF3 */ NOT_ON_TIMER, /* 44 - PF4 */ NOT_ON_TIMER, /* 45 - PF5 */ NOT_ON_TIMER, /* 46 - PF6 */ NOT_ON_TIMER, /* 47 - PF7 */ NOT_ON_TIMER, /* 48 - PG0 */ NOT_ON_TIMER, /* 49 - PG1 */ NOT_ON_TIMER, /* 50 - PG2 */ NOT_ON_TIMER, /* 51 - PG3 */ NOT_ON_TIMER, /* 52 - PG4 */ NOT_ON_TIMER, /* 53 - PG5 dose not exist*/ NOT_ON_TIMER, /* 54 - PG6 dose not exist*/ NOT_ON_TIMER /* 55 - PG7 dose not exist*/ }; #endif // These serial port names are intended to allow libraries and architecture-neutral // sketches to automatically default to the correct port name for a particular type // of use. For example, a GPS module would normally connect to SERIAL_PORT_HARDWARE_OPEN, // the first hardware serial port whose RX/TX pins are not dedicated to another use. // // SERIAL_PORT_MONITOR Port which normally prints to the Arduino Serial Monitor // // SERIAL_PORT_USBVIRTUAL Port which is USB virtual serial // // SERIAL_PORT_LINUXBRIDGE Port which connects to a Linux system via Bridge library // // SERIAL_PORT_HARDWARE Hardware serial port, physical RX & TX pins. // // SERIAL_PORT_HARDWARE_OPEN Hardware serial ports which are open for use. Their RX & TX // pins are NOT connected to anything by default. #define SERIAL_PORT_MONITOR Serial #define SERIAL_PORT_HARDWARE Serial #endif DISCLAIMER: The previous file has been created by myself by mixing several sources.I am notan arduino nor AVR expert, and I do not guarantee that everything is correct. Specificaly, i have some doubt about the pin change interupts... to be tested and improved. So once this file has been created, i can start arduino and create a simple "blink" sketch. /* Blink Turns on an LED on for one second, then off for one second, repeatedly. This example code is in the public domain. */ int led = 34; // pin 34 = PE2, a pin exposed on the connector of the TRV. // } Menu Tool / Card Type, i select the HR25 entry. Then, i select file/ upload with programmer and voila ! The sketch compile and is loaded through the isp pins. Then i can see the PE2 pin blinking... Howeer, the frequecy is not correct . Instead of blinking every 1s I have a blink every 8seconds... i guess i have a problem of frequency scaling.... to be investigated but it's a good start ! The fuses of th HR25 atmega are: E:FD, H:91, L:62 If i trust the AVR duse calc: This means that the 8-divider is applied on the clock... this might explain the behavious seen above. So i changed the CKDIV8 bit: avrdude -c usbasp -p m329p -U lfuse:w:0xe2:m and voila the pin 34 , blink à 1Hz. Frequency problem solved. OK, some news.... I have connected a rf24 module to the external header, hoping to drive it with softSPI (in order to limit internal modification of the valve). While compiling a simple mysensor sketch, i can see several issues with the libraries Lowpower.h and PinchangeInt.h.... these libraries are very MCU-dependant and are not compiling. It is probably doable to modify them in order to upport the 329p, but it is out of my field of competence. I don't really understand the internals of theses libs. If someone can help, it will be apreciated. These dependencies has been removed in the development branch. These dependencies has been removed in the development branch. Is that sure ? I have not tried to compile dev branch yet, but i still find a reference to lowpower in Mysensor.h : #ifdef __cplusplus #include <Arduino.h> #include <SPI.h> #include "utility/LowPower.h" #endif Sorry.. I only read PinChangeInt. Still a Lowpower dependecy if you want to use the sleep functions. Might be good to make this as a "driver" as well in the future. But it will be hard because it is relatively hardware dependent if you want to wake up on external triggers. @hek Sure. Modifying the lowpower library to manage the 329p is definitely doable but i need to understand the internals and read the MCU datasheet. Time consuming but doable. Other general question: At this stage, is the "dev" branch compatible with a "stable" gateway and "stable" nodes ? Yes, it should. @Sorg Hello, I am really interested in this topic, do you have any new informations or progress in this project ? Hello Lucas, Last year , i did not achieve to run a sketch on the HR25. I remain convinced that it should be doable, but i am not enough aware of the inner details of the MCU / Arduino. also, my job has become a lot more time consuming, and i have not been able to spend time on this project for several months. If someone want to take over from here, i would be very happy.
https://forum.mysensors.org/topic/966/mysensor-ing-a-thermostatic-valve
CC-MAIN-2018-30
refinedweb
2,238
62.48
Do we have framework to do this kind of looking in ZK? I mean, you said " create a new InterProcessSemaphoreMutex which handles the locking mechanism.". This feels that we would have to continue opening and closing this transaction manually, which is what causes a lot of our headaches with transactions (it is not MySQL locks fault entirely, but our code structure). On Mon, Dec 18, 2017 at 7:47 AM, Marc-Aurèle Brothier <marco@exoscale.ch> wrote: > We added ZK lock for fix this issue but we will remove all current locks in > ZK in favor of ZK one. The ZK lock is already encapsulated in a project > with an interface, but more work should be done to have a proper interface > for locks which could be implemented with the "tool" you want, either a DB > lock for simplicity, or ZK for more advanced scenarios. > > @Daan you will need to add the ZK libraries in CS and have a running ZK > server somewhere. The configuration value is read from the > server.properties. If the line is empty, the ZK client is not created and > any lock request will immediately return (not holding any lock). > > @Rafael: ZK is pretty easy to setup and have running, as long as you don't > put too much data in it. Regarding our scenario here, with only locks, it's > easy. ZK would be only the gatekeeper to locks in the code, ensuring that > multi JVM can request a true lock. > For the code point of view, you're opening a connection to a ZK node (any > of a cluster) and you create a new InterProcessSemaphoreMutex which handles > the locking mechanism. > > On Mon, Dec 18, 2017 at 10:24 AM, Ivan Kudryavtsev < > kudryavtsev_ia@bw-sw.com > > wrote: > > > Rafael, > > > > - It's easy to configure and run ZK either in single node or cluster > > - zookeeper should replace mysql locking mechanism used inside ACS code > > (places where ACS locks tables or rows). > > > > I don't think from the other size, that moving from MySQL locks to ZK > locks > > is easy and light and (even implemetable) way. > > > > 2017-12-18 16:20 GMT+07:00 Rafael Weingärtner < > rafaelweingartner@gmail.com > > >: > > > > > How hard is it to configure Zookeeper and get everything up and > running? > > > BTW: what zookeeper would be managing? CloudStack management servers or > > > MySQL nodes? > > > > > > On Mon, Dec 18, 2017 at 7:13 AM, Ivan Kudryavtsev < > > > kudryavtsev_ia@bw-sw.com> > > > wrote: > > > > > > > Hello,: <> > > > > > > > > > > > > > > > > -- > > > Rafael Weingärtner > > > > > > > > > > > -- > > With best regards, Ivan Kudryavtsev > > Bitworks Software, Ltd. > > Cell: +7-923-414-1515 > > WWW: <> > > > -- Rafael Weingärtner
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201712.mbox/%3CCAG97radpcoG7x-WzEO9LwkQLpQgWsROcqfzSqDcrAYFU9Oi5ng@mail.gmail.com%3E
CC-MAIN-2018-47
refinedweb
421
71.04
Created on 2014-09-19 13:56 by juj, last changed 2015-06-21 09:47 by berker.peksag. On Windows, write a.py: import subprocess def ccall(cmdline, stdout, stderr): proc = subprocess.Popen(['python', 'b.py'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) proc.communicate() if proc.returncode != 0: raise subprocess.CalledProcessError(proc.returncode, cmdline) return 0 # To fix subprocess.check_call, uncomment the following, which is functionally equivalent: # subprocess.check_call = ccall subprocess.check_call(['python', 'b.py'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) print 'Finished!' Then write b.py: import sys str = 'aaa' for i in range(0,16): str = str + str for i in range(0,2): print >> sys.stderr, str for i in range(0,2): print str Finally, run 'python a.py'. The application will hang. Uncomment the specicied line to fix the execution. This is a documented failure on the python subprocess page, but why not just fix it up directly in python itself? One can think that modifying stdout or stderr is not the intent for subprocess.check_call, but python certainly should not hang because of that. The same observation applies to subprocess.call() as well. > This is a documented failure on the python subprocess page, > but why not just fix it up directly in python itself? If you want to discard the output; you could use: check_call(args, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT) check_call() passes its parameters to Popen() as is. The only parameter it knows about is args that is used to raise an exception. Do you want check_call() to inspect the parameters and to do something about stdout=PIPE, stderr=PIPE? Where "something" could be: - nothing -- the current behavior: everything works until the child process produces enough output to fill any of OS pipe buffers as documented - call proc.communicate() -- store (unlimited) output in memory instead of just hanging: everything works slowly until the system runs out of memory - replace with DEVNULL -- "do what I mean" behavior: inconsistent with the direct Popen() call - raise ValueError with informative error message (about DEVNULL option) after issueing a DeprecationWarning for a release: it fixes this particular misuse of check_call(). Are there other common "wrong in every case" check_call() parameters? Very good question akira. In one codebase where I have fixed this kind of bug, see the intended usage by the original author had certainly been to throw in a PIPE just to mute both stdout and stderr output, and there was no intent to capture the results or anything. I think passing PIPE to those is meaningless, since they effectively behave as "throw the results away", since they are not returned. Throwing an exception might be nice, but perhaps that would break existing codebases and therefore is not good to add(?). Therefore I think the best course of action would be to do what is behaviorally as developer intends: "please treat as if stdout and stderr had been captured to a pipe, and throw those pipes away, since they aren't returned.", so your third option, while inconsistent with direct Popen(), sounds most correct in practice. What do you think? I am not currently aware of other such cases, although it would be useful to go through the docs and recheck the commit history of when that documentation note was added in to see if there was more related discussions that occurred. > What do you think? I would prefer to deprecate PIPE argument for subprocess.call(): issue DeprecationWarning in 3.5 and raise ValueError in 3.6+ I've uploaded a patch that issues the warning. Hmm, that path does it for stdout=PIPE in subprocess.call only? It could equally apply to stderr=PIPE in subprocess.call as well, and also to both stdout=PIPE and stderr=PIPE in subprocess.check_call? @juj: DeprecationWarning is generated if PIPE is passed to call() as any positional or keyword argument in particular stdin, stdout, stderr. It also applies to check_call() that uses call() internally. The first place to warn uses about dangerous function calls is the documentation, and your patch doesn't touch the documentation. You can for example suggest to use check_output(), getstatusouptut() or getoutput(). Victor, the message in my patch is copied almost verbatim from the current subprocess' documentation [1] [1] People use `call(cmd, stdout=PIPE)` as a *broken* way to suppress output i.e., when they actually want `call(cmd, stdout=DEVNULL)` The issue with `call(cmd, stdout=PIPE)` that it *appears* to work if cmd doesn't produce much output i.e., it might work in tests but may hang in production. It is unrelated to check_output(), getstatusouptut() or getoutput(). This issue still reads open, but there has not been activity in a long time. May I ask what is the latest status on this? Also, any chance whether this will be part of Python 2.x? I agree with the deprecation idea. The parameter checking logic doesn’t seem right though; see Reitveld. Also, I would have made the warning specify exactly what is deprecated, in case the stack trace doesn’t identify the function, which I think would always happen with check_call(). Also be less specific about future changes, unless there is clear consensus to make this change in 3.6. Maybe something like: "Passing PIPE to call() and check_call() is deprecated; use DEVNULL instead to discard output or provide empty input" Since 3.5 is now in the beta phase, would adding this deprecation be allowed, or should it be deferred to the 3.6 branch? Also, I’m not sure what the policy is for Python 2. Maybe it would be acceptable as a Python 3 compatibility warning, triggered by the “python2 -3” option; I dunno. 3.5 have `subprocess.run`[1] that is much saner to use, and what you want to use in most cases. `call` and `check_call` docs even mention run. [1]: Martin, thank you for the review. As Matthias mentioned, the introduction of subprocess.run() perhaps deprecates this issue: old api should be left alone to avoid breaking old code, new code should use new api, those who need old api (e.g., to write 2/3 compatible code) are expected to read the docs that already contain necessary warnings).
http://bugs.python.org/issue22442
CC-MAIN-2016-44
refinedweb
1,036
57.27
dup() Duplicate a file descriptor Synopsis: #include <unistd.h> int dup( int filedes ); Since: BlackBerry 10.0.0 Arguments: - filedes - The file descriptor that you want to duplicate. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The dup() function duplicates the file descriptor specified by filedes. The new file descriptor refers to the same open file descriptor as the original, and shares any locks. The new file descriptor also: - references the same file or device - has the same open mode (read and/or write) - has an identical file position to the original (changing the position with one descriptor results in a changed position in the other). Changing the file position with one descriptor results in a changed position for the other. Calling: dup_filedes = dup( filedes ); is the same as: dup_filedes = fcntl( filedes, F_DUPFD, 0 ); Errors: - EBADF - The file descriptor, filedes, isn't a valid. - EMFILE - There are already OPEN_MAX file descriptors in use. - ENOSYS - The dup() function isn't implemented for the filesystem specified by filedes. = dup( filedes ); if( dup_filedes != -1 ) { /* process file */ /* ... */ close( dup_filedes ); } close( filedes ); return EXIT_SUCCESS; } return EXIT_FAILURE; } Classification: Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/d/dup.html
CC-MAIN-2014-41
refinedweb
216
59.09
. I followed up by asking, “How do you tell the difference between a good and bad MEAP?” He answered: - asked Troy about Sky Technologies’ strategy of using an SAP “Innerware” architecture for their MEAP. He responded that Sky Technologies was given a namespace inside of SAP by SAP to integrate their SkyMobile MEAP. It was then certified by SAP and enables SAP to have complete transactional visibility to mobile transactions. The “innerware” strategy also enables them to utilize and maximize SAP’s integration technologies including SAP NetWeaver. Many other mobile software companies choose to duplicate SAP functionality in external third party middleware which adds unnecessary layers of complexity.: Mobile Expert Interview Series: PriceWaterhouseCoopers’ Ahmed El Adl, PhD Mobile Expert Interview Series: Nokia’s John Choate Mobile Expert Interview Series – Jane and Keelin Glendon of HotButtons Visibility of the SAP transactions from (intelligent) mobile devices is good. But what happens if some of these transactions fail because the SAP System does not accept the data for some business reason (i.e. no techical issue). What happens if this leads to subsequent denial of transactions from the mobile device? Who still knows what the original user wanted to do and how can this be resolved in a feasible manner? For me this means it should be the goal to have only small time slices with no connectivity and to synchronize the changes as soon as there is connectivity again. In case some (critical) transactions are not accepted by the SAP system the user should be presented the error messages so that he can hopefully resolve the issue. I know this is hard to provide. — Bernhard Knoblauch
https://blogs.sap.com/2010/04/21/mobile-expert-interview-series-troy-oconnor/
CC-MAIN-2018-13
refinedweb
274
51.78
Chapter 3: jQuery UI Widgets Introduction. - Addressing typical challenges, such as browser compatibility issues. - Providing consistency for Ajax interactions, animations, and events. - Assisting in creating a maintainable code base through modularity. A concept that is central to the visual parts of jQuery UI is the widget. According to the official jQuery UI project, jQuery UI "provides abstractions for low-level interaction and animation, advanced effects and high-level, themeable widgets, built on top of the jQuery JavaScript Library, that you can use to build highly interactive web applications." Widgets are objects attached to page elements that supply services for managing lifetime, state, inheritance, theming, and communication with other widgets or JavaScript objects. One of the most valuable aspects of jQuery is that extensibility is built in and well defined. This extensibility is accomplished through the construction of jQuery plug-ins. Even though they have a number of extra features in addition to those in a typical jQuery plug-in, it's important to know that a widget is a jQuery plug-in. This may not be obvious because a widget is defined differently, but they are used the same way you use official jQuery methods and most custom plug-ins. Sometimes a plug-in is sufficient and other times a widget is more appropriate. When you need to apply behavior or state to individual elements and need to communicate between elements, widgets provide a number of capabilities you would otherwise have to write yourself. This chapter illustrates these capabilities. See the "Further Reading" section at the end of the chapter for more information about jQuery plug-ins and how to author them. In this chapter you will learn: -. The technologies discussed in this chapter are jQuery Plug-ins and the jQuery UI Widget Factory. The code examples used here largely come from the Widget QuickStart included with Project Silk. Widget Fundamentals If you know how to use jQuery, you know how to use a widget. In practical terms, a jQuery UI widget is a specialized jQuery plug-in. Using plug-ins makes it easy to apply behavior to the elements they are attached to. However, plug-ins lack some built-in capabilities, such as a way to associate data with its elements, expose methods, merge options with defaults, and control the plug-in's lifetime. Widgets have these capabilities built in. A plug-in can be made to have the same features as a widget, but you must add these capabilities yourself. However, before you can use a widget, it must be defined. Once it has been defined, it can be applied to elements. Widgets are defined using the widget factory. When the widget factory is invoked, it creates a widget method on the jQuery prototype, $.fn, the same place that plug-ins and other jQuery functions are located. The widget method represents the primary interface for applying the widget to elements and using the widget after it's applied. This important concept is covered in more depth in "The Widget Method" later in the chapter. Unlike other chapters, this chapter uses the Widget QuickStart for the code examples rather than the Mileage Stats Reference Implementation (Mileage Stats). The focus of the Widget QuickStart is to enable the client-side behavior for tagged keywords. When a user hovers over a keyword, the browser will display a pop-up list of popular links for that keyword from the Delicious.com bookmarking service. The following figure illustrates the QuickStart and the corresponding widgets. The page accomplishes this through the use of two widgets: - tagger adds the hover behavior to the tagged keywords. - infoBox retrieves the links and controls the box that displays them. For more information about the QuickStart or to walk through the process of building it, see Chapter 14, "Widget QuickStart." Defining a Widget The dependencies for a widget can be fulfilled with script references to the content delivery network (CDN) locations for jQuery and jQuery UI. Widgets often reside in their own .js file and are wrapped in an immediate function, as you can see in the following code example. This wrapper creates a JavaScript closure, which prevents new variables from being globally scoped. A single solution should allow no more than one global object to be created, as per well-accepted JavaScript practices. The jQuery argument at the end of the following code example becomes the $ argument passed in, which allows you to use the common $ symbol to represent the jQuery function. Because there is no second argument, the undefined argument becomes truly undefined. Therefore the $ and undefined arguments reestablish their expected behavior inside the closure in case another script previously defined these variables as something else. The call to $.widget invokes the widget factory, which makes the widget available for use. The first argument, qs.tagger, is the widget's namespace and name separated by a period (namespace.name). The name is used as the name of the widget method placed on the jQuery prototype. The second argument, called the widget prototype, is an object literal that defines the specifics of the widget. The widget prototype is the definition of the widget, and is used when the widget is applied to elements. The prototype is stored directly on the jQuery object under the namespace provided: $.qs.tagger. Using a Widget Once a widget has been defined, it's ready to be applied to DOM elements. To apply the widget to the matched elements, invoke the widget method just like you would other jQuery methods. The following code shows how to apply the tagger widget to all span elements with a data-tag attribute. Because the widget method is used as the primary interface to the widget, it's not only called when initially applying the widget to the element, it's also used for calling methods and reading and writing options and properties on the widget. When widgets are applied to elements, an instance of the widget is created and stored inside each element. This is how the widget factory knows if a widget has already been attached to an element. Managing Lifetime There are three phases of a widget's lifetime that you can control: creation, initialization, and destruction. Creation The first time the widget is applied to an element, the widget's _create function is invoked. Method names preceded with an underscore have private scope by convention, which means they only expect to be invoked from inside the widget. The following code shows the _create method in the infobox widget. // Contained in jquery.qs.infobox.js _create: function () { var that = this, name = this.name; that.infoboxElement = $('<div class="qs-infobox" />'); that.infoboxElement.appendTo('body') .bind('mouseenter.' + name, function () { mouseOverBox = true; }) .bind('mouseleave.' + name, function () { mouseOverBox = false; that.hideTagLinks(); }); }, - The _create method is the most appropriate place to perform a number of common tasks: - Adding classes to various elements the widget is attached to is the recommended way to apply styling, layout theming, and more to the widget. - Storing references to commonly accessed elements can increase performance when a particular set of elements is used from a number of methods. Simply create object-level variables for them once, and all other methods can use them. This is an accepted jQuery performance best practice. - Creating elements in the DOM is common for widgets that have requirements such as animations, effects, styling, accessibility, and cross-browser compatibility. As an example, consider the div.qs-infobox element created by the infobox widget. - Applying other widgets is recommended during creation when your widget relies on other widgets. Even if your widgets don't require each other, consider using the standard jQuery UI widgets from inside yours to add useful behaviors and interactions. Initialization The _init method is called after _create when the widget is first applied to its elements. The _init method is also called every time thereafter when the widget is invoked with no arguments or with options. This method is the recommended place for setting up more complex initialization and is a good way to support reset functionality for the widget if this is required. It's common for widgets to not implement an _init method. Destruction The widget's destroy method is used to detach a widget from an element. The goal of the destroy method is to leave the element exactly like it was before the widget was attached. Therefore, it's not surprising that common tasks are to remove any CSS classes your widget added to the element, detach any elements your widget added to the DOM, and destroy any widgets your widget applied to other elements. Here is the destroy method for the tagger widget. The last part calls the widget's base implementation of destroy and is a recommended practice when you provide your widget with a destroy method. The base destroy method will be called if you don't define one for your widget or if you explicitly call it, as in the code example above. The base implementation will remove the instance of the widget from the element and unbind all namespaced event bindings (this topic is discussed in more detail later in this chapter). Defining Options Options give widgets the ability to be extended with values and functions from the JavaScript code that creates and uses the widget. Options are automatically merged with the widget's default options during creation, and the widget factory supports change notifications when option values change. Options and their default values are defined in the options property of the widget prototype, as shown below in the infobox widget. To override default options during the creation of the widget, pass them in as an object literal to the widget method, as shown in this startup code of the widget. To read the options from inside the widget, use the options property directly, as shown in the last line of this code. Reading the values directly from options is acceptable when reading values from inside the widget, but you should not use this approach when changing the value of options. Instead, use the option method (without an 's'). The option method is called with one argument when reading an option's value, two arguments when setting a value, and a single object hash when setting more than one option. The option method should always be used to change the value of options so that change notifications will work as expected. Changing the option directly on the options property bypasses the notification mechanism. When Options Change The options on your widgets should be aware that their values can change and should be prepared when they do. To respond to changes, widgets use the _setOption method. This method is called by the widget factory just after the value has been set on the options property. The Widget QuickStart doesn't have a need for _setOption; but, as an example, if the number of links in the infobox widget were configurable by the user, the widget might need to adjust the size of the box when maxItems changes. In the code above, if maxItems is the name of the option being set, the _resizeBoxForMaxItemsOf method will be called. Rather than placing a lot of code in the _setOption method, you should place the logic in private methods. This allows you to call the logic from other places that might need it, such as _create. The last line calls the base widget's _setOption method. This will set the value of the option and will be useful for supporting a disabled state. The _setOption method is not called for the options passed in during the creation of the widget. Functions as Options Defining functions as options is a powerful way to decouple the widget from functionality better located elsewhere. For example, rather than forcing the tagger widget to know how to get a reference to the infobox widget and invoke the public methods on the infobox widget, the widgets can be kept free of any knowledge of each other by passing in the functions from the startup script, since the startup script already knows about both widgets. To set this up, the tagger widget defines activated and deactivated options. Just like normal options, these can either define defaults or omit them. The use of $.noop as a default value saves you the effort of having to ensure that the value isn't null before calling the option. Calling $.noop has no effect and won't throw any exceptions. The startup script will provide these options when it applies the tagger widget to the span elements, as shown here. In the code examples above, the options are defined inside the widget's implementation and passed in during creation. Later in this chapter you'll see how function-based options are used as callbacks for events. The Widget Method Well-designed objects have public interfaces that are intentional, intuitive, and focused. Widgets go one step further and provide a single method, referred to as the widget method, which is the entire public interface of the widget. The action the widget performs when you call this method depends on the number and type of arguments provided in the call. In addition to creating and initializing the widget, as shown earlier, the widget method is also used to do the following: - Invoke public methods - Read and write public properties - Read and write options Public Methods Public methods are defined on the widget prototype, as you can see here in the infobox widget. The public methods are hideTagLinks and displayTagLinks. Widgets must be created before their methods can be called. The following calls to the infobox widget assume the widget method has already been called once to apply the widget to the body element. To call hideTagLinks from outside the widget, use a jQuery selector to match the element and pass the name of the method to the widget method as its only argument. When you need to pass any arguments into the call, such as displayTagLinks, simply add the arguments after the method name. The option method covered earlier (in the section "Defining Options") is an example of a public method. When one argument is passed to it, the method will return the value of that option. When two arguments are passed, it will set the option specified in the first argument to the value of the second argument. When calling the option method from outside the widget, pass the method name as the first argument, the name of the option as the second, and the value as the third argument, as shown here. Public methods can also return values by placing the expression on the right-hand side of the assignment operator (=). Returning a value from methods on infobox is reasonable because infobox is only attached to a single element. But be aware that if you call a method on a wrapped set that contains more than one element, the method will only be called on and returned from the first element. In the examples so far, each time the widget method is invoked it is being called on the instance returned by the jQuery function, $(selector), which requires accessing the DOM. The next section recommends a couple of alternatives. Reusing an Instance Each time the jQuery function uses a selector to invoke the widget method, it must search the DOM. This has a negative impact on performance and is unnecessary because widget methods return a jQuery object, which includes the wrapped set of matched elements. Rather than use a selector with the jQuery method each time you need to call a method on a widget, create a variable when the widget is initially attached to the elements. The DOM will be accessed during this initialization, but it should be the only time you need to access it. In subsequent calls, such as the second line in the snippet above, you can call the widget method on the variable you created and it won't access the DOM. Using the Pseudo Selector In a situation where neither the selector nor the instance is available, there is still a way to obtain all instances of a particular widget. As long as you know the name of the widget, you can use a pseudo selector to get all instances that have been applied to elements. // Contained in an older, more tightly coupled version of startup.js $('body').infobox(); // Contained in an older, more tightly coupled version of jquery.qs.tagger.js var ibInstance = $(':qs-infobox'); ibInstance.infobox('displayTagLinks', // method name $(this).text(), // tag event.pageY + offsetY, // top event.pageX + offsetX); // left A pseudo selector begins with a colon, followed by the widget's namespace and name separated by a hyphen. The pseudo selector in the example above is :qs-infobox. Pseudo selectors have the potential to increase coupling between widgets, so be aware of this if you intend to use them. Private Members Private methods and properties have private scope, which means you can only invoke these members from inside the widget. Using private members is a good idea because they improve the readability of the code. Methods Private methods are methods that start with an underscore. They are expected to be accessed directly using the this keyword. Private methods are common and recommended. Private methods are only private by convention and cannot be enforced. This means that if a widget isn't called according to the convention for calling public methods (described later), its private methods can still be accessed. The convention is easy and consistent, and the underscore makes it easy to distinguish between the public and private interface. Properties Methods are designated as private by using underscores. Unlike methods, properties on the widget prototype are private by default; they are not designated private by prepending an underscore. The reason properties don't need underscores is that they cannot be accessed through the widget method. Because each element contains its own instance of the widget, the dataUrl property can be different for each element. Clearly dataUrl is best exposed as an option, but if this was not a configurable option you would probably want to define it so that only one copy of the value is available to all instances of the widget. Let's call these static members. Static Members To define a variable that's available to all instances of the widget, but nowhere else, place them inside the self-executing function wrapper and above the call to the widget factory, as shown in the tagger widget. Because the timer variable is defined outside the widget prototype, only a single timer will be created and shared across all instances of the tagger widget. Functions that don't rely on the instance of the widget can also be defined here. If you need access to static members from outside the widget, they can be added to the widget after the widget's definition. They are defined afterwards because they extend the widget, as you will see in a moment. Let's make a fictitious change to the infobox widget to illustrate this by moving an isolated function to a more accessible location. Inside the displayTagLinks method in the infobox widget, a function variable called displayResult is defined. The variable, displayResult, is defined in displayTagLinks because this is the only method that uses it. In our fictitious change, let's say the infobox widget needs to make Ajax calls from other methods. That means the displayResult function will need to be moved so that it is available to all methods that need it. Defining it as a static member outside the scope of the widget is a way to make this happen. The $.extend method is used to merge the object passed as the second argument into the object passed as the first argument. Therefore, the displayResult method is merged into the prototype of the widget, $.qs.infobox. With displayResult defined here, the infobox widget can use it from anywhere, as shown in this code. Events Events are an effective way to communicate between widgets without forcing them to be tightly coupled. jQuery supports and extends the DOM event model and provides the ability to raise and handle custom events that are not defined in the DOM. Raising the Event A widget raises events by using the _trigger method. The first argument to _trigger is the name of the event you are raising. If the event you are raising originates from a DOM event, the DOM event can optionally be passed as the second argument. The third argument is any data to be passed to the event handler and is also optional. The following code sample shows one way the tagger widget might raise the activated event when the mouse enters the element. In this fictitious code example, infobox is raising an activated event by binding to the mouseenter event of an element. You can also use bind, as well as the live and delegate methods, to handle events triggered from widgets. Binding Handlers Event handlers bind to widget events the same way as they bind to other events, although the name of the event is influenced by the widget's name. Notice how the name of the event being bound to has the name of the widget prepended. This is the default behavior for event names. If you prefer a different name so that your code is more readable, this behavior can be changed, as shown in the following section. Event Naming The widgetEventPrefix property defines what will be prepended to the names of the events the widget raises. By default, the value is the name of the widget and is set by the widget factory. If you want to use something other than the widget name, simply define this property and provide an alternative value. When widgetEventPrefix has a value, it will be used instead of the widget name. The code that uses this widget and binds to its activated event will use the event name tagactivated. Options as Callbacks When options are defined as functions and the option name corresponds to an event name (without the prefix), they are referred to as callbacks. The _trigger method on the base widget will automatically invoke the callback whose name matches the event being raised. // Contained in jquery.qs.tagger.js widgetEventPrefix: 'tag', options: { activated: $.noop, deactivated: $.noop }, _create: function () { var that = this, name = this.name(), tag = this.element.text(); this.element .bind('mouseenter.' + name, function (event) { that._trigger('activated', event, {name: tag}); }); }, The JavaScript that creates the tagger widget can now define the handler for the activated and deactivated events when it creates the widgets. This allows the two widgets to interact without explicitly knowing about each other. Using this approach causes the script that invokes the widgets to act as connective tissue that describes a lot about the solution in a succinct, readable format. Inheritance Sometimes, when building a widget, another widget already has many properties and much of the functionality the new widget requires. The widget factory's inheritance support is designed for this case. For illustration purposes, consider the following widget. If this widget was built elsewhere and you wanted to change its resizing behavior to an animation, a reasonable approach would be to inherit from a.container and override its resize method. Inheritance is accomplished by passing three arguments into the widget factory. The first argument is the namespace and name of the widget, the second is the prototype of the widget you want to extend from, and the third argument is the object you want to extend it with. The only difference between the signature above and the signature usually used for defining widgets is the addition of the second parameter. Inheritance is a useful tool when you are using a widget that almost does what you want it to do. In version 1.9 of jQuery UI, widgets can inherit from themselves. This makes it easy to add functionality to a widget for your application without the need of changing the original implementation. The jQuery UI bridge method allows you to retain the name of the original widget to be used with your specialized widget. Summary Using jQuery UI widgets is a great way to add modularity to client-side web applications. Widgets are objects that attach to page elements and supply services for managing lifetime, state, inheritance, theming, and communication with other widgets or JavaScript objects. Options give widgets the ability to have state that is public, readable, writable, and callable. Options are automatically merged with the widget's default options during creation, and the widget factory supports change notifications when option values change. In addition, defining functions as options is a powerful way to decouple the widget from functionality better located elsewhere. Widgets provide just a single method that represents the entire public interface of the widget. Widgets also allow for private methods that can only be invoked from within the widget. jQuery supports and extends the DOM event model and provides the ability to raise and handle custom events that are not defined in the DOM. Widgets can trigger and handle these events and options can be used as callbacks. Finally, widgets can inherit from other widgets, and in jQuery UI version 1.9, a widget can inherit from itself. Further Reading For more information about the QuickStart or to walk through the process of building it, see Chapter 14, "Widget QuickStart." For more information about the pub/sub engine, see Chapter 8, "Communication." For more information on the dataManager, see Chapter 6, "Client Data Management and Caching." Widget Factory documentation on the jQuery UI wiki: jQuery Documentation for Plugins/Authoring: jQuery UI Developer Guidelines: jQuery UI source code: Resources Microsoft Ajax Content Delivery Network (CDN) addresses:
http://msdn.microsoft.com/en-us/library/hh404085.aspx
CC-MAIN-2014-52
refinedweb
4,293
53.61
#include <subscription.h> An abstraction of a subscription stanza. Definition at line 31 of file subscription.h. Describes the different valid message types. Definition at line 41 of file subscription.h. Creates a Subscription request. Definition at line 43 of file subscription.cpp. Destructor. Definition at line 50 of file subscription.cpp. Returns the status text of a presence stanza for the given language if available. If the requested language is not available, the default status text (without a xml:lang attribute) will be returned. Definition at line 80 of file subscription.h. Returns the subscription stanza's type. Definition at line 69 of file subscription.h. Creates a Tag representation of the Stanza. The Tag is completely independent of the Stanza and will not be updated when the Stanza is modified. Definition at line 55 of file subscription.cpp.
https://camaya.net/api/gloox-1.0.23/classgloox_1_1Subscription.html
CC-MAIN-2020-34
refinedweb
140
54.18
Correct and Efficient Vuex Using. Part II In the first part of the article, we looked at such components of Vuex as storage, state, getters, mutations, and actions. You can see all in details here And we continue our review of the Vuex library and talk about modules, application structure, plugins, etc. Modules Due to the use of a single state tree, all global application data is placed in one large object. As the application grows, the storage can swell significantly. To help with this, Vuex lets you split storage into modules. Each module can contain its own state, mutations, actions, getters, and even built-in submodules, this structure is fractal. The first argument that mutations and getters receive is the local state of the module. Similarly, context.state in actions also indicates the local state of the module, and the root is available in context.rootState. By default, actions, mutations, and getters inside modules are registered in the global namespace. This allows several modules to respond to the same type of mutations/actions. If you want to make the modules more self-sufficient and ready for reuse, you can create it with your namespace by specifying the namespaced: true option. When a module is registered, all its getters, actions, and mutations will be automatically associated with this namespace, based on the path along which the module is registered. Getters and actions with their namespace will receive their local getters, dispatch, and commit. In other words, you can use the contents of a module without writing prefixes in the same module. Switching between namespaces does not affect the code inside the module. If you want to use the global state and getters, rootState and rootGetters are passed in the 3rd, and 4th arguments to the getter function, as well as the properties in the context object, passed to the action function. To trigger actions or commit mutations in the global namespace, add {root: true} with the 3rd argument to dispatch and commit. If you want to register global actions in namespaced modules, you can mark it with root: true and place the action definition to function handler. Furthermore, you can create namespaced helpers by using createNamespacedHelpers. It returns an object having new component binding helpers that are bound with the given namespace value. You may be concerned about the unpredictability of the namespace for your modules when you create a plug-in with its modules and the ability for users to add them to the Vuex repository. Your modules will also be placed in the namespace if plugin users add your modules to the module with their namespace. To adapt to this situation, you may need to get the namespace value through the plugin settings. You can register the module even after the storage has been created using the store.registerModule method. Module status will be available as store.state.myModule and store.state.nested.myModule. Dynamic module registration allows other Vue plugins also to use Vuex to manage their state by adding a module to the application data store. For example, the vuex-router-sync library integrates vue-router into vuex, reflecting a change in the current application path in a dynamically attached module. You can delete a dynamically registered module using the store.unregisterModule (moduleName). Please note that the static, defined at the time the repository was created, modules cannot be removed using this method. Sometimes we may need to create several instances of the module, for example: - creation of several storages that are used by one module, for example, to avoid stateful singletones in the SSR when using the runInNewContextoption if falseor 'once'; - registering the module several times in one repository. If we use an object to determine the state of a module, then this state object will be used by reference and cause pollution of the state of the storage/module during its mutations. This is actually the same problem with data inside Vue components. So the solution will be the same. Application structure In reality, Vuex does not impose any significant restrictions on the code structure used. However, it requires compliance with several high-level principles: - The global state of the application must be contained in global storage; - The only mechanism for changing this state are mutations that are synchronous transactions; - Asynchronous operations are encapsulated in actions or their combinations. As long as you follow these rules, you can use any project structure. If your storage file gets too large, just start putting actions, mutations, and getters into separate files. For any non-trivial application, you will most likely need to use modules. Here is an example of a possible project structure. For reference, you can use the shopping cart example. Plugins Vuex repositories accept the plugins option, which provides hooks for each mutation. The Vuex plugin is just a function that receives storage as a single parameter. Plugins are not allowed to directly change the state of the application as well as components. They can only cause changes indirectly using mutations. By causing mutations, the plugin can synchronize the data source with the data store in the application. For example, to synchronize storage with a web socket, the example is intentionally simplified, in a real situation, createWebSocketPlugin would have additional options. Sometimes a plugin may need to “take a nugget” of the application state or compare the “before” and “after” mutations. To do this, use deep copying of the state object. Impression plugins should only be used during development. When using webpack or Browserify, we can give this moment at their mercy. The plugin will be used by default. In the production environment, you will need DefinePlugin for webpack, or envify for Browserify to change the value of process.env.NODE_ENV! == 'production' to false in the final assembly. Vuex comes with a logging plugin that can be used for debugging. You can also enable the logging plugin directly using a separate <script> tag, which places the createVuexLogger function in the global namespace. Please note that this plugin makes state casts, so you should use it only at the development stage. Strict mode To enable strict mode, specify strict: true when creating the Vuex repository. In strict mode, any attempt to make changes to the Vuex state, except mutations, will throw an error. This ensures that all state mutations are explicitly tracked through debugging tools. Do not use strict mode in production! Strict mode triggers deep tracking of the application state tree in synchronous mode to detect inappropriate mutations, and this can be costly for performance when a large number of mutations occur. Be sure to turn this mode off in production to avoid performance degradation. Work with forms When using strict mode, Vuex may not seem obvious how to use the v-model with the Vuex state part. Suppose obj is a computed property that returns an object reference from the repository. In this case, the v-model will try to change the obj.message value during user actions directly. In strict mode, such changes will trigger an error because they occur outside of Vuex mutation handlers. To work with Vuex in this situation, you should bind the value to <input> and track its changes by the input or change event. Testing The main subject of unit testing in Vuex are mutations and actions. Mutations are fairly easy to test, as they are just simple functions whose behavior depends entirely on the parameters passed. One trick is that if you use ES2015 modules and put your mutations in the store.js file, then in addition to the default export, you must export the mutations using named exports. Testing activities are a bit more complicated, as they can access external APIs. When testing actions, you usually have to fake external objects - for example, calls to the API can be moved to a separate service, and as part of the tests, this service can be replaced with a fake one. To simplify the simulation of dependencies, you can use webpack and inject-loader to build test files. Getters doing complex calculations would also be helpful to test. As with mutations, everything is simple. If you correctly follow the rules for writing mutations and actions, the resulting tests should not depend on the browser API. Therefore, they can be assembled by webpack and run in Node. On the other hand, you can use mocha-loader or Karma + karma-webpack, and run tests in real browsers. Hot reboot Vuex supports hot-swapping of mutations, modules, actions, and getters at development time using the webpack Hot Module Replacement API. Similar functionality in Browserify is achievable using the browserify-hmr plugin. For mutations and modules, you need to use the store.hotUpdate() API method. Why Vuex Actions are the Ideal API Interface If you are working on a project in which the back end and front end are developing at the same time, or you are working in a UI/Frontend team that can even create a user interface before the back end exists, you are probably familiar with the problem when you need to drown out back end parts or data as the front end develops. The general way that this manifests is with purely static templates or content, with placeholders and text right in your interface templates. A step away from this is some form of fixtures, data that is statically loaded by the interface and injected into place. Both of them often face the same set of problems. When the back end is finally available, there is a bunch of refactoring to get the data in place. Even if the data structure from the back end matches your fixtures, you still have to cross to find each integration point. And if the structure is different, you should not only do this, but you must figure out how you can either change the external interface or create an abstraction layer that transforms the data. Strengths and Benefits of Vuex Storage Compared to a simple global object, the Vuex repository has many significant advantages and benefits: - Vuex Storage - Reactive. As soon as components get the state from it, they will reactively update their views every time the state changes. - Components cannot directly change the state of the repository. The only way to change the state of the repository is to commit the mutations explicitly. This ensures that every state change leaves a tracked record, which makes debugging and testing the application easier. - Components cannot directly change the state of the repository. The only way to change the state of the repository is to commit the mutations explicitly. This ensures that every state change leaves a tracked record, which makes debugging and testing the application easier. - You can easily debug your application by integrating Vuex with the Vue DevTools extension. - Vuex repository gives you a general picture of the state of how everything is connected and affects the application. - It is easier to maintain and synchronize the state between several components, even if the hierarchy of elements is changing. - Vuex enables the direct interaction of components with each other. - If the component is destroyed, the state in the Vuex repository will remain untouched. Summary When working with Vuex, we need to remember some crucial points. Vuex creates a repository that consists of states, getters, mutations, and actions. To update or change state, must make a mutation. To perform an asynchronous task, you need an action. Actions, if successful, commit a mutation that changes state, thereby updating the presentation. The application state is stored as one large JSON object. Getters are used to access values in the store. Mutations update condition. It should be remembered that mutations are synchronous. All asynchronous operations must be performed within actions. Actions change state, initiating mutations. Make it a rule to initiate mutations solely through action. Modules can be used to organize storage in several small files. Vuex makes working with Vue much more comfortable and more fun. If you are new, there may be situations where it is difficult to decide whether to use Vuex in certain areas of application. Follow instincts and reach high speed pretty quickly.
https://amoniac.eu/blog/post/correct-and-efficient-vuex-using-part-ii
CC-MAIN-2022-05
refinedweb
2,024
54.93
. /.: Most. from the two implementation described in the Introduction, there are many applications writte n in Tcl/Tk or a combination of Tcl and C. A complete list where to look for these implementation is the part 4 of the frequently-asked questions on Tcl/Tk (FAQs). I suggest you to visit Scriptics' Softwar e Central. Another good starting point is. One of the best Tcl applications running under Linux is called TkDesk a window manager and application launcher that works very well. If you're a Tcl/Tk Linux developer, feel free to send me a URL (and a description of the work) that I can link to in here. For many reasons people often like having a hard-copy manual as a reference or like to be helped by other folks online. Here you can find a selection of reference books, tutorials, www-sites and newsgroup. Many books concerning Tcl/Tk were written and are to be published. I won't even try to list them all (another howto woudn't be enough for that : ) ). You can find much more information plus additional notes at: Here I will try to summarize with some notes the book I know concerning the subject, they're all at a basic-medium level. Again, people who know the subject, have enough information about where to find advanced level books. Author: John K. Ousterhout WWW book information: cseng.aw.com/bookdetail.qry?ISBN=0%2D201%2D63337%2DX&ptype=0 Book's examples: ics.com/pub/tcl/doc/book.examples.Z Book suppliment: 4.0.ps The book primarily covers Tcl 7.3 and Tk 3.6. Author: Brent Welch WWW book information: Book's table of contents: Book promotion info at section_50000.html of the URL This updated edition describes Tcl / Tk 8.0 as it was during the beta period. Along with the material from the first edition, it also covers sockets, packag es, namespaces, a great section describing the changes in Tcl 7.4, 7.5, 7.6, and 8.0 (and Tk as well), Safe Tk and the Plugin! Editors: Donald Barnes, Marc Ewing, Erik Troan WWW book information: oks/tcltk/ Author: David Young WWW book information: A comprehensive guide to Visual TCL. This book leads reader from basic graphical user interface development concepts to meaningful application develo pment.. Author: Matt Welsh and Lar Kaufman WWW book information: talog/runux2/noframes.html Running Linux is a really well written basic book. It has a chapter on programming using Tcl/Tk. (and Perl, C, C++). Author: Timothy Webster, with Alex Francis WWW book information: Another one of the series of the paperback programming books. This one focuses on the Tcl plugin as a programming environment. Authors: Michael Doyle Hattie Schroeder WWW book information: This is a learning by example book, for people who know a bit of programming, but are not experts. It covers developing applets as well as stand-alone appli cations and simple server applications. The book comes with the Spynergy toolkit, whic h adds a variety of pure Tcl/Tk procedures for distributed processing, URL retri eval,, There are a great number of WWW resources which provide additional information about many aspects of Tcl and its extensions. A lot of material is available on the Internet: introductory papers, white papers, tutorials, slides, postscript versions of published books in draft and many more. For a complete reference please give a look at the excellent Tcl-FAQs . comp.lang.tcl is an unmoderated Usenet newsgroup, created for the discussion of the Tcl programming language and tool s that use some form of Tcl, such as the Tk toolkit for the X window system, Extended Tcl, and expect. For Tcl/Tk related announcements always refer to comp.lang.tcl.announce : you will find release announcement, patches, new application and so on. Again, faq could be retrieved at Tcl-FAQs..
http://www.faqs.org/docs/Linux-HOWTO/TclTk-HOWTO.html
CC-MAIN-2015-40
refinedweb
648
56.86
62199/unhandled-internal-error-java-heap-space Hi everyone This is an error, I'm getting when running group operator in a pig. This is a approx 1.5 GB data file. Can anyone plzz help me!! How to eliminate this error.... Thank you Hi As your file size is too large, so it reaches the maximum heap size of your VM. So try to increase the heap size. Below is the command you can use. java -Xmx2048m You can mention according to your needs. I mentioned it here in 2048. Hope this will work. You can add some more memory by ...READ MORE The problem is with the dependencies. The ...READ MORE UnknownHostException is thrown when hadoop tries to resolve ...READ MORE Have you placed the jar files in ...READ MORE Use org.apache.hive.jdbc.HiveDriver as your driver. It ...READ MORE Run the command as sudo or add the ...READ MORE In your code, you have set some ...READ MORE The tables look fine. Follow these steps ...READ MORE Hey, try this code import java.io.IOException; import java.util.Iterator; import java.util.StringTokenizer; import ...READ MORE To solve this error, add hadoop-common to ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/62199/unhandled-internal-error-java-heap-space
CC-MAIN-2020-10
refinedweb
208
80.99
Hi all, I am trying to take in 2 input files and an output file in as command line arguments. I am new to this, so bear with me if the code is a little rough. Below is what I have on that. In the command line I am typing in: > a.out lab5_1.txt lab5_2.txt ( I'm not typing the > ) I get an error saying it can't open input file # 2. This is for a program that uses a struct and linked list. What I am trying to do is get the information from lab5_1.txt and lab5_2.txt and put it one line at a time into a.out. Can anyone see what I am doing wrong? Thank you for your help, SandyThank you for your help, SandyCode:// driver int main ( int argc, char *argv[] ) { struct employee *list; int flag = 1; // create file variables FILE *infile1, *infile2, *outfile; char buffer[80]; char buffer2[80]; // check for errors in opening files if ( argc < 2 ) { fprintf (stderr, "Usage: %s filename\n", argv[0] ); exit (1); } if ( ( outfile = fopen ( argv[1], "w+" ) ) == NULL ) { fprintf (stderr, "Can't create output file.\n" ); exit (1); } if ( ( infile1 = fopen( argv[2], "r" ) ) == NULL) { fprintf (stderr, "Can't open input file 1\n" ); exit (1); } strcpy( buffer, argv[2]); if ( ( infile2 = fopen( argv[3], "r") ) == NULL ) { fprintf (stderr, "Input file 2 wouldn't open\n" ); exit (1); } strcpy( buffer2, argv[3] ); while ( fread ( buffer, 80, 2, infile1 ) != '\n' ) fwrite ( buffer, sizeof ( struct employee ), 1, outfile ); if (fclose ( infile1 ) != 0 || fclose ( infile2 ) != 0 || fclose ( outfile ) != 0 ) printf ( "Error in closing files\n" ); return (0); }
http://cboard.cprogramming.com/c-programming/38238-file-i-o-2-input-1-output-file.html
CC-MAIN-2016-18
refinedweb
270
81.22
by Tim Wagner, Ted Bashor, Paul Meijer and Pieter Humphrey 09/28/2005. This article provides an overview of WTP's sub-projects—Web Standard Tools (WST) and J2EE Platform Standard Tools (JST)—and explores the major themes of the 0.7 release: For more information about WTP and to get started with WTP and Eclipse, go to the Eclipse WTP project Web site. To get a better understanding of the WTP project, let's take a minute to look at some of the project's principles (you can find the entire list of WTP principles on the WTP project Web site): Extension of the Eclipse value proposition: The Eclipse Project has set a high standard for technical excellence, functional innovation, and overall extensibility within the Java IDE domain. We intend to apply these same standards to the Web/J2EE application-tooling domain. Vendor ecosystem: A major goal of this project is to support a vital application development tools market. WTP's exemplary functionality is useful on its own but is designed from the start to be extensible, so commercial vendors can use what this project delivers as a foundation for their own product innovation and development efficiency. Vendor neutrality: Vendor neutrality is at the core of this project. We aim to encourage Eclipse participation and drive Eclipse market acceptance by strengthening the long-term product value propositions of the widest possible range of application development vendors. Standards-based innovation: This project delivers an extensible, standards-based tooling foundation on which the widest possible range of vendors can create value-added development products for their customers and end users. Where standards exist, we will adhere to them. Although WTP's focus is on runtime technologies with accepted standards and existing deployments, it also tracks emerging standards where leading-edge tools are desirable. Where multiple technologies are widely used for a given functional need, we will attempt to support each, subject only to technical feasibility and our goal of providing the most capable and extensible foundation for the long term. These principles together endorse the view that WTP aims to extend the core tools and functionality of Eclipse to J2EE and Web applications, enabling J2EE vendors to build on top of WTP's base functionality by ensuring that WTP implements approved standards in the J2EE and Web application sphere. Figure 1 illustrates the major components of WTP. With respect to runtime components, WTP 0.7 supports JSP-based Web applications, Java Web services, and Enterprise JavaBeans. Figure 1. Major components of the WTP In addition, WTP provides a core set of server tools for easy server customization and configuration. The runtime components and server tools are discussed in more detail in the following sections. WTP supports JSP-based Web application development including the use of static HTML pages, JavaScript, CSS, JSP EL, and servlets. The JSP source editor extends the HTML editor's capabilities, providing content coloring and code assist inside Java, JSP, Tag Library, EL, and JavaScript code fragments. WTP also supports JSR 45-compliant debugging, enabling breakpoints to be set in JSP source, stepped through, and so on. JSPs are created in the WebContent folder, located in the Project Explorer tree structure of any Web Project type. A useful addition to dealing with JSP programming in WTP is the form bean wizard: It can help you turn a simple Java class into a bean by generating getter/setter methods for any private variables defined in the class. In addition, various other wizards drastically reduce development time. One example is the servlet wizard, which is the fastest route for creating servlets in WTP. The wizard will auto-generate a servlet stub and place the generated source file in the JavaSource folder, and it will add the appropriate deployment descriptors to web.xml to register the servlet and to define the matching URL pattern. WTP assists tool developers by providing models for common J2EE artifacts, such as deployment descriptors. For Web applications, editing web.xml to declare a servlet is a simple matter of opening the file in the J2EE perspective, and using the outline and design views to edit in "graphical mode." In addition, WTP provides high-fidelity editing support for all XML files, including code assist, syntax highlighting, validation, quick fixes, refactoring, and search. The ability to edit a "mixed" JSP page containing, say, JSP, HTML, and JavaScript code blocks, or to do deployment descriptor editing, is based on the Structured Source Editor (SSE) framework. SSE provides multilanguage editing support, extending the base Eclipse editor into areas other than Java editing. High-fidelity support for source editing across all the languages used for Web application development and feature compatibility with the JDT is part of WTP's goal. Many of these features are already available in WTP 0.7 when editing artifacts like XML, JSP and JSP EL, HTML, XHTML, CSS, DTD, and JavaScript. Figure 2. Source Editor SSE-based editors have been designed to interoperate with each other and with the base JDT components, so HTML and Java editing preferences would apply when editing a JSP page, for example. SSE essentially extends the basic Eclipse editor by partitioning documents into language regions, and then it associates editing services to each language. When editing one of the aforementioned artifacts, SSE provides a subset of the following features: Refer to the WTP milestone release plans for the latest feature/language matrix, or to a great article on the WTP site for more detailed explanations of these individual features. When editing XML, the source pane provides the conventional text-based editing experience, with elements highlighted in green, attributes in purple, and strings in blue if you're using the default settings. We'll return to the source editing experience shortly, but first let's see the other presentations available. If you're using the default perspective settings, then you'll also see the outline view to the right, showing a structural representation of the XML file. The outline view also provides an easy way to make structural modifications to your document, such as reordering sections or swapping items in a sequence. You can also create new elements by right-clicking an existing element and choosing the type of the new child to insert. The property view shows the attribute settings for the selected element in conventional "property/value" form. Values can be changed directly from this view. For more information, visit the eclipse.org Web site, which has an excellent XML tutorial. XML and schema files can be added to any project, but there's a simple example project type we can use to easier demonstrate the feature set. To use this, first ensure that you're in the J2EE perspective. Then, right-click in the project navigator, select New Project, then select Examples, and finally select Editing and validating XML files. This project creation wizard help you set up a project in which you can explore XML files, schemas, DTDs, and their interaction. Once the example project is created, you'll find it listed under the Other category in your Project Explorer. If you accepted the default project name, it will be called XMLExamples. Open it, then open the PublicationCatalogue directory. You'll see an example XML file and its accompanying schema. In addition to the source view, there's also a design view for XML. The design view is somewhat like a combination of the outline view and the property view in that it offers a structural image of the XML file, complete with attribute settings. However, unlike the property and outline views, in the design view all the content is visible and editable. Figure 3. Design view The XML source view performs both well-formedness and validation checking as you type. Add an element not defined in the schema, and the editor responds with a red squiggle under the unexpected element along with a detailed hover message explaining the problem. The error report disappears when the problem is corrected. Validation of the XML can also be done on demand, similar to generating build errors in a language such as Java. To initiate validation, right-click in the Project Navigator window and select Validate XML File. The validation process will inform you of the result, and any problems will be recorded in the "problems" view along with markers on the offending lines of the source itself. Like all problems, these markers will persist until the next validation or the area of the text is deleted, whichever comes first. Files with problems are also denoted in the Project Explorer, as are the projects that contain them. Reporting errors is great, but avoiding them in the first place is even better. To achieve this, content assist is available, and helps the user by offering, during text editing, completions based on schema or DTD information when it is available, and on an inferred schema (based on existing XML content) when it is not. Like XML files, XSDs can be viewed in source mode and share syntax highlighting settings with XML. The property view for schemas utilizes an enhanced UI widget in WTP known as the "tabbed property view": Each of the tabs on the left-hand side contains different information about the schema's properties. The General tab shows the target namespace and prefix, while the Other tab shows the various schema-level settings. Finally, the Documentation tab shows the schema documentation element content, if present. In addition to viewing the text of a schema, its content may be viewed in graphical form by selecting the Graph (graphical schema view) tab at the bottom of the editing pane. Double-clicking an element in this view expands the element's definition; the ellipsis nodes indicate sequences, and occurrence settings are shown beneath elements. Figure 4. XSD The property view tracks the currently selected item in the graphical view, and provides editing capability for your current schema selection in the graph view. You can use the graphical view and property editor together to edit the schema. Right-clicking a sequence node, for example, brings up a context menu that enables (among other options) the addition of a new element. Selecting a node enables its type to be changed in the property editor. As with XML files, the outline view provides insight into the schema's structure, and is a handy way to reorganize or incrementally specify a new schema—support for creating new types, elements, and attributes make it easy to create new schemas or extend existing ones without having to recall the fine points of XSD syntax. Schemas needed frequently or for offline use can be stored in the XML Catalog, which is also available in the preference pages. The XML Catalog enables both programmatic (plug-in-based) and user-defined storage of selected schemas, and permits their use to be keyed by public IDs, URIs, and namespace names in any project. WTP provides support for AXIS 1.2 Web service functionality at many levels throughout the Eclipse environment. Working with Web services in WTP is done from within a dynamic Web project type, and the Project Explorer will display the set of Web service clients, local and remote Web services, and defined or generated WSDLs being developed. Extensible wizards can be used to generate both client and server bindings, while the built-in Web Services Explorer makes it easy to perform Web service discovery, testing, and UDDI publishing. The WSDL editor provides a graphical editing view in addition to a source view, and can be used to define and modify imported schemas, types, messages, port types, bindings, and services. To create a WSDL file, select New from the Project Explorer, then select Web Services, and finally select WSDL. Supply the parent folder, if necessary, and the file. The remaining options are helpful in creating a WSDL you intend to hand-edit. Figure 5. WSDL editor The Project Explorer offers options to test WSDL files with the Web Service Explorer, validate the WSDL file against its schema, publish the WSDL, generate a client, generate a skeleton JavaBean, generate a client, or generate WSIL. Developers can start with a WSDL file and easily create a skeleton JavaBean, which is a starter implementation for the operations defined in the WSDL. After adding your own code, the Web service can be deployed to the server and tested. WTP also provides a way for a client stub to be automatically generated, as previously mentioned. Developers can start with a JavaBean class and easily do the reverse of the above to create a WSDL that describes the operations in the JavaBean. Simply supply the wizard with the class name for your JavaBean, pick the target server, associate target server and service projects, configure which methods in the JavaBean to expose as Web service operations, and the wizard will generate the Web service and offer UDDI publication options. The Web Services Explorer is a JSP Web application hosted on the Apache Tomcat servlet engine contained within Eclipse. The Web Services Explorer provides three key services to the user: Comprehensive support for discovering Web services in WS-Inspection 1.0 documents and in UDDI v2 or v3 registries using the UDDI v2 protocol. Comprehensive support for publishing Web services to UDDI v2 or v3 registries using the UDDI v2 protocol. WTP offers options for IBM, MSFT, SAP, XMethods, and NTT Communications. Comprehensive support for browsing and invoking Web services natively via their WSDL and XSD. Figure 6. Web Service Explorer WTP supports EJB development via an annotation-based model in which a single annotated source file is used to generate bean, interfaces, and EJB deployment descriptor files. In this release, XDoclet-based development of session and message-driven beans is supported, with entity beans support planned for the WTP 1.5 release. WTP and XDoclet support EJB deployment to a number of different servers, including WebLogic Server, JBoss, JOnAS, and WebSphere. EJBs can be created for standalone deployment to the server, or the EJB project can be one of many projects of a J2EE application and deployed as part of the application Enterprise Application (EAR). Modeling servers, server connections, and the deployment of project artifacts to servers is a major focus of WTP functionality. WTP models server types (known as runtimes), specific server configurations, and deployable "modules"—artifacts that can be built in the project and deployed to one or more active servers. There are two ways of defining server types in WTP: generic server definitions, an XML-based configuration file approach, and custom server definitions, a programmatic approach. The generic server support is designed to simplify the process of defining a new server type by providing easy Ant-based publishing and using properties to define specific server settings. While not all WTP functionality is available, the process of configuring a new server type is quick and easy, and a UI exists to assist developers with property settings. For more information on this approach, see Gorkem Ercan's article, Generic Server - Server Definition File Explained. WTP 0.7 ships with generic server definitions for BEA WebLogic Server 8.1 and 9.0, WebSphere 6.0.x, JOnAS 4.x, and JBoss 3.2.3. Custom server definitions expose the full power of the underlying WTP server infrastructure to customize such tasks as configuring third-party server runtimes, publishing projects to servers, adding and removing of projects to servers, server control (stop, start), and debugging. WTP adopters, such as BEA, often write custom server adapters for their commercial offerings, since this approach enables them to leverage the full power of the runtime to offer enhanced services, performance, or both. WTP also ships with custom server adapters for Tomcat 3.2, 4.0, 4.1, 5.0, and 5.5, and Geronimo 1.0 Server. Figure 7. Server plug-in Users define new servers in a two-step process when done for the first time. First, declare a new runtime in the preferences dialog. This configures the entire Eclipse development environment with the location of a server installation ("runtime") on disk and must only be done once for a given installation of a third-party server runtime. Second, the server view can be used to define an instance of a server that can be used during development for publishing modules, start/stop of server, adding and removing modules, debugging, and so on. The "Run on Server" support allows developers to quickly run a module. Modules are defined as any content that can be deployed to a server, for example, a J2EE standard deployment unit like a Web project containing a servlet. Running in debug mode, if supported for that type of module, enables the user to step through the selected module's code for troubleshooting. Several additional features add value to WTP. The Project Explorer categorizes the types of projects you can create. There are separate projects for the various J2EE runtime components. Some examples include: A J2EE application is assembled in an EAR project. Developers can use the New EAR Application Wizard to create a new EAR project and add EJB and Web service projects to it; modules from the Web and EJB projects are then automatically added to the EAR. The EAR project's Properties dialog can also be used to add more modules. The JDT project model is not hierarchical, and the "exploded archive" structure for J2EE projects—one module per project—is not very flexible. Flexible Layout eliminates project migration and enables WTP to coexist with existing directory structures. You can create EJB projects, Web service projects, and additional Web projects, and add these to the same enterprise application and/or other enterprise applications. Enterprise application projects "assemble" the associated EJBs, Web services, and Web projects into an EAR and deploy it as a single unit. Remember that your server must have an EJB container for this functionality (Tomcat, for example, cannot be used for such deployments.) Special project and XML file validators run during the build process. In WTP 0.7, Builders "assemble" J2EE modules in a project's .deployables directory. In WTP 1.0, the server publishing process handles assembly. The focus of the WTP 1.0 release will be on the further development of platform APIs to enable the first wave of products based on WTP. Following that, WTP 1.5 will be released with the Eclipse 3.2 platform release in June the newly created Data Tools project. Technology projects, such as those proposed for EJB 3.0, will likely influence WTP as they mature. Pieter Humphrey has been at BEA for 7 years, working in technical sales and product marketing to help customers understand and apply our technology.
http://www.oracle.com/technetwork/articles/entarch/eclipse-web-tools-platform-093378.html?ssSourceSiteId=otnes
CC-MAIN-2015-32
refinedweb
3,115
51.68
On Fri, Jan 07, 2005 at 04:45:47PM +0100, Florian Weimer wrote: > * Andreas Barth: > > > As far as I know, sourceforges policy is to host only software free for > > everybody. Though their policy is not the same as ours, I think this > > violates even their policy. > > SourceForge also offers paid hosting for proprietary software. I'm > pretty sure that appearance on the sourceforge.net site does not imply > anything WRT licensing. The "Terms of Use", at says: "6. LICENSING AND OTHER TERMS APPLYING TO CODE AND OTHER CONTENT POSTED ON SOURCEFORGE.NET Use, reproduction, modification, and other intellectual property rights to data stored in CVS or as a file release and posted by any user on SourceForge.net ("Source Code") shall be subject to the OSI-approved license applicable to such Source Code, or to such other licensing arrangements that may be approved by OSTG as applicable to such Source Code." If there are any alternative terms available for the hosting of proprietary software, I don't see them. SourceForge hosting proprietay software in the same namespace would be very strange. -- Glenn Maynard
https://lists.debian.org/debian-legal/2005/01/msg00207.html
CC-MAIN-2017-39
refinedweb
183
63.8
These C API functions provide general Unicode string handling. Some functions are equivalent in name, signature, and behavior to the ANSI C <string.h> functions. (For example, they do not check for bad arguments like NULL string pointers.) In some cases, only the thread-safe variant of such a function is implemented here (see u_strtok_r()). Other functions provide more Unicode-specific functionality like locale-specific upper/lower-casing and string comparison in code point order. ICU uses 16-bit Unicode (UTF-16) in the form of arrays of UChar code units. UTF-16 encodes each Unicode code point with either one or two UChar code units. (This is the default form of Unicode, and a forward-compatible extension of the original, fixed-width form that was known as UCS-2. UTF-16 superseded UCS-2 with Unicode 2.0 in 1996.) Some APIs accept a 32-bit UChar32 value for a single code point. ICU also handles 16-bit Unicode text with unpaired surrogates. Such text is not well-formed UTF-16. Code-point-related functions treat unpaired surrogates as surrogate code points, i.e., as separate units. Although UTF-16 is a variable-width encoding form (like some legacy multi-byte encodings), it is much more efficient even for random access because the code unit values for single-unit characters vs. lead units vs. trail units are completely disjoint. This means that it is easy to determine character (code point) boundaries from random offsets in the string. Unicode (UTF-16) string processing is optimized for the single-unit case. Although it is important to support supplementary characters (which use pairs of lead/trail code units called "surrogates"), their occurrence is rare. Almost all characters in modern use require only a single UChar code unit (i.e., their code point values are <=0xffff). For more details see the User Guide Strings chapter (). For a discussion of the handling of unpaired surrogates see also Jitterbug 2145 and its icu mailing list proposal on 2002-sep-18. Definition in file ustring.h. #include "unicode/utypes.h" #include "unicode/putil.h" #include "unicode/uiter.h" Go to the source code of this file.
http://icu.sourcearchive.com/documentation/4.4.1-2/ustring_8h.html
CC-MAIN-2017-39
refinedweb
360
60.21
For a news site I am currently working on, I needed to display the last time a news article was last published. I wanted to be able to show the duration based on respective major time format. For example, if an article was displayed a couple hours ago, I would want it to to display “2 hours” not “120 minutes”. More importantly, if an article hadn’t been published to the site more than a week, I don’t want the exact time duration to be displayed. I would prefer the following message: “more than a week ago”. This way, if the site administrator gets really lazy the website viewer will not know the exact time period the site was last updated. Code: public class TimePassed { public static string GetPassedTime(DateTime since) { TimeSpan ts = DateTime.Now.Subtract(since); if (ts.Days <= 7) { switch (ts.Days) { case 0: switch (ts.Hours) { case 0: switch (ts.Minutes) { case 0: return String.Format("{0} seconds ago", ts.Seconds); case 1: return "1 minute ago"; default: return String.Format("{0} minutes ago", ts.Minutes); } case 1: return "1 hour ago"; default: return String.Format("{0} hours ago", ts.Hours); } case 1: return "yesterday"; default: return String.Format("{0} days ago", ts.Days); } } else { return "more than a week ago"; } } }
https://www.surinderbhomra.com/Blog/Page-19
CC-MAIN-2020-34
refinedweb
215
70.09
It's a 25-year-old principle of object-oriented (OO) design that you shouldn't expose an object's implementation to any other classes in the program. The program is unnecessarily difficult to maintain when you expose implementation, primarily because changing an object that exposes its implementation mandates changes to all the classes that use the object. Unfortunately, the getter/setter idiom that many programmers think of as object oriented violates this fundamental OO principle in spades. Consider the example of a Money class that has a getValue() method on it that returns the "value" in dollars. You will have code like the following all over your program: ); The add() method would figure out the currency of the operand, do any necessary currency conversion (which is, properly, an operation on money), and update the total. If you used this object-that-has-the-information-does-the-work strategy to begin with, the notion of currency could be added to the Money class without any changes required in the code that uses Money objects. That is, the work of refactoring a dollars-only to an international implementation would be concentrated in a single place: the Money class. The problem Most programmers have no difficulty grasping this concept at the business-logic level (though it can take some effort to consistently think that way). Problems start to emerge, however, when the user interface (UI) enters the picture. The problem is not that you can't apply techniques like the one I just described to build a UI, but that many programmers are locked into a getter/setter mentality when it comes to user interfaces. I blame this problem on fundamentally procedural code-construction tools like Visual Basic and its clones (including the Java UI builders) that force you into this procedural, getter/setter way of thinking. (Digression: Some of you will balk at the previous statement and scream that VB is based on the hallowed Model-View-Controller (MVC) architecture, so is sacrosanct. Bear in mind that MVC was developed almost 30 years ago. In the early 1970s, the largest supercomputer was on par with today's desktops. Most machines (such as the DEC PDP-11) were 16-bit computers, with 64 KB of memory, and clock speeds measured in tens of megahertz. Your user interface was probably a stack of punched cards. If you were lucky enough to have a video terminal, then you may have been using an ASCII-based console input/output (I/O) system. We've learned a lot in the past 30 years. Even Java Swing had to replace MVC with a similar "separable-model" architecture, primarily because pure MVC doesn't sufficiently isolate the UI and domain-model layers.) So, let's define the problem in a nutshell: If an object may not expose implementation information (through get/set methods or by any other means), then it stands to reason that an object must somehow create its own user interface. That is, if the way that an object's attributes are represented is hidden from the rest of the program, then you can't extract those attributes in order to build a UI. Note, by the way, that you're not hiding the fact that an attribute exists. (I'm defining attribute, here, as an essential characteristic of the object.) You know that an Employee must have a salary or wage attribute, otherwise it wouldn't be an Employee. (It would be a Person, a Volunteer, a Vagrant, or something else that doesn't have a salary.) What you don't know—or want to know—is how that salary is represented inside the object. It could be a double, a String, a scaled long, or binary-coded decimal. It might be a "synthetic" or "derived" attribute, which is computed at runtime (from a pay grade or job title, for example, or by fetching the value from a database). Though a get method can indeed hide some of this implementation detail, as we saw with the Money example, it can't hide enough. So how does an object produce its own UI and remain maintainable? Only the most simplistic objects can support something like a displayYourself() method. Realistic objects must: - Display themselves in different formats (XML, SQL, comma-separated values, etc.). - Display different views of themselves (one view might display all the attributes; another might display only a subset of the attributes; and a third might present the attributes in a different way). - Display themselves in different environments (client side ( JComponent) and served-to-client (HTML), for example) and handle both input and output in both environments. Some of the readers of my previous getter/setter article leapt to the conclusion that I was advocating that you add methods to the object to cover all these possibilities, but that "solution" is obviously nonsensical. Not only is the resulting heavyweight object much too complicated, you'll have to constantly modify it to handle new UI requirements. Practically, an object just can't build all possible user interfaces for itself, if for no other reason than many of those UIs weren't even conceived when the class was created. Build a solution This problem's solution is to separate the UI code from the core business object by putting it into a separate class of objects. That is, you should split off some functionality that could be in the object into a separate object entirely. This bifurcation of an object's methods appears in several design patterns. You're most likely familiar with Strategy, which is used with the various java.awt.Container classes to do layout. You could solve the layout problem with a derivation solution: FlowLayoutPanel, GridLayoutPanel, BorderLayoutPanel, etc., but that mandates too many classes and a lot of duplicated code in those classes. A single heavyweight-class solution (adding methods to Container like layOutAsGrid(), layOutAsFlow(), etc.) is also impractical because you can't modify the source code for the Container simply because you need an unsupported layout..) The Builder pattern is similar to Strategy. The main difference is that the Builder class implements a strategy for constructing something (like a JComponent or XML stream that represents an object's state). Builder objects typically build their products using a multistage process as well. That is, calls to various methods of the Builder are required to complete the construction process, and the Builder typically doesn't know the order in which the calls will be made or the number of times one of its methods will be called. Builder's most important characteristic is that the business object (called the Context) doesn't know exactly what the Builder object is building. The pattern isolates the business object from its representation. The best way to see how a simple builder works is to look at one. First let's look at the Context, the business object that needs to expose a user interface. Listing 1 shows a simplistic Employee class. The Employee has name, id, and salary attributes. (Stubs for these classes are at the bottom of the listing, but these stubs are just placeholders for the real thing. You can—I hope—easily imagine how these classes would work.) This particular Context uses what I think of as a bidirectional builder. The classic Gang of Four Builder goes in one direction (output), but I've also added a Builder that an Employee object can use to initialize itself. Two Builder interfaces are required. The Employee.Exporter interface (Listing 1, line 8) handles the output direction. It defines an interface to a Builder object that constructs the representation of the current object. The Employee delegates the actual UI construction to the Builder in the export() method (on line 31). The Builder is not passed the actual fields, but instead uses Strings to pass a representation of those fields. Listing 1. Employee: The Builder Context 1 import java.util.Locale; 2 3 public class Employee 4 { private Name name; 5 private EmployeeId id; 6 private Money salary; 7 8 public interface Exporter 9 { void addName ( String name ); 10 void addID ( String id ); 11 void addSalary ( String salary ); 12 } 13 14 public interface Importer 15 { String provideName(); 16 String provideID(); 17 String provideSalary(); 18 void open(); 19 void close(); 20 } 21 22 public Employee( Importer builder ) 23 { builder.open(); 24 this.name = new Name ( builder.provideName() ); 25 this.id = new EmployeeId( builder.provideID() ); 26 this.salary = new Money ( builder.provideSalary(), 27 new Locale("en", "US") ); 28 builder.close(); 29 } 30 31 public void export( Exporter builder ) 32 { builder.addName ( name.toString() ); 33 builder.addID ( id.toString() ); 34 builder.addSalary( salary.toString() ); 35 } 36 37 //... 38 } 39 //---------------------------------------------------------------------- 40 // Unit-test stuff 41 // 42 class Name 43 { private String value; 44 public Name( String value ) 45 { this.value = value; 46 } 47 public String toString(){ return value; }; 48 } 49 50 class EmployeeId 51 { private String value; 52 public EmployeeId( String value ) 53 { this.value = value; 54 } 55 public String toString(){ return value; } 56 } 57 58 class Money 59 { private String value; 60 public Money( String value, Locale location ) 61 { this.value = value; 62 } 63 public String toString(){ return value; } 64 } Let's look at an example. The following code builds Figure 1's UI: Employee wilma = ...; JComponentExporter uiBuilder = new JComponentExporter(); // Create the builder wilma.export( uiBuilder ); // Build the user interface JComponent userInterface = uiBuilder.getJComponent(); //... someContainer.add( userInterface ); Listing 2 shows the source for the JComponentExporter. As you can see, all the UI-related code is concentrated in the Concrete Builder (the JComponentExporter), and the Context (the Employee) drives the build process without knowing exactly what it's building. Listing 2. Exporting to a client-side UI 1 import javax.swing.*; 2 import java.awt.*; 3 import java.awt.event.*; 4 5 class JComponentExporter implements Employee.Exporter 6 { private String name, id, salary; 7 8 public void addName ( String name ){ this.name = name; } 9 public void addID ( String id ){ this.id = id; } 10 public void addSalary( String salary ){ this.salary = salary; } 11 12 JComponent getJComponent() 13 { JComponent panel = new JPanel(); 14 panel.setLayout( new GridLayout(3,2) ); 15 panel.add( new JLabel("Name: ") ); 16 panel.add( new JLabel( name ) ); 17 panel.add( new JLabel("Employee ID: ") ); 18 panel.add( new JLabel( id ) ); 19 panel.add( new JLabel("Salary: ") ); 20 panel.add( new JLabel( salary ) ); 21 return panel; 22 } 23 }
https://www.infoworld.com/article/2072302/more-on-getters-and-setters.html
CC-MAIN-2020-40
refinedweb
1,726
55.44
I’d like to use Java 8 streams to take a stream of strings (for example read from a plain text file) and produce a stream of sentences. I assume sentences can cross line boundaries. So for example, I want to go from: "This is the", "first sentence. This is the", "second sentence." to: "This is the first sentence.", "This is the second sentence." I can see that it’s possible to get a stream of parts of sentences as follows: Pattern p = Pattern.compile("\\."); Stream<String> lines = Stream.of("This is the", "first sentence. This is the", "second sentence."); Stream<String> result = lines.flatMap(s -> p.splitAsStream(s)); But then I’m not sure how to produce a stream to join the fragments into sentences. I want to do this in a lazy way so that only what is needed from the original stream is read. Any ideas? Breaking text into sentences is not that easy as just looking for dots. E.g., you don’t want to split in between “Mr.Smith”… Thankfully, there is already a JRE class which takes care of that, the BreakIterator. What it doesn’t have, is Stream support, so in order to use it with streams, some support code around it is required: public class SentenceStream extends Spliterators.AbstractSpliterator<String> implements Consumer<CharSequence> { public static Stream<String> sentences(Stream<? extends CharSequence> s) { return StreamSupport.stream(new SentenceStream(s.spliterator()), false); } Spliterator<? extends CharSequence> source; CharBuffer buffer; BreakIterator iterator; public SentenceStream(Spliterator<? extends CharSequence> source) { super(Long.MAX_VALUE, ORDERED|NONNULL); this.source = source; iterator=BreakIterator.getSentenceInstance(Locale.ENGLISH); buffer=CharBuffer.allocate(100); buffer.flip(); } @Override public boolean tryAdvance(Consumer<? super String> action) { for(;;) { int next=iterator.next(); if(next!=BreakIterator.DONE && next!=buffer.limit()) { action.accept(buffer.subSequence(0, next-buffer.position()).toString()); buffer.position(next); return true; } if(!source.tryAdvance(this)) { if(buffer.hasRemaining()) { action.accept(buffer.toString()); buffer.position(0).limit(0); return true; } return false; } iterator.setText(buffer.toString()); } } @Override public void accept(CharSequence t) { buffer.compact(); if(buffer.remaining()<t.length()) { CharBuffer bigger=CharBuffer.allocate( Math.max(buffer.capacity()*2, buffer.position()+t.length())); buffer.flip(); bigger.put(buffer); buffer=bigger; } buffer.append(t).flip(); } } With that support class, you can simply say, e.g.: Stream<String> lines = Stream.of( "This is the ", "first sentence. This is the ", "second sentence."); sentences(lines).forEachOrdered(System.out::println); This is a sequential, stateful problem, which Stream's designer is not too fond of. In a more general sense, you are implementing a lexer, which converts a sequence of tokens to a sequence of another type of tokens. While you might use Stream to solve it with tricks and hacks, there is really no reason to. Just because Stream is there doesn't mean we have to use it for everything. That being said, an answer to your question is to use flatMap() with a stateful function that holds intermediary data and emits the whole sentence when a dot is encountered. There is also the issue of EOF - you'll need a sentinel value for EOF in the source stream so that the function can react to it. My StreamEx library has a collapse method which is designed to solve such tasks. First let's change your regexp to look-behind one, to leave the ending dots, so we can later use them: StreamEx.of(input).flatMap(Pattern.compile("(?<=\\.)")::splitAsStream) Here the input is array, list, JDK stream or just comma-separated strings. Next we collapse two strings if the first one does not end with dot. The merging function should join both parts into single string adding a space between them: .collapse((a, b) -> !a.endsWith("."), (a, b) -> a + ' ' + b) Finally we should trim the leading and trailing spaces if any: .map(String::trim); The whole code is here: List<String> lines = Arrays.asList("This is the", "first sentence. This is the", "second sentence. Third sentence. Fourth", "sentence. Fifth sentence.", "The last"); Stream<String> stream = StreamEx.of(lines) .flatMap(Pattern.compile("(?<=\\.)")::splitAsStream) .collapse((a, b) -> !a.endsWith("."), (a, b) -> a + ' ' + b) .map(String::trim); stream.forEach(System.out::println); The output is the following: This is the first sentence. This is the second sentence. Third sentence. Fourth sentence. Fifth sentence. The last Update: since StreamEx 0.3.4 version you can safely do the same with parallel stream.
http://m.dlxedu.com/m/askdetail/3/e6e7edc48bd44bc8d58300aeda8abf1a.html
CC-MAIN-2018-22
refinedweb
727
60.72
File Attachments By Tim Dexter-Oracle on Jul 25, 2008 I have been exchanging emails with a colleague today on how a customer might attach documents to a Publisher PDF output. If I leave aside the EB aspect of the attachments framework and try and keep things generic there are options for everyone. Im also going to assume we are talking about attaching documents - images can be pulled into a template pretty easily, either via base64 encoded XML or via a URL. The following are all API approaches, we do not expose these through the user interface. Concatenation If you attachments are all PDF documents you can use the PDFDocMerger API to bind together the attachments with the main PDF document. Its supported from 5.6.2 onwards. those of you in the standalone release - its supported in any of your versions. Heres some sample code: import oracle.apps.xdo.common.pdf.util.PDFDocMerger; ... try { File[] inpFiles = {new File("c:\\temp\\inp.pdf"), new File("c:\\temp\\1.pdf"), new File("c:\\temp\\2.pdf")}; File outFile = new File("c:\\temp\\out2.pdf"); PDFDocMerger pdfDoc = new PDFDocMerger(inpFiles,outFile); pdfDoc.process(); } catch (XDOException e) { e.printStackTrace(); } Embedding You can attach pretty much any file to a PDF document using this method. The PDF contains the attachments but they are embedded into the document, not inline but as a set of links. Now this is somewhat of a hidden API, yep, its not in the documentation. I think Leslie has me on the hook to re-write the API docs anyway. So Im making a start. You can see the main document above the attached file list. As I mentioned you can attach any file type and as many as you like too. Its supported from 10.1.3.2 onwards. Here's the code snippet: import oracle.apps.xdo.common.pdf.util.PDFFileAttacher; ... try { PDFFileAttacher pdfA = new PDFFileAttacher("c:\\temp\\inp.pdf","c:\\temp\\out.pdf"); pdfA.addFile("PDF1","c:\\temp\\1.pdf"); pdfA.addFile("PDF2","c:\\temp\\2.pdf"); pdfA.process(); } catch (XDOException e) { e.printStackTrace(); } Book Binding I covered the rolls royce of binding in a couple of articles a while back: this is supported in version 5.6.3 and higher. Attachments Done! Posted by Nagamohan Uppara on July 28, 2008 at 09:06 AM MDT # Posted by Tim on August 04, 2008 at 02:26 AM MDT # Posted by Robnxtmx on February 17, 2009 at 12:31 AM MST # Posted by Margery Conneely on August 30, 2010 at 03:00 PM MDT #
https://blogs.oracle.com/xmlpublisher/entry/file_attachments
CC-MAIN-2016-22
refinedweb
426
66.94
Code. Collaborate. Organize. No Limits. Try it Today. Bioinformatics is an interdisciplinary field joining the field of Biology and Computer Science. Bioinformatics is concerned with the organization and analysis of biological data. For molecular biologists, it. My current major goal is developing and providing a software running under Windows can provide useful tools for the analysis of Biological data especially DNA sequence data and in the same time can be loaded on personal computer. In this project, I focused on using C# and SQL database languages in bioinformatics applications and developing bioinformatics algorithms, all C# codes in this research were collected together in a single program called "AzharDNA" which can be loaded on any personal computer prepared by Microsoft Windows, The results of this program will be exported in different data types like images, texts or tables. You must have a good background in many fields like molecular biology, algorithm and mathematics which are linked to Bioinformatics, I had developed simple algorithms for every operation in this program which define how any operation begins and ends and also most algorithms will be demonstrated in this project by using flowchart techniques. DNA is double-stranded molecule each type of base on one strand forms a bond with just one type of base on the other strand according to specific rule called base pair rule. Here purines form hydrogen bonds to pyrimidines this mean that adenine (A) forms a base pair with thymine (T) and guanine (G) forms a base pair with cytosine (C). This method will take the first strand's sequence and export the second strand sequence or the complementary sequence. public static string DNA_complementry(string Seq) { string DNA_Comp = ""; char[] d = Seq.ToLower().ToCharArray(); for (int n = 0; n < d.Length; ++n) { switch (d[n]) { case ('t'): d[n] = 'a'; break; case ('a'): d[n] = 't'; break; case ('c'): d[n] = 'g'; break; case ('g'): d[n] = 'c'; break; } DNA_Comp += Convert.ToString(d[n]); } return (string)DNA_Comp; } // or this code which has been suggested by Jaime Olivares public static string DNA_complementry(string Seq) { string ret = Seq.Replace('t', '*'); ret = ret.Replace('a', 't'); ret = ret.Replace('*', 'a'); ret = ret.Replace('c', '*'); ret = ret.Replace('g', 'c'); return ret.Replace('*', 'g'); }, ant parallel RNA strand. As opposed to DNA replication, transcription results in an RNA complement that includes uracil (U) in all instances where thymine (T) would have occurred in a DNA complement. public static string DNA_To_RNA(string Seq) { string RNA = Seq.ToLower().Replace('t', 'u'); return (string)RNA; } Reverse transcriptase creates single-stranded DNA from an RNA template (link). public static string RNA_To_DNA(string Seq) { //suggested by Pete O'Hanlon DNA = Seq.ToLower().Replace('u', 't'); return (string)DNA; } It's like DNA complementary but there is U here instead of A. public static string RNA_complementry(string Seq) { string RNA_Comp = Seq.Replace('u', '*'); RNA_Comp = RNA_Comp.Replace('a', 'u'); RNA_Comp = RNA_Comp.Replace('*', 'a'); RNA_Comp = RNA_Comp.Replace('c', '*'); RNA_Comp = RNA_Comp.Replace('g', 'c'); return RNA_Comp.Replace('*', 'g'); return (string)RNA_Comp; } This method returns the reversed sequence: public static string Reversion(string Seq) { string Rev_Seq=""; char[] d = Seq.ToLower().ToCharArray(); Array.Reverse(d); for (int i = 0; i < d.Length; i++) { Rev_Seq += d[i]; } return (string)Rev_Seq; } This method calculates the percentage of each nucleotide type against the total length of the sequence. The actual percentages vary between species and organism. The specific ratio that you as a human have is part of who you are, though order, of course, also matters. First, we will count the existence for every nucleotide. public static void G_C_A_T_Content (string Seq, out int A, out int C, out int G, out int T) { int g = 0; int a = 0; int c = 0; int t = 0; for (int i = 0; i < Seq.Length; i++) { if (Seq[i] == 'a') a++; else if (Seq[i] == 't') t++; else if (Seq[i] == 'c') c++; else if (Seq[i] == 'g') g++; } G = g; C = c; T = t; A = a; } Then we will use this method in the percentage method. public static void Nu_Percentage (string Seq,out float Apr,float Cpr,float Tpr,float Gpr) { float gn = 0; float cn = 0; float tn = 0; float an = 0; G_C_A_T_Content(Seq.ToLower(), out an, out cn, out gn, out tn); float apr = an / k.Length * 100; float tpr = tn / k.Length * 100; float gpr = gn / k.Length * 100; float cpr = cn / k.Length * 100; Gpr = gpr; Cpr = cpr; Tpr = tpr; Apr = apr; } In molecular biology,). In PCR experiments, the GC-content of primers are used to predict their annealing temperature to the template DNA. A higher GC-content level indicates a higher melting temperature. public static void GC_AT_Content(string Seq,out int GC_Content,out int AT_Content ) { int gc = 0; int at=0; for (int s = 0; s < Seq.Length; s++) { if (Seq[s] == 'C' || Seq[s] == 'G') gc++; if(Seq[s]=='A'||Seq[s]=='T') at++; } GC_Content=gc; AT_Content). This will be used in PCR Primer design equations and many tools in upcoming articles. public static double DNA_MW(string Seq) { int a = 0; int c = 0; int g = 0; int t = 0; G_C_A_T_Content(Seq, out a, out c, out g, out t); double MW = 329.2 * g + 313.2 * a + 304.2 * t + 289.2 * c; return MW; } DNA denaturation, also called DNA melting, is the process by which double-stranded deoxyribonucleic acid unwinds and separates into single-stranded strands through the breaking of hydrogen bonding between the bases. public static int DNA_Melting_Temp(string Sequence) { int GC_Content; int AT_Content; GC_AT_Content(Sequence, out GC_Content, out AT_Content); int Melt = 4*GC_Content + 2*AT_Content; return Melt; } This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) char[] d = Seq.ToLower().ToCharArray(); for (int n = 0; n < d.Length; ++n) { if (d[n] == 'u') { d[n] = 't'; } DNA += Convert.ToString(d[n]); } DNA = Seq.ToLower().Replace('u', 't');.
http://www.codeproject.com/Articles/226888/AzharDNA-New-Bioinformatics-Program-Basic-tools-fo
CC-MAIN-2014-23
refinedweb
978
56.55
Scheduling Posts Discussion Great tutorial - thanks! While everything appears to be working fine, I'm still running into the turbolinks caching issues you spoke of at the ~22min mark even after installing the jquery-turbolinks gem (and restarting the server). Is there some additional configuration that needs to happen in order for the gem to prevent that buggy js behavior? Thanks in advance! That's a great question Brian. I think that what I was experiencing was that the jQuery code I wrote never gets re-executed when the page changes via Turbolinks. Usually jquery-turbolinks fixes that by hijacking the page change event and that automatically fixes it. You might double check to make sure that jquery-turbolinks is being included in your application.js file properly. That's about the only thing that I can think of off the top of my head. I've gone through this, but when trying to load the form. I get `undefined method `published_at?' for nil:NilClass`. My helper to my eye, looks word for word as per the example: (my model is article, not post) module ArticlesHelper def status_for(article) if article.published_at? if article.published_at > Time.zone.now "Scheduled" else "Published" end else "Draft" end end end To fix the Turbolinks issue you could put your js into a wrapper like this: $(document).on("turbolinks:load", function() { # your javascript goes here... }); What do you guys think? Hi Chris, Since Rails 6, to the best of my knowledge, no longer supports CoffeeScript, do you have a suggestion for using this with Webpack for those who aren't familiar with Coffee Script? CoffeeScript works just fine in Rails 6. Hi Chris, I followed this video to add draft and scheduled post to Rails 6 a few weeks ago. I ran into one issue with the Publish At dates showing when first going to the Article page. But when the page was refreshed, it worked properly. I found at that time, if I changed the following in application.js require("turbolinks").start() to require("turbolinks") - it behaved as you would expect. I've now add AJAX uploading and needed to add some further jQuery. This made me have to re-add the .start() to the turbolinks require. Any suggestions on how I can have this work properly with Rails 6? Thanks, Nav
https://gorails.com/forum/scheduling-posts-gorails-gorails
CC-MAIN-2021-39
refinedweb
389
74.69
GoogleApi.Genomics.V1.Model.Action (google_api_genomics v0.17.0) View Source Specifies a single action that runs a Docker container. Attributes commands(type: list(String.t), default: nil) - If specified, overrides the CMDspecified in the container. If the container also has an ENTRYPOINTthe values are used as entrypoint arguments. Otherwise, they are used as a command and arguments to run inside the container. credentials(type: GoogleApi.Genomics.V1.Model.Secret.t, default: nil) - If the specified image is hosted on a private registry other than Google Container Registry, the credentials required to pull the image must be specified here as an encrypted secret. The secret must decrypt to a JSON-encoded dictionary containing both usernameand passwordkeys. encryptedEnvironment(type: GoogleApi.Genomics.V1.Model.Secret.t, default: nil) - The encrypted environment to pass into the container. This environment is merged with values specified in the google.genomics.v2alpha1.Pipeline message, overwriting any duplicate values. The secret must decrypt to a JSON-encoded dictionary where key-value pairs serve as environment variable names and their values. The decoded environment variables can overwrite the values specified by the environmentfield. entrypoint(type: String.t, default: nil) - If specified, overrides the ENTRYPOINTspecified in the container. environment(type: map(), default: nil) - The environment to pass into the container. This environment is merged with values specified in the google.genomics.v2alpha1.Pipeline message, overwriting any duplicate values. In addition to the values passed here, a few other values are automatically injected into the environment. These cannot be hidden or overwritten. GOOGLE_PIPELINE_FAILEDwill be set to "1" if the pipeline failed because an action has exited with a non-zero status (and did not have the IGNORE_EXIT_STATUSflag set). This can be used to determine if additional debug or logging actions should execute. GOOGLE_LAST_EXIT_STATUSwill be set to the exit status of the last non-background action that executed. This can be used by workflow engine authors to determine whether an individual action has succeeded or failed. flags(type: list(String.t), default: nil) - The set of flags to apply to this action. imageUri(type: String.t, default: nil) - Required. The URI to pull the container image from. Note that all images referenced by actions in the pipeline are pulled before the first action runs. If multiple actions reference the same image, it is only pulled once, ensuring that the same image is used for all actions in a single pipeline. The image URI can be either a complete host and image specification (e.g., quay.io/biocontainers/samtools), a library and image name (e.g., google/cloud-sdk) or a bare image name ('bash') to pull from the default library. No schema is required in any of these cases. If the specified image is not public, the service account specified for the Virtual Machine must have access to pull the images from GCR, or appropriate credentials must be specified in the google.genomics.v2alpha1.Action.credentials field. labels(type: map(), default: nil) - Labels to associate with the action. This field is provided to assist workflow engine authors in identifying actions (for example, to indicate what sort of action they perform, such as localization or debugging). They are returned in the operation metadata, but are otherwise ignored. mounts(type: list(GoogleApi.Genomics.V1.Model.Mount.t), default: nil) - A list of mounts to make available to the action. In addition to the values specified here, every action has a special virtual disk mounted under name(type: String.t, default: nil) - An optional name for the container. The container hostname will be set to this name, making it useful for inter-container communication. The name must contain only upper and lowercase alphanumeric characters and hyphens and cannot start with a hyphen. pidNamespace(type: String.t, default: nil) - An optional identifier for a PID namespace to run the action inside. Multiple actions should use the same string to share a namespace. If unspecified, a separate isolated namespace is used. portMappings(type: map(), default: nil) - A map of containers to host port mappings for this container. If the container already specifies exposed ports, use the PUBLISH_EXPOSED_PORTSflag instead. The host port number must be less than 65536. If it is zero, an unused random port is assigned. To determine the resulting port number, consult the ContainerStartedEventin the operation metadata. timeout(type: String.t, default: nil) - The maximum amount of time to give the action to complete. If the action fails to complete before the timeout, it will be terminated and the exit status will be non-zero. The pipeline will continue or terminate based on the rules defined by the ALWAYS_RUNand IGNORE_EXIT_STATUSflags. Link to this section Summary Functions Unwrap a decoded JSON object into its complex fields. Link to this section Types Specs t() :: %GoogleApi.Genomics.V1.Model.Action{ commands: [String.t()] | nil, credentials: GoogleApi.Genomics.V1.Model.Secret.t() | nil, encryptedEnvironment: GoogleApi.Genomics.V1.Model.Secret.t() | nil, entrypoint: String.t() | nil, environment: map() | nil, flags: [String.t()] | nil, imageUri: String.t() | nil, labels: map() | nil, mounts: [GoogleApi.Genomics.V1.Model.Mount.t()] | nil, name: String.t() | nil, pidNamespace: String.t() | nil, portMappings: map() | nil, timeout: String.t() | nil } Link to this section Functions Link to this function decode(value, options)View Source Specs Unwrap a decoded JSON object into its complex fields.
https://hexdocs.pm/google_api_genomics/GoogleApi.Genomics.V1.Model.Action.html
CC-MAIN-2021-25
refinedweb
878
50.84
Heapq is an abbreviation for heap and queues. They’re notable for tackling various challenges, including finding the best element in a dataset. We can optionally sort the smallest item first or vice versa when dealing with data collections. Python’s heapq belongs to the standard library. For the formation of a heap, this function uses a conventional Python list. Heaps are physical representations of data structures. In data structures, the queue is related to the suppositional type. In determining the interface, suppositional data structures are used. The implementation, on the other hand, is usually defined by the tangible/concrete type data structures. Heapq in Python The heap queue algorithm is implemented in Python by the heapq package. The latter employs the min-heap, in which the parent’s key is less than or equal to that of its children. In this article, we’ll explain how to use the heapq module in Python and show you some examples of how to use it with primitive data types and objects that contain complex data. Furthermore, both the heap and the queue perform well together when it comes to prioritization. Two things determine the priority. The first is to give the most significant element the highest property. Secondly is allocating the highest priority property to the lowest-value items. The second and more frequent technique is used by Python heapq. Priority queue It’s a more polished version of a standard queue. The priority queue dequeues the main queue, which subsequently does the same to the lower priority items. The Python language implements this priority queue in the form of a min-heap. By associating items with priorities, a priority queue extends the queue. When two elements somehow share the priority, the queue will serve them in FIFO order (first-in, first-out). Despite not being at the front line, items with higher priorities are dequeued first. A priority queue usually supports at least three operations: - add: adds a new item to the queue’s end. - Pop: obtains the item with the highest priority that is first in the queue. - is_empty: checks to see if the queue is empty A priority queue stands as an abstract data structure. Items should be arranged by priority, for example, according to abstract data structures. They do not, however, specify how they will be implemented. Data structures that are concrete, such as a binary heap, must implement a priority queue. The heapq module in Python is a popular approach to creating a priority queue. We would allocate priorities in ascending order to create a priority queue using a min-heap. The lowest-valued item receives the highest priority. When we add a new value, the reordering process guarantees that it is stored in a heap in the position that corresponds to its priority. We then obtain the node at the top of the heap when polling things from the queue. Sorting a heap The heap is a complete binary tree in programming, notably data structures. A heap is applied in two ways, one at a time, to two different data types. The parent node has more children than the child node. The parent node is smaller or equal to the child node in the min-heap. Heapq is a Python module that employs a min-heap, as previously mentioned. For example, find the least and most significant numbers given the provided list. Functions in Heapq Let’s look at the functions supplied by Python’s heapq model, assuming you understand how the heap data structure works. - heappush(heap, item)- Push the value item into the heap with heappush(heap, item). - heappop(heap) — Pop the heap and return the smallest value. - heappushpop(heap, item) — Return the lowest value from the heap by pushing the value item into it. - heapify(x) — Convert the list x to a heap - heapreplace(heap, item) — Pop the smallest value from the heap and return it, then push the value item into the heap. Heapq with Primitive Data Types: An Example Walkthrough We’ll make a heap using the heapify function on an essential list. Though, the modules are imported using the Python library. The latter library enables all operations to be carried out. This function rearranges all list elements by putting the smallest one first while the others are not. This function is identical to sorting, but it only applies to the first index in the case of an array. The numbers are used to begin the list. After that, use the heapify method. Let’s look at some examples of heapq functions in action. To begin, you must import the heapq module. import heapq Consider the following example list num_list . num_list = [72, 114, 33, 97, 61] heapify(num_list) Adding a new node with a smaller value than the most significant nodes will percolate up the tree, while the larger values will percolate down. If we add one to a min-heap, one will percolate up with two at the top. It’s worth noting that nodes on the same level don’t have to be in precise ascending order. The heap property, for example, would be preserved if 5 and 6 are swapped. The heapq module has a complete binary tree as part of its implementation. We may use the heapq module to turn an unordered list of elements into a priority queue. The following is the outcome of heapifying this list. It’s worth noting that heapifying takes place in real-time. The heapq library’s name is used to apply and call this method, and the list is supplied as an argument. Because the user-defined function must be declared, it seems to be a built-in function. The list is printed after each call. After running the code, go to the output console. import heapq num_list = [72, 114, 33, 97, 61] heapq.heapify(num_list) print(num_list) It’s worth noting that the 0th index contains the smallest value of all, 33. Let’s add the number ten to our heap. Push an element to a heapq If you wish to add a new item to an already existing heap, you can use the heappush() function to do so. By default, every new element is added to the last index. However, if the newly added item is the smallest, you can use the heapify function to move it to the top. If the list isn’t sorted, use heapify first. After then, you can use the push feature. The list and the item you want to add to the list are included in the function call’s arguments. When you run the code after finishing it, you’ll notice that the list has been heapified, and the following row displays the list with the additional number at the end. heapq.heappush(num_list,10) print(num_list) Take the item out of the heap list (pop from a heapq) You can then remove items from the priority queue, reordering the heap so that the item with the next greatest priority is placed first. You can use the push function to get rid of the data piece from the heapq. The zeroth index element, [0], will be removed. The heap list’s smallest element is this. Consider the following list of items. Apply the heap function to it, then eliminate the element until only the smallest element remains. After using this function, the entire list is displayed when you use the print function. The first piece is the only one that is eliminated. The next lowest element is inserted at zero indexes in its place. It is time to test popping an element from our heapq as follows. print(heapq.heappop(num_list)) print(num_list) print(heapq.heappop(num_list)) print(num_list) When we pop from our heap, the smallest value from the heap is removed and returned. The number 10 is no longer in our stack. Replace one of the heap’s elements (heapreplace() ) Aside from pop and push, the heapq module also provides the heapreplace function, which pops an object and immediately pushes a new one onto the heap. Let’s think about how to replace an item in a heap. Like with previous functions, the initial step is to heapify the list. The heapified list and the element to be entered instead of another item are passed to the replacement function. Keep in mind that the smallest number is permanently deleted, and the new number is assigned to an undefined location, or the order is unknown. Let’s have a look at the heapreplace() function now. Allow us to heapreplace the value 33 in a heap. print(heapq.heapreplace(num_list,32)) print(num_list) another heapreplace() example num_list= [7, 5, 3, 10, 8, 4, 9] heapq.heapify(num_list) print(num_list) heapq.heapreplace(num_list, 6) print(num_list) The order of push and pop activities in the heappushpop() and heapreplace() routines differ. As you can see, the lowest value, 32, gets popped first, followed by our new value, 3. The value of the 0th index in the new heap is 3. heappushpop() The list is now subjected to two combined processes. The first is heappushpop(), and the second is heapreplace(), which is already well-known, as we have covered above. We’ll take two lists and heapify them independently in this example. The heappushpop() function will accept an item and add it to the list while removing the smallest item. Heappushpop reverses the heapreplace order of operations, pushing first and popping last. Let’s have a look at the heappushpop() routines in action. Let’s take the value 32 and heappushpop it. print(heapq.heappushpop(num_list,32)) print(num_list) Example: heappushpop() and heapreplace() import heapq # list one list_one =[10,12,14,11,8] # list two list_two =[10,12,14,11,7] # heapify() is responsible for converting a list into a heap heapq.heapify(list_one) heapq.heapify(list_two) # heappushpop() is responsible for pushing and popping items simultaneously print(heapq.heappushpop(list_one,9)) # using heapreplace() to both push and pop items at the same time print(heapq.heapreplace(list_two, 8)) The first print statement ejected a number, while the second replaced a new number with the smaller one. Example: heappushpop() num_list= [7, 5, 3, 10, 8, 4, 9] heapq.heapify(num_list) print(num_list) heapq.heappushpop(num_list, 4) print(num_list) When you need to conduct a push and pop in quick succession, or vice versa, the last two approaches, heappushpop() and heapreplace(), are usually very efficient. The comparison procedures are the subsequent operations to be used. Nsmallest Nsmallest returns the smallest element in the list out of all the others. The list is heapified first, and then both operations are applied line by line. import heapq # list initialization new_list =[6,7,9,4,3,5,8,10, 1] heapq.nsmallest(2,new_list) An argument accompanies the list. The two smallest numbers are to be chosen according to this logic. Nlargest Nlargest returns the most significant item and fulfills the condition by using the key if one is provided. Below is a simple implementation to illustrate this. import heapq # list initialization new_list =[6,7,9,4,3,5,8,10, 1] heapq.nlargest(2,new_list) How to Use Objects in Heapq? In the previous example, we showed how to use heapq functions with simple data types like integers. Similarly, we may organize complex data like tuples or strings using objects and heapq methods. For this, we’ll need a wrapper class that fits our circumstances. Consider the following scenario: you wish to store strings and sort them from shortest to most considerable in length. The following is a diagram of our wrapper class. class DataWrap: def __init__(self, data): self.data = data def __lt__(self, other): return len(self.data) < len(other.data) The logic to compare the lengths of strings is contained in the lt function (it is the operator overloading function for comparison operators; >, ==, >= and <=). Next, we test creating a heap of strings. #") The following is the result of printing things from the heap. It’s worth noting that the shortest word, go, is at the top of the list. Wrapping strings allows you to try out various heapq routines. Using Heapq to implement a Max Heap Using heapq, we can implement a maximum heap. To order the greatest value to the front, alter the comparison operator in the lt function. Let’s attempt the same thing with strings and their lengths, as in the last example. class DataWrap: def __init__(self, data): self.data = data def __lt__(self, other): return len(self.data) > len(other.data) #") Notice how the length comparison has altered in the DataWrap class’s lt function. The following is the result of printing things from this heap. It’s worth noting that the longest word is written is now at the top of the stack. Heapq-based Custom Priority Queue In the previous example, we utilized the heap modules to convert a list into a priority queue. We used the heap module’s operations to alter the list explicitly. Because we may still interact with the list through the list interface, this is prone to errors and can mess up our priority queue. Assume we establish a priority queue, and when someone uses the list’s ordinary append function to append a member. num_list= [7, 5, 3, 10, 8, 4, 9] heapq.heapify(num_list) print(num_list) num_list.append(2) print(num_list) The number two is now near the back of the line rather than at the front. We can wrap the list in a new wrapper class that uses the heap to get around this issue. class PriorityQueue: def __init__(self, new_elements = None): if(new_elements == None): self.new_elements = list() elif type(new_elements) == list: heapq.heapify(new_elements) self.new_elements = new_elements def __str__(self): return str(self.new_elements) # function is responsible for ascertaining if the queue is empty def isEmpty(self): return len(self.new_elements) == 0 # code snippet to insert an element in the queue def push(self, new_elements): heapq.heappush(self.new_elements, new_elements) # code to pop an element based on Priority def pop(self): heapq.heappop(self.new_elements) The constructor of the priority queue class accepts an optional list of beginning values as a parameter. In the presence of an initial values list, we heapify it to build the priority queue’s order and then assign it to the element instance variable. #creating an empty priority queue pq = PriorityQueue() #creating a priority queue with initial values pq = PriorityQueue([7, 5, 3, 10, 8, 4, 9]) print(pq) We can now perform push and pop operations. The lowest value (with the highest priority) is always at the front line. from queue import PriorityQueue pq = PriorityQueue() pq.put(5) pq.put(4) pq.put(1) pq.put(2) pq.put(5) print(pq.get()) print(pq.get()) print(pq.get()) print(pq.get()) print(pq.get()) Conclusion We are hopeful that this article has been helpful and informative to help you ace through the heapq module. Please feel free to experiment with the provided code. It is an article that enumerates all of the primary heap and queue functions and operations to work as a module. This python feature allows the user to sort the first element in a sorted array; therefore, if you want to know the first element in a sorted array, use the heapq function. Other elements, on the other hand, are not ordered. Heap is also utilized in operating systems and artificial intelligence. After reading this guide, you’ll be well-versed with heapq and its python routines.
https://www.codeunderscored.com/heapq-in-python-with-examples/
CC-MAIN-2022-21
refinedweb
2,589
65.83
Hide Forgot The packages conflict with other packages in either Extras or Core. Details following in next comment. html-xml-utils-3.7-3.fc5.i386.rpm File conflict with: /usr/bin/count File conflict with: /usr/share/man/man1/count.1.gz => Package conflicts with: fish - 1.21.12-0.fc5.i386 devel: html-xml-utils - 3.7-4.fc6.i386 File conflict with: /usr/share/man/man1/count.1.gz File conflict with: /usr/bin/count File conflict with: /usr/bin/extract => Package conflicts with: fish - 1.21.12-1.fc6.i386 => Package conflicts with: csound - 5.03.0-3.fc6.i386 normalize - 0.7.7-2.lvn6.i386 File conflict with: /usr/share/man/man1/normalize.1.gz File conflict with: /usr/bin/normalize => Package conflicts with: html-xml-utils - 3.7-4.fc6.i386 This same error is still occuring in fc6: Transaction Check Error: file /usr/bin/count conflicts between attempted installs of html-xml-utils-3.7-4.fc6 and fish-1.21.12-1.fc6 file /usr/share/man/man1/count.1.gz conflicts between attempted installs of html-xml-utils-3.7-4.fc6 and fish-1.21.12-1.fc6 Just trying to find time to sort it. Gavin. How do I resolve this again? Well, FESCO needs to tell whether explict RPM "Conflicts: ..." are permitted in Fedora Extras. Generally, file names like "count" and "extract" are much too generic for the /usr/bin namespace. It would be wise if developers used some sort of prefix to avoid (or reduce the risk of) clashes. (In reply to comment #7) > Well, FESCO needs to tell whether explict RPM "Conflicts: ..." are > permitted in Fedora Extras. Well, yes, FESCo or the PC should sort this out. Anyway: > Generally, file names like "count" and "extract" are much too generic > for the /usr/bin namespace. It would be wise if developers used some sort > of prefix to avoid (or reduce the risk of) clashes. Much agreed. This is an area where Fedora IMHO should not allow Conflicts (explicit or implicit) (*). This sepcific problem should be fixed in a different way. (*) An explicit conflicts for example might be acceptable to prevent that both postfix and sendmail get installed at the same time (not the best exmple as it might be possible these days, but it should show what I mean) I poked the Packaging Commitee. Draft: html-xml-utils - 3.7-4.fc6.i386 Conflicts: 13 File conflict in: /usr/bin/count /usr/bin/extract /usr/share/man/man1/count.1.gz Packages with the same files: fish - 1.21.12-1.fc6.i386 libextractor - 0.5.17a-1.fc7.i386 csound - 5.03.0-9.fc7.i386 We really need to find a solution for the conflicts thing. We still after months have no ratified rules from the Packaging Committee how to handle conflicts (latest proposal is at -- seems there were enough votes on the mailinglist, but its still in the Drafts section).. html-xml-utils - 3.7-4.fc6.i386 File conflict with: csound - 5.03.0-13.fc7.i386 /usr/bin/extract File conflict with: fish - 1.21.12-1.fc6.i386 /usr/bin/count /usr/share/man/man1/count.1.gz File conflict with: libextractor - 0.5.17a-1.fc7.i386 /usr/bin/extract File conflict with: surfraw - 1.0.7-3.fc8.noarch /usr/bin/cite *** Bug 248685 has been marked as a duplicate of this bug. *** file /usr/bin/normalize from install of html-xml-utils-3.7-4.fc6 conflicts with file from package normalize-0.7.7-2.lvn6 (bug 248685) The conflict with "normalize" has been covered before in comment 3. (In reply to comment #11) > We really need to find a solution for the conflicts thing. We still after months > have no ratified rules from the Packaging Committee how to handle conflicts > (latest proposal is at > -- seems there were > enough votes on the mailinglist, but its still in the Drafts section). It seems that page has now moved out of the Drafts section: However, the only suggestion that page has for this situation is to convince upstream to rename their binaries. Are there any plans or drafts to cover the case where upstream doesn't rename the binaries? > >. Alternatives? Prefix the binaries with a "hxu-"? Suffix the binaries with "-hxu" ? (In reply to comment #17) > Alternatives? Alternatives is for alternative packages/binaries. Take a look at the "normalize" case - one works with sound and the other with XML - nothing in common, right ;-) (In reply to comment #17) > Alternatives? > > Prefix the binaries with a "hxu-"? > > Suffix the binaries with "-hxu" ? Personally, I like the prefix idea the best. If we want to be *really* pedantic about it, we would worry about the possibility that someone would contribute a package named "hxu". (There do seem to be a fair number of packages whose names seem to be a random string of three letters. ;-) I think that the probability that someone would name a package "hxu" that would also contain a binary named "normalize" to be extremely low. If we don't care about this, (or are willing to deal with it when the situation arises) then I think that prefixing the binaries with "hxu-" would be a great way to go. If we do care about this situation, then the obvious solution would be to prefix the binaries with "html-xml-utils-". But that's actually kind of cumbersome. In that case, might I propose a conflicts-alternatives in reply to comment #18? (I know it wouldn't pretty, but I'm not certain what we can do that would be.) For anyone whose still keeping score at this point, I just received the following error message from yum update: file /usr/share/man/man1/index.1.gz conflicts between attempted installs of netpbm-progs-10.35.32-1.fc7 and html-xml-utils-3.7-4.fc6 I see no more conflicts with html-xml-utils and latest netpbm updates to FC7, F8 as well as in rawhide. Should we close this? > Should we close this? No, this ticket is not specific to netpbm. Changing version to '9' as part of upcoming Fedora 9 GA. More information and reason for this action is here: FYI, I was just able to update both fish and html-xml-utils to the latest in f9. So no more conflicts between those two. I don't have access to an f8 machine, so I can't check that. This bug has been fixed in Fedora 9. Not true. Easy to verify (comment 12) even without special tools: Transaction Check Error: file /usr/bin/cite conflicts between attempted installs of surfraw-1.0.7-3.fc8.noarch and html-xml-utils-3.7-5.fc9.i386 [...] Also, please fill in your real name in bugzilla. Notes for Gavin: * Upstream has renamed the programs in html-xml-utils >= 5.0 (released in November) to resolve the conflicts. * There are several releases you've missed: 3.8 to 5.1 * Careful! There's a _licence change_ from GPL to something else.
https://bugzilla.redhat.com/show_bug.cgi?id=208781
CC-MAIN-2020-05
refinedweb
1,186
68.77
The QWSServer class provides server-specific functionality in Qt/Embedded. More... #include <qwindowsystem_qws.h> List of all member functions. When you run a Qt/Embedded application, it either runs as a server or connects to an existing server. If it runs as a server, some additional operations are provided by this class. This class is instantiated by QApplication for Qt/Embedded server processes. You should never construct this class yourself. A pointer to the QWSServer instance can be obtained via the global qwsServer variable. The mouse and keyboard devices can be opened with openMouse() and openKeyboard(). (Close them with closeMouse() and closeKeyboard().) The display is refreshed with refresh(), and painting can be enabled or disabled with enablePainting(). Obtain the list of client windows with clientWindows() and find out which window is at a particular point with windowAt(). Many static functions are provided, for example, setKeyboardFilter(), setKeyboardHandler(), setDefaultKeyboard() and setDefaultMouse(). The size of the window rectangle can be set with setMaxWindowRect(), and the desktop's background can be set with setDesktopBackground(). The screen saver is controlled with setScreenSaverInterval() and screenSaverActivate(). See also Qt/Embedded. This determines what sort of QWS server to create: This enum is used to pass various options to the window system server. This specifies what sort of event has occurred to a top-level window: Warning: This class is instantiated by QApplication for Qt/Embedded server processes. You should never construct this class yourself. The flags are used for keyboard and mouse setting. The server's parent is parent and it is called name. Returns the list of top-level windows. This list will change as applications add and remove wigdets so it should not be stored for future use. The windows are sorted in stacking order from top-most to bottom-most. See also setCursorVisible(). Returns the keyboard mapping table used to convert keyboard scancodes to Qt keycodes and Unicode values. It's used by the keyboard driver in qkeyboard_qws.cpp. Returns the QWSPropertyManager, which is used for implementing X11-style window properties. This signal is emitted when the QCopChannel channel is created. Refreshes the region r. This signal is emitted immediately after the QCopChannel channel is destroyed. Note that a channel is not destroyed until all its listeners have unregistered. See also isCursorVisible(). Sets the color c to be used as the background in the absence of obscuring windows. The filter is not invoked for keys generated by virtual keyboard drivers (events sent via sendKeyEvent()). If f is 0, the most-recently added filter is removed and deleted. The caller is responsible for matching each addition with a corresponding removal. See also QWidget::showMaximized(). This signal is emitted whenever something happens to a top-level window (e.g. it's created or destroyed). w is the window to which the event of type e has occurred. This file is part of the Qt toolkit. Copyright © 1995-2007 Trolltech. All Rights Reserved.
http://idlebox.net/2007/apidocs/qt-x11-free-3.3.8.zip/qwsserver.html
CC-MAIN-2013-48
refinedweb
486
60.51
Connect Streamlit to Deta BaseConnect Streamlit to Deta Base IntroductionIntroduction This guide explains how to securely access and write to a Deta Base database from Streamlit Cloud. Deta Base is a fully-managed, fast, scalable and secure NoSQL database with a focus on end-user simplicity. This guide uses the deta Python SDK for Deta Base and Streamlit's secrets management. Sign up for Deta Base and sign inSign up for Deta Base and sign in First, you need to sign up for Deta Base. Once you have an account, sign in to Deta. When you sign in, Deta will create a default project and show you the project's Project Key and Project ID. Note down your Project Key and Project ID. Be sure to store your Project Key securely. It is shown only once, and you will need it to connect to your Deta Base. Click to see your Project Key Securely store your Project Key Add Project Key to your local app secretsAdd Project Key to your local app secrets Your local Streamlit app will read secrets from a file .streamlit/secrets.toml in your app's root directory. Create this file if it doesn't exist yet and add the Project Key (from the previous step) of your Deta Base as shown below: # .streamlit/secrets.toml deta_key = "xxx" Replace xxx above ☝️ with your Project Key from the previous step. deta to your requirements fileAdd deta to your requirements file Add the deta Python SDK for Deta Base to your requirements.txt file, preferably pinning its version (replace x.x.x with the version you want installed): # requirements.txt deta==x.x.x Write your Streamlit appWrite your Streamlit app Copy the code below to your Streamlit app and run it. The example app below writes data from a Streamlit form to a Deta Base database example-db. import streamlit as st from deta import Deta # Data to be written to Deta Base with st.form("form"): name = st.text_input("Your name") age = st.number_input("Your age") submitted = st.form_submit_button("Store in database") # Connect to Deta Base with your Project Key deta = Deta(st.secrets["deta_key"]) # Create a new database "example-db" # If you need a new database, just use another name. db = deta.Base("example-db") # If the user clicked the submit button, # write the data from the form to the database. # You can store any data you want here. Just modify that dictionary below (the entries between the {}). if submitted: db.put({"name": name, "age": age}) "---" "Here's everything stored in the database:" # This reads all items from the database and displays them to your app. # db_content is a list of dictionaries. You can do everything you want with it. db_content = db.fetch().items st.write(db_content) If everything worked out (and you used the example we created above), your app should look like this:
https://docs.streamlit.io/knowledge-base/tutorials/databases/deta-base
CC-MAIN-2022-21
refinedweb
480
65.93
I have recently gone through the (very long) slog of getting SDK Tools to work with my project. I will make another post about that because the SDK Tools are so difficult to use it's honestly comical. I am trying to get every one of my files to minify into a single app-all.js. I have had luck with doing this manually by hacking together a large requires statement and dumping Ext.Loader.history, but I need a way to do this that will run on my build server - and besides, there's no excuse for not being able to automate this. One thing I noticed after a lot of time inside JSBuilder is that some of my application's files are being loaded synchronously after Ext.onReady(), which of course breaks JSBuilder because it is not smart enough to wait after that event. JSBuilder will not write this files to the .jsb3 and they will not be included in the build. As it turns out, the documentation on Ext.app.Application is wrong! It states that using statements like views:[/*views*/], stores:[/*stores*/], etc. in your config is the same as a properly namespaced requires statement. It is obvious by looking through the source that this is not the case, and in fact some (but not all) of the documentation related to this was removed in 4.1. Unfortunately the large text block on the top of Ext.app.Application still references this functionality in 4.1, even though it was removed. I think this is good functionality and clearly should remain in 4.1. Defining all views, controllers, stores, and models in the main Application object is clean and obviously integrates well with Architect. For now I have monkey-patched Ext.app.Application (talk about obnoxious, having to monkey-patch the very foundation of your app!). I have put this bug report here because this bug affects Architect in a very fundamental way and must be remedied either on your end or by the main ExtJS team. In general this has been a very painful build process that could have been made much easier by simple documentation of the SDK, consistent documentation of ExtJS4 (how bad is it when the documentation for Ext.app.Application is wrong?!), and a simple build guide for users of Architect. I would be very happy to write one, if only to save others the headaches I have gone through this week.
https://www.sencha.com/forum/showthread.php?203566-Ext.app.Application-loads-views-stores-and-models-synchronously
CC-MAIN-2016-50
refinedweb
411
64.81
Collections are groups of items; in .NET, collections contain objects (including boxed value types). Each object contained in a collection is called an element. Some collections contain a straightforward list of elements, while others (dictionaries) contain a list of key and value pairs. The following collection types consist of a straightforward list of elements: The following collection types are dictionaries: These collection classes are organized under the System.Collections namespace. In addition to this namespace, there is also another namespace called System.Collections.Specialized, which contains a few more useful collection classes. These classes might not be as well known as the previous classes, so here is a short explanation of the list: ListDictionary This class operates very similar to the Hashtable. However, this class beats out the Hashtable on performance when it contains 10 or fewer elements. HybridDictionary This class consists of two internal collections, the ListDictionary and the Hashtable. Only one of these classes is used at any one time. The ListDictionary is used while the collection contains 10 or fewer elements, and then a switch is made to use a Hashtable when the collection grows beyond 10 elements. This switch is made transparently to the developer. Once the Hashtable is used, this collection cannot revert to using the ListDictionary even if the elements number ... No credit card required
https://www.oreilly.com/library/view/c-cookbook/0596003390/ch09.html
CC-MAIN-2019-39
refinedweb
221
56.76
Friday Sep282018 Stuff The Internet Says On Scalability For September 28th, 2018 Hey, it's HighScalability time: @danielbryantuk: "A LAMP stack is a good thing. Never inflict a distributed system on yourself unless you have too..." @mipsytipsy #CloudNativeLondon@danielbryantuk: "A LAMP stack is a good thing. Never inflict a distributed system on yourself unless you have too..." @mipsytipsy #CloudNativeLondon Do you like this sort of Stuff? Please support me on Patreon and you'll get 100 free cloud credits in heaven. Know anyone looking for a simple book explaining the cloud? Then please recommend my well reviewed book: Explain the Cloud Like I'm 10. They'll love it and you'll be their hero forever. - $2 billion: Pokémon GO revenue since launch; 10: say happy birthday to StackOverflow; $148 million: Uber data breach fine; 75%: streaming music industry revenue in the US; 5.2 TB: Fastly peak per second traffic; 10 billion: Ethereum requests per day; 01%: DNS resolution issues when the KSK rolls; 15B: projected gaming community views on Reddit; £4.1bn: saved by UK Government's Digital Transformation Journey; 10X: Core ML model runs faster on the A12 processor; 4 million: cores managed by Open Stack at Yahoo; 1PB: Azure's data box; 21 million: US Apple music subscribers; .675: Curry's league leading true shooting percentage; $3 trillion: taxes collected by IRS; 61,000: network of Mayan structures discovered using lidar; 90%: China's percentage of $4.2 billion increase in pure-play foundry market; - Quotable Quotes: - WhatsApp cofounder: I am a sellout. I acknowledge that. - MrTonyD: I was writing production code over 30 years ago (C, OS, database). It is much worse to be a software developer now. It used to be a very high autonomy job - where you were trusted to figure out your work process and usually given lots of freedom to dynamically define many of your deliverables (within reason). I remember when I first read about Agile - I looked at the practices and thought "I've done most of those." But when our nightly builds broke it was no big deal, we would just fix them when we got around to it (as opposed to managers now assigning blame and calling in people on weekends to "fix" it). And if things weren't going well then we might have daily brief meetings for a couple of weeks. But now there are managers who insist on daily standups irregardless of their actual business necessity. I could go on and on. There is a reason why I'm not a practicing programmer anymore - even though I love to code. - Tiger Woods: ." - @tomgara: "Slate makes more money from a single article that gets 50,000 page views on its site than it does from the 6 million page views it receives on Apple News in an average month" - Steve Case: We are seeing the beginnings of a slowing [in Silicon Valley] of what has been a brain drain the last 20 years. It’s not just watching where the capital flows, it’s watching where the talent flows. And the sense that you have to be here or you can’t play is going to start diminishing. - Martin Sústrik: Philosophers, by and large, tend to be architecture astronauts. Programmers' insight is that architecture astronauts fail. Or, maybe, they can succeed as in getting comfy job at IBM, but their designs don't translate into useful products. What else? After decades of pain, we have finally concluded that hierarchies of concepts don't work. That's not an obvious statement at all. After all, nobody gets fired from creating a hierarchy. - vl: I have a hilarious story about this from Google: I wanted second 30" monitor, so I filed a ticket. They sent me long email listing reasons why I shouldn't get a second monitor, including (numbers are approximate, employee count from 2013 or so) "If every googler gets an extra monitor, in a year it would be equivalent to driving Toyota Camry for 18000 miles." - @dhh: The iPhone XS is faster than an iMac Pro on the Speedometer 2.0 JavaScript benchmark. It's the fastest device I've ever tested. Insane 45% jump over the iPhone 8/X chip. How does Apple do it?! - @davidcrespo: how is this different from making entries in a traditional database how is this different from making entries in a traditional database how is this different from making entries in a traditional database how is this different from making entries in a traditional database re:@marshallk: Wow: Walmart will *require* food suppliers to upload data to IBM's blockchain in order to trace quality issues through the supply chain. They say issues that take 7 days to track today can be tracked in 2.2 seconds with the system. - @DrQz: Somewhat more revealing USL curves that go BEYOND the performance measurements (also corrects previous legend): • XDP-1 saturates single NIC at ~4 cores • DPDK scales "infinitely" but reaches NIC satn at ~13 cores • XDP-2 peaks before satn at ~15 cores due to non-zero β term. - Mark Szulyovszky: Would I have thought 3 years ago that we end up building a mobile tech stack on Microsoft’s & Facebook’s open source code, wishing that Google’s and Apple’s clunky and slow middle layer would disappear? No way. - Jeff Barr: We’re [AWS] not stopping at 12 TiB, and are planning to launch instances with 18 TiB and 24 TiB of memory in 2019. - Robert Gutman: The hyperscale leasing has been through the roof. - Joanne Itow: These high volume semiconductor products are enjoying growing end market applications but the growth is being challenged by capacity shortages and rising material costs. Companies are choosing different strategies to address these conditions resulting in a variety of fluctuations in the market. - Kevin Fogarty: the market for deep-learning chipsets will rise from $1.6 billion in 2017 to $66.3 billion by 2025, which includes CPUs, GPUs, FPGAs, ASICs, SoC accelerators, and other chipsets. - Vadim Tkachenko: the message I am trying to delivery is that MyRocks shows the better throughput on limited cloud resources, but also it is more cost efficient comparing to InnoDB - Erich Sanchack: We anticipate that, over time, air-cooled systems will be phased out and there will be a return to water. We are actively transitioning air cooling to water across our portfolio, at both the storage and chip levels, and using them to cool individual racks of up to 90kWh. - Shaun Nichols: US JEDI military cloud network is so high-tech, bidders will have to submit their proposals by hand, on DVD - Oren Eini:. - DanNeely: CPU. - Hamza Shaban: Google’s former chief executive predicts that the global Internet will split in two within the next decade, bifurcating into a Web led by the United States and another helmed by China with fewer freedoms and greater censorship - Steve Kerr: One of the things that I learned from Phil was how important it was being funny watching game film, editing stuff in from movies. Nobody I had ever played for had ever done that, and, to me, that was such an effective way of getting a message across. When you could tie together the point you’re trying to make on the basketball floor with a humorous message coming from a movie — when the message is clear and it carries over to what you’re trying to teach — you’re not having to either kiss up to the player or criticize them. - Jeff Dean: George Dahl and others in our research group worked with some chemists at different universities and took the simulator they were using, trained a neural net to do it. And now they can't distinguish the accuracy versus the original, sort of, HPC-style simulator, but the tool is 300,000 times faster. And any time you take your tool, and you make it 300,000 times faster, that just transforms how you would do science. Right? You can imagine, I'm going to go make coffee. Let me screen 100 million molecules, and then when I come back, I'll look at the 10,000 that are most interesting. - Brian Bailey: I just saw a worrying figure from Semico. They predict that SoC design costs will only rise by only 1.7% by 2023. If that is true, then I hope engineers suddenly became a lot more productive, because that figure suggests to me that the market is not yet looking at the future aggressively enough. - Jeff Dean: And I think there's huge potential for the thousands, and thousands, and thousands of heuristics in computer systems to really mostly be learned and adapt to how they're actually being used. - w8rbt: Yep. No one telling me no anymore and I can write Lambas to replace cron jobs, use RDS to replace DBs, use S3 and Glacier to replace Storage, etc. Fargate is awesome too. No gate keepers nor bureaucracy just code and git repos. That's why AWS is so awesome. And, I can show exactly how much everything costs. As well as work to reduce those costs. AWS added to a small, skilled dev team is a huge multiplier. - Ev Williams: Roughly speaking, there are two engineering cultures in Silicon Valley, which you could describe as hackers and engineers. And obviously Google is engineers. Engineers generally have computer science degrees and higher. They study the fundamentals. Hackers just want to make stuff work, and it’s not about doing it right necessarily. Facebook was kind of famously built by hackers, but they’re not the hippie hackers. Hippie hackers are a particular strain of hackers - DSHR: The legal system is gradually building trust in the evidentiary value of the Wayback Machine by requiring personal testimony from the people operating the underlying processes, and exposing these witnesses to cross-examination. Systems based on blockchains would have to undergo a similar process; those using public, permissionless blockchains would have difficulty doing so since it would not be possible to obtain testimony from "the people operating the underlying processes". In the case of ARCHANGEL, which is intended to use a permissioned blockchain, testimony would be required from both the operators of the network nodes and from the archives generating the hashes to be injected into the chain. Mere public confidence would, and should not, be sufficient. - spullara: It sounds very similar to the Rails frontend I helped replace at Twitter — no business logic was thrown away, it was done without any loss in application fidelity. We did get approximately 10x fewer servers with 10x less latency after porting it to the JVM as a result. However, without the latency improvement, I don't think we should have done it. Fewer servers just isn't as important as the developer time necessary to do the change as you just pointed out. Just as using the cloud to simplify things and reduce the amount of effort spent on infrastructure is the main driver of adoption. There is clearly a cross-over point though where you decide it is worth it. The CTOs you are speaking of are making that choice and it probably isn't a silly excuse. - Jeff Dean: So the TPU v3, which Sundar announced it at Google I/O in May 2018, essentially got water cooling. So a board has four of these chips on it, and the water cooling kind of goes to the surface of these chips and takes excess heat away. And the largest footprint deployment, we call these things pods, the TPU v2 pods were essentially made up of 64 of those devices, 256 TPU v2 chips. And the pod scale for the TPU v3 is much larger. So we have about eight times as much computation in the TPU v3 pod as we did in the TPU v2 pod. - Gather around my friends. Listen to ancient words of wisdom. What are some things that only someone who has been programming 20-50 years would know?: Everything in software development has already been invented. People just keep rediscovering it and pretending they invented it...Don’t trust the compiler. Don’t trust the tools. Don’t trust the documentation. Don’t trust yourself...We don’t need any more computer languages...Maintaining code is harder than writing it...You have been taught to program as though memory, processor time, and network bandwidth are all free and infinite. It isn’t, it isn’t, and it isn’t...You’re going to forget what your code does in a few months...Beware of programmers who speak in absolutes. Along the same lines is this oldy but goody: Notes on Distributed Systems for Young Bloods. Good comments on reddit and on HN. - What happens when the AIs get tired of humans stealing their jobs? Skynet. AI Company Accused of Using Humans to Fake Its AI. - The challenge with health checks — striking the balance between failure detection and reaction. Patterns for Resilient Architecture — Part 3: There are two types of failures that can impact a server instance: local or host failures...A major issue with health checks occurs when server instances make the wrong local decision about their health — and all of the other hosts in the fleet make that same wrong decision simultaneously — causing cascading failures throughout adjacent services. To prevent such a case, some load balancers have the ability to act smart: When individual server instances within a fleet fail health checks, the load balancer stops sending them traffic, but when server instances fail health checks at the same time, the load balancer fails open, letting traffic flow through all of them - Videos from Altitude San Francisco 2018 are now available. You might like BBR congestion control: a new model-based approach or Scaling Ethereum to 10B requests per day. - Yes, building your own is almost always cheaper—if you can. Why building your own Deep Learning Computer is 10x cheaper than AWS. Good discussion on HN. scosman: Missing 1 important point: ML workflows are super chunky. Some days we want to train 10 models in parallel, each on a server with 8 or 16 GPUs. Most days we're building data sets or evaluating work, and need zero. When it comes to inference, sometime you wanna ramp up thousands of boxes for a backfill, sometimes you need a few to keep up with streaming load. Trying to do either of these on in-house hardware would require buying way too much hardware which would sit idle most of the time, or seriously hamper our workflow/productivity. - Cloudflare's mission is to help build a better Internet. Google's mission is to organize all the world's information. It's interesting to see how aiming at different levels of the stack causes different decisions to me made. Introducing the Bandwidth Alliance: sharing the benefits of interconnected networks. What is Cloudflare offering? Free egress: So today — on the eve of Cloudflare's 8th birthday — we're excited to announce the Bandwidth Alliance: a group of forward-thinking cloud and networking companies that are committed to providing the best and most cost-efficient experience for our mutual customers...We are proud to announce the following cloud providers and hosting companies who have joined together with us in committing to reduced or zero bandwidth rates for mutual customers...To give you some sense, we estimate that current Cloudflare customers could save nearly $50 million in data transfer fees per year from hosting with a Bandwidth Alliance member as these programs come online. Also, Building With Workers KV, a Fast Distributed Key-Value Store. - Another win for Postcentralization. AWS wants to put their whitebox NAS in your datacenter. New – AWS Storage Gateway Hardware Appliance. Dog not included. - Interested in a new stack? Two years of Elixir at The Outline: We chose Elixir and Phoenix as the foundation of our website after being attracted to its concurrency model, reliability, and ergonomics...90ms is a 90th percentile response time on The Outline...We got this performance out of the box, without really any tuning or fine-grained optimizations...Elixir is so fast that we haven’t had much need for CDN or service level caching...Before reaching for Vue.js or Svelte, consider going old school and rendering your HTML on the server; you might be delighted.....Phoenix Channels are such a great abstraction over WebSockets...The Community is wonderful!...Stack traces are not always the best...Sometimes tests throw random warnings that I don’t know how to trace...Working with Ecto Associations are still hard...You usually don’t need a GenServer! - Looking for datasets? Try Google Dataset Search. - Scaling ETL: How data pipelines evolve as your business grows: Our new architecture extracts data into a data lake on Amazon’s S3. Then, our scheduler — Azkaban — transforms the data for analysis. Azkaban both populates the raw data lake as well as our data warehouse, for which we use Hive. We have an excellent team of data engineers that works closely with our business users to come up with a defined set of transformations to present as our standard business object. This allows us to decouple the source ingestion from the business level transformations. Allowing one set of engineering to concentrate on optimizing the flow from our production system into Hive tables and another set of engineers working on optimizing the data for the business. Also this means that as services are spun up on our excellent microservice architecture the access to source data is almost immediately available. This allow for insights into the data before engineers can transform it. Mitigating the downsides of the traditional ETL where business users would have to wait week or even days before measuring a new piece of the business. - Ben Kehoe, Cloud Robotics Research Scientist, Discusses Serverless @iRobot:...Serverless is all about the total cost of ownership. It’s not just about development time, but across on areas that need to support operating the environment...iRobot uses Red/Black deployments to have a completely separate stack when deploying. In addition, they flatten (or inline) their function calls on deployment. Both of these techniques are used as cost optimization techniques to prevent lambdas calling lambdas when not needed. - We don't hear all that much about Alibaba. Here's a good article Comparing Serverless offerings from Alibaba Cloud and AWS. Features are comparable if not favoring Alibaba. Available services are dominated by AWS. - A firmware bug on one of the IRS’s high-availability storage arrays caused an 11 hour outage at the IRS. There was a known fix, but because of internal rules about running code in production for 450 machine weeks at other sites before it can be installed at the IRS, the fix couldn't be installed. A sensible rule. The real problem was the SPoF. Review of the System Failure That Led to the Tax Day Outage: The root cause of the Tax Day outage was a cache overflow issue causing repeated warmstarts and had deployed a preventive script on the storage device that will monitor and correct the issue if the condition reoccurs...The Internal Revenue Service (IRS) relies extensively on information technology systems to annually collect more than $3 trillion in taxes, distribute more than $400 billion in refunds, and carry out its mission of providing service to America’s taxpayers in meeting their tax obligations. In Fiscal Year1 2017, the IRS processed more than 187 million Federal tax returns. Of those, over 150 million were individual returns, with over 87 percent of these individual returns filed electronically. During Calendar Year 2018, the IRS expects to receive approximately 153.7 million individual income tax returns, with more than 89 percent of those filed electronically...this new environment provides an agile solution for rapid deployment of continuous improvements, optimized data storage and retrieval, and flexibility to adapt to IRS growth. The IRS environment consists of 7.5 petabytes of disk space in various configurations in data centers and other facilities across the country...Unisys follows an internal company policy that requires microcode bundles to have, at a minimum, 450 machine weeks in a production environment prior to installation on IRS equipment...The Tier 1 storage architecture consists of seven high-speed, high-availability storage arrays with an average age of less than three years. Presently, three of the storage arrays, located at the Martinsburg Computing Center, host a portion of core tax processing data. However, there are no automatic failovers or built-in redundancies for these three storage arrays. Additionally, not all of the applications and systems data hosted in the Martinsburg Computing Center are fully replicated to the Memphis Computing Center. - Wonder if Turing ever read The Sandman, a short story written in 1816 about a man who falls in deeply in love with an automaton? Long ago people realized we humans are not all that good at telling us from them. Or does it say more about men who prefer to fall in love with a reflection of themselves? - Old Tech Sounds: hand mixer; 10 key; stareoscope; bolex. - Time to think of each house an edge availability zone in which all IoT stuff runs? Honeywell’s 'smart' thermostats had a big server outage and a key feature stopped working entirely — and customers were furious. I pulled our smart thermostat long ago for just his reason. When it's 105 degrees outside and your local device doesn't work because of remote problem, you get steamed. There's no reason for this code to run in a datacenter. I could run on a box in a house that's administered remotely. - Very thorough. Lots of insights. An Introduction to Serverless Microservices. - An advantage Microsoft has with VSCode is they can build first class tooling into an already excellent program. Have to wonder why Apple is letting all that backend iOS revenue slip through their digital fingers? Serverless with Microsoft Azure Functions: Using a combination of the local runtime and the extension, you can create a function project, add some functions, test and debug them locally, and then publish them to Azure — within just a few clicks or commands...Azure Function Bindings offers developers a simple way to integrate with a bunch of services...One of the really nice features is that ability to configure our bindings right alongside our function code in a function.json file...With Azure Functions, this feature comes out of the box. You just need to provide your function a HTTP Trigger, and then you can invoke it straight from the web...Function Proxies provide developers the ability to have a lightweight API gateway — right within your Function App. - Dropbox search is the opposite of Google, each customer is like their own internet. The result is a different approach. Architecture of Nautilus, the new Dropbox search engine: Nautilus consists of two mostly-independent sub-systems: indexing and serving.....We group a number of namespaces into a “partition,” which is the logical unit over which we store, index, and serve the data. We use a partitioning scheme that allows us to easily repartition namespaces in the future, as our needs change...For most documents, we rely on Apache Tika to transform the original document into a canonical HTML representation, which then gets parsed in order to extract a list of “tokens” (i.e. words) and their “attributes”.....The main advantage of using an ML-based solution for ranking is that we can use a large number of signals, as well as deal with new signals automatically. - Great explanation with nifty graphics. Accurately measuring layout on the web:. - Why is software hard? Look at what it takes just to input an address. How Etsy Localizes Addresses. - The EwoK microkernel: a microkernel targeting micro-controllers and embedded systems. It aims to bring an efficient hardening of embedded devices with a reduced impact on the device performances. EwoK has been designed to host complex drivers in userspace. - facebookincubator/LogDevice: scalable and fault tolerant distributed log system. While a file-system stores and serves data organized as files, a log system stores and delivers data organized as logs. The log can be viewed as a record-oriented, append-only, and trimmable file. Reader Comments (4) "The challenge with health checks" ends with a sentence about the Bandwidth alliance that does not seem to match the paragraphs above. It looks copy/paste related. Thanks Doug. Hi, I am the author of the post mentioned in your list: "The challenge with health checks — striking the balance between failure detection and reaction. Patterns for Resilient Architecture — Part 3...". Thanks for the callout. However, the end of the quote is not from my post "...to ensure that all our customers’ traffic on participating cloud providers could qualify for this offer. And, finally, we agreed with many other cloud providers to not just discount, but to waive data transfer fees entirely for mutual customers. " Can you please remove that from the quote? thank you! Adrian Sorry about that Adrian. There's no compiler for cutting and pasting :-)
http://highscalability.com/blog/2018/9/28/stuff-the-internet-says-on-scalability-for-september-28th-20.html
CC-MAIN-2019-18
refinedweb
4,189
63.19
> [MarioDebian] > > In other sites, iceweasel crash too, but with my small experience, > > youtube videos have more chance to crash iceweasel (metacafe or others > > crash less times) > > Can you reproduce it? Crashes are random, watching youtube videos, should crash every 5-10 videos. Today trying with metacafe, videos don't want to load :( With video.google.com crash with first video. Always, crash are when caching video or starting to play it (I suppose when flash open ALSA device) > > > Anyway, to get sources and compile from NEW? > > > > > > It is the same source as the one available from > <URL:>. > Only the version number is different. > > > My package put libflashsupport.so file in /usr/lib, this .so file is a > > iceweasel plugin or system lib? > > It is actually a iceweasel plugin plugin. It is a plugin for the > adobe flash plugin. :) > Ok fully agreed. > > In PulseAudio PerfectSetup talks about /usr/lib/libflashsupport.so > > Yes. One of the FTP-masters in debian did not like to have a plugin > in /usr/lib/, so I moved it in my package. > > > > > > > CJ van den Berg, from [pkg-pulseaudio-devel] have packaged > > flashplugin-nonfree-pulse > > > > > > > > We are talking about same package? > > Yes. See bug #449037. > > I disabled pulseaudio (compile time flag) because it is supposed to > show up in a different package. I can enable it, but I was told that > pulseaudio can handle both esd and oss and this made me believe that > it would do better as a separate package. > > Plugin have autodetection sound system. Try with pulse --> try with esound --> try with ALSA --> try with OSS You can change order without disabling it: You can move this block: flashsupport.c (line 371) #if defined(ESD) //Check if ESD is running if(!stat("/tmp/.esd/socket",&buf)) { if(audiodebug) fprintf( stderr, "ESD socket found\n"); audiodrivers = audiodrivers | AUDIO_ESD; } //ESD over network if((tmpenv=getenv("ESPEAKER"))!=NULL) { if(audiodebug) fprintf( stderr, "ESD variable ESPEAKER found\n"); audiodrivers = audiodrivers | AUDIO_ESD; } #endif and put before PULSE detection. (can use FLASH_FORCE_ESD env var too) I think that isn't good to have 2 different packages in Debian with the same proposal. There is another reason to not disable pulseaudio.... LTSP 5 comes with PulseAudio wich work many many many better than esound. I suppose that one of most important targets in debian-edu is thin clients networks and PulseAudio in sound is like compiz in desktop effects. PulseAudio use less network bandwich (200Kb/client <=> 5Mb/client) and is more robust than esound. I start using PulseAudio in thin clients from 2006 Sept, because some java educational apps don't work with esound (do you know JClic? [1][2] ) > Happy hacking, > -- > Petter Reinholdtsen > > I haven't introduced before, I'm Mario Izquierdo (aka mariodebian), I'm working right now for Madrid Goverment in his Ubuntu based distro called MaX, I'm subscribed some time ago in debian-edu and if my bit knowledge (and bad english speaking) is necesary I will try to help. [1] (in spanish but with screenshots) [2] Greetings -- Mario Izquierdo
https://lists.debian.org/debian-edu/2008/01/msg00167.html
CC-MAIN-2018-26
refinedweb
504
65.22
How To: Delete a Service Bus Service Namespace Updated: August 26, 2015 The following topic describes how to delete a service namespace from a Service Bus subscription. To delete a service namespace, in the list of namespaces select the service namespace you want to delete. To select the namespace, click anywhere to the right of the Namespace Name field (do not click on the name itself), then click Delete. Note that if you delete a service namespace, that name will become available for re-use within the same subscription after a few minutes. However, the name is blocked for re-use in any other subscription for 7 days. This enables you to delete a namespace and re-create it, while locking the name so that no one else can use it. If you want to use the namespace name in a different subscription, you can either wait for 7 days, or you can re-create the namespace under the old subscription and then create a support ticket to have that namespace moved to a different subscription.
https://msdn.microsoft.com/library/azure/hh690930.aspx
CC-MAIN-2015-40
refinedweb
176
63.43
Remuneration Statement (HRFORMS) FAQ Here you can see about Remuneration Statement (HRFORMS) Interview questions and answers that have been asked in several customer messages. 1) Performance problems - number of employees: Check the default value by calling transaction HRFORMS ==> choose "Select form",==> the tab page on the bottom right of the screen ==> choose the "Selection Screen" tab page and ==> the "Number of employees" parameter. Display this parameter on the selection screen or set a default value. Set the value to "1" to improve the system performance. The performance improvement has already been implemented in the PDF form, resulting in the side effect that the value "No. employees" = 0, does not make sense for the data processing either. This results in meaningless form output. In addition, you can control the number of spool requests in the PDF form with this value. "No. employees" = 1 means one spool output per employee, and a higher value means one spool request only is displayed. If external calls are made, "No. Employees = 0" triggers the following error: "No job started"; 2) Suppressing remuneration statements, various options of form control During the creation of remuneration statements using ABAP lists (RPCEDT*), there are various options on the selection screen regarding when a remuneration statement is to be printed and how retroactive accounting is displayed. The following information provides some tips about how this can be changed in HRFORMS. Basically, for performance reasons, you should exclude personnel numbers from the selection if you know that the output is not required or does not make any sense. For this purpose, use the method CHECK_PERSON in the BAdI HRFORM_HRF02 (the interface IF_EX_HRFORM_HRF02). This method checks if a person is to be rejected or skipped during the form creation. The method is called for every person BEFORE the data for the form is read by the database. In addition, there is the method CHECK_PERSON_LATE. Here, you can check the conditions of a wage type (option AF, DF) or insert a customer-specific condition (AZ,DZ). 3) Leaving date: HRFORMS uses the function module HR_LEAVING_DATE to determine the leaving date in the standard SAP system, in contrast to the remuneration statement without HRFORMS where the function module RP_HIRE_FIRE is used. As a result, the leaving date in HRFORMS corresponds to the field SCHLW-XFIRE. This always contains the last leaving date for reentries. The field PFIRE (last leaving date from the in-period view) is not used in HRFORMS. However, if you still want the leaving date to be displayed from the selected period, use the following modification proposal to do so: Use transaction HRFORMS_METADATA to create specific MetaStars, MetaDimensions, and so on. Firstly, create a new customer field with the relevant fields for the in-period view, for example, name the new customer field ZHIRE_FIRE_INPER and the relevant fields ZPHIRE and ZPFIRE. The customer field ZHIRE_FIRE_INPER is to be assigned to the function module that provides the entry date or leaving date for the in-period; the function module HRF_READ_EMPLOYEE_ATTRIBS. You can transfer and use the functions of this function module with a few modifications. The logic must select the entry of the internal tables 'lt_fire_dates' and 'lt_hire_dates' that is between BEGDA and ENDDA of the function module. Finally, the function must be transferred to the form and the new fields must be added to the layout. 4) Entry date: The entry date is determined by the function module HR_ENTRY_DATE where the feature ENTRY is not evaluated. The entry date is determined in a similar way in the remuneration statement using CEDT. In CEDT, (see SAP 357660), you can write a rule and, as required, display the fields SCHLW-PHIRE, I0041-DAT01 or P0016-EINDT. In HRFORMS, IT0016 is not read in the standard SAP system (this is also the case in CEDT, but the filled structures can be used as required only). Proceed as follows to read this data (IT0016): i) Call transaction HRFORMS_METADATA in change mode for a customer-specific MetaNet. ii) Create a MetaStar of the type "Master data". Enter 0016 in the infotype field. Make these entries on the far left tab of the tab page on the right. Remain in the customer namespace. iii) Use Drag and Drop to add MetaDimensions (such as employee, assignment, and so on) from the MetaDimensions tree to the MetaStar tree. Remain in the customer namespace if you create new MetaDimensions. iv) Set the read functions for the MetaDimensions that you have added to the MetaStar in the last step. Select "MetaStar" and enter a read function on the second tab page. - Use the same functions as in similar MetaStars. - Use HRF_PASS_EMPLOYEE_INFT, for example, for an employee or HRF_PASS_PERAS, for example, for an assignment. v) Define the field-to-field relationships for the MetaDimensions in the MetaStar that are filled from the fields of the MetaStar (third tab page). vi) On the fourth tab, define how the MetaFigures are filled in your MetaStar. vii) Add the new MetaStar to your InfoNet (Drag and Drop to the MetaNet tree): IT0041 is read by the MetaStar ITY_DATE_SPEC. Define the date to be printed in each case using the user exit (BAdI) HRPAD00_ENTRY_LEAVE. The standard entry date is in the dimension DIM_EMPLOYEE. 5) Error message when copying the form SAP_PAYSLIP_DE_P: Problems with ADS connection: The system issues the following error message when you copy the form: "Internal error occured in SAFP API" (message number HRFORMS 002) This error occurs if a specific RFC connection does not exist. This "ADS" connection must be configured. "Row type of DDED is not compatible with TLINE" Check the ADS connection, and in particular, if the password has expired. 6) Syntax error when generating the print program: "The field "P_IPVIEW" is unknown. It is neither contained in one of the specified tables nor it is defined by a "DATA" statement". 7) Country-specific features for Germany ==> statement of earnings directive a) General However, the changes have been made in the form SAP_PAYSLIP_DE_P only (exception 4.70, because it does not yet contain the Form Builder). The form SAP_PAYSLIP_DE has not been changed because we cannot continue with the duplication of SMARTFORM versus Form Builder due to the high maintenance effort. We will maintain the Form Builder for the tool that is used most frequently. b) Identification of wage types The standard procedure and the manipulation options are described in SAP 1438864. In addition, it is often assumed that the indicator must correspond to the actual taxation or contribution. The wage type only allows us to determine how it is encoded in this sense. Whether taxation or contribution existed is impossible to determine from a technical and, to a certain extent, business point of view. 8) Scrolling in the PDF remuneration form: Start the remuneration statement with the selection "No. employees" greater than 1. The print preview displays navigation fields (black arrow) to scroll (directly under the title "Print preview"). If "No. employees =1" is set, the system creates one spool request for every employee. However, the application cannot navigate between several spool requests. 9) Cumulation wage types: The HRForms form provides the option for SAP and customer-specific cumulation wage types. The SAP-specific cumulation wage types are defined in the table T596I in the subapplication CEDT. Customer-specific cumulation wage types can be defined with a specific subapplication (defined in the form) in the table T596J. In the class CL_HRFORM_HRF02 in the method READ_SUMWT, both tables are merged to one list and are then used during the creation of cumulation wage types from the payroll wage types. Both subapplications must be the same. Customers can also use the subapplication CEDT. This means it is easier to make changes without modifications to SAP-specific cumulation wage types. Create a customer-specific cumulation wage type with the same name with the changes in the table T596J to change an SAP-specific cumulation wage type. If the two subapplications are different, entries in the customer table T596J for an SAP-specific cumulation wage type are ignored. The same applies to entries of the SAP table T596I for customer-specific cumulation wage types. For example: Cumulation wage type (Appl) = wage types (T596I, Appl) + wage types (T596J, Appl) A(CEDT) = B(T596I, CEDT) + C(T596J, CEDT) A(CUST) = D(T596I, CUST) + E(T596J, CUST) This gives the following result in HRForms: A = B(T596I, CEDT) + E(T596J, CUST) The table T596A - Definition of subapplication (CEDT, among other things) The table T596G - Cumulation wage types per subapplication The table T596I - SAP definition of cumulation wage type The table T596J - Customer-specific definition of cumulation wage type 10) Creating remuneration statements using the report H99_HRFORMS_CALL: The report H99_HRFORMS_CALL can also be used to provide administrators with an option to create the remuneration statement without transaction HRFORMS. Advantage 1) The remuneration statement can be created directly using a report variant (with the relevant authorization), without selecting the form in advance. Advantage 2) The report can be included in the process model. 11) Transferring specific printer settings for Form Builder forms: i) Transferring additional parameters from the user master record You can set the output device and the attributes "Output Immediately" and "Delete After Output" for your own user profiles for the spool control. However, in HRForms, the output device only is transferred automatically. Use the BAdI HRFORM_HRF02 in the method BEFORE_PDF_PRINT to transfer the other two attributes as well. ... if sy-batch is initial. "only when not in batch mode CH_SFPOUTPUTPARAMS-reqimm = '*'. "Immediate print CH_SFPOUTPUTPARAMS-reqdel = '*'. "Delete after endif. ... When you enter the value "*", the specific specifications for these values are transferred to HRForms. ii) Transfer of printer to a background job The system always takes the printer from the user master record as the printer. This overwrites the printer specified in the job. This may be because many companies provide the remuneration statement using Employee Self-Service (ESS). If your company still wants to print centrally as before, you must also use the BAdI. To do this, see the attachment Badi_Printer.htm. 12) Eliminating the split indicator for specific wage types: A split indicator WPBP, for example, causes a wage type to be split in a period (multiple display). If you want to suppress this, delete the split indicator of the required wage types in the payroll table RT in the method MODIFY_PAY_RESULTS of the BAdI HRFORM_HRF02. 13) Cross-personnel number page counter for insertion: For performance reasons, the layout should always be called with the data for one personnel number only. However, as the page counting starts from the beginning during every call of the layout, you cannot use a cross-personnel number page counter. A solution would be to write the page counter to the ABAP memory and to read it from there again. In the Form Builder form SAP_PAYSLIP_DE_P, we already use this with the global variable G_TOTAL_PAGECOUNT (see SAP 1430459, point d.). However, in the layout for this base value, you must add the number of the current page to this personnel number in a script (xfa.layout.page(this)). IMPORTANT: SAP 1415445 makes this solution obsolete. 14) Retroactive accounting for master data changed subsequently: that, if retroactive accounting is performed, the current values of the for-period are not displayed in the HRFORMS remuneration statement. Instead, the values of the in-period are always displayed. This concerns all values on the master page (for example, the "Number of Children" (IT0012), "Cost center" (IT0001), and so on). 15) Form not active: HRForms forms are normally always inactive after a transport to the target system. However, you can use the BAdI HRFORM_AFTER_TRANSP toautomatically activate the forms after the transport (see SAP 744147). Caution: When you import the HR-SP, these forms are generated automatically. If a PDF form was delivered or changed within the HR-SP, it works only if there is an HTTP connection to an ADS server. You also require this connection if Adobe is not used in the system. The only way to deal with this error is to delete the form (tricky) or to establish this connection: SAP s 894009, 894389, and 925741. Otherwise, you cannot complete this upgrade step. 16) Error: "No job started" This error occurs in the case of an Adobe form if it is called externally, for example, using submit from IT0008. In this case, the "No. Employees" parameter must be set, in contrast to the general recommendations, to a different value to zero. 17) Dump: "OBJECTS_OBJREF_NOT_ASSIGNED" This dump occurs with the exception: CX_SY_REF_IS_INITIAL, mostly after the transport to another system. This is caused by missing authorizations. 18) IT0655: This infotype is evaluated in the EDT* programs and as of a specific Sync HR Support Package for 6.00 as of Enhancement Package 4, this function is also available in HRForms. (See SAP 1581310.) Otherwise, you must use the BAdI HRFORM_HRF02 and the method CHECK_PERNR. 19) Annual values missing in specific cases: When you display remuneration statements for periods to which retroactive accounting has been performed from a later period, the annual values are missing. This is due to the definition of the MetaStars CUMULATED_PAY and CUM_TAX_PAY_DE in the MetaNet, according to which current payroll results (type A) only are evaluated. As a result of retroactive accounting from a later period (for example, from March to January), there are at least two types of payroll results for January: A current (type A) with in-period March and a last (type p) with in-period January or February. If the P result does not have January as the in-period, there is still a third type of payroll result: An original (type O) with in-period January. When you display the remuneration statement of an earlier period (for example, for January), all results of later periods (that is, with an in-period after January) are ignored. If retroactive accounting has been performed from a later period (for example, March) to this earlier period (January), no current result (type A) is found for January. In these cases, it is not possible to recreate the remuneration statement subsequently, as it was originally set when retroactive accounting had not yet been performed. However, this only affects the values in the remuneration statement from the MetaStars that evaluate the current payroll results only, such as the annual values in the MetaStars CUMULATED_PAY and CUM_TAX_PAY_DE. 20) Fields are displayed with incorrect validity A complete list of - depending on validity according to time - correct entries is provided internally for each field. If entries are displayed in increasing or decreasing order, there is an incorrect setting in the layout of the form. Example: Display for administrator: The first page contains the current administrator; the second page contains the previously valid administrator, and so on. In this case, change the layout of the field from '*' to '0' or '1' (= always display current entry). 21) Error: FPRUNX 001 (ADS: ...) ADS: com.adobe.ProcessingException: com.adobe.Processin(200101ADS: com.adobe.ProcessingException: com.adob) If the ADS connection worked earlier and this error occurs now, you probably have to get around only the ADS server. If the connection is working and the error occurs only for individual personnel numbers or forms, it may be caused by incorrect master data. Example: The "Account Number" field in the form is of the type i. However, it is filled in IT0009 with a combination of numbers that also contains a hyphen. 22) Address fields are not printed, authorizations A new authorization object P_HRF_META was delivered. When this authorization is missing, address data and other things could also be missing. Authorization object P_HRF_META: Activity: 03 # display Country grouping: 99 - other countries HR Forms object name: PAYSLIP HR Forms object type: FORMCLASS # form class 23) Dump when executing a form, and syntax error when checking the relevant form layout Due to enhancements in the standard include RPCEDTD0_HRFORMS_INIT, a termination may also occur when you execute customer-defined forms, even though no change was made to these forms. In such a case, check the form in the form builder. If the system issues the error message "INCLUDE RPCEDTD0_HRFORMS_INIT: The data object "HRDATA" does not have a component with the name ...", compare the InfoNet of your form with that of the standard form. If required, add the InfoStar that is displayed with the error message above to the InfoNet of your form. 24) Form copy not available in certain languages For the copy to be available in all languages, you have to be logged on with the master language when copying (usually EN). 25) Suppressing the leave table if there is no data In the sample SAP_PAYSLIP_DE_P, the leave table is displayed even if there is no data. This is a design decision. If you want to suppress the table with its heading in a case like this, you have to remove the attribute for the minimum counter 1 on the tab page for the binding in the "Object" pallet for the "Absence" line in the layout of the "Absences" table. - This currently ensures that the system always displays a line, even if the table is empty. 26) Sorting If sorting does not occur as desired (that is, continuously on the basis of personnel numbers), check whether IT0031 was used to link the personnel numbers with a reference personnel number. These two personnel numbers are then output together so that they can be included in one envelope. If, despite this requirement, you need another type of sorting, use the presorting report H99_PRESORT_PERNR. In rare constellations, you can also sort the output according to a central person because the print program uses the logical database PNPCE. 27) Compensated leave See SAP 1849164 "Statement of Earnings Regulation (EBeschV) - FAQs", point e.) "Compensated leave days in the form SAP_PAYSLIP_DE_P". 28) Reconciling forms You do not need to reconcile the HRFORMS forms. All of the objects are cross-client objects apart from Customizing of the cumulation wage types. Tables: T596G, T596H, and T596I.
https://www.stechies.com/remuneration-statement-hrforms-interview-questions-answers/
CC-MAIN-2018-17
refinedweb
2,985
52.6
My task is to get a string with no spaces from the user and make the computer count the number of characters, letters, numbers, and special characters (i.e. !@#$%^&*) However the program seems to be skipping the first character no matter what category this character falls under. note that it does count it in the number of characters just not in its category example: cin >> aZ12!@ output: 6 characters, 1 letter, 2 numbers, 2 special characters. it always skips the first character. #include <iostream> #include <string> using namespace std; int main() { char str[100]; // available character string max is 99 characters int i; int lett; int num; int spec; cout << "Please enter a continuous string of characters with no spaces" << endl ; cout << "(example: ASO@23iow$)" << endl << endl ; //shows an example and then adds a blank line cout << "Enter your string: " ; cin >> str ; cout << endl ; while(str[i] != 0) { switch(str[i]) { case '0' ... '9': i++ && num++; break ; case 'a' ... 'z': i++ && lett++; break ; case 'A' ... 'Z': i++ && lett++; break ; default : i++ && spec++; } } cout << "your string has " << i << " characters" << endl ; //prints the number of numbers in the string cout << "Your string has " << num << " numbers in it." << endl ; cout << "Your string has " << lett << " letters in it." << endl ; cout << "Your string has " << spec << " special characters." << endl ; return 0 ; In your code, int i is not initialized. Using it is Undefined Behaviour. int i = 0; The same goes for the rest of your variables. Also this doesnt do what you think it does: i++ && lett++; This is not do both operations, its a Boolean operator. It employs something called short circuiting, which means if the first part of the && evalutes to false (ie 0), then the expression must be 0 so there is no point in evaluating the rest of it (ie the lett++ part). So for your first loop ( i == 0) your lett++ will be short circuited. Change these to: i++; lett++; If you fix this up it will work:
https://codedump.io/share/xrL7ZAKmfTmH/1/c-beginner-switch-isn39t-outputting-first-character-of-string
CC-MAIN-2018-05
refinedweb
327
79.8
Difference between revisions of "Xmonad/Frequently asked questions" Revision as of 09:38, 14 May 2008 xmonad: frequently asked questions For configuration tricks, and using xmonad.hs, see Xmonad/General_xmonad.hs_config_tips. For more documentation, see: Contents - 1 Installation - 2 Configuration - 2.1 How do I configure xmonad? - 2.2 Rebinding the mod key - 2.3 I don't want the focus to follow the mouse - 2.4 Does xmonad support a statusbar? - 2.5 Floating a window or sending it to a specific workspace by default - 2.6 Using a dock client - 2.7 Startup programs - 2.8 Setting the X cursor - 2.9 Removing the borders around mplayer - 2.10 I need to find the class / title / some other X property of my program! - 2.11 I don't use a statusbar, but I'd like to have layout displayed for some time when it changes - 2.12 How do I configure pointer-follows-focus? - 2.13 How can I make xmonad use UTF8? - 2.14 How do I use compositing with xmonad? - 3 Troubleshooting - 3.1 Help! xmonad just segfaulted - 3.2 Changes to the config file ignored - 3.3 xmonad does not detect my multi-head setup - 3.4 Missing X11 headers - 3.5 X11 fails to find libX11 or libXinerama - 3.6 Losing text when resizing xterms - 3.7 Showing fractions of lines in gvim - 3.8 XMonad doesn't save my layouts and windows - 3.9 Dynamic restart doesn't work - 3.10 Some keys not working - 3.11 Firefox's annoying popup downloader - 3.12 Copy and Paste on the Mac - 3.13 OpenOffice looks bad - 3.14 Problems with Java applications - 3.15 Cabal: Executable stanza starting with field 'flag small_base description' - 3.16 xmonad stops responding to keys (usually due to unclutter) - 3.17 An app seems to have frozen and xmonad stops responding to keys Installation! Configuration How do I configure xmonad? By editing the xmonad. For extensive information on configuring, see the links at the top of this page, and the configuration tips page. } I don't want the focus to follow the mouse Easy. There is a setting focusFollowsMouse in the xmonad.hs file; set it to False and restart with mod+q.. You can also use Gnome or KDE trays and menus with xmon Make space for a panel/dock/tray application)] } Floating a window or sending it to a specific workspace by default See Xmonad/General_xmonad.hs_config_tips for this. Using a dock client Dock clients, such as those used by Gnome, are fully supported in version xmonad 0.4 and later.. layout modifier: import XMonad import XMonad.Layout.NoBorders main = xmonad $ defaultConfig { layoutHook = smartBorders (layoutHook defaultConfig) } which adds 'smartBorders' to the default tiling modes. a from XMonad.Hooks.DynamicLog (was added after 0.6 release), which can also display current workspace, window name etc. and format them nicely. How do I configure pointer-follows-focus? Alternatively, if you are using the darcs release, you can use the already defined XMonad.Actions.UpdatePointer: myLogHook = dynamicLogWithPP .... >> updatePointer How can I make xmonad use UTF8? it's output import qualified System.IO.UTF8" Options . Troubleshooting: touch ~/.xmonad/xmonad.hs xmonad's recompilation checker, so xmonad.hs won't be recompiled. To fix this: xmonad --recompile after reinstalling the contrib library. xmonad does not detect my multi-head setup If you've installed xmonad via Debian or Ubuntu packages, note that the current version of libghc6-x11-dev was not built with Xinerama support. You'll have to build the Haskell X11, xmonad, and xmonad-contrib packages from source until this issue is resolved. If you've built from source,. The following ghci commands can help investigate your Xinerama setup. :m + Graphics.X11.Xinerama Graphics.X11.Xlib.Display compiledWithXinerama openDisplay "" >>= getScreenInfo compiledWithXinerama should give a Bool result, and getScreenInfo a list of screen Rectangles. Though this is the kind of fix that shouldn't work, it nearly always does: If compiledWithXinerama returns True, try: runhaskell Setup.lhs clean runhaskell Setup.lhs configure --user --prefix=$HOME runhaskell Setup.lhs build runhaskell Setup.lhs install --user for xmonad, and xmonad-contrib if you use it. Missing X11 headers Your build will fail if you've not installed the X11 C library headers at some point. ./configure for the Haskell X11 library will fail. To install the X11 C libs: - debian apt-get install libx11. This is also addressed by the HintedTile extension, which solves it automatically.. Dynamic restart doesn't work The dynamic reconfigure and restart feature (mod-q) assumes that xmonad is in your $PATH environment. If it isn't, restarting will have no effect. Also, see [[Xmonad/Frequently_asked_questions. OpenOffice looks bad OpenOffice won't use (strangely) the GTK look, unless the following environment variable is set: OOO_FORCE_DESKTOP=gnome Use this if you don't like the default look of OpenOffice in xmonad. cleanest way is to lie to Java about what window manager you are, by using the SetWMName extension. Another option is to use an AWT toolkit without this problem by setting:.
https://wiki.haskell.org/index.php?title=Xmonad/Frequently_asked_questions&diff=20942
CC-MAIN-2020-34
refinedweb
842
53.47
09 January 2010 15:10 [Source: ICIS news] LONDON (ICIS news)--INEOS Olefins & Polymers Europe has declared force majeure on linear low density polyethylene (LLDPE) C6 grades at its 320,000 tonne/year Grangemouth site following an unplanned closure at the end of 2009, a company source said on Friday. Production from the LLDPE plant was interrupted over the Christmas period by a technical issue linked to the main reactor. “INEOS will work to resolve this issue as quickly as possible, consistent with safe and prudent operating practice. The company expects the unit will remain closed for around two weeks,” the company said in a statement. It added that in the meantime INEOS would continue to work with customers to assist them in sourcing and securing alternative supplies. “We want to get the whole things sorted out as quickly as possible,” added the source. Polyethylene (PE) business was only beginning to be discussed for January but a clear upward trend had emerged as naphtha prices soared and product availability was tight. Spot LLDPE C4 prices were trading around €1,000/tonne ($1,430/tonne) FD (free delivered) NWE (northwest ?xml:namespace> ($1 = €0.70) To discuss issues facing the chemical industry go to ICIS connect
http://www.icis.com/Articles/2010/01/08/9324171/ineos-declares-force-majeure-on-lldpe-c6-from-grangemouth.html
CC-MAIN-2014-35
refinedweb
205
60.14
Troubleshooting¶ If you are having trouble with Builder you can help us help you by trying to do some basic troubleshooting. Here are some steps you can go through to try to discover what is going wrong. Verbose Output¶ You can increase the log verbosity of Builder by adding up to four -v when launching from the command line. # If running from flatpak flatpak run org.gnome.Builder -vvvv # If using distribution packages gnome-builder -vvvv Support Log¶ Builder has support to generate a support log which can provide us with details. From the application menu, select “Generate Support Log”. It will place a log file in your home directory. Counters¶ Builder has internal counters which can be useful to debug problems. Use the command bar (activated by Control+Enter) and type “counters” followed by Enter. This will bring up a new window containing the current values of the counters. If Builder has locked up, you can access the counters from outside of Builder. The command line tool dazzle-list-counters, can be used to access the counters. dazzle-list-counters `pidof gnome-builder` Note When running Builder from Flatpak, we do not currently expose the counters to the host. Use flatpak enter $PID /bin/bash to enter the mount namespace and then run dazzle-list-counters. Test Builder Nightly¶ If you are running the stable branch or an older distribution package, please consider trying our Nightly release to see if the bug has already been fixed. Doing this before reporting bugs helps reduce the amount of bug traffic we need to look at. We’ll usually ask you to try Nightly anyway before continuing the troubleshooting process. See installing from Flatpak for installation notes. File a Bug¶ We can help you troubleshoot! File a bug if you’re stuck and we can help you help us. See the Builder Bugzilla for creating a bug report.
http://builder.readthedocs.io/en/latest/troubleshooting.html
CC-MAIN-2018-30
refinedweb
316
65.32
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 18.16, “Using Build.scala Instead of build.sbt.” Problem In an SBT project, you want to use the project/Build.scala file instead of build.sbt to define your Scala project, or you need some examples of how to use Build.scala to solve build problems that can’t be handled in build.sbt. Solution The recommended approach when using SBT is to define all your simple settings (key/value pairs) in the build.sbt file, and handle all other work, such as build logic, in the project/Build.scala file. However, it can be useful to use only the project/Build.scala file to learn more about how it works. To demonstrate this, don’t create a build.sbt file in your project, and then do create a Build.scala file in the project subdirectory by extending the SBT Build object: import sbt._ import Keys._ object ExampleBuild extends Build { val dependencies = Seq( "org.scalatest" %% "scalatest" % "1.9.1" % "test" ) lazy val exampleProject = Project("SbtExample", file(".")) settings( version := "0.2", scalaVersion := "2.10.0", scalacOptions := Seq("-deprecation"), libraryDependencies ++= dependencies ) } With just this Build.scala file, you can now run all the usual SBT commands in your project, including compile, run, package, and so on. Discussion The Build.scala file shown in the Solution is equivalent to the following build.sbt file: name := "SbtExample" version := "0.2" scalaVersion := "2.10.0" scalacOptions += "-deprecation" libraryDependencies += "org.scalatest" %% "scalatest" % "1.9.1" % "test" As mentioned, the recommended approach when working with SBT is to define your basic settings in the build.sbt file, and perform all other work in a Build.scala file, so creating a Build.scala file with only settings in it is not a best practice. However, when you first start working with a Build.scala file, it’s helpful to see a “getting started” example like this. Also, although the convention is to name this file Build.scala, this is only a convention, which I use here for simplicity. You can give your build file any legal Scala filename, as long as you place the file in the project directory with a .scala suffix. Another convention is to name this file after the name of your project, so the Scalaz project uses the name ScalazBuild.scala (as shown at this URL). The “Full Configuration Example” in the SBT documentation The Full Configuration Example in the SBT documentation and the ScalazBuild.scala build file both show many more examples of what can be put in a Build.scala file. For instance, the Full Configuration Example shows how to add a series of resolvers to a project: // build 'oracleResolvers') } object CDAP2Build extends Build { import Resolvers._ // more code here ... // use 'oracleResolvers' here lazy val server = Project ( "server", file ("cdap2-server"), settings = buildSettings ++ Seq (resolvers := oracleResolvers, libraryDependencies ++= serverDeps) ) dependsOn (common) This code is similar to the example shown in Recipe 18.11, “Telling SBT How to Find a Repository,” where the following configuration line is added to a build.sbt file: resolvers += "Java.net Maven2 Repository" at "" The ScalazBuild.scala file also shows many examples of using TaskKey and SettingKey, which are different types of keys that can be used in SBT project definition files. See Also - The Full Configuration Example in the SBT documentation - The ScalazBuild.scala file - For more examples of using Build.scala files, see Recipe 18.6, “Creating a Project with Subprojects”; Recipe 18.10, “Using GitHub Projects as Dependencies”; and Recipe 18.11, “Telling SBT How to Find a Repository” The Scala Cookbook This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly: You can find the Scala Cookbook at these locations: Add new comment
https://alvinalexander.com/scala/sbt-how-to-use-build.scala-instead-of-build.sbt
CC-MAIN-2017-39
refinedweb
631
60.92
Adventures in .NET Assembly Cannot Be FoundThis causes the following error message:The assembly <assemblyname> could not be found at <path> or could not be loaded.You can still edit and save the document. Contact your administrator or the author of this document for further assistance. I did some digging around on Microsoft's VSTO Troubleshooting Page , and found this: Class=<namespace>.<classname> <Assembly: System.ComponentModel.DescriptionAttribute( "OfficeStartupClass, Version=1.0, Class=<namespace>.<classname>")> For information on changing the assembly path manually, see How to: Link an Assembly to a Word or Excel File. For a code example that changes the property from relative to absolute, see Code: Change the Assembly Path from Relative to Absolute in a Document (Visual Basic), or Code: Change the Assembly Path from Relative to Absolute in a Document (C#). For information on the custom properties, see Word and Excel Project Properties. Allow me to add to this list one more problem that can cause this error: the target machine does not have the Office 2003 PIA's installed. This is not documented anywhere that I can find, so I post it here. Cheers! I have created an excel file with the appropriate assembly which works fine if the assembly is located anywhere on my PC but not if the assembly is on a network with the excel file located locally. I have gone through everything I can think of, including all the solutions you have listed, but I still get the same error. Any ideas???? 1) The permission set for an assembly must be set up under Machine->All Code->LocalIntranet_Zone. 2) The _AssemblyLocation0 Document property must be give the fully qualified network path to the Assembly. You Should be able to run caspol.exe -rsg <Full path to Assembly> and see if there is some other permission set that is keeping the code from executing. Caspol.exe is located at <system>\Microsoft.NET\Framework\v1.1.4322. If this doesn't work, reply to this comment with more details about your project. I am having a similar problem with a WORD template, and have addressed both the assembly name and the caspol issues. Still I get the error. Any other ideas? Thanks in advance.
http://weblogs.asp.net/taganov/archive/2004/09/24/233847.aspx
crawl-002
refinedweb
371
56.15
/* * Tcl Handles -- * * Handles in Tcl are commonly used as a small string that represents * a larger set of data in a program. Tcl uses these for things like * file handles, sockets and the like. Tk uses them for fonts, images * and even widgets. * * Handles allow an extension to return a small string to the caller * that is usually held in a variable for later use. File handles * at the Tcl level look like 'file1'. All the caller need care about * is that little string to get to their handle. Internally, that little * 'file1' handle points to a much larger set of data. * * The standard method for handles in C extensions is to create a struct * that defines the data for your handle that also contains some very * basic information about the handle's origins and locations. When we * create a new handle, we also store the string representation in a * Tcl hash table so that we can look it up later if we're given just the * handle string. * * Let's assume we are creating an extension called Foo. The main * component in our extension that uses handles are Bars, and we are * storing data from an external source (another library, perhaps). * So, we need to create handler functions for Foo to create and use * handles for Bars. * */ /* Define a simple macro for getting our handles and returning * an error if we don't find one. Foo_GetBar will set the error * result in Tcl for us if we fail to find the handle. */ #define GETBAR( obj, ptr ) \ if( (ptr = Foo_GetBar( interp, obj )) == NULL ) return TCL_ERROR; /* Define a new Tcl object type that we will use for our handles. * We define our own object type because we want to store pointers * to our information within it. This way, if our object shimmers * to another object type (like a string or int), we will be able * to recognize it and shimmer back. */ static Tcl_ObjType Foo_BarType = { "foobar", NULL, NULL, NULL, NULL }; /* Initialize two variables that we'll use to keep track of our * handle counts. We increment the count variable everytime we * create a new handle, so that we never re-use a handle name. * * The epoch variable is incremented anytime a handle is deleted. * We increment the epoch because our handle can end up with many * relationships that we don't keep track of. For example: * * set bar [create bar handle] * set myBar $bar * * We now have two variables that point to our object. The problem * occurs when we do this: * * delete bar $bar * * Now, myBar points to an object that still thinks it is a Bar handle, * but we deleted the struct when we deleted the $bar handle. By tracking * the epoch, we can tell that something has been deleted, and our data may * no longer be valid. So, we're forced to go check the hash table for our * data. If we don't find it, our handle was deleted, and we return with * an error. * * All this basically means that you don't have to keep track of what people * are doing with your handles. The code will simply take care of cleaning * up useless objects as handles are deleted. */ static int barCount = 0; static int barEpoch = 0; /* Create our Tcl hash table to store our handle look-ups. * We keep track of all of our handles in a hash table so that * we can always go back to something and look up our data should * we lose the pointer to our struct. */ static Tcl_HashTable bars; /* Now, we want to define a struct that will hold our data. The first * three fields are Tcl-related and make it really easy for us to circle * back and find our related pieces. */ typedef struct Foo_Bar { Tcl_Interp *interp; /* The Tcl interpreter where we were created. */ Tcl_Obj *objPtr; /* The object that contains our string rep. */ Tcl_HashEntry *hashPtr; /* The pointer to our entry in the hash table. */ Bar *bar; /* Our native data. */ } Foo_Bar; /* *---------------------------------------------------------------------- * Foo_NewBar -- * * Create a new Bar object. * * Results: * Returns a pointer to the new object or NULL if an error occurred. * * Side effects: * Allocates enough memory to hold our structure and stores * the new object in a hash table. * *---------------------------------------------------------------------- */ static Foo_Bar * Foo_NewBar( Tcl_Interp *interp, Bar *bar ) { int new; char handleName[16 + TCL_INTEGER_SPACE]; Tcl_Obj *handleObj; Tcl_HashEntry *entryPtr; Foo_Bar *barStruct; /* Allocate enough memory for our struct. */ barStruct = (Foo_Bar *)ckalloc( sizeof(Foo_Bar) ); /* Create our handle string. */ sprintf( handleName, "bar%d", barCount++ ); /* Create our Tcl object and store pointers to our information in the internalRep. */ handleObj = Tcl_NewStringObj( handleName, -1 ); handleObj->typePtr = &Foo_BarType; handleObj->internalRep.twoPtrValue.ptr1 = barStruct; handleObj->internalRep.twoPtrValue.ptr2 = (void *)barEpoch; /* Setup our structure. */ barStruct->interp = interp; barStruct->objPtr = handleObj; barStruct->bar = bar; /* Store our information in the hash table. */ entryPtr = Tcl_CreateHashEntry( &bars, handleName, &new ); Tcl_SetHashValue( entryPtr, barStruct ); /* Store a pointer to our hash entry. */ barStruct->hashPtr = entryPtr; /* Set the Tcl object result so that our caller can just return the new handle as the Tcl result. */ Tcl_SetObjResult( interp, handleObj ); /* Return a pointer to the new struct. */ return barStruct; } /* *---------------------------------------------------------------------- * Foo_GetBar -- * * Get a pointer to a Foo_Bar object. * * Results: * Returns a pointer to the object. * * Side effects: * None * *---------------------------------------------------------------------- */ static Foo_Bar * Foo_GetBar( Tcl_Interp *interp, Tcl_Obj *objPtr ) { /* Check to see if this object is our type and has the same epoch as the current epoch. If either of these is false, we need to go out to the hash table to find our data. */ if( objPtr->typePtr != &Foo_BarType || (int)objPtr->internalRep.twoPtrValue.ptr2 != barEpoch ) { char *name; Foo_Bar *bar; Tcl_HashEntry *entryPtr; name = Tcl_GetString( objPtr ); entryPtr = Tcl_FindHashEntry( &bars, name ); if( !entryPtr ) { if( interp ) { Tcl_SetObjResult(interp, Tcl_ObjPrintf("invalid bar \"%s\"", name)); } return NULL; } if (objPtr->typePtr != &Foo_BarType && obj->typePtr && obj->typePtr->freeIntRepProc) { obj->typePtr->freeIntRepProc (obj); } bar = Tcl_GetHashValue( entryPtr ); objPtr->typePtr = &Foo_BarType; objPtr->internalRep.twoPtrValue.ptr1 = bar; objPtr->internalRep.twoPtrValue.ptr2 = (void *)barEpoch; } return (Foo_Bar *)objPtr->internalRep.twoPtrValue.ptr1; } /* *---------------------------------------------------------------------- * Foo_FreeBar -- * * Free a Foo_Bar object and all of its related pieces. * * Results: * Returns TCL_OK on success or TCL_ERROR on failure. * * Side effects: * Frees all the memory associated with the object as well * as delete the entry from the hash table. * *---------------------------------------------------------------------- */ int Foo_FreeBar( Tcl_Interp *interp, Foo_Bar *bar ) { /* Free up the native data. */ FreeBarFunction( bar->bar ); /* Delete our entry from the hash table. */ Tcl_DeleteHashEntry( bar->hashPtr ); /* Free the memory we allocated for the struct. */ ckfree( (char *)bar ); /* Increment the epoch. */ barEpoch++; return TCL_OK; } /* *---------------------------------------------------------------------- * Foo_Init -- * * Initialize the Foo extension. * * Results: * Returns TCL_OK on success or TCL_ERROR on failure. * * Side effects: * None * *---------------------------------------------------------------------- */ static int Foo_Init( Tcl_Interp *interp ) { /* Initialize our hash table. */ Tcl_InitHashTable( &bars, TCL_STRING_KEYS ); /* The rest of the initialization for our extension would go here. */ } Notice that the Foo_BarType record uses NULL for all the methods. There's no freeIntRepProc, since freeing the internal rep is a no-op -- deleting a Tcl_Obj * that points to a Bar doesn't delete the Bar, only the reference to the Bar. There's no updateStringProc because every object of this type always has a string rep. There's no setFromAnyProc because we never call Tcl_ConvertToType with typePtr = Foo_BarType -- and nobody else can either, since Foo_BarType is static and we never register it with Tcl_RegisterObjType. And there's no dupIntRepProc because Tcl's default implementation (a bitwise copy of objPtr->internalRep) does the right thing for this object type.So the only thing the Foo_BarType record is used for is to identify whether or not a Tcl_Obj * is one of "our" Tcl_Objs. George Peter Staplin May 18, 2005 - I suggest that you check the result of new when creating the hash entry, because integer overflow could occur and result in clobbering an existing object. A better behavior is to repeat the key/handle creation (with an increment of barCount) until new is true. The barCount overflow would be an issue in applications that create many handles, or run for long periods of time.Thanks for sharing this code. I found it useful. :) Q: What the heck is this epoch stuff and do I really need it?A: Primarily a mechanism for keeping track of whether cached data is still valid, or might need to be looked up again. You can do without it if you in Foo_GetBar always take the long route (calling Tcl_FindHashEntry etc.).Thank you. I will do without it because I simply can't grok all this. Does anyone have a simple example that uses handle0, handle1, and has a structure that keeps track of application specific pointers? George Peter Staplin 2007.09.21 I believe this code has a leak. I originally used it as an example. I found that this pattern was needed to fully eliminate leaks: (void)Tcl_GetString (obj); if (obj->typePtr && obj->typePtr->freeIntRepProc) { obj->typePtr->freeIntRepProc (obj); } /* Now you can safely replace the Tcl_ObjType pointer */nodk 2013-08-24: where exactly does this patch apply? I can't figure this out. 2007-09-21 VI Adding another field and a subtype table will allow for storing void * pointers with a subtype. On the extract the subtype can be checked before converting back. See the tcl plugin in pidgin (tcl_ref.c) for an idea of what I'm talking about RLE (2013-08-24): See Handles as commands
http://wiki.tcl.tk/13881
CC-MAIN-2016-44
refinedweb
1,516
64.41
Today we’re going to talk about MVVM pattern. We will discuss its advantages in comparison to MVC and take a look at two implementation examples, a small and a big one. You will be able to use the latter in your work as an example of a good architecture for almost any MVVM project. So, let’s start with the basics :) The basics One of the most widespread iOS patterns beginners normally start with is MVC (Model-View-Controller). Until some point we all use it in our projects. But time passes and your Controller grows bigger and bigger overloading itself. Let me remind you here that MVC Controller can talk both to Model and View. MVVM has a little bit different structure and you should memorize it: User → View → ViewController → ViewModel → Model. It means that the user sees a button (View), touches it and then ViewController takes it from there performing actions with the UI, for example changes the color of this button. Then ViewModel sends a data request to the server, adds data to the Model and performs some other actions with the Model. The main takeaway: we have a new ViewModel class that talks to the Model which means Controller is no longer responsible for it. Now the Controller does what it is supposed to do: works with Views and doesn’t even know that the Model exists. Practice Libraries working with MVVM pattern include ReactiveCocoa, SwiftBond/Bond and ReactiveX/RxSwift. Today we’re going to talk about the last framework — RxSwift. If you want to learn more, read about the difference between RxSwift and ReactiveCocoa . In short, RxSwift is a more modern solution written in Swift while ReactiveCocoa has been around for a little longer with its core being written in Objective-C. ReactiveCocoa has a lot of fans and a lot of tutorials are available for it. Bond is somewhat smaller and it’s a good choice for beginners but we’re pros here, aren’t we? So let’s leave it aside. RxSwift is a an extension of ReactiveX and it is getting more and more fans. But the choice is yours, of course. Simple example Let’s start with a simple example (it is indeed very basic, maybe I shouldn’t have shown it to you, oh well :) The task is easy: we use UIPageControl to show some images. We need only two elements for implementing it: UICollectionView and UIPageControl . Oh, and when on the app launch you need to demonstrate your user a logic of your app, you can use it too. And here’s our “masterpiece”: And one more thing. For our images to be centered right during scrolling we use CollectionViewFlowLayoutCenterItem and associate it with UICollectionViewFlowLayoutCenterItem.swift class (you can find it in the project folder). Here’s a GitHub link . Our Podfile: target 'PageControl_project' do use_frameworks! pod 'RxCocoa' pod 'RxSwift' end RxCocoa is an extesion for all UIKit elements. So we can write: UIButton().rx_tap and receive ControlEvent that belongs to ObservableType . Let’s say we have UISearchBar . Without RxSwift we would normally write our Controller as a delegate and watch changes of the text property. With RxSwift we can write something like this: searchBar .rx_text .subscribeNext { (text) in print(text) } And the key point here. For our task we don’t sign up Controller as a delegate for UICollectionView . We do the following instead: override func viewDidLoad() { super.viewDidLoad() setup() } func setup() { //initialize viewModel viewModel = ViewModel() viewModel.getData() //set pageCtr.numberOfPages //images should not be nil .filter { [unowned self] (images) -> Bool in self.pageCtrl.numberOfPages = images.count return images.count > 0 } //bind to collectionView //set pageCtrl.currentPage to selected row .bindTo(collView.rx_itemsWithCellIdentifier("Cell", cellType: Cell.self)) { [unowned self] (row, element, cell) in cell.cellImageView.image = element self.pageCtrl.currentPage = row } //add to disposeableBag, when system will call deinit - we`ll get rid of this connection .addDisposableTo(disposeBag) } As a result, we write less code, our code becomes more readable and if we don’t have data ( ViewModel.getData() returns Observable<[UIImage?]> ), nothing happens, we wouldn’t even start the whole process. Let’s take a closer look at the method of ViewModel getData() class. If we weren’t receiving data from the server (we will look at it a bit later), I would have added method for receiving data but since we are, I use private dataSource with images that I simply added to the project. func getData() -> Observable<[UIImage?]> { let obsDataSource = Observable.just(dataSource) return obsDataSource } Here we create Observable object and use just method that tells: return sequence that contains only one element, UIImage elements array. Note that ViewModel class is a structure. This way when using additional properties of this class we will have a ready initialization. Complex example I hope everything is clear with the first example. Now it’s time for the 2nd one. But first, some more tips. When working with sequences, in the end of each call you need to add addDisposableTo(disposeBag) to the object. let disposeBag = DisposeBag() — in example it is declared as property. Thanks to it when the system calls deinit resources are freed for Observable objects. Next, in this project we will be using Moya. It is an abstract class above, for example, Alamofire that, in its turn, is an abstract class above NSURLSession and so on. Why do we need it? For even more abstraction and for our code to look professional and have no slightly different methods that are identical in practice. Moya has an extension written for RxSwift. It is called Moya/RxSwift (yepp, pretty straightforward, isn’t it?). Let’s start with Podfile: platform :ios, '8.0' use_frameworks! target 'RxMoyaExample' do pod 'Moya/RxSwift' pod 'Moya-ModelMapper/RxSwift' pod 'RxCocoa' pod 'RxOptional' end To be able to work with Moya we need to create enum and put it under control of the TargetType protocol. In ReactX project folder this file is called GithubEndpoint.swift . We will be using api for github. We will only have four endpoints but you can add as many as you need in your own project. enum GitHub { case UserProfile(username: String) case Repos(username: String) case Repo(fullName: String) case Issues(repositoryFullName: String) } private extension String { var URLEscapedString: String { return stringByAddingPercentEncodingWithAllowedCharacters( NSCharacterSet.URLHostAllowedCharacterSet())! } } We will need private extension for String later. Now let’s turn GithubEndpoint into a subordinate of the TargetType protocol: extension GitHub: TargetType { var baseURL: NSURL { return NSURL(string: "")! } var path: String { switch self { case .Repos(let name): return "/users/\(name.URLEscapedString)/repos" case .UserProfile(let name): return "/users/\(name.URLEscapedString)" case .Repo(let name): return "/repos/\(name)" case .Issues(let repositoryName): return "/repos/\(repositoryName)/issues" } } var method: Moya.Method { return .GET } var parameters: [String:AnyObject]? { return nil } var sampleData: NSData { switch self { case .Repos(_): return "{{\"id\": \"1\", \"language\": \"Swift\", \"url\": \"\", \"name\": \"Router\"}}}".dataUsingEncoding(NSUTF8StringEncoding)! case .UserProfile(let name): return "{\"login\": \"\(name)\", \"id\": 100}".dataUsingEncoding(NSUTF8StringEncoding)! case .Repo(_): return "{\"id\": \"1\", \"language\": \"Swift\", \"url\": \"\", \"name\": \"Router\"}".dataUsingEncoding(NSUTF8StringEncoding)! case .Issues(_): return "{\"id\": 132942471, \"number\": 405, \"title\": \"Updates example with fix to String extension by changing to Optional\", \"body\": \"Fix it pls.\"}".dataUsingEncoding(NSUTF8StringEncoding)! } } } If you’re using methods other than GET , you can use switch.parameters — since we aren’t transferring anything, we simply return nil . Using switch you can transfer additional information your server needs. sampleData — since Moya works with texts, this variable is a must. Let’s start with our example. Here’s the storyboard: Binding elements with our Controller: @IBOutlet weak var searchBar: UISearchBar! @IBOutlet weak var tableView: UITableView! Adding several properties in our ViewController: var provider: RxMoyaProvider<GitHub>! var latestRepositoryName: Observable<String> { return searchBar .rx_text .filter { $0.characters.count > 2 } .throttle(0.5, scheduler: MainScheduler.instance) .distinctUntilChanged() } provider — it’s a Moya object with our enum type. latestRepositoryName — Observable<String>. Each time user starts writing something in the searchBar we watch the changes by subscribing to them. rx_text is from RxCocoa, category for UIKit elements we imported. You can take a look at other properties yourself. Then we filter text and use only the one that has over 2 symbols. throttle — a very useful property. If a user is typing too fast, we create a small timeout to prevent “bothering” the server. distinctUntilChanged — checks the previous text and if the text was changed, it lets it go further. Creating a model: import Mapper struct Repository: Mappable { let identifier: Int let language: String let name: String let fullName: String init(map: Mapper) throws { try identifier = map.from("id") try language = map.from("language") try name = map.from("name") try fullName = map.from("full_name") } } struct Issue: Mappable { let identifier: Int let number: Int let title: String let body: String init(map: Mapper) throws { try identifier = map.from("id") try number = map.from("number") try title = map.from("title") try body = map.from("body") } } And now we create ViewModel(IssueTrackerModel) : import Foundation import Moya import Mapper import Moya_ModelMapper import RxOptional import RxSwift struct IssueTrackerModel { let provider: RxMoyaProvider<GitHub> let repositoryName: Observable<String> func trackIssues() -> Observable<[Issue]> { return repositoryName .observeOn(MainScheduler.instance) .flatMapLatest { name -> Observable<Repository?> in return self.findRepository(name) } .flatMapLatest { repository -> Observable<[Issue]?> in guard let repository = repository else { return Observable.just(nil) } return self.findIssues(repository) } .replaceNilWith([]) } private func findIssues(repository: Repository) -> Observable<[Issue]?> { return self.provider .request(GitHub.Issues(repositoryFullName: repository.fullName)) .debug() .mapArrayOptional(Issue.self) } private func findRepository(name: String) -> Observable<Repository?> { return self.provider .request(GitHub.Repo(fullName: name)) .debug() .mapObjectOptional(Repository.self) } } Firstly, we have created two methods: findRepository and findIssues . The first one will be returning optional Repository object, the second — Observable[Issue] according to the same logic. mapObjectOptional() method will return optional object in case nothing is found and mapArrayOptional() will return optional array . debug() — will display debug info in the console. trackIssues() joins these two methods. flatMapLatest() is an important element here creating one sequence after another. Its fundamental difference from flatMap() is that when flatMap() receives a value, it starts a long task and when receiving the next value it is completing the previous operation. And it’s not what we need since the user might start entering another text. We need to cancel the previous operation and start a new one — flatMapLatest() will help us here. Observable.just(nil) — simply returns nil that will be further replaced with an empty array with the next method. replaceNilWith([]) — replaces nil in an empty array. Now we need to bind this data with UITableView . Remember that we don’t need to subscribe to UITableViewDataSource , RxSwift has rx_itemsWithCellFactory method for it. Here’s a setup() method within viewDidLoad() : func setup() { provider = RxMoyaProvider<GitHub>() issueTrackerModel = IssueTrackerModel(provider: provider, repositoryName: latestRepositoryName) issueTrackerModel .trackIssues() .bindTo(tableView.rx_itemsWithCellFactory) { (tableView, row, item) in print(item) let cell = tableView.dequeueReusableCellWithIdentifier("Cell") cell!.textLabel?.text = item.title return cell! } .addDisposableTo(disposeBag) //if tableView item is selected - deselect it) tableView .rx_itemSelected .subscribeNext { indexPath in if self.searchBar.isFirstResponder() == true { self.view.endEditing(true) } }.addDisposableTo(disposeBag) } Another thing to remember is in which stream operations will be carried. We have two methods: subscribeOn() and observeOn() . What’s the difference between them? subscribeOn() indicates in which stream the chain of events will be started while observeOn() — where the next one should be started (see the image below). Here’s an example: .observeOn(ConcurrentDispatchQueueScheduler(globalConcurrentQueueQOS: .Background)) .subscribeOn(MainScheduler.instance) So, looking through all the code you can see that with MVVM we need much less lines of code for writing this app than with MVC pattern. But when you start using MVVM, it doesn’t mean that it should go everywhere — pattern choice should be logical and consistent. Don’t forget that your top priority is writing good code that is easy to change and understand. Code from this article is available on GitHub .
http://126kr.com/article/50qwg1bdsoa
CC-MAIN-2017-04
refinedweb
1,961
51.14
curl_slist_append - add a string to an slist #include <curl/curl.h> struct curl_slist *curl_slist_append(struct curl_slist * list, const char * string ); curl_slist_append() appends a specified string to a linked list of strings. The existing list should be passed as the first argument while the new list is returned from this function. The specified string has been appended when this function returns. curl_slist_append() copies the string. The list should be freed again (after usage) with curl_slist_free_all(3). A null pointer is returned if anything went wrong, otherwise the new list pointer is returned. CURL handle; curl_slist *slist=NULL; slist = curl_slist_append(slist, "pragma:"); curl_easy_setopt(handle, CURLOPT_HTTPHEADER, slist); curl_easy_perform(handle); curl_slist_free_all(slist); /* free the list again */ curl_slist_free_all (3) This HTML page was made with roffit.
http://maemo.org/api_refs/3.0/connectivity/curl/curl_slist_append.html
CC-MAIN-2017-04
refinedweb
121
65.73
had done somethings mistake in pathInfo, try to change to this solutions : Instead of pathInfo="PROVIDER=Microso Change to pathInfo="Provider=Microso Regards x_com Experts Exchange Solution brought to you by Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial Could you tell me what was wrong with the previous statement. Now I get a new error it says that the type or namespace name "server" could not be found in your statment. I assume your access database resides in the "root" directory - ie. "/info.mdb". If so, the problem with your connection string path is that the ABSOLUTE FILE PATH is required. For example, the path info should be something like: pathInfo="Provider=Microso As coolsahara mentioned, if you want to use a RELATIVE PATH (as perhaps this makes more sense to you - this is most common) you would use: pathInfo="Provider=Microso Hope it all works.
https://www.experts-exchange.com/questions/20778062/Getting-an-error-unrecognized-escape-sequence.html
CC-MAIN-2018-30
refinedweb
169
59.84
Released my fourth React Native app After releasing my first three React Native fitness apps, I published my fourth one for Android today: Motivational Alarm Clock There are a lot of great motivational videos on YouTube and I thought it would be a good idea to start my day with one of these, instead of a beeping sound. And this is exactly what it does: You set your alarms for specific days of the week and whenever an alarm goes off, it starts playing a daily new inspiring video. You can read more about the Motivational Alarm Clock app on the Google Play store - this post is about the development part. How was development? Smooth. It took me three weeks from the idea to publishing it on the app store, mostly because I was already familiar with React Native, and my current way to structure the app state works well for me. Note that the three weeks also include a lot of non-development work like creating the images, creating a playlist for the motivational videos, doing some keyword research, writing an app store description, testing on an actual device, … But, let’s have a look at some development choices I made: - Handling state: I used Reduxand seamless-immutable. You can read more about app state handling on my blog, for instance here or here. Material Design: For my first three apps I used react-native-material-design, but it’s not maintained anymore and lacks some components I wanted to use, like a TabBar. I switched to react-native-elementswhich is a great collection of good-looking components, but they don’t ship with a Material Design integration. What I ended up doing was to only use the font and color styles of react-native-material-designand combine these with the react-native-elementscomponents. This worked better than expected, especially when I found out it’s nicely modularized and you can access just the styling part by installing react-native-material-design-styles: import { color, typography } from 'react-native-material-design-styles' - Portrait/Landscape Mode: You can watch the video in either Portrait or Landscape mode, but all other scenes should be in Portrait mode. React-native-orientationhas a lockToPortrait/lockToLandscapefunction that you can use to achieve that by coupling it with its orientation-changelistener. I noticed the more experience I get, the more likely I am to build my own React Native modules. I always have this vision of how the User Experience should be, and if I cannot achieve it with existing modules, I end up building something on my own now, instead of settling for a sub-optimal solution with existing tools. The hardest part for me, someone with little prior native Android experience, was actually getting the gradle build system to work in Android Studio - the programming part and figuring out Intents was easy. - I created a react-native-app-launcherlibrary using Android’s AlarmManagerto schedule the alarms in a way that an alarm icon is shown in the Status Bar. The alarms still work when the device is idle or the application is closed. - When there is no Internet access, the alarm simply falls back to a ringtone alarm, so I wrote a small SoundManagermodule in Java that can play / pause the default ringtone and adjusts its volume. It’s on GitHub I decided to release the source code for the whole app in the hope that it will be helpful to someone just starting out with React Native. You often only see small code snippets or the infamous To-Do app, but having many components interact with each other poses different challenges again. It’s available on GitHub as react-native-motivation-app. I’m aware that someone could just disregard my copyright and publish the app under his own name, but I feel like it’s worth the risk and in the end people are lazy and developers usually have better things to do. And it probably won’t earn you any money. So, I’m not really concerned.
https://cmichel.io/released-fourth-react-native-app
CC-MAIN-2018-34
refinedweb
677
53.24
Back to index #include <nsSimpleURI.h> Definition at line 54 of file nsSimpleURI.h. Definition at line 64 of file nsSimpleURI.cpp. { NS_INIT_AGGREGATED(outer); } Definition at line 69 of file nsSimpleURI.cpp. { } Clones the current URI. For some protocols, this is more than just an optimization. For example, under MacOS, the spec of a file URL does not necessarily uniquely identify a file since two volumes could share the same name. Definition at line 375 of file nsSimpleURI.cpp. { NS_ENSURE_ARG_POINTER(aResult); NS_ENSURE_PROPER_AGGREGATION(aOuter, aIID); nsSimpleURI* url = new nsSimpleURI(aOuter); if (url == nsnull) return NS_ERROR_OUT_OF_MEMORY; nsresult rv = url->AggregatedQueryInterface(aIID, aResult); if (NS_FAILED(rv)) delete url; return rv; } URI equivalence test (not a strict string comparison). eg. ==.. This method resolves a relative string into an absolute URI string, using this URI as the base. NOTE: some implementations may have no concept of a relative URI.. The URI host with an ASCII compatible encoding. Follows the IDNA draft spec for converting internationalized domain names (UTF-8) to ASCII for compatibility with existing internet infrasture. Definition at line 220 of file nsIURI.idl. The URI spec with an ASCII compatible encoding. Host portion follows the IDNA draft spec. Other parts are URL-escaped per the rules of RFC2396. The result is strictly ASCII. Definition at line 213 of file nsIURI.idl.. The host is the internet domain name to which this URI refers. It could be an IPv4 (or IPv6) address literal. If supported, it could be a non-ASCII internationalized domain name. Characters are NOT escaped. Definition at line 152 of file nsIURI.idl. The host:port (or simply the host, if port == -1). Characters are NOT escaped. Definition at line 143 of file nsIURI.idl. Return language type from list in nsIProgrammingLanguage. Definition at line 98 of file nsIClassInfo.idl. Definition at line 105 of file nsIClassInfo.idl. Definition at line 72 of file nsSimpleURI.h. Definition at line 71 of file nsSimpleURI. Definition at line 231 of file nsIURI.idl. Definition at line 136 of file nsIURI.idl. The path, typically including at least a leading '/' (but may also be empty, depending on the protocol). Some characters may be escaped. Definition at line 166 of file nsIURI.idl. Definition at line 107 of file nsIClassInfo.idl. A port value of -1 corresponds to the protocol's default port (eg. -1 implies port 80 for http URIs). Definition at line 158 of file nsIURI.idl. The prePath (eg. scheme://user:password:port) returns the string before the path. This is useful for authentication or managing sessions. Some characters may be escaped. Definition at line 114 of file nsIURI.idl. Definition at line 118 of file nsIClassInfo.idl. The Scheme is the protocol to which this URI refers. The scheme is restricted to the US-ASCII charset per RFC2396. Definition at line 120 of file nsIURI.idl. Bitflags for 'flags' attribute. Definition at line 103 of file nsIClassInfo.idl. Returns a string representation of the URI. Setting the spec causes the new spec to be parsed, initializing the URI. Some characters may be escaped. Definition at line 106 of file nsIURI.idl. Definition at line 104 of file nsIClassInfo.idl. The optional username and password, assuming the preHost consists of username:password. Some characters may be escaped. Definition at line 135 of file nsIURI.idl. The username:password (or username only if value doesn't contain a ':') Some characters may be escaped. Definition at line 127 of file nsIURI.idl.
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/classns_simple_u_r_i.html
CC-MAIN-2016-44
refinedweb
578
62.75
Generative typography has become a popular field for graphic experiments. Designers combined their knowledge in typography with creative coding, and the possibility to use algorithms to change the shape of glyphs opened up many possibilities. The outcome is new, innovative ways of looking at typefaces, not necessarily in terms of legibility, but to explore and push the boundaries of what a typeface can be. I've conducted various experiments around how to manipulate typefaces with algorithms. The outcome of this would be rendered on screen, but a crucial part was missing: the result wasn't reusable as an actual typeface. I needed the potential to integrate the outcome into my workflow and wanted to use the font in Illustrator CC, Photoshop CC and InDesign CC, and ultimately as web fonts on websites. The motivation to write the Fontastic library came from here. With Fontastic, you can define the shapes of each character, add typographic parameters such as advance width and baseline, and save it as a TrueType Font (TTF) and the Web Open Font Format (WOFF) file. For example, you can create an abstract illustration for each letter of the alphabet, resulting in a non-legible text, visualisation, pattern or artwork when you render a text in this typeface. Or you could use sensor data to change an existing font based on real-time data (like the current weather or your location). This step-by-step tutorial shows how to define a font with your own glyphs and include it within a website. 01. Download and install the library First, you'll need to download and install Processing. You can either use the simple IDE that comes with it, or any Java IDE – Eclipse, for example. Then, head here and download the Zip file containing the library. It contains a folder called Fontastic. Copy this to your libraries folder inside your Processing sketch folder, usually found under Documents/Processing. 02. Create a new font Open a new sketch in Processing and import the Fontastic library. You can do this via Sketch > Import library > Fontastic, which you should find under the contributed libraries. If not, go back to step 1 and ensure the Fontastic folder is inside your libraries folder. Restart Processing to register any new libraries that you copy there. Importing the library will add the following line to your sketch: import fontastic.*; Now, you can make a new Fontastic object, which will provide you with all the functionalities. For this example, we'll name it f: Fontastic f = new Fontastic(this, "ExampleFont"); This also sets the font name, which will also be used as the TTF filename. Next, you'll want to make some settings for your font, for example the author or the spacing between characters. The default advance width is set to 512. f.setAuthor("Andreas Koller"); f.setAdvanceWidth(250); The advance width can also be set individually for each character. Next, let's define a shape for the character A. For this, we need to create a PVector array with as many points as you want. In this example, we will define a shape made out of four points. The coordinate system originates in the bottom left corner, the Y values go from bottom to top (unlike the 2D renderer in Processing) and the default size for a glyph is 512x1024 units. PVector[] points = new PVector[4]; points[0] = new PVector(0, 0); points[1] = new PVector(random(512), 0); points[2] = new PVector(random(512), random(1024)); points[3] = new PVector(0, random(1024)); f.addGlyph('A'). addContour(points); To generate the font files from the given glyph definitions, call buildFont(). An optional cleanup afterwards deletes the files that have been generated by the underlying font engine, DoubleType. f.buildFont(); f.cleanup(); After running this sketch by pressing the play button or Cmd/Ctrl+R, you'll find a TTF and WOFF file, as well as an HTML site showing all glyphs of the web font in the directory data/ExampleFont/bin. The HTML file uses the CSS rule @font-face to include the web font: @font-face { font-family: "ExampleFont"; src: url("ExampleFont.woff") format(‘woff’);} div { font-family: ExampleFont; } 03. Add more complexity For now, we only added one contour per glyph. Defining more complex glyphs can be done by retrieving the glyph with getGlyph and passing another PVector array containing the contour vertices. f.getGlyph('A').addContour(points); Alternatively, you could use the variable type FGlyph to store the glyph. If you create more than one contour, intersecting contours in TrueType format are solved using a non-zero winding fill algorithm. Now we're ready to have some fun. We can define any shapes for any character, so making a font for text visualisation is easy. For example, the FrequencyFont sketch that comes with the library shows how to use the frequency of letters in the English language to draw bars of different heights. If you set a text in this font in Illustrator, it will generate a bar chart visualisation. You can also use bezier curves to define a glyph: you'll need an FPoint array defining the coordinates of the points and the two control points for the bezier curve. The WaveFont example shows the simplest way of doing this and offers a function to preview the glyphs. FPoint[] points = new FPoint[3]; points[0] = new FPoint(0, 0); points[0]. setControlPoint2(200,200); points[1] = new FPoint(512, 0); points[1]. setControlPoint1(312,200); points[2] = new FPoint(256,1024); f.addGlyph('A'). addContour(points); 04. Manipulate existing fonts Finally, you can use an existing font as a starting point, manipulate its shape and create a new one. I'd recommend using the Geomerative library by Ricard Marxer. Fontastic comes with two examples demonstrating this technique: the ConfettiFont places circles around its outline, with a random offset. The DistortFont takes the outline and adds noise, creating jagged letterforms. These should get you started in creating your own font manipulation based on algorithms or data, such as reading weather, wind and temperature data from the web to generate a font representing the weather situation; or a font that changes according to stock market or Twitter trends. Fonts as data visualisation also offer a wide range of possibilities, as the example PieChartFont shows. This abstract font contains a pie shape for each letter, and if you set a text in this font it will be rendered as pie charts representing the words. Happy font creating! Words: Andreas Koller This article originally appeared in net magazine issue 242. Liked this? Read these! - Download the best free fonts - The designer's guide to special characters - How to make an app: try these great tutorials Got a question? Ask away in the comments!
https://www.creativebloq.com/netmag/how-create-web-fonts-using-code-11410339
CC-MAIN-2018-43
refinedweb
1,129
53.92
Catching Exceptions in Coroutines! First let’s look at the kind of problem we’re looking to solve. Simply put, it’s any coroutine that throws an exception! For example, say you’re trying to write a function that calls a callback when some asynchronous operation is done. Here’s a function that downloads a file and stores it to disk: using System.Collections; using UnityEngine; public static class WebDownloader { public static void DownloadToFile( string url, string filePath, Func<IEnumerator, Coroutine> coroutineStarter, Action<bool> callback ) { coroutineStarter(Download(url, filePath, callback)); } private static IEnumerator DownloadUrl( string url, string filePath, Action<bool> callback ) { var www = new WWW(url); yield return www; if (string.IsNullOrEmpty()) { File.WriteAllBytes(filePath,); callback(true); } else { callback(false); } } } And here’s some code that uses it: using UnityEngine; public class TestScript : MonoBehaviour { void Start() { WebDownloader.DownloadToFile( "", "/path/to/store", StartCoroutine, success => Debug.Log("download and store " + (success ? "success" : "failure")); ); } } One problem is that File.WriteAllBytes might fail for a variety of reasons. There could be permissions issues or the path might be invalid, among various other problems. We might have also written any number of other bugs, even something as simple as a NullReferenceException. The point is: we want to call the user back no matter what happens or we’ll leave them hanging forever. Throwing an exception from a coroutine terminates the coroutine. It doesn’t cause the app to crash because Unity implicitly catches all uncaught exceptions, logs them out, then continues the app. That leads to an interesting predicament. Since Unity is the one running the coroutine, the code that started the coroutine doesn’t even know the exception was thrown! At this point you might be thinking that we should simply insert a try-catch block around the whole coroutine: private static IEnumerator DownloadUrl( string url, string filePath, Action<bool> callback ) { try { var www = new WWW(url); yield return www; if (string.IsNullOrEmpty()) { File.WriteAllBytes(filePath,); callback(true); } else { callback(false); } } catch (Exception ex) { callback(false); } } If you try that, you’ll be greeted by a compiler error: error CS1626: Cannot yield a value in the body of a try block with a catch clause You might be able to add several try-catch blocks around all the parts of your function except the yield return statements, but your function will soon become extremely long and the contortions required to make sure a yield return statement never throws can be very awkward. Which leads us to the workaround that’s the point of today’s article. There are two techniques that we can combine to successfully catch the exception. First is to insert another iterator function in between the coroutine and Unity to catch the exception. An iterator function is any function that returns IEnumerator, IEnumerable, or their generic ( <T>) equivalents. That means we end up with a function like this: public static class CoroutineUtils { public static IEnumerator RunThrowingIterator(IEnumerator enumerator) { foreach (var cur in enumerator) { yield return cur; } } } Then we use it like this: ) ) ); } } We’ve basically wrapped our coroutine iterator function with another iterator function. Currently the wrapper just runs our coroutine iterator function, but now we have a middle layer we can use to catch the exception. So how do we do that? Well we already know that we can’t just try-catch the whole body of RunThrowingIterator because it too contains a yield return statement. Instead, we can use the second technique to break down the foreach so that we can catch just part of it: public static class CoroutineUtils { public static IEnumerator RunThrowingIterator(IEnumerator enumerator) { while (true) { if (enumerator.MoveNext() == false) { break; } var current = enumerator.Current; yield return current; } } } It’s certainly a lot more verbose than the foreach loop, but now we can see the hidden parts that foreach was doing for us: MoveNext to resume the function and Current to get the value passed to yield return. Technically, both of those can throw so we should catch everything but the yield return current; part of the while loop’s body: public static IEnumerator RunThrowingIterator( IEnumerator enumerator ) { while (true) { object current; try { if (enumerator.MoveNext() == false) { break; } current = enumerator.Current; } catch (Exception ex) { yield break; } yield return current; } done(null); } Now if the coroutine iterator function throws an exception at any point then RunThrowingIterator will catch it and immediately terminate. The final step is to notify users of RunThrowingIterator that the exception happened or that no exception happened at the time the coroutine iterator function finished. Unfortunately we can’t really return a value from an iterator function since that’s taken up already by the IEnumerator, but we can call a callback. Here’s the final form of the utility code: using System; using System.Collections; using UnityEngine; /// <summary> /// Utility functions to handle exceptions thrown from coroutine and iterator functions /// /// </summary> public static class CoroutineUtils { /// <summary> /// Start a coroutine that might throw an exception. Call the callback with the exception if it /// does or null if it finishes without throwing an exception. /// </summary> /// <param name="monoBehaviour">MonoBehaviour to start the coroutine on</param> /// <param name="enumerator">Iterator function to run as the coroutine</param> /// <param name="done">Callback to call when the coroutine has thrown an exception or finished. /// The thrown exception or null is passed as the parameter.</param> /// <returns>The started coroutine</returns> public static Coroutine StartThrowingCoroutine( this MonoBehaviour monoBehaviour, IEnumerator enumerator, Action<Exception> done ) { return monoBehaviour.StartCoroutine(RunThrowingIterator(enumerator, done)); } /// <summary> /// Run an iterator function that might throw an exception. Call the callback with the exception /// if it does or null if it finishes without throwing an exception. /// </summary> /// <param name="enumerator">Iterator function to run</param> /// <param name="done">Callback to call when the iterator has thrown an exception or finished. /// The thrown exception or null is passed as the parameter.</param> /// <returns>An enumerator that runs the given enumerator</returns> public static IEnumerator RunThrowingIterator( IEnumerator enumerator, Action<Exception> done ) { while (true) { object current; try { if (enumerator.MoveNext() == false) { break; } current = enumerator.Current; } catch (Exception ex) { done(ex); yield break; } yield return current; } done(null); } } Finally, WebDownloader.DownloadToFile uses CoroutineUtils.RunThrowingIterator to guarantee its callback is called regardless of exceptions: ), ex => callback(ex == null) ) ); } } As a bonus, I added in an extension function to make it even easier to use as a StartCoroutine replacement in your MonoBehaviour classes. Simply use it like this: using System.Collections; using UnityEngine; public class TestScript : MonoBehaviour { void Start() { StartThrowingCoroutine( MyIterator(1), ex => { if (ex != null) { Debug.LogError("Coroutine threw exception: " + ex); } else { Debug.Log("Success!"); } } ); } private IEnumerator MyIterator(int i) { yield return null; if (i == 0) throw new Exception("0"); yield return null; if (i == 1) throw new Exception("1"); yield return null; if (i == 2) throw new Exception("2"); yield return null; if (i == 3) throw new Exception("3"); yield return null; } } And that’s all! Feel free to use these utility functions to handle exceptions in your coroutines or any other iterator functions. #1 by Pierre-Luc on July 11th, 2017 · | Quote Very good job, thank you! #2 by Horace on November 21st, 2017 · | Quote Hi Jackson, Can you tell me why there is both break; and yield break; in the throwing iterator loop? (And thanks for the awesome articles!) #3 by jackson on November 21st, 2017 · | Quote Hey Horace, The two statements are confusingly similar as they both include the word “break.” They actually have very different behaviors. The breakstatement is just what you’re used to normally with loops. It moves the point of execution to after the loop, essentially “breaking out” of the loop by skipping the rest of it and the next loop condition check. The yield breakstatement is like a version of breakfor iterators. It ends the iterator function immediately just like if the function’s execution hit the end. It’s more like returnin a non-iterator function, but unfortunately they named another statement yield returnso they couldn’t use that. Again, very confusing. In this particular loop I used breakwhen the iterator being run signals that it’s done by returning falsefrom MoveNext. That’ll happen when the iterator hits the end of the function or calls yield break. When the iterator is done, I want to stop the loop and go straight to the last line of the function which calls the doneparameter with nullto indicate that the iterator function was not done because it threw an exception. Then I used yield breakwhen an exception is caught. I used this right after I called the doneparameter with the exception that was thrown by the iterator function. I used yield breakbecause I wanted to end the function immediately rather than continuing its execution after the loop, which is what breakwould do. If I used breakthen the function would have continued after the loop by calling the doneparameter again with null. That would have signaled the user that the iterator was done twice and that it was done with both an exception and not an exception, clearly not what the function should be doing. It’s tricky code, so it definitely takes a bit of examination. Hopefully that clears things up. #4 by Horace on November 22nd, 2017 · | Quote Thanks, I see! Thanks to you I was able to understand enough to make my own version of your function, which works slightly differently to suit my needs. It catches any yield returned IEnumerators and iterates over them, instead of letting them “escape”. #5 by jackson on November 22nd, 2017 · | Quote I’m glad you were able to tweak it into something that works for you. One little tip: the ischeck followed by an ascast at at the end can be simplified to avoid double type-checking: #6 by Horace on November 22nd, 2017 · | Quote (The formatting chewed some of my code, but not too much is missing.) #7 by hunbuso on March 12th, 2018 · | Quote we encountered the same problem, can’t catch exception in Unity when start coroutine with a call back function. Thanks very much, your article help me to solve our problem. #8 by Mike on June 27th, 2018 · | Quote Thanks for writing the article, very useful technique. For me, I start to find the intent of the code a little hard to read with the yields and enumerators. I usually end up wrapping my functional code (the file download) in a thread-calling-class and then then just check a finished flag in the thread-calling-class in the Update() of the calling monobehavior to see when its finished. Then check to see if an exception has been caught by wrapping the functional code in a try catch on the other thread and recording any exceptions thrown in the thread-calling-class (which is a member variable of the calling monobehavior). The downside is you cannot use anything that requires the main thread. #9 by jackson on June 28th, 2018 · | Quote That’s understandable, especially given what goes on behind the scenes to make iterator functions happen. A hand-coded state machine can be more clear to some readers, more flexible, and more efficient. Threading is fundamentally different than coroutines though, as I’m sure you know. The lack of Unity API access make it harder to use them, but it’s getting easier now that the Job System has been introduced. See my tutorial for more information on what you can and can’t use from off the main thread, which also applies to any other non-job threads.
https://jacksondunstan.com/articles/3718
CC-MAIN-2019-09
refinedweb
1,916
52.09
@wanderson, I did some previous random number testing and used an SD card to capture the output, so I didn't have to dedicate a PC to the task. Would that be acceptable? @wanderson, have a look at the sketch here, and let me know what you think: #include <Entropy.h>void setup(){ Serial.begin(115200); Entropy.Initialize();}void loop(){ Serial.println(Entropy.random());} I have also had difficulties with Strings in the past, but there were improvements in 1.0, so I thought I'd give it another go. My expectation would be that it should be able to run pretty much indefinitely.
http://forum.arduino.cc/index.php?topic=108380.msg828642
CC-MAIN-2014-52
refinedweb
104
59.8
By DevvyExclusive To Rense.com 4-1-19: Retail apocalypse? JCPenney, Payless, LifeWay announce 3,000+ combined store closures, March 29, 2019: ." It's not all about Amazon. When people don't have extra money they don't buy - especially when you become unemployed. Walmart is quietly closing stores - here's the full list, March 29, 2019 Yeah. It's what happens when you cook the books: The Economy Grew Slower Than Previously Thought in the Fourth Quarter, March 28, 2019 Just Before The Great Recession, Mountains Of Unsold Goods Piled Up In U.S. Warehouses - And Now It Is Happening Again - Don't skip this one. Greyerz - The Most Important Chart Of This Century And What Will Take The World By Surprise, March 24, 2019 The Next Leg Down is Here... Will We Get a Dip... or a CRASH?, March 25, 2019 Peter Schiff: "We're Not Borrowing Ourselves Rich, We're Borrowing Ourselves BROKE!" Housing Falters - Blame The Tax Changes?, March 26, 2019: "What this suggests to me is that Congress - by capping the write off of interest and local taxes - is "killing us softly" with a tax policy enacted well over a year ago amidst much fanfare and hype. Now we see the results." More foreign workers while laid off Americans who need any job to pay the rent and eat will go without: DHS Nielsen OK's Visas for 30,000 Extra H-2B Workers, March 29, 2019: "DHS Secretary Kirstjen Nielsen will allow the importation of 30,000 extra foreign laborers for landscapers, resorts, and other businesses despite President Donald Trump's "Hire American" promise that has helped raise blue-collar wages by four percent in 2018. "The decision will allow companies to import 96,000 H-2B laborers in 2019, up from the 66,000 allowed by the law." 71% Entering Employment Came from Outside Labor Force Under Trump, March 19, 2019 The horrific flooding last month in Iowa, Nebraska and surrounding states is going to have severe repercussions for our treasured farming and ranching families. Up to one million calves drowned and more than a million aces flooded. How heartbreaking that is for ranchers to find a way to bury all those calves and dead livestock. US Grain Bins Collapse After Catastrophic Iowa Floods Farm, livestock losses catastrophic from flooding in Nebraska, Iowa Nebraska Flooding Broke 17 Records - Farmers Being Absolutely Devastated, March 18, 2019 I can't even think about the thousands of doggies, cats and other animals that drowned. It goes without saying the human misery is off the charts. Our fellow Americans affected will be fighting for survival for a long time to come. Sadly, we are likely to see more of this: Farmers Facing Higher Suicide Rate If you think the cost of food has been going up, we will see even steeper prices due to all the flooding. Farmers and ranchers will have no choice if they are to survive. I must repeat myself here: America became the greatest debt free nation on this earth because of our manufacturing, industrial and agricultural sectors. The U.S. Congress has destroyed them over the decades along with the unconstitutional, privately owned 'Federal' Reserve. The American people kept reelecting the same SOBs who destroyed their job sectors. They continue to do so and worse by allegedly electing enough Democrats (socialists and communists) to control the U.S. House. Not that incumbent Republicans are much better as we slide down the slick path to financial ruin. Again. Nowhere in Art. 1, Sec. 8 of the U.S. Constitution does it authorize Congress to regulate or control our agriculture. Period. The Agricultural Act of 1938 began the unconstitutional interference by the U.S. Congress kissed and blessed by both parties. Paired up with the 'Fed' ever since and what we have is a disaster for farmers and ranchers passed along to us. From regulations choking our farmers and ranchers to death (as well as our fishing families), destructive agreements (NAFTA) and treaties, it's a wonder any have survived. Every year House and Senate incumbents honk about how much money they are going to spend for some new farming bill. Just more Band Aids instead of sewing up the patient once and for all. Obama-era rules jeopardize California farming - 'If this unprecedented prosecution succeeds, it threatens nearly every farm in the U.S.' Feds force $1.1 million fine from farmer prosecuted for plowing own land - Army Corps under Obama tagged furrow puddles as 'navigable waters' Farmers Who Disputed Frog-Focused Habitat Lose Suit, March 28, 2019 It's not just federal dragoons, it's also local governments. I can tell you right now those who founded this constitutional republic wouldn't have stood for this tyranny for five minutes. "Tell them (the South Carolinians who wanted to nullify the Tariff Act of 1832) that I will hang the first man of them I can get my hands on to the first tree I can find." -President Andy Jackson Greenest US county sues man for growing too many organic veggies SWAT Team Raids Texas Organic Farm, Holds Residents At Gunpoint Before the flooding I was paying $2.50 for a head of organic lettuce; a few days ago it bumped to $2.98 a head. That's an increase of $.48 for just one item in my cart. Even lettuce grown in Mexico was running nearly $2.00 a head. I have not purchased ONE vegetable or piece of fruit from Mexico, Chile, Guatemala, Honduras or any other foreign country since NAFTA was signed into "law" in 1994. I support American farmers or I GO WITHOUT and I DO. No way will I eat anything coming from those countries who do NOT have the same safeguards as our farmers are required to have for crops. In Mexico they grow fruits and vegetables near factories that dump their filthy water into canals for irrigation. I don't know about other South American countries but Mexico, Honduras, Guatemala and El Salvador have been deliberate facilitators of the human invasion of diseased free loaders flooding our borders. They don't get my money. The bottom line is it's going to get a lot worse and Trump is going to be blamed. Bad new trade treaties and the 'Fed' coupled with the national debt (smaller government is nothing more than a campaign slogan) IS going to kill us. Again, I say, proceed with caution. I have written before about the Fed & The Farmer. Deliberate destruction of that sector. Bring in the big mega corporations to feed you filthy, hormone packed animals and vegetables grown with deadly pesticides. Second jury verdict has come down: Jury Slams Monsanto With $80M Verdict in Roundup Cancer Case, March 27, 2019 The Fed is the head of the beast ." President Andrew Jackson; Whitehouse.gov Is Inflation Beginning? Are You Ready?, March 23, 2019 Why Markets Aren't Buying What The Fed Is Selling - Here's Why, March 25, 2019 The Federal Reserve's Controlled Demolition Of The Economy Is Almost Complete, March 28, 2019 Politics Has Failed, Now Central Banks Are Failing, March 22, 2019 Will The Fed Cut Interest Rates? Also, What Has Happened In Japan Is Stunning, March 26, 2019 Fed to End Rally?, March 22, 2019 The Global Economic Reset Begins With An Engineered Crash, March 13, 2019 Fed's Evans says inflation could run to 2.5% before rate hikes are needed, March 25, 2019 - That's right. Create inflation so we all get screwed. President Trump has made some comments about the Fed; lately a few more have been trickling along as well as from his "advisers". President Trump: Replace The Dollar With Gold As The Global Currency To Make America Great Again Trump blames Fed for holding back economic growth in 2018, March 21, 2019 White House advisor Larry Kudlow says Fed should 'immediately' cut rates, March 29, 2019 We are well past "End the Fed" although is should be done which I have explained in my Why A Bankrupt America booklet and the only solution left. A dear friend of mine, Tom Selgas, put together a really great slide presentation on explaining the 'silent theft' of fiat currency. I urge you to take the time to look at his slide presentation because it is straightforward, concise and then send the link to mailing lists and social media. Lawful Money Presentation. Note: For a thorough, comprehensive education on the Fed, the income tax, education, Medicare, SS, the critical, fraudulent ratification of the Seventeenth Amendment and more, be sure to order my book, Taking Politics Out of Solutions. 400 pages of facts and solutions. Links: Treasury expands penalty relief to more taxpayers State Police: I-68 shutdown was result of threat to kill President Trump, blow up Pentagon, March 27, 2019 Roundup Is Losing in Court But Farms Aren't About to Give It Up Stop killing us: Final Solution to Glyphosate Question is Banning Farm to Consumer Legal Defense Fund
https://rense.com/general96/lettuce-2-98-a-head-no-inflation.php
CC-MAIN-2019-18
refinedweb
1,506
61.46
This site uses strictly necessary cookies. More Information Hello! I am getting an error when I try to find the length of an array! All I am trying to accomplish with this script is spawn a random gameobject from an array. To do that I need to find the length of an array, yet for some reason Unity won't let me do that. The error I am getting is: CS0117 UnityEngine.GameObject[] does not contain a definition for 'Length' CS0117 UnityEngine.GameObject[] does not contain a definition for 'Length' Here is my code, also note I have tried to just print the length of the array to the console out of the instantiate and just in the start function, it resulted in the same error. using UnityEngine; using System.Collections; public class SpawnScript : MonoBehaviour { public GameObject[] objects; // runs spawn func void Start () { Spawn(); } //spawns in a random item from an array and then does it every .5 seconds. void Spawn() { Instantiate(objects[Random.Range (0, objects.Length)], transform.position, Quaternion.identity); Invoke ("Spawn", .5f); } } Thanks for viewing (and hopefully helping)! This is... odd. I'm not seeing anything wrong with your code... this answer shows pretty much the same thing... The only thing I can think of is that perhaps there's some weird internal $$anonymous$$onoBehaviour property called objects that is interfering? That shouldn't be the case because yours should be the one in scope. But try renaming objects to something else? Complete stab in the dark I know. objects if you are still having issues you could always use a list ins$$anonymous$$d. Add using System.Collections.Generic; to your namespaces and just use List<GameObject> ins$$anonymous$$d. using System.Collections.Generic; List<GameObject> So, although I could not fix this error still. I am too lazy to do the research on lists, and as I am the only one working on the game I just declared a public value that I could change in the editor depending on the size of the array, obviously not what you want to have to do, but it being so minor I don't really mind. If anyone else has any ideas I would be open to them. Also Dave, thanks for the response and indeed I have tried different names and that doesn't seem to fix the problem. I will have to contact Unity's support $$anonymous$$m and see if perhaps I am just missing a package or something. Thanks all! In this kind of issue, you should provide Unity version, hardware, OS, etc... I recreated the essence of this in a new C# script, ran it, worked fine. using UnityEngine; using System.Collections; public class ArrayTest : $$anonymous$$onoBehaviour { public GameObject[] objects; void Start () { Debug.Log(objects.Length); } } No errors, console/debug.log displays 0 Answer by benbendixon · Sep 06, 2015 at 03:29 PM @bunny83 @wibble82 Yes I am sure I haven't changed anything. I even created a new project and tried running a debug line that would just print the length of a simple array and that didn't work. I have not "given up" on this question, I just simply don't have anything to add. I have been trying to work with a support member. Feel free to mark the question as unreproduceable. Did you try to look through a step-by-step debugger? @benbendixon: Well, you haven't addressed any of the questions we asked back. Like: What exact Unity version do you use? Where exactly do you get that error? In Unity, in $$anonymous$$onoDevelop, in another IDE you use like VisualStudio? I asked you to copy and paste the exact and complete error message from the console that includes the file name and line / column numbers. Apart from that there are at least 2 different "XXX doesn't contain a definition for YYY" errors. There is CS0117 and CS1061. CS1061 would usually apply to your case but you say you get CS0117 which applies to cases where you try to access a member in a "base class" which is completely impossible for array types since you can't derive classes from an array type. If you ignore all the other counter questions, at least tell use which exact Unity version you're using. For example i still use "5.1.1f1 Personal". You can see that version in the "About" screen. Nobody is able to reproduce your problem and i don't even know any way to purposely generate that exact error. @Bunny83 I haven't answered any of the questions because I am already in contact with unity support and this appears to be something that only they would be able to figure out since it is their software and there is an issue with my install or something like that. If you would like me to answer them I will. I am running Unity 4.6.3f Personal It happens in line 16 and I already gave the error message, I would copy and paste but I am much farther along in my game and am no longer using this script because of the error. I get the error in unity and monodevelop. Answer by Cryptomane · Sep 06, 2015 at 04:07 PM Works fine for me, even if the array was empty. The only thing that comes to my mind might be some corrupted library in your project, OR ... Do you have a "GameObject" class in your project? No I do not, as I said it is something wrong with my install of unity obviously. I am working with support. Answer by OceKat3 · Oct 27, 2019 at 10:32 PM @benbendixon This is how I fixed the problem for myself: Use an uppercase L in "Length" instead of a lowercase one. The unity documentation makes it seem like you should use a lowercase L, which I find weird. The documentation? Can you provide the link? Probably this one. I agree that the documentation is a little bit misleading as they describe both UnityScript's Array class (which does not exist anymore) and native C# built-in arrays. The code example is about the built-in arrays while the actual page is about the Array class. The "length" property of that class actually was lower case. If you want to look up information on standard / built-in C# / .NET framework classes, head for the $$anonymous$$SDN. How Do I Add An Instantiated Object To An Array? 3 Answers Instantiate from array into array? 2 Answers Different array value at different times 1 Answer Get components of multiple instantiated objects? 2 Answers Instantiate large block maps 2 Answers EnterpriseSocial Q&A
https://answers.unity.com/questions/1039855/cannot-find-the-length-of-an-array.html
CC-MAIN-2021-21
refinedweb
1,116
73.88
Flow Document Generator¶ You can use the Flow Document Generator to create documentation associated with a project. It will generate on the fly a Microsoft Word™ .docx file that provides information regarding the following objects of the flow: Datasets Recipes Managed Folders Saved Models Model Evaluation Stores Labeling Tasks Generate a flow document¶ Note To use this feature, the graphical export library must be activated on your DSS instance. Please check with your administrator. You can generate a document from a project with the following steps: Go to the flow of the project Click the Flow Actionsbutton on the bottom-right corner Export documentation On the modal dialog, select the default template or upload your own template, and click Exportto generate the document After the document is generated, you will be able to download your generated document using the Downloadlink Custom templates¶ If you want the document generator to use your own template, you need to use Microsoft Word™ to create a .docx file. You can create your base document as you would create a normal Word document Sample templates¶ Instead of starting from scratch, you can modify the default template: Download the default template here Placeholders¶ A placeholder is defined by a placeholder name inside two brackets, like {{project.name}}. The generator will read the placeholder and replace it with the value retrieved from the project. There are multiple types of placeholders, which can produce text, an image or a variable. Conditional placeholders¶ You can customize even more your template by using conditional placeholders to display different information depending on the values of some placeholders A conditional placeholder contains three elements, and each of them needs to be on a separate line: a condition a text to display if the condition is valid a closing element The condition itself is composed of three parts: a text placeholder an operator ( ==or !=) a reference value Example: {{if my.placeholder == myReferenceValue }} The placeholder is replaced by its value during document generation and compared to the reference value. If the condition is correct, the text is displayed, otherwise nothing will appear on the final document. Note To check if the value in the placeholder is empty, have a condition with no reference value. Example: {{if project.short_desc != }} Project description: {{ project.short_desc }} {{endif project.short_desc }} Here is a more advanced example with a conditional boolean placeholders using a variable provided by an iterable placeholder. Boolean placeholders return the values “Yes” or “No” as text. {{ if $recipe.is_code_recipe == Yes }} Recipe code: {{ $recipe.payload }} {{ endif $recipe.is_code_recipe }} Iterable Placeholders¶ Iterable placeholders contain one or multiple objects and must be used with a foreach keyword, like this: {{ foreach $variableName in placeholder.name }} (replace variableName with the name you want for your variable, and placeholder.name with the name of the placeholder). Iterable placeholders provide a variable that can be used in other placeholders, depending on its type. The type of the variable depends on the iterable placeholder that provided it. see placeholder list. Syntaxe rules: A variable name must start with a $and must not contain any .. Iterable placeholders need to be closed with a {{ endforeach $variableName }} For example, the placeholder flow.datasets will iterate over all the datasets of the flow and the variable it provides is of type $dataset so it can use all the placeholders starting with $dataset. Here’s an example of how it can be used: {{ foreach $d in flow.datasets }} Dataset name: {{ $d.name }} Dataset type: {{ $d.type }} {{ endforeach $d }} In this example, we iterate over all the datasets contained in the placeholder flow.datasets and for each of these placeholders, we print the name and the type of the dataset. It is possible to have an iterable placeholder inside another iterable. For example, to print the schema columns of all the datasets, you would do: {{ foreach $dataset in flow.datasets }} Dataset name: {{ $dataset.name }} Dataset type: {{ $dataset.type }} Schema: {{ foreach $column in $dataset.schema.columns }} Column name: {{ $column.name }} Column type: {{ $column.type }} {{ endforeach $column }} {{ endforeach $dataset }} Count placeholders¶ To know the number of elements in an iterable placeholder, use .count after the name of the iterable. This can be useful when used with a conditional placeholder if you don’t want to display a section if there is no element. For example: {{ if flow.models.count != 0 }} Saved model section There are {{ flow.models.count }} in the flow. {{ foreach $model in flow.models }} // Display model info {{ endforeach $model }} {{ endif flow.models.count }} Type placeholders¶ To know the type of a variable, use .$type after the name of the variable. This can be useful when used with a conditional placeholder if you want to display something different for a specific type in an iterable placeholder that can output multiple types. See union-type iterable placeholders. Union-types iterables¶ Some iterable placeholders can iterate over multiple types at the same time. For example, if you want to iterate over the outputs of a recipe, the type of the outputs can be datasets, managed folders, saved models or, model evaluation stores. A variable created by an iterable placeholder with multiple output types can use all the placeholders common to the types it can be. The main flow objects (datasets, folders, recipes, models, labeling tasks and model evalutation stores) have at least these common placeholders: id name creation.date creation.user last_modification.date last_modification.user short_desc long_desc tags.list If you want to use a placeholder specific to only one of the types, you need to use a conditional placeholder to check the type of the variable. For example, to iterate over the outputs of a recipe: {{ foreach $output in $recipe.outputs.all }} Output name: {{ $output.name }} Output description: {{ $output.short_desc }} {{ if $output.$type = $dataset }} // this placeholder would fail for other types, so it has to be used only for datasets Dataset connection: {{ $output.connection }} {{endif $output.$type }} {{ endforeach $output }} Note If there is a problem during the generation of the document, for example if a placeholder contains a typo, if a placeholder is used on a variable with the wrong type or if there is no “end” placeholder after foreach or if placeholders, the placeholders will be removed in the final document and there will be a warning displayed after the generation.
https://doc.dataiku.com/dss/latest/flow/flow-document-generator.html
CC-MAIN-2022-33
refinedweb
1,040
56.15
This action might not be possible to undo. Are you sure you want to continue? United States: Financial Services Financials DESKTOP Positioning for the next leg of the rally Fundamentals suggest further upside We see the best opportunities in Large Cap Banks, Brokers, Asset Managers, and Homebuilders given the backdrop of low rates, higher asset prices, moderating credit costs and improving capital markets activity. Higher interest rates and regulatory overhang are the big downside risks. Best Buy ideas We focus investors on our top stock ideas including JPM and BAC in large-cap banks; STI, CMA, FITB and KEY in regional banks; UNM, XL PGR, and ACE in insurance; BEN and BX in asset managers; EVR, LAZ and PJC in brokers; NDAQ and CME in market structure; CBG in Real Estate and DHI in Homebuilders. In credit, we favor BAC, LLOYDS, BPCEGP, STANLN in Banks and Farmers, CNA and RDN in Insurance. Our investment framework Four themes guide us: (1) Potential for consumer provision leverage, (2) a focus on those companies that can return capital to shareholders, (3) improving capital market activity in 2010 and (4) stabilizing real estate prices as the hunt for yield hits real assets. Best Sell ideas We remain concerned on CRE given the long-tail nature of losses; avoid BRE, REG, DRE and ESS. Prime jumbo losses likely to worsen; avoid HCBK. Jessica Binder, CFA (212) 902-7693 | jessica.binder@gs.com Goldman Sachs & Co. Richard Ramsden (212) 357-9981 | richard.ramsden@gs.com Goldman Sachs & Co. Financials as a part of your portfolio Financials are now the second largest sector in the S&P 500 and we think there is further upside as we move towards normalized returns given attractive valuation. Investors have moved towards a neutral weighting in the sector, but are underweight regional banks. The best performers YTD have been the most underweighted sectors. What we are watching We highlight four sections of this report for PMs: (1) an in-depth analysis of mutual fund positioning across the Financials sector (p. 5); (2) a closer look at the idea of Financials being “cheap cyclicals” (p. 7); (3) capital management across the sector (p. 14); (4) initial thoughts around Basel III (p. 30). Brian Foran (212) 855-9908 | brian.foran@gs.com Goldman Sachs & Co. Louise Pitt (212) 902-3644 | louise.pitt@gs.com Goldman Sachs & Co.. Goldman Sachs Global Investment Research Global Investment Research 1 April 7, 2010 United States: Financial Services Table of contents Portfolio Manager Summary: Life after the crisis Thinking about Financials in the context of a portfolio A return to micro from macro Theme #1: Provision leverage in consumer loan portfolios Theme #2: Capital management is beginning to be a key differentiator across the sector Theme #3: Capital market should bounce from a disappointing 4Q2009 Theme #4: Real estate prices are stabilizing as the hunt for yield hits real assets Short rates are likely to stay lower for longer, but have to go up eventually Regulatory issues likely to remain a topic for the foreseeable future Sector views: Attractive Large Banks, Asset Managers, Homebuilders and Brokers Disclosures 3 5 10 11 14 17 20 26 30 34 37 GS Financials Equity Research Team Banks Richard Ramsden Brian Foran Adriana Kalova Quan Mai Insurance Christopher M. Neczypor Christopher Giovanni Eric Fraser Cooper McGuire Vikas Jain Asset Managers Marc Irizarry Alexander Blostein, CFA Neha Killa Market Structure & Brokers Dan Harris, CFA Jason Harbes, CFA Real Estate/REITs Jonathan Habermann Sloan Bohlen Jehan Mahmood Siddharth Raizada Homebuilders Joshua Pollard Anto Savarirajan GS Financials Credit Research Team Banks Louise Pitt Insurance and Managed Care Donna Halverstadt Amanda Lynam Financials Specialist Financials Sector Specialist Jessica Binder, CFA Goldman Sachs Global Investment Research 2 April 7, 2010 United States: Financial Services Portfolio Manager Summary: Positioning for the next leg of the rally We remain bullish on Financials given the backdrop of low rates, higher asset prices, moderating credit costs and improving capital markets activity and see the best opportunities in Large Cap Banks, Brokers, Asset Managers, and Homebuilders. Higher interest rates and the regulatory overhang are the biggest downside risks, although appear manageable near-term. Financials are now the second largest sector in the S&P 500, and we think there could be further upside given attractive valuation levels even after the rally. While investors have largely closed out underweight positions from last year, they remain underweight many of the regional banks. Positioning has been a big driver of returns thus far this year, and correlation across stocks in the sector has fallen dramatically since the start of the year. We highlight four key themes for stock-picking across the sector: Provision leverage in consumer loan portfolios: The credit cycle is moderating as non-performing asset formation is slowing and reserves are closer to peak levels. The improvement is most clear in consumer and in commercial (C&I) loans, and should current trends continue, we see the potential for reserve releases later this year. Key stock ideas: BAC, JPM. Avoid HCBK. Returning capital to shareholders: Buybacks and dividends have become a bigger differentiator across the sector. We highlight companies that screen well on these metrics and highlight potential new entrants, which could result in significant relative outperformance. Within the banks sector, high free cash yields imply dividend yields should be attractive once regulatory pressure eases. Key stock ideas: BEN, JPM, NDAQ, UNM, XL. Capital market should bounce from a disappointing 4Q: Trading activity was weaker than many expected in the first quarter, but FICC should show qoq improvement. Investment banking got off to a slow start this year, but has recently started to pick up. We believe this is just the beginning of a multi-year recovery in M&A. Smid-cap brokers and alternative asset managers are well-positioned to benefit, as are some of the large-cap banks. Key stock ideas: BAC, BEN, BX, CBG, EVR, JPM, NDAQ. Real estate prices are stabilizing as the hunt for yield hits real assets: While we may just be in the eye of the storm, low interest rates have helped push some issues further into the future. Homebuilders are well positioned to benefit from an improvement in new home sales from the depressed levels of 2009. Shadow inventory remains a concern, but is more likely to limit the strength of the recovery rather than creating another downturn in the very short-term. On the commercial side, sentiment is better than reality; the few recent transactions that have occurred imply that prices are recovering faster than the fundamentals would suggest. Key stock ideas: BAC, CBG, DHI, JPM, MTG, STI, BX. Avoid BRE, REG, DRE and ESS. Despite this positive backdrop, investors remain focused more on potential downside risks: Rates: Low rates have unquestionably helped to stimulate the economy, not only by lowering funding costs, but also by supporting housing demand and boosting capital market activity. The improvement in credit can in part be attributed to low rates, given that the majority of loans in the United States are floating rate. Our economists forecast the Fed Funds rate will stay near-zero through 2011. However, even if rates were to increase, we expect money market outflows to continue. Avoid FII. Regulatory outlook: While it is difficult to know what the exact timing and impact of regulation will be, it is clear this is an area of focus for the foreseeable future. Banks are likely to be the most impacted across the space, and issues fall within two areas right now: the potential impact on normalized earnings, and the push for companies to hold more capital. One potential beneficiary will likely be exchanges if volume is pushed towards exchanges and clearinghouses. Other sectors where new regulatory proposals are likely to have an impact are Insurance, Rating Agencies and some Asset Managers/Discount Brokers that have money market funds. Goldman Sachs Global Investment Research 3 April 7, 2010 United States: Financial Services Exhibit 1: Top Ideas across the Financials sector Stock ideas from the Financials business unit; priced as of the market close of April 7; $ millions, except per-share data Upside/downside Target price to target price 20.00 130.00 18.00 18.00 17.00 40.00 54.00 25.00 35.00 26.00 23.00 25.00 10.00 72.00 21.00 13.00 33.00 7% 15% 23% 11% 42% 30% 19% 17% 23% 3% 18% -33% -23% -24% -21% -8% -13% Key Financials investing themes Provision Capital Capital Leverage Allocation Markets Real Estate Company name Buy Bank of America Corporation Franklin Resources, Inc. The Blackstone Group L.P. CB Richard Ellis Group Inc. D.R. Horton, Inc. Evercore Partners Inc. J.P. Morgan Chase & Co. The Nasdaq Stock Market, Inc. SunTrust Banks, Inc. Unum Group XL Capital Ltd. Sell BRE Properties, Inc. Duke Realty Corp. Essex Property Trust, Inc. Federated Investors, Inc. Hudson City Bancorp, Inc. Regency Centers Corporation Ticker BAC BEN BX CBG DHI EVR JPM NDAQ STI UNM XL BRE DRE ESS FII HCBK REG Sector Banks Asset Managers Asset Managers REITS Homebuilders MktStructure Banks MktStructure Banks Insurance Insurance REITS REITS REITS Asset Managers Banks REITS Market cap (current) Price 185.0 25.9 16.6 3.9 3.8 1.2 178.7 4.6 14.2 8.4 6.7 1.9 3.0 2.6 2.7 7.5 2.6 18.62 112.83 14.68 16.23 11.93 30.68 45.32 21.42 28.53 25.36 19.47 37.05 13.01 94.92 26.52 14.20 38.12 For important disclosures, please go to. For methodology and risks associated with our price targets, please see our previously published research. Source: Goldman Sachs Research estimates. Exhibit 2: GS Financials: Summary of rankings by sub-sectors Exhibit 3: Financials have underperformed since October 19 17 15 13 11 9 7 5 7-May-09 7-Mar-10 7-Mar-09 7-Nov-08 7-Aug-09 7-Nov-09 7-Apr-09 7-Oct-08 7-Oct-09 7-Feb-09 7-Sep-09 7-Dec-08 7-Dec-09 7-Feb-10 7-Jan-09 7-Jun-09 7-Jan-10 7-Apr-10 7-Jul-09 XLF SPX Performance 6-Mar-09 13-Oct-09 6-Mar-09 13-Oct-09 7-Apr-10 7-Apr-10 146% 7% 165% 57% 10% 73% Equity Coverage Views Attractive Neutral Cautious Asset Managers Credit Cards Life Insurance Brokers Discount Brokers Specialty Finance Homebuilders Insurance Brokers Large-cap Banks Market Structure Mortgage Insurance Non-Life Insurance Regional Banks REITs Trust Banks Credit Coverage Views Attractive Neutral US Banks Insurance European Banks Mortgage Insurance Source: Goldman Sachs Research. 1300 1200 1100 1000 900 800 700 600 Cautious Source: Bloomberg, Goldman Sachs Research. Goldman Sachs Global Investment Research 4 which tends to have a large retail ownership base. Asset Managers. mutual funds appear to be underweight every single regional bank with the exception of Marshall & Ilsley. due largely to an underweight position in Financials. However. While funds have increased their weighting in certain regionals over the last few months. Funds are now much closer to a benchmark weight. most mutual funds remain underweight the group. they have driven them even higher. Within the Financials sector at least. CMA.3 Total Return YTD 25 20 15 10 5 0 -5 -100 Multi-line Insurance Residential REITs Other Diversified Financial Services Specialized REITs Retail REITs Industrial REITs Specialized Finance Real Estate Services Diversified Banks Office REITs Diversified REITs Insurance Brokers Consumer Finance Asset Management & Custody Banks Investment Banking & Brokerage Thrifts & Mortgage Finance Property & Casualty Insurance Multi-Sector Holdings -50 0 50 100 150 200 Mutual Fund Overweight/(Underweight) as of 12/31/09 Source: Lionshare via FactSet and Goldman Sachs ECS Research. RF. Life & Health Insurance and REITs. See Exhibits 5-8. However. think fundamentals will once again become the bigger driver. KEY and MTB. The stocks that have seen the biggest increase in mutual fund ownership are MI. those funds in the Lipper Large-Cap Core Index were trailing the benchmark by about 80 bp on average. they have not been able to keep up with the benchmark. The big increases have been in BAC and WFC. funds have largely closed out their underweights in the large-cap banks sector over the last few quarters. and as investors have increased their weights towards those sectors. we view the recent rally as largely positioning-driven and over time. an underweight position in large-cap banks. they have not fared as well this year. S&P 500 Sub-sector Property & Casualty Insurance Consumer Finance Asset Management & Custody Banks Investment Banking & Brokerage Diversified Banks Real Estate Services Multi-Sector Holdings Specialized Finance Office REITs Industrial REITs Multi-line Insurance Thrifts & Mortgage Finance Diversified REITs Insurance Brokers Residential REITs Retail REITs Regional Banks Life & Health Insurance Specialized REITs Other Diversified Financial Services Current SPX Current (bp) Current Overweight/ Weight Mutual Fund (Underweight) (bp) Weight (bp) 280 209 71 138 78 60 185 127 58 173 144 30 223 206 17 7 4 3 5 4 0 42 44 -1 6 10 -4 1 6 -4 35 40 -5 6 12 -6 1 11 -11 9 23 -14 3 19 -16 8 29 -21 74 109 -34 79 114 -35 5 48 -43 370 413 -43 Despite the outperformance of Regional banks. part of this is due to the fact that they were underweight the sectors that have performed the best. Brokers and Homebuilders. STI. See Exhibit 4. while mutual funds have taken down their exposure to JPM and C. Goldman Sachs Global Investment Research 5 . Exhibit 4: The ‘pain trade’ in Financials: those sectors that were most underweight have rallied the most year-to-date 40 35 30 Life & Health Insurance Regional Banks y = -0. including Regional Banks. Our favorite sectors are Large-Cap Banks. While a large part of this is due to an underweight in BBT. and in particular. and as of March 31. As a result.2 R2 = 0.April 7. Mutual funds generally outperformed their benchmarks in 2009.1x + 14. 2010 United States: Financial Services Thinking about Financials in the context of a portfolio This section was written in conjunction with David Kostin and the Portfolio Strategy team. April 7. 2010 United States: Financial Services Exhibit 5: Mutual funds are still underweight regional banks 200 150 100 50 Mutual Fund 0 Jan-06 Jul-06 Jan-07 Jul-07 Jan-08 Jul-08 Dec-08 Jun-09 Dec-09 Current SPX Exhibit 6: How mutual funds are positioned within Regional Banks Current Mutual Fund Weight (bp) BBT FITB PBCT RF CINF HCBK FHN STI CMA HBAN KEY MTB ZION SNV FNFG CYN MI 3 3 0 5 1 3 0 10 4 1 4 4 2 0 0 0 6 Current SPX Weight (bp) 21 10 5 9 4 6 3 13 6 4 6 6 3 0 0 0 4 Current (bp) Overweight/ (Underweight) -18 -7 -5 -4 -3 -3 -3 -3 -3 -3 -2 -2 -1 0 0 0 2 Change in Mutual Fund Wgt Jun-09 to Current (bp) -7 1 0 4 -1 -2 -1 4 3 1 3 3 0 0 0 0 5 Position Size (bp) 0 -20 -40 -60 -80 -100 -120 -140 Jan-06 Jan-07 Overweight/(Underweight) Jan-08 Dec-08 Jun-09 Jul-06 Jul-07 Jul-08 Dec-09 Source: Lionshare via FactSet and Goldman Sachs ECS Research Current Source: Lionshare via FactSet and Goldman Sachs ECS Research Exhibit 7: Mutual funds have largely closed out their underweight position in Large-cap banks 1200 1000 800 600 400 200 0 Jan-06 Jan-07 Jan-08 Dec-08 Jun-09 Jul-06 Jul-07 Jul-08 Dec-09 Current Mutual Fund SPX Exhibit 8: How positioning has changed within large-cap banks since last summer Current (bp) Overweight/ (Underweight) BAC WFC USB MS PNC JPM C 2 21 -15 6 -1 8 -53 June-09 (bp) Overweight/ (Underweight) -46 -2 -20 7 2 27 -10 Change (bp) 47 24 5 -1 -3 -19 -43 Position Size (bp) 0 -50 -100 -150 -200 -250 -300 Jan-06 Jan-07 Jan-08 Overweight/(Underweight) Dec-08 Jun-09 Jul-06 Jul-07 Jul-08 Dec-09 Source: Lionshare via FactSet and Goldman Sachs ECS Research Current Source: Lionshare via FactSet and Goldman Sachs ECS Research Goldman Sachs Global Investment Research 6 . 18%).4x multiple they are currently trading at. Looking at the sector. we expect banks to generate a normalized return on tangible equity of 15%. This is up from a low of 11% in January 2009. However. In thinking about the normalized return on tangible equity.5x tangible book. but it still a fraction of the 22% weight at the peak. almost half the market cap is in the Banks sector. other cyclical sectors are now back to trading at a premium to their historical valuation. and potentially even become the largest sector of the market again. Exhibit 9: Financials as a percentage of the S&P 500 25% Exhibit 10: Sub-sector breakdown of the Financials sector Insurance Brokers Discount Brokers 1% 1% Specialty Finance 1% 20% SPX Weight Market Structure 2% Asset Managers 4% Banks: Trust 4% Specialty Finance Credit Cards 4% Banks: Regional 6% LifeInsure 6% 15% 10% Banks: Large-cap 48% 5% 0% Dec-74 Dec-76 Dec-78 Dec-80 Dec-82 Dec-84 Dec-86 Dec-88 Dec-90 Dec-92 Dec-94 Dec-96 Dec-98 Dec-00 Dec-02 Dec-04 Dec-06 Dec-08 Dec-10 REITS 8% NonLifeInsurance 15% Source: Compustat and Goldman Sachs Research. the sector weighting has also been boosted by the addition of Berkshire Hathaway to the S&P 500 Index. which is an increasingly regulated metric. which has led some to suggest that Financials. One pushback to this argument is that Banks should trade at a discount to history as it is unlikely that returns ever reach historical levels. We see more room to run as Financials returns continue to recover towards normalized levels and there is room for multiple expansion. While much of this increase stems from the relative outperformance of the sector since the market bottom. One of the big questions that comes up is whether Financials can outperform further. 2010 United States: Financial Services Financials are currently the second largest sector of the S&P 500. Berkshire is now the fourth largest company in the sector. which we expect to be 1. lower than the average over the last 15 years (1. and the largest company by far in Non-Life Insurance. are “cheap” cyclicals that offer leverage to the market recovery. If this were the case. Even if returns end up being below that 15% level. In comparison.5% of the total market cap. but in-line with the early-1990s. one key factor is leverage. JP Morgan and Wells Fargo accounting for 30% of the sector market cap alone. Source: Compustat and Goldman Sachs Research. but higher than the average since Goldman Sachs Global Investment Research 7 . with Bank of America. it suggests that banks should trade at 2. accounting for 16.April 7. and in particular the banks. significantly higher than the current 1. there is still room for multiple expansion. Many financial sub-sectors are trading at a discount to historical valuations (see Exhibit 11). based on our estimates. We have more comfort in our sustainable ROA forecast.1%. as Exhibit 12 shows. which is still lower than average returns in the 2000s. banks trade at a discount to history 25% R2 = 73% Eventually should get back here 20% 2003 19941993 15% 1995 1992 1996 Normalized 2002 2000 2001 2007 2006 2005 2004 1997 1999 1998 Return on Tangible Equity Mortgage Insurance (2) Life insurance (2) Banks (1) Non-life insurance (2) Market structure Asset Managers Discount brokers REITs (3) Average 10% 1990 5% 1991 2010 2008 We may never see this again 2009 0% -5% 50% 100% 150% 200% 250% 300% 350% 400% 450% Price to Tangible Book (1): Price / Tangible Book.7x 17. Source: Goldman Sachs Research estimates.8x 16.7x 1. C is an extreme example. One other issue that investors are wrestling with is the impact of dilution on earnings.2x 11. (3) Price/FFO Source: Goldman Sachs Research estimates.5x 16.5x 13. 2010 United States: Financial Services 1934 (75 bp).4x 12. But in many cases.April 7.8x 12.9x 13. earnings per share would still be significantly depressed due to the increase in share count.4x 18. although this is clearly still an area of debate among regulators. particularly when compared to other sectors in the market.3x 17. Goldman Sachs Global Investment Research 8 .4x 15.6x 23. (2) Price / Book.2x Historical avg multiple 1.8x 13. most banks have not seen a comparable increase in earning assets.9x -Current multiple Price to Earnings Industrials Materials Discretionary Energy Info Tech Average (P/E) 17.0x 20. with most of the dilution being caused by the Banks. Citigroup exemplifies this story. we assume that banks are required to hold 8% Tier 1 common capital.2x 0.3x 18.8x 15. Despite the increase in shares. We calculate that shares are up 60% on average across Financials. Exhibit 11: Financials mostly trade at a discount to history Current multiple 1. even if pre-provision were to return to its previous run-rate.5x Premium / Discount to historical average 0% -48% -30% -43% -43% -6% 13% 30% -16% Premium / Discount to historical average 51% 39% 27% 17% -16% 24% Exhibit 12: Even adjusting for lower ROE. To get to 15% return on tangible equity.2x 1.8x 13. The dilution in Financials stocks has been extreme over the last two years.0x 18. and even adjusted for dilution.9x 1. See Exhibits 13-15.2x -Historical avg multiple 11.9x 0.7x 2. most banks are still trading at a substantial discount to the historical average. We believe the large caps are trading at a bigger discount to their long-term average earnings multiples than regionals and thus rate the large cap banks Attractive and the regionals Neutral. Goldman Sachs Research estimates.62 3Q09 $0. Exhibit 15: Large banks and regionals are trading at a 24% discount to long-term multiples price to normalized EPS by bank.0x Difference -24% -19% -21% Note: regionals ex Northeast. Goldman Sachs Global Investment Research WAL BAC MS BK C 9 .2x 10. Long-term avg since 1985 where available.30 $3.0x 14.0x Price to Normalized EPS 12.8x 9.000 Shares (mm) 30.0x 8. Goldman Sachs Research estimates.0x 6.000 20.5x 11.80 $1.April 7.000 10.30 $1.65 2Q08 3Q08 4Q08 1Q09 2Q09 $0.000 0 1Q08 $4. GS-coverage 18.30 $2.80 $3. FNFG NTRS FITB ZION FHN PNC PBCT COF HCBK DFS WFC JPM STT STI MI RF HBAN CMA CYN USB BBT AXP KEY Source: FactSet.0x 4.0x 2.30 $0.0x 0.80 $2.5x 12.0x 10..0x Indicates "Buy" rated stock Large banks Regionals Average Price to Earnings Normalized Long-term Avg 8.30 1Q08 2Q08 3Q08 4Q08 1Q09 2Q09 3Q09 4Q09 Gov't announced its intention to convert into common shares * market-cap weighted Implied Normalized EPS Consumer Discretionary Information Technology Consumer Staples Telecom Services Industrials Energy Health Care Materials Utilities Financials Pro-forma for gov't conversion $0. Source: Company data.4x 12.39 4Q09 Source: Goldman Sachs Research estimates. 2010 United States: Financial Services Exhibit 13: There has been significant dilution in Financials over the last year Change in Share Count (2007-2009) Average* Median -3% -1% -3% -2% -3% -3% -1% -1% 0% 0% 1% 2% 3% 0% 4% 1% 5% 4% 59% 12% Exhibit 14: Pre-provision shrinkage and increase in share count has resulted in a big decline in normalized earning power 40.80 $0.0x 16. 33 0.65 0.32 0. Source: Goldman Sachs Research. this is not surprising.02 0. to concentrate on the fundamental issues.02 0.25 0.13 0.27 0.70 0.16 0.64 0.02 0.21 0. Goldman Sachs Global Investment Research 10 .44 0.09 0.60 0.11 0.47 0.52 0.03 0.63 0.76 Exhibit 17: Financials correlation is at the lowest level since 2006 realized correlation across stocks 100% 90% 80% 70% 60% 50% 40% 30% 20% 10% 0% Oct-06 Oct-07 Oct-08 Feb-07 Feb-08 Feb-09 Oct-09 Dec-06 Dec-07 Dec-08 Dec-09 Feb-10 Jun-06 Jun-07 Jun-08 Jun-09 Apr-06 Apr-07 Apr-08 Apr-09 Aug-06 Aug-07 Aug-08 Aug-09 Apr-10 Jun-10 S&P 500 Financials ETF XLF XLY XLU XLV XLB XLE XLP XLK XLI SPX S&P Sector Financials Discretionary Utilities Healthcare Materials Energy Staples Technology Industrials S&P 500 Current 0.42 0.34 0. once again. While that has certainly been the case over the last few years.34 5-year percentile 0.48 0.54 0. In some ways.46 Note: The percentile is the rank of the current value as a percentage of the total observations. as regulatory fears from earlier this year dissipate.03 0.34 0. correlation across the group has started to fall dramatically in recent days (see Exhibit 17).April 7.33 0.22 0. with many of the stocks trading in lock-step with one another (see Exhibit 16).54 0.60 0. Source: Goldman Sachs Research.38 0.62 0.26 0. While we still see some key themes helping drive returns. The upcoming earnings season should provide investors with evidence as to these differentiating trends and provide opportunities for generating alpha. 2010 United States: Financial Services A return to micro from macro Financials are often considered one of the most macro-driven sectors in the market.61 0.19 0. and realize that there are many ways to differentiate across the group.38 1-year percentile 0.14 0. Exhibit 16: Financials tend to be one of the most highly correlated sectors ranked by 5-year percentile 1-year median 0.40 0.40 0.40 0.29 0.59 0. many of these are more stock-specific and cut across sectors (consumer provision leverage and capital management) as opposed to being large macro themes. investors are starting.60 0.58 0.69 5-year median 0.38 0. S&P LCD. Trepp. Loanperformance. Loanperformance. prime jumbo is getting worse MBS (2006 & 2007 vintages) Exhibit 21: CRE delinquencies continue to trend up CMBS CMBS . Exhibit 20: Within resi mortgages. Jan 10 Oct 06 Oct 07 Oct 08 Oct 09 Source: Company data.April 7. 60+ Delinquency Subprime Op ARM Alt-A Prime Jumbo Home Equity FRE/FNM Source: Company data. 2010 United States: Financial Services Theme #1: Provision leverage in consumer loan portfolios The credit cycle is clearly moderating.8% 5. Trepp. we see potential for reserve releases later this year. S&P LCD.2% 12. Goldman Sachs Research. Trepp.MoM change.3% 6. Defaults 1Q09 2Q09 3Q09 4Q09 1Q10 TD $ 19.0% 8.7% 8. Goldman Sachs Global Investment Research Dec-07 Jan-08 Feb-08 Mar-08 Apr-08 MayJun-08 Jul-08 Aug-08 Sep-08 Oct-08 Nov-08 Dec-08 Jan-09 Feb-09 Mar-09 Apr-09 MayJun-09 Jul-09 Aug-09 Sep-09 Oct-09 Nov-09 Dec-09 Jan-10 Feb-10 700 bps 600 bps 500 bps 400 bps 300 bps 200 bps 100 bps 0 bps -100 bps -200 bps -300 bps -400 bps QoQ Change in 30+ Delinquency 1Q08 3Q08 1Q09 3Q09 1Q10 TD 2Q08 3Q09 2Q09 4Q09 70 bps 60 bps 50 bps 40 bps 30 bps 20 bps 10 bps 0 bps -10 bps Avg chg in delinquency 1Q09 +19bps 2Q09 +32bps 3Q09 +37bps 4Q09 +56bps 1Q10 TD +80bps . On the other hand.4% 3. Exhibit 18: Credit card delinquencies have been better thus far in 2010 Credit Card Avg chg delinquency 2Q09 -13bps 3Q09 +4bps 4Q09 +3bps 1Q10 -12bps Exhibit 19: C&I defaults have started to slow down as well Leveraged Loans (proxy for C&I) Lagging 12-month Default Rate 12% 10% 8% 6% 4% 2% 0% Jan-99 Jan-00 Jan-01 Jan-02 Jan-03 Jan-04 Jan-05 Jan-06 Jan-07 Jan-08 Jan-09 Jan-10 11 60 Month over month change 50 40 30 20 10 0 -10 -20 -30 # of defaults Ann. as non-performing asset formation is slowing and reserves are closer to peak levels. Goldman Sachs Research. Trepp.1% 8. The improvement is most clear in consumer and commercial (C&I). Loanperformance.8% 16. Loanperformance. S&P LCD.6% $ of defaults # 8. Goldman Sachs Research. Goldman Sachs Research.5% July 09 Apr 06 Apr 07 Apr 08 Jan 06 Jan 07 Jan 08 Jan 09 Apr 09 Jul 06 Jul 07 Jul 08 Source: Company data. some prime jumbo mortgages and CRE continue to get worse. S&P LCD. See Exhibits 18-21. and should current trends continue. Source: Company data. we favor the large banks and credit card issuers vs. On this theme. BAC and JPM are our best ideas given leverage to consumer credit improvement and attractive valuation at 7X our normalized earnings estimates. Goldman Sachs Global Investment Research 12 . In particular. Exhibit 22: Scorecard – stocks with leverage to US consumer credit moderation US consumer credit cost as % of of revenue DFS COF AXP BAC JPM USB WFC Average 70% 35% 23% 6% 5% 3% 2% 16% US consumer credit cost as % of total credit cost * DFS AXP COF JPM BAC USB WFC Average 100% 85% 77% 51% 47% 40% 29% 62% US consumer credit as % of normalized earnings ** DFS COF JPM AXP BAC USB WFC Average 70% 60% 30% 30% 25% 20% 15% 28% Source: Company reports. as C&I is likely in-line with historical seasonal patterns based on commercial bankruptcies and leveraged loan defaults (although as a caveat. FactSet. in particular. We continue to believe high but stable unemployment leads to lower delinquency. Total delinquency was down 6 bp month on month while early delinquencies are down for the fourth straight month (see Exhibit 22). continues to improve. Goldman Sachs Research. See Exhibit 23. Looking ahead to earnings. and auto charge-offs have tracked down 12% using monthly data through February from Capital One and AmeriCredit. as evident in the most recent credit card master trust data.April 7. the regional banks. bank charge-offs typically fall over 20% in 1Q relative to 4Q based on data since 1985. Half of this seasonal decline is driven by declines in commercial charge-offs (C&I) with the remainder driven by commercial real estate and auto. while seasonally March to May are always strong on tax refunds and other factors. This year losses look set to fall although by a smaller degree. this regression approach tends to undershoot as losses are peaking). Delinquencies usually fall 8% over those months. early delinquencies are down 14%. Since the peak in October. Commercial real estate may be the one outlier in seasonality as delinquency data from the CMBS market implies that commercial mortgage issues are still increasing. 2010 United States: Financial Services Consumer credit. 86% 1.69% 0.71% 0.00% 0.62% 0.89% 0. left chart on dollar losses.34% 0.04% 0. 4Q -10% -20% -30% 2Q vs.62% 0.55% 0.53% 2.88% 0.91% 1.30% 0.20% 1. 1Q 3Q vs.56% 0.30% 0.75% 1.07% 1Q FY1 1.26% 1.56% 0. 4Q (bps) -61 bps -60 bps -57 bps -55 bps -51 bps -45 bps -44 bps -43 bps -43 bps -30 bps -25 bps -22 bps -20 bps -18 bps -17 bps -14 bps -13 bps -10 bps -8 bps -6 bps -6 bps -5 bps -4 bps 9 bps -28 bps 40% 30% 20% 10% 0% 1Q vs.32% 0.33% 1.41% 1. Goldman Sachs Global Investment Research Avg QoQ change since 1985 13 .65% 1.70% 0. 4Q 91) Year 1992 1990 1987 1993 1986 1991 1989 1988 1994 2006 2004 2002 2001 2003 1995 2000 2005 1999 1998 1997 1996 2007 2009 2008 Average 4Q FY0 1. 2010 United States: Financial Services Exhibit 23: The seasonality of credit – losses typically fall over 20% in 1Q vs.20% 0. with improvement in C&I.60% 0.70% 0.49% 0. CRE and auto avg quarter over quarter change in net charge-offs since 1985.58% 0.92% 0.79% 1Q vs.08% 0.76% 0.49% 0. right table on % NCOs (1992 = 1Q 92 vs.06% 0.76% 0.90% 1.86% 1.86% 0.64% 0. 3Q Source: Federal Reserve.95% 0.25% 1.48% 2.38% 0.90% 1. Goldman Sachs Research.April 7.61% 0. 4Q.65% 1. 2Q 4Q vs.64% 0. REITs are still one of the highest yielding sectors.0% PL BXP EVR PSA CNS CLMS CBL AB 2. have already expressed a desire to increase the dividend back to a more “normalized” level. either by paying dividends or by buying back stock.0% Dividend Yield (2009) 3. USB and NTRS.0% 2. some companies (such as BAC) have expressed a desire to buy back stock and reduce some of the dilution that occurred as a result of large capital raises in 2009. The dividend yield of the sector has fallen from an average of 2. but could start to normalize in 2011.0% Market Structure Mortgage Insurance Insurance Brokers Regional Banks Credit Cards Trust Banks Specialty Finance Non-Life Insurance Homebuilders Brokers Asset Managers Life Insurance Large Banks Financials MHP TROW VR PRE RE TRV VNO CB AWH ACE 2. dividends are much more likely than buybacks.5% PTP LAZ UNM AON 1. but more recently have been closer to 45%.0% 0. but has to date largely been limited in the sector.5% Source: Goldman Sachs Research estimates. payout ratios averaged 37% since 1992.5% Dividend Yield Exhibit 25: Companies expected to grow their dividend by 5%+ this year 70% 60% Dividend Growth (2009-2010) 50% 40% DUF 30% 20% 10% MS 0% 0.5% 4. While banks are currently limited in terms of how much capital they are able to return to shareholders in the form of buybacks and dividends.5% in the years leading up to the crisis to about 1.5% 5. and are expected to increase dividends by 7% this year. Source: Goldman Sachs Research estimates.5% 3. Goldman Sachs Global Investment Research 14 . REIT and Asset Manager sectors screen especially well on this metric.0% 2.0% 4.5% 3. Exhibit 25 highlights the 28 companies across our coverage universe that are expected to grow dividends by 5% this year. we believe that once regulatory uncertainty clears. Companies in the Insurance.April 7. the potential payouts may be substantial. This implies that dividend yields could be 5%-6%. Large banks are still at the low end of the spectrum and bring down the sector average. at least initially.5% REITs 0. such as JPM. we look at historical payout ratios and apply them to our normalized EPS levels. Historically. Exhibit 24: Financials sector dividend yield 3. M&A is also a possibility.0% 1. Some banks.5% 1.5% currently (see Exhibit 24). In our opinion. In order to estimate what the yield could potentially be. That being said. many Financials have accumulated excess capital positions and are increasingly willing to put cash to work. 2010 United States: Financial Services Theme #2: Capital management is beginning to be a key differentiator across the sector While banks tend to receive a lot of focus for their inability to pay dividends.0% 1. significantly higher than the current S&P 500 average of 1.0% 0.9% (see Exhibits 26-27). 85 $6.8% 20. Inc. The PMI Group.0% 8. Life Insurance Source: Goldman Sachs Research estimates.0% 50.93 1. Arch Capital Group Ltd.0% 15.0% 4. Inc. Knight Capital Group.April 7.2% 21.41 1. Janus Capital Group Inc. 2010 United States: Financial Services Exhibit 26: Banks pay 30-40% of earnings in dividend 55.40 $4.0% 40.96 2.0% Long-term average = 37% 2004-2007 average = 45% Exhibit 27: Normalized dividend yields could be significant Div Payout Ratio* GS Normalized EPS BAC WFC JPM $2.50 Peak 45% 45% 45% 45% 45% LT Avg 37% 37% 37% 37% 37% Normalized Div Peak 1.6% 17. We highlight the groups and stocks that have the highest remaining authorized share repurchases as a percentage of market cap (see Exhibits 28-29).0% USB 25.0% Insurance Brokers Credit Cards Non-Life Insurance Specialty Finance Regional Banks Homebuilders Large Banks Trust Banks Financials Brokers Asset Managers Exhibit 29: Buy and Neutral rated companies with the largest remaining repurchase authorizations as a percentage of market cap Remaining buyback authorization / market cap 25. Exhibit 28: Sectors with the largest remaining repurchase authorizations as a percentage of market cap Remaining buyback authorization / market cap 14.08 1.35 $6. nine companies in Financials have announced new buyback programs.8% 19.3% 23.7% 17.0% 12.50 $2.93 LT Avg 0. Ltd. For these names.0% 2.0% PNC 20.0% 6.0% 45.28 2.9% 19. Validus Holdings. Platinum Underwriters Holdings Ticker TRV ACGL JNS VR MCO MTH AON PMI NITE PTP Sector NonLifeInsurance NonLifeInsurance Asset Managers NonLifeInsurance Specialty Finance Homebuilders Insurance Brokers Mortgage Insurance Market Structure NonLifeInsurance Source: Goldman Sachs Research estimates. Buybacks have also picked up recently – since the start of the year.0% Jan-92 Jan-93 Jan-94 Jan-95 Jan-96 Jan-97 Jan-98 Jan-99 Jan-00 Jan-01 Jan-02 Jan-03 Jan-04 Jan-05 Jan-06 Jan-07 Source: Goldman Sachs Research.05 2. Inc.89 1.41 AVG Yield on Normal Div Peak 6% 6% 6% 5% 5% 6% LT Avg 5% 5% 5% 4% 4% 5% 30.5% 25.5% Company Name The Travelers Companies.0% Market Structure Mortgage Insurance REITs 0.0% 10. Goldman Sachs Global Investment Research 15 . Aon Corp. Source: Goldman Sachs Research. Moody's Corporation Meritage Homes Corp.61 2. Market Structure and Asset Management space. completion of these programs has the potential to drive upside and significant EPS accretion. primarily in the Non-Life Insurance.0% 35. 0 Increased buyback program SFG 8% 3% 3% 07/29/2009 08/13/2009 08/28/2009 09/15/2009 09/30/2009 10/15/2009 10/30/2009 11/16/2009 12/02/2009 12/17/2009 01/05/2010 01/21/2010 02/05/2010 02/23/2010 03/10/2010 Mar. and while the two traded together for most of the year.0 UNM 2.e. Goldman Sachs Global Investment Research 03/25/2010 -1% Announces intention to resum e share repurchases 16 . Goldman Sachs Research.Apr.Feb09 09 09 09 09 09 10 10 AVG Source: Biryini Associates.Nov.0 1.0 3.5 3. 2010 United States: Financial Services Our focus on buybacks in the context of capital allocation is largely aimed at identifying supports to both the market and company stock prices. including Unum Group (UNM).Oct.Dec.April 7.Jun09 09 09 09 Jul. SFG can be shown to have significantly outperformed UNM following the announcement of its share repurchase (see Exhibit 31). UnumProvident (UNM) and StanCorp (SFG) are smid-cap life insurance companies with similar underlying businesses (i.5 2. 2010 Stock return (%) 4 day return (%) around authorization 9% 7% 6% 5% 4% 3% 2% 1% 0% 03/09/2009 03/24/2009 04/08/2009 04/24/2009 05/11/2009 05/27/2009 06/11/2009 06/26/2009 07/14/2009 Indexed Price Performance Exhibit 31: Shares have reacted favorably to SFG’s buyback announcement Stock return (%) .Aug. Source: Factset. disability insurance).. Recent analysis by John Marshall of our Cross-Product team suggests that stocks that announced buybacks during the past year outperformed the S&P 500 by 290 bp in the four days around the buyback announcement (see Exhibit 30).SPX return (%) 4. For example.5 1. XL Capital (XL) and Public Storage (PSA). There are a number of stocks that we expect will begin to buyback stock this year.Sep. We have seen this in the financial space as well. Exhibit 30: Stock reactions around share repurchase announcements Through February. With these as a backdrop we are aware of investor focus on the impact of buybacks on stocks.May.Jan. We expect US-based M&A to increased 10-20% over 2009. using a three-factor regression model based on business fixed investment. Since 1982. and the Goldman Sachs economists do not expect much of a change over the course of 2010. However.April 7. See Exhibits 32-33. 1996. We remain optimistic that trends will improve over the course of this year. Sluggish equity volumes and low volatility has hurt commission growth. helped by rising global GDP. while F/X and commodities have lagged somewhat. Thus far in 2010. Markit. 2010 United States: Financial Services Theme #3: Capital market should bounce from a disappointing 4Q2009 While 1Q2010 did not shape up quite as strongly as many had hoped or expected. Exhibit 32: Client activity across various products remains strong in 1Q2010 AVD volumes. and in all but three of the years (1989. and unemployment trends as the input variables. the same period in 2009. but it is too soon to call it a trend. Goldman Sachs Research. with notable improvement in Asia-based activity outweighing an 11% yoy decline in European volumes. which has started to occur very recently. debt issuance for 1Q10 is quarter-ized. Low interest rates and a steep yield curve should continue to support a variety of carry trade strategies this year. Source: BATS. we remain upbeat regarding trends for the remainder of 2010. improving sentiment. 2000) the model accurately predicted at least the directionality of announced US M&A. FICC results this quarter should show a seasonal improvement. driven primarily by volume increases in rates and credit. QTD change for indices 40% 35% 30% 25% QoQ change 20% 15% 10% 5% 0% -5% -10% Debt issuance Interest Rate volumes FX volumes Commodities volumes Credit indices -4% -2% 19% 14% 17% 38% 4Q09 1Q10 Exhibit 33: Although equity trading is down year-over-year average daily trading volumes for Tape A/B/C shares in bn 12 Tape C Tape B Tape A 9 6 6% 4% 4% 1% 3 0 1Q07 2Q07 3Q07 4Q07 1Q08 2Q08 3Q08 4Q08 1Q09 2Q09 3Q09 4Q09 1Q10 Source: CME. Dealogic. equities appear to be off to a very slow start this year despite the fact that 1Q is typically the strongest quarter of the year. real GDP. also hurting revenues. these three variables have had a 90% correlation (81% R-squared) to US M&A volumes. The big focus area is M&A. Goldman Sachs Global Investment Research 17 . Goldman Sachs Research. CEO confidence and access to credit markets. which started off the year slowly but has picked up in recent weeks. and issuance has been weaker than expected. Equities could pick up over the course of this year if we start to see inflows into US domestic funds. announced global M&A volumes are up 12% vs. 000 800. respectively) given their more diversified business models. See Exhibits 36-37. but we note that M&A has likely benefited their other businesses. with an annual CAGR of 18%. Emerging markets have also become an increasingly important area for M&A. Since 1996. and notably. Source: Company reports. as evidenced by their recently announced advisory mandates for Prudential plc’s pending acquisition of AIA.000 600. such as lending. which could come to market if conditions continue to stabilize.April 7. Asia has had the most growth in M&A volumes. and trading. Goldman Sachs Research. have less exposure to M&A as a percent of their overall revenues (6% and 3%. such as Morgan Stanley and JPMorgan. Evercore has advised on some of the largest transactions of the past year. In addition. Lazard has the largest backlog across the smidcap broker space. 2006-9 100% 90% 80% 70% 60% 50% 40% 30% 20% 10% 0% EVR GHL LAZ JEF DUF PJC RJF SF Average = 36% Announced Global M&A Deal Volumes ($ mn) Americas EMEA Asia-Pac Source: Company reports. Blackstone remains our top Buy (CL) idea among the alternative asset managers. and Lazard and Blackstone increased their presence as well. Larger firms.200.000 0 1Q10 (Q-ized) 1Q98 3Q98 1Q99 3Q99 1Q00 3Q00 Exhibit 35: …and the boutiques are the most leveraged to M&A trends advisory revenues as % of total revenues. Over the last two quarters sponsor-backed IPO filings reached $6 billion in value across 31 deals. underwriting. See Exhibits 34-35. which are the most leveraged to a rebound. including BNSF/Berkshire and ACS/Xerox.000 400.400. Goldman Sachs Research.000 1.000 1. 2010 United States: Financial Services Given our belief that we are in the first year of a multi-year recovery in global M&A volumes. JPMorgan and Morgan Stanley are among the strongest large-cap participants in Asia-based M&A thus far in 2010.000 200. BX is well positioned to deploy capital amid improving credit availability and attractive valuation prospects as it currently has $28 billion in dry powder (29% of AUM). Exhibit 34: The pace of M&A announcements has quickened in the past six months.000 1. Goldman Sachs Global Investment Research 18 .600. Alternative asset managers are also well-positioned for a recovery considering record levels of dry powder and improving financing conditions for deals. compared with 8% in EMEA and just 3% in the United States.000. sponsor-backed IPOs are likely to pick-up given the current backlog. we remain Attractive on the smid-cap brokers and boutiques. Despite a soft start to the year. led by a recovery in the Americas… 1. Goldman Sachs Research.left axis % of total M&A . Goldman Sachs Research.right axis Source: Dealogic. 2010 United States: Financial Services Exhibit 36: Financial sponsor M&A volumes are off to a soft start in 2010 Financial sponsor backed M&A announcements ($ billions) 350 300 25% Exhibit 37: …but dry powder remains at record levels Committed but not yet invested private equity capital globally (as of Dec ’09) 600 501 462 379 163 300 186 178 280 100 259 503 62 Asia 250 200 150 100 5% 50 0 2009 Q3 2010 Q1 0% 15% % of total Private Equity Dry Powder ($ bn) 20% Sponsor Volumes ($ bn) 500 400 EU 10% 200 US 2003 2004 2005 2006 2007 2008 2009 Sponsor Volumes ($ mn) .April 7. Source: Prequin. Goldman Sachs Global Investment Research 19 . Exhibit 39: Great affordability sets the stage for better sales ahead 60% 55% 50% 45% 40% 35% 30% 25% 20% 15% 1975 1977 1979 1981 1983 1985 1987 1989 1991 1993 1995 1997 1999 2001 2003 2005 2007 2009 Exhibit 38: New home sales are unsustainably low Affordability = Mortgage/Income +1 SD Average . and (3) we expect stability in house prices as lenders continue to work with borrowers to avoid foreclosures. prices have recently shown more stability. See Exhibits 38-39. The strong spring selling season (late January-end of April): We expect positive macro and micro data points suggesting that the Spring. We have already heard plenty of positive micro data points and expect the macro data to reflect this soon.1 SD Source: US Census Bureau. Better industry figures and significant share shift to the large public builders and away from small. Source: US Census Bureau. On the commercial side. The brief slowdown (May-June): As the tax credit draws to a close. but there is potential for more transactions over the course of 2010 and into 2011. All in. We expect three distinct periods of sales activity for the group. private developers sets the stage for better equity prices across the builder space. Goldman Sachs Global Investment Research 20 . sentiment appears to have moved ahead of the fundamentals. when 50-60% of a builder’s annual deliveries are pre-ordered. 2010 United States: Financial Services Theme #4: Real estate prices are stabilizing as the hunt for yield hits real assets While we may just be in the eye of the storm. is going well. Residential real estate showing signs of stabilization Homebuilders are well positioned to benefit from an improvement in new home sales from the very depressed levels of 2009. The likely effect is a much strong March and April than expected but a more subdued May and June. as there is still the risk from ARM resets and CRE debt re-financing. low interest rates have pushed these issues further out into the future. aided by a lower mix of distressed sales. affordability combined with a return of jobs and confidence sets the stage for higher sales from current trough levels. On the residential side. The resumption of growth (July-December): We expect new home sales to return to positive growth as three factors drive growth: (1) the Goldman Sachs economists are expecting non-farm payrolls to begin to grow in March and to continue to do so throughout 2010. (2) we expect mortgage rates to remain low. we expect 1-2 months of pulled-forward demand due to the expiration of the government’s homebuyer tax credit.April 7. Source: Federal Reserve. While it may seem that we have never had these levels of inventory we note that there were 16 months ahead of the 1982-84 doubling of new home sales.0 Norm al Supply 0. as rates feel in the early 80’s but are unlikely to fall from current levels. The cash on banks’ balance sheets is at much higher levels than ever before.200 US Banking Industry Cash Assets ($bn) 1. 2010 United States: Financial Services Historically. banks have not been price sensitive with delinquent and foreclosed properties but we believe this time is different. The high level of shadow inventory has the potential to make this adjustment this cycle take a lot longer.5 months of total inventory across the United States today with an approximate split of 1:2. Exhibit 40: Inventory is about 18 months. Our conversations with banks and distressed real estate investors suggest that further accommodative policies are being implemented internally. and the number of spec homes it has should enable the company to take share from other builders.000 800 600 400 200 0 01/03/73 01/03/75 01/03/77 01/03/79 01/03/81 01/03/83 01/03/85 01/03/87 01/03/89 01/03/91 01/03/93 01/03/95 01/03/97 01/03/99 01/03/01 01/03/03 01/03/05 01/03/07 01/03/09 Cash at $1.0 Foreclosures 10. Consider: There are 18. new home sales have doubled off the bottom over a two-year period. creating lower urgency to move distressed properties from a bank perspective (see Exhibit 41). After liquidating many foreclosed properties in 2008 banks are much more sensitive to home prices driving lower supply to the market than would otherwise be the case.0 15.0 90D+ 5. given the magnitude of the potential issues if a bank’s entire balance sheet had to be written down to reflect another material decline in home prices. Source: US Census Bureau. We have seen principal reductions and mortgage term extensions grow as percentage of usage in aggregate loan modifications. Historically. however.0 Mar-94 Feb-96 May-90 Oct-03 Jul-86 Jun-88 Jan-98 Dec-99 Nov-01 Sep-82 Aug-84 Sep-05 Apr-92 Aug-07 Jul-09 Note: Data based on quarterly filings. only slightly higher than in 1982 Current + shadow inventory 25. See Exhibit 40. Although we are not expecting a quick doubling of sales.5 as of 3Q09 * Exhibit 41: Cash at banks has created low urgency in moving distressed properties at lower prices 1. DR Horton (DHI) is our favorite name. It is one of the few builders that will be profitable in 2010. with 6.3TN = 11% of total assets 20. we continue to believe that these currently low levels of housing starts and new home sales will not be sustained in a growing economy.5 months of “regular inventory” and 12 months of “shadow”. Goldman Sachs Global Investment Research 21 . Within the homebuilder space.0 Total Months Supply of Home Inventory Adjusted m onths' supply w as 21.400 1.April 7. monthly data points suggest recent decline as sales have increased. One big question on this topic is what the impact of rates will be. is whether Freddie Mac steps up its put back rate. Specifically. Exhibit 42: HAMP continues to grow which could begin to meaningfully benefit MI losses on a go-forward basis Mortgage Insurance Industry Participation in Home Affordable Modification Program HAMP Permanent Mods (# of loans) 120. the most recent data point (January HAMP report from the Treasury) suggests some early signs of success (see Exhibit 42). See Exhibit 43.444.5% of delinquent mortgages (60 day+). While not yet material to the overall 4. Fannie Mae has been driving most of the volume and the focus is still on the 2007 vintage. cumulative permanent modifications increased to 160.000 loans which have been permanently modified by the servicers and are pending final borrower approval.623 3.421 $18.000 0 Incremental 1Q2010 ► X Implied Mod Jan.265 = Implied Cure Benefit $150.000 40.150. Furthermore. Goldman Sachs Global Investment Research 22 . which are likely to be a risk to banks earnings this year.5 million borrowers behind on their payments. therefore.665. a 75% increase in one month.000. In addition. More importantly. One issue that has come up a lot more recently is rep and warranty charges. there are an additional 76. 2010 Implied Mortgage Insurers Source: United States Treasury Department. Recent data points suggest continued acceleration of put-back requests from the GSEs.936 3.000) is a mere 3. Goldman Sachs Research. this issue will likely last for several quarters / years as it’s still unclear how much ultimately gets put back at this point. it theoretically does so in part due to anticipation of successful mortgage modifications.000 MTG RDN PMI GNW Other MIs 116.465 60.498 Reserve Per Loan $26.662 4.000 20. company commentaries. 2010 United States: Financial Services While shadow inventory continues to grow.611 $19. recent news from Bank of America that they are willing to forgiveness principal for borrowers where loan-to-value ratios are above 120% imply that banks are willing to work with some borrowers.499 5.198 10.621 $76.613 $86. Mods MTG RDN PMI GNW 1.312 1.546. particularly in those circumstances where losses are likely to be significant anyway.April 7.207 MIs = 15% Mortgage Insurers 4Q 2009 HAMP 17. the rate of acceleration is meaningful.000 80.773 $19. A big swing factor.874 1. While the sum of these two (192.000 100.860 MIs = 15% HAMP Jan.116 $68.221 1.297 66. although data is skewed by GNMA put-backs where underlying risk is guaranteed by HUD $25 bn Estimated Gov't Insured Mortgage Repurchases Estimated Non Gov't Insured Mortgage Repurchases Provisions ($mn) Reserves ($mn) 4Q09 450 400 316 220 59 1. a long-term average of 12x. 4Q08 1Q09 2Q09 3Q09 4Q09 Note: estimates based on BAC's 3Q and 4Q disclosure of gov't vs.1 bn $4.445 49% 2. That said. Capitalization has improved across the sector but on average. Similarly. properties with more challenging capital or leasing hurdles. Source: SNL DataSource.500 200 106 nr nr Amount of Mortgage Repurchases $20 bn $19. we maintain our Neutral coverage view on Regional Bank stocks. and company data. We maintain that CRE values are highly dependant on funding costs as rent and occupancy growth should be modest beyond 2010. Pricing – It has been difficult to assess a base level of CRE pricing as financing remains limited (lack of CMBS) and transactions volumes are off 80% from peak levels of 2007 (see Exhibit 44). We maintain our Neutral coverage view on REIT equities as current valuation has already discounted a robust recovery in fundamentals. the commercial real estate crisis seems to be on hold and in certain examples pricing and fundamentals have improved from the bottom. recent data points indicate that CRE prices have tightened as it seems that there is too much capital chasing too few deals for high-quality assets (see Exhibit 45). STI and FHN.3 bn $0. which make timing of CRE loan losses difficult to predict.7 bn Repuchases by Vintage * Vintage % of Total Pre-2007 2007 2008 20% 60% 20% Ticker BAC * JPM * WFC STI 3Q09 322 300 146 136 26 930 $15 bn *: based on JPM.8 bn 2006 origination share Implied market run rate *: 4Q09 estimated. REITs now trade at 17x our 2010 FFO estimates vs. $10 bn $7. data points are limited thus far as asset transaction and lease activity to date has been low. That said.April 7. 2010 United States: Financial Services Exhibit 43: Bank repurchases continue to increase. CRE as a percentage of total risk based capital remains high at 107%. CRE pricing stabilizing but on low volume. Goldman Sachs research.4 bn $0 bn 1Q08 2Q08 3Q08 $1.9 bn $2. non-gov't insured repurchases.950 Ticker JPM STI FHN FITB BAC * 3Q09 nr 123 61 10 nr 4Q09 1. “extend and pretend” loan modifications by banks remain prevalent. we believe there should be a bifurcation in pricing for Class A assets vs. *: mgmt indicated that the reserves were original established as part of the CFC acquisition and are currently "in the billions". Lastly. While this is encouraging.9 bn FHN Total $5 bn $1. refi gap remains a question In the current low rate environment.9 bn $4. Goldman Sachs Global Investment Research 23 . a true recovery may take longer than in prior cycles. Goldman Sachs Global Investment Research 24 . Source: PPR.0% 4.9% -18.9% 2011E 7.1% -16. Fundamentals – In most markets. signs of the bottom for rents and occupancy are emerging and we expect comparisons to improve on a quarterly basis over the course of this year.7% -15. Source: Real Capital Analytics.3% 2.7% -18.4% -25.6% -29.0mn 12.0% 8.5% 4Q10E -9. Real Capital Analytics.0% J A J O J A J O J A J O J A J O J A J O J A J O J A J O J A J O J A J O J '01 '02 '03 '04 '05 '06 '07 '09 '10 avg cap rate spread (bps) 600 500 400 300 200 100 Recent CRE Transactions Asset Griffin Towers (office) Columbia Uptown (apt) The Palatine (apt) 8599 Rochester Ave (ind) Value $90.0% 10.1% 2Q10E -16.9% -20.5% Vacancy Rate.8% 5.2% -18.3% Angelo Gordon JV Van Metre Compaies Crescent Heights KTR Capital Partners Seller Maguire Properties Pennrose Properties Monument Realty Panattoni Dev'l Monthly Price Change Index Value (Right Axis) Source: Moody’s. as our economists expect the unemployment rate to pick up over the course of this year and not peak until the first half of 2011. Exhibit 46: FFO year-on-year growth comparison to improve incrementally in 2010 FFO growth by sector Regional Malls Office Apartments Industrial Shopping Centers REIT Average 1Q10E -32.3% -30. For REITs specifically.5% -12.7% -4.1% -15.6% -19.7% 2010E -18.8mn 118.4% -12. See Exhibits 46-47.7% -19. Bloomberg.5% 3Q10E -10.1% 5.6% -2.0mn 11.2% -14.5% 7.5% 2. we expect FFO growth to be flat by year-end and turn positive in early 2011.April 7.0% 4.6% -22.5% -19.0% 2. 2010 United States: Financial Services Exhibit 44: CRE values are still off 30-40% but may be inflecting indexed as of YE-2000 10% 8% 6% 4% 2% 0% -2% -4% -6% -8% -10% Dec-00 Dec-01 Dec-02 Dec-03 Dec-04 Dec-05 Dec-06 Dec-07 Dec-08 Dec-09 Jun-01 Jun-02 Jun-03 Jun-04 Jun-05 Jun-06 Jun-07 Jun-08 Jun-09 100 120 140 160 180 200 Exhibit 45: Spreads still wide but recent deals show tighter bids can be hit as of March 2010 10-Year Treasury 12. Source: Goldman Sachs Research estimates.8% 3. by sector Exhibit 47: CRE fundamentals lag the broader economy – we do not anticipate a recovery until 2012 / 2013 20% 18% 16% 14% 12% 10% 8% 6% 4% 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 M ultifamily Office CRE fundamentals typically lag the economy by 18-24 months Retail Industrial We expect FFO growth to improve on quarterly basis going into 2010 with modest recovery in 2H and 2011.6% -20.6% 3.3% -16. That being said.2% -7.3mn Date Cap rate Buyer Mar-10 Mar-10 Feb-10 Jan-10 8.0% 6.0% -41.0% 4. Market rents have started to flatten out after a period of steep declines in late 2008 and much of 2009.4% -1. 2010 United States: Financial Services Bank losses – The key concern for banks are what losses may ultimately total.April 7.0% 2. Part of the issue is persistency – given the long-tailed nature.0% 1.0% 3. To date.0% 70% 9 10 64% 69% 6. a fraction of the 7% we expect them to eventually realize. See Exhibits 48-49.0% Exhibit 49: It will take 10 years to reach cumulative default Commercial Mortgage Losses: Cumulative recognized to date by banks 4.0% 0.0% 3Q07 4Q07 1Q08 2Q08 3Q08 4Q08 1Q09 2Q09 3Q09 4Q09 GS est 60% 28% 38% 50% 40% 18% 30% 9% 20% 0% 2% 10% 0% 0 1 2 3 4 5 6 7 8 47% 57% 5.5%. Goldman Sachs Global Investment Research 23 25 99% 100% 99% 99% .0% 100% 15 16 88% 91% 17 18 93% 95% 19 20 97% 98% 21 22 Years since origination Source: PPR.0% CRE cumulative default profile 80% 11 12 74% 79% 90% 13 14 83% 85% 7. we expect it could take up to 15 years for banks to fully realize the losses on CRE. Source: Goldman Sachs Research estimates. banks have recognized losses of about 2. Exhibit 48: Banks recognized losses are a fraction of what they may ultimately end up being 8. 00% 1.00% 7. Target Fed Funds * Prior to Yesterday's Discount Rate Increase 38 bp 2 .00% Jul-03 Jul-04 Jul-05 Jul-06 Jul-07 Jul-08 Oct-03 Oct-04 Oct-05 Oct-06 Oct-07 Oct-08 Jul-09 Oct-09 Jan-03 Jan-04 Jan-05 Jan-06 Jan-07 Jan-08 Jan-09 Apr-03 Apr-04 Apr-05 Apr-06 Apr-07 Apr-08 Apr-09 Jan-10 1 .April 7.75% from some providers and construction loans is around 3%-4%. Goldman Sachs Global Investment Research 26 .00% 6.00% 2. debt service coverage has stayed above 1X (see Exhibit 52).5 0 2003 .0 0 1 . Over 60% of loans in the United States are floating rate. If and as rates start to increase. including increasing the spread between the discount rate and the Fed Funds rate.0 0 Febr ua ry 2 01 0 Impl ied September 2 00 9 Implied 0 . Typically. but have to go up eventually Low rates has unquestionably helped to stimulate the economy. we estimate that the rate on home equity loans is as low as 2. and reducing the outstanding balances in the Term Auction Facility towards zero. One of the big questions with regards to interest rates is whether an increase will cause a new round of credit problems. option ARMs are originated with a fixed teaser rate that is good for a defined period of time. So while many properties have loan-to-value readings above 100% as a result of falling prices. the market expects rates to start increasing as early as the second half of this year. so low rates have helped keep borrowing costs quite low. Source: CME/CBOT. Discount brokers are one of the areas that have the most to gain given their sensitivity to the short end of the curve. but investors have tempered their expectations in recent months. In addition. 2 . but also by supporting housing demand and boosting capital market activity. target Fed funds 8. Goldman Sachs Research.0 0 FEB M AR 10 10 APR M AY 10 10 JUN 10 JLY 10 A UG 10 SEP 10 OC T 10 NOV DEC 10 10 JAN 11 FEB M AR AP R MA Y JUN 11 11 11 11 11 JLY 11 AUG 11 SEP 11 OC T NOV 11 11 DEC 11 JAN 12 Source: Federal Reserve.2007 Median 100 bp *: using mid point of target ranges. Using the CME curve as a proxy. it is less likely that there will be much payment shock. 2010 United States: Financial Services Short rates are likely to stay lower for longer. they forecast the Fed Funds rate to stay near-zero through 2011. Goldman Sachs Research. See Exhibits 50-51.5 0 Exhibit 51: CBOT Fed fund futures now imply 100 bps of Fed rate hikes through May 2011 (vs prior expectations of such hikes by October 2010) Implied Fed funds rate Target Fed Funds Discount Rate Discount Rate vs. For example. While our economists do expect the Fed to reverse most “technical” factors in the near term.00% 0. Exhibit 50: Fed moving discount rate back toward more normalized levels relative to Fed Funds discount rate vs. although higher rates may hurt credit trends.00% 3. if rates start to increase because of stronger growth. outflows are likely to continue as investors move into higher risk-reward assets.00% 4. While money market funds could also gain as yields move back to normal levels. another positive impact of low rates is that as rates on option ARMs reset. not only by cutting funding costs. Regional banks will also likely see an improvement in margins. we would likely review our positioning across the sector.5 0 0 .00% 5. April 7. After that period. there is some concern that an increase in rates may result in a new round of losses (see Exhibit 52). Goldman Sachs Global Investment Research 27 . delinquencies have picked up following the reset. particularly when the payment shock is high.8 1. Low rates imply that payment shock will fall even further to 20%-30% next year as interest rates stay near zero.4x Change % D60 6m after reset Exhibit 53: Delinquencies positively correlated with payment shock 70% 60% 50% 40% 30% 20% 10% 0% 1-25% 25-50% 51-75% 76-100% 101+% paym ent shock at reset Source: Loanperformance. When the Fed does begin to tighten its fiscal policy and short-term yields begin to shift higher. 2010 United States: Financial Services often five years. net interest margins should move back to more normalized levels. as these companies typically invest cash collateral in LIBOR-based securities but pay out Fed Funds-based rates. Given that 2010 and 2011 are peak years for option ARM resets. We estimate that the average EPS effect on the Discounters for the first 100 bp move in Fed Funds/Treasury yields will be roughly 24% on our 2011 estimates (see Exhibit 54). Similarly. Exhibit 55 summarizes how we would be positioned should rates start to increase. Currently. floating rate LIB+50bp w ith 100bp floor **Cash flow divided by debt expense Source: Goldman Sachs Research One of the biggest beneficiaries of rate increases across the space would be the discount brokers.50% 4. which is down considerably from 160% at the end of 2007. A significant amount of CRE matures over the next few years as well and likely will need to be re-financed. plus a spread. Loan to value 2007 Origination Annual cash flow Cap rate Property value Loan Loan to value (LTV) Loan rate* Annual debt expense Debt service coverage (DSC**) 5 5% 100 70 70% 5.9 1.0x -73% -39% 1.50% 2.0x Today 4 8% 50 70 140% 1. the rate is reset and then floats based on a specified index (often the Monthly Treasury Average (MTA). -20% 1. a rising Fed Funds rate should benefit security lending spreads at trust banks. Historically. payment shock is approximately 30%-40%.3x *Assume 30y amortization schedule. Exhibit 52: Debt service coverage vs.6x -50% 0% 2. approximately 10% of total industry assets or 40% annualized organic decay. However. The yield differential between money market funds and CDs remains at the historically wide level of 125 bp. deposit mix shift from non-interest bearing to CDs becomes a headwind Trust banks benefit most in a high rate environment after the Fed has stopped rising rates.28 $0.24 ($0. Goldman Sachs Global Investment Research 28 .50 $1. what is important is what is driving the higher rates. In the first quarter. which is likely to keep pushing investors out of money funds.33 $0. But if rates rise because of better growth expectations. then money markets should see inflows as investors flock to safety.17 $0.April 7.30 $0.40 $1. money markets actually see more dramatic outflows as investors move up the risk curve. 2010 United States: Financial Services Exhibit 54: SCHW and TRAD most sensitive to a 100 bp shift higher in rates 2011E EPS Charles Schwab TradeStation TD Ameritrade optionsXpress E*TRADE Financial Average Note: TRAD estimate based on 100 bps increase in US Treasury yield $0. and is one of the key reasons behind our CL-Sell rating on the stock. While higher yields should theoretically also help money market funds given the more attractive yield. See Exhibits 56-57. money market funds saw outflows of nearly $325 billion.80 $0. This would mark a record quarterly outflow for the industry. Goldman Sachs Research.3% Regionals Above 3%.1% Cards (ex AXP) Rationale(s) Immediate leverage to higher rates Business model has become more asset sensitive but it is hard to pass on to customers with Fed funds above 1% Below 1%. Source: Company data.00) % Change 41% 41% 19% 18% (0%) 24% Exhibit 55: The outlook for different sectors when rates rise Fed Funds Best Interest Income Performance Discount Brokers 0% . regionals don't benefit much given interest rate floors 1% . Above 3% Trust Banks Source: Goldman Sachs Research estimates. FII is one of the most leveraged names to money market funds.11 Impact $0. If rates are going up because of inflation concerns. higher rates do not necessarily imply that money market outflows will reverse. The first few increases in rates are usually neutral to negative for trust banks NII. 000 1Q09* -10% -20% -30% -40% 50% 40% Annualized Organic Growth Rate 30% 20% 10% Exhibit 57: The yield differential between MMFs and CDs remains wide 7-day annualized MMF yield versus 1-year CD rate 6. Goldman Sachs Research. Goldman Sachs Global Investment Research 29 .000 200.000 0% 0 -100.0% 2.000 -300.0% 4.0% 125 bps 0.0% 3.000 400. Goldman Sachs Research.0% 5.0% 1.April 7. 2010 United States: Financial Services Exhibit 56: Money market funds are on track to see record outflows in 1Q10 Quarterly money market fund flows.000 100.000 -200.0% Apr-07 Apr-08 Apr-09 Oct-06 Oct-07 Oct-08 Feb-07 Feb-08 Feb-09 Oct-09 Dec-06 Aug-07 Dec-07 Aug-08 Dec-08 Aug-09 Dec-09 Feb-10 Jun-07 Jun-08 Jun-09 Money Market Yield 1-Year CD Rate *1Q10 is "quarterized" Flows (left axis) Organic growth (right axis) Source: Investment Company Institute. 1Q2010 data is quarterized based on 2/18 data 500.000 Money Market Flows ($ mm) 300. Source: Bloomberg. 6 0.0 0.9 -0.1 0.0 PBCT 0.9 1.5 0.7 C 0.0 0.0 0.5 1.0 -0.0 -0.2 0.0 ----0.24 $0.5 0.6 1.6 1.4 PNC 0.1 --0. But.0 0.1 0.2 0.1 0. Assuming USB reports make up 80% of total repo outstanding. none of the legislation. which are currently under development.2 0.1 FITB 0.0 -0.0 Avg 3.9 3.2 0.14 $0.1 --0.8 0.6 0.0 ----0. regulation and proposals has a major impact on its own. So far.2 0.2 0.0 RF 0. (3): estimated using 15bps of Total Assets .96 $0.23 $0.1 MI 0.2 1.0 --0.3 USB 0.8 0.3 BBT 0.21 $0.1 -0.6 -0. there has been a lot of focus recently on the Basel III proposals. given a TARP tax (Financial Crisis Responsibility Fee) + CARD Act + balance sheet caps + overdraft fee limitations.2 -0.g.1 0.0 ----0.1 --0. the CARD act).7 3. (2) banks may pass on costs to customers.4 1. 2010 United States: Financial Services Regulatory issues likely to remain a topic for the foreseeable future While most of the focus recently has been on the Senate version of the financial regulatory reform bill.1 Regionals FHN FNFG HCBK HBAN KEY 0.33 $0.16 15% 23% 15% 15% 4% 7% 6% 8% 12% 8% 13% 24% 16% 5% 2% 5% 6% 6% 0% 3% 7% 7% 7% 0% 13% 7% 2% 7% 9% 2% 3% 4% 5% 1% 15% 0% 2% 8% 6% 6% 23% 2% 2% 4% 4% 3% 15% 0% 0% 9% 0% 6% 15% 2% 0% 2% 0% 0% 4% 5% 0% 2% 0% 0% 7% 4% 0% 2% 0% 0% 6% 0% 0% 4% 3% 0% 8% 0% 0% 6% 5% 0% 12% 0% 0% 5% 3% 0% 8% 0% 11% 2% 0% 0% 13% 2% 21% 2% 0% 0% 24% 0% 15% 1% 0% 0% 16% 3% 0% 2% 0% 0% 5% 2% 0% 0% 0% 0% 2% 3% 0% 2% 0% 0% 5% 4% 0% 2% 0% 0% 6% 6% 0% 0% 0% 0% 6% 0% 0% 0% 0% 0% 0% 0% 0% 3% 0% 0% 3% 6% 0% 1% 0% 0% 7% 5% 0% 2% 0% 0% 7% 4% 0% 3% 0% 0% 7% 0% 0% 0% 0% 0% 0% 9% 0% 4% 0% 0% 13% 5% 0% 2% 0% 0% 7% 2% 0% 0% 0% 0% 2% 6% 0% 1% 0% 0% 7% 3% 2% 3% 1% 1% 9% (1): estimated using 15% of annual deposit servicing charges (5% for trust banks). BAC and C.1 --0.67 $0. JPM.2 0.7 0. Also assuming 10% for trust banks as they reduce the repo books.6 1. Specifically.8 -0.0 0. On the capital side.4 5.7 0. where netting of most derivatives is no longer allowed. we estimate the cumulative impact of potential regulation actions as 9% of our normalized earnings on an equalweighted basis.9 1.10 $0.0 0.5 0. which would affect large US banks with capital market operations such as MS. (2): estimated where not provided.4 3.9 29.0 -0.1 --0. cumulative outcome.43 $0.0 0.8 0.2 0.01 $0.0 --0.0 -0.0 --0.UST Repos. Looking first at some of the proposals that would have a direct effect on earnings. We note there is still a great deal of uncertainty around many of the outstanding issues.2 -0.1 0.15 $0.04 $0.1 --0.3 0.1 0.00 $0. the cumulative effects add up quickly.0 ----0.0 0.0 -0.0 --0.3 AXP 0.2 STI 0.0 --0.2 0.0 0.00 $0.32 $0. (4) assuming 10% decline in b/s size for big 3 banks. One of the main concerns for investors has been the grossed-up leverage ratio.1 0.5 0.39 $0.39 $0. That said. (1) it is unclear which proposals will ultimately pass.4 1.10 $0.9 0.11 $0.9 0.3 0.FDIC-assessed deposits .8 1.2 0.0 -0. although one risk investors struggle with is the final.5 1.1 -0.1 -0.18 $0. and potentially more to come. there are also many regulatory and legislative proposals related to capital and liquidity levels that are likely to have an impact on the sector.1 -0. along with major Goldman Sachs Global Investment Research 30 .2 0.2 Large Banks JPM MS WFC 0.15 $0.5 1.4 0.0 5.07 $0.0 -0.0 CMA 0.2 0.05 $0.0 -0. we expect the net effect to be manageable for most banks under coverage.0 -0.9 0. and (3) some of the impact is already reflected in our estimates (e.3 1.1 -0.8 0.10 $0.5 5. Goldman Sachs Research estimates.37 $0.0 0.2 0.1 0..April 7.04 $0.0 0.0 --0.0 9.1 0.1 0.1 0. similar to banks that provide guidance. Source: Company reports.1 -0.0 0.8 0.41 $1.3 -0.2 WAL 0.9 1.3 0. Exhibit 58: We estimate that regulatory actions could negatively impact banks’ normalized earnings by 9% $bn Regulatory Impact (pre-tax) OD / NSF Fees (1) CARD Act (2) Financial Crisis Responsibility Fee (3) Restrictions on "Liabilities" (4) Restrictions on "Prop" (5) Total Regulatory Impact After-tax Impact S/O (bn) "Gross" EPS hit % of Normalized EPS Impact by Regulatory Action OD / NSF Fees (1) CARD Act (2) Financial Crisis Responsibility Fee (3) Restrictions on "Liabilities" (4) Restrictions on "Prop" (5) Total BAC 0. See Exhibit 58.2 CYN 0.0 0.5 0.0 --0.04 $0.1 --0.4 Trust Banks BK NTRS STT 0.9 0.0 0.0 0.0 ZION 0.0 ----0.5 0.7 Cards COF DFS 0.3 5.9 1.7 0.0 --0.4 0.Tier 1 Capital .0 1. (5): using disclosed % of revenue by bank where applicable.0 0. Leverage measured using tangible co mmo n equity. banks may choose to exit this market as it becomes prohibitively capital intensive (see Exhibit 60). Goldman Sachs Research estimates. a stringent requirement would likely result in further deleveraging at large banks. non-agency RMBS capital utilization would likely increase to 33% from 5% currently under the new proposal. bank lending and securitization have shrunk by over $1 trillion. B asel III o n a pro -forma basis with no future earnings. RMBS only accounts for 5% of total trading revenue. and as a result. not bigger (see Exhibit 61). Based on our calculations. 10% 0% Revenues * excluding agency M BS Capital Utilization Source: Company data. but at the same time regulatory efforts to make banks hold more capital or to limit non-deposit liabilities both imply that the banking industry would become smaller. 2010 United States: Financial Services international banks such as Credit Suisse. In addition. etc. UBS. leverage on Basel III proposal Exhibit 60: Non-agency mortgage could turn prohibitively capital intensive under market risk proposals our estimate of non-agency mortgage revenues currently as % of total across industry. which has been offset by government lending (via Fannie. Goldman Sachs Global Investment Research 31 . So far this cycle. based on our estimates. private markets must take up the slack. and capital utilization under proposed market risk framework 100% 90% 80% 180x 160x 140x Leverage Ratios 120x 100x 80x 60x 40x Current: 19X average gross leverage ratio Basel III as proposed: 78X average gross leverage ratio 70% 60% 50% 40% 30% 20% All Other Mortgages* 20x 0x C BAC WFC JPM MS WFC C BAC JPM MS M easured as 4Q09 = current. under the proposed market risk framework. changes in b/s size etc. While the leverage threshold has not been set. Part of the reason there is such a focus is the potential impact these new capital requirements will have on credit growth. Source: Goldman Sachs Research estimates. risk weighting for most assets held on banks’ trading books would increase significantly. For the longer term. Exhibit 59: Basel III gross leverage with no netting of derivatives could quadruple leverage ratios current leverage (TCE as denominatory) vs. Deutsche Bank.April 7. Freddie and the FHA). the average gross leverage ratio for the major US banks could quadruple from the current level (see Exhibit 59). For example. is a likely further reduction in credit availability and liquidity across markets and products. Source: Company reports. consumer and corporate credit outstanding of approximately $23 trillion Outstanding ($TN)* Non-banks + securitization Bank loans Government incl GSEs Total 9. non-banks/securitization and the government/GSEs based on total US mortgage. Forcing large banks to shrink their balance sheets would disproportionately hit consumer credit availability and would also be an issue for agency MBS demand.5 22. The unintended consequence. Goldman Sachs Research. commercial real estate. WFC and MS) have an almost 60% share of total assets and total liabilities (broadly defined) and about 40% of total loans and deposits in the United States.2 6. The top 5 banks in the United States (BAC. Goldman Sachs Global Investment Research 32 . C. in our view.2 7. Source: Federal Reserve. SNL. In addition. and C&I. other consumer. agency MBS and mortgages (see Exhibits 62-63). Exhibit 62: The top 5 banks have more than 50% share of liabilities & assets top 5 banks as % of total US banking industry 60% 55% Top 5 Banks' Market Share 50% 45% 40% 35% 30% 25% Liabilities Assets Loans Deposits 42% 40% 57% 56% Top 5 Banks' Share by Loan Type 50% 40% 30% 20% 10% 0% Cards Other (Managed) Consumer Home Equity C&I US Agency Mortgages Treasuries MBS CRE 16% Exhibit 63: The top 5 banks have large market shares across most products top 5 banks as % of total US banking industry 60% 56% 54% 51% 48% 47% 45% 43% Source: Company reports.April 7. Goldman Sachs Research. Specifically. home equity. In addition they have +40% of the banking system’s holdings in US Treasuries.9 % of US Credit Market 40% 31% 29% 100% YoY % Change -12% -7% +8% -3% YoY $bn Change -607 -552 +495 -664 Private credit is being transferred to Government balance sheet Non-banks and securitization account for biggest piece of credit outstanding and credit shrinkage *: non-financial non-government credit outstanding. SNL. 2010 United States: Financial Services Exhibit 61: Where credit comes from – banks vs. Goldman Sachs Research. JPM. the top 5 banks have more than 50% market share of total credit card outstanding. most proposed bank reforms have been targeted at the large banks. We summarize these proposals in Exhibit 64. Currently plaintiffs must prove that rating agencies have knowingly and maliciously committed fraud in rating practices for financial gain. while various regulatory changes have been discussed in both houses of congress and by the regulatory bodies (SEC. 2010 United States: Financial Services On the derivatives side. calls for improved trading transparency should help exchanges and firms with electronic trading platforms to attract higher share from OTC markets. The probability of passing this piece of reform remains largely unknown. CFTC). in our view. or the mark-to-market value of the fund's net assets. however. Rating Agencies Financial reform and legal risk Asset Managers/ Discount Brokers Money Market Reform Source: Goldman Sachs Research. Other sectors likely to be impacted include Insurance. benefitting the exchanges or entities that control the clearinghouses for those products. pushing the industry one step closer to a floating NAV structure. rather than the stable $1.an unexpected move. In addition. there has been little actual change in the past year. should Basel III or similar measures be implemented. which has a much lower burden of proof and consequently would significantly increase legal risk for both MCO and MHP. This is not to say that all of the regulatory reforms are solely directed at the banking sector. Moreover. with any implementation of a transaction tax or curtailment of risk-taking. Possible excess capital would be free to support new business. funds now have to disclose "shadow" NAV. thereby driving up the risk weighting of non-cleared assets. Paul Volcker is pushing for higher capital requirements.April 7.00 NAV on a monthly basis . Under the proposed bill that legal pleading standard would be changed to recklessness or negligence. Goldman Sachs Global Investment Research 33 . As part of the SEC's MMF reform. Downside risk to volumes remains as well. the Rating Agencies and Asset Managers/Discount Brokers. boost investment returns or be returned to shareholders The current House and Senate versions of the Financial Reform bill have language that would negatively impact the rating agencies from a legal risk perspective. more trading assets are likely to be cleared. firm-appropriate risk management practices. rewarding those firms that effectively do so with increased capital flexibility. Exhibit 64: Current regulatory proposals likely to affect Financials Sector Insurance Topic Solvency II/International Financial Reporting Standard (IFRS) Description The aim of these proposals is to incentivize firms to use modern. However. While we continue to believe valuing the mortgage insurers is best done on a residual value basis. b) the benefit from low rates in fee businesses . stand out as an exception as we are starting to see some price increases there. Buy: JPM*. That said. Homebuilders and Brokers Exhibit 65: Key themes across Financials (* are stocks on the Conviction List. will benefit most. and future appetite from mortgage originators for private mortgage insurance is at best unclear. The credit cycle is moderating and net interest margins are expanding. future pricing. BX* Sell:FII* Market Structure (Neutral) Dan Harris Buy: NDAQ*. Goldman Sachs Global Investment Research 34 . We have a Neutral view on Market Structure. realization of large reserve deficiencies. are potential acquisition targets. We maintain our Attractive view on the Asset managers as we believe 2010 will be a year of both retail and institutional rerisking. with a favorable bias towards NDAQ and CME. Personal lines.mortgage and capital markets. We are Neutral on regional banks. remain comfortably above last year's levels. as our thesis that high but stable unemployment = lower delinquencies is now playing out. We believe residual value represents the best proxy for potential value as the uncertainty around GSE reform. 2010 United States: Financial Services Sector views: Attractive Large Banks. the near term implications from recent proposals to incorporate principal forgiveness into mortgage modification programs has the potential to be a significant positive for the mortgage insurers. we have a positive bias toward the small and mid-cap companies. and c) attractive valuation at ~7X normalized earnings. but at the wrong time. We believe the smid-caps have simpler business models with more stable returns. trust banks are stuck with trough earnings and trough valuations. PGR. Credit quality is rapidly improving. LNC Sell: HIG Non-Life Insurance (Neutral) Chris Neczypor Buy: XL. while the big banks also provide an avenue to play the credit theme. notably interest rates and F/X. We prefer names with leverage to credit but also the ability to grow assets outside of card. In the context of our Cautious coverage view for Life insurance. Consequently. We prefer names that are inexpensive on normalized earnings and/or exposed to corporate credit which is improving quickly. though we generally believe exchanges and more specifically those with clearing houses. We maintain our Neutral coverage view for Non-life insurance as we lack conviction that the next two years will bring evidence of a turn in the pricing cycle. and thus have unwarranted valuations compared to larger peers given strong capital positions and less risky portfolios. however. The key cycle drivers-a collapse in ROEs. ACE Sell: ALL Mortgage Insurance Chris Neczypor (Neutral) Buy: MTG Sell: PMI Asset Managers (Attractive) Marc Irizarry/ Alex Blostein Buy: BEN*. while equity options and cash equities are lower. BAC* Buy: STI*. Asset Managers. coverage view for each sector is shown) Sector Equity research Large Banks (Attractive) Regional Banks (Neutral) Trust Banks (Neutral) Credit Cards (Neutral) Richard Ramsden We are Attractive on large cap banks given a) outsized exposure to consumer credit which will improve even in a high but stable unemployment environment. and a decline in cash flow-have yet to emerge. These are good businesses that generate lots of capital. loans are shrinking at over 10% per year as well making the risk/reward more balanced. future leverage.TO Source: Goldman Sachs Research. We believe the large caps face weak organic growth prospects and that investors are still focused on tail risk and capital given regulatory and rating agency uncertainties. this shift should drive higher fee rates and further margin improvement at still palatable group valuation of 17X 2010E P/E. KEY Sell: HCBK na Analyst Key Themes Top Stock Ideas Brian Foran Brian Foran Brian Foran Buy: DFS Life Insurance (Cautious) Chris Neczypor/ Chris Giovanni Buy: UNM. Our top pick in the space remains MTG where we estimate residual value to be between $15-16. Regulatory changes remain significant catalysts and overhangs. Trust banks are in the right place. but ~40% of revenues are tied to interest rates and FX volatility and right now both are a big drag. CMA.April 7. CME Sell: X. Our framework for picking stocks in a soft market focuses on finding relative value within the space. Overall. Volume trends in certain asset classes. but on the flip side loans are shrinking and valuation is ~10X normalized EPS. driving stronger flows into long-dated asset classes and away from lowering yielding money market funds. As a result. FITB. we believe 2011 EPS estimates remain too high. We continue to highlight REITs with discounted multiples vs. We maintain a Neutral ratings for both Moody's Corporation (MCO) and McGraw-Hill (MHP) but believe double-digit earnings growth potential and highly efficient cash flow conversion are compelling drivers as the global debt markets continue to recover and grow. 2010 United States: Financial Services Exhibit 65 cont'd: Key themes across Financials (* are stocks on the Conviction List. Concerns over regulatory reform. BPCEGP. Continuing concerns include GSE-resolution (the debate is just beginning but contains potential seeds of uncertainty over the future demand for private MI). The pricing cycle remains a defining industry issue. BPO and TCO). given our 'lower for longer' interest rate view. Recent positive news include an increased focus on principal forgiveness on the part of both private and public entities and a reported jump in cure rates. We favor ETFC for its credit exposure rather than rates exposure. Focus remains on restructuring stories as liability management transactions continue and the divergence in spreads is still wider than in the US banks. 1Q earnings less of a driver of performance in our view as investors focus on the longer term reform implications. Announced M&A is up. Spreads offer better value in our view in both cash and CDS with exception of Italian and most of French banks where spreads remain tight. The investing framework within our coverage is to "Buy the Profits. LAZ. while FICC should increase up to 50% QoQ. STANLN U: ISPIM. and within our preferred triple-B p/c space we favor short-tail personal lines writers such as Farmers. suggesting advisory firms will have strong backlogs and somewhat softer results. PJC Sell: JEF Buy: ETFC Sell: na Buy: CBG* Sell: BRE* na na Buy: DHI* Sell: Louise Pitt OP: BAC U: WFC OP: LLOYDS. That said. Goldman Sachs Global Investment Research 35 . and the ability to support both embedded losses and more robust new business with current capital bases. though completed trends are lower. We are Neutral on the MI space as we see potentially positive swing factors being counter-acted by continuing uncertainties. though the leverage to rising rates is significant. Within the non-life space. One of our favorite names from a fundamental perspective has been Unum. we prefer relatively wider triple-B names to the much tighter “top of class” names. they have not disappeared. still-nascent economic and housing recoveries. Today REITs trade at implied cap rates close to 7% which we believe is acceptable given how risk free rates currently are. we view valuation as fair from an implied cap rate basis. we also maintain a Cautious outlook as we believe upside for investors remains hindered through 2010 due to unclear risks posed by potential regulatory and legal reform. CNA U: ENH Mortgage Insurance Donna Halverstadt/ Amanda Lynam OP: RDN U: PMI ** Rating Agencies and Tax Preparers fall into our Specialty Finance coverage group. Buy: EVR*. We are Attractive on the homebuilders as five factors are like to drive higher equity prices for the space: (1) a solid spring selling season (2) further stability in home prices (3) low mortgage rates (4) a return to positive non farm payrolls and (5) the share shift to large public builders away form small private developers." After 3 straight years of losses we believe that builders with 2010 profits will outperform the space. coverage view for each sector is shown) Sector Analyst Key Themes Top Stock Ideas Smid-cap Brokers/ Boutique M&A (Attractive) Discount Brokers (Neutral) Dan Harris Dan Harris REITs (Neutral) Jay Habermann/ Sloan Bohlen Rating Agencies (Cauious**) Sloan Bohlen Tax Preparers (Cautious**) Sloan Bohlen Homebuilders (Attractive) Credit research US Banks (Attractive) European Banks (Attractive) Josh Pollard Our Attractive coverage view is based on a longer term improvement in capital markets. peers as our top REIT stock picks (CBL. Our Neutral coverage view is predicated on the offset of slowing trading trends offset by the longer term opportunity for earnings improvement through interest rate changes. Both HRB and JTX have lost market share seasonto-date for various reasons but in summary we believe pricing power for traditional "brick and mortar" tax prep services has been impaired and presents potential structural challenges to the current business model. BNP Louise Pitt Insurance Donna Halverstadt/ Amanda Lynam OP: Farmers. However. capital guidelines and resulting rating agency downgrades dominate the sector once again as issuance volumes are light and spreads continue to remain firm. which has been/remains in a much more stable space compared to traditional ‘life’ names. Source: Goldman Sachs Research. though 1Q results are likely to be soft. We remain Neutral rated on REITs and while the shares seem expensive on most commonly used metrics such as P / FFO or NAV and dividend yield. While the intensity of concerns over capital and asset quality in the Life space have subsided. ACAFP. We remain Neutral rated on HRB but keep a Cautious outlook given headwinds presented by both the macro environment (high unemployment) and changing consumer preferences (shift to digital). We favor Evercore and Lazard for their opportunity to gain market share in M&A over the next year. Relatively tight spreads drive our Neutral rating on the UNM bond. Concerns over new capital guidelines also worrying investors but realization of bondholder losses in stress scenarios has been more prevalent in parts of Europe so this is less of a “new” risk.April 7. but relative to other insurance names we find it attractive in CDS. Equities trading is likely lower. CFA Richard Ramsden Brian Foran Daniel Harris.00 Target price Upside/downside period to target price 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 6 months 12 months 12 months 12 months 12 months 12 months 12 months 6 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 12 months 14% -15% 7% 15% -33% 23% 11% 8% 8% 19% 5% 42% 11% 30% -21% 12% -8% -12% 2% 19% 17% 29% -14% -13% 17% 3% 29% -56% 23% 3% -5% 18% Analyst Christopher M. SunTrust Banks.7 0.00 1. Neczypor Richard Ramsden Marc Irizarry Jonathan Habermann Marc Irizarry Sloan Bohlen Brian Foran Brian Foran Daniel Harris. Fifth Third Bancorp Hudson City Bancorp. Comerica. 2010 United States: Financial Services Exhibit 66: Price target data for select financial stocks Company name ACE Limited The Allstate Corp.20 28. Inc.gs. E*TRADE Financial Corp.00 23.00 130. please see our previously published research. Inc.P.62 112. please go to. CFA Marc Irizarry Brian Foran Brian Foran Christopher M.49 45.00 3.93 1. Source: Goldman Sachs Research. Horton.53 25.36 29.47 Target price 61. Inc. The Progressive Corporation Piper Jaffray Companies Inc.00 21. Comerica.00 47. Inc.0 17.8 3.31 15.67 6. Inc.7 53. Inc.7 11.0 25.81 28.00 16. The Hartford Financial Services Jefferies Group Inc. Ticker ACE ALL BAC BEN BRE BX CBG CMA CMA CME DFS DHI ETFC EVR FII FITB HCBK HIG JEF JPM KEY LAZ LNC MTG NDAQ PGR PJC PMI STI UNM X.00 55. Neczypor Christopher M.7 7. Inc.6 14.77 18.00 44. Neczypor Daniel Harris.3 5.2 1.00 17. Lincoln National Corp. The PMI Group. Inc.00 44.00 20. Neczypor Brian Foran Christopher Giovanni Daniel Harris. XL Capital Ltd.5 4.5 9.9 16. CFA Christopher M.00 10. Unum Group TSX Group.html.54 19.9 1.32 8. The Nasdaq Stock Market.4 4.90 40. CFA Christopher M.4 2.00 28.68 16. Neczypor Daniel Harris.79 314.83 37.00 35.com/research/hedge. Inc.TO XL Sector Insurance Insurance Banks Asset Managers REITS Asset Managers REITS Banks Banks MktStructure Banks Homebuilders MktStructure MktStructure Asset Managers Banks Banks Insurance MktStructure Banks Banks MktStructure Insurance Insurance MktStructure Insurance MktStructure Insurance Banks Insurance MktStructure Insurance Rating Buy Sell Buy* Buy* Sell* Buy* Buy* Buy Buy Buy Buy Buy* Buy Buy* Sell* Buy Sell Sell Sell Buy* Buy Buy Buy Buy Buy* Buy Buy Sell Buy* Buy Sell Buy Market cap (current) Price 18. Neczypor Daniel Harris. CFA Christopher M.P.6 13.9 6.R.52 14.00 28.00 27.2 3.2 6.50 25.00 25. Neczypor For important disclosures. J. CME Group Inc.00 26.30 14.05 14. KeyCorp Lazard Ltd. CFA Brian Foran Joshua Pollard Daniel Harris. Discover Financial Services D. MGIC Investment Corp.79 40.00 10.6 3.2 8. The Blackstone Group L. Evercore Partners Inc.45 42.00 26. CFA Daniel Harris.1 20.00 375.40 31.April 7.00 18.00 54.00 25. Inc.00 16. CFA Christopher M.6 185.5 1.4 7. BRE Properties. Goldman Sachs Global Investment Research 36 . Morgan Chase & Co.56 32.71 30.00 18.2 2.52 36.23 40. CB Richard Ellis Group Inc.00 20. For methodology and risks associated with our price targets.1 6. Neczypor Christopher M.42 19.9 9. Federated Investors.17 11.36 11.68 26.1 178. Inc.51 21.00 25.2 0.00 13.9 8. Bank of America Corporation Franklin Resources. except to the extent already made above with respect to United States laws and regulations. professionals reporting to analysts and members of their households from owning securities of any company in the analyst's area of coverage. Regulatory disclosures Disclosures based on United States laws and regulations. 1% or other ownership. This research discusses Rule 144a securities.C. Seoul Branch. directorships. compensation for certain services. Australia: This research. should read this research in conjunction with prior Goldman Sachs research on the covered companies referred to herein and should refer to the risk warnings that have been sent to them by Goldman Sachs International.html Disclosures applicable to the companies included in this compendium can be found in the latest relevant published research. and any access to it. persons reporting to analysts or members of their households from serving as an officer. directorships. types of client relationships. Disclosures Company-specific regulatory disclosures The following disclosures relate to relationships between The Goldman Sachs Group. Inc. Hong Kong: Further information on the securities of covered companies referred to in this research may be obtained on request from Goldman Sachs (Asia) L. 1% or other ownership. Sachs & Co. A copy of these risks warnings. United Kingdom: Persons who would be categorized as retail clients in the United Kingdom. types of client relationships. Disclosures Company-specific regulatory disclosures The following disclosures relate to relationships between The Goldman Sachs Group. and agreed to take responsibility for. (with its affiliates. this research in Canada if and to the extent it relates to credit securities of Canadian issuers.April 7. Inc. and a glossary of certain financial terms used in this report. Analyst as officer or director: Goldman Sachs policy prohibits its analysts.goldmansachs. which generally are available only to Qualified Institutional Buyers.C. Goldman. This research discusses Rule 144a securities. is intended only for "wholesale clients" within the meaning of the Australian Corporations Act.L. managed/co-managed public offerings in prior periods.com/research/hedge. Analysts may conduct site visits but are prohibited from accepting payment or reimbursement by the company of travel expenses for such visits. "Goldman Sachs") and companies covered by the Global Investment Research Division of Goldman Sachs and referred to in this research.. director. market making and/or specialist role. Goldman Sachs Global Investment Research 37 .html Disclosures applicable to the companies included in this compendium can be found in the latest relevant published research. but are information and analysis not having product promotion as their main purpose and do not provide appraisal within the meaning of the Russian Law on Appraisal. Korea: Further information on the subject company or companies referred to in this research may be obtained from Goldman Sachs (Asia) L. advisory board member or employee of any company in the analyst's area of coverage. Japan: See below.sipc. See company-specific disclosures above for any of the following disclosures as to companies referred to in this report: manager or co-manager in a pending transaction.goldmansachs. Regulatory disclosures Disclosures based on United States laws and regulations. is a member of SIPC(). Compendium report: please see disclosures at. Singapore: Further information on the covered companies referred to in this research may be obtained from Goldman Sachs (Singapore) Pte. 2010 United States: Financial Services Credit disclosures Compendium report: please see disclosures at. compensation for certain services. India: Further information on the subject company or companies referred to in this research may be obtained from Goldman Sachs (India) Securities Private Limited. Canada: Goldman Sachs Canada Inc. market making and/or specialist role. which generally are available only to Qualified Institutional Buyers. has approved of. "Goldman Sachs") and companies covered by the Global Investment Research Division of Goldman Sachs and referred to in this research. (with its affiliates. are available from Goldman Sachs International on request. Investors should carefully consider their own investment risk. as such term is defined in the rules of the Financial Services Authority. Russia: Research reports distributed in the Russian Federation are not advertising as defined in Russian law. Additional disclosures required under the laws and regulations of jurisdictions other than the United States The following disclosures are required under or based on the laws by the jurisdiction indicated. Market Making: Goldman Sachs usually makes a market in fixed income securities of issuers discussed in this report and usually deals as a principal in these securities.L. Investment results are the responsibility of the individual investor. (Company Number: 198602165W).com/research/hedge. See company-specific disclosures above for any of the following disclosures as to companies referred to in this report: manager or co-manager in a pending transaction. Taiwan: This material is for reference only and must not be reprinted without permission. managed/co-managed public offerings in prior periods. Analyst compensation: Analysts are paid in part based on the profitability of Goldman Sachs. Ownership and material conflicts of interest: Goldman Sachs policy prohibits its analysts. which includes investment banking revenues. including tax advice. and any access to it. (Company Number: 198602165W). Our salespeople. financial situations. and employees. and buy or sell.html or from Research Compliance. We seek to update our research as appropriate. No part of this material may be (i) copied. Australia: This research. photocopied or duplicated in any form by any means or (ii) redistributed without the prior written consent of The Goldman Sachs Group. and other derivatives. One New York Plaza. Goldman Sachs Global Investment Research 38 . Korea: Further information on the subject company or companies referred to in this research may be obtained from Goldman Sachs (Asia) L. It does not constitute a personal recommendation or take into account the particular investment objectives. Investors should carefully consider their own investment risk. officers. 2010 United States: Financial Services Ownership and material conflicts of interest: Goldman Sachs policy prohibits its analysts. and it should not be relied on as such.com/publications/risks/riskchap1. NY 10004.C. has approved of. Sachs & Co. is intended only for "wholesale clients" within the meaning of the Australian Corporations Act. which includes investment banking revenues. Transactions cost may be significant in option strategies calling for multiple purchase and sales of options such as spreads. in printed form. We have investment banking and other business relationships with a substantial percentage of the companies covered by our Global Investment Research Division. Additional disclosures required under the laws and regulations of jurisdictions other than the United States The following disclosures are required under or based on the laws by the jurisdiction indicated. General disclosures in addition to specific disclosures required by certain jurisdictions This research is for our clients only. as such term is defined in the rules of the Financial Services Authority. Our research is disseminated primarily electronically.April 7. in some cases.L. directors. seek professional advice. The price and value of the investments referred to in this research and the income from them may fluctuate. or income derived from. Inc. New York. advisory board member or employee of any company in the analyst's area of coverage. traders. Analysts may conduct site visits but are prohibited from accepting payment or reimbursement by the company of travel expenses for such visits. Market Making: Goldman Sachs usually makes a market in fixed income securities of issuers discussed in this report and usually deals as a principal in these securities. Other than disclosures relating to Goldman Sachs. Past performance is not a guide to future performance. A copy of these risks warnings. investment management. including those involving futures. give rise to substantial risk and are not suitable for all investors. and agreed to take responsibility for. United Kingdom: Persons who would be categorized as retail clients in the United Kingdom. and a glossary of certain financial terms used in this report.org). but various regulations may prevent us from doing so. Disclosure information is also available at. this research is based on current public information that we consider reliable. Inc. Investment results are the responsibility of the individual investor. Seoul Branch. Certain transactions. Goldman Sachs conducts a global full-service. Our asset management area. director. certain investments. if appropriate. and a loss of original capital may occur. Taiwan: This material is for reference only and must not be reprinted without permission. excluding research analysts.gs. professionals reporting to analysts and members of their households from owning securities of any company in the analyst's area of coverage. are available from Goldman Sachs International on request. but are information and analysis not having product promotion as their main purpose and do not provide appraisal within the meaning of the Russian Law on Appraisal. Analyst compensation: Analysts are paid in part based on the profitability of Goldman Sachs.L.jsp. Clients should consider whether any advice or recommendation in this research is suitable for their particular circumstances and. This research is not an offer to sell or the solicitation of an offer to buy any security in any jurisdiction where such an offer or solicitation would be illegal. Other than certain industry reports published on a periodic basis. Russia: Research reports distributed in the Russian Federation are not advertising as defined in Russian law.com/research/hedge. this research in Canada if and to the extent it relates to credit securities of Canadian issuers. integrated investment banking. will from time to time have long or short positions in. Singapore: Further information on the covered companies referred to in this research may be obtained from Goldman Sachs (Singapore) Pte. Hong Kong: Further information on the securities of covered companies referred to in this research may be obtained on request from Goldman Sachs (Asia) L. act as principal in. India: Further information on the subject company or companies referred to in this research may be obtained from Goldman Sachs (India) Securities Private Limited. except to the extent already made above with respect to United States laws and regulations. is a member of SIPC(. but we do not represent it is accurate or complete. Canada: Goldman Sachs Canada Inc.. Investors should review current options disclosure documents which are available from Goldman Sachs sales representatives or at. the securities or derivatives (including options and warrants) thereof of covered companies referred to in this research. Copyright 2010 The Goldman Sachs Group. should read this research in conjunction with prior Goldman Sachs research on the covered companies referred to herein and should refer to the risk warnings that have been sent to them by Goldman Sachs International.theocc.C. Electronic research is simultaneously available to all clients. and. Goldman. Japan: See below. or needs of individual clients. persons reporting to analysts or members of their households from serving as an officer. Analyst as officer or director: Goldman Sachs policy prohibits its analysts. the large majority of reports are published at irregular intervals as appropriate in the analyst's judgment. We and our affiliates. our proprietary trading desks and investing businesses may make investment decisions that are inconsistent with the recommendations or views expressed in this research. options. and other professionals may provide oral or written market commentary or trading strategies to our clients and our proprietary trading desks that reflect opinions that are contrary to the opinions expressed in this research. Fluctuations in exchange rates could have adverse effects on the value or price of. future returns are not guaranteed. and brokerage business.sipc. Supporting documentation will be supplied upon request. gs. It can be used for in-depth analysis of a single company. ROACE. EPS.g. hereby certify that all of the views expressed in this report accurately reflect our personal views about the subject company or companies and its or their securities. Disclosures Coverage group(s) of stocks by primary analyst(s) Compendium report: please see disclosures at. e.g. Growth. EV/DACF.April 7. Price/Book. The four key attributes depicted are: growth. Disclosures applicable to the companies included in this compendium can be found in the latest relevant published research.com/research/hedge. Such assignments equate to Buy. Quantum Quantum is Goldman Sachs' proprietary database providing access to detailed financial statement histories.g. returns. Brian Foran and Louise Pitt. EBITDA.763 equity securities. stocks not so assigned are deemed Neutral. Revenue.com/research/hedge. We also certify that no part of our compensation was. CROCI.html. e. Disclosures applicable to the companies included in this compendium can be found in the latest relevant published research.html. The precise calculation of each metric may vary depending on the fiscal year. Coverage groups and views and related definitions' below. See 'Ratings. or to make comparisons between companies in different sectors and markets. directly or indirectly. e. Distribution of ratings/investment banking relationships Goldman Sachs Investment Research global coverage universe Rating Distribution Investment Banking Relationships Buy Hold Sell Buy Hold Sell Global 31% 53% 16% 53% 47% 40% As of January 1. Richard Ramsden. Goldman Sachs assigns stocks as Buys and Sells on various regional Investment Lists. Volatility is measured as trailing twelve-month volatility adjusted for dividends. related to the specific recommendations or views expressed in this report. industry and region but the standard approach is as follows: Growth is a composite of next year's estimate over current year's estimate. is or will be.gs. 2010 United States: Financial Services Reg AC We. Jessica Binder. P/E. multiple and volatility. forecasts and ratios. EV/EBITDA. 2010. EV/FCF. Company-specific regulatory disclosures Compendium report: please see disclosures at. Return is a year one prospective aggregate of various return on capital measures. Goldman Sachs Global Investment Research had investment ratings on 2. Hold and Sell for the purposes of the above disclosure required by NASD/NYSE rules. and ROE. Multiple is a composite of one-year forward valuation ratios. Goldman Sachs Global Investment Research 39 . CFA. dividend yield. returns and multiple are indexed based on composites of several methodologies to determine the stocks percentile ranking within the region's coverage universe. Investment Profile The Goldman Sachs Investment Profile provides investment context for a security by comparing key attributes of that security to its peer group and market. and therefore may not be subject to NASD Rule 2711/NYSE Rules 472 restrictions on communications with subject company. and any access to it. Ratings. with changes of ratings and price targets in prior periods. Attractive (A). Distribution of ratings: See the distribution of ratings disclosure above. Being assigned a Buy or Sell on an Investment List is determined by a stock's return potential relative to its coverage group as described below. The Goldman Sachs Global Investment Research 40 . Investment results are the responsibility of the individual investor. the distribution of Buys and Sells in any particular coverage group may vary as determined by the regional Investment Review Committee. The analyst assigns one of the following coverage views which represents the analyst's investment outlook on the coverage group relative to the group's historical fundamentals and/or valuation. or. directorships. types of client relationships. if electronic format or if with respect to multiple companies which are the subject of this report. professionals reporting to analysts and members of their households from owning securities of any company in the analyst's area of coverage. has approved of. India: Further information on the subject company or companies referred to in this research may be obtained from Goldman Sachs (India) Securities Private Limited. Japan: Goldman Sachs Japan Co. analysts may not be associated persons of Goldman Sachs & Co. is intended only for "wholesale clients" within the meaning of the Australian Corporations Act. which includes investment banking revenues.S. Seoul Branch. (Company Number: 198602165W).html. price target and associated time horizon are stated in each report adding or reiterating an Investment List membership. Japan: See below.L. advisory board member or employee of any company in the analyst's area of coverage. except to the extent already made above pursuant to United States laws and regulations.com/research/hedge. Hong Kong: Further information on the securities of covered companies referred to in this research may be obtained on request from Goldman Sachs (Asia) L.com/client_services/global_investment_research/europeanpolicy. Price chart: See the price chart.com/research/hedge. See company-specific disclosures as to any applicable disclosures required by Japanese stock exchanges. Additional disclosures required under the laws and regulations of jurisdictions other than the United States The following disclosures are those required by the jurisdiction indicated. registered with the Kanto Financial Bureau (Registration No. and a glossary of certain financial terms used in this report. should read this research in conjunction with prior Goldman Sachs research on the covered companies referred to herein and should refer to the risk warnings that have been sent to them by Goldman Sachs International.C. Australia: This research. Analysts may conduct site visits but are prohibited from accepting payment or reimbursement by the company of travel expenses for such visits. European Union: Disclosure information in relation to Article 4 (1) (d) and Article 6 (2) of the European Commission Directive 2003/126/EC is available at. as such term is defined in the rules of the Financial Services Authority. Taiwan: This material is for reference only and must not be reprinted without permission. 2010 United States: Financial Services Price target and rating history chart(s) Compendium report: please see disclosures at. Neutral (N). Regional Conviction Buy and Sell lists represent investment recommendations focused on either the size of the potential return or the likelihood of the realization of the return. Any stock not assigned as a Buy or a Sell on an Investment List is deemed Neutral. and is a member of Japan Securities Dealers Association (JSDA) and Financial Futures Association of Japan (FFAJ). this research in Canada if and to the extent it relates to equity securities of Canadian issuers. persons reporting to analysts or members of their households from serving as an officer. for equity securities. Goldman Sachs usually makes a market in fixed income securities of issuers discussed in this report and usually deals as a principal in these securities. Sales and purchase of equities are subject to commission pre-determined with clients plus consumption tax. the Japanese Securities Dealers Association or the Japanese Securities Finance Company. Russia: Research reports distributed in the Russian Federation are not advertising as defined in the Russian legislation. Canada: Goldman Sachs & Co. Each regional Investment Review Committee manages various regional Investment Lists to a global guideline of 25%-35% of stocks as Buy and 10%-15% of stocks as Sell. coverage groups and views and related definitions Buy (B). The return potential. Disclosures applicable to the companies included in this compendium can be found in the latest relevant published research. managed/co-managed public offerings in prior periods. compensation for certain services. are available from Goldman Sachs International on request.. is a Financial Instrument Dealer under the Financial Instrument and Exchange Law. Investors should carefully consider their own investment risk.L. A copy of these risks warnings. Return potential represents the price differential between the current share price and the price target expected during the time horizon associated with the price target.April 7. Singapore: Further information on the covered companies referred to in this research may be obtained from Goldman Sachs (Singapore) Pte. Sell (S) -Analysts recommend stocks as Buys or Sells for inclusion on various regional Investment Lists. Analyst as officer or director: Goldman Sachs policy prohibits its analysts. United Kingdom: Persons who would be categorized as retail clients in the United Kingdom. however. Non-U. director. Price targets are required for all covered stocks. public appearances and trading securities held by the analysts. Analyst compensation: Analysts are paid in part based on the profitability of Goldman Sachs. Korea: Further information on the subject company or companies referred to in this research may be obtained from Goldman Sachs (Asia) L.C.html. on the Goldman Sachs website at. Regulatory disclosures Disclosures required by United States laws and regulations See company-specific regulatory disclosures above for any of the following disclosures required as to companies referred to in this report: manager or co-manager in a pending transaction.gs.gs.gs. The following are additional required disclosures: Ownership and material conflicts of interest: Goldman Sachs policy prohibits its analysts.com/research/hedge. and agreed to take responsibility for. above.html. Coverage groups and views: A list of all stocks in each coverage group is available by primary analyst. but are information and analysis not having product promotion as their main purpose and do not provide appraisal within the meaning of the Russian legislation on appraisal activity. Ltd. market making and/or specialist role.. Analysts: Non-U. 1% or other ownership. stock and coverage group at. 69).html which states the European Policy for Managing Conflicts of Interest in Connection with Investment Research.gs. please contact your sales representative or go to. This research is not an offer to sell or the solicitation of an offer to buy any security in any jurisdiction where such an offer or solicitation would be illegal. and other professionals may provide oral or written market commentary or trading strategies to our clients and our proprietary trading desks that reflect opinions that are contrary to the opinions expressed in this research. For all research available on a particular stock. investment management. Coverage Suspended (CS). and buy or sell. officers. and pursuant to certain contractual arrangements. (Company Number: 198602165W). The investment outlook over the following 12 months is unfavorable relative to the coverage group's historical fundamentals and/or valuation.C. in New Zealand by Goldman Sachs JBWere (NZ) Limited on behalf of Goldman Sachs. Not Available or Not Applicable (NA). our proprietary trading desks and investing businesses may make investment decisions that are inconsistent with the recommendations or views expressed in this research. One New York Plaza. directors. The information is not meaningful and is therefore excluded. It does not constitute a personal recommendation or take into account the particular investment objectives.. We and our affiliates. Neutral (N). in Singapore by Goldman Sachs (Singapore) Pte. traders.L.gs. Supporting documentation will be supplied upon request. General disclosures This research is for our clients only. integrated investment banking. and other derivatives. Past performance is not a guide to future performance.com/research/hedge. photocopied or duplicated in any form by any means or (ii) redistributed without the prior written consent of The Goldman Sachs Group. The investment rating and target price have been removed pursuant to Goldman Sachs policy when Goldman Sachs is acting in an advisory capacity in a merger or strategic transaction involving this company and in certain other circumstances. or needs of individual clients. authorized and regulated by the Financial Services Authority. Other than certain industry reports published on a periodic basis. The previous investment rating and price target. will from time to time have long or short positions in. Goldman Sachs conducts a global full-service. regulated by the Bundesanstalt für Finanzdienstleistungsaufsicht. if any. Goldman Sachs Global Investment Research 41 . seek professional advice. Goldman Sachs Research has suspended the investment rating and price target for this stock. We have investment banking and other business relationships with a substantial percentage of the companies covered by our Global Investment Research Division..theocc.L. act as principal in. or income derived from.html or from Research Compliance. if any. Goldman Sachs & Co. Not Meaningful (NM). give rise to substantial risk and are not suitable for all investors. and a loss of original capital may occur.C. options. New York. including tax advice. oHG. in the Republic of Korea by Goldman Sachs (Asia) L. has approved this research in connection with its distribution in the European Union and United Kingdom. in Japan by Goldman Sachs Japan Co. and employees. if appropriate..sipc. and research on macroeconomics. Seoul Branch. and in the United States of America by Goldman Sachs & Co.com/publications/risks/riskchap1. the securities or derivatives. because there is not a sufficient fundamental basis for determining an investment rating or target. This research is disseminated in Australia by Goldman Sachs JBWere Pty Ltd (ABN 21 006 797 897) on behalf of Goldman Sachs. Goldman Sachs does not cover this company. but we do not represent it is accurate or complete. nor is Goldman Sachs responsible for the redistribution of our research by third party aggregators. certain investments. the large majority of reports are published at irregular intervals as appropriate in the analyst's judgment. Copyright 2010 The Goldman Sachs Group. Inc. Ltd. Goldman Sachs & Co. Not all research content is redistributed to our clients or available to third-party aggregators.. We seek to update our research as appropriate. Not Rated (NR). this research is based on current public information that we consider reliable. excluding equity and credit analysts. regarding Canadian equities and by Goldman Sachs & Co. European Union: Goldman Sachs International. commodities and portfolio strategy. 2010 United States: Financial Services investment outlook over the following 12 months is favorable relative to the coverage group's historical fundamentals and/or valuation.gs. All research reports are disseminated and available to all clients simultaneously through electronic publication to our internal client websites. the United States broker dealer. Other than disclosures relating to Goldman Sachs. future returns are not guaranteed. financial situations. Transactions cost may be significant in option strategies calling for multiple purchase and sales of options such as spreads.. Rating Suspended (RS). Analysts based in Goldman Sachs offices around the world produce equity research on industries and companies. referred to in this research.. but various regulations may prevent us from doing so. Global product. Goldman Sachs International has approved this research in connection with its distribution in the United Kingdom and European Union. (all other research). may also distribute research in Germany. Fluctuations in exchange rates could have adverse effects on the value or price of. are no longer in effect for this stock and should not be relied upon. Cautious (C). including those involving futures.com. in Hong Kong by Goldman Sachs (Asia) L. on a global basis. Inc. in Canada by Goldman Sachs & Co. currencies. Disclosure information is also available at. and brokerage business. and it should not be relied on as such. Investors should review current options disclosure documents which are available from Goldman Sachs sales representatives or at. NY 10004. The price and value of investments referred to in this research and the income from them may fluctuate. Goldman Sachs has suspended coverage of this company. The investment outlook over the following 12 months is neutral relative to the coverage group's historical fundamentals and/or valuation. The information is not available for display or is not applicable. Our asset management area. distributing entities The Global Investment Research Division of Goldman Sachs produces and distributes research products for clients of Goldman Sachs.April 7. Certain transactions. Our salespeople. in India by Goldman Sachs (India) Securities Private Ltd.jsp. is a member of SIPC (). Clients should consider whether any advice or recommendation in this research is suitable for their particular circumstances and. No part of this material may be (i) copied. Not Covered (NC). in Russia by OOO Goldman Sachs. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/36502126/GS-Financials-Positioning-for-the-Next-Leg-of-the-Rally
CC-MAIN-2016-50
refinedweb
21,266
57.37
Help fixing XML/InDesign layoutcitizensis Jun 27, 2011 1:39 PM Hi everyone, I've been slamming my head into the wall that is InDesign/XML for far too long. Can anyone here offer me some guidance? I'm trying to generate a catalog of books with XML and InDesign. Here is a screenshot of the XML structure: Note: The XML has been simplified for [hopefully] enhanced readability. I assure you the actual nodes are properly closed. - The catalog has many subjects - The subject contains one page for featured books (editions). - The featured page can contain one, two, or three featured editions. Each case has its own master page. - The subject also contains multiple non-featured editions (all_editions) There is one master page for all editions. Here is a screenshot of the three_feature_edition master page story editor: Note: The above would be repeated to accommodate all three featured editions. There are two ways to import the XML: Append and Merge. Each presents its own set of problems. - Merge( with option Delete elements, frames, and content that do not match imported XML): - Tagged content in master pages that aren't used in the first subject is erased. - Ex: If the first subject has one_featured_edition then two_featured_edition and three_featured_edition master pages are emptied. - Merge( without option Delete elements, frames, and content that do not match imported XML): - All master pages remain intact, except each subject contains every edition in the XML file (not just editions for that subject) - Append - All returns and frame/column breaks specified in master page are ignored, so content from each node is ButtedRightAgainstOneAnother. At this point I feel like I'm closest to achieving what I need with the append method. What do I need to do to keep the breaks/returns intact? You can see in the story editor screenshot where I have placed the break characters Any help would be much appreciated. Thanks. 1. Re: Help fixing XML/InDesign layoutJohn Hawkinson Jun 27, 2011 2:48 PM (in response to citizensis) Err. I would recommend you include XML in your post with >> Syntax Highlighting so that it's possible for us to cut-and-paste your XML and try things. Anyhow, XML in InDesign, welcome to pain! Are you mapping your tags to paragraph styles? Why not use them and set Keep Options > Start Paragraph > In Next Frame, etc. I haven't done much with master pages and XML tagging, I wasn't really aware that was a workflow that was effective. So perhaps I can't help you much more. 2. Re: Help fixing XML/InDesign layoutcitizensis Jun 27, 2011 3:45 PM (in response to John Hawkinson) Thanks for the response John. Are you mapping your tags to paragraph styles? Why not use them and set Keep Options > Start Paragraph > In Next Frame, etc. The tags are mostly attached to character styles. Perhaps converting more of them to paragraph styles will solve some things. I'll try this and report back. I haven't done much with master pages and XML tagging, I wasn't really aware that was a workflow that was effective The master pages are definitely complicating things. The layout requires three different styles for featured editions, so I believe they're necessary. 3. Re: Help fixing XML/InDesign layoutFred Goldman Jun 27, 2011 4:48 PM (in response to citizensis) You have got to get this book: qid=1309218305&sr=8-1 It goes through step by step exactly what you are trying to accomplish. It has been a while since I read it, but I remeber that you are not to tag anything on the master pages and you are suppod to use merge. Autoflowing was also part of the correct workflow. 4. Re: Help fixing XML/InDesign layoutJohn Hawkinson Jun 27, 2011 8:10 PM (in response to citizensis) The master pages are definitely complicating things. The layout requires three different styles for featured editions, so I believe they're necessary. Sorry, I don't understand. Can you supply a screenshot and more detail, please? 5. Re: Help fixing XML/InDesign layoutlocavore Jul 16, 2011 1:47 PM (in response to citizensis) I think you should explore what the InDesign namespace can do for you, by hard-coding the desired paragraph and character styles into the XML You would need an XSLT file to apply during import. See an old presentation of mine for a college catalog example, / and Cari Jansens'
https://forums.adobe.com/message/3765828
CC-MAIN-2018-39
refinedweb
744
62.88
xml parsing in ruby Discussion in 'Ruby' started by Saleem Vighio, Dec 15, 2010. - Clarification on XML parsing & namespaces (xml.dom.minidom)Greg Wogan-Browne, Jan 25, 2005, in forum: Python - Replies: - 1 - Views: - 1,232 - Uche Ogbuji - Jan 28, 2005 Sequential XML parsing with xml.sax, Aug 23, 2005, in forum: Python - Replies: - 2 - Views: - 656 What libraries should I use for MIME parsing, XML parsing, and MySQL ?John Levine, Feb 2, 2012, in forum: Ruby - Replies: - 0 - Views: - 911 - John Levine - Feb 2, 2012 Different results parsing a XML file with XML::Simple (XML::Sax vs. XML::Parser)Erik Wasser, Mar 2, 2006, in forum: Perl Misc - Replies: - 5 - Views: - 804 - Peter J. Holzer - Mar 5, 2006
http://www.thecodingforums.com/threads/xml-parsing-in-ruby.865588/
CC-MAIN-2016-18
refinedweb
119
72.56
Opened 8 years ago Closed 8 years ago #3947 closed (duplicate) Localized labels don't work with newforms Description I have this form. from django import newforms as forms from django.utils.translation import gettext_lazy as _ LANGUAGES = ( ('cs', _('Czech')), ('en', _('English')), ) class ChangeProfileForm(forms.Form): language = forms.ChoiceField(label=_('Language'), choices=LANGUAGES) email = forms.EmailField(label=_('Email')) phone = forms.CharField(label=_('Phone')) If localized labels are in UTF-8 and I don't use attached patch, forms don't work (UnicodeDecodeError). Attachments (1) Change History (2) Changed 8 years ago by anonymous comment:1 Changed 8 years ago by mtredinnick - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Resolution set to duplicate - Status changed from new to closed Note: See TracTickets for help on using tickets. This is the same underlying problem as #3924. I'm working on it now.
https://code.djangoproject.com/ticket/3947
CC-MAIN-2015-14
refinedweb
148
50.53
Hi all, The other day I was trying to add a simple autoupdate functionality to a little tool I developed, and I needed to check the version of current assembly against the udpated one. If current assembly was older than the updated one, I needed to substitute the older one with the newer. Plain and simple. This was my first attempt to achieve this (code has been simplified): using System.Reflection; using System.IO; … // Get current and updated assemblies Assembly currentAssembly = Assembly.LoadFile(currentAssemblyPath); Assembly updatedAssembly = Assembly.LoadFile(updatedAssemblyPath); AssemblyName currentAssemblyName = currentAssembly.GetName(); AssemblyName updatedAssemblyName = updatedAssembly.GetName(); // Compare both versions if (updatedAssemblyName.Version.CompareTo(currentAssemblyName.Version) <= 0) { // There’s nothing to update return; } // Update older version File.Copy(updatedAssemblyPath, currentAssemblyPath, true); But File.Copy failes because current assembly is in use. Why? Because of Assembly.LoadFile. When we load an assembly no other process (including ours) can change or delete the file because we are using it. The issue is that we can’t unload an assembly that we loaded in an AppDomain unless the AppDomain itself gets unloaded. Here I’m using the default AppDomain which will only get unloaded when the application exits. So then I tried creating a new AppDomain, load the assemblies in there and unload the AppDomain afterwards before changing the file. It didn’t help either. So… How can we get the assembly version without loading the assembly? The solution is easy: using System.Reflection; using System.IO; … //; } // Update older version File.Copy(updatedAssemblyPath, currentAssemblyPath, true); AssemblyName.GetAssemblyName won’t load the assembly, so we can change the file afterwards. I hope this helps. Kind regards, Alex (Alejandro Campos Magencio) PS: In case you are interested, I will post my simple autoupdate code one of these days, once I have time to change a couple of things on it. Great solution! I’ve been searching for that a long time ago! Thank you! Great tip. Just what I was looking for. Thanks! Faced the same issues, and considered going down the Appdomain path , till I saw your post. Good one. It’s interesting reading this article and comparing .NET with Java. I had to do this in Java some months ago. There were several problems: You can write the on-disk JAR file on the fly without problems. You only need to reload the JAR file afterwards to get the updated one to memory. So, in this little comparison, one up, one down. Great article! Dang, that was easy! First I have used Assembly.ReflectionOnlyLoad with CustomAttributeData class and that actually worked, but I had to apply the same fix to our older solution, which is compiled with .NET 1.1 and those methods were not introduced yet. Luckily, quick search brought me to your solution, which is much easier and cleaner. Thanks! Doesn’t seem to work on Compact Framework 2.0 though. AssemblyName.GetAssemblyName is not supported. 🙁 HI, Thanks for the post. I’d like to ask still for your support. I have to identify the assembly’s version in order to identify where the file (a dll in this case) will be installed. I have the following call: PS H:tempinfraGlobalAssemblyCache> $name= [System.Reflection.AssemblyName]::GetAssemblyName("InfraManagedTracing.dll") and the output I get is the following: <i> Exception calling "GetAssemblyName" with "1" argument(s): "Could not load file or assembly ‘H:Documents and SettingsorlandoInfraManagedTracing.dll’ or one of its dependencies. The system cannot f ind the file specified." At line:1 char:57 + CategoryInfo : NotSpecified: (:) [], MethodInvocationException + FullyQualifiedErrorId : DotNetMethodException</i> apparently the function still requires me to load the assembly, something I defnitely don’t want. Any cue what am I missing? Thanks in advance, Thanks for this code. I’ve been trying to find a way to do this without actually loading the file. Thx for teh code! Nice sollution. "Duh", I said to myself after reading. Thanks! THANK YOU ** Bit of a problem m8, I have this third party assembly and I used this method to interrogate it's version. I get 1.0.5023.14735 but wen I open the same file in the the windows explorer and click on Properties->Details I get 2.10.8.2935. Can anyone say why there's a difference?
https://blogs.msdn.microsoft.com/alejacma/2008/09/05/how-to-get-assembly-version-without-loading-it/
CC-MAIN-2016-44
refinedweb
707
52.05
The). The FPR is the ratio of negative instances that are incorrectly classified as positive. It is equal to 1 – the true negative rate (TNR), which is the ratio of negative cases that are correctly classified as negative. The TNR is also called specificity. Plotting The ROC Curve I will explain this by using the classification that I did in my article on Binary Classification. I will continue this from where I left in the Binary Classification. To plot the ROC curve, you first use the roc_curve() function to compute the TPR and FPR for various threshold values: from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_train_5, y_scores) Then you can plot the FPR against the TPR using Matplotlib: Code language: Python (python)Code language: Python (python) def plot_roc_curve(fpr, tpr, label=None): plt.plot(fpr, tpr, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'k--') # dashed diagonal plt.axis([0, 1, 0, 1]) # Not shown in the book plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16) # Not shown plt.ylabel('True Positive Rate (Recall)', fontsize=16) # Not shown plt.grid(True) # Not shown plt.figure(figsize=(8, 6)) # Not shown plot_roc_curve(fpr, tpr) plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], "r:") # Not shown plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], "r:") # Not shown plt.plot([4.837e-3], [0.4368], "ro") # Not shown save_fig("roc_curve_plot") # Not shown plt.show() So there is a trade-off: the higher the recall (TPR), the more false positives (FPR) the classifier produces. The dotted line represents the ROC curve of a purely random classifier; a good classifier stays as far away from that line as possible (toward the top-left corner). In the output above, The ROC plots, the false positive rate against the true positive rate for all possible thresholds; the red circle highlights the chosen ratio (at 43.68% recall). One way to compare classifiers is to measure the area under the curve (AUC). A perfect classifier will have a ROC AUC equal to 1, whereas a purely random classifier will have a ROC AUC equal to 0.5. Scikit-Learn provides a function to compute the ROC AUC: Code language: Python (python)Code language: Python (python) from sklearn.metrics import roc_auc_score roc_auc_score(y_train_5, y_scores) 0.9604938554008616 Since the ROC curve is so similar to the precision/recall (PR) curve, you may wonder how to decide which one to use. As a rule of thumb, you should prefer the PR curve whenever the positive class is rare or when you care more about the false positives than the false negatives. Otherwise, use the ROC curve. Using The ROC Curve in Classification Let’s now train a RandomForestClassifier and compare its ROC curve and ROC AUC score to those of the SGDClassifier. First, you need to get scores for each instance in the training set: Code language: Python (python)Code language: Python (python) from sklearn.ensemble import RandomForestClassifier forest_clf = RandomForestClassifier(n_estimators=100, random_state=42) y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3, method="predict_proba") The roc_curve() function expects labels and scores, but instead of scores, you can give it class probabilities. Let’s use the positive class’s probability as the score: Code language: Python (python)Code language: Python (python) y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest) Now you are ready to plot the ROC. It is useful to plot the first ROC curve as well as to see how they compare: Code language: Python (python)Code language: Python (python) plt.figure(figsize=(8, 6)) plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD") plot_roc_curve(fpr_forest, tpr_forest, "Random Forest") plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], "r:") plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], "r:") plt.plot([4.837e-3], [0.4368], "ro") plt.plot([4.837e-3, 4.837e-3], [0., 0.9487], "r:") plt.plot([4.837e-3], [0.9487], "ro") plt.grid(True) plt.legend(loc="lower right", fontsize=16) save_fig("roc_curve_comparison_plot") plt.show() As you can see in the output above, the RandomForestClassifier’s curve looks much better than the SGDClassifier’s: it comes much closer to the top-left corner. As a result, its ROC AUC score is also significantly better: Code language: Python (python)Code language: Python (python) roc_auc_score(y_train_5, y_scores_forest) 0.9983436731328145 I hope you liked this article. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium, to read more amazing articles.
https://thecleverprogrammer.com/2020/07/26/roc-curve-in-machine-learning/
CC-MAIN-2022-33
refinedweb
763
57.06
>>。 解题思路:因为直接计算比较困难,又因为只能搬整数块石头,所以使用二分法逼近答案。具体注意事项见代码~ #include <iostream> #include <cstdio> #include <algorithm> #include <cstring> using namespace std; int a[50005]; int vis[50005]; int main() { //freopen("test.txt","r",stdin); int l,m,n,i; int up,low,mid; int ans; scanf("%d%d%d",&l,&n,&m); a[0] = 0; a[n+1] = l; for(i=1;i<=n;i++) { scanf("%d",&a[i]); } sort(a,a+n+1); low = 0;up = l + 1;mid = (low + up)/2;//特别注意这里要把上界声明为l+1,因为l本身可能是答案,例如数据1 0 0 while(up-low>1) { //memset(vis,0,sizeof(vis)); //ans = mid; int last = a[0]; int k = 0; int flag = 0; for(i=1;i<=n;i++) { if(k>m) { flag = 1; break; } else if(a[i]-last<mid) { k++; } else { last = a[i]; } } if(k>m) flag = 1; if(a[n+1]-last<mid && k==m) flag = 1; if(flag) up = mid; else low = mid; mid = (low+up)/2; } printf("%d\n",low); //cout << "Hello world!" << endl; return 0; } - 顶 - 0 - 踩 - 0 - • poj 3258 River Hopscotch(二分法,最小值最大化) - • 白鹭引擎在WebAssembly中的实践 - • POJ 3258 River Hopscotch(二分法) - • Apache Weex:移动研发的进阶之路--董岩 - • POJ3258 River Hopscotch(最大化最小值/二分法) - • 微信小程序开发实战-快递查询 - • POJ 3258-River Hopscotch(二分法-最大化最短距离) - • Python内功修炼
http://blog.csdn.net/wchhlbt/article/details/51540661
CC-MAIN-2017-47
refinedweb
198
51.14
This is an interesting chart. It is a 2 year chart, so keep that in mind. Why 2 years? I think it shows an interesting progression of things weakening and falling behind. Remember that these charts start all the lines as equal at the ‘cherry picked’ first cell. Then show change relative to each other in percentages from that starting point. What is that lowest, weakest line? The gold colored one? It is gold. The gray one that it is ‘dancing’ with is FXY the Japanese Yen. Remember that these ETFs (Exchange Traded Funds) are NOT the ‘stuff’ themselves, but often hold ‘wasting assets’ like futures and options contracts. For that reason, most of them will show long term decay even if the basic asset is stable. That is why they are trade vehicles only and NOT investment… So, OK, we have the Yen and Gold running down roughly together. That does somewhat reflect reality in that Japan is weak and Japan is really pushing the Keynesian Money Stimulus (with no gain…) and Gold has dropped in price. Now these two are both inflation hedges, so this is saying the Big Money is not so worried about inflation hedging and more worried about getting some return (and 0% or near-zero US bank deposits don’t cut it.) Next up is that orange Copper ETF JJC. It, too, is drifting down. After the ‘any inflation hedge will do’ died, the industrially demanded copper fell out of bed. Not a big global building boom going on and not a lot of demand for electrical appliances. Hmmmm… The egg yolk yellow line just above them is a net-about-zero but with big (tradable) volatility. Clearly, though, the Emerging Market hasn’t emerged much. (That EEM includes some Brazil and China… the two prior golden child markets). Next up is that green FXF line. The Swiss Franc. My “go to” currency for store of value. Net zero like EEM, but far less volatile. Relative to the $US, it has been stable. Even as an ETF with some long term ‘slippage’. Nice to know, but not very valuable. (Unless you were in Yen or € maybe). We’ve also all heard about fracking and enhanced recovery putting some pressure on oil. It has also been volatile, and had a nice run up to about last June at +20%, but has been crashing in the last 4 months. That plunge at the end is particularly strong. No Joy in Saudi or Iran tonight… As productivity dropped off, and copper demand reflected it, eventually the demand for fuels followed so those prices drooped. That’s how I’d read this. (Might be nice to get some global consumption numbers, but they lag on release date…) Which leaves us with the top three lines. Here the main ticker symbol is RUT – the Russell 2000. These are the smaller stocks. Tend to lead, up or down. Overbid in strong markets, harder to dump in weak ones. The Blue line is my standard of value, SPY, the S&P 500. Very hard to beat it long term. The red line is QQQQ the Nasdaq 100 (that is dominated by just a few tech companies such as Apple AAPL). Tends to ‘hang high’ longer in runs up as folks “buy the story” longer. An interesting thing about this set is that back in about June to August you can see that RUT had a “Failure to Advance” while QQQQ recovered from the “dip” and kept on going. SPY caught up to RUT, but then merged with it. Both sideways… Now the way I usually read charts, I’d look at those three thin SMA Simple Moving Average lines on RUT and call it a “topping weave”. During up times, price bounces off the top side (see the first 1/2 of the chart). At the top, all 4 merge and weave. Rather like now. Then when the bear market starts, prices drop below the SMA stack and bounce down from the underside. To me, that last peak in Sept looks a lot like the start of that. We have 4 peaks in a row, each a little lower. SPY in comparison is only on the first “failure to advance” double top of 2 peaks. QQQQ has not yet joined the dead zone party, but ought to shortly. Now there is one really big HOWEVER attached to this. Markets do “odd things” at major elections and “odd things” at the end of a calendar year. The first as “who wins” often determines who will win the Lobby Lottery in D.C. and the second due to “tax issues” and holiday risk avoidance. So is this a Holiday Anomaly? An Election Anomaly? A Market top, to be followed by “Dive Dive Dive!!”? That is where chart reading turns into crystal ball gazing… IMHO, it will depend in the short term on how folks evaluate the mixed Republican Congress with Obama Obstinance will shake out. Long term IMHO it depends on how the Super Keynesian Global Meth Binge settles out. It looks, to me, like the US stock markets have been on a stimulus driven run up. The Fed has announced the imminent end of asset buying. In theory, that ought to roll over the top of the market. While not yet confirmed (prices have to return to and bounce off the SMA stack from below to confirm it) this configuration looks to me like “last call” to exit stocks for a while. As soon as the first Fed rate hike hits, it’s pretty much guaranteed. What would I do? Shift to very much faster charts and swing trade those (clear) cyclical swings of prices. One to 2 month runs. Since metals and ‘other currencies’ (other than the Swiss Franc) are pretty much a No Joy too, just ‘going to cash’ looks as good as anything. So “Long, Short, or Cash” in fast swing trades. As soon as a clear bottom is in commodities, they would be a good ‘dead cat bounce’ and then ‘recovery’ trade. For now, they are not very interesting. Worth watching grains / meats commodities, though, if the winter is truely brutal. Plenty of time for that, though, since spring is still a ways off (and for now JJG too is dropping with GLD). In the context of potential Deflation events shapping up for the EU Zone and Japan, cash has its merits. That’s my guess, anyway… and worth all you paid for it ;-) Interesting chart :) Gold seemed like a good idea at the time! A form of saving that, I’m told, is growing in Australia, is 12 to ~24 month Mortgage lending which does NOT involve a bank. Typically the mortgage broker makes a margin of 2% so the borrower might pay 10% interest and the lenders get 8%. The loan to valuation ratio is typically 65% which means the market can drop 35% without the risk of capital loss. If the borrower defaults, then the broker salvages the loan by selling the property. Based on experience (!) the risk of capital loss is far lower than the risk that a company defaulting on repayment of the bonds that it has issued. Is this sort of business growing in the USA too? e.g. Well at least it is an EMSmith guess and not a WAG ;-)! I just bought 3prs of 501 LEVIs $62 per——-$62.00!! Mexican laborers must have got quite a pay raise. The cases of KERR jars I also bought today have gone up, From $14 to $19 over the last year. The prices on nearly everything We buy has increased quite a bit over the last year, only fuel has decreased. I’ve seen the stagnation of income over the last 6+ years, serious inflation in day to day purchases for the last year. Two weeks ago I purchased 1/2 CDX panels $19 for $13 – 2012 plywood. I am still waiting for the deflation in stuff I buy or at least an increase in income. The prices of Silver, copper, gold, oil, etc. received by the producers has definitely dropped, At retail only fuel prices have decreased as of yet. Bankers want the prices of their assets to hold or increase, everything else to hold or decrease. AT some time they must trade their bags of money for things of value. At this point only stocks create ROI. One of the founders of Home Depot said things were great for him at present because he could borrow from his bankers at next to zero cost and earn 3% on Wall Street investments. Real risk investments that increased the economy were out of the question because of the burden of regulation. pg p.g.sharrow — and what is his ROI if the market drops 35% ? Copper is sometimes called “the world’s best economist” as it is supposed to tell you how the world’s economy is going. Sliding price means a recession or depression coming. Coupled with previous article it looks like trouble with deflation. @P.G: Well, I’ve had much of the same effect. What I buy goes up, what I don’t buy (can’t buy?) goes down, what I make stagnant or dropping depending on year. $62 for Levi 501s? REALLY? My god, that’s insane. For that much I can make my own from canvas and pop rivets… It’s just cotton sail cloth, sewn and a few copper rivets. (I stopped buying them with they crossed over $30 some many years back. The “knockoffs” were good enough and better value / $$$ even if replaced more often.) Noticed that “Asparagus” jars ( wide mouth 1 1/2 pt ) are now sold in flats of 9 instead of the usual 12, but the price is about the same. 1/2 Gallons sold in 1/2 dozens instead of the old dozen box. @Sandy: When I was a kid, lots of real estate was sold with short term private money. Then the bank mortgages kind of took over. Now? Well, since I don’t know of anyone who is buying houses… I’m sure it is happening somewhere, but no with anyone I know. Oh, and per gold looking like a good idea at the time…. I seem to remember taking a LOT of flack from people about calling a top in Gold and especially in Silver some long time back. So I’d have to say “it seemed like a good idea at the time to SOME people”… ;-) Watch the charts. They tell you when it is a good idea. (I can review how to read them if folks have forgotten… but I’ve posted it so many times it was getting redundant and I figured maybe kind of boring…) @Graham No.3: IFF you are rich enough, you hedge. Long the basic asset, with an option (often out of the money by a fair amount) that limits the loss. So he would loose about 4% in a 35% market crash, and at the end of it still own the asset. Hard for “The Little Guy” to do that (mostly due to economic scale and expense of advisers, not lack of ‘product’). Yes, Little Orphan Annie and Daddy Warbucks had it right with his call to “Buy Copper! I need more Copper!”. It is one of my ‘go to’ indicators of economic activity. Before the houses can be sold, or the cars, or the machinery to industry, the copper must be mined and refined and sold and made into wire and pipes. Copper moves first, THEN the other industries. That it is dropping right now is not particularly encouraging. I’d speculate a short term happy bounce in the stock market based on the notion that The Republicans will fix things; but I think it beyond their control. IFF the bounce fails to exceed prior peaks, and instead falls below the SMA stack, well… So keep watching copper for an early turn; but as of now it is saying global demand is weak. A couple interesting charts on this link, shows historical inflation adjusted copper prices, and U.S. Interest rate chart for 3 month T bills. EMSmith says “… So keep watching copper for an early turn; but as of now it is saying global demand is weak.” Well the buildout in China is over. Copper is the most heavily used, restricted supply, industrial metal so a good yardstick for economic activity. Restricted enough so a greedy evil bastard can manipulate it for awhile. So where would the next large demand spring from? Most copper, once won from the earth, is recycled and not used up, so the amount available just keeps rising. If anything copper has decreased in real cost over the last 80 years. This reminds me, I will need to buy a bunch more wire and tubing over this winter. pg @EMSmith; You really think I could get by with imitation jeans? ;-) I don’t punch a computer keyboard for a living. Levis are my armor for the day to day combat of life in the field. I have had them turn a chainsaw from eating my leg!. They last me several times longer the any jeans I have ever tried. Besides you know that a heavy, dark blue denim Wrangler shirt and shrink to fit 501 Levis are Uniform of the Day for me, have been for most of my 68 years. Never owned any other suit except my Navy duds. Lol pg @Larry: Interesting chart. Copper, spot today, is about $3 and a few pennies. Less than the end point of the inflation adjusted chart. So still dropping. That T-Bill chart is rather dramatic, too. Back when a 10 year was about 14%, Regan had put the brakes on things (via Volker IIRC). I knew where things were headed, so bought a ‘ladder’ of ‘strips’ for the Spouse IRA. My only mistake was making the ladder 5, 10, 15 year instead of 10, 15, 20… or 15, 20, 30… It was about $1k / chunk, maturing to about $5k, $10k,, something like that. At any rate, I got a big win by doing nothing at all for a decade+ @P.G.: I know. I grew up in 501s in farm country. Wasn’t until I was in college that I wore other stuff (and even then kept 501s for the ‘rough stuff’.) But once they hit $25 to $30 / pair, just could not justify it for my needs. (Swapped over to bib overalls ;-) Still, at $62 / each, I’d be looking to get some sail cloth and try making my own… ( Mom was a seamstress in WWII making uniforms in the UK. I learned to sew and knit when I was about 5… and helped make some of my shirts in grammar school and high school…) There’s some way tough sail cloth out there… though getting the right blue might be hard ;-) @EMSmith; the creation of my own pants is not that difficult, just another chore. See: “The Return of An Old Friend” I just created a wizards hat and a heavy full length coat from worn out shirts and pants last winter as well as a covering for my FRP disk this fall from a light canvas painters drop cloth. Over the years, tractor seats and pickup seats have had coverings made on this 1900 Singer tailor’s sewing machine. I too learn to knit and sew at a very young age as well as cooking, cleaning, and changing diapers before I was promoted to field hand. ;-) Farm country kids contributed to the family resources at an early age. Not at all like moderns that play into their mid 20s. We will see how ambitious I am this winter. 8-) pg @P.G.: Looks exactly like the one my Mum had (on which I learned to sew and prior to learning, on which I ran a needle through my thumb… just backed it out and removed the stitch and all was well… after a while ;-) Had a simlar machine of my own ( a White, IIRC) with an electric motor too. Gave it away when I got rid of a lot of my stuff and moved onto a boat for a couple of years. Among life regrets… ought to have just stored stuff… @EMSmith; Well, it appears that David Stockman agrees with you. This hour on Varney, Fox Business, said much the same as you, Get clear of the market and into Cash. American cash as it will be buggered least in this rush to the bottom and therefor Deflation of American prices as the rest of the world attempts to reflate their inflation Genii. pg Also get to become your own emergency healthcare provider when you live rural: Exactly the same thing is going on in Europe. Interesting tale on high frequency trading and on how the big exchanges were front running trades due to timing delays in trades, and how one trader figured it out and broke their system to return to fair market trading. @P.G.: Nice to have some confirmation… I’ve got an interesting set of comparison graphs of USA vs EU vs Japan vs EEM to post some time…. Short form: Japan is dead flat along with emerging markets. EU rolled over when the banks failed the stress test in about last January. Only the USA is still climbing, and that is looking ever more unsupportable to me. We can’t lift the world on 2% GDP growth (most of which or all of which is fictional statistics games anyway…). To quote somebody or other: “I’ve got a baaaaadddd feeling about this…” I’ll try to get that posting up tonight. @Larry: R. de Haan had a couple of nice links on another thread too: I’m seeing less reason to participate at all and more reason to just buy things I can use and sit on them… ( I.e. pay off all debt. House, car, cards, whatever. Exit the game.) I saw those links — just more of the same. Regarding precious metals markets, you know the time to sell if you own precious metals is when every late night TV program or show that touches on finances has wall to wall ads about “now is the time to buy gold!” (or get your new low interest rate mortgage now!) That is a clear sign that the folks who have cornered a good share of the commodity are trying to dump it on average Joe investors. The average Joe investor is almost by definition the last to get in on the action and is invariably the designated sucker to get stuck with the worthless stock or option just before it nose dives. I saw this before in the 1980’s when the Hunts tried to corner the market in silver and everyone was pushing gold. I had just happened to have been reading a book about the great depression and one of the major players in the 1920’s stock market who cashed out just prior to the bust, told the story about how he realized the market was about to crater when he heard shoe shine boys talking about hot tips on the market and secretaries trading tips in the elevators. He took that as the sign to get out while he still could and he avoided getting totally wiped out like so many other major players. Even ignoring malicious intent, it is only natural for those inside a system to try to tweak things to their advantage. Over time the compounded effect of dozens or thousands of such moves tends to shift the advantage to the insiders. It is like in a casino. You know (or should know) that the one party which is sure to never lose in the long run is the casino and the one party sure to come out on the short end of the stick is the player (on average over long periods of time). Only in a handful of games can the player use good number theory and smart play to beat the house, and he will most assuredly be the exception rather than the rule. It is the average recreational player who always gets skinned, it is just the nature of the beast. If that was not the case the market/casino would go out of business in short order. If all the players are winning, the casino is going bankrupt. If everyone is pushing you to buy while you can, you can be sure that someone else is trying to sell while they still can. That is why the book “Black Swan” is such an interesting read. People are not good at seeing the logical outcome of current trends. As the saying goes “If it cannot continue for ever it must end!” Or “If you can’t figure out who the sucker is at the table, it is you!”
https://chiefio.wordpress.com/2014/11/07/breakdown-of-rut-vs-qqqq-and-weak-metals-and-currencies/
CC-MAIN-2018-17
refinedweb
3,503
80.41