Synopsis

    #include <stdlib.h>
    char *ptsname(int fildes);

Description

    The ptsname() function returns the name of the slave pseudo-terminal device associated with a master pseudo-terminal device. fildes is a file descriptor returned from a successful open of the master device. ptsname() returns a pointer to a string containing the null-terminated path name of the slave device of the form /dev/pts/N, where N is a non-negative integer.

Return Values

    Upon successful completion, the function ptsname() returns a pointer to a string which is the name of the pseudo-terminal slave device. This value points to a static data area that is overwritten by each call to ptsname(). Upon failure, ptsname() returns NULL. This could occur if fildes is an invalid file descriptor or if the slave device name does not exist in the file system.

Attributes

    See attributes(5) for descriptions of the following attributes:

See Also

    open(2), grantpt(3C), ttyname(3C), unlockpt(3C), attributes(5), standards(5)

    STREAMS Programming Guide
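The manual page above does not show a usage example, so here is a small illustrative sketch (not part of the manual page) of the usual sequence with the companion calls grantpt(3C) and unlockpt(3C); posix_openpt() is assumed as the portable way to obtain the master descriptor on a POSIX system.

```c
#define _XOPEN_SOURCE 600
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Open a pseudo-terminal master and return the slave path reported by
 * ptsname(), or NULL on failure.  The master fd is handed back through
 * *master_fd.  Remember: ptsname() returns a pointer to a static buffer
 * that is overwritten by the next call. */
char *open_master_and_name(int *master_fd)
{
    int fd = posix_openpt(O_RDWR | O_NOCTTY);   /* open the master device */
    if (fd < 0)
        return NULL;
    /* grantpt() and unlockpt() must succeed before the slave is usable. */
    if (grantpt(fd) != 0 || unlockpt(fd) != 0) {
        close(fd);
        return NULL;
    }
    *master_fd = fd;
    return ptsname(fd);
}
```

The returned name can then be passed to open(2) to obtain the slave side of the pair.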
http://docs.oracle.com/cd/E19253-01/816-5168/6mbb3hrnd/index.html
Pre-lab: Generic Collections

This week in lecture, we learned how to work with generic collections. Let's review.

Java before generics (pre-2004)

To date, we've seen collections that hold values of a particular type. For example, an AList is pre-defined to hold ints:

    public class AList {
        private int[] items;  // items are ints
        private int size;

        public int get(int i) {  // Returns an int
            // ...
        }

        public void insertBack(int x) {  // Takes an int
            // ...
        }

        // ...
    }

We could imagine a class called ObjectList, which is like AList except that it can hold values of any type. It would look something like this:

    public class ObjectList {
        private Object[] items;  // items are Objects
        private int size;

        public Object get(int i) {  // Returns an Object
            // ...
        }

        public void insertBack(Object x) {  // Takes an Object
            // ...
        }

        // ...
    }

Let's look at an example. Suppose we run the following:

    ObjectList dogs = new ObjectList();
    Dog fifi = new Dog("Fifi");
    dogs.insertBack((Object) fifi);  // The (Object) cast here is optional (why?)

Can you see why ObjectList would be hard to use? The get() method in ObjectList returns objects of type Object. In order to get a member of an ObjectList back as a Dog, you would need to downcast to the actual type:

    Dog dog1 = dogs.get(0);        // Won't compile since dogs.get(0) returns an Object
    Dog dog2 = (Dog) dogs.get(0);  // Okay, but annoying

In fact, this introduces a type safety problem: since we can put any Object into our ObjectList, there's no guarantee that the downcast will work!

    ObjectList dogs = new ObjectList();
    Cat toby = new Cat("Toby");
    dogs.insertBack((Object) toby);  // Okay
    fifi = (Dog) dogs.get(0);        // Compiles, but throws ClassCastException at runtime

Java with generics

We need a way to tell Java that our list type is generic—that it can hold values of any type, but only a single type at a time, chosen when the list is instantiated. Here's how we do that:

    public class GenericList<T> {
        private T[] items;  // items are of type T
        private int size;

        public T get(int i) {  // Returns an object of type T
            // ...
        }

        public void insertBack(T x) {  // Takes an object of type T
            // ...
        }

        // ...
    }

What's going on here? We've defined a list type called GenericList that has a type parameter (T). Wherever you see a T is where the actual type argument will go, once it's known. Since GenericList is generic, we must provide a type argument whenever we instantiate a GenericList. For example, let's make a GenericList that can hold Dog objects:

    GenericList<Dog> dogs = new GenericList<Dog>();

Now this particular GenericList, which we've named dogs, will behave as though every T were replaced by Dog. So items will be of type Dog[], get() will return a Dog, and so on. And now no casting is required when we use get():

    dogs.insertBack(fifi);
    fifi = dogs.get(0);  // No downcast from Object to Dog

And those pesky cats can't get into our dogs list!

    dogs.insertBack(toby);  // Won't work

In fact, the code above won't even compile—since we declared dogs to be of type GenericList<Dog>, the Java compiler knows that the insertBack() method should only accept Dog objects and not Cat objects.

Four more things about generics

First, GenericList<Dog> is usually read as "GenericList of Dogs". Makes sense, right?

Second, T is a common name for type parameters, but there's nothing special about it. You could use U or V or even TypeArgumentGoesHere. But T is the typical name by convention.

Third, a generic class can have multiple type parameters:

    public class Cons<U, V> {
        private U car;
        private V cdr;
    }

Finally, when instantiating a generic class, the type argument can usually be replaced with <>. So the instantiation of dogs

    GenericList<Dog> dogs = new GenericList<Dog>();  // From above

can be written as:

    GenericList<Dog> dogs = new GenericList<>();  // Easier on the eyes

This makes your code easier to read, especially when you have something like

    HashMap<Integer, HashMap<Integer, HashMap<String, Float>>> trans = new HashMap<>();

which is a HashMap mapping ints to HashMaps mapping ints to HashMaps mapping strings to floats.

Introduction

To get the starter files, run:

    git pull skeleton master

Start the lab by booting up the HugLife simulator. To do this, use the following commands (make sure you're in the lab6 directory):

    $ javac -g huglife/*.java creatures/*.java
    $ java huglife.HugLife samplesolo

A skeleton containing a few simple tests is provided. You can run these tests from the command line like this:

    $ javac huglife/*.java creatures/*.java
    $ java creatures.TestPlip

If you're still stuck, let a TA or a lab assistant know!

Magic Word

In case you missed it: be sure to set TestPlip.MAGIC_WORD to this week's magic word—you'll find the variable defined on line 20 of TestPlip.java. If you're submitting early, use "early" as the magic word.

Submission

Create a ZIP archive containing your creatures directory and upload it to Gradescope. If you don't know how to create a ZIP archive, try to find instructions on Google before asking others for help. Make sure the "Name" column in the upload dialog is completely correct, or else the autograder won't run correctly. The autograder for this lab is very basic. If your HugLife simulation looks mostly right—that is, if it resembles the animation from the introduction—you probably did everything correctly.
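To tie the review together, here is a compilable sketch of a GenericList along the lines above (a fill-in for illustration, not the official lab skeleton). The (T[]) cast is the standard workaround for Java's ban on generic array creation ("new T[8]" won't compile):

```java
import java.util.Arrays;

class GenericList<T> {
    private T[] items;   // items are of type T
    private int size;

    @SuppressWarnings("unchecked")
    public GenericList() {
        // Java forbids "new T[8]", so we allocate Object[] and cast.
        items = (T[]) new Object[8];
        size = 0;
    }

    public T get(int i) {   // Returns an object of type T
        if (i < 0 || i >= size) {
            throw new IndexOutOfBoundsException("bad index: " + i);
        }
        return items[i];
    }

    public void insertBack(T x) {   // Takes an object of type T
        if (size == items.length) {
            items = Arrays.copyOf(items, size * 2);   // grow when full
        }
        items[size] = x;
        size += 1;
    }

    public int size() {
        return size;
    }
}
```

With this class in hand, a GenericList<Dog> accepts only Dogs, exactly as described above, and get() needs no downcast.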
http://sp16.datastructur.es/materials/lab/lab6/lab6.html
The inclusion of Object Teams in the Indigo release train is discussed.

Contents
- 1 Extending Java
- 2 Extensible Java IDE
- 3 User Install Experience

(see also OTJLD §A, OT/J Syntax)

- Counting only changes in existing files, not added files, which live in the objectteams namespace.
- As of version 4.4 M2 / OTDT 2.3 M2 the differences affect more than 2,300 file locations.

86,000 test cases. All tests pass, and no build is ever published for which this is not true. Exception as of Luna: JDT/UI's LeakTestSuite shows some failures. We need to investigate whether this indicates a context-sensitive bug in JDT/UI or a bug caused by the OTDT.

With the above solution it was still possible that p2 would suggest installing the Object Teams patch feature during a check for updates. Resolved: this issue in p2 has been resolved via bug 3501.
http://wiki.eclipse.org/index.php?title=OTDT/JDTCore&redirect=no
On 12/30/2010 03:44 AM, Kirill A. Shutemov wrote:
>>> If no rpcmount mount option, no rpc_pipefs was found at
>>> '/var/lib/nfs/rpc_pipefs' and we are in init's mount namespace, we use
>>> init_rpc_pipefs.
>>
>> It's the "we are in init's mount namespace" that I was wondering about.
>>
>> So if I naively chroot, nfs mount stops working the way it did before I
>> chrooted unless I do an extra setup step?
>
> No. It will work as before since you are still in init's mount namespace.
> Creating a new mount namespace changes the rules.

Ah, CLONE_NEWNS and then you need /var/lib/nfs/rpc_pipefs. Got it.

I'm kind of surprised that the kernel cares about a specific path under /var/lib. (Seems like policy in the kernel somehow.) Can't it just check the current process's mount list to see if an instance of rpc_pipefs is mounted in the current namespace, the way lxc looks for cgroups? Or are there potential performance/scalability issues with that?

Rob
http://lkml.org/lkml/2010/12/30/35
This article seeks to demonstrate a potential use of the ASP.NET pipeline architecture. The Handler technique of dealing with requests on the server side is adopted to place a watermark string on all images sent to the client. The original image is, however, not modified in the process. A copy of the image is created, modified, and flushed into the output stream connected to the client browser. If your website offers a picture gallery, this method could be used to put a custom message (watermark) into every image that gets rendered on the client browser. More often than not, users download images and forget where they came from. The technique employed by the code in this article can serve to effectively advertise the source of the image. With a little imagination, the basic idea can be used to provide custom captions for images as well. One could come up with a myriad of uses for this technique. Old ASP was based on sending a response for every client request that makes its way through several ISAPI filters installed on IIS. These ISAPI filters and extensions had to be coded in C++, and hence were not widely adopted, although they offered great benefits by adding more punch to the services offered by the web server. ASP.NET, however, takes the focus away from ISAPI and introduces the concepts of handlers and modules to meet this end. Read on, people! Requests are received by IIS and passed to the ASP.NET worker process (aspnet_wp.exe) by an ISAPI filter (provided by ASP.NET) called aspnet_isapi.dll. This filter re-routes the request to the worker process, thereby bypassing a lot of IIS features in favor of those offered by the CLR and ASP.NET. The worker process dispatches HTTP requests through a pipeline which contains several modules that can modify and filter the requests. From that point on, the request is wrapped up into an instance of HttpContext and piped through a number of ASP.NET classes that implement the IHttpModule interface.
There are a number of system-level HTTP modules, providing services ranging from authentication to state management to output caching. The number of modules that get to intercept the request is based upon settings within the host machine's machine.config file and the application's web.config file. In classic ASP, this role of providing pre- and post-processing fell upon ISAPI filters; HTTP modules and handlers play the equivalent role for our web server, just like what ISAPI extensions and filters used to do for IIS. Every incoming request will have a URI. ASP.NET allows us to map every single URI to a specific handler class that will send a suitable response. A URI that is not mapped to a specific handler class will be passed to ASP.NET's default handler. This default handler will treat the URI as a file name, loading the file specified within the URI. By default, requests for a '.aspx' page are handled by a compiled class that inherits from the Page class (the request handler in this case), which implements the IHttpHandler interface. If we want another handler to cater to a request, we need to map the request to the desired handler class in the web application's configuration files, and also instruct the ASP.NET ISAPI extension to look out for the particular request. Classes implementing IHttpHandler can hook into the HTTP pipeline and service requests through the interface's ProcessRequest method. The ASP.NET ISAPI extension will only pick up those URI requests it has been mapped or configured to acquire. This configuration has to be done in the IIS web server configuration property sheet. This article will focus on only one potential use of HTTP handlers. To create a class that acts as an HTTP handler, we must implement the IHttpHandler interface. The interface has two prominent members that we must implement in our class:

- Sub ProcessRequest
- ReadOnly Property IsReusable As Boolean

An HTTP handler does not have access to session state variables and objects by default. To acquire this privilege, the handler class also needs to implement either one of the following interfaces, depending on the extent of access required:

- IRequiresSessionState
- IReadOnlySessionState

Both the above interfaces do not have any member signatures to implement. They simply serve as markers or flags for the ASP.NET engine to determine the degree of access to session information that must be provided to the handler object. When the ASP.NET engine receives a request, it will decide which handler to invoke by screening for <httpHandlers> elements in the web.config file. This is what the element entry should look like in a configuration file:

<httpHandlers>
    <add verb="" path="" type="" validate=""/>
</httpHandlers>

verb — e.g. verb="*", verb="GET, PUT". The verb attribute is used when you want to restrict requests via 'POST' or 'GET' or 'HEAD'. You'll just want to stick with a '*' - this will allow all of the above.

path — e.g. path="*.aspx", path="resource/"

type — e.g. type="Namespace.Class, AssemblyName"

validate

Here, we will attempt to insert a custom watermark into every image file that is requested. To follow this sample, I expect you to have a basic idea of how to use the System.Drawing namespace and classes. For the uninitiated, you use either a brush or a pen to draw on a canvas (to put it bluntly!). This class is our HTTP handler. You could compile this class as a separate DLL assembly or as a part of another assembly...it is your call. However, note the name of the class, the namespace it belongs to, and the assembly it is packaged in, as these pieces of information prove vital at the handler registration stage.
The ASP.NET worker process passes the HttpContext object (that wraps the request) to the ProcessRequest routine. We obtain the physical path of the requested image on the server, and proceed to apply a watermark to the image. The resultant image is then written to the output stream of the Response object. Follow the comments in the ImageWatermark class to comprehend its logic.

Imports System.Web
Imports System.Drawing

Public Class ImageHandler
    Implements IHttpHandler

    Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
        'We don't want this class to be reused simply
        'because we don't want other requests to wait and lie pending
        Get
            'suggests that an instance of this class cannot
            'be reused to serve more than one request.
            Return False
        End Get
    End Property

    Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
        Dim output As ImageWatermark = New ImageWatermark(context.Request.PhysicalPath)
        output.AddWaterMark("This is the custom string")
        output.Image.Save(context.Response.OutputStream, Drawing.Imaging.ImageFormat.Jpeg)
    End Sub

    Private Class ImageWatermark
        Private bmp As Bitmap

        Public Sub New(ByVal physicalPathToImage As String)
            bmp = New Bitmap(physicalPathToImage)
        End Sub

        Public Sub AddWaterMark(ByVal watermark As String)
            'get the drawing canvas (graphics object) from the bitmap
            Dim canvas As Graphics
            Try
                canvas = Graphics.FromImage(bmp)
            Catch e As Exception
                'You cannot create a Graphics object
                'from an image with an indexed pixel format.
                'If you want to open this image and draw
                'on it you need to do the following...

                'size the new bitmap to the source bitmap's dimensions
                Dim bmpNew As Bitmap = New Bitmap(bmp.Width, bmp.Height)
                canvas = Graphics.FromImage(bmpNew)

                'draw the old bitmap's contents to the new bitmap:
                'paint the entire region of the old bitmap to the
                'new bitmap; use the Rectangle type to
                'select the area of the source image
                canvas.DrawImage(bmp, New Rectangle(0, 0, bmpNew.Width, bmpNew.Height), _
                    0, 0, bmp.Width, bmp.Height, GraphicsUnit.Pixel)
                bmp = bmpNew
            End Try
            canvas.DrawString(watermark, New Font("Verdana", 14, FontStyle.Bold), _
                New SolidBrush(Color.Beige), 0, 0)
        End Sub

        Public ReadOnly Property Image() As Bitmap
            Get
                Return bmp
            End Get
        End Property
    End Class
End Class

At this point, I should probably bring your attention to the exception handling block employed in the AddWaterMark routine of the ImageWatermark class. It was not part of my original idea, simply because I did not expect to encounter the following error message:

Error: An unhandled exception of type 'System.Exception' occurred in system.drawing.dll
Additional information: A Graphics object cannot be created from an image that has an indexed pixel format...

As it turns out, a GIF image with an indexed pixel format does not allow its color palette to be modified. As a workaround, we draw the contents of the image into a new Bitmap instance and proceed with our operation on the new instance. This error really annoyed me until I found out! Thanks to the Internet!

This part is easy. Just add the following XML section to the <system.web> element of your web.config file. Note that I have registered the handler to only deal with requests for GIF and JPG files. The assembly name provided is Home. You should replace it with the name of the assembly you compile the handler class into.

<system.web>
    <httpHandlers>
        <add verb="*" path="*.jpg,*.gif" type="ImageHandler,Home" validate="false"/>
    </httpHandlers>
</system.web>

This is the last and most critical step to getting this whole exercise to work.
We need to tell IIS to pass requests for files with .jpg and .gif extensions to ASP.NET's very own ISAPI filter, aspnet_isapi.dll. To do so, we have to map the extensions to the filter process. This is done in the Application Configuration property sheet of your website or virtual directory. Fire up the Internet Services Manager and access your web or virtual directory properties. Repeat steps 2 to 6 for each extension you wish to handle. With that out of the way, load up your favorite browser and try accessing any image file, or any page that contains graphics pulled from your website or a virtual directory on your website, depending on your IIS configuration. And that's that! Please feel free to contact me at jaison_n_john@hotmail.com if you have any query or feedback to impose on my humble self. Seasons Greetings to one and all!

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.

A related variant saves the image to a MemoryStream and sets the response content type to match the requested file's extension:

Dim oMem As New MemoryStream
Select Case LCase(Right(context.Request.PhysicalPath, 3))
    Case "jpg"
        output.Image.Save(oMem, Drawing.Imaging.ImageFormat.Jpeg)
        context.Response.ContentType = "image/jpeg"
    Case "gif"
        output.Image.Save(oMem, Drawing.Imaging.ImageFormat.Gif)
        context.Response.ContentType = "image/gif"
    Case "png"
        output.Image.Save(oMem, Drawing.Imaging.ImageFormat.Png)
        context.Response.ContentType = "image/png"
End Select
oMem.WriteTo(context.Response.OutputStream)
http://www.codeproject.com/Articles/5561/Watermark-Website-Images-At-Runtime?fid=29502&df=90&mpp=25&noise=3&prof=True&sort=Position&view=Normal&spc=Relaxed
Data associated with each field in the dialog box. More... #include <vgui_dialog_impl.h> Data associated with each field in the dialog box. The representation of a dialog box in vgui is simply as a list of these elements. Definition at line 149 of file vgui_dialog_impl.h. Definition at line 165 of file vgui_dialog_impl.h. Field to collect data from the user. The derived GUI implementation should not delete these. Definition at line 163 of file vgui_dialog_impl.h. What type of field this is (int, bool, file browser, etc). Definition at line 152 of file vgui_dialog_impl.h. A pointer to a GUI widget for this field, if one exists. This is null in most cases since it is easier to construct widgets as we need them, except perhaps for something complicated like a file browser or colour chooser. The GUI implementation is completely responsible for this pointer (i.e. ensuring memory deallocation when the dialog closes, etc.) Definition at line 160 of file vgui_dialog_impl.h.
http://public.kitware.com/vxl/doc/release/core/vgui/html/structvgui__dialog__impl_1_1element.html
IN THIS ARTICLE In April, Arduino launched the new MKR1000 board, a move that I’m sure made countless IoT fanatics, including myself, jump for joy. The MKR1000 integrates the functionalities of the Arduino Uno and a WiFi shield into one neat little two-inch board, making it easier than ever to add all sorts of objects to the “Internet of Things”. With its small size and built-in WiFi capabilities, this little device presents the perfect opportunity for anyone to use PubNub’s global data stream network to transmit data from practically anywhere. To prove that point, I set out creating this demo, in which I’ll walk through how to set up the MKR1000 with PubNub and then use it to transmit temperature data from different locations within San Francisco, a city known for its microclimates. If you follow along, you’ll have access to temperature data from any locations you choose, and you’ll be able to monitor it in realtime. To top it all off, I’ll show you how to use both the EON Chart Builder and Mapbox Editor to present the data in just a few easy steps. Let’s get started! The full code repository is available on GitHub. Getting Started with the Arduino MKR1000 Before you can get started working with the MKR, there are a few simple things you need to do to make sure that you have the correct software installed to use this magic little board. Installing Drivers and Updating Libraries After downloading the latest Arduino IDE, open up the software. To select the right board, go to Tools > Board > Boards Manager. Select the box that says “Arduino SAMD Boards (32-bits ARM Cortex-M0+)” and click Install. Now go back to Tools > Board and you should see the board listed. Click it, and plug in your device with a micro-USB cable. If you’re working on a PC, a box should pop up to install the driver – do it. Mac users can rest easy and ignore this step. 
To work with the WiFi capabilities of the MKR1000, as well as collect sensor data, you'll need to install both the WiFi101 library and the DHT sensor library, a library which handles all the functions of the DHT22 temperature sensor we're using in this tutorial. Do this by going to Sketch > Include Library > Manage Libraries. Find each library and click Install to be able to use them in your project. Lastly, you'll need to install the PubNub library if you haven't done so already, which you can do through the PubNub Arduino documentation page. To make it compatible with the MKR1000, you'll need to tweak a few lines of the library. Open up PubNub.h in any basic text editor, and switch the comments for the ethernet and WiFi lines, as shown below.

//#define PubNub_Ethernet
#define PubNub_WiFi

Also modify the line that includes the WiFi library so that it includes WiFi101 instead.

#elif defined(PubNub_WiFi)
#include <WiFi101.h>

Now that everything's set up, you're ready to start programming your board!

Connecting to WiFi

To connect your board to WiFi, be sure to include the WiFi101 library at the top of your code, then input your network id and password as variables and set your initial WiFi status.

#include <SPI.h>
#include <WiFi101.h>

char ssid[] = "network-id";
char pass[] = "network-password";
int status = WL_IDLE_STATUS;

In the setup portion of your code, you need to establish your WiFi connection. It's best to also print this information to the console to indicate whether or not you were actually successful in connecting.

void setup() {
  Serial.begin(9600);
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to network... ");
    status = WiFi.begin(ssid, pass);
    delay(10000);
  }
  Serial.print("SSID: ");
  Serial.println(WiFi.SSID());
}

Open up the console and run this code. If you see your SSID printed on the screen, congratulations! You successfully connected your MKR1000 to WiFi.
Setting Up the Hardware For this project, you’re going to build a fairly basic circuit in order to connect your temperature sensor to the Arduino. At the same time, you’re going to wire the MKR1000 so that you can run it off battery power, which is necessary in order for it to operate while unplugged from the computer. Keep in mind that connecting the MKR to the breadboard can be a bit tricky, so it might be helpful to place a block under one side of the board so that it stays connected while you’re using it. Hardware You’ll Need - 1 Arduino MKR1000 or Genuino MKR1000 - 1 resistor (10k) - 5 male/male wires - 1 breadboard - 1 DHT22 sensor - 1 battery holder (4AA) - 4 AA batteries Building the DHT22 Circuit Now that you have all the pieces, let’s put them together. First attach the ground and 5v pins on the MKR1000 to the breadboard, then hook up your DHT22 temperature sensor as shown. You should double check that the legs of your sensor are connected to the right places – the left leg should be attached to power, the right leg to ground, and the second leg from the left to a digital Arduino pin (I used pin 6) and to a resistor, which is also connected to power. Insert the batteries into the holder and connect the black wire to the negative column on your breadboard and the red wire to the VIN pin on the Arduino. Your Arduino should light up to show that it’s connected to power, and ta-da! You’ve got a fully wired circuit. You can unplug the battery for now, you won’t need the board to run by itself until after you write the program. Sending Temperature Data with PubNub Part of what defines IoT is the fact that it combines everyday pieces of hardware, objects, and activities with software components that connect them to the digital world. This connection is what PubNub specializes in – it provides the software necessary for devices to launch information from the real world into the digital world. 
Since you've already got the hardware all set up, the next steps of this project involve reading the data and using PubNub's global data stream network to publish it. That way, in true IoT fashion, you can access it from anywhere. You'll need a few libraries for this project, so be sure that all of these are included at the top of your program.

#include <SPI.h>
#include <WiFi101.h>
#include <PubNub.h>
#include <DHT.h>

Reading Data from the DHT22 Sensor

To use the sensor, you first need to initialize it using its pin and type.

#define DHTPIN 6
#define DHTTYPE DHT22
DHT dht(DHTPIN, DHTTYPE);

After that, reading the sensor data only requires two lines of code: one to start the sensor in void setup() and one to read it in void loop().

void setup() {
  dht.begin();
}

void loop() {
  float temp = dht.readTemperature(true);
}

Publishing Data to PubNub

In order to publish the data to PubNub, you first need to initialize PubNub, then format the data as JSON. After that, you can use PubNub's Arduino library to send the message over the MKR's WiFi connection.

Initializing PubNub

To start using PubNub, you first need to input your publish and subscribe keys at the top of your code, then define the channel you'll be using for your data.

char pubkey[] = "demo";
char subkey[] = "demo";
char channel[] = "temperature";

Next you need to initialize PubNub in the setup portion of your code.

PubNub.begin(pubkey, subkey);

Formatting Messages

To convert the data from a float to JSON, you need to define two helper functions. The first is a commonly used function that converts the float that the sensor records to a string.

char *dtostrf (double val, signed char width, unsigned char dec, char *s) {
  char m[20];
  sprintf(m, "%%%d.%df", width, dec);
  sprintf(s, m, val);
  return s;
}

The other function joins long strings together, which you need to do in order to create the JSON.
char longString[100];
char *joinStrings(char* string1, char* string2, char* string3) {
  longString[0] = 0;
  strcat(longString, string1);
  strcat(longString, string2);
  strcat(longString, string3);
  return longString;
}

Using those functions, we can convert the float data into a JSON string. If you're using various locations to transmit your data, like I am, then you need to change the "Location" term in the JSON to reflect the different locations. For this project, one of my MKR1000s runs code that says "SoMa" while another says "Ingleside," and so on.

char msg[200];
dtostrf(temp, 7, 2, msg);
char* json = joinStrings("{\"eon\":{\"Location\":\"", msg, "\"}}");

Now the data is ready to be published.

Sending the Messages

Use the PubNub.publish() method to publish the data to your channel over the WiFi connection.

WiFiClient *client = PubNub.publish(channel, json);

Now use the code below to print the data to the console and add a delay, which helps to clean up your data.

while (client->connected()) {
  char c = client->read();
  Serial.print(c);
}
client->stop();
delay(5000);

Visualizing Data with EON and Mapbox

With awesome tools like the EON Chart Builder and Mapbox Editor, you can easily visualize and embed your data anywhere you like in just a couple quick steps. It's pretty self-explanatory for the most part, but I'll walk through it here so you can see how I got to my finished product.

Using EON Chart Builder

To visualize your data on a graph like the one below, you just need to input your subscribe key and channel name into the EON Chart Builder and adjust the settings to your liking. You can change the colors of your graph, assign labels to the x and y axes, modify the number of data points shown, and indicate whether or not you want to include historical channel data in your project. Then just copy and paste the HTML code at the bottom of the screen to your HTML file. It's that easy!
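Off the device, the two helpers and the JSON assembly can be sanity-checked with plain C. These are host-side copies of the sketch's functions for testing purposes; make_payload is a wrapper added here for illustration, not part of the original sketch.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Host-side copy of the sketch's float-to-string helper. */
static char *dtostrf(double val, signed char width, unsigned char dec, char *s)
{
    char m[20];
    sprintf(m, "%%%d.%df", width, dec);   /* builds a format like "%7.2f" */
    sprintf(s, m, val);
    return s;
}

/* Host-side copy of the string-joining helper. */
static char longString[100];
static char *joinStrings(const char *s1, const char *s2, const char *s3)
{
    longString[0] = 0;
    strcat(longString, s1);
    strcat(longString, s2);
    strcat(longString, s3);
    return longString;
}

/* Build the {"eon":{"Location":"<temp>"}} payload described above. */
char *make_payload(double temp)
{
    static char msg[200];
    dtostrf(temp, 7, 2, msg);
    return joinStrings("{\"eon\":{\"Location\":\"", msg, "\"}}");
}
```

Running this on a laptop before flashing the board makes it easy to confirm the payload shape that EON and Mapbox will consume.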
Mapping the Data with Mapbox

For this project, you can use Mapbox to show the location of your sensor(s) on a map, then tweak the HTML so that your temperature data is visible as well. Start off by making an account if you don't already have one – it's free!

Using Mapbox Editor

Your Mapbox account comes with access to a full JavaScript SDK as well as a tool called Mapbox Editor. To get started, just copy and paste the code from the SDK into your HTML, then open up the editor. There, you can customize your map to your liking with just a few clicks and export the additional HTML to your project. Just like that, you've got a map to manipulate at your fingertips.

Inserting Temperature Data

To include accurate temperature data in your map, you need to write a function that inputs the collected data into the "description" portion of your marker. Because JavaScript executes code asynchronously, you have to use nested functions to run the code in the order you want it – first retrieve the data using PubNub, then print it to the description box. The first function will retrieve the latest data from PubNub and then call another function, which is specified in the function call.

function getDescription(doanother) {
  pubnub.history({
    channel: "temperature",
    count: 3,
    callback: function(m) {
      somaTemp = JSON.stringify(m);
      somaTemp = somaTemp.substr(somaTemp.search("So")+9,5);
      somaTemp = somaTemp + '°F';
      doanother();
    }
  });
};

When you call that function, you then need to define another function as a parameter which assigns that value to the description. This forces the code to be executed in the correct order.

getDescription(function() {
  L.mapbox.featureLayer({
    type: 'Feature',
    geometry: {
      type: 'Point',
      coordinates: [-122.402801,37.784689]
    },
    properties: {
      title: 'SoMa',
      description: somaTemp,
      'marker-size': 'large',
      'marker-color': '#f86767',
    }
  }).addTo(map);
});

And you're done!
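The substring arithmetic above (search("So")+9) is fragile if the payload shape changes. As a hedged alternative (a sketch, assuming each history entry is the parsed form of the {"eon":{"SoMa":"  68.50"}} messages published earlier; extractTemp is a name invented here, not a PubNub API), you can walk the parsed entries instead:

```javascript
// Pull a temperature out of parsed pubnub.history() entries by key,
// instead of relying on fixed character offsets.
// `entries` is assumed to be an array of messages shaped like
// {"eon": {"SoMa": "  68.50"}} -- the format the sketch publishes.
function extractTemp(entries, location) {
  for (const m of entries) {
    const eon = m && m.eon;
    if (eon && Object.prototype.hasOwnProperty.call(eon, location)) {
      return parseFloat(eon[location]) + '\u00B0F';
    }
  }
  return null;  // location not found in any entry
}
```

The same function then works unchanged for every neighborhood marker, rather than needing one offset per location name.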
You’ve successfully set up your Arduino, linked it with PubNub, and visualized the data using both the EON Chart Builder and Mapbox. Taking it Further So there you have it! You’ve now built a wireless distributed temperature monitoring system using just an Arduino MKR1000, a temperature sensor, and a few batteries and wires. With the help of PubNub, you can access this system from almost anywhere in the world. To take it further, you could use the same basic steps to monitor any type of sensor-collected data in realtime, whether it’s humidity, light, motion, or anything else you can think of. You could also expand the number of locations you’re using by putting a different MKR in each one, and voila! Your project is global. Armed with a WiFi connection and a data streaming service, your possibilities are endless.
https://www.pubnub.com/blog/wirelessly-tracking-temperature-data-with-the-arduino-mkr1000/
Writing XML documents is very straightforward, as I hope Chapters 3 and 4 proved. Reading XML documents is not nearly as simple. Fortunately, you don't have to do all the work yourself; you can use an XML parser to read the document for you. The XML parser exposes the contents of an XML document through an API, which a client application then reads. In addition to reading the document and providing the contents to the client application, the parser checks the document for well-formedness and (optionally) validity. If it finds an error, it informs the client application.

InputStreams and Readers

It's time to reverse the examples of Chapters 3 and 4. Instead of putting information into an XML document, I'm going to take information out of one. In particular, I'm going to use an example that reads the response from the Fibonacci XML-RPC servlet introduced in Chapter 3. This document takes the form shown in Example 5.1.

Example 5.1 A Response from the Fibonacci XML-RPC Server

<?xml version="1.0"?>
<methodResponse>
  <params>
    <param>
      <value><double>28657</double></value>
    </param>
  </params>
</methodResponse>

The clients for the XML-RPC server developed in Chapter 3 simply printed the entire document on the console. Now I want to extract just the answer and strip out all of the markup. In this situation, the user interface will look something like this:

C:\XMLJAVA>java FibonacciClient 9
34

From the user's perspective, the XML is completely hidden. The user neither knows nor cares that the request is being sent and the response is being received in an XML document. Those are merely implementation details. In fact, the user may not even know that the request is being sent over the network rather than being processed locally. All the user sees is the very basic command line interface. Obviously you could attach a fancier GUI front end, but this is not a book about GUI programming, so I'll leave that as an exercise for the reader.
Given that you're writing a client to talk to an XML-RPC server, you know that the documents you're processing always take this form. You know that the root element is methodResponse. You know that the methodResponse element contains a single params element that in turn contains a param element. You know that this param element contains a single value element. (For the moment, I'm going to ignore the possibility of a fault response to keep the examples smaller and simpler. Adding a fault response would be straightforward, and we'll do that in later chapters.) The XML-RPC specification specifies all of this. If any of it is violated in the response you get back from the server, then that server is not sending correct XML-RPC. You'd probably respond to this by throwing an exception. Given that you're writing a client to talk to the specific servlet at, you know that the value element contains a single double element that in turn contains a string representing a double. This isn't true for all XML-RPC servers, but it is true for this one. If the server returned a value with a type other than double, you'd probably respond by throwing an exception, just as you would if a local method you expected to return a Double instead returned a String. The only significant difference is that in the XML-RPC case, neither the compiler nor the virtual machine can do any type checking. Thus you may want to be a bit more explicit about handling a case in which something unexpected is returned. The main point is this: Most programs you write are going to read documents written in a specific XML vocabulary. They will not be designed to handle absolutely any well-formed document that comes down the pipe. Your programs will make assumptions about the content and structure of those documents, just as they now make assumptions about the content and structure of external objects. 
If you are concerned that your assumptions may occasionally be violated (and you should be), then you can validate your documents against a schema of some kind so you know up front if you're being fed bad data. However, you do need to make some assumptions about the format of your documents before you can process them reasonably.

It's simple enough to hook up an InputStream and/or an InputStreamReader to the document, and read it out. For example, the following method reads an input XML document from the specified input stream and copies it to System.out:

public void printXML(InputStream xml) throws IOException {
  int c;
  while ((c = xml.read()) != -1) System.out.write(c);
}

To actually extract the information, a little more work is required. You need to determine which pieces of the input you actually want and separate those out from all the rest of the text. In the Fibonacci XML-RPC example, you need to extract the text string between the <double> and </double> tags and then convert it to a java.math.BigInteger object. (Remember, I'm using a double here only because XML-RPC's ints aren't big enough to handle Fibonacci numbers. However, all the responses should contain an integral value.) The readFibonacciXMLRPCResponse() method in Example 5.2 does exactly this by first reading the entire XML document into a StringBuffer, converting the buffer to a String, and then using the indexOf() and substring() methods to extract the desired information. The main() method connects to the server using the URL and URLConnection classes, sends a request document to the server using the OutputStream and OutputStreamWriter classes, and passes the InputStream containing the response XML document to the readFibonacciXMLRPCResponse() method.
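The book's example is written in Java; purely as a language-neutral sketch of the same indexOf()/substring() idea (not the book's code), the naive extraction looks like this:

```python
def extract_double(document: str) -> int:
    """Naively pull the text between <value><double> and </double></value>,
    exactly as Example 5.2 does with indexOf() and substring().
    Fragile by design: it breaks if the tags are split across lines,
    contain extra whitespace, or appear earlier, e.g. in a comment."""
    start_tag = "<value><double>"
    end_tag = "</double></value>"
    start = document.index(start_tag) + len(start_tag)
    end = document.index(end_tag)
    return int(document[start:end])

response = ('<?xml version="1.0"?><methodResponse><params><param>'
            '<value><double>28657</double></value>'
            '</param></params></methodResponse>')
print(extract_double(response))  # prints 28657
```

Feed it the same response with a line break between the tags and it raises an exception — exactly the brittleness the chapter goes on to describe.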
Example 5.2 Reading an XML-RPC Response

import java.net.*;
import java.io.*;
import java.math.BigInteger;

public class FibonacciClient {

  static String defaultServer = "";

  public static void main(String[] args) {

    if (args.length <= 0) {
      System.out.println(
        "Usage: java FibonacciClient number url");
      return;
    }

    String server = defaultServer;

    try {
      // ... (opening the URLConnection and writing the request,
      //      as described in the text above)
      InputStream in = connection.getInputStream();
      BigInteger result = readFibonacciXMLRPCResponse(in);
      System.out.println(result);
      in.close();
      connection.disconnect();
    }
    catch (IOException e) {
      System.err.println(e);
    }
  }

  private static BigInteger readFibonacciXMLRPCResponse(
      InputStream in)
      throws IOException, NumberFormatException,
             StringIndexOutOfBoundsException {

    StringBuffer sb = new StringBuffer();
    Reader reader = new InputStreamReader(in, "UTF-8");
    int c;
    while ((c = reader.read()) != -1) sb.append((char) c);

    String document = sb.toString();
    String startTag = "<value><double>";
    String endTag = "</double></value>";
    int start = document.indexOf(startTag) + startTag.length();
    int end = document.indexOf(endTag);
    String result = document.substring(start, end);
    return new BigInteger(result);
  }
}

Reading the response XML document is more work than writing the request document, but still plausible. This stream- and string-based solution is far from robust, however, and will fail if any one of the following conditions is present:

- The document returned is encoded in UTF-16 instead of UTF-8.
- An earlier part of the document contains the text "<value><double>", even in a comment.
- The response is written with line breaks between the value and double tags, like this:

  <value>
    <double>28657</double>
  </value>

- There's extra white space inside the double tags, like this:

  <double >28657</double >

Perhaps worse than these potential pitfalls are all the malformed responses FibonacciClient will accept, even though it should recognize and reject them.
And this is a simple example in which we just want one piece of data that's clearly marked up. The more data you want from an XML document, and the more complex and flexible the markup, the harder it is to find using basic string matching or even the regular expressions introduced in Java 1.4. Straight text parsing is not the appropriate tool with which to navigate an XML document. The structure and semantics of an XML document are encoded in the document's markup, its tags, and its attributes; and you need a tool that is designed to recognize and understand this structure and to report any errors in it. The tool you need is called an XML parser.
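The book goes on to use Java XML parsers; purely to illustrate the difference (a sketch, not the book's code), here is the same extraction done with a real parser — Python's standard-library ElementTree — which navigates the parsed tree and is therefore immune to the line-break and whitespace pitfalls listed above:

```python
import xml.etree.ElementTree as ET

def read_fibonacci_response(document: str) -> int:
    """Extract the <double> value by walking the parsed tree
    instead of searching raw text."""
    root = ET.fromstring(document)  # root is <methodResponse>
    value = root.find("./params/param/value/double")
    if value is None:
        raise ValueError("not a Fibonacci XML-RPC response")
    return int(value.text.strip())

# Line breaks and surrounding whitespace no longer matter:
response = """<?xml version="1.0"?>
<methodResponse>
  <params>
    <param>
      <value>
        <double> 28657 </double>
      </value>
    </param>
  </params>
</methodResponse>"""
print(read_fibonacci_response(response))  # prints 28657
```

The parser also rejects malformed input for free, which the string-matching version silently accepts.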
http://www.informit.com/articles/article.aspx?p=30609
idntify_widget

A Flutter plugin using the implementation of the IDntify service.

Installation

First, add idntify_widget and camera as a dependency in your pubspec.yaml file.

environment:
  sdk: '>=2.12.0 <3.0.0'
  flutter: '>=2.0'

dependencies:
  extended_image: ^1.0.0
  camera:

Due to the use of some package dependencies you'll need to change some configuration in your Flutter app. If you want to know more about these packages you can read the documentation of the camera and image_picker packages. If you just want to use the API, then don't add the camera package and you're ready to use it.

iOS

In order to use this package iOS 10.0 or higher is needed.

For the camera package>

For the image_picker package you must

Android

In order to use this package Android sdk version 21 or higher is needed.

For the camera package, change the minimum Android sdk version to 21 (or higher) in your android/app/build.gradle file.

minSdkVersion 21

For the image_picker package the configuration depends on the sdk version.

API < 29

No configuration required - the plugin should work out of the box.

API 29+

Add android:requestLegacyExternalStorage="true" as an attribute to the <application> tag in AndroidManifest.xml. The attribute is false by default on apps targeting Android Q.

Usage

Before starting to write code you must have already set your application 'origin' and generated your API key. If everything is ready, then it depends on what kind of integration you would like to use.

Widget

The widget goes through all the steps of the process, doing all the tough work for you. Keep in mind that each time the widget is recreated a new transaction process will be created. It's recommended to make the widget expand to the same size as its parent widget if that's the case. Just use an Expanded() or a Flexible().

Now it's time to write code! You need to get a list of the available cameras on the device. Don't worry, it's a single line.
Then you just call the Idntify widget with three required parameters: an API key, an 'origin' and a reference to the available cameras. You can also include the stage and the callback functions of certain events. Here is a simple example.

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:idntify_widget/idntify_widget.dart';

List<CameraDescription> cameras;

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  cameras = await availableCameras();
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Example',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: Scaffold(
        appBar: AppBar(
          title: Text('Simple Example'),
        ),
        body: Idntify(
          '<<YOUR API_KEY>>',
          '<<YOUR ORIGIN>>',
          cameras,
          stage: Stage.dev, // Stage.prod
          onTransactionFinished: () => print('finished'),
          onStepChange: (step) => print('step: ${step}'),
        ),
      ),
    );
  }
}

API

This works really simply. Just create an instance of the IdntifyApiService class, passing an API key, an 'origin' and the stage. At this point you just call the functions whenever you want. Keep in mind that the correct flow is to create a transaction first, then add two documents, and at the end add the selfie; it'll report whether the transaction was completed. Here is a simple example.

import 'package:idntify_widget/idntify_widget.dart';
import 'dart:typed_data';

IdntifyApiService api = IdntifyApiService('<<API_KEY>>', '<<ORIGIN>>', Stage.dev);

// If this call succeeds then the other functions will work.
await api.createTransaction();

Uint8List frontalID = your_file_in_bytes;
Uint8List reverseID = your_file_in_bytes;

await api.addDocument(frontalID, DocumentType.frontal);
await api.addDocument(reverseID, DocumentType.reverse);

Uint8List selfiePicture = your_file_in_bytes;

// A 1-2 seconds video.
Uint8List selfieVideo = your_file_in_bytes;

// If you want to access the properties of the response object:
final IdntifyResponse response = await api.addSelfie(selfiePicture, selfieVideo);
print('${response.message}');

TODO

- [x] Improve error handling
- [x] Improve IdntifyApiService
- [x] Refactor or rewrite getCamera()
- [x] Refactor to a clean build() in Idntify widget. Optional: use routes.
- [x] Add responsive support.
https://pub.dev/documentation/idntify_widget/latest/
The Q3Socket class provides a buffered TCP connection. More...

#include <Q3Socket>

Inherits QIODevice.

The Q3Socket class provides a buffered TCP connection.

socketDevice() returns a pointer to the Q3SocketDevice used for this socket.

Q3Socket is not suitable for use in threads. If you need to use sockets in threads, use the lower-level Q3SocketDevice class.

See also Q3SocketDevice, QHostAddress, and QSocketNotifier.

This enum specifies the possible errors:

This enum defines the connection states:

Creates a Q3Socket object in Q3Socket::Idle state. The parent and name arguments are passed on to the QObject constructor.

Destroys the socket. Closes the connection if necessary.

See also close().

Returns the host address of this socket. (This is normally the main IP address of the host, but can be e.g. 127.0.0.1 for connections to localhost.)

Returns the current read index. Since Q3Socket is a sequential device, the current read index is always zero.

This is an overloaded member function, provided for convenience.

Moves the read index forward to index and returns true if the operation was successful; otherwise returns false. Moving the index forward means skipping incoming data.

Returns true if there is no more data to read; otherwise returns false.

Reimplemented from QIODevice.

Returns the number of incoming bytes that can be read, i.e. the size of the input buffer. Equivalent to size().

Reimplemented from QIODevice.

See also bytesToWrite().

Returns the number of bytes that are waiting to be written, i.e. the size of the output buffer.

Reimplemented from QIODevice.

See also bytesAvailable() and clearPendingData().

This signal is emitted when data has been written to the network. The nbytes parameter specifies how many bytes were written. The bytesToWrite() function is often used in the same context; it indicates how many buffered bytes there are left to write.

See also writeBlock() and bytesToWrite().
Returns true if it's possible to read an entire line of text from this socket at this time; otherwise returns false.

Note that if the peer closes the connection unexpectedly, this function returns false. This means that loops such as this won't work:

while( !socket->canReadLine() ) // WRONG
    ;

Reimplemented from QIODevice.

See also readLine().

Deletes the data that is waiting to be written. This is useful if you want to close the socket without waiting for all the data to be written.

See also bytesToWrite(), close(), and delayedCloseFinished().

Closes the socket. The read buffer is cleared.

If the output buffer is empty, the state is set to Q3Socket::Idle and the connection is terminated immediately. If the output buffer still contains data to be written, Q3Socket goes into the Q3Socket::Closing state and the rest of the data will be written. When all of the outgoing data have been written, the state is set to Q3Socket::Idle and the connection is terminated. At this point, the delayedCloseFinished() signal is emitted.

If you don't want the data in the output buffer to be written, call clearPendingData() before you call close().

Reimplemented from QIODevice.

See also state(), bytesToWrite(), and clearPendingData().

Attempts to make a connection to host on the specified port and returns immediately. Any connection or pending connection is closed immediately, and Q3Socket will do a normal DNS lookup if required.

Note that port is in native byte order, unlike some other libraries.

See also state().

This signal is emitted after connectToHost() has been called and a connection has been successfully established.

See also connectToHost() and connectionClosed().

This signal is emitted when the other end has closed the connection. The read buffers may contain buffered input data which you can read after the connection was closed.

See also connectToHost() and close().

This signal is emitted when a delayed close is finished.
If you call close() and there is buffered output data to be written, Q3Socket goes into the Q3Socket::Closing state and returns immediately. It will then keep writing to the socket until all the data has been written. Then, the delayedCloseFinished() signal is emitted.

See also close().

This signal is emitted after an error occurred. The error parameter is the Error value.

Implementation of the abstract virtual QIODevice::flush() function. This function always returns true.

Reads a single byte/character from the internal read buffer. Returns the byte/character read, or -1 if there is nothing to be read.

See also bytesAvailable() and putch().

This signal is emitted after connectToHost() has been called and the host lookup has succeeded.

See also connected().

Opens the socket using the specified QIODevice file mode m. This function is called automatically when needed and you should not call it yourself.

Reimplemented from QIODevice.

See also close().

This is an overloaded member function, provided for convenience.

Returns the address of the connected peer if the socket is in Connected state; otherwise an empty QHostAddress is returned.

Returns the host name as specified to the connectToHost() function. An empty string is returned if none has been set.

Returns the peer's host port number, normally as specified to the connectToHost() function. If none has been set, this function returns 0.

Note that Qt always uses native byte order, i.e. 67 is 67 in Qt; there is no need to call htons().

Returns the host port number of this socket, in native byte order.

Writes the character ch to the output buffer. Returns ch, or -1 if an error occurred.

See also getch().

Returns the size of the read buffer.

See also setReadBufferSize().

Reads maxlen bytes from the socket into data and returns the number of bytes read. Returns -1 if an error occurred.

Reimplemented from QIODevice.

This signal is emitted every time there is new incoming data.
Sets the size of the Q3Socket's internal read buffer to bufSize.

Usually Q3Socket reads all data that is available from the operating system's socket. If the buffer size is limited to a certain size, this means that Q3Socket doesn't buffer more than this amount of data.

Sets the socket to use socket and the state() to Connected. The socket must already be connected.

This allows us to use the Q3Socket class as a wrapper for other socket types (e.g. Unix Domain Sockets).

See also socket().

Sets the internal socket device to device. Passing a device of 0 will cause the internal socket device to be used. Any existing connection will be disconnected before using the new device.

The new device should not be connected before being associated with a Q3Socket; after setting the socket call connectToHost() to make the connection.

This function is useful if you need to subclass Q3SocketDevice and want to use the Q3Socket API, for example, to implement Unix domain sockets.

See also socketDevice().

Returns the number of incoming bytes that can be read right now (like bytesAvailable()).

Reimplemented from QIODevice.

Returns the socket number, or -1 if there is no socket at the moment.

See also setSocket().

Returns a pointer to the internal socket device.

There is normally no need to manipulate the socket device directly since this class does the necessary setup for most applications.

See also setSocketDevice().

Returns the current state of the socket connection.

See also Q3Socket::State.

This implementation of the virtual function QIODevice::ungetch() prepends the character ch to the read buffer so that the next read returns this character as the first character of the output.

Wait up to msecs milliseconds for more data to be available.

This is an overloaded member function, provided for convenience.

Writes len bytes to the socket from data and returns the number of bytes written. Returns -1 if an error occurred.

Reimplemented from QIODevice.
http://doc.trolltech.com/4.3/q3socket.html
keyctl_dh_compute — Compute a Diffie-Hellman shared secret or public key
keyctl_dh_compute_kdf — Derive key from a Diffie-Hellman shared secret

Synopsis

#include <keyutils.h>

long keyctl_dh_compute(key_serial_t private, key_serial_t prime,
                       key_serial_t base, char *buffer, size_t buflen);

long keyctl_dh_compute_alloc(key_serial_t private, key_serial_t prime,
                             key_serial_t base, void **_buffer);

long keyctl_dh_compute_kdf(key_serial_t private, key_serial_t prime,
                           key_serial_t base, char *hashname,
                           char *otherinfo, size_t otherinfolen,
                           char *buffer, size_t buflen);

Description

keyctl_dh_compute() computes a Diffie-Hellman public key or shared secret. That computation is:

        base ^ private ( mod prime )

When base is a key containing the shared generator value, the remote public key is computed. When base is a key containing the remote public key, the shared secret is computed.

base, private, and prime must all refer to user-type keys containing the parameters for the computation. Each of these keys must grant the caller read permission in order for them to be used.

buffer and buflen specify the buffer into which the computed result will be placed. buflen may be zero, in which case the buffer is not used and the minimum buffer length is fetched.

keyctl_dh_compute_alloc() is similar to keyctl_dh_compute() except that it allocates a buffer big enough to hold the payload data and places the data in it. If successful, a pointer to the buffer is placed in *_buffer. The caller must free the buffer.

keyctl_dh_compute_kdf() derives a key from a Diffie-Hellman shared secret according to the protocol specified in SP800-56A. The Diffie-Hellman computation is based on the same primitives as discussed for keyctl_dh_compute(). To implement the protocol of SP800-56A, base is a key containing the remote public key used to compute the Diffie-Hellman shared secret. That shared secret is post-processed with a key derivation function.
The hashname specifies the Linux kernel crypto API name for a hash that shall be used for the key derivation function, such as sha256. The hashname must be a NULL terminated string. See /proc/crypto for available hashes on the system.

Following the specification of SP800-56A section 5.8.1.2, the otherinfo parameter may be provided. The format of the OtherInfo field is defined by the caller. The caller may also specify NULL as a valid argument when no OtherInfo data shall be processed. The length of the otherinfo parameter is specified with otherinfolen and is restricted to a maximum length by the kernel.

The KDF returns the requested number of bytes specified with the genlen or the buflen parameter depending on the invoked function.

buffer and buflen specify the buffer into which the computed result will be placed.

Return Value

On success keyctl_dh_compute() returns the amount of data placed into the buffer when buflen is non-zero. When buflen is zero, the minimum buffer length to hold the data is returned.

On success keyctl_dh_compute_alloc() returns the amount of data in the buffer.

On error, both functions set errno to an appropriate code and return the value -1.

Errors

- ENOKEY One of the keys specified is invalid or not readable.
- EINVAL The buffer pointer is invalid or buflen is too small.
- EOPNOTSUPP One of the keys was not a valid user key.
- EMSGSIZE When using keyctl_dh_compute_kdf(), the size of either otherinfolen or buflen is too big.

Linking

This is a library function that can be found in libkeyutils. When linking, -lkeyutils should be specified to the linker.

See Also

keyctl(1), keyctl(2), keyctl(3), keyutils(7)

Referenced By

keyctl(2), keyctl(3). The man pages keyctl_dh_compute_alloc(3) and keyctl_dh_compute_kdf(3) are aliases of keyctl_dh_compute(3).
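As an illustrative sketch of the computation this page describes — base ^ private (mod prime), followed by hash-based post-processing of the shared secret — here is the idea in plain Python. This is not the kernel implementation, and it uses deliberately tiny, unsafe demo parameters:

```python
import hashlib

def dh_compute(private: int, prime: int, base: int) -> int:
    """base ^ private (mod prime), as in keyctl_dh_compute()."""
    return pow(base, private, prime)

def dh_compute_kdf(shared_secret: int, hashname: str, otherinfo: bytes) -> bytes:
    """Toy stand-in for the SP800-56A post-processing step: hash the
    shared secret together with the caller-defined OtherInfo field."""
    h = hashlib.new(hashname)
    n = (shared_secret.bit_length() + 7) // 8 or 1
    h.update(shared_secret.to_bytes(n, "big"))
    h.update(otherinfo)
    return h.digest()

# Tiny demo parameters (NOT cryptographically safe).
prime, generator = 23, 5
a_priv, b_priv = 6, 15
a_pub = dh_compute(a_priv, prime, generator)   # base = shared generator
b_pub = dh_compute(b_priv, prime, generator)
# base = remote public key -> both sides derive the same shared secret
secret_a = dh_compute(a_priv, prime, b_pub)
secret_b = dh_compute(b_priv, prime, a_pub)
assert secret_a == secret_b
key = dh_compute_kdf(secret_a, "sha256", b"context")
```

In the real syscall interface, private, prime and base are key serial numbers referring to user-type keys rather than raw integers.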
https://dashdash.io/3/keyctl_dh_compute_kdf
#include <core.hpp>

XML/YAML File Storage Class.

The class describes an object associated with an XML or YAML file. It can be used to store data to such a file or to read and decode the data. The storage is organized as a tree of nested sequences (or lists) and mappings. A sequence is a heterogeneous array, whose elements are accessed by indices or sequentially using an iterator. A mapping is the analogue of std::map or a C structure, whose elements are accessed by names. The topmost structure is a mapping. Leaves of the file storage tree are integers, floating-point numbers and text strings.

For example, the following code:

will produce the following file:

%YAML:1.0
test_int: 5
test_real: 3.1000000000000001e+00
test_string: ABCDEFGH
test_mat: !!opencv-matrix
   rows: 3
   cols: 3
   dt: f
   data: [ 1., 0., 0., 0., 1., 0., 0., 0., 1. ]
test_list:
   - 1.0000000000000000e-13
   - 2
   - 3.1415926535897931e+00
   - -3435345
   - "2-502 2-029 3egegeg"
   - { month:12, day:31, year:1969 }
test_map:
   x: 1
   y: 2
   width: 100
   height: 200
   lbp: [ 0, 1, 1, 0, 1, 1, 0, 1 ]

and to read the file above, the following code can be used:
YAML supports multiple streams writes the registered C structure (CvMat, CvMatND, CvSeq). See cvWrite() writes one or more numbers of the specified format to the currently written structure the currently written element the underlying C FileStorage structure the writer state the stack of written structures
https://docs.opencv.org/ref/2.4.13.3/da/d56/classcv_1_1FileStorage.html
A small Python package to extract content from web pages. Project description Markout A small Python package I made to extract HTML content from web pages. It is very customizable and I made it to fit my needs (extract multiple pages' code to Markdown, but only some HTML tags which I needed). Due to its purpose being able to convert specific HTML tags into a desired Markdown format this script does not generate any standard output, rather, it uses custom tokens specified in a configuration file, so the output can be formatted into any anything. Usage Importing into your code To use this package you'll need to install it using pip: pip install markout-html Then just import it into your code: from markout_html import * After that you can use the extract_url and extract_html functions: result = extract_url( # HTML page link '', # Tokens to format each HTML tags contents (you can extract only the ones you want) { 'p': "\n** {} **" }, # Only extract contents inside this tag 'article' ) result = extract_html( # HTML code string '<html>some html code</html>', # Tokens to format each HTML tags contents (you can extract only the ones you want) { 'p': "\n** {} **" }, # Only extract contents inside this tag 'article' ) Using the CLI command Below are a few examples with better description on how to use this package command if you don't want to create a Python script! If you just want to extract using a string in the terminal, you can use markout_html --extract [string]. You can use the command markout_html with the flag --help for more info. Configuration All configurations can be found into a single file: .markoutrc.json (you can specify another name in the terminal with the flag --config), if you don't load a configuration file the script will use its default values. There is an example of configuration in the repository root! 
To specify a different configuration file use: markout_html --config [filename] The configuration file values links - object of links to be extracted, each link has a destination value (output file). Example: { "links": { "": "out/post.md", "": "out/other_post.md" } } The example above will get the HTML from and extract the results into out/post.md. only_on - string that specify where (which HTML tag) to extract the contents from (e.g. : html, body, main). Example: { "only_on": "article" } tokens - object in which each specified HTML tag will be extract into a formatted string and then placed on the output file. Example: { "tokens": { "header": "# {}", "h1": "\n# {}", "h2": "\n# {}", "b": "\n## {}", "li": "+ {}", "i": "** {} **", "p": "\n{}", "span": "{}" } } On the example above, the contents of the HTML tag <header> will be extract into the # {} string, so for example, if we had <header>Some text here!</header> the result would've been # Some text here! (this formats the text into Markdown). Contributions Feel free to leave your contribution here, I would really appreciate it! Also, if you have any doubts or troubles using this package just contact me or leave an issue. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/markout-html/
Details

Description

In org.apache.solr.analysis, HTMLStripCharFilter uses a wrong isHex() method that accepts characters like 'X' and 'Y' as valid hex chars:

private boolean isHex(int ch) {
  return (ch>='0' && ch<='9') ||
         (ch>='A' && ch<='Z') ||
         (ch>='a' && ch<='z');
}

If only characters from [0-9a-fA-F] are allowed, the readNumericEntity method will detect a mismatch faster.

Activity

Nice catch Bernhard! I think this is a new record for "time between bug introduced in code and bug reported" ... from what I can tell this has been in there since Solr 1.0 (but as you point out, it didn't actually cause bad behavior since the final integer parsing would result in a MISMATCH and backtracking). Still good to fix it though – thank you for reporting it.

Committed revision 1208032. - trunk
Committed revision 1208037. - 3x
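The fix is to restrict the accepted range to actual hexadecimal digits. As a sketch of the behavior difference (in Python rather than the Java original):

```python
def is_hex_buggy(ch: str) -> bool:
    """The original check: accepts ANY ASCII letter, not just a-f/A-F."""
    return ('0' <= ch <= '9') or ('A' <= ch <= 'Z') or ('a' <= ch <= 'z')

def is_hex_fixed(ch: str) -> bool:
    """Corrected check: only real hexadecimal digits pass."""
    return ('0' <= ch <= '9') or ('A' <= ch <= 'F') or ('a' <= ch <= 'f')

assert is_hex_buggy('X')       # bug: 'X' wrongly accepted as hex
assert not is_hex_fixed('X')   # fixed: rejected immediately
```

With the fixed predicate, readNumericEntity can reject a bad entity at the first non-hex character instead of only failing later during integer parsing.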
https://issues.apache.org/jira/browse/SOLR-2819
Mr C
Member

I was wondering, since UDP is connection-less, how do you store new connections? I haven't found a clear example of this. With TCP there is a connection made which you could store in a struct and/or put into a list easily.

Thank you both for your replies, I will add the fixes and see what I end up with. Although TCP might be "easier" for this, my Prof. wants me to use UDP. Sadly I am at his mercy :) Thanks again.

C#/XNA socket programming question

Mr C posted a topic in Networking and Multiplayer

Hi all. I didn't want to make a post about this but Google has proven rather unhelpful. I am in school for Game Programming and I am working on an XNA-based tech demo for a class. I am trying to teach myself UDP socket programming with the goal of eventually making a multiplayer game. Right now my grasp on socket programming is rudimentary at best, though through example I have written a simple chat server/client. I would love to be able to use Lidgren for this, but as it is a tech demo I need to write all the code myself, which I am fine with.

The server prints "Message: NONE" on any key press, which at least means it is getting to my server.
KeyboardState keyboardState = Keyboard.GetState(); // fix: new KeyboardState() is always empty, so IsKeyDown would never return true

if (keyboardState.GetPressedKeys().Length > 0)
{
    // Default movement is none
    // (fix: this check needs &&, not ||, otherwise it is true whenever any one of the keys is up)
    if (!keyboardState.IsKeyDown(Keys.W) &&
        !keyboardState.IsKeyDown(Keys.A) &&
        !keyboardState.IsKeyDown(Keys.S) &&
        !keyboardState.IsKeyDown(Keys.D))
    {
        MoveDir = MoveDirection.NONE;
    }

    if (keyboardState.IsKeyDown(Keys.W)) { MoveDir = MoveDirection.UP; }
    if (keyboardState.IsKeyDown(Keys.A)) { MoveDir = MoveDirection.LEFT; }
    if (keyboardState.IsKeyDown(Keys.S)) { MoveDir = MoveDirection.DOWN; }
    if (keyboardState.IsKeyDown(Keys.D)) { MoveDir = MoveDirection.RIGHT; }

    byte[] send_buffer = Encoding.ASCII.GetBytes(MoveDir.ToString());

    try
    {
        sending_socket.SendTo(send_buffer, sending_end_point);
    }
    catch (Exception send_exception)
    {
        exception_thrown = true;
        Console.WriteLine(" Exception {0}", send_exception.Message);
    }

    if (exception_thrown == false)
    {
        Console.WriteLine("Message has been sent to the broadcast address");
    }
    else
    {
        exception_thrown = false;
        Console.WriteLine("The exception indicates the message was not sent.");
    }
}

That is the chunk of relevant code inside my XNA Update function. MoveDir is declared at the start of my file as an instance of MoveDirection.
// Move direction enumerator
enum MoveDirection { UP, DOWN, LEFT, RIGHT, NONE }

is my enum. I don't think it matters, but my server code is:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.Threading;

namespace Server
{
    public class UDPListener
    {
        private const int listenPort = 11000;

        static int Main(string[] args)
        {
            bool done = false;
            UdpClient listener = new UdpClient(listenPort);
            IPEndPoint groupEP = new IPEndPoint(IPAddress.Any, listenPort);
            string received_data;
            byte[] receive_name_array;
            byte[] receive_byte_array;

            receive_name_array = listener.Receive(ref groupEP); // first datagram carries the user name

            try
            {
                while (!done)
                {
                    receive_byte_array = listener.Receive(ref groupEP);
                    Console.WriteLine("Received a broadcast from {0}", groupEP.ToString());
                    received_data = Encoding.ASCII.GetString(receive_byte_array, 0, receive_byte_array.Length);
                    Console.Write("Message: {0}\n", received_data);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }

            listener.Close();
            return 0;
        }
    }
}

Anyways, like I said, I am new to this and trying to figure it all out. My level of experience is a college-level data structures course. XNA does not support "true keyboard input", and I know there HAS to be a simpler way to get this done besides what is suggested in that post. I don't need it to be pretty, just to work. If this is a dumb question, I apologize for wasting people's time. Thanks for any help.

Edit: I was unsure if I should post this here or in the networking section; since it has XNA-specific stuff I chose here. Please move if it is out of place.

I feel really dumb, but I am stuck on this (changing direction of a sprite) - Mr C replied to Mr C's topic in For Beginners

Turns out my base issue was having my variables for certain things in the wrong place.
Now it works, at least as far as back-and-forth bouncing (other issues are present, but I can deal). I had re-written the code with

float v_x = 5;
float v_y = 0;

but had put them before the part dealing with the movement, thus putting them inside of a loop, so they were constantly being made and redefined. I moved them outside of the loop and things work now. I did learn a lot doing this though, and thanks for pointing me in the right direction as far as the math itself.

I feel really dumb, but I am stuck on this (changing direction of a sprite) - Mr C replied to Mr C's topic in For Beginners

Quote: Original post by Litheon
I think you are looking for 2D Vector Reflection. :)

If only I understood what any of that meant...

I feel really dumb, but I am stuck on this (changing direction of a sprite) - Mr C posted a topic in For Beginners

Hey, I am currently working on making a pong clone (for learning purposes). I have it to the point where both paddles and the ball are onscreen, both paddles can be controlled, and there are checks in place to make sure nothing goes off screen. However, when I added the ball, I ran into problems. I can get it to move at the start fine (as the code below shows), and it stops where it's supposed to (well, close enough anyways). The problem is I cannot for the life of me figure out how to get it to change direction. I know it's going to be something stupid, but I have been working on this for a while now and I feel I have gotten to the point where I should ask for help.

Note: This project is being done with SFML in Code::Blocks, if that information makes any difference.
#include <SFML/Graphics.hpp>
#include <iostream>

int main()
{
    // Create the main rendering window
    sf::RenderWindow App(sf::VideoMode(800, 600, 32), "SFML Pong");
    App.SetFramerateLimit(60); // Limits framerate

    // Next 3 lines display window size in console
    std::cout << App.GetHeight();
    std::cout << "\n";
    std::cout << App.GetWidth();

    sf::Image bluePaddle;
    sf::Image redPaddle;
    sf::Image ball;

    // next 3 if's load images and display an error message if there is a problem
    if (!bluePaddle.LoadFromFile("bluePaddle.png")) {
        std::cout << "Error, bluePaddle.png failed to load";
    }
    if (!redPaddle.LoadFromFile("redPaddle.png")) {
        std::cout << "Error, redPaddle.png failed to load";
    }
    if (!ball.LoadFromFile("ball.png")) {
        std::cout << "Error, ball.png failed to load";
    }

    // set blue paddle sprite and values
    sf::Sprite bluePaddleSprite(bluePaddle);
    bluePaddleSprite.SetY(200);

    // set red paddle sprite and values
    sf::Sprite redPaddleSprite(redPaddle);
    redPaddleSprite.SetX(784);
    redPaddleSprite.SetY(200);

    // set the ball's sprite and values
    sf::Sprite ballSprite(ball);
    ballSprite.SetX(250);
    ballSprite.SetY(250);

    // Start game loop
    while (App.IsOpened())
    {
        // Process events
        sf::Event Event;
        while (App.GetEvent(Event))
        {
            // Close window : exit
            if (Event.Type == sf::Event::Closed)
                App.Close();

            // A key has been pressed
            if (Event.Type == sf::Event::KeyPressed)
            {
                // Escape key : exit
                if (Event.Key.Code == sf::Key::Escape)
                    App.Close();
            }
        }

        // Clear the screen
        App.Clear(sf::Color(0, 0, 0));

        // next 2 if's are the blue paddle's border guards (collision detection, makes sure it stays in bounds)
        if (bluePaddleSprite.GetPosition().y < 0) {
            bluePaddleSprite.SetY(0.5);
        }
        if (bluePaddleSprite.GetPosition().y > App.GetHeight() - bluePaddle.GetHeight()) {
            bluePaddleSprite.SetY(455);
        }

        // next 2 if's are the red paddle's border guards (same as blue)
        if (redPaddleSprite.GetPosition().y < 0) {
            redPaddleSprite.SetY(0.5);
        }
        if (redPaddleSprite.GetPosition().y > App.GetHeight() - redPaddle.GetHeight()) {
            redPaddleSprite.SetY(455);
        }

        // -> start of code dealing with ball. This bit will deal with ball movement/collision/etc.
        // [ball code elided]
        // <- end of all the work with ball

        // this chunk provides the code for player control (movement)
        if (App.GetInput().IsKeyDown(sf::Key::W)) {
            bluePaddleSprite.Move(0, 150 * App.GetFrameTime() * -1);
        }
        else if (App.GetInput().IsKeyDown(sf::Key::S)) {
            bluePaddleSprite.Move(0, 150 * App.GetFrameTime() * 1);
        }

        // this bit is a tester for red before I put in AI, to make sure movement works. (tested working)
        if (App.GetInput().IsKeyDown(sf::Key::Up)) {
            redPaddleSprite.Move(0, 150 * App.GetFrameTime() * -1);
        }
        else if (App.GetInput().IsKeyDown(sf::Key::Down)) {
            redPaddleSprite.Move(0, 150 * App.GetFrameTime() * 1);
        }

        // Draws the blue paddle
        App.Draw(bluePaddleSprite);
        // Draws the red paddle
        App.Draw(redPaddleSprite);
        // Draws the ball
        App.Draw(ballSprite);

        // Display window contents on screen
        App.Display();
    }

    return EXIT_SUCCESS;
}

is my full code. The piece of code in question is the ball section marked above. There are a few other bugs, such as that although the ball stops, it stops at that point even if the paddle is not there (leading me to think I made it check the y value of the area the paddle is on, instead of just the paddle). However, my biggest problem right now is just getting the ball to change direction on contact. Thank you, and sorry for the trouble.

A good sprite resource I found - Mr C replied to Mr C's topic in 2D and 3D Art

Quote: Original post by OrangyTang
I'm not sure I understand your definition of "free to use" sprites. These all seem to be ripped from various commercial games of some kind.

Sorry, I realized they were rips at a later point. But for learning/hobby they should be fine. Apologies for not reading it carefully. I was thinking these would be good for somebody learning who does not have an artist/the ability to make sprites. I would think for a commercial release you would want your own images anyways.
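Returning to the pong question above: the usual trick is not full 2D vector reflection but simply flipping one velocity component when the ball crosses an edge (or hits a paddle). A minimal sketch in plain JavaScript; the window and ball sizes are illustrative, and the same two if-statements translate directly into the SFML game loop:

```javascript
// Ball state lives OUTSIDE the game loop (the bug fixed in the reply above).
function makeBall() {
  return { x: 250, y: 250, vx: 5, vy: 2 };
}

// One tick: move the ball, then flip a velocity component on contact.
function step(ball, width, height, size) {
  ball.x += ball.vx;
  ball.y += ball.vy;
  if (ball.x <= 0 || ball.x + size >= width)  ball.vx = -ball.vx; // left/right edge (or paddle)
  if (ball.y <= 0 || ball.y + size >= height) ball.vy = -ball.vy; // top/bottom wall
}

const ball = makeBall();
ball.x = 790;         // just inside the right edge of an 800-wide window
step(ball, 800, 600, 16);
console.log(ball.vx); // -5: the ball now travels left
```

A paddle hit is the same idea: if the ball's rectangle overlaps a paddle's rectangle, negate vx.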
So basically have both images tied to one call and then pick which one you want at the time? - JTippetts, thank you very much for taking the time to provide such a detailed explanation. Currently I am in the process of making a spell class to learn how to attache objects to images. I plan to build this code up slowly until I have a working mage that can walk/cast correctly and have the spell "shoot" forward and perhaps add objects in the world to learn collision detection. You lost me abit on the void Object::preRender() { Animation *anim = animationset->getAnimation(curanimation); sf::IntRect subrect=anim->getSubRect(curframe); sf::Image *image=anim->getImage(curframe); sprite.SetSubRect(subrect); sprite.SetImage(image); } chunk but that is probably due to me not understanding pointers as well as I would like. Thanks again. A good sprite resource I found Mr C posted a topic in 2D and 3D ArtThe topic above seems to have been archived, or I would have posted this there. is a site that has a great collection of free to use sprites. is an example, and when you save the image there is no background. Hope this helps some people. Better way to swap images in SFML? Mr C posted a topic in For BeginnersHey all, I am working on a bit of code and I was wondering if any of you knew a better way to change one image into another then the way I am using. I looked on the SFML site and could not find anything... 
#include <SFML/System.hpp>
#include <SFML/Window.hpp>
#include <SFML/Graphics.hpp>
#include <iostream>
#include <string>

int main()
{
    //sf::Clock gameClock;
    //float timeElapsed = 0;

    try
    {
        sf::RenderWindow App(sf::VideoMode(800, 600, 32), "SFML Window");

        sf::Image mageReady;
        sf::Image mageCast;
        sf::Image Fire_Ball;

        if (!mageReady.LoadFromFile("mageReady.png")) {
            throw std::string("Your image failed to load");
        }
        if (!mageCast.LoadFromFile("mageCast.png")) {
            throw std::string("Your image failed to load");
        }
        if (!Fire_Ball.LoadFromFile("fireball.png")) {
            throw std::string("Your image failed to load");
        }

        sf::Sprite mageReadySprite;
        mageReadySprite.SetImage(mageReady);
        mageReadySprite.SetCenter(38, 38);

        sf::Sprite mageCastSprite;
        mageCastSprite.SetImage(mageCast);
        mageCastSprite.SetCenter(38, 38);

        sf::Sprite Fire_Ball_Sprite;
        Fire_Ball_Sprite.SetImage(Fire_Ball);

        bool isNotCasting = true;
        bool isCasting = false;
        bool showFireBall = false;
        bool Running = true;

        while (Running)
        {
            //timeElapsed = gameClock.GetElapsedTime();

            sf::Event myEvent;
            while (App.GetEvent(myEvent))
            {
                // Window closed
                if (myEvent.Type == sf::Event::Closed)
                    App.Close();

                // Escape key pressed
                if ((myEvent.Type == sf::Event::KeyPressed) && (myEvent.Key.Code == sf::Key::Escape))
                    App.Close();

                // show mage cast/spell
                if ((myEvent.Type == sf::Event::KeyPressed) && (myEvent.Key.Code == sf::Key::C))
                {
                    isCasting = true;
                    showFireBall = true;
                    mageCastSprite.SetX(mageReadySprite.GetPosition().x);
                    mageCastSprite.SetY(mageReadySprite.GetPosition().y);
                    Fire_Ball_Sprite.SetX(mageCastSprite.GetPosition().x + 10);
                    Fire_Ball_Sprite.SetY(mageCastSprite.GetPosition().y - 60);
                }
            }

            // Clear the screen (fill it with white color)
            App.Clear(sf::Color(255, 255, 255));

            // draw mage sprite
            if (isNotCasting == true) {
                App.Draw(mageReadySprite);
            }

            // draw mage cast
            if (isCasting == true && showFireBall == true) {
                isNotCasting = false;
                App.Draw(mageCastSprite);
                App.Draw(Fire_Ball_Sprite);
            }

            // Get elapsed time
            float ElapsedTime = App.GetFrameTime();

            // Move the sprite
            if (App.GetInput().IsKeyDown(sf::Key::Left))  mageReadySprite.Move(-100 * ElapsedTime, 0);
            if (App.GetInput().IsKeyDown(sf::Key::Right)) mageReadySprite.Move( 100 * ElapsedTime, 0);
            if (App.GetInput().IsKeyDown(sf::Key::Up))    mageReadySprite.Move(0, -100 * ElapsedTime);
            if (App.GetInput().IsKeyDown(sf::Key::Down))  mageReadySprite.Move(0,  100 * ElapsedTime);

            if (isCasting == true) {
                if (App.GetInput().IsKeyDown(sf::Key::Left))  mageCastSprite.Move(-100 * ElapsedTime, 0);
                if (App.GetInput().IsKeyDown(sf::Key::Right)) mageCastSprite.Move( 100 * ElapsedTime, 0);
                if (App.GetInput().IsKeyDown(sf::Key::Up))    mageCastSprite.Move(0, -100 * ElapsedTime);
                if (App.GetInput().IsKeyDown(sf::Key::Down))  mageCastSprite.Move(0,  100 * ElapsedTime);
            }

            App.Display();
        }
    }
    catch (std::string message)
    {
        std::cout << "you fail because " << message;
    }

    return 0;
}

Right now I have it so that when the key is pressed it puts the mageCast sprite where the mageReady sprite was, and hides the mageReady. I am looking for a way to temporarily (or permanently) remove/delete an image when I want.
The goal of this little program is to eventually have the mage walk, cast and shoot (show it casting, have the spell move X amount and then delete itself), then go back to the first image (mageReady). Right now I can move with the first image, and "C" will swap it into the second one and place the fireball where I want it. I feel that there must be a better way to do what I am trying to do... Thanks all.

Edit: Does this belong in the Alternative Game Libraries forum? I am not sure; please move if it does, and sorry if this is the wrong place.

Battlefield Bad Company 2 Questions - Mr C replied to John Stuart's topic in For Beginners

If you look closely you will find that the buildings in BFBC 2 are "segmented"; that is, they don't blow up dynamically (though they do a good job faking it). My theory is that when enough damage is dealt, it calls a function to play an animation for that part of the building blowing up. I could be wrong, but when you die in a building from it collapsing, it is not like you are struck by different debris; you just die as it falls around you. So it plays the animation that shows chunks of it exploding, then deletes those objects from the initial structure. Think of a puzzle that makes an image, and removing a piece of that puzzle. In the Afro Samurai game they did it so that when you cut off an arm it was removed from the body object. That's my thoughts on it anyways.

Quote: Original post by _fastcall
I had originally assumed you had "using namespace std", and as you posted, my assumption is incorrect. Try this:

std::cin.ignore( std::numeric_limits<std::streamsize>::max(), '\n' );

(Or alternately: using std::numeric_limits; using std::streamsize)

Jesus Fish, I love you.

Quote: Original post by _fastcall
Quote: Original post by Mr C
I tried using cin.ignore(numeric_limits<streamsize>::max(),'\n'); but it says numeric_limits was not declared in scope.

Oops, I should have mentioned that you need to include <limits> in order to use std::numeric_limits.
EDIT: Yeah, you need to clear out the extra input left over when the user enters the integer, before asking for a line of text. (Entering "1Hello world!" works as expected; the integer is read, then the remainder is read by getline and saved to the file.)

Well, I tried doing #include <limits> but that was a no go:

main.cpp|26|error: `numeric_limits' was not declared in this scope|

as well as streamsize and max having the same issue... At this point my issue is I can't even enter a string... before, I had it so I could enter "The brown dog jumped" but only "The" would be saved...

My code is:

#include <iostream>
#include <string>
#include <fstream>

using std::cin;
using std::cout;
using std::string;
using std::ofstream;
using std::ifstream;
using std::istreambuf_iterator;
using std::getline;

int main()
{
    int choice;
    cout << "Hello user! I was told to greet you in a nice and polite way! Lets be friend?\n";
    cout << "So, do you want to create a new file or load the last one?\n";
    cout << "1)new 2)load: ";
    cin >> choice;
    // (the cin.ignore fix suggested above would go here, and needs #include <limits>)

    if (choice == 1) {
        string text;
        cout << "Enter the text: ";
        getline(cin, text);
        ofstream myOutFileStream("save1.txt");
        myOutFileStream << text;
        myOutFileStream.close();
    }

    if (choice == 2) {
        ifstream myInFileStream("save1.txt");
        string save1((istreambuf_iterator<char>(myInFileStream)), istreambuf_iterator<char>());
        cout << save1;
        myInFileStream.close();
    }
}

It compiles and runs, but it will not even let me enter the string. I feel like I am missing something obvious/doing something dumb... but I don't see it.

Edit: As a side note, if I enter text in the file beforehand it reads it fine, even if there are multiple words...
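To close the loop on the UDP question at the top of this page (how do you store new "connections" when UDP has none): the usual answer is to key per-client state by the sender's address/port pair, creating an entry the first time a datagram arrives from a new endpoint. A sketch in JavaScript, with handlePacket standing in for the socket's receive callback (the C# equivalent would be a Dictionary keyed by the IPEndPoint that the receive call reports):

```javascript
// UDP is connection-less, so the server invents its own session table,
// keyed by the sender's (address, port) pair.
const sessions = new Map();

function handlePacket(rinfo) { // rinfo: { address, port } of the sender
  const key = rinfo.address + ":" + rinfo.port;
  if (!sessions.has(key)) {
    // first datagram from this endpoint = a "new connection"
    sessions.set(key, { packets: 0 });
  }
  const session = sessions.get(key);
  session.packets += 1;
  return session;
}

handlePacket({ address: "10.0.0.5", port: 11000 });
handlePacket({ address: "10.0.0.5", port: 11000 });
handlePacket({ address: "10.0.0.9", port: 11000 });
console.log(sessions.size); // 2 distinct "connections"
```

Since there is no disconnect event in UDP, idle entries are typically expired using a per-session timestamp.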
https://www.gamedev.net/profile/114697-mr-c/?tab=topics
CC-MAIN-2018-05
refinedweb
2,914
59.09
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

Mark Mitchell <mark@codesourcery.com> writes:

| With the new symbol visibility patches, we have to be careful not to
| get incorrect linkage in the libraries; if the default visibility is
| not "default" when we include the headers, things get confused.
| Things are even worse for ports that use "hidden" visibility by
| default; the ARM/Symbian OS port will be one such.
|
| A tricky problem here was that this pattern was present in a lot of
| the libsupc++ files:
|
| // cxxabi.h
|
| namespace __cxxabiv1 {
| extern "C" void f();
| }
|
| // f.c
|
| #include <cxxabi.h>
|
| extern "C" void f() {}
|
| This is *not* a definition of the function declared in the header;
| it's just an overloaded function. You must write "__cxxabiv1::f" or
| put the definition inside the namespace.

If the "visibility" patch does something like that, then the visibility patch is doing something very very *odd*. An overload set cannot contain two different functions with a C language linkage. We don't have overloads in C. If the above code is accepted, then what appears at the global scope above is a definition for the "f" at __cxxabiv1 scope. 7.5/6. [Note: because of the one definition rule (3.2),]

| I did not attempt to fix all of the V3 headers; I'm only concerned
| with libsupc++ at the moment. However, similar changes should
| probably be made throughout V3. Otherwise, linkage will be wrong if
| people #include (say) <iostream> with a non-default symbol visibility.

I'm not convinced by your example and the rationale you gave. What am I missing? Most certainly, in your example ::f and __cxxabiv1::f refer to the same function, and we should have &::f == &__cxxabiv1::f. I agree with your patch explicitly setting the visibility, but I do not agree with the overload bits.

-- Gaby
https://gcc.gnu.org/legacy-ml/libstdc++/2004-08/msg00036.html
CC-MAIN-2020-24
refinedweb
316
66.23
Get help on our Front End Development projects.

@DanLaff here you can see how I have done it until now. The codepen you sent me makes sense, using a for loop to iterate through the buttons. That was smart. But what if you would like to send in a value, for example if the user clicks on 7, the way I have done it here?

<button onclick="tall(7)" value="7" class="btn btn-primary" id="seven">7</button>

```
if (val.stream === null) {
  $("#" + username).css("box-shadow", "5px 0px 10px red");
  $("#" + username + ">p").text("Offline")
    .css("color", "red");
} else {
  $("#" + username).css("box-shadow", "5px 0px 10px green");
  $("#" + username + ">p").text("Online")
    .css("color", "green");
}
```

Hi, got a somewhat weird question. I have a button with an onclick effect, and I'm trying to build up the URL of the onclick based on previous button presses, but I'm not really sure how. Example:

<button class= </button>

I'd like to set {{VARIABLE}} using JavaScript (from a previous onclick()), but I'm not exactly sure how I'd do that. I was thinking about putting in a div and then replacing the value using JavaScript, but I feel that the '' will negate the div.

Okay, I have fixed a little bit more here now. So how can I change all this code to click functions instead?

The "addTogether(2)(3) should return 5" tests aren't passing. I'm guessing that they mean the 3 is the function factory argument that is passed in in the code, but I'm not sure. You can't call a function with two arguments like they are suggesting, that I know of. I'm just not quite getting why these don't pass. Any thoughts?
function addTogether(num1, num2) {
  var add3 = makeAdder([3]);
  var sum;
  var number = true;

  // Function to test if arguments are numbers
  function numTest(args) {
    for (var prop in args) {
      console.log(args[prop]);
      if (typeof args[prop] !== 'number' || Array.isArray(args)) {
        number = false;
      }
    }
  }

  // 1 argument case
  if (arguments.length === 1) {
    numTest(arguments);
    if (number === false) {
      return undefined;
    }
    return (add3(2));
  }

  // 2 arguments case
  if (arguments.length > 1) {
    numTest(arguments);
    if (number === false) {
      return undefined;
    }
    sum = num1 + num2;
    return sum;
  }

  // makeAdder function
  function makeAdder(x) {
    return function(y) {
      numTest(x);
      if (number === false) {
        return undefined;
      }
      return x + y;
    };
  }
}

addTogether(2);

makeAdder looks right to me without the if statement

@DanLaff Thanks Dan, now I understood what you meant with your code example. You mean something like this. I agree with you, this is much better and much cleaner in my opinion as well; thanks for showing me that!! ;)

nicolaimagnussen sends brownie points to @danlaff :sparkles: :thumbsup: :sparkles:
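For the addTogether question above, a much shorter closure-based version passes both call styles. This assumes the challenge's rule that any non-number argument yields undefined:

```javascript
// addTogether(2, 3) and addTogether(2)(3) both evaluate to 5.
function addTogether(a, b) {
  if (typeof a !== "number") return undefined;
  if (arguments.length === 1) {
    // One argument: return an adder that closes over `a`.
    return function (c) {
      return typeof c === "number" ? a + c : undefined;
    };
  }
  return typeof b === "number" ? a + b : undefined;
}

console.log(addTogether(2, 3));   // 5
console.log(addTogether(2)(3));   // 5
console.log(addTogether(2, "3")); // undefined
```

The key point the original code misses is that the returned function must remember the first argument via a closure, rather than hard-coding makeAdder([3]).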
https://gitter.im/FreeCodeCamp/HelpFrontEnd?at=5a6a58df4a6b0dd32b49cc8a
CC-MAIN-2020-10
refinedweb
440
65.42
Mass assignment, also known as over-posting, is an attack used on websites that use model-binding. It is used to set values on the server that a developer did not expect to be set. This is a well-known attack now, and has been discussed many times before (it was famously used against GitHub some years ago). In this post I describe how to stay safe from over-posting with Razor Pages.

This post is an updated version of one I wrote several years ago, talking about over-posting attacks in ASP.NET Core MVC controllers. The basic premise is exactly the same, though Razor Pages makes it much easier to do the "right" thing.

What is mass assignment?

Mass assignment occurs during the model-binding phase of a Razor Pages request. It happens when a user sends data in a request that you weren't expecting to be there, and that data is used to modify state on the server. It's easier to understand with an example.

Let's imagine you have a form on your website where a user can edit their name. On the same form, you also want to display some details about the user that they shouldn't be able to edit - whether they're an admin user. Let's imagine you have the following very simple domain model of a user:

public class AppUser
{
    public string Name { get; set; }
    public bool IsAdmin { get; set; }
}

It has two properties, but you only actually allow the user to edit the Name property - the IsAdmin property is just used to control the markup they see, by adding an "Admin" badge to the markup.

@page
@model VulnerableModel
<h2>
    Edit user
    @if (Model.CurrentUser.IsAdmin)
    {
        <span class="badge badge-primary">Admin</span>
    }
</h2>
<form method="post">
    <div class="form-group">
        <label asp-</label>
        <input asp-
        <span asp-</span>
    </div>
    <button type="submit" class="btn btn-primary">Submit</button>
</form>

This gives a form that looks something like this:

In the above Razor Page, the CurrentUser property exposes the AppUser instance that we use to display the form correctly.
The vulnerability in the Razor Page is that we're directly model-binding a domain model AppUser instance to the incoming request, and using that data to update the database:

Don't use the code below; it's riddled with issues!

public class VulnerableModel : PageModel
{
    private readonly AppUserService _users;

    public VulnerableModel(AppUserService users)
    {
        _users = users;
    }

    [BindProperty] // Binds the AppUser properties directly to the request
    public AppUser CurrentUser { get; set; }

    public IActionResult OnGet(int id)
    {
        CurrentUser = _users.Get(id); // load the current user. Needs null checks etc
        return Page();
    }

    public IActionResult OnPost(int id)
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        _users.Upsert(id, CurrentUser); // update the user with the properties provided in AppUser
        return RedirectToPage();
    }
}

On the face of it this looks fine, but a malicious user can set the IsAdmin field to true, even though you didn't render a form field for it. The model binder will dutifully bind the value from the request. If you update your database/state with the provided IsAdmin value (as the previous Razor Page does), then you have just fallen victim to mass assignment/over-posting!

There's a very simple way to solve this, and thankfully it's pretty much the default approach for Razor Pages.

Using a dedicated InputModel to prevent over-posting

The solution to this problem is very well known, and comes down to this: use a dedicated InputModel. Instead of model-binding to the domain model AppUser class that contains the IsAdmin property, create a dedicated InputModel that contains only the properties that you want to bind in your form. This is commonly defined as a nested class in the Razor Page where it's used.
With this approach, we can update the Razor Page as follows:

public class SafeModel : PageModel
{
    private readonly AppUserService _users;

    public SafeModel(AppUserService users)
    {
        _users = users;
    }

    [BindProperty]
    public InputModel Input { get; set; } // Only this property is model bound

    public AppUser CurrentUser { get; set; } // NOT model bound

    public IActionResult OnGet(int id)
    {
        CurrentUser = _users.Get(id); // Needs null checks etc
        Input = new InputModel { Name = CurrentUser.Name }; // Create an InputModel from the AppUser
        return Page();
    }

    public IActionResult OnPost(int id)
    {
        if (!ModelState.IsValid)
        {
            CurrentUser = _users.Get(id); // Need to re-set properties that weren't model bound
            return Page();
        }

        var user = _users.Get(id);
        user.Name = Input.Name; // Only update the properties that have changed
        _users.Upsert(id, user);
        return RedirectToPage();
    }

    // Only properties on this nested class will be model bound
    public class InputModel
    {
        public string Name { get; set; }
    }
}

We then update the Razor Page slightly, so that the form inputs bind to the Input property instead of CurrentUser:

<div class="form-group">
    <label asp-</label>
    <input asp-
    <span asp-</span>
</div>

There are a few things to note with this solution:

- In the example above, we still have access to the same AppUser object in the view as we did before, so we can achieve exactly the same functionality (i.e. display the IsAdmin badge).
- Only the Input property is model bound, so malicious users can only set properties that exist on the InputModel.
- We have to "re-populate" values in the OnPost that weren't model bound. In practical terms this was required for correctness previously too, I just ignored it…
- To set values on our "domain" AppUser object, we rely on "manual" left-right copying from the InputModel to the AppUser before saving it.

Overall, there are essentially no downsides to this approach.
The only additional work you have to do is define the nested class InputModel and copy the values from the input to the domain object, but I'd argue neither is really a downside.

First, the nested InputModel isn't strictly necessary. In this very simple example, it's pretty much redundant, as it only has a single property, which could be set directly on the PageModel instead. If you prefer, you could do this:

public class SafeModel : PageModel
{
    [BindProperty]
    public string Name { get; set; }
}

In practice though, your InputModel will likely contain many properties, potentially with multiple data annotation attributes for validation etc. I really like having all that encapsulated in a nested class. It also simplifies the PageModel overall and makes all your pages consistent, as every page has just a single bound property called Input of type PAGENAME.InputModel. Also, being a nested class, I don't have to jump around in the file system, so there's no real overhead there either.

The final point, having to copy values back and forth between your InputModel and your domain object (AppUser), is a bit annoying. But there's not really anything you can do about that. Code like that has to exist somewhere in your application, and you already know it can't be in the model binder! You can potentially use tools like AutoMapper to automate some of this.

Another approach, which keeps separate Input and Output models, is using a mediator. With this approach, the request is directly model-bound to a "command" which is dispatched to a mediator for handling. This command is the "input" model. The response from the mediator serves as the output model.

Using a separate InputModel like this really is the canonical way to avoid over-posting in Razor Pages, but I think it's interesting to consider why this approach didn't seem to be as prevalent with MVC.
Defending against over-posting in MVC

In my previous post on over-posting in ASP.NET Core MVC, I described multiple different ways to protect yourself from this sort of attack, many of which used extra features of the model binder to "ignore" the IsAdmin property. This typically involves adding extra attributes, like [Bind], [BindNever], or [ModelMetadataType], to convince the model binder to ignore the IsAdmin field.

The simplest option, and the best in my (and others') opinion, is simply to use separate input and output models for MVC too. The "Output" model would contain the IsAdmin and Name properties, so it can render the view as before. The "Input" model would only contain the Name property, so it isn't vulnerable to over-posting, just as for Razor Pages.

public class InputModel
{
    public string Name { get; set; }
}

So if the answer is as simple as that, why isn't it more popular? To be clear, it is very popular, especially if you're using the Mediator pattern with something like MediatR. I really mean: why isn't it the default in all sample code, for example?

As far as I can tell, the reason separate Input/Output models weren't more popular stems from several things:

- The C# convention of a separate file per class. Even the small overhead of creating another file can be enough to discourage good practices!
- The "default" MVC layout. Storing Controller, View, and Model files separately in a project means lots of jumping around the file system. Coupled with the separate-file convention, that's just more overhead. Feature slices are designed to avoid this problem.
- Properties on the Output model must be model-bound to the equivalent properties on the Input model. That means properties on the Input model must be named exactly the same as those on the Output model that are used to render the view. Similarly, validation metadata must be kept in sync between the models.
- The perceived additional left-right copying between models required.
I say perceived, because once you close the over-posting vulnerability you realistically have to have some left-right copying somewhere; it just wasn't always as obvious! These minor annoyances all add up in MVC, which seems to discourage the "separate input/output" model best practice. So why didn't that happen for Razor Pages?

Razor Pages inherently tackles the first two points by co-locating handlers, models, and views. It's hard to overstate just how beneficial this is compared to separate MVC views and controllers, but you really have to try it to believe it!

Point 3 above could be tackled in MVC either by using inheritance, by using separate "metadata" classes, or by using composition. Razor Pages favours the composition approach, where the InputModel is composed with the other properties required to render the view on the PageModel (CurrentUser in my previous example). This neatly side-steps many of the issues with using composition, and just fits really well into the Razor Pages model.

Point 4 is still there for Razor Pages, but as I mentioned, it's pretty much a fact of life. The only way around it is to bind directly to domain models, which you should never do, even if the ASP.NET Core getting-started code does it! 😱

Bonus: over-posting protection != authorization

Before we finish, I just want to address a point that always seems to come up when discussing over-posting:

You could edit the id parameter to update the name for a different user. How does separate-models protect against that?

The short answer: it doesn't. But it's not trying to. The Razor Page I described above allows anyone to edit the name of any AppUser - you just need to provide a valid ID in the URL. We can't easily remove the ID from the URL, or prevent users from sending it, as we need to know which user to edit the name for. There are only really three feasible approaches:

- Store the ID in state on the server side. Now you've got a whole different set of problems to manage!
- Encrypt the ID and echo it back in the request. Again, way more complex than you need, and if done incorrectly it can be a security hole, or not offer the protection you think it does.
- Verify a user is authorized to edit the name. There are well-established patterns for resource-based authorization.

The final point is clearly the correct approach to take. Before you accept a POST request that edits the name of a user, verify that the authenticated user is authorized to make that change! There's no need for some sort of custom approach - ASP.NET Core has support for imperative resource-based authorization out of the box. I also have a (rather old now) post on creating custom authorization handlers, and the source code for this post includes a basic example.

Summary

In this post I discussed mass assignment attacks, and how they work on a Razor Pages application. I then showed how to avoid the attack by creating a nested InputModel in your Razor Page, and only using BindProperty on this single type. This keeps your vulnerable surface area very explicit, while not exposing other values that you might need to display the Razor view correctly (i.e. IsAdmin).

This approach is pretty standard for Razor Pages, but it wasn't as easy to fall into the pit of success with MVC. The overall design of Razor Pages helps to counteract the impediments, so if you haven't already, I strongly suggest trying them out.

Finally, I discussed an issue that comes up a lot that conflates over-posting with more general authorization. These are two very different topics - you can still be vulnerable to over-posting even if you have authorization, and vice versa. In general, resource-based authorization is a good approach for tackling this side-issue.

Whatever you do, don't bind directly to your EntityFramework domain models. Pretty please.
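As a rough sketch of the composition approach described above (the class name and handler shape here are assumptions based on the earlier examples, not the post's actual code), the PageModel exposes a single bound InputModel while display-only state stays unbindable:

```csharp
public class EditUserModel : PageModel
{
    // The only model-bound surface: exactly the properties a user may post.
    [BindProperty]
    public InputModel Input { get; set; }

    // Display-only; never model-bound, so it cannot be over-posted.
    public bool IsAdmin { get; private set; }

    public class InputModel
    {
        public string Name { get; set; }
    }
}
```

Because [BindProperty] sits only on the nested Input property, a malicious form field named IsAdmin is simply ignored by the binder.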
https://andrewlock.net/preventing-mass-assignment-or-over-posting-with-razor-pages-in-asp-net-core/
Server-side Optimization with Nginx and pm-static

This article is part of a series on building a sample application — a multi-image gallery blog — for performance benchmarking and optimizations. (View the repo here.)

Let's continue optimizing our app. We're starting with on-the-fly thumbnail generation that takes 28 seconds per request, depending on the platform running your demo app (in my case it was a slow filesystem integration between host OS and Vagrant), and bringing it down to a pretty acceptable 0.7 seconds. Admittedly, this 28 seconds should only happen on initial load. After the tuning, we were able to achieve production-ready times.

Troubleshooting

It is assumed that you've gone through the bootstrapping process and have the app running on your machine — either virtual or real.

Note: if you're hosting the Homestead Improved box on a Windows machine, there might be an issue with shared folders. This can be solved by adding the type: "nfs" setting to the folder in Homestead.yaml. You should also run vagrant up from a shell/powershell interface that has administrative privileges if problems persist (right-click, run as administrator). In one example before doing this, we got 20 to 30 second load times on every request, and couldn't get a rate faster than one request per second (it was closer to 0.5 per second).

The Process

Let's go through the testing process. We installed Locust on our host, and created a very simple locustfile.py:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task(1)
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 300
    max_wait = 1000

Then we downloaded ngrok to our guest machine and tunneled all HTTP connections through it, so that we could test our application over a static URL. Then we started Locust and swarmed our app with 100 parallel users.

Our server stack consisted of PHP 7.1.10, Nginx 1.13.3 and MySQL 5.7.19, on Ubuntu 16.04.
PHP-FPM and its Process Manager Setting

php-fpm spawns its own processes, independent of the web-server process. Management of the number of these processes is configured in /etc/php/7.1/fpm/pool.d/ (7.1 here can be exchanged for the actual PHP version number currently in use). In this file, we find the pm setting. This setting can be set to dynamic, ondemand and static.

Dynamic is maybe the most common wisdom; it allows the server to juggle the number of spawned PHP processes between several settings:

pm = dynamic

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served.
pm.max_children = 6

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 3

; The desired minimum number of idle server processes
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 2

; The desired maximum number of idle server processes
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 4

The meanings of these values are self-explanatory, and the spawning of processes is done on demand, but constrained by these minimum and maximum values. After fixing the Windows shared-folders issue with nfs, and testing with Locust, we were able to get approximately five requests per second, with around 17–19% failures, with 100 concurrent users. Once it was swarmed with requests, the server slowed down and each request took over ten seconds to finish.

Then we changed the pm setting to ondemand. Ondemand means that there are no minimum processes: once the requests stop, all the processes will stop.
Some advocate this setting, because it means the server won't be spending any resources in its idle state, but for dedicated (non-shared) server instances this isn't necessarily the best choice. Spawning a process includes an overhead, and what is gained in memory is lost in the time needed to spawn processes on demand. The settings that are relevant here are:

pm.max_children = 6

; and

pm.process_idle_timeout = 20s;
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s

When testing, we increased these settings a bit, having to worry about resources less.

There's also pm.max_requests, which can be changed, and which designates the number of requests each child process should execute before respawning. This setting is a tradeoff between speed and stability, where 0 means unlimited.

ondemand didn't bring much change, except that we noticed more initial waiting time when we started swarming our application with requests, and more initial failures. In other words, there were no big changes: the application was able to serve around four to a maximum of six requests per second. Waiting time and rate of failures were similar to the dynamic setup.

Then we tried the pm = static setting, allowing our PHP processes to take over the maximum of the server's resources, short of swapping or driving the CPU to a halt. This setting means we're forcing the maximum out of our system at all times. It also means that — within our server's constraints — there won't be any spawning overhead time cost.

What we saw was an improvement of 20%. The rate of failed requests was still significant, though, and the response time was still not very good. The system was far from being ready for production. However, on Pingdom Tools, we got a bearable 3.48 seconds when the system was not under pressure.

This meant that pm static was an improvement, but in the case of a bigger load, it would still go down.
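For reference, here is a sketch of what the relevant pool settings look like once switched to static. The values are illustrative, not a recommendation: the child count should be sized to the machine's memory and core count rather than copied verbatim.

```ini
; /etc/php/7.1/fpm/pool.d/www.conf (excerpt)
; 'static' spawns a fixed number of children at startup: no spawning
; overhead at request time, at the cost of permanently reserved memory.
pm = static
pm.max_children = 6

; Optional: respawn each child after N requests to guard against
; slow memory leaks in long-lived workers. 0 means unlimited.
pm.max_requests = 500
```

A rough sizing heuristic is max_children ≈ available RAM for PHP divided by the average memory footprint of one worker process.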
In one of the previous articles, we explained how Nginx can itself serve as a caching system, both for static and dynamic content. So we reached for the Nginx wizardry, and tried to bring our application to a whole new level of performance. And we succeeded. Let’s see how. Nginx and fastcgi Caching proxy_cache_path /home/vagrant/Code/ng-cache levels=1:2 keys_zone=ng_cache:10m max_size=10g inactive=60m; proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504; fastcgi_cache_path /home/vagrant/Code/ngd-cache levels=1:2 keys_zone=ngd_cache:10m inactive=60m; fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_cache_use_stale error timeout invalid_header http_500; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; add_header NGINX_FASTCGI_CACHE $upstream_cache_status; server { listen 80; listen 443 ssl http2; server_name nginx-performance.app; root "/home/vagrant/Code/project-nginx/public"; index index.html index.htm index.php; charset utf-8; proxy_cache ng_cache; location / { try_files $uri $uri/ /index.php?$query_string; } location = /favicon.ico { access_log off; log_not_found off; } location = /robots.txt { access_log off; log_not_found off; } access_log off; error_log /var/log/nginx/nginx-performance.app-error.log error; sendfile off; client_max_body_size 100m; location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.1-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors off; fastcgi_buffer_size 16k; fastcgi_buffers 4 16k; fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_cache ngd_cache; fastcgi_cache_valid 60m; } location ~ /\.ht { deny all; } ssl_certificate /etc/nginx/ssl/nginx-performance.app.crt; ssl_certificate_key /etc/nginx/ssl/nginx-performance.app.key; } We opened our Nginx virtual host file and added the above settings. Let’s explain them. 
proxy_cache_path /home/vagrant/Code/ng-cache levels=1:2 keys_zone=ng_cache:10m max_size=10g inactive=60m; As explained in Apache vs Nginx Performance: Optimization Techniques, proxy_cache_path is used for caching of static assets — like images, style sheets, JavaScript files. The path itself needs to exist; we need to create those directories. levels designates depth of the directories inside that path/folder. Traversing can be costly for the request time, so it’s good to keep it small. Keys zone is a name; every virtual host can (and should) use a separate one. Max size means maximum size of the cache, and inactive means the time items will be kept in cache even if they aren’t requested. After that time of inactivity, the cache for a resource will be repopulated. proxy_cache_use_stale and fastcgi_cache_use_stale are interesting, as they can provide the “always online” feature we can see with CDN providers like Cloudflare: if the back end goes offline, Nginx will serve these resources from cache. This failure-proofs our website to a degree. All the fastcgi_cache_* settings are for the PHP-generated (dynamic) content, and proxy_cache_* settings are for the static files. fastcgi_cache_key defines a key for caching. fastcgi_ignore_headers disables processing some response header fields from the FastCGI backend. There’s another interesting setting we could have used: fastcgi_cache_purge This defines requests which will be able to purge the cache. Nginx (its ngx_http_fastcgi_module) gives us quite the comprehensive toolset for caching. One example of using the above directive would be: fastcgi_cache_path /data/nginx/cache keys_zone=cache_zone:10m; map $request_method $purge_method { PURGE 1; default 0; } server { ... location / { fastcgi_pass backend; fastcgi_cache cache_zone; fastcgi_cache_key $uri; fastcgi_cache_purge $purge_method; } } Here, PURGE REST request would be able to delete things from cache. It’s also possible to revalidate the cache under some conditions. 
In our configuration, we didn’t use all the intricacies and capabilities of Nginx, but it’s good to know they’re there if we need them. We added Nginx headers to our responses, to be able to tell whether the resource was served from cache or not: add_header NGINX_FASTCGI_CACHE $upstream_cache_status; Then, we can inspect and dissect our page load time to see what works and what doesn’t: To warm up the cache, we’ll need to go through the requests for each of the resources. fastcgi_cache_methods can be useful for caching specific request methods, like POST. GET and HEAD are cached by default. There’s also byte-range caching, which can be used for video-streaming optimization, as outlined here. One could easily design a whole private CDN network with all the configurability that Nginx offers. Having enabled the above configuration — both for the static and the dynamic content of our website — we started Locust, and swarmed our system with 100 parallel users. The difference in results was nothing short of amazing. The strain the server was under previously could not be felt now. We can see that the median time per request was 170 milliseconds. That is around a hundredfold improvement. Requests per second were above 100. We can also see, in the Average Response Time chart, that the initial requests saw spikes in response times, and after that, the response time declined more and more, to around 130ms. Nginx caching brought us some great improvements. The main bottleneck with this application will not be hardware resources, even if they’re modest. We can also see that the percentage of failed requests went from 17% to 0.53%. We then went to Pingdom’s page test and tested our website: We can see that we managed to bring page load time well below one second! We also tested the single gallery page, which has additional “baggage” of related, and newest galleries: We are attaching a HAR file report of this test for analysis. 
Conclusion In this article, some of the points covered in my earlier discussion on Nginx performance were tested, and other settings like process management, and the measure of its impact on page load time were discussed and analyzed. Did we miss anything worth mentioning? Can you think of other Nginx settings we could apply to this app to improve the performance?
https://www.sitepoint.com/server-side-optimization-with-nginx-and-pm-static/
This article is about a simple C++ class for reading and writing variable-length data streams. This feature is often required when programming compression routines like LZW, or when dealing with binary formats like Flash or PDF. It offers the developer the possibility to write binary chunks of custom size: like 11 bits, or 27 bits, etc. So, basically, the output is byte-aligned, but the inner structure can hold data chunks of different lengths.

I found a working solution to this problem here on CodeProject. But that one is written in C# for .NET, so I could not use it directly. I have not translated the original article, but written this one from scratch. I hope that someone will find this work useful.

Using the CBitStream class is very simple. Please see the code below:

#include "BitStream.h"

CBitStream bitStream;
bitStream.WriteBit(1);
bitStream.WriteByte('a');
bitStream.WriteWord(0x4441);
bitStream.WriteDWord(0x44410D0A);
bitStream.WriteData((LPBYTE)"ANSI text...", 12);
bitStream.WriteData((LPWORD)_T("UNICODE text..."), 15);
bitStream.SaveStream(_T("Enter output file path here..."));

I have found that writing the CBitStream class described in this article was not as difficult as I had thought at first. Also, now I have a simple tool that can help me in my everyday work.
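The article's class wraps the underlying bit-twiddling; as an illustration of the core idea only (not the author's actual implementation, whose class name and Windows types differ), a minimal MSB-first bit writer might look like this:

```cpp
#include <cstdint>
#include <vector>

// Minimal MSB-first bit writer: bits accumulate into bytes, so the output
// stays byte-aligned while callers may write chunks of any width (1..32 bits).
class BitWriter {
public:
    void WriteBits(uint32_t value, int count) {
        for (int i = count - 1; i >= 0; --i) {   // most significant bit first
            cur_ = (cur_ << 1) | ((value >> i) & 1u);
            if (++filled_ == 8) {                // a full byte: flush it
                bytes_.push_back(static_cast<uint8_t>(cur_));
                cur_ = 0;
                filled_ = 0;
            }
        }
    }

    // Pad the last partial byte with zero bits and return the buffer.
    std::vector<uint8_t> Finish() {
        if (filled_ > 0) {
            bytes_.push_back(static_cast<uint8_t>(cur_ << (8 - filled_)));
            cur_ = 0;
            filled_ = 0;
        }
        return bytes_;
    }

private:
    std::vector<uint8_t> bytes_;
    uint32_t cur_ = 0;
    int filled_ = 0;
};
```

Writing the single bit 1 followed by the byte 'a' (0x61) shifts everything by one position, yielding the bytes 0xB0, 0x80 after padding, which is exactly the misalignment a byte-oriented stream cannot express.
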
http://www.codeproject.com/script/Articles/View.aspx?aid=32783
Release Notes

Version 3.2.1

Bugfix release: contains mainly bugfixes, plus some performance tweaks.

Richard Vaughan (rtv) vaughan@sfu.ca - 2009.10.13

Version 3.2.0

This minor release fixes many bugs, has some performance improvements and some significant internal and API changes, including:

- introduced an internal event queue, so that there is no longer any atomic world update interval. Each model can have its update interval specified individually
- worldfile syntax checking improved: poses and sizes are checked for correct vector length
- pkg-config file for libstage now contains correct dependencies, making it much easier to build things using libstage
- controllers can now take an argument string from the worldfile and command line
- better powerpack model
- cleaned up namespace quite a bit
- controllers and simulators using libstage may need some simple tweaks, but the design is cleaner and more consistent.

Richard Vaughan (rtv) vaughan@sfu.ca - 2009.9.12

Version 3.1.0

This minor release includes some major improvements, including:

- Added or reinstated models
  - grippers
  - linear actuator
- Powerpacks model power consumption, charging and transferring
- Models can provide occupancy grids with Stg::Model::Rasterize()
- WebStage is Stage with WebSim support
- Many bugfixes
- Many performance improvements, including faster raytracing
- Better support for very large worlds
- More example controllers in <stage>/examples/ctrl
- Faster raytracing
- Replaced GLib-2.0 with STL, pthreads, etc.
- Better docs (but still lots to do here)

Richard Vaughan (rtv) vaughan@sfu.ca - 2009.7.22

Version 3.0.1

This version incorporates a number of fixes made since the previous major release:

- Player plugin
  - added unit test suite to verify functionality (build instructions in libstageplugin/test directory)
  - many bug fixes
  - blobfinder, fiducial, laser, position2d, simulation, sonar, and speech interfaces are now working correctly
- CMake script fixes
  - OpenGL headers located properly
  - FLTK sourced correctly
  - dependency failures should show up before compiling
- fixed bug where fiducial sensor would return duplicate results
- fixed some rendering glitches
- blobfinder now returns blobs in the correct horizontal order
- blobfinder sensor visualization displays in the plane of the screen

Version 3.0.0

Stage 3.0.0 consists of these components:

- libstage - a robot simulation C++ library
- libstageplugin - a simulation plugin for Player
- stage - a stand-alone robot simulator

This is a major new release of Stage. The main new features are:

- 2.5D models
- 3D user interface
- very much faster raytracing than the 2.X series
- plugin code modules attach at run time to any model. Useful for data filtering or complete robot controllers without Player (examples included)
- 3D camera model, with color and depth for each pixel
- first "official" release of the libstage API
- save/save-as/reload file dialog box
- improved data visualization options
- select and drag/rotate multiple robots by shift-clicking

libstageplugin still needs some work - currently it only supports the position2d, laser and sonar interfaces. It also uses too much CPU. Player/Stage users may choose to wait for libstageplugin to improve before trying Stage-3.0.

Note that your Stage-2.0 world files will probably need to be updated to work with Stage-3.0. The main difference in world file syntax is that poses and velocities are now specified as [x y z theta] instead of the old [x y theta].
Sizes are now specified as [x y z] instead of [x y].

Some useful parts of Stage 2.x have not yet been ported to 3.0, including:

- gripper & puck
- wireless comms
- audio comms
- blinkenlights

This is the first release of a lot of new code. It has been used in my lab for a while now, but there are bound to be bugs and quirks. Please use the bug tracker and feature request system on Sourceforge to help us fix and improve Stage. As always, your patches are very welcome.

Richard Vaughan (rtv) vaughan@sfu.ca - 2008.7.12

Version …

Richard Vaughan (rtv) vaughan@sfu.ca - 2006.3.24

Version 2.0.0

This … features significant … /worlds to get the idea. Worlds can be very large (thousands of meters square).
http://playerstage.sourceforge.net/doc/Stage-3.2.1/release.html
! -- Generated from file 'micca.man' by tcllib/doctools with format 'html' --> <! -- Copyright © 2015 - 2017 by G. Andrew Mangogna --> <! -- micca.1 --> Micca is a program that transforms a domain specific language (DSL) script of an Executable Model into "C" code to implement the logic of the model. The DSL is defined in the micca-DSL manual page. available options are: The micca code generator creates "C" identifiers for various functions and variables. The identifiers names all have suffixes matching the regular expression, __[A-Z0-9]+, appended to them to make them unique. Code supplied as part of state actions or other processing must avoid identifiers that end in two underscore characters followed by a arbitrary number of upper case alphabetic or decimal numeric characters. The functions of external scope for the run time code all begin with "mrt_". The data types for the run time all begin with "MRT_". A successful run of micca yields two files: a "C" source file and a "C" header file. To link a running application, the main function must be supplied. The following is an example of how the main function might appear. #include <stdlib.h> #include "micca_rt.h" int main( int argc, char *argv[]) { /* * Hardware and other low level system initialization is usually done * first. */ /* * Initialize the run-time code itself. */ mrt_Initialize() ; /* * Initialize domains, bridges and any other code that might require access * to the facilities of the run-time code. Typically, each domain in the * system would have an "init()" domain operation and these can be invoked * here. Sometimes domain interactions are such that a second round of * initialization is required. Bridges between domains may also require * that the initialization for a domain be done before the bridge can be * initialized. Once mrt_Initialize() has been invoked, domains may * generate events and do other model level activities. 
Regardless of how * the initialization is accomplished, it is system specific and, * unfortunately, only temporally cohesive. */ /* * Entering the event loop causes the system to run. */ mrt_EventLoop() ; /* * It is possible that domain activities can cause the main loop to exit. * Here we consider that successful. Other actions are possible and * especially if the event loop is exited as a result of some unrecoverable * system error. */ return EXIT_SUCCESS ; } The required elements of main are to invoke mrt_Initialize() before any run-time elements might be needed and to invoke mrt_EventLoop() to cause the system to run. The following "C" preprocessor symbols may be used to control features included in the object files. Note both the domain "C" files and the run-time source should be compiled with the same set of preprocessor defines. The run-time uses the standard assert macro and the assertions may be removed by defining this symbol. Defining this symbol to 7, compiles in code to use features of the ARM v7-M architecture. Normally, compilers typically define this symbol by default to match the ARM architecture for which they are generating code. The ARM v7-M architecture code uses the PendSV exception when synchronizing from interrupt context to the background context. If defined, this symbol will exclude naming information about classes, relationships and other domain entities from being compiled in. For small memory systems, strings can consume a considerable amount of space and are usually only used during debugging. Defining this symbol removes code from the run-time that traces event dispatch. Event dispatch tracing is important during testing and debugging but may be removed from the delivered system. Defining this symbol insures that "stdio.h" is not included and no references are made to functions in the standard I/O library. This is useful for smaller embedded systems that cannot support the memory required by the standard I/O library. 
The value of this symbol sets the maximum number of relationships that can be modified during a data transaction. The default value is 64. The value of this symbol sets the number event control blocks which are used for signaling events. The default value is 32. This number represents the maximum number of signaled events that may be _in flight_ at the same time. This value of this symbol set the maximum number of bytes that can be occupied by event parameters or sync function parameters. The default value is 32. The value of this symbol defines the maximum number of synchronization requests from the foreground processing that may be outstanding at the same time. The default value is 10. This symbol represent the number of interrupts that may occur during the execution of a state activity. The value of this symbol is the maximum number of instance references that may be held in an instance reference set. The default value is 128. Defining this symbol includes code to print the function name, file name and line number for all functions generated by the code generator. This information forms a trace of executed functions. Defining this symbol overrides the instrumentation code expansion that is placed at the beginning of each generated function. By default when MRT_INSTRUMENT is defined, then MRT_INSTRUMENT_ENTRY is defined as follows: printf("%s: %s %d\n", __func__, __FILE__, __LINE__) ; The MRT_DEBUG macro has the same invocation interface as printf(). If MRT_INSTRUMENT is defined, then MRT_DEBUG invocations will include the implied printf invocations. Otherwise, the implied printf invocations are removed from the code (i.e. MRT_DEBUG is defined as empty). If MRT_INSTRUMENT is defined, then the expansion of MRT_DEBUG may be overridden. The MRT_DOMAIN_NAME macro is defined early in the domain's code file and resolves to a string containing the name of the domain. 
The MRT_CLASS_NAME macro is defined within any class context to be a string literal that matches the name of the class. This symbol is redefined for each class context and is useful for generic debugging or instrumentation output. The MRT_STATE_NAME macro is defined within the action of a state in a state model be a string literal that matches the name of the state. This symbol is redefined for each state action and is useful for generic debugging or instrumentation output. The runtime provides several functions that are used to control the state machine event dispatch mechanism. The mrt_Initialize is invoked to initalize all the internal data structures of the micca run-time. It must be invoked before any other run-time function. The mrt_EventLoop function is the preferred way for a micca generated application to run. Application supplied main functions should enter the event loop by invoking mrt_EventLoop() causing the application to start running. This function dispatches events, causing the system to execute, and will not return unless some action in the domain invokes mrt_SyncToEventLoop. A state activity may invoke mrt_SyncToEventLoop to request that the run-time event loop exit and return to its caller (typically main) at the end of the current ongoing thread of control. The return value is a boolean indicating whether the event loop had already been requested to exit (true). This function provides a way for domain activities to stop the system from running without causing an explicit error. The micca run time provides two function to enable finer control over dispatching events. Using these functions is not the preferred way to cause a micca generated application to run, but they are useful in test situations and when it is necessary to integrate with legacy applications that might have their own event loop. 
A boolean value that determines if the function waits for an event to begin the thread of control if there is no thread of control currently ready to run. Waiting is useful if the thread of control will be started by a delayed signal. The mrt_DispatchThreadOfControl function runs at most one thread of control before returning to the caller. It returns a boolean value indicating if a thread of control was actually run. A thread of control is started when an event that was signaled outside any instance context (e.g. by a domain operation or by a portal operation) or a delayed signal whose delay time has expired is dispatched. The thread of control continues until all the events have been delivered which were signaled from within any instance context as a direct or indirect result of any transition caused by the event that started the thread of control. At the end of the thread of control, the referential integrity implied by the relationships of the domain is evaluated. Any class where instances of the class were created or deleted or any of the relationships in which a class participates are modified will be evaluated to determine if the constraints implied by the relationships of the domain are preserved. The mrt_DispatchSingleEvent function dispatches at most one event from the event queue and returns a boolean indicated if an event was dispatched. If the dispatched event ended a thread of control, then the normal processing associated with ending a thread of control happens (i.e. referential integrity is checked). This function provides a means of fine grained control over event dispatching that can be useful in testing situations or when it is necessary to integrate micca generated domains with legacy code. Applications may substitute their own fatal error handler for the default one. All errors detected by the run-time cause a fatal error condition. By default, a message is printed to the standard error and the standard library function, abort() is invoked. 
Applications can install a new hander by invoking mrt_SetFatalErrorHandler which returns a pointer to the previous handler. If the application supplied error handler returns, the run-time will still invoke abort() to insure that no further attempts to dispatch events are made. Application supplied error handlers may also use the standard library functions, setjmp() and longjmp() to perform a non-local transfer of control. This allows the application to exert finer control over error situations. N.B. that restarting event dispatch after a fatal error must be done with great care. The following example shows one way to return execution control to an application if a fatal error occurs. #include <setjmp.h> #include "micca_rt.h" static jmp_buf fataljmp ; static MRT_FatalErrorHandler prevHandler ; static void jumpFatalHandler( MRT_ErrorCode errNum, char const *fmt, va_list alist) { /* * Print a message using the default error handler. */ prevHandler(errNum, fmt, alist) ; /* * Non-local goto to return control back to main. */ longjmp(fataljmp, errNum) ; } int main( int argc, char *argv[]) { /* * Initialize the run-time code itself. */ mrt_Initialize() ; prevHandler = mrt_SetFatalErrorHandler(jumpFatalHandler) ; for (;;) { int errCode = setjmp(fataljmp) ; if (errCode == 0) { /* * The first time through, we enter the event loop to cause the * application to run. "mrt_EventLoop()" does not return unless * some domain action invokes "mrt_SyncToEventLoop()". So in the * event that we return from the event loop, it was purposeful and * we probably want to end the application. */ mrt_EventLoop() ; break ; } else { /* * Control returns here in the event of a fatal error in the * run-time. The value of "errCode" is the fatal error number. The * application can do anything it wishes. If the application * determines that the program should end, then executing a break * statement here (as is done below) will accomplish that. 
* Otherwise, it may wish to take whatever corrective action or * other notifications are necessary. Falling through this else * clause will restart the event loop. */ break ; } } return EXIT_SUCCESS ; } This technique can be used effectively when testing if the test case can force a fatal error and then repair the state of things after jumping out of the event loop. Then the event loop can be restarted.
http://chiselapp.com/user/mangoa01/repository/mrtools/doc/trunk/micca/doc/HTML/files/micca.html
Hi, I'm struggling to get a function that fetches JSON data from an API endpoint at using {fetch}. Their documentation requires a JSON object to be passed with the POST request. Below is an example of that object from the documentation.

    {
      "postcodes" : ["OX49 5NU", "M32 0JG", "NE30 1DP"]
    }

My function is as follows, and is stored in the backend (per CORS requirement):

    import { fetch } from 'wix-fetch';

    export function callAPI() {
      return fetch("", {
        "method": "post",
        "headers": {
          "Content-Type": "application/json"
        },
        "body": {
          "postcodes": ["EH14 4AS", "PE28 4UX"]
        },
      })
      .then((httpResponse) => {
        if (httpResponse.ok) {
          return httpResponse.json();
        } else {
          console.log(httpResponse);
          return Promise.reject("callAPI failed");
        }
      })
      .then((json) => console.log(json.result))
      .catch(err => console.log(err));
    }

httpResponse.status is 400, i.e. bad request. I'm not overly familiar with constructing JSON objects or POST requests. I've made life difficult for myself because I had a GET request working fine for a single postcode! Any help is greatly appreciated!

Please enter the full URL of the JSON files as and. Also, since this is a fetch from an external resource, I would recommend using the get method to fetch the data.

Thanks for the reply Sam. I would prefer not to use individual GET fetches because I will always be fetching JSON data for two postcodes. If I used GET, I would have to store and process the JSON from the first response before fetching the second. Are you able to tell me why my POST fetch is unsuccessful and how I might rectify it? If I were to proceed with a GET approach, how best could I store the first GET fetch's response?

Possibly the API server also returns a reason that explains what's wrong with the request. Try getting and printing the error response body (using)
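One likely cause, not confirmed anywhere in the thread above: fetch expects the request body to be a string, while the snippet passes a plain object (which gets stringified as "[object Object]" under standard Fetch semantics, producing malformed JSON and a 400). A hedged sketch of the fix — the field names and postcodes mirror the question, and the serialization step is the point:

```javascript
// Sketch of the likely fix: serialize the body with JSON.stringify
// before handing the options to fetch(url, options).
const payload = { postcodes: ["EH14 4AS", "PE28 4UX"] };

const options = {
  method: "post",
  headers: { "Content-Type": "application/json" },
  // Passing the object directly sends "[object Object]"; the server
  // then rejects the malformed JSON with a 400. Stringify it instead:
  body: JSON.stringify(payload),
};
```

For the diagnostic suggested in the last reply: if wix-fetch follows the standard Fetch API (as its docs suggest), calling httpResponse.text() instead of .json() in the error branch is a safe way to print whatever error body the server returned.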
https://www.wix.com/corvid/forum/community-discussion/fetch-json-with-post-method
User Agent: Mozilla/5.0 (Android; Linux i686; rv:21.0) Gecko/21.0 Firefox/21.0
Build ID: 20130124162801

Steps to reproduce:

Tried to disable form zooming with formhelper.zoom in about:config. The zoom feature makes filling out long forms tedious due to constant zooming in and out.

Actual results:

Nothing.

Expected results:

Should have disabled the formhelper.zoom functionality.

The pref is 'formhelper.autozoom' but I believe that is not what you want. Are you referring to the recently added functionality in bug 725018? I don't believe the patches added a toggle state preference.

(In reply to vinceying113 from comment #0)
> Tried to disable form zooming with formhelper.zoom in about:config. The
> zoom feature makes filling out long forms tedious due to constant zooming in
> and out.

If you switch between fields, why would this cause a zoom out? Do you have a URL you've been seeing this on?

We should probably make the behaviour from bug 725018 depend on these prefs. The prefs look like they were used in XUL fennec.

Aaron, the reason I want to disable the zoom is Firefox zooms all the way in to the field - so descriptive text next to the field is not visible. So, on a form with lots of fields, I have to zoom out to view the descriptor on the next field, then click on the field, which zooms all the way in, and so on. Thanks.
Created attachment 774912 [details] [diff] [review]
patch

The attached patch hooks up formhelper.mode, and removes the other formhelper.* options (autozoom, autozoom.caret, and restore). The OP's desire (disabling auto-zooming) can now be accomplished by setting formhelper.mode to 0. Here is the rationale, based on my understanding of the code (I am new to this code so please forgive me and correct me if I've gotten something wrong):

- The rationale for removing formhelper.autozoom.caret and formhelper.restore is that the underlying functionality is not implemented in native fennec. If it's later implemented, they can be added again.
- The rationale for removing formhelper.autozoom is that in native fennec, the formhelper doesn't currently do anything besides zooming, and so formhelper.mode and formhelper.autozoom would mean the same thing (except that formhelper.mode allows discriminating between tablets and phones, a flexibility that I think is desirable).

I don't know enough about the codebase to review the patch, but the described solution sounds practical as long as the prefs' meanings are consistent across whatever platforms they're implemented on (I know I carry preference bundles with me across machines). The doc for Form Assistant isn't clear but suggests that formhelper.mode should also control the autocomplete popup, which *is* particularly helpful on mobile, and it seems that autozoom was intended to be (is?) controlled separately. (Also, of course, I found the pref by searching about:config for "zoom"! "formhelper.mode" is not at all obvious and should have an option in the UI if that's where this control lives.)

(In reply to Christopher Smith from comment #11)
> The doc for Form Assistant isn't clear

Where can I find this doc? is the only Google result for "formhelper.mode" and appears to be the only description of the preference's settings.
(In reply to Christopher Smith from comment #11)
> The doc for Form Assistant
> isn't clear but suggests that formhelper.mode should also control the
> autocomplete popup, which *is* particularly helpful on mobile, and it seems
> that autozoom was intended to be (is?) controlled separately.

If I'm reading the code right, xul fennec used auto-complete regardless of formhelper.mode (and current fennec certainly does). Do you think that should change? Either way, I will update the wiki once we settle on a solution.

My experience with autocomplete on Android has been... erratic. I think the option to disable it should be a privacy setting, and I don't know enough to suggest where in the namespace it should go.

Comment on attachment 774912 [details] [diff] [review]
patch

Review of attachment 774912 [details] [diff] [review]:
-----------------------------------------------------------------
Looks good to me. Please update the commit message though, so that it reflects what the patch is doing rather than just copying the bug summary.

Created attachment 775806 [details] [diff] [review]
updated patch

Upon a more careful reading of the code, the current form helper performs two tasks that deserve to be differentiated: scrolling to bring the focused input element into view, and zooming in on it. The updated patch:

- Modifies the default behaviour (formhelper.mode = 2) so that scrolling and zooming happen on phones and only scrolling happens on tablets. Setting formhelper.mode = 1 enables zooming on tablets as well. Setting formhelper.mode = 0 disables both scrolling and zooming everywhere.
- Restores the formhelper.autozoom preference (defaults to true), which can be set to false to disable zooming for all devices.

I updated to reflect the changes in the updated patch. Try results for the updated patch:

(In reply to Christopher Smith from comment #11)
> (Also, of course, I found the pref by searching about:config for "zoom"!
> "formhelper.mode" is not at all obvious and should have an option in the UI
> if that's where this control lives.)

This concern is addressed by restoring the formhelper.autozoom preference in the updated patch.

(In reply to Christopher Smith from comment #15)
> My experience with autocomplete on Android has been... erratic. I think the
> option to disable it should be a privacy setting, and I don't know enough to
> suggest where in the namespace it should go.

I filed bug 893979 for this.
https://bugzilla.mozilla.org/show_bug.cgi?id=834613
This was cloned from bug 14364 as part of operation convergence.
Originally filed: 2011-10-03 14:00:00 +0000
Original reporter: Louis-R <louisremi@mozilla.com>

================================================================================
#0 Louis-R 2011-10-03 14:00:00 +0000
--------------------------------------------------------------------------------
.
================================================================================
#1 Ian 'Hixie' Hickson 2011-10-03 22:35:53 +0000
--------------------------------------------------------------------------------
We're not going to add sub-optimal solutions just so we can get something out one year earlier, when the Web is going to last decades. :-)

What we need here is a clear understanding of the use cases and requirements. What are the cases where you're wishing you could add URLs to the appcache dynamically?

================================================================================
#2 Philipp Hagemeister 2011-10-07 12:35:38 +0000
--------------------------------------------------------------------------------
Wouldn't that allow anyone to hijack a website forever?

1. Attacker temporarily gains control over the content of , and writes

       <html manifest="data:text/cache-manifest;base64,Q0FDSEUgTUFOSUZFU1QK">
       example.com defaced!
       </html>

2. User visits, puts the page in appcache.
3. Rightful owner of example.com regains control (or domain ownership changes if the domain was hijacked, ...).
4. User visits, still sees defacement.

How can the rightful owner of example.com ever serve the user anything?

On the other hand, locking the content (and scripts) of a website forever could also provide benefits to a carefully-engineered project. JavaScript on the page could somehow download the new version, cryptographically verify it (beyond SSL, which may be compromised by .gov actors, like google.com in Iran recently), and only then update to the new version.
================================================================================
#3 Ian 'Hixie' Hickson 2011-10-21 22:43:39 +0000
--------------------------------------------------------------------------------
Yeah we're definitely not using data: for this.

Status: Did Not Understand Request
Change Description: no spec change
Rationale: What are the use cases for making appcache dynamic? (I'm not saying there aren't any, I just need to know what they are to design the solution for them.)

================================================================================
#4 Louis-R 2011-10-24 19:52:37 +0000
--------------------------------------------------------------------------------
Granted, using data: isn't the best option. I've written an extensive blog post about the use cases for a dynamic appcache:

tl;dr: if you build an RSS reader with a checkbox to make articles available offline, it's easy to store/delete the text content of the article at will using localStorage or indexedDb, but it's impossible to store/delete associated images (and sounds/videos). You could dynamically generate a cache manifest for all "offline enabled" articles, but the client would have to re-download all resources every time the manifest is updated, as you know. (And you can't store images as data-uris, since they come from different origins.)

Mozilla implemented a simple "OfflineResourceList" API which solves that problem by enhancing applicationCache with "add()" and "remove()" methods. This is the kind of solution I am looking for, although "add" is a confusing name, since it should be able to update a particular resource too.

There is a risk that this API could cause confusion amongst web developers. Should they use a cache manifest or abandon it completely in favor of the JS API? I believe the cache manifest should be advocated to be used for the application structure+presentation+logic (HTML, CSS, JS), while the dynamic API should be used for the application *content* (medias, xml, json).

================================================================================
#5 Ian 'Hixie' Hickson 2011-10-25 02:26:46 +0000
--------------------------------------------------------------------------------
Thanks, will investigate.

================================================================================
#6 Ian 'Hixie' Hickson 2011-10-27 00:15:14 +0000
--------------------------------------------------------------------------------
.
================================================================================
#7 Ian 'Hixie' Hickson 2011-11-03 16:03:26 +0000
--------------------------------------------------------------------------------
Status: Partially Accepted
Change Description: none yet
Rationale: The use case described in comment 6 seems reasonable. I have marked this LATER so that we can look at adding this once browsers have caught up with what we've specified so far.

================================================================================
#8 Simon Pieters 2011-11-04 06:16:57 +0000
--------------------------------------------------------------------------------
I believe this has already happened.

================================================================================
#9 Ian 'Hixie' Hickson 2011-11-04 17:08:04 +0000
--------------------------------------------------------------------------------
I didn't mean just with appcache. Do I take it from your comment that there is implementation interest in adding this now?

================================================================================
#10 Anne 2011-11-15 12:18:52 +0000
--------------------------------------------------------------------------------
It seems both developers and implementors want this, yes.
================================================================================
#11 michaeln@google.com 2011-11-15 22:48:10 +0000
--------------------------------------------------------------------------------
I think this request makes sense but is not the most pressing issue to resolve; this would be of great convenience. But tweaking the model for loading pages from, and associating pages with, and updating caches such that it works for a wider variety of use cases is more of a priority (imo). I'd like to see that get in better shape prior to mixing in support for ad-hoc resources.

================================================================================
#12 Ian 'Hixie' Hickson 2012-05-03 18:12:24 +0000
--------------------------------------------------------------------------------
An idea I was kicking around would be". That would allow authors to implement the above add/remove functionality themselves just by pushing the data into a blob store (Filesystem API, Index DB), which would be just a few lines of code, while also allowing much more flexible approaches. Any opinions?

================================================================================
#13 Philipp Hagemeister 2012-05-03 21:05:53 +0000
--------------------------------------------------------------------------------
The JavaScript redirector sounds fantastic, but it sounds complicated to implement in the current state. Wouldn't it be way simpler to just load a defined fallback HTML document? For example, given the following appcache:

    CACHE MANIFEST
    ALIAS:
    /x.html /serve-file.html
    /files/* /serve-file.html
    # serve-file.html is automatically included in the appcache

The request to /files/test.html would just render serve-file.html, but under the original (window.)location (just like FALLBACK does). In fact, ALIAS would be exactly like a FALLBACK entry that always fails to load. Additionally, the * placeholder would allow marking multiple URLs as belonging to the manifest.

On review, this seems very easy to implement, both for user agent and web application authors. As a downside, it doesn't allow embedding of non-HTML resources like images. It does allow downloads via window.location.replace(dataUri). To me, that doesn't seem like a big deal since any dynamically generated page should be using data URIs for dynamically generated images/scripts/styles in the first place.

================================================================================
#14 Ian 'Hixie' Hickson 2012-05-04 18:10:01 +0000
--------------------------------------------------------------------------------
The idea would be to render pages, images, etc. from data in IndexDB, not just to hardcode aliases. (This is in the context of wanting to add and remove URLs from the appcache, which would be easily implementable using a worker as described above.)

================================================================================
#15 michaeln@google.com 2012-05-04 22:54:11 +0000
--------------------------------------------------------------------------------
> Wouldn't it be way simpler to just load a defined fallback HTML document? For
> example, given the following appcache:
>
> CACHE MANIFEST
> ALIAS:
> /x.html /serve-file.html
> /files/* /serve-file.html
> # serve-file.html is automatically included in the appcache

Chromium's appcache actually has a feature that's very close to what's described here, with a slightly different syntax. The url in the first column is considered a namespace prefix just like entries in the FALLBACK section.

    CHROMIUM-INTERCEPT:
    /Bugs/Public/show_bug.cgi?id= return /Bugs/Public/bug_shower_page.html

I don't think this addresses what this particular w3c issue is about.

================================================================================

So the idea here is that an appcache manifest contains a same-origin reference to a JS file known as its interceptor. When a Document's application cache is complete and has a declared interceptor, the networking model changes to a third model that acts as follows:

- Open a connection to a worker, or create one if none yet exists, that is an ApplicationCacheInterceptWorkerGlobalScope for the given application cache, using the JS file for the interceptor as mentioned in the manifest.
- Each time there is a network request to the same origin as the manifest, send a MessageEvent event to this worker using the event name "request", whose payload is an object of the following form:

      {
        method: 'GET', // or POST or whatever, '' for non-HTTP(S) origins
        url: '', // the url being fetched
        headers: {
          'header': ['value', 'value'] // each HTTP request header
        },
        body: '', // the request body (e.g. for POST requests)
        port: a_MessagePort_object,
      }

- The passed port expects data in the following manner:
  - The first message to be sent has to be one of these:
    - a Blob or File, which is treated as the resource payload.
    - null, which is treated like {} as described below.
    - an object with an attribute named "action", whose value is interpreted as follows:
      - "passthrough": fall back to the normal appcache net model.
      - "cache": serve the file from the cache, or act as if it is a network error if the file isn't there.
      - "network": do it via the network, ignoring the cache.
      - anything else: act as if the "action" attribute is absent, as described next.
    - an object without the "action" attribute, which is then treated as meaning the resource had a network error.
    - anything else, which is stringified and then treated as the response including headers, but possibly incomplete.
  - The second and subsequent messages, which are only acted upon if the first was not an object, are either of these:
    - null, which is treated like {} as described below.
    - an object, in which case the resource is assumed to be finished, as if the network connection had closed.
    - anything else, which is stringified and treated as more response data.
  - If there's a Content-Size header, and data is transmitted past the specified size (as interpreted per HTTP rules), then the extraneous data is discarded.
- swapCache() disconnects from the worker if there is one (so that the new cache's worker can kick in if necessary).

One possible problem with this approach is that it doesn't kick in until a Document exists, which is after the first attempt at fetching a file: the main "master" file thus always comes from the cache (or maybe network, in the case of prefer-online stuff). Is that a problem?

One much more serious problem with this approach is that it totally fails to handle the use case in #4 above. Specifically, it doesn't work for cross-origin images (because you can't let the interceptor see the cookies or data from cross-origin resources), and it doesn't work for an RSS reader that puts the articles in an <iframe> (because <iframe>s aren't fed from the appcache of their parent browsing context).

Maybe we should file a separate bug for the interceptor idea and more clearly lay out the use cases for that idea. For the RSS reader idea not based on <iframe>s, how about just adding a simple API to applicationCache for adding and removing extra files?

    applicationCache.addExtra(url);
    applicationCache.removeExtra(url);
    applicationCache.getExtras(function (files) { ... }); // files is an Array of URLs

I think post #4 was talking about the add/remove/get methods. Was there a specific reason the interceptor model came into the picture?

The use cases btw from my side for having the add/remove/get methods are the following (not sure if it's in the right format :)):

- offline mp3 player: the user needs an option to see which files are available offline.
- online mp3 player: the user needs to be able to add and remove files from local storage, because storage is limited and one cannot expect the user to cache all mp3 files.

Yeah, I went off into the weeds here. I've filed two bugs to replace this:

bug 20084 for the add/remove API
bug 20083 for the interceptor

*** This bug has been marked as a duplicate of bug 20084 ***
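The "first message" interpretation rules in the interceptor sketch above can be captured as a small decision function. This is purely illustrative — the proposal was never adopted, so the function name and return tags here are invented, not a real API:

```javascript
// Hypothetical encoding of the proposal's "first message" rules.
// Returns a tag describing how the interceptor's first reply is treated.
function classifyFirstMessage(msg) {
  // A Blob (or File, which is a Blob) is the resource payload itself.
  if (typeof Blob !== "undefined" && msg instanceof Blob) return "payload";
  // null is treated like an empty object, i.e. a network error.
  if (msg === null) return "network-error";
  if (typeof msg === "object") {
    switch (msg.action) {
      case "passthrough": return "passthrough";   // normal appcache model
      case "cache":       return "cache";         // serve from cache or fail
      case "network":     return "network";       // bypass the cache
      default:            return "network-error"; // no/unknown "action" attribute
    }
  }
  // Anything else is stringified and treated as (possibly partial) response data.
  return "response-data";
}
```

For example, under these rules a reply of `{action: "bogus"}` falls through to the "attribute absent" case and is treated as a network error, exactly as the bullet list above specifies.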
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17974
Today I want to talk about asynchrony that does not involve any multithreading whatsoever. People keep on asking me "but how is it possible to have asynchrony without multithreading?" A strange question to ask, because you probably already know the answer. Let me turn the question around: how is it possible to have multitasking without multiple CPUs? You can't do two things "at the same time" if there's only one thing doing the work!

But you already know the answer to that: multitasking on a single core simply means that the operating system stops one task, saves its continuation somewhere, switches to another task, runs it for a while, saves its continuation, and eventually switches back to continue the first task. Concurrency is an illusion in a single-core system; it is not the case that two things are really happening at the same time. How is it possible for one waiter to serve two tables "at the same time"? It isn't: the tables take turns being served. A skillful waiter makes each guest feel like their needs are met immediately by scheduling the tasks so that no one has to wait.

Asynchrony without multithreading is the same idea. You do a task for a while, and when it yields control, you do another task for a while on that thread. You hope that no one ever has to wait unacceptably long to be served.

Remember a while back I briefly sketched how early versions of Windows implemented multiple processes? Back in the day there was only one thread of control; each process ran for a while and then yielded control back to the operating system. The operating system would then loop around the various processes, giving each one a chance to run. If one of them decided to hog the processor, then the others became non-responsive. It was an entirely cooperative venture.

So let's talk about multi-threading for a bit. Remember a while back, in 2003, I talked a bit about the apartment threading model?
The idea here is that writing thread-safe code is expensive and difficult; if you don't have to take on that expense, then don't. If we can guarantee that only "the UI thread" will call a particular control then that control does not have to be safe for use on multiple threads. Most UI components are apartment threaded, and therefore the UI thread acts like Windows 3: everyone has to cooperate, otherwise the UI stops updating.

A surprising number of people have magical beliefs about how exactly applications respond to user inputs in Windows. I assure you that it is not magic. The way that interactive user interfaces are built in Windows is quite straightforward. When something happens, say, a mouse click on a button, the operating system makes a note of it. At some point, a process asks the operating system "did anything interesting happen recently?" and the operating system says "why yes, someone clicked this thing." The process then does whatever action is appropriate for that. What happens is up to the process; it can choose to ignore the click, handle it in its own special way, or tell the operating system "go ahead and do whatever the default is for that kind of event."

All this is typically driven by some of the simplest code you'll ever see:

    while(GetMessage(&msg, NULL, 0, 0) > 0)
    {
      TranslateMessage(&msg);
      DispatchMessage(&msg);
    }

That's it. Somewhere in the heart of every process that has a UI thread is a loop that looks remarkably like this one. One call gets the next message. That message might be at too low a level for you; for example, it might say that a key with a particular keyboard code number was pressed. You might want that translated into "the numlock key was pressed". TranslateMessage does that. There might be some more specific procedure that deals with this message. DispatchMessage passes the message along to the appropriate procedure.

I want to emphasize that this is not magic. It's a while loop.
It runs like any other while loop in C that you've ever seen. The loop repeatedly calls three methods, each of which reads or writes a buffer and takes some action before returning. If one of those methods takes a long time to return (typically DispatchMessage is the long-running one, of course, since it is the one actually doing the work associated with the message) then guess what? The UI doesn't fetch, translate or dispatch notifications from the operating system until such a time as it does return. (Or, unless some other method on the call chain is pumping the message queue, as Raymond points out in the linked article. We'll return to this point below.)

Let's take an even simpler version of our document archiving code from last time:

    void FrobAll()
    {
      for(int i = 0; i < 100; ++i)
        Frob(i);
    }

Suppose you're running this code as the result of a button click, and a "someone is trying to resize the window" message arrives in the operating system during the first call to Frob. What happens? Nothing, that's what. The message stays in the queue until everything returns control back to that message loop. The message loop isn't running; how could it be? It's just a while loop, and the thread that contains that code is busy Frobbing. The window does not resize until all 100 Frobs are done.

Now suppose you have

    async void FrobAll()
    {
      for(int i = 0; i < 100; ++i)
      {
        await FrobAsync(i); // somehow get a started task for doing a Frob(i) operation on this thread
      }
    }

What happens now? Someone clicks a button. The message for the click is queued up. The message loop dispatches the message and ultimately calls FrobAll. FrobAll creates a new task with an action. The task code sends a message to its own thread saying "hey, when you have a minute, call me". It then returns control to FrobAll. FrobAll creates an awaiter for the task and signs up a continuation for the task. Control then returns back to the message loop.
The message loop sees that there is a message waiting for it: please call me back. So the message loop dispatches the message, and the task starts up the action. It does the first call to Frob.

Now, suppose another message, say, a resize event, occurs at this point. What happens? Nothing. The message loop isn't running. We're busy Frobbing. The message goes in the queue, unprocessed.

The first Frob completes and control returns to the task. It marks itself as completed and sends another message to the message queue: "when you have a minute, please call my continuation". (*) The task call is done. Control returns to the message loop. It sees that there is a pending window resize message. That is then dispatched.

You see how async makes the UI more responsive without having any threads? Now you only have to wait for one Frob to finish, not for all of them to finish, before the UI responds.

That might still not be good enough, of course. It might be the case that every Frob takes too long. To solve that problem, you could make each call to Frob itself spawn short asynchronous tasks, so that there would be more opportunities for the message loop to run. Or, you really could start the task up on a new thread. (The tricky bit then becomes posting the message to run the continuation of the task to the right message loop on the right thread; that's an advanced topic that I won't cover today.)

Anyway, the message loop dispatches the resize event and then checks its queue again, and sees that it has been asked to call the continuation of the first task. It does so; control branches into the middle of FrobAll and we pick up going around the loop again. The second time through, again we create a new task… and the cycle continues.

The thing I want to emphasize here is that we stayed on one thread the whole time. All we're doing here is breaking up the work into little pieces and sticking the work onto a queue; each piece of work sticks the next piece of work onto the queue.
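The queue discipline just described is language-agnostic, so here is a minimal sketch of it in JavaScript rather than C# (the names frobAll, messageQueue, post, and the three-iteration limit are all invented for illustration): a single plain loop drains a queue of callbacks, each "frob" step posts its own continuation, and a "resize" message that arrives mid-work gets handled between steps instead of waiting for all of them.

```javascript
// A tiny single-threaded "message loop": just an array of callbacks
// drained in order. No threads anywhere.
const messageQueue = [];
const log = [];

function post(message) { messageQueue.push(message); }

// Split the long-running loop into one queued message per iteration.
// Each step does its piece of work, then posts its own continuation.
function frobAll(i = 0) {
  if (i >= 3) return;              // 3 iterations instead of 100, for brevity
  log.push(`frob ${i}`);           // the actual "work"
  post(() => frobAll(i + 1));      // "when you have a minute, call my continuation"
}

post(() => frobAll());             // the "button click" kicks things off
post(() => log.push("resize"));    // a resize message arrives while frobbing

// The message loop itself: a plain while loop, like GetMessage/DispatchMessage.
while (messageQueue.length > 0) {
  const message = messageQueue.shift();
  message();
}

console.log(log); // → ["frob 0", "resize", "frob 1", "frob 2"]
```

Note the output order: the resize is handled after the first frob rather than after all of them, which is exactly the responsiveness win described above — and still only one thread is involved.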
We rely on the fact that there's a message loop somewhere taking work off that queue and performing it.

UPDATE: A number of people have asked me "so does this mean that the Task Asynchrony Pattern only works on UI threads that have message loops?" No. The Task Parallel Library was explicitly designed to solve problems involving concurrency; task asynchrony extends that work. There are mechanisms that allow asynchrony to work in multithreaded environments without message loops that drive user interfaces, like ASP.NET. The intention of this article was to describe how asynchrony works on a UI thread without multithreading, not to say that asynchrony only works on a UI thread without multithreading. I'll talk at a later date about server scenarios where other kinds of "orchestration" code works out which tasks run when.

Extra bonus topic: Old hands at VB know that in order to get UI responsiveness you can use this trick:

    Sub FrobAll()
      For i = 0 To 99
        Call Frob(i)
        DoEvents
      Next
    End Sub

Does that do the same thing as the C# 5 async program above? Did VB6 actually support continuation passing style?

No; this is a much simpler trick. DoEvents does not transfer control back to the original message loop with some sort of "resume here" message like the task awaiting does. Rather, it starts up a second message loop (which, remember, is just a perfectly normal while loop), clears out the backlog of pending messages, and then returns control back to FrobAll.

Do you see why this is potentially dangerous? What if we are in FrobAll as a result of a button click? And what if, while frobbing, the user pressed the button again? DoEvents runs another message loop, which clears out the message queue, and now we are running FrobAll within FrobAll; it has become reentrant. And of course, it can happen again, and now we're running a third instance of FrobAll…

Of course, the same is true of task based asynchrony!
If you start asynchronously frobbing due to a button click, and there is a second button click while more frobbing work is pending, then you get a second set of tasks created. To prevent this it is probably a good idea to make FrobAll return a Task, and then do something like:

    async void Button_OnClick(whatever)
    {
      button.Disable();
      await FrobAll();
      button.Enable();
    }

so that the button cannot be clicked again while the asynchronous work is still pending.

Next time: Asynchrony is awesome but not a panacea: a real life story about how things can go terribly wrong.

————

(*) Or, it invokes the continuation right then and there. Whether the continuation is invoked aggressively or is itself simply posted back as more work for the thread to do is user-configurable, but that is an advanced topic that I might not get to.

Perhaps I'm not understanding this right, but based on your post it sounds like await only works in applications that actually HAVE a message loop (i.e. windows applications).

Awesome… makes a lot more sense now.

After reading your blog for several years it has become apparent that Frob()ing is a common idiom. Any chance we can see this as a keyword (even contextual) in the next version of C#?

Am I correct in thinking we're not protected by locks using the new functionality then, unless we don't allow recursive acquires (which is recommended anyway).

This post reminds me of an SO question I tried to answer a while ago, where code achieving a similar purpose was put on MSDN: stackoverflow.com/…/dispatcher-vs-multithreading I chose the example of connecting to an FTP site where you really want another thread doing the work, as it usually takes 10s of seconds to time out on something like that.

How exactly do the tasks created in FrobAll() actually get started? A _TaskEx.Run()_ would seem to be needed instead of _new Task()_.

@Lenny: applications run in a Context that determines how tasks are queued and run.
Your task might be running in a UI context or an ASP.NET context or a multi-threaded context; each of these will queue tasks differently. So you're not limited to applications with a message loop. Look at the SynchronizationContext and TaskScheduler classes for more details. I think you plagiarized my blog's title: "It's Not Magic." 🙂 I am also confused on how the Windows message loop is suddenly aware of the fact that the task needs to continue running. Is this predicated on the assumption that a specific task Context is posting and intercepting specific messages in the message loop? I'm not too up-to-speed on all the .NET 4.0 stuff and the task framework. Does this mean async and await only works in GUI applications, and more specifically, on threads that have such a message loop (so async won't work on console applications)? It'll also work on non-GUI threads; but it will work differently there: since there's no message loop to post to, the task will be enqueued as work item in the thread pool. This means that any thread from the pool may run the continuation, so each 'await' might jump from one thread to the next. Of course, if you really want to stay on a single thread in a console application, you still have the option to write your own message loop and SynchronizationContext implementation. To continue with my previous comment, if we take your document download and archive concept and apply it here, won't we potentially get lengthy hangs if we struggle to connect to the download service (as far as I know, that's a single operation which can't be broken up into smaller tasks)? Assuming we're writing a Windows Forms app here, how does the new async stuff work regarding lengthy operations that can't be broken up? Do I just go back and do things the old way? Alex: WinSock has async versions of probably every network call you would make, so nothing has to be done on a new thread to avoid blocking the UI. 
However, as Raymond Chen pointed out recently, there's no way to make the CreateFile call async (you need a file handle to associate with an IOCP, but you don't get a handle until CreateFile returns). Thus, anything that calls CreateFile (and many APIs call CreateFile internally) has to be run on another thread to avoid blocking the UI. CreateFile can block on accessing a network server, spinning up a HD, seeking a CD-ROM, or any number of other possibilities.

@Daniel Grunwald: So await _does_ require separate threads if there's no message loop available? That doesn't jibe with what Microsoft have been telling us throughout – 'async doesn't need threading!'

Maybe I am missing something, but I was a bit confused that Task.Start() would schedule the Task to be executed on the UI thread. I pasted this code in a Windows Forms app with the following implementation of Frob:

    private void Frob(int i)
    {
        button1.Text = "Demo Frob:" + i;
    }

This throws an InvalidOperationException: "Cross-thread operation not valid: Control 'button1' accessed from a thread other than the thread it was created on." I'm not sure about other UI frameworks like Silverlight, but for Windows Forms you will need to explicitly pass a TaskScheduler to make sure the Task gets scheduled on the UI thread:

    task.Start(TaskScheduler.FromCurrentSynchronizationContext());

So if the C# asynchronous programming language feature doesn't require multi-threading or parallelism, why, if I declare a method as async and want to return something from it, does C# insist I return an instance of System.Threading.Tasks.Task<T>, from the Task Parallel Library? Sounds pretty parallelly/thready to me… 🙂

It does feel like the language dependency has gone to the wrong layer of the framework here. Surely making language-integrated async depend on reactive's IObservable/IObserver would have been cleaner and helped preserve the separation from the Task Parallel Library's highly thread-related code?
Or should we be working with IAwaitable and IAwaiter interfaces? (Just as the foreach pattern is encoded in the IEnumerable/IEnumerator interface pair but explicit implementation of them is purely optional, providing interfaces that encapsulate the await pattern would also be useful, as well as providing a natural return type for async methods, and somewhere to hook extension methods too…) (unrelated aside – any chance of foreach being updated to allow extension implementations of the foreach pattern too, for consistency?) The Task<T> return type just feels wrong; as if you'd introduced iterators with the requirement that if you yield return in a method, the return type must be List<T>. @James Hart: Task<T> isn't really to do with threads. It's to do with "a task which may complete at some point". Maybe that will involve other threads – maybe it won't. Just because it's part of the TPL doesn't mean it's explicitly about threads. I was originally concerned about the use of Task<T> as well, but I think it's neutral enough to be okay. In fact, windows will spin it's own message loop in certain situations, such as while the user is moving or resizing a window (in DefWindowProc()) or while DialogBox() or ShowMessage() is running. This means you can't do any magic in your message loop, if you care about dropped messages, such as implementing a UI thread dispatcher! The best way I've found to handle implementing a message loop dispatcher is a WM_APP_DISPATCH message posted to the main window (or a not shown message-only window), callback in LPARAM, param in WPARAM. Of course, now I need a global HWND to post these to, which kinda sucks, and I need to worry about lifetimes of windows vs. messages, or I need a extra window. A possible alternative is PostThreadMessage() combined with SetWindowsHookEx(WH_GETMESSAGE), which seems slightly cleaner, but I'm not confident enough that I'm not going to miss some messages in some screwball situation. 
What do the Forms / WPF Dispatchers do?

That's why the name Task is kind of unfortunate; perhaps Future (like it was called in the CTP) or Promise would have been more explicit (Task seems to indicate it can only be CPU-bound).

Eric, could you further elaborate on the pros & cons of doing UI asynchrony either as continuations or as DoEvents? Your button-disabling example isn't solved any differently with DoEvents.

On the general subject of the new syntax enhancements, I'm a little irritated by the obvious equivalence of awaiting in an async method and yielding in an iterator. As I see it, the following two code snippets should be equivalent:

    //first
    async Foo AsyncMethod()
    {
        //… do some (potentially async) stuff
        someValue = await taskExpression;
        //…
        return whatever;
    }

    //second
    Task<Foo> AsyncByHand()
    {
        TaskCompletionSource<Foo> tcs = new TaskCompletionSource<Foo>();
        IEnumerator<Task> enumerator = IteratorHelper(tcs); //does not execute any "real" code
        Action<Task> continuation = null;
        continuation = (unused) =>
        {
            //needs Task parameter to serve as continuation in Task.ContinueWith
            if (enumerator.MoveNext())
                enumerator.Current.ContinueWith(continuation); //enumerator.Current is the yielded task
        };
        continuation(null); //execute first part of asynchronous operation
        return tcs.Task;
    }

    IEnumerator<Task> IteratorHelper(TaskCompletionSource<Foo> tcs)
    {
        //do exactly the same "usual" operations
        var task = taskExpression;
        yield return task;
        someValue = task.Result;
        //…
        tcs.SetResult(whatever);
        yield break;
    }

In the second snippet, yielding a task works just like awaiting – it saves the current state and signs up everything left to do as a continuation. Granted, we now need three lines instead of one if we are interested in the result of the awaited task, but this is a rather small tax to pay.
The boilerplate code in AsyncByHand could easily be factored out into a generic helper method taking a Func<TaskCompletionSource<T>, IEnumerator<Task>> (although argument passing to the iterator may require overloads to this helper); throwing in hypothetical anonymous iterators would get rid of the additional method. Sure, you could argue that awaiting is more general than what I just hacked together because of the Awaiter approach, but this is marginal because one could always replace Task by a different ("smaller") interface.

So what's left of all the new sugar? Not much you couldn't accomplish using the existing sugar (iterators) in very few more lines. Don't get me wrong, I truly appreciate the effort of taking C# to new areas, but I'm wondering how this feature achieved the proverbial 100 points necessary to be implemented. Asynchronous programming (and especially I/O) is quite important, but I think the largest impediment to its more widespread use is not syntax, but rather awareness among programmers.

@Apollonius Good luck with task compositions using yields…

@Apollonius await taskExpression returns immediately. The continuation is saved and called when taskExpression finishes. That's the whole point of the async approach. yield taskExpression does not return until said method has finished. The continuation saved is the iterator block state for further iterations. You are effectively blocking any other execution from happening until taskExpression finishes, so I'm not sure how you're proposing to manage asynchronicity with yields… I might be missing something…

Yes, you are missing something. await taskExpression will evaluate taskExpression just as yield return taskExpression does; the whole point of both is that taskExpression evaluates to a Task, i.e. a promise for a value, not the value itself. Note how yielding the task will make AsyncByHand sign up the next iterator round (i.e. everything left to do) as a continuation of the task. This behaviour is identical to that of await!

Regarding Focus' comment: You apparently don't understand that task compositions do not really have much to do with the language feature. Task composition is just the process of creating a new task from old tasks, which is already possible today (see System.Threading.Tasks.TaskFactory.ContinueWhenAll/ContinueWhenAny). There is no reason why this wouldn't work with my solution.

@Apollonius (might be a repost, my posts don't seem to show up)

1. Is your example asynchronous or is it multithreaded?

2. I see your point, but you are just restating what Eric has proved in the whole past series. It's not so much about programming asynchronous code (which we have been able to do since C# 1.0), it's about making it easy. With all due respect, your code reads like a nightmare compared to the async / await pattern, which looks almost exactly the same as good old synchronous code. That, IMHO, is a major step forward. LINQ, dynamics, named parameters, etc. are all features that weren't necessary in the language; you could very well do everything these features do before, but it was in most cases extremely painful to do so. I guess the C# team is following the same line of thought with asynchronous programming.
But note also that I'm not so much restating what Eric said but rather contradicting him in a way; the C# team apparently seeks the easiest possible syntax as the key to promoting asynchronicity, while I am actually content with a fairly easy syntax and think awareness is more important. (Apologies to Eric if I have incorrectly stated "his opinion" on this stance, and for highjacking the comment discussion on his blog) Honestly, this is just my opinion; it's not binding for anyone, just a counterweight to the "Huh, that's awesome" comments posted earlier. I've only recently been able to keep up with all the recent articles on this matter. One question that popped in mind was — what does this all mean about DEBUGGING my application? Suppose i am following the execution path line by line, all of a sudden will be jumping back and forth to some continuation? how is it possible to debug such a scenario? or did i get it all wrong ? Is it possible to specificically start the async task in a new thread ? something like- awaitNewThread FrobAsync(i); instead of – await FrobAsync(i); On a side note regarding your example of disabling the button prior to starting FrobAll, what happens if there are already multiple button click messages in the queue before the first button click is processed. Will disabling the button cause any pending messages for that button in the queue to be discarded (in GetMessage or DispatchMessage maybe) or will they be delivered? I haven't yet read the rest of this series, so I may be jumping ahead, but there's a really important point here that Eric seems to be getting right up to without saying it outright. In this post he made it very clear that this is *logical concurrency* without *threads* – and, furthermore, that running threads doesn't imply that things will be literally happening at the same time anyway – as the single CPU example makes clear. 
In otherwords multithreading is a way to abstract concurrency away from physical CPUs/ Cores, just as the new Task Parallel Library is. As Eric said, before we had pre-emptive multitasking this was, conceptually, how we did it anyway (IIRC we had yield() statements that actually performed a similar job to DoEvents() – so not quite the same). So? OS scheduled threading has some serious disadvantages: 1. You can't predict when a context is going to switch – it might happen while you're updating something. 2. As a consequence of (1) we have to use locks and semaphores and events and other thread sync primitives to protect shared state. 3. It can be harder than you think to identify shared state. 4. (2) can be very tricky to get right – even harder to make efficient safely. It's easy to introduce deadlocks, get the locking granularity wrong, or not lock something you should (I use the word "lock", here, to encompas mutexes, critical sections, semaphores, etc). 5. There is a cost associated with locking (in addition to the cost already associated with thread context switching). You pay this cost whether you needed it at the time or not – locking is a defensive technique. 6. Communication between threads is hard and must be done with protected shared state or some ad-hoc composition of threading primitives. Framework/ library support tends to be either very low-level (if existent) or very heavyweight. That's just off-the-top-of-my-head! I'm probably preaching to the converted here – but I wanted to stress the above so we have the right mindset. Now. With the co-operative model (on a single thread) *pretty much all of this boils away*! As Eric says, it's not a panacea – but it *is* a different model of working, and most of the problems I listed are a result of the pre-emptive model. They arise precisely because we don't have control over when a context switch will occur. With the co-operative model we get that control back. 
In some ways it's as if you write your code as a sequence of critical sections. Within a CS you know that you have exclusive access to all shared state until the end of the CS. The same with an async task. No need for any addition locks. The locks and the context switch get rolled into one. Now I glossed over one important caveat there: This is only possible when running on one thread. That means limiting yourself to one CPU/ core. Doesn't that make this all pointless? Not really. For many applications that's *all* you need. Threading is often only introduced to improve response times (as Eric's original example bore out). And with platform support tasks such as reading files or waiting for network packets can be defined as async and running physically concurrently – still without introducing application threads (think back to Eric's example in the previous series on continuations). Furthermore, it's often more practical to keep physical concurrency at the process level. Running four, isolated, processes that use async tasks internally on a quad-core CPU will give you maximum utilisation *with no threading required*. Grid computing tends to work this way already. But there are still many applications that would benefit from physical concurrency in-process. Even in these cases it can be simpler to keep the physically concurrent parts seperate from the logically concurrent parts. Keep communication between threads limited in scope and through isolation interfaces such as queues or message passing facilities. I.e. what we're already doing – but we can narrow the scope and impact drastically further. I think all this is why we should be very excited about the TPL – as well as technologies like Grand Central Dispatch on the Mac (which goes some way towards the same goals). Let Moore's law "continue"… The async void FrobAll seems to be awaited in Button_OnClick, if I understand it right. 
But in the Visual Studio 11 preview, async void methods really do return void (not Task). So you can't await them. I'm pretty sure it must have worked in the original CTP because I wrote a blog post that mused on taking a different approach to the problem of ensuring two button clicks don't launch two simultaneous "Frobs": smellegantcode.wordpress.com/…/c-5-0-asyncawait-and-gui-events

Of course it could be fixed by changing your async void method to return Task<bool> or something. But would I be right if I took a strong hint from this change: that side-effecting asyncs should not be freely composed in the same way as value-returning asyncs? Is there something dangerous about doing this?

I see that changing "async void FrobAll()" to "async Task FrobAll()" restores the ability to await, so I guess it's not frowned upon after all.

Makes a lot of sense now!
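To make the single-threaded model discussed above concrete, here is a toy sketch in JavaScript (illustration only; all names such as post and frobAsync are invented, and this is not how Windows or the CLR implement anything): a "message loop" is just an ordinary loop draining a queue of work items, and an "asynchronous" operation does one unit of work now and posts its continuation back onto the queue, so other queued work can interleave.

```javascript
// A toy "message loop": a queue of work items drained by a plain loop.
const queue = [];
const post = (work) => queue.push(work); // like posting a message
const log = [];

// An "async" operation: do one unit of work now, then post the
// continuation (the rest of the work) back onto the queue.
function frobAsync(i) {
  log.push("frob " + i);
  if (i + 1 < 3) {
    post(() => frobAsync(i + 1));
  }
}

post(() => frobAsync(0));
post(() => log.push("button click")); // unrelated work interleaves

// The message loop itself is just a perfectly normal while loop.
while (queue.length > 0) {
  const work = queue.shift();
  work();
}

console.log(log.join(", ")); // frob 0, button click, frob 1, frob 2
```

Note how the "button click" work item runs between the first frob step and its continuation, which is exactly the interleaving (and the reentrancy hazard) the post describes.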
https://blogs.msdn.microsoft.com/ericlippert/2010/11/04/asynchrony-in-c-5-0-part-four-its-not-magic/
A name server can only resolve a query for a zone for which it has authority. If a name server can't resolve the query, it passes the query to other name servers that can. The name server caches the query results to reduce the DNS traffic on the network.

The DNS Service uses a client/server model for name resolution. To resolve a forward lookup query, which resolves a name to an IP address, a client passes a query to a local name server. The local name server either resolves the query and provides an IP address or queries another name server for resolution. Figure 5.3 represents a client querying the name server for an IP address. The numbers in Figure 5.3 depict the activities in that exchange.

When a name server is processing a query, it might be required to send out several queries to find the answer. With each query, the name server discovers other name servers that have authority for a portion of the domain namespace. The name server caches these query results to reduce network traffic. When a name server receives a query result, a caching process takes place (see Figure 5.4): caching query results enables the name server to resolve other queries to the same portion of the domain namespace quickly.

A reverse lookup query maps an IP address to a name. Troubleshooting tools, such as the nslookup command-line tool, use reverse lookup queries to report back host names. Additionally, certain applications implement security based on the ability to connect to names, not IP addresses. Because the DNS distributed database is indexed by name and not by IP address, a reverse lookup query would require an exhaustive search of every domain name. To solve this problem, in-addr.arpa was created. This special second-level domain follows the same hierarchical naming scheme as the rest of the domain namespace; however, it is based on IP addresses, not domain names. For example, Figure 5.5 shows a dotted-decimal representation of the IP address 192.168.16.200.
A company that has an assigned IP address range of 192.168.16.0 to 192.168.16.255 with a subnet mask of 255.255.255.0 has authority over the 16.168.192.in-addr.arpa domain. Here are some questions to help you determine whether you have learned enough to move on to the next lesson. If you have difficulty answering these questions, review the material in this lesson before beginning the next lesson. The answers are in Appendix A, "Questions and Answers."
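As an illustration of this naming scheme (the helper functions below are invented for the example, not part of the original text), the reverse-lookup name for a host and the reverse-lookup zone for a /24 network can both be built by reversing the octet order and appending the in-addr.arpa suffix:

```python
# Hypothetical helpers: build in-addr.arpa names from a dotted-decimal
# IPv4 address, as described in the text.

def reverse_lookup_name(ip):
    # Reverse all four octets: 192.168.16.200 -> 200.16.168.192.in-addr.arpa
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

def reverse_zone_for_24(ip):
    # A /24 network's reverse zone reverses only the first three octets:
    # 192.168.16.x -> 16.168.192.in-addr.arpa
    octets = ip.split(".")[:3]
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_lookup_name("192.168.16.200"))  # 200.16.168.192.in-addr.arpa
print(reverse_zone_for_24("192.168.16.200"))  # 16.168.192.in-addr.arpa
```

The second result matches the zone the company in the example has authority over.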
https://etutorials.org/Microsoft+Products/microsoft+windows+xp+professional+training+kit/Chapter+5+-+Using+the+DNS+Service+and+Active+Directory+Service/Lesson+2nbspUnderstanding+Name+Resolution/
How to Take Input in Java Using a Scanner

Using the Scanner class in Java, you can read data from external sources such as text files. The process only requires a few straightforward steps, but you do need to tailor it to suit your own file and program. The Java platform provides standard libraries you can use for input and output operations. By first creating instances of the classes in these libraries and then using methods of the Scanner class to read your file content, you can acquire the content in a way that suits the logic of your program.

Instructions

1. Import the necessary Java resources for your input process. Add the following statement at the top of your Java class file, importing the standard Java resources for input and output:

    import java.io.*;

In order to use the Scanner class, you also need to import the class file for it, so add the following additional import statement:

    import java.util.Scanner;

Once your program has these classes imported, you can create objects to carry out input operations as you require.

2. Create try and catch blocks to take care of any input exceptions that may occur. When your Java programs read data from an external source, you risk unforeseen errors, such as a file not being where it should be or not having the correct content in it. For this reason, you need to include your input processing code inside a try block, following this with a catch block to handle exceptions, as follows:

    try {
        //try to carry out input processes here
    } catch (IOException ioException) {
        System.out.println(ioException.getMessage());
    }

If the program does throw an exception, your code will write the details out to standard output.

3. Instantiate the input and Scanner classes for your operation. To use a Scanner object, you first need to create FileReader and BufferedReader objects. Add the following code inside your try block, creating an instance of the FileReader class and passing it the name and location of your own file as a parameter:

    FileReader fileRead = new FileReader("yourfile.txt");

Add the following line, creating an instance of the BufferedReader class, passing your FileReader instance as a parameter:

    BufferedReader buffRead = new BufferedReader(fileRead);

Create an instance of the Scanner class, passing it your BufferedReader object, as follows:

    Scanner fileScan = new Scanner(buffRead);

Your program is now ready to read and process the content of the file.

4. Use a while loop to process your file content. The Scanner can read your file in sections, so you need it to continue reading until the file has been exhausted. Add the following loop outline structure inside your try block, on the line after you create your Scanner object:

    while (fileScan.hasNext()) {
        //read the file contents here
    }
    //close the scanner
    fileScan.close();

This loop will keep executing until the Scanner has read all of the file contents. Inside the loop, you can add processing to scan each item of data in the file. Once your loop finishes, the Scanner has done its job, so you can close it.

5. Read the content of your file using the Scanner. The Scanner class gives you a range of options in terms of how you process the content of your file. You can read the file one line at a time, or can read single bytes and numbers, with various numerical types supported. To read the file in individual lines, add the following code inside the while loop:

    String nextLine = fileScan.nextLine();

This code stores a single line from the file in a String variable each time the loop executes. You can carry out whatever tasks you need using the file content inside the while loop. (See References 1, 2)

Tips & Warnings

Try the different Scanner methods to read your file content until you find one to suit your program.
Programs using external data generally need lots of testing to ensure they function correctly.
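Putting the five steps together, a complete runnable program might look like the sketch below (the file name and contents are invented; the program first writes a small sample file so it can run as-is):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Scanner;

public class ScannerDemo {
    public static void main(String[] args) {
        try {
            // Create a small sample file so the example is self-contained.
            FileWriter writer = new FileWriter("yourfile.txt");
            writer.write("first line\nsecond line\n");
            writer.close();

            // Steps 3-5: FileReader -> BufferedReader -> Scanner.
            FileReader fileRead = new FileReader("yourfile.txt");
            BufferedReader buffRead = new BufferedReader(fileRead);
            Scanner fileScan = new Scanner(buffRead);

            // Step 4: keep reading until the file is exhausted.
            while (fileScan.hasNext()) {
                String nextLine = fileScan.nextLine();
                System.out.println(nextLine);
            }
            fileScan.close();
        } catch (IOException ioException) {
            System.out.println(ioException.getMessage());
        }
    }
}
```

Running this prints each line of the sample file in order.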
http://www.ehow.com/how_12028018_input-java-using-scanner.html
Acfs - API client for services

Acfs is a library to develop API client libraries for single services within a larger service oriented application. Acfs covers model and service abstraction, convenient query and filter methods, full middleware stack for pre-processing requests and responses on a per service level and automatic request queuing and parallel processing. See Usage for more.

Installation

Add this line to your application's Gemfile:

    gem 'acfs', '~> 0.21.0'

Note: Acfs is under development. I'll try to avoid changes to the public API but internal APIs may change quite often.

And then execute:

    > bundle

Or install it yourself as:

    > gem install acfs

Usage

First you need to define your service(s):

    class UserService < Acfs::Service
      self.base_url = ''

      # You can configure middlewares you want to use for the service here.
      # Each service has it own middleware stack.
      #
      use Acfs::Middleware::JsonDecoder
      use Acfs::Middleware::MessagePackDecoder
    end

This specifies where the UserService is located. You can now create some models representing resources served by the UserService.

    class User < Acfs::Resource
      service UserService # Associate `User` model with `UserService`.

      # Define model attributes and types
      # Types are needed to parse and generate request and response payload.

      attribute :id, :uuid
      # Types can be classes or symbols.
      # Symbols will be used to load a class from `Acfs::Model::Attributes` namespace.
      # Eg. `:uuid` will load class `Acfs::Model::Attributes::Uuid`.

      attribute :name, :string, default: 'Anonymous'
      attribute :age, ::Acfs::Model::Attributes::Integer # Or use :integer
    end

The service and model classes can be shipped as a gem or git submodule to be included by the frontend application(s). You can use the model there:

    @user = User.find 14
    @user.loaded? #=> false

    Acfs.run # This will run all queued request as parallel as possible.
    # For @user the following URL will be requested:
    # ``

    @model.name # => "..."

    @users = User.all
    @users.loaded?
    #=> false

    Acfs.run # Will request ``

    @users #=> [<User>, ...]

If you need multiple resources or dependent resources, first define a "plan" how they can be loaded:

    @user = User.find(5) do |user|
      # Block will be executed right after user with id 5 is loaded
      # You can load additional resources also from other services
      # Eg. fetch comments from `CommentService`. The line below will
      # load comments from ``
      @comments = Comment.where user: user.id

      # You can load multiple resources in parallel if you have multiple
      # ids.
      @friends = User.find 1, 4, 10 do |friends|
        # This block will be executed when all friends are loaded.
        # [ ... ]
      end
    end

    Acfs.run # This call will fire all request as parallel as possible.

    # The sequence above would look similar to:
    #
    # Start                     Fin
    #   |===================|       `Acfs.run`
    #   |====|                      /users/5
    #   |    |==============|       /comments?user=5
    #   |    |======|               /users/1
    #   |    |=======|              /users/4
    #   |    |======|               /users/10

    # Now we can access all resources:
    @user.name       # => "John"
    @comments.size   # => 25
    @friends[0].name # => "Miraculix"

Use .find_by to get the first element only. .find_by will call the index action and return the first resource. Optionally passed params will be sent as GET parameters and can be used for filtering in the service's controller.

    @user = User.find_by age: 24

    Acfs.run # Will request ``

    @user # Contains the first user object returned by the index action

If no object can be found, .find_by will return nil. The optional callback will then be called with nil as parameter.

Use .find_by! to raise an Acfs::ResourceNotFound exception if no object can be found. .find_by! will only invoke the optional callback if an object was successfully loaded.

Acfs has basic update support using PUT requests:

    @user = User.find 5
    @user.name = "Bob"

    @user.changed?   # => true
    @user.persisted? # => false

    @user.save # Or .save!
    # Will PUT new resource to service synchronously.

    @user.changed?   # => false
    @user.persisted?
    # => true

Singleton resources

Singletons can be used in Acfs by creating a new resource which inherits from SingletonResource:

    class Single < Acfs::SingletonResource
      service UserService # Associate `Single` model with `UserService`.

      # Define model attributes and types as with regular resources
      attribute :name, :string, default: 'Anonymous'
      attribute :age, :integer
    end

The following code explains the routing for singleton resource requests:

    my_single = Single.new
    my_single.save # sends POST request to /single

    my_single = Single.find
    Acfs.run # sends GET request to /single

    my_single.age = 28
    my_single.save # sends PUT request to /single

    my_single.delete # sends DELETE request to /single

You can also pass parameters to the find call; these will be sent as GET params to the index action:

    my_single = Single.find name: 'Max'
    Acfs.run # sends GET request with param to /single?name=Max

Resource Inheritance

Acfs provides resource inheritance similar to ActiveRecord Single Table Inheritance. If a type attribute exists and is a valid subclass of your resource, records will be converted to the appropriate subclassed resources:

    class Computer < Acfs::Resource
      ...
    end

    class Pc < Computer
    end

    class Mac < Computer
    end

With the following response on GET /computers the collection will contain the appropriate subclass resources:

    [
      { "id": 5, "type": "Computer" },
      { "id": 6, "type": "Mac" },
      { "id": 8, "type": "Pc" }
    ]

    @computers = Computer.all
    Acfs.run

    @computer[0].class # => Computer
    @computer[1].class # => Mac
    @computer[2].class # => Pc

Stubbing

You can stub resources in applications using an Acfs service client:

    # spec_helper.rb
    # This will enable stubs before each spec and clear internal state
    # after each spec.
    require 'acfs/rspec'

    before do
      @stub = Acfs::Stub.resource MyUser, :read,
        with: { id: 1 },
        return: { id: 1, name: 'John Smith', age: 32 }

      Acfs::Stub.resource MyUser, :read,
        with: { id: 2 },
        raise: :not_found

      Acfs::Stub.resource Session, :create,
        with: { ident: 'john@example.org', password: 's3cr3t' },
        return: { id: 'longhash', user: 1 }

      Acfs::Stub.resource MyUser, :update,
        with: lambda { |op| op.data.include? :my_var },
        raise: 400
    end

    it 'should find user number one' do
      user = MyUser.find 1
      Acfs.run

      expect(user.id).to be == 1
      expect(user.name).to be == 'John Smith'
      expect(user.age).to be == 32

      expect(@stub).to be_called
      expect(@stub).to_not be_called 5.times
    end

    it 'should not find user number two' do
      MyUser.find 2
      expect { Acfs.run }.to raise_error(Acfs::ResourceNotFound)
    end

    it 'should allow stub resource creation' do
      session = Session.create! ident: 'john@example.org', password: 's3cr3t'

      expect(session.id).to be == 'longhash'
      expect(session.user).to be == 1
    end

By default Acfs raises an error when a non-stubbed resource is requested. You can switch off this behavior:

    before do
      Acfs::Stub.allow_requests = true
    end

    it 'should find user number one' do
      user = MyUser.find 1
      Acfs.run # Would have raised Acfs::RealRequestNotAllowedError
               # Will run real request to user service instead.
    end

Instrumentation

Acfs supports instrumentation via ActiveSupport. Acfs exposes the following events:

- acfs.operation.complete(operation, response): Acfs operation completed
- acfs.runner.sync_run(operation): Run operation right now, skipping queue.
- acfs.runner.enqueue(operation): Enqueue operation to be run later.
- acfs.before_run: directly before acfs.run
- acfs.run: Run all queued operations.

Read the official guide to see how to subscribe.

Roadmap

- Update
  - Better new? detection, e.g. storing ETag from request resources.
- Use PATCH for with only changed attributes and If-Unmodifed-Sinceand If-Matchheader fields if resource was surly loaded from service and not created with an id (e.g User.new id: 5, name: "john"). - Conflict detection (ETag / If-Unmodified-Since) - High level features - Support for custom mime types on client and server side. ( application/vnd.myservice.user.v2+msgpack) - Server side components - Reusing model definitions for generating responses? - Rails responders providing REST operations with integrated ETag, Modified Headers, conflict detection, ... - Documentation Contributing - Fork it - Create your feature branch ( git checkout -b my-new-feature) - Add specs for your feature - Implement your feature - Commit your changes ( git commit -am 'Add some feature') - Push to the branch ( git push origin my-new-feature) - Create new Pull Request Contributors License MIT License Copyright (c) 2013 Jan Graichen. MIT license, see LICENSE for more details.
http://www.rubydoc.info/github/jgraichen/acfs/master/frames
Let's start by adding more asynchrony in our applications with setInterval() and setTimeout(). You may have explored these functions before in Introduction to Programming, but even if that's the case, you'll be learning a few new tools now. In this lesson, we'll cover using arrow functions and testing the passage of time with Jasmine.

These functions are perfect (and simple) examples of async code that needs to run later, not now. setTimeout() calls a piece of code once, after a set duration of time. setInterval() calls a piece of code multiple times, with a specific interval of time between each call.

Here are a few real world examples. Let's say you visit a page and start reading an article. After twenty seconds, a notification pops up asking if you'd like to subscribe to a newsletter. This is a perfect use case for setTimeout(). The code might look something like this:

```javascript
setTimeout(function(){
  alert("Hello friend! Maybe you should sign up for our newsletter!");
}, 20000);
```

A few things to note here. The code snippet above takes two arguments. The first is the function that should be called after the specified duration of time. The second is the duration of time that needs to pass before the code is called, in milliseconds. The duration always comes after the function, which can be tricky if the function is particularly long or utilizes callbacks.

There's one other interesting thing to note about setTimeout(). What if we were to call the function like this:

```javascript
setTimeout(function(){
  //function goes here
}, 0);
```

The timeout is set to 0, so is the function asynchronous or not? Yes, it is. Not only that, but the function might not execute immediately even though the timeout is set to 0. This is because asynchronous code is queued up in JavaScript's event loop, and when an async function is called, it's added to the end of the queue. In the past, some developers have intentionally used setTimeout() in this manner to make synchronous code asynchronous.
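The queuing behavior just described can be observed directly; here is a small sketch (runnable in Node or a browser console):

```javascript
const order = [];

order.push('sync 1');

setTimeout(function() {
  // Even with a 0ms delay, this callback is queued and only runs
  // after the current synchronous code has finished.
  order.push('timeout');
}, 0);

order.push('sync 2');
// At this point, order is ['sync 1', 'sync 2'] — the callback hasn't run yet.
```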
We won't cover JavaScript's internals here, but you may want to do further research on your own on how the event loop works. We've covered setTimeout(). What about setInterval()? Let's say we have an application where we want to refresh and display the current time every second. We'd utilize setInterval() to check and update the time every second instead. This code would look similar to the code snippet above:

```javascript
setInterval(function(){
  //check and display time every second!
}, 1000);
```

Once again, the function that needs to be called goes first, followed by the duration of the interval between each call. You can also cancel a setTimeout() or setInterval() with their respective clear methods (clearTimeout() and clearInterval()).

Now that we've covered the basics of using these methods, let's start building the business logic for a very simple game that takes advantage of them. (You'll get the chance to build an application with more complex logic on Monday.)

Perhaps you've heard the old saying: "Sometimes you eat the bear, and sometimes the bear eats you." In this game, you need to feed the bear regularly or the bear will try to eat you. Specifically, the bear needs to be fed every ten seconds or you'll get eaten. (It really is very hungry!) This is a perfect opportunity to use setInterval().

We're going to use test-driven development to set up our logic. That means we'll start with some tests in Jasmine. Because we are testing the passage of time, we'll need to use Jasmine Clock to provide helper methods that simulate the passage of time. Go ahead and set up a project with Jasmine and Karma. Next, create a file in the spec folder called hungrybear-spec.js. Here's our basic test setup along with a basic test that implements the Jasmine Clock.
```javascript
import { HungryBear } from './../js/hungrybear.js';

describe('HungryBear', function() {
  let fuzzy = new HungryBear("Fuzzy");

  beforeEach(function() {
    jasmine.clock().install();
  });

  afterEach(function() {
    jasmine.clock().uninstall();
  });

  it('should have a name and a food level of 10 when it is created', function() {
    expect(fuzzy.name).toEqual("Fuzzy");
    expect(fuzzy.foodLevel).toEqual(10);
  });

  it('should have a food level of 7 after 3001 milliseconds', function() {
    jasmine.clock().tick(3001);
    expect(fuzzy.foodLevel).toEqual(7);
  });
});
```

With Jasmine Clock, we need to set up the clock before each test and then tear it down after each test, so we add a beforeEach() and afterEach() block where we install and then uninstall the clock. Once the clock is installed, we have access to the tick() helper method. Each tick is a millisecond, so when we call jasmine.clock().tick(3001);, that means just over three seconds have passed. When fuzzy is created, he should have a foodLevel of 10. After three seconds, fuzzy's foodLevel should be down to 7.

Let's create a js directory in our root directory and then a hungrybear.js file inside of that. It's time to write some code to make our tests pass:

```javascript
export class HungryBear {
  constructor(name) {
    this.name = name;
    this.foodLevel = 10;
  }

  setHunger() {
    setInterval(() => {
      this.foodLevel--;
    }, 1000);
  }
}
```

You're already familiar with classes and constructors, so let's focus on the setHunger() function. When setHunger() is called, it needs to decrement the bear's food level by 1 every second. Within setHunger(), we utilize a callback to setInterval(). Note the use of fat arrow notation here. Remember that this doesn't have lexical scope inside a nested function unless we use arrow notation, and we don't want to use the var that = this hack!

If we try running our tests now, the first passes but the second doesn't. That's because we don't call the setHunger() function in our tests.
We'll need to write more tests that utilize setHunger() so let's add it to our beforeEach() block:

```javascript
beforeEach(function() {
  jasmine.clock().install();
  fuzzy.setHunger();
});
```

Now both tests should pass. Since this is a simple game, we only have a few more small things to implement. We need to be able to feed() the bear and check whether or not we got eaten. Let's add some more tests:

```javascript
…

it('should get very hungry if the food level drops below zero', function() {
  fuzzy.foodLevel = 0;
  expect(fuzzy.didYouGetEaten()).toEqual(true);
});

it('should get very hungry if 10 seconds pass without feeding', function() {
  jasmine.clock().tick(10001);
  expect(fuzzy.didYouGetEaten()).toEqual(true);
});

it('should have a food level of ten if it is fed', function() {
  jasmine.clock().tick(9001);
  fuzzy.feed();
  expect(fuzzy.foodLevel).toEqual(10);
});

...
```

The top two tests are variants on each other. The first checks if we get eaten when fuzzy's food level drops to 0. The second checks if we get eaten after ten seconds pass without a feeding. Both should evaluate to true. Now that we've established that fuzzy's food level goes down, the last test checks if feeding fuzzy after nine seconds brings his foodLevel back up to 10. We also could've written this final test by setting fuzzy's foodLevel to 1 first and then feeding him.

Here are the functions we need to make this pass:

```javascript
...

didYouGetEaten() {
  if (this.foodLevel > 0) {
    return false;
  } else {
    return true;
  }
}

feed() {
  this.foodLevel = 10;
}

...
```

Now all tests should be passing! There is still one key piece of functionality missing from this application (other than the UI, which we won't cover here). A player could still feed the bear after getting 'eaten' and keep playing. We'll leave it to you to add this functionality to the application (and implement it in the browser with jQuery if you like).

In this lesson, we've gone over using setTimeout() and setInterval() and built a very basic game that uses setInterval().
We also learned how to use Jasmine Clock to test the passage of time and used fat arrow notation to ensure this has context in a nested function. Most importantly, we’ve had the opportunity to practice working with asynchronous JavaScript.
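The game-over handling is left as an exercise above; one possible sketch (by no means the only solution — the module export/import lines are omitted so it reads standalone) keeps the interval's id so the timer can be stopped, and ignores feedings once the bear has won:

```javascript
class HungryBear {
  constructor(name) {
    this.name = name;
    this.foodLevel = 10;
  }

  setHunger() {
    // Keep the id returned by setInterval() so we can cancel it later.
    this.hungerId = setInterval(() => {
      this.foodLevel--;
      if (this.didYouGetEaten()) {
        clearInterval(this.hungerId); // game over — stop the hunger timer
      }
    }, 1000);
  }

  didYouGetEaten() {
    return this.foodLevel <= 0;
  }

  feed() {
    if (!this.didYouGetEaten()) { // no feeding once you've been eaten
      this.foodLevel = 10;
    }
  }
}
```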
https://www.learnhowtoprogram.com/javascript/asynchrony-and-apis-in-javascript/exploring-asynchrony-with-setinterval-and-settimeout
Ruby Array Exercises: Find the difference between the largest and smallest values of a given array of integers of length 1 or more

Ruby Array: Exercise-30 with Solution

Write a Ruby program to find the difference between the largest and smallest values of a given array of integers of length 1 or more.

Ruby Code:

```ruby
def check_array(nums)
  max = nums[0]
  min = nums[0]
  nums.each do |item|
    if item > max
      max = item
    elsif item < min
      min = item
    end
  end
  return (max - min)
end

print check_array([3, 4, 5, 6]), "\n"
print check_array([3, 4, 5]), "\n"
print check_array([3, 4])
```

Output:

```
3
2
1
```

Previous: Write a Ruby program to get the number of even integers in a given array.
Next: Write a Ruby program to compute the average values of a given array, except the largest and smallest values.
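For comparison, Ruby's Enumerable already offers minmax, which finds both extremes in a single pass; an alternative version of the same exercise (method name kept from the solution above):

```ruby
def check_array(nums)
  min, max = nums.minmax  # smallest and largest element in one pass
  max - min
end

puts check_array([3, 4, 5, 6])  # prints 3
puts check_array([3, 4, 5])     # prints 2
puts check_array([3, 4])        # prints 1
```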
https://www.w3resource.com/ruby-exercises/array/ruby-array-exercise-30.php
Class Methods (2:47) with Kenneth Love

Let's add some function to our classes (class functions are called methods) so we can do fancier things!

- 0:00 Attributes are great.
- 0:02 But lots of times we want our classes to have conditional actions or
- 0:06 give us back something that's been calculated.
- 0:08 So, we'll write functions in our classes.
- 0:11 Functions that belong to classes though are called methods.
- 0:14 They're the same piece of Python.
- 0:16 They just belong to a class so we give them a new name.
- 0:20 Let's go back to our monster.py and back to our monster class again.
- 0:25 And let's give it a battle cry function.
- 0:28 And this function will shout whatever the creature says.
- 0:33 So we're gonna need to add another attribute as well.
- 0:36 So, before I finish writing that, let's add a sound.
- 0:39 And we'll say that the default sound is a roar.
- 0:42 All right, so def battlecry and
- 0:44 it has to take an argument called self and we're going to return self.sound.upper.
- 0:52 So what is this self argument to our method?
- 0:54 Except in some special cases, every method that you create on a class takes,
- 0:59 at the very least, the self argument.
- 1:01 Self always represents the instance that you're calling the method on.
- 1:05 But you don't ever have to pass it in yourself, but you do have to write it.
- 1:08 It doesn't have to be called self.
- 1:10 That's just kind of the general consensus that everyone uses.
- 1:15 Handily though, inside of our method,
- 1:16 we can use the self variable to get information from the current instance, so
- 1:21 let's go back to the console and try again.
- 1:26 So we'll go back here.
- 1:27 Let's, let's make this a little bigger so we can see that.
- 1:29 All right.
- 1:30 And we do Python, and
- 1:33 from monster import monster, and let's do monster.battlecry.
- 1:46 And we got a type error.
- 1:48 Battlecry is missing a required positional argument. - 1:51 So the reason that we got that error is because we tried to - 1:56 call this on the class and not on an instance of the class. - 2:00 So, let's make a new instance. - 2:03 We'll do Jubjub again, we go to a monster. - 2:08 And let's do Jubjub.battlecry and then we get ROAR in all caps. - 2:15 So, I'm not sure that a Jubjub bird would roar. - 2:19 But at least our method worked, so let's change the sound of our monster. - 2:23 So instead of saying jubjub bird, let's say that the sound is equal to tweet. - 2:30 And now let's call jubjub.battlecry again, and now we get TWEET, in all caps. - 2:38 That's a much more appropriate sound for a giant killer bird. - 2:42 That's a pretty simple method. - 2:43 Let's look at a more complicated but more useful example in our next video.
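The class being built across this video can be sketched as follows (a reconstruction from the transcript, not Treehouse's exact file):

```python
class Monster:
    """Reconstruction of the monster class described in the video."""

    sound = 'roar'  # the default sound added at 0:36

    def battlecry(self):
        # `self` is the instance the method is called on,
        # so we can read its attributes from inside the method.
        return self.sound.upper()


jubjub = Monster()
print(jubjub.battlecry())  # ROAR

jubjub.sound = 'tweet'     # override the sound on this instance
print(jubjub.battlecry())  # TWEET
```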
https://teamtreehouse.com/library/class-methods-2
using VB to generate word letters from txt files

Is this an XML file? I'm guessing that your text file contains XML markup tags. Maybe it's something else, though, like for mailing lists and form letters. In Word online help I looked up "importing xml files" and got a bunch of hits. Here are instructions from one of them:

1. Place the insertion point where you want to insert the data.
2. On the Insert menu, click Field, and in the Field names box, click IncludeText.
3. In the Filename or URL box, type the name of the file, including its system path or URL.
4. Select the Namespace mappings check box, and type a namespace in the format xmlns:variable="namespace". For example, xmlns:a="resume-schema".
5. If you want to insert only a fragment of data rather than the whole file, select the XPath expression check box, and then type the XPath (XML Path Language (XPath): A language used to address parts of an XML document. XPath also provides basic facilities for manipulation of strings, numbers, and Booleans.) expression in the box provided. For example, a:Resume/a:Name specifies the Name element in the root element Resume.
6. If you want to use an Extensible Stylesheet Language Transformation (XSLT) (XSL Transformation (XSLT): A file that is used to transform XML documents into other types of documents, such as HTML or XML. It is designed for use as part of XSL.) to format the data, select the XSL Transformation check box, and type the name of the file, including its system path or URL.
7. Click OK.

Good luck.
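For readers who prefer the field code directly: the dialog steps above correspond, roughly, to an INCLUDETEXT field like the one below. The path, namespace, and XSLT name here are illustrative placeholders, and the switch letters should be checked against Word's own field reference for your version:

```
{ INCLUDETEXT "C:\data\resume.xml" \n xmlns:a="resume-schema" \x a:Resume/a:Name \t "C:\data\resume.xslt" }
```

You can toggle between field codes and field results in Word with Alt+F9.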
https://www.techrepublic.com/forums/discussions/using-vb-to-generate-word-letters-from-txt-files/
Lesson 6: Making Single-Press Buttons

In this lesson we will create a single-press button. Single-press buttons are useful if you wish to prevent accidental repeated button presses. For example, if you wish to create a button to send an SMS, you do not want to accidentally send 5 SMSes if the user accidentally hits the button five times. Single-press buttons help to prevent these kinds of errors from happening.

```cpp
// Always include these libraries. Annikken Andee needs them to work with the Arduino!
#include <SPI.h>
#include <Andee.h>

// We'll create a display and a button to show you how to
// program a button to do something
AndeeHelper displaybox;
AndeeHelper button;

void setup()
{
  // Let's draw a display box!
  displaybox.setId(0); // Each object must have a unique ID number
  displaybox.setType(DATA_OUT); // This defines your object as a display box
  displaybox.setLocation(0, 0, FULL);
  displaybox.setTitle("Single Press Buttons");
  displaybox.setData("Waiting for button press");

  button.setId(1); // Don't forget to assign a unique ID number
  button.setType(BUTTON_IN); // Defines object as a button
  button.setLocation(1, 0, FULL);
  button.setTitle("Press me!");
  // Optional: button.requireAck(true);
  // By default this is already set to true to prevent accidental presses
  // after the first button press.
  // You can't use setData() and setUnit() for buttons.
}
```

Here's how your user interface will look:

Arduino will run instructions here repeatedly until you power it off.

```cpp
void loop()
{
  // Here's how you code a single-press button action.
  if( button.isPressed() )
  {
    // Prevents a repeated button press. Button will not be available until Arduino
    // replies the smartphone with an acknowledgement message.
    button.ack();

    // Add action here!
    // In this example, pressing the button will change the text in the display box above
    displaybox.setData("Button has been pressed!");
  }

  displaybox.update(); // Don't forget to call update() or else your objects won't show up
  button.update();
  delay(500); // Always leave a short delay for Bluetooth communication
}
```
http://resources.annikken.com/index.php?title=Lesson_6:_Making_Single-Press_Buttons
MVC in EXTJS4, folder structure?

Hi everybody! Does anybody know how the folder structure and ExtJS4 model-view-controller will finally be? And could anybody give me information about it, please? Thanks a million

Typically it falls into:

/controller
/model
/view

Above is not the ExtJS4 folder structure, just a general folder structure used for any MVC framework. IIRC, one of the conference videos displays the structure when it was presented. Regards, Scott.

When you call Ext.regApplication it creates a namespace for your application, and the sub-namespaces:

application.models
application.views
application.controllers
application.stores

You can see this in src/Application.js. I've been laying out my code in the same way, in line with how you generally lay out namespaces in another language on disk (i.e. .NET). I have a few applications spread across repos, and it seems to work well. The main application entry point sorts the viewport out, with the other application launch methods wiring up Direct providers and stuff as needed. Cheers, Westy

Hi, I searched for the conference video that shows this structure but cannot find it. Does anyone have a link to this video? Thanks very much

Module-based MVC folder-structure

Hi, is it possible to create another folder structure for an ExtJS4 MVC application with modules? If the application's controller count is too large, it will be a headache to write another MVC component and maintain all my code. In my test application I cannot create a module structure, because of Ext.Loader. What is the best solution for this?
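For reference, the convention Ext JS 4's MVC guide eventually settled on is an app folder mirroring those sub-namespaces (a typical skeleton; the file names here are illustrative):

```
index.html
app.js            // Ext.application({ name: 'MyApp', ... })
app/
  controller/
    Users.js      // Ext.define('MyApp.controller.Users', ...)
  model/
    User.js
  store/
    Users.js
  view/
    user/
      List.js
```

Ext.Loader maps the MyApp.* namespace onto the app/ folder, which is also how per-module grouping is usually handled — a sub-namespace becomes a sub-folder, e.g. MyApp.controller.admin.Users loads from app/controller/admin/Users.js.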
http://www.sencha.com/forum/showthread.php?126539-MVC-in-EXTJS4-folder-structure
OpenERP engine modification. How?

Is it possible to inherit OpenERP engine classes? For example, report.rml_parse or osv.fields? The purpose is to replace the printing behavior of all reports. How can I do this? Maybe there are some methods like this (it's just thoughts, nothing else):

```python
from openerp.report import rml_parse

class my_parser(object):
    ...blah-blah-blah...

rml_parse = my_parser
```

Or something like this:

```python
from openerp.report import rml_parse

class my_parser(rml_parse):
    ...blah-blah-blah...
```

The last variant works perfectly with a report in the same module, but it doesn't work with!
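For context, the pattern OpenERP 6.x/7.0 itself used for custom report behavior was to subclass rml_parse and pass the subclass when registering the report. Roughly like this — module, model, and template names are placeholders, and this sketch only runs inside an OpenERP server, so treat it as an outline rather than standalone code:

```python
from openerp.report import report_sxw

class my_parser(report_sxw.rml_parse):
    def __init__(self, cr, uid, name, context):
        super(my_parser, self).__init__(cr, uid, name, context=context)
        # expose extra helpers/values to the RML template
        self.localcontext.update({
            # 'my_helper': self.my_helper, ...
        })

report_sxw.report_sxw(
    'report.my_module.my_report',              # report service name
    'my.model',                                # model the report runs on
    'addons/my_module/report/my_report.rml',   # RML template
    parser=my_parser,
)
```

Note this registers a parser per report; as far as I know, the stock engine offers no official hook for replacing the printing behavior of all reports globally, which is what the question asks for.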
https://www.odoo.com/forum/help-1/question/openerp-engine-modification-how-38858
clone, __clone2 - create a child process

#include <sched.h>

If CLONE_PARENT is not set, then (as with fork(2)) the child's parent is the calling process.

Every process lives in a namespace. The namespace of a process is the data (the set of mounts) describing the file hierarchy as seen by that process. After a fork(2) or clone(2) where the CLONE_NEWNS flag is not set, the child lives in the same namespace as its parent.

There is no entry for clone() in libc5. glibc2 provides clone() as described in this manual page. The clone() and sys_clone calls are Linux-specific and should not be used in programs intended to be portable.

For __clone2, child_stack_base points to the lowest address of the child's stack area, and stack_size specifies the size of the stack pointed to by child_stack_base.

SEE ALSO: fork(2), futex(2), getpid(2), gettid(2), set_thread_area(2), set_tid_address(2), tkill(2), unshare(2), wait(2)
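As a concrete illustration of the call described here, below is a minimal clone() usage on Linux/glibc. The helper runs a function in a child with a heap-allocated stack and returns the child's exit status; the stack size and return value are arbitrary choices for the demo:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static int child_fn(void *arg) {
    (void)arg;
    return 42; /* becomes the child's exit status */
}

/* Clone a child running child_fn and return its exit status (-1 on error). */
int run_clone_demo(void) {
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);
    if (stack == NULL)
        return -1;

    /* The stack grows downward on common architectures,
       so clone() is given a pointer to the TOP of the block. */
    pid_t pid = clone(child_fn, stack + stack_size, SIGCHLD, NULL);
    if (pid == -1) {
        free(stack);
        return -1;
    }

    int status = 0;
    if (waitpid(pid, &status, 0) == -1) {
        free(stack);
        return -1;
    }
    free(stack);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Passing SIGCHLD as the only flag makes the child behave like a fork(2) child that can be reaped with waitpid(2); CLONE_* flags can be OR-ed in to share memory, file descriptors, and so on.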
http://www.tutorialspoint.com/unix_system_calls/clone.htm
Opened 3 years ago Last modified 2 years ago

Django seems to screw up the python time settings. I put TIME_ZONE = 'Europe/Amsterdam' in my current project. The datetime.datetime.now() then returns a different time than if I just run python and run a datetime.datetime.now() (which returns the CORRECT time in Europe/Amsterdam). The difference is exactly one hour. This difference is VERY annoying, because I can not do synchronization, different imports, etc. :(

Since all Django is doing here is setting the TZ environment variable, this is strange; it should just work. If you are on a Unix-based system (Linux / Solaris / Mac OSX / ...), could you please supply the results of the following. The first and third (at least) should probably work on Windows, too. Not sure how to change the environment-local timezone for Windows, though. Everywhere there is "..." below, please supply the value your system shows.

```
python
>>> import datetime, os, time
>>> os.environ.get('TZ')
...
>>> datetime.datetime.now()
...
>>> time.localtime()
...
>>> time.gmtime()
...
```

```
TZ=Europe/Amsterdam python
>>> import datetime
>>> datetime.datetime.now()
...
```

and finally

```
./manage.py shell
>>> import datetime, os, time
>>> os.environ['TZ']
...
>>> datetime.datetime.now()
...
>>> time.localtime()
...
>>> time.gmtime()
...
```

In theory, if your machine is set to Europe/Amsterdam time, these should all return the same things for now() and the latter two should return "Europe/Amsterdam" for the timezone, whilst the first one may or may not return a value for the TZ environment variable. The first test will also show what your machine currently thinks is the difference between its local timezone and UTC.

Ok this is getting stranger every minute.. I'm running on a windows box, with time settings set to Europe/Amsterdam. Local time on box is 11:19

running on the 'runserver' in django:

running in apache:

so there is something wrong with the built-in server ??
I tried setting the time zone code to both Detroit and Chicago in the settings file. And it just gives me GMT. This is on Windows ME using the test server. But it works correctly on Linux/Apache. One more thing. I'm using SQLite with Windows and the test server. And PostgreSQL with Linux/Apache. Arg! Windows! :-( This is not going to be possible to solve in a way that isn't a little bit painful. Timezone strings on Windows are a mess. The database isn't as complete as the strings used on Unix variants and there are differences in the common areas. On Linux, it is possible, if you set the wrong string to always get the "non daylight-savings" version and it looks like the same thing is possible on Windows, too. This link and the associated thread explains the pain with trying to make this portable: Messing with the environment via TZ in Windows is probably a doomed exercise. We really need somebody with Windows expertise to write a win32 patch that does The Right Thing(tm) here. None of the main Django developers seem to use Windows, so this is going to be fixed by somebody who is comfortable on the platform scratching their own itch and submitting a patch. This problem occurs on MYSQL SQLite3 and postgresql on Windows. the TZ stuff works fine until you import django.db Once that happens the TZ for the process is set to UTC. Why? no clue, but it is something that django is doing and is not DB specific. It is windows specific. Have not had time to track down what the %#%%$@ django is doing on windows to cause this but it has caused some major problems for us as it throws off matplotlib (used to generate dynamic graphs) for our date based graphing. As this is effecting the C runtime of the process this even causes problems for SciPy?. This bug needs to be highlited in the documentation! I have talked to 5 people on teh irc channel who were effected by this. 
The solution has been to have a special settings variable that the other code uses to skew data (sometimes the querey result data, sometimes lie about the TZ to matplotlib), and set it special if we are windows. Fix for windows timezone problems with development server Ok, patch attached disables the settings.TIME_ZONE functionality in Windows, since it doesn't work anyway. As people have reported (and I can confirm) this causes the built-in development server to treat all times as UTC no matter what TIME_ZONE is set to. On a side note, time.tzset() should probably be set after the os.environ['TZ'] = self.TIME_ZONE line (according to Python documentation) but I didn't want to change anything more in this patch than getting this Windows bug fixed. To clarify, this doesn't disable settings.TIME_ZONE in Windows, it just doesn't use it for time and datetime modules (because it didn't work anyway, and caused problems with the development server). It is still required for the django.db stuff. Ok, in a further bout of weirdness (and further testing), it's not even removing the os.environ['TZ'] = self.TIME_ZONE which fixes this problem. Seemingly all that is required to fix it in windows is to import time. Not actually use it - just import it. Weird. Ok, here's my findings so far: hey, just checked to be sure if i have the newest version, but yeah. this patch didnt get into svn so far, so i add my note. i had the same problem on my windows dev env, and the patch helped. thanks alot. Accepted because it's a confirmed bug. In fact, I'm tempted to mark as ready for checkin since it's not going to have any side effects on other platforms, and Windows just plain doesn't work with os.environ['TZ']. Probably needs some sort of documentation addition to explain that Windows people should always just set settings.TIME_ZONE to their local time unless they want problems. Docs for this ticket, see comments below. 
Docs added, does two things: As always, I leave the stage untouched for triage review :) Thanks Marc. Let's see what the core have to say about this... I say it's a rather common use-case so the Windows note is justified. However personally I'm not sure about having documentation reference a ticket. I didn't want to reference the ticket also, but option 2 was to create a "windows.txt" titled "Windows specific issues" which for the newcomer could be translated to "Oh hell, this thing called Django has trouble with windows!!" but it's an option, maybe if there arise more Windows specific issues we'll have to create a windows.txt file hehe. just as a note, #2625 is probably a duplicate of this. One question before I commit this: how sensitive is the solution to where the "import time" statement is? I would like to have it just before or just after "import os" in order to keep things in a logical order (system imports all together). I have no problem with referencing a bug in the docs, by the way. The bug report doesn't contain crucial information -- just backup supporting evidence. So I'll leave that reference there; you can still read the docs and make sense of them without referring to the bug report. Changing status due to last comment by mtredinnick. Could be ready for checking when it has an answer ;) import moved, docs included I tested the new patch, still works fine. When applying this and reading everything over, the docs read better without the ticket reference, so I removed it (it's in the code, though, so we can track back why we're doing this funky stuff). Sorry for the change-of-mind, guys. Replying to SmileyChris: I tested the new patch, still works fine.? (In [4487]) Fixed #2315 -- added work around for Windows timezone setting (i.e. we can't do it). This will work until somebody wants to write some full Win32 timezone changing code for us. Thanks to Marc Fargas and SmileyChris? for the combined patch.? Replying to SmileyChris:. 
Forget about my last comment, patch already checked-in! ;)

Replying to Marc Fargas <telenieko@telenieko.com>: Just to clarify, Chris posted a new patch (the one he tested) with the import statement moved to a better place. All is good.

Sorry, didn't spot the new upload ;)

doc tweak: postgres' options don't always correlate

I'm seeing similar problems on FreeBSD 6. Turns out that FreeBSD doesn't recognize 'US/Eastern' as an option for TIME_ZONE. I've attached a patch that clarifies this in the settings.py docs.

(In [4678]) Fixed #2315 -- Clarified that the available PostgreSQL timezone options may provide more options than are strictly available. This is probably the best we can do for such a varied area of standardisation. I did not apply the docs patch attached here, because /etc/localtime is a binary file and asking people to reverse engineer /usr/share/zoneinfo/ is a bit of a hurdle. We can't really include a full tutorial on setting timezone options, so I've included a warning about the slight variability instead.

By Edgewall Software.
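The POSIX mechanism this whole ticket revolves around — setting the TZ environment variable and calling time.tzset() — can be demonstrated directly. The zone names below are just examples, and tzset() does not exist on Windows, which is exactly the bug discussed here:

```python
import os
import time

def local_tz_names(zone):
    """Switch the process to `zone` and return the (std, dst) abbreviations."""
    os.environ["TZ"] = zone
    time.tzset()  # POSIX-only: re-reads TZ; the call Django relies on
    return time.tzname

# Prints something like ('CET', 'CEST') where the tz database is installed.
print(local_tz_names("Europe/Amsterdam"))
```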
http://code.djangoproject.com/ticket/2315
Qt Namespace

The Qt namespace contains miscellaneous identifiers used throughout the Qt library. More...

#include <Qt>

Types | Functions

Detailed Description

The Qt namespace contains miscellaneous identifiers used throughout the Qt library.

Types:

- enum Qt::AnchorAttribute — An anchor has one or more of the following attributes.
- enum Qt::BrushStyle — See also QBrush.
- enum Qt::CaseSensitivity
- enum Qt::CheckState — This enum describes the state of checkable items, controls, and widgets. See also QCheckBox, Qt::ItemFlags, and Qt::ItemDataRole.
- enum Qt::GlobalColor — See also QColor.
- typedef Qt::HANDLE
- enum Qt::HitTestAccuracy — This enum contains the types of accuracy that can be used by the QTextDocument class when testing for mouse clicks on text documents. This enum is defined in the <QTextDocument> header file.
- enum Qt::ImageConversionFlag, flags Qt::ImageConversionFlags
- enum Qt::Key — The key names used by Qt. See also QKeyEvent::key().
- enum Qt::KeyboardModifier, flags Qt::KeyboardModifiers
- enum Qt::Modifier
- enum Qt::MouseButton, flags Qt::MouseButtons — This enum type describes the different mouse buttons. The MouseButtons type is a typedef for QFlags<MouseButton>. It stores an OR combination of MouseButton values. See also KeyboardModifier and Modifier.
- enum Qt::NavigationMode
- enum Qt::PenCapStyle — See also QPen.
- enum Qt::PenJoinStyle — This enum type defines the pen join styles supported by Qt, i.e. which joins between two connected lines can be drawn using QPainter. See also QPen.
- enum Qt::PenStyle — This enum type defines the pen styles that can be drawn using QPainter. See also QPen.
- enum Qt::ScrollBarPolicy — This enum type describes the various modes of QAbstractScrollArea's scroll bars. (The modes for the horizontal and vertical scroll bars are independent.)
- typedef Qt::WFlags — Synonym for Qt::WindowFlags.
- enum Qt::WhiteSpaceMode — This enum describes the types of whitespace mode that are used by the QTextDocument class to meet the requirements of different kinds of textual information. This enum is defined in the <QTextDocument> header file.
- enum Qt::WidgetAttribute

Functions:

- QString Qt::escape(const QString & plain) — See also convertFromPlainText() and mightBeRichText().
- bool Qt::mightBeRichText(const QString & text)
http://developer.blackberry.com/native/reference/cascades/qt.html
C# Decimal

- The decimal data type is used for monetary calculations.
- It is used for computations that involve money.
- It is used for financial calculations, such as calculating discount prices, percentages, etc.
- A value stored in a decimal variable must be suffixed with the character m or M (e.g. 12.22m).

Example:

using System;

namespace csharpBasic
{
    // Class definition.
    class Program
    {
        // Entry point of the program.
        static void Main(string[] args)
        {
            /* Initialize a decimal variable.
             * The M (or m) suffix marks the literal as a decimal;
             * without it, 155.2 would be a double literal and this
             * assignment would not compile. */
            decimal price = 155.2M;

            // Print the variable.
            Console.WriteLine("Price: " + price);
            Console.ReadKey();
        }
    } // End of class.
}
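The same pitfall motivates decimal types in other languages. As a cross-language illustration (this is Python's standard decimal module, shown for contrast and not part of the C# example above), binary floating point cannot represent amounts like 0.1 exactly, while a decimal type stores base-10 digits exactly:

```python
from decimal import Decimal

# Binary floats accumulate representation error, so money math drifts.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Decimals are exact in base 10, which is what monetary code needs.
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

This is exactly why the tutorial above recommends decimal, rather than double, for prices and other financial values.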
https://tutorialstown.com/csharp-decimal/
How To Train ML Models With Mislabeled Data

3 tips on how to train machine learning models efficiently when your data is noisy and mislabeled…

In this article, I would like to talk about 3 tricks that helped me train models efficiently and win a silver medal in a Kaggle competition where the dataset was mislabeled and contained a significant amount of noise. Using those 3 tricks, I managed to deal with the noisy data and finish in 114th position out of 3,900 teams.

Rule n° 1 in data science: Garbage In = Garbage Out. Mislabeled data is part of real-world data; not all datasets are clean. Most datasets have some amount of noise, which can be challenging when training a machine learning model. The good news is that the Garbage In = Garbage Out rule can be overcome with a few tricks that help your model adapt to the mislabeled data.

A brief introduction to the dataset: Cassava leaf disease prediction. It's a computer vision competition with a dataset of 21,367 labeled images of cassava plants. The aim of the competition was to classify each cassava image into four disease categories or a fifth category indicating a healthy leaf. After a quick exploratory data analysis, I realized that some of the images were mislabeled. Take the two example images (not shown here): the first clearly contains diseased leaves while the second has healthy leaves, yet both were labeled as 'healthy' in this dataset. This makes the model's task harder, since it has to extract and learn the features of both healthy and diseased leaves and assign them to the same class: Healthy.

In the following sections, I'd like to talk about 3 tricks I found useful for dealing with noisy datasets.

1- Bi-Tempered loss function:

Picking the right loss function is very critical in machine learning. It depends a lot on your data, task and metric.
In this case, we have a multi-class classification problem (5 classes) with categorical accuracy as the metric. So the first loss function that comes to mind is categorical cross-entropy. However, we have a mislabeled dataset, and the cross-entropy loss is very sensitive to outliers: mislabeled images can stretch the decision boundaries and dominate the overall loss. To solve this problem, Google AI researchers introduced a "bi-tempered" generalization of the logistic loss with two tunable parameters that handle those situations well, which they call "temperatures": t1, which characterizes boundedness, and t2, which controls tail-heaviness. It's basically a cross-entropy loss with 2 new tunable parameters, t1 and t2; the standard cross-entropy is recovered by setting both t1 and t2 equal to 1. So, what happens when we tune the t1 and t2 parameters?

- With small-margin noise: the noise stretches the decision boundary in a heavy-tailed form. The Bi-Tempered loss solves this by tuning the t2 parameter from t2=1 to t2=4.
- With large-margin noise: the large noise stretches the decision boundary in a bounded way, covering more surface than the heavy tail in the small-margin case. The Bi-Tempered loss solves this by tuning the t1 parameter from t1=1 to t1=0.2.
- With random noise: here we can see both heavy-tailed and bounded decision boundaries, so both t1 and t2 are adjusted in the Bi-Tempered loss.

The best way to fine-tune the t1 and t2 parameters is to plot your model's decision boundary, check whether it is heavy-tailed, bounded, or both, and then tweak t1 and t2 accordingly. If you are dealing with tabular data, you can use the plot_decision_regions() function from the mlxtend package to visualize your model's decision boundaries.
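As a concrete sketch of the ingredients of this loss, the tempered logarithm and exponential that replace the ordinary log and exp can be written in a few lines. This is an illustrative re-implementation based on the definitions in the paper, not the authors' released code:

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm; reduces to np.log(x) at t == 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential (inverse of log_t); reduces to np.exp(x) at t == 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

# With t1 < 1, log_t is bounded below, so a single mislabeled
# large-margin example cannot dominate the overall loss.
print(log_t(0.01, 1.0))  # standard log-likelihood term: unbounded as x -> 0
print(log_t(0.01, 0.5))  # tempered version: bounded below by -1/(1 - t1) = -2
```

With t1 < 1 the loss term built from log_t is bounded (robust to large-margin outliers), and with t2 > 1 the tempered softmax built from exp_t has heavier tails (robust to small-margin noise), matching the three cases described above.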
# Plot the decision boundary with mlxtend
import matplotlib.pyplot as plt
from mlxtend.plotting import plot_decision_regions

# x_test, y_test and a fitted classifier `model` are assumed to exist
plot_decision_regions(X=x_test, y=y_test, clf=model, legend=2)
plt.show()

You can learn more about the Bi-Tempered loss in the Google AI blog and its GitHub repository.

2- Self Distillation:

If you are already familiar with knowledge distillation, where knowledge is transferred from a teacher model to a student model, self-distillation is a very similar concept. It was introduced in the paper: Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation.

The idea is simple: you train your model and then you retrain it using itself as a teacher.

The paper discusses a more advanced approach that includes several loss functions and some architecture modifications (additional bottleneck and fully connected layers). In this article, I'd like to introduce a much simpler approach. I first read about it in the first-place solution of the Plant Pathology competition on Kaggle, where the winning team used self-distillation to deal with their noisy dataset. You can check the code in their GitHub repository.

Self-distillation in 3 steps:

- 1- Split your dataset into k cross-validation folds.
- 2- Train model 1 and generate the out-of-fold predictions.
- 3- After saving the out-of-fold predictions made by model 1, load them and blend them with the original labels. The blending coefficients are tunable; the original labels should get the higher coefficient.

The out-of-fold predictions here are class probabilities predicted by model 1:

- In this particular example we have a multiclass classification problem with 5 classes [0,1,2,3,4].
- The labels are one-hot encoded; class 2 is represented as [0,0,1,0,0].
- Model 1 predicted class 2 correctly: [0.1, 0.1, 0.4, 0.1, 0.3], giving it a probability of 0.4, higher than the other classes. But it also gave class 4 a high probability of 0.3.
- Model 2 will use this information to improve its predictions.

3- Ensemble learning:

Ensemble learning is well known to improve the quality of predictions in general. In the case of noisy datasets it can be very helpful, because each model has a different architecture and learns different patterns.

I was planning to try the Vision Transformer models released by Google AI in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, and this competition was the perfect place to try them and learn more about them, since they introduce a new concept in computer vision that is different from the convolutional neural networks that dominate the field. In short, ensembling a vision transformer model with 2 different CNN architectures improved the quality of the predictions over the single models.

To sum up, you can train machine learning models with mislabeled data by using:

- The Bi-Tempered loss function, with its parameters t1 and t2 tuned correctly.
- Self-distillation: train your model and retrain it again using itself as a teacher.
- Ensemble learning: ensemble the predictions of different models.

If you would like to learn more details about the model training process, check the summary of my approach on Kaggle. The charts were made with the app draw.io

THE END
https://aminey.medium.com/how-to-train-ml-models-with-mislabeled-data-cf4bb353b3d9?source=user_profile---------1----------------------------
I wanted to write a simple program that uses classes but I've been running into errors. Anyone have a clue as to what is wrong with my code? Because I can't seem to figure it out.

#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

class cone{
private:
    double radius;
    double height;
public:
    cone();
    void setvolume();//
    double conevol();
};

//implementation
void cone(){
    double radius;
    double height;
}

void setvolume(double r,double h){
    double radius=r;
    double height=h;
}

double conevol(double r,double h){
    double vol=(1/3.0)*(3.14159265)*r*r*h;
    return vol;
}

int main(){
    cone A;
    cout<<"The current volume of the cone is: "<<A.convol<<endl;
    A.setvolume(3.4,5.63);
    cout<<"The volume of the cone is now"<<A.conevol()<<endl;
    system("PAUSE");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/428288/code-errors
This morning we became aware of a Twitter campaign run from the website. This campaign is intended to provide Microsoft with feedback about our decision to continue to use Microsoft Word for composing and displaying e-mail in the upcoming release of Microsoft Outlook 2010. The Email Standards Project, which developed the website that promotes the current Twitter campaign, is backed by the maker of "email marketing campaign" software.

First, while we don't yet have a broadly-available beta version of Microsoft Office 2010, we can confirm that Outlook 2010 does use Word 2010 for composing and displaying e-mail, just as it did in Office 2007.

Word enables Outlook customers to write professional-looking and visually stunning e-mail messages. You can read more about this in our whitepaper, outlining the benefits and the reason behind using Word as Outlook's e-mail editor:

- SmartArt
- Drawing and Charting tools
- Table and Formatting tools
- Mini Toolbar for formatting

Word has always done a great job of displaying the HTML which is commonly found in e-mails around the world. We have always made information available about what HTML we support in Outlook; for example, you can find our latest information for our Office 2007 products here.

For e-mail viewing, Word also provides security benefits that are not available in a browser: Word cannot run web script or other active content that may threaten the security and safety of our customers.

We are focused on creating a great e-mail experience for the end user, and we support any standard that makes this better. There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability. The "Email Standards Project" does not represent a sanctioned standard or an industry consensus in this area. Should such a consensus arise, we will of course work with other e-mail vendors to provide rich support in our products.

As usual, we appreciate the feedback from our customers, via Twitter or on our Outlook team blog.

-- William Kennedy
Corporate Vice President, Office Communications and Forms Team
Microsoft Corporation

I hope MS doesn't forget that not all users in the world have Outlook, and sorry, NO, this new OOXML file format will not fix the problems.
Receiving e-mail written in Word 2003 was sometimes a hopeless thing when you use another e-mail program.

This post makes it look like Word is the only way to author e-mail posts in Outlook 2010. Of course, those of us who limit our authoring of e-mails to plain-text and only view arriving HTML-formatted e-mail don't have to worry about whether Word is used for editing rich-formatted e-mail. Right?

But what happens when someone sends me one of these rich emails if I have an email client other than Outlook/Word? Will it be compatible?

Our issue (and by "our" I do not mean "creators of email marketing software", I mean web developers) is that you are using Word's RENDERING engine to DISPLAY the email in the client. It matters not to us how you create the emails, just how you display them and others created using standards. I assume that HTML created in Word will display correctly in IE? Then why not use the IE rendering engine for the DISPLAY of all HTML-formatted emails???

Hey guys, thanks for commenting on this. I would like to ask one question, though. We're not looking for anything special, unique or far-fetched. Just let us design our HTML emails the same way we design our HTML websites. You let us do that and you've done your job, so we can do ours. Thanks again, I really hope you guys will consider improving Outlook's rendering capabilities.

Just because Word formats the email doesn't mean the email is sent as a Word document. It gets converted to HTML. The lack of CSS support may mean poorer designs (tables and the like), but it in no way means that other email viewers will have any more problems with it than web browsers would.

If it's the same as 2007, where I can turn off using Word for the editor, that makes me happy. Good job ignoring this MARKETING CAMPAIGN by Campaign Monitor, as that's all it is.

25 years' experience! In that case rock on with table-based layouts.

I read your response with joy and sadness.
Joy that you have recognised a very powerful message sent by the users of Twitter today, but sadness that you do not seem to understand what the problems here are.

I disagree with the comment about there being no widely-recognised consensus about what is appropriate HTML for displaying HTML emails. HTML is the standard - why should there be a different standard for emails??!

Fix it! Fix it! Fix it! Fix it! Fix it! That is all at this time. The intertubes have spoken.

I have led a web design agency that, among other things, builds e-mail marketing campaigns. Outlook was always an issue because of Word's poor rendering of standard HTML. It is just sad that Word does not have the ability to properly render CSS, which is a de-facto standard for positioning elements in HTML today. It's not all about e-mail marketing either. These days I am at MSFT, and even our PA can't get the formatting of e-mails wishing co-workers a happy birthday to come out correctly. There is no reason to use Word as a rendering engine. To use it for mail creation is fine, but please use IE or a standards-compliant rendering engine for display. Thanks.

So keep the Word authoring tools, but fix the HTML that it outputs so you can use a browser to render it, just like everyone else. And the Email Standards Project might not be anything official, but it is an attempt to establish some consensus, while you just do what you want. "The 'Email Standards Project' does not represent a sanctioned standard or an industry consensus in this area. Should such a consensus arise, we will of course work with other e-mail vendors to provide rich support in our products" I think this campaign just showed you that such a consensus has arisen. Or is the support of 18,000+ Twitter users not enough?

Peter: E-mail written with the Word editor is just e-mail; it has nothing to do with the new Office OpenXML document format.
It's just an HTML, RTF or Plain Text e-mail and any e-mail client in the world that supports those formats will be able to read it fine. Dennis: Yes, as of Outlook 2007 Word is the *ONLY* editor in Outlook so you are using Word to create plain text or Rich Text (RTF) e-mails as well. -Ben- Please make Outlook 2010 render email according to standards. Anyone who has ever created an email blast will tell you that switching to Word as the rendering engine in Outlook 2007 was a bad decision. If the next version of Office does not do a better job rendering HTML email it will be a step in the wrong direction. Actually, why bother generating HTML at all, if your target recipient is also using Outlook? Why not just send a multipart/alternative message containing text/plain and {some Word-specific MIME type} parts? Honestly, if you want to operate in your own little Office ecosystem and ignore what the rest of the world is doing, that's fine, but why not use the mechanisms which have existed since before Outlook was created to do so? "[click here to] Read this issue online if you can’t see the images or are using Outlook 2007." - Quoted from the official Microsoft Xbox newsletter. Even your companies own marketing teams cant send out appealing newsletters using the tools you are providing. At least give us a meta tag that triggers Trident >.< Sorry Microsoft you just don't get it. Your rendering of email is so far off what every other client can do - including Hotmail/Windows Live Mail - if they can do it, inside a web browser with all their other bits and bobs, why can't you? I'd suggest you go have a chat with your colleagues. Great - you can create your email within Word - that's great functionality. But what about viewing emails that people who *heaven forbid* haven't created their email in Word? 
How do those email display on other platforms - I'm assuming you'd hope they'd display correctly - or are you adding all sorts of proprietary tags, as Word has a habit of doing, so that they only look ok if you load the email up in Outlook somewhere else... So what if the campaign is run by the creators of Email Marketing Software... it needs someone like that to get behind it to give it the exposure needed. There are now over 19,000 web designers and email marketeers here using all sorts of different packages who are agreeing that you're going down the wrong path... listen to them! I should mention that as far as I know Gmail strips CSS. I can't remember if it's only external CSS files, or maybe it only supports a subset of CSS, but the point stands. This blog post is a distraction from the original intent of the 'movement' on twitter. They are concerned with how emails will _DISPLAY_ in Outlook 2010, but this blog post spends most of it's time talking about the virtues of _COMPOSING_ email using Word. You (Microsoft) have clearly missed the point of the campaign - the concern is only with how received emails are rendered. By failing to support CSS formatting, and even major portions of HTML, you completely wreck the design of all but the most outdated emails. If IE can display it correctly on a website, Outlooks should be able to display it in an email. Is that so difficult to understand? Please please please let Outlook 2010 display CSS properly. CSS is the standard layout technology for the web, and it makes sense to use it in email. @ Jim Deville Have you ever looked at the HTML it creates? If you do you'll see why that's a problem too. All we want are web standards! We want to code emails the way we code websites! And as for security reasons being so crucial, then install a HTML rendering engine to display emails and force scripts to be turned off then. 
Slandering the ESP isn't becoming either, these guys are working with other email clients for the greater good. Unless I'm misunderstanding, The "Email Standards Project" does represent an industry consensus. That's what the whole point of it is. Surely the frustrating experiences of all your end users who open CSS-based e-mails in Outlook count for something. Alex is right -- use Word to formulate e-mails to send. Disable all the scripting necessary to make Outlook secure, but render html in a predictable, standard fashion so that we marketers aren't required to hack specifically for your e-mail client. William, can you say that every e-mail you've received in Outlook '07 looks as intended? I receive e-mails every day from national advertisers that don't render correctly in Outlook. Travis Bell said exactly what I was thinking. There may be no "set standard" for how to use HTML in an email, but in reality with the HTML standard there is one: HTML. Noone said we want ActiveX support in our emails, we want CSS and HTML support in those emails. HTML and CSS, in and of themselves, if you block ActiveX, and Javascript, etc, is no more harmful than MS Word rendering it within it's tight rules system. IMHO, I think this route is being taken because it already works and means less work having to support HTML/CSS standards.. I have a company that send emails on behalf of clients, and to be honest, Outlook 2007 is a real pain... it has taken development a step back in many ways as it doesn't support common standards. The primary culprit is the limited CSS support - there is no support for CSS floats or for CSS positioning. With the exception of color, CSS background properties are not supported; this includes background-attachment, background-position, background-repeat and background-image. I hope that MS takes the campaign running on seriously! Travis, "Standards compliant" implies a standard. They are saying their isn't one. 
This is a step backwards from a standards point of view. Build the emails in Word that's fine but give outlook a different rendering engine that mirrors what you just gave us with IE8. Why would you take this step back? While there is no de facto standard for HTML email, there is a de jure standard, and that's what the whole fixoutlook.org commentary is about. HTML 4 and CSS 2.1, without needing to resort to <font> tags, etc... are what email designers want. In my experience as someone who provides email marketing to clients, building HTML email can be an incredibly painful experience. These messages are generally hand-coded, not composed in Outlook, which means we get none of the benefits of Outlook's slick authoring abilities. Unfortunately, hand-coded HTML email is very hard to make look good in Outlook 2007. Adding consistent support for a few key CSS properties — margin, padding and float — would make my daily work much easier. Building HTML email for Outlook 07 is like making a site for Internet Explorer 6 -- it is time consuming, confusing and totally unsatisfying. It's a guessing game: maybe I should try line-height, maybe padding will work on this element if I nest it in a table... In the end, the reading experience is invariably poorer for Outlook users than for users of other email clients. I can't even begin to imagine how complex an app like Outlook is under the hood but I hope you can look at ways to improve the CSS capabilities of Outlook 2010. I've got no axe to grind, not that I think that the email marketing company mentioned does. It was just as annoying when Notes was the client that caused the most problems when sending out completely legal, solicited emails. Lotus have got their act together and the latest version is much better. It's only Microsoft that seems to be going backward. Security and ease-of-use are worthwhile aims, but I don't understand how this helps? Oh good, marketing spiel rather than words from actual people. 
Whether your reasoning is good or not, the people who are opposed to using Word for rendering are not the kind of people who are going to be won over by heavily authored statements like this. You would probably have been better off not making any statement at all. Here's the reality, whenever someone wants to include rich content in an email (ie charts, graphs, pictures), they do it as attachments. I have NEVER encountered someone using anything beyond the basic rich text formatting options when composing an email, most people don't even use those. For the record, I don't work in a company full of purists either, I work with your everyday office computer users. This makes me highly skeptical that there is a market big enough to justify all this functionality that **only other users of outlook can actually appreciate** Please let your developers write your next response. I mean this in all honesty and kindness, but I don't trust or buy into statements that come from management. Travis Bell stated the issue about as succinctly as anyone could. As a Web developer, I'm happy to have him speak on my behalf on this issue. In the end, it's just that simple: designing an HTML email should be exactly the same process as designing any other HTML document, like a Web page. In my opinion, the Email Standards Project <em>has</em> created an industry consensus. If Microsoft can re-navigate software as broadly used an Internet Explorer onto a course that gives respect to Web standards (not perfect, but far better than IE6), why is the team crafting Outlook so determined to move in completely the opposite direction? Thanks so much for the comments WIlliam. <blockquote>"There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability."</blockquote> I'm not sure you guys are seeing the point being made by the 18,000 people who have tweeted about this today. The consensus *is* web standards. 
Even if you don't support everything in the W3C specification, those that you do support should be standards compliant. That means margin and padding should work, table formatting shouldn't break. Even basic box model support would be a huge step forward. If you're interested, there is a complete list of the basic CSS properties Outlook need to support to bring it inline with the rest of the majority of the industry from a standards perspective: The issue here isn't about composing emails in Outlook. I don't care if people use Word to compose emails. What I care about is what people see when they receive and view HTML emails in Outlook 2007+. The fact that Outlook 2003 had pretty exceptional CSS support and you all decided to switch to Word for rendering in Outlook 2007 was a gigantic step backwards. That's what I'm especially mad about. Regarding there being a "widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability" - I encourage you to work with the Email Standards Project. They're the only one's who've stepped forward to try and deal with this issue. Yes, FreshView/CampaignMonitor is behind it, but I truly believe the ESP has all of our best interests at heart. The problem is that every other major email client has settled on a rather broad set of standard HTML support (nothing special or fancy, just standard) and Outlook has not. There is no inherent risk in supporting web standards. For the most part, as long as you disallow scripts and embeds you should be done. The result of this is that users of Outlook will see a much worse rendering of simple HTML. You already have IE and the Trident rendering engine. Why not use that to render HTML within Outlook? You could improve Word's HTML rendering engine to add support for features made in the last decade. That might make people happy. Seems like a waste to maintain two HTML rendering engines within the same company, though. 
Couldn't the email be composed using Word and then converted to HTML when it is sent, and have all HTML email displayed using IE? If there's a worry that IE wouldn't display it properly due to rendering engine differences then surely there's also the worry that every other HTML email client might display Word-generated emails incorrectly as well. Why do you need an agreed subset of HTML? What's wrong with straight out HTML in the first place? I understand the power of Word as the authoring tool, but frankly it sucks for rendering. As a web designer who frequently builds HTML emails for my clients, I'd love to be able to build them the same way I build HTML web pages - using all CSS and current design standards. But because so many of the e-mail recipients are forced to use Outlook, I'm forced to make incredible design sacrifices in order to build something that will look somewhat-acceptable in Outlook, and all because it uses Word's rendering engine. Whatever Office users are using to build their own HTML emails should have absolutely no effect on what Outlook uses to render them. Please make our lives easier, and help ensure that everyone all over the world has a chance to view content in the same way, and consider bringing Outlook up to speed with current web standards. I agree with Travis. As a developer, I don't as much what Outlook users will use to create their own e-mails to other people. What I do care about is that e-mails that my company creates will be displayed correctly in Outlook using modern HTML coding techniques. As Alex suggested, go ahead and use Word to create the e-mails, but use IE to display them. That way, we can develop better code-quality e-mails for use in our backend systems and know that they will render correctly. Mr. Kennedy misses the point. The issue is the rendering engine used for displaying HTML emails. The Word 2007 rendering engine used in Outlook 2007 is far interior to Internet Explorer engine used in Outlook 2003. 
As someone who codes HTML emails for a living, it is ridiculous that we cannot reliably use padding, image floats, and background images in our emails. Outlook 2003 used the Internet Explorer engine for rendering HTML and allowed users to compose email using Word if they wished. Outlook 2010 should go back to this dual approach. BTW, I am not a CSS zealot and don't think tables are evil, etc. I'm guessing it would be to hard to determine if an email was sent from Outlook, using Word specific features and use that rendering engine in that case. Otherwise, default to the IE8 rendering engine? Fine, we get it, you can create rich email in Word. But since they get converted to HTML, why do you have to use Word's rendering engine to display them? Web standards are the way forward, with Outlook 2010 and IE6 Microsoft is single-handedly holding back the web and making web designers' jobs more frustrating. PS. I love it that the examples of rich emails you gave were charts and diagrams. it's almost straight of out of the Apple adverts. This is absurd. No wide consensus? We're not talking HTML, here, we're talking CSS — and CSS rules like "float" have been around since 1.0. The real problem here is that you're unable or unwilling to make your various formatting tools (shown above) spit out proper HTML, so that your email client can _read_ proper HTML/CSS. In addition, you're stating that because no clear industry standard exists for HTML emails, you think the best solution is to use the rendering engine of a proprietary text processor. Aside from that being a flawed conclusion, in my opinion, the basis is also terribly wrong: I believe the success of this Twitter campaign makes it quite obvious that a standard does exist. Perhaps you should start a campaign for _preserving_ the Word Engine and see how many retweets you get. Apologies if this is a bit of a flame comment, but come on — it's Outlook 2010. 
One would think it could render HTML emails better than Netscape Navigator 4. Like I said, this is just absurd. This again underlines Microsoft's intolerance for open standards. HTML and CSS are open standards and other mail clients have no problem supporting them. This leaves users to make their own free choice based on features, user interface and other preferences, rather than treating their customers with contempt by locking them into a product ecosystem on the pretense of greater usability. I'm not sure I can add to what has already been said, except to say that everyone here and on Twitter is right. The reason why the Twitter campaign was so successful is because anyone who has tried to write an HTML email campaign that works with Outlook has been incredibly frustrated. There is an entire industry build around trying to make sure HTML email campaigns work in Outlook. That should tell you something. Blog posts like this do not help. It doesn't even sound like it was written by a person, more like a team of marketing experts. Anyone who has ever tried to make a decent HTML email for Outlook knows this post for what it is. William, Kudos for the quick response. But instead of going on defense, you should actually *listen* to your customers' concerns. Your entire post fundamentally misses the point. It's not about e-mail *creation*, it's about e-mail *display*. It's cool (tho naieve) that you guys think Outlook is used to create every e-mail on the planet, but it isn't. As a .NET developer, I've created many websites that use simple SMTP clients to send HTML formatted messages that were designed using Visual Studio or Expression Web. You guys like having the Word 2010 engine being the one to display HTML: fine. So, update Word 2010 to render markup the same way IE8 does. Seems like a really no-brainer solution to the problem, that doesn't need a "standards body" to create. There already is a standards body for HTML, and that should be good enough for you guys. 
Robert McLaws, Windows-Now.com

"For e-mail viewing, Word also provides security benefits that are not available in a browser: Word cannot run web script or other active content that may threaten the security and safety of our customers." So there's no easy way to disable scripting and active content while viewing email with the IE rendering engine? That sounds broken to me.

The main point of contention is the RENDERING of HTML emails. William's response seems focused on Word's AUTHORING capabilities. Why is the Word engine not limited to authoring, with IE8 used for rendering?

If composing emails in Outlook sent to other Outlook users is top priority, then you guys are on the right track. Unfortunately this isn't the case. Without support for extremely basic things like background images in CSS, it's near impossible to create a rich email that works for all users. I don't know about anyone else, but in my life it's not 100% of my friends using Outlook for email.

Exactly what Travis said. A lot of people are unhappy about this.

Travis Bell pretty much said what I wanted to say. So I'll thank him for stating my thoughts so clearly.

I personally think it's good that you focus on a great user experience when composing e-mails. But the issue as I see it is not whether or not to use Word 2010 as the e-mail editor - the issue here is that you also use its rendering engine when displaying incoming HTML e-mails. I understand that it doesn't make sense to use it for composing but not for displaying. But couldn't you then at least fix some of the many bugs and improve your CSS support? That's all we ask - everybody wins. I suggest you take another look and see if you can improve any of the issues raised in that blog post.

The complaint on Twitter via fixoutlook.org is not about how emails are created but about how they are rendered.
To take a concrete example: when building a newsletter, web developers have to play with the rendering behaviour of all the different mail clients. So again, as I'm not the first one commenting on this blog post, why not use a well-known standard such as HTML? In other words, why not use the IE renderer? If it's just because of some security problems, I still don't understand why standards-compliant DOM elements are not supported.

Using Word for writing emails seems to be your main focus, which is fine. For rendering/displaying emails, I feel like it should render like any other web page. You should give the user different options so that they can customize this if they don't want to use one or the other.

I understand the bias, seeing as this is an MSDN blog, but you seem to assume that the world exists to use Microsoft and nothing more. In a perfect world, we'd all use only MS Word to create and send perfect HTML emails to recipients on only MS Outlook, seeing as it's the 'best e-mail authoring experience around'. But we all know this isn't the case. Email marketers everywhere rely on Outlook and other email clients to properly render messages NOT created in MS Word. However, seeing as Outlook (like it or not) represents an overwhelming percentage of consumer inboxes in almost every industry, any obstacle to a message's rendering effectively impacts the revenue of hundreds of companies around the world. It's because of this fact that the Twitter campaign is taking off - not because petty designers are lazy or need something to yell about, but because Outlook simply needs better CSS support, as we saw in older versions of Outlook (pre-2007) where IE was the rendering engine. Your explanation in this post, however informative, still seems myopic and all too self-serving. When dollars/pounds are at stake, especially in this economic climate, errors of judgment of this scale need taking to task, which is exactly what the Email Standards Project is trying to do.
Regardless of whether or not they're a 'recognized consensus', they still represent a valid reason for seriously reconsidering the dev road map for Outlook 2010. Thanks, Jon

As others above have said, if you improve Word's rendering of HTML to be (at least more) standards-compliant, then you solve the problem. Is there a reason why you can't do that and also keep supporting all of the Office-only rich features that are so important? I don't think you should punish Outlook users by forcing them to hand-code HTML emails.

We're not asking you to remove any composing capabilities. Just bring the rendering engine up to date so HTML email not built in Outlook displays properly. I'm tired of all the workarounds I have to code just for your email client.

My company stopped sending HTML emails with Outlook 2007 because the Word engine adds crap code to the package which some other email clients don't understand. Honestly, how hard is it to support the web standards that have been around for years now? Please?

The whole point of the original statement is that there is no standard for HTML in email. Additionally, implementing the full standard for HTML, XHTML or CSS would open up all kinds of fun new tools for spammers. As a developer for SendLabs, an email marketing tool, I can say that rendering emails consistently across clients is a nightmare. I can also say that Outlook is by no means the worst culprit. GMail and Lotus are where the bulk of our rendering issues lie. In the end, the primary function of email is sending text-based messages. Word is Microsoft's text document editor, which makes it the most likely choice. While we developers may be more vocal, in the end I'd say the vast majority of Outlook users would prefer an MS Word-based editing experience rather than a Dreamweaver-esque WYSIWYG.

Yes, there is no consensus on a subset of HTML for email, but of course there are standards in HTML itself.
Word-generated HTML isn't really following these standards, despite the same possibilities, especially in new versions like HTML 5. So why not push those standards with the new version and make sure that ANY email client can have the described stunning experience with emails from Outlook? (Sure, selling products is an answer...)

It is a shame that you guys continue to go against the current on these topics, which creates many issues for developers worldwide. It is a fact that the most problematic email client is Outlook; the same applies to your web browser, aka Internet Explorer.

HTML in email is the spawn of the devil and should be banned. Do your own thing, Microsoft - as long as you continue to allow us to specify TEXT ONLY emails.

While the Email Standards Project is backed by Freshview, that doesn't mean the project is not worth listening to - bringing it up detracts from the issue. They have done some great research, and have good ideas on what HTML/CSS should be supported in an email client. As a developer, I am mostly concerned about the rendering quality of email clients. If I need to send automated email, I would like to be able to construct the HTML like it is 2009, and not 1999. I know you have to balance the needs of Outlook users with the needs of developers, but I think you can make both happy by supporting the rendering of modern HTML.

We all understand Microsoft's desire to have its own emails created and viewed consistently. That is perfectly reasonable. Can you not understand the rest of the world's desire to stop creating HTML emails using the equivalent of HTML 3.2? Does the Outlook team disregard the existence of DOCTYPEs when they make these statements about consistency? Surely it wouldn't be difficult to implement your Outlook-authored emails using a Microsoft doctype, and include real HTML/CSS support for developers who prefer it. That would be a true commitment to interoperability. You could have your cake AND eat it too!
Sadly, the ideas in this blog post reiterate the traditional Microsoft sentiment: they are only concerned with the user experience of non-technical end users of their products. They are in no way concerned with the thousands of developers and designers who deal with the quirks of their awkward and selfish technical decisions day in, day out.

"There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability" - I can't believe I just read that: HTML 4 and CSS 2 have been standards for many years now. It has taken IE8 to render them (more or less) correctly in a Microsoft browser... now get your act together and do the same for the standard Microsoft e-mail client.

The problem, as Alex stated, is not with CREATING HTML emails in Outlook. It's the way they are rendered when you RECEIVE them.

I think Travis put it well. Let us build, and let you render, an HTML email the same way we do web pages. Not a big ask, I'd think? Cheers for the response.

This appears to be the same awful argument for improving the user interface of browsers (adding tabs, RSS buttons, etc.) instead of improving the rendering of good HTML that we heard back with IE6. It's code for "we'd rather have people be able to send graphs made in Word to other Outlook users than make it easier for email marketers and their developers to send beautiful emails!"

Disappointing that web standards are simply not considered a necessity, given all the work to bring Internet Explorer 8 up to speed in passing the Acid2 test. Observe web standards please; Word is far too out of date to be the engine.

As a developer who often has to code HTML emails for clients, I'm quite frankly appalled at this - the Word HTML rendering engine just isn't up to the job, as it doesn't support many common and absolutely basic HTML attributes. Outlook 2007 is one of the worst clients for displaying HTML emails, and now 2010 will continue this? What a disgrace.
Thanks for nothing for making things many times more difficult, frustrating and annoying.

HTML is evil - I'd like a plain-text-only option, please.

To be honest, this is a terrible decision. When people talk HTML standards, they talk about the ability to create a latest-version HTML layout (using all the latest tools - e.g. style sheets and divs, not tables) that will render in all HTML viewers (browsers, email, etc.) with output that looks exactly the same. Microsoft (and some other much smaller groups) keep suggesting that it is about the technology itself. It is not. The issue is about simplicity. Do it once, do it right, do it universally, and it just works. We are always having to do it twice, just so that we fit Microsoft's platforms in somewhere. This is how you are losing market leadership on IE and Outlook. Otherwise excellent, strong products... but they just no longer "fit".

OK, here is my POV, and why I subscribed to fixoutlook.org: if you want to include Word capabilities in Outlook, just make it send mails in Word's native format, not HTML. HTML IS a standard, normalized by the W3C. As such, it should be rendered properly by any user agent that claims to support it. The W3C specs for HTML *DO NOT* specify that HTML (and CSS) must be sent over HTTP and not by email. The specs specify how HTML/CSS should be rendered. If Outlook does (attempt to) render HTML/CSS, then IT IS an HTML user agent and should conform to the specifications.

You're beating around the bush here. You're not talking to users, you're talking to developers and designers. Stop telling us about the security and ease of use of your so-called product. I don't care about power or ease of use. Give me my HTML- and CSS-standard emails.

I'm pretty freaking sure that if there's a group that writes and tests expectations, such as the support for HTML and CSS in email, and it happens to be named the Email Standards Project, there's a standard.
Now, it's not written on a giant nonexistent tablet declaring the laws of the web, or penned in the constitution, but neither is my right not to be slapped upside the face by a major development company. Just please stop being so proprietary. The amount of heartache you have given me over the years from the combination of 98/2000/XP and Internet Explorer is unbearable. I'm not the first one to say that I've nearly broken into tears spending hours trying to fix the mess when your software destroys my neatly written code.

If "there is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability," then why not join the discussion and work on it with the rest of us? I would think the Outlook team has that responsibility. Also, while the Email Standards Project was created by Freshview, they take a pretty unbiased stance when it comes to testing for standards compliance in email clients. This is the email "acid test" they use: nothing in that list is bleeding-edge or caters to one specific email client; it's all pretty standard stuff that's been available for quite a while. I look forward to Outlook working more closely with web developers and designers. FYI: Gmail needs to step it up as well.

If Word is the rendering engine, can Word be fixed to render the same HTML and CSS that has been approved and standardized by the W3C? We aren't asking it to render anything different from what has already been approved. I understand the security concern in running scripts within HTML, but that is not what we are asking for.

What about W3C compliance? You've attempted to fix some of the issues in IE8. Why are you choosing to go "backwards" and force designers to think in terms of web pages versus emails? It's bad enough that we already have to test in IE6, IE7, and IE8 (besides your competitors' browsers) just to make sure that we catch every one of their nuances. Outlook 2007 for HTML email design has been a pain to deal with.
Now you want to continue with the same practice in 2010? What is going to happen when you come out with IE9? Are we going to have to run a Virtual PC for every version of Outlook and every version of IE just to make sure that it looks okay and presents our message as we intend? There has got to be a way to balance ease of use with standards compliance. Adobe has done a pretty good job with Dreamweaver. Why can't you do the same with Word and Outlook? IE7 and IE8 have been a huge improvement over IE6. We're really glad to see that.

Just as Ani M says above, designing HTML emails for Word has been a huge problem. In most other email clients, the designs degrade predictably, but Outlook '07 is in a world all its own. I'm always having to answer the question "why does it look weird in Outlook?"

Time to go back to TEXT-only emails!!!! Thanks!

If Outlook users want to send charts, can't they attach a DOC or a PNG like everyone else? If you really think your users need all those bells and whistles natively in their mail client, you should be working towards giving Word the ability to export to the accepted format (standards-based HTML), not reinventing the rendering engine.

How hard would it be to send and display Word-formatted emails using some custom header or code, and then employ IE8 for rendering email as a backup? Then you would actually have web development fans instead of continually alienating the community on which you will rely when everything moves to web apps, which it will.

When we talk about sending HTML e-mail, I assume it is as some sort of MIME multipart, yes? Why is Word needed to render it for viewing (not editing)? Or are people jumping to conclusions about that?

I know that Outlook has a great piece of the email client market... but why set yourselves apart from the rest of the world? Are Microsoft creating their own standards for email clients? It really looks like it.
I don't want to send emails from Outlook if they will look crappy on my client's computer just because they don't have the same software that I do. What about interoperability? Please reconsider better CSS support in your next releases.

HTML email formatting is a nightmare everywhere at the moment, and Outlook really is the worst offender (closely followed by web-based clients like Hotmail/Gmail/Yahoo). The complaint here is about the display-rendering aspects, not the composing. IE7 and IE8 have made considerable improvements in this regard, even if you did have to introduce 'compliance mode'. Do the same and you'll find a much larger fan base for this product. This entire campaign is your customers trying to help you, and influential ones at that - these are the grassroots designers and developers who guide small businesses, friends and family everywhere. These are the people that will create the next Flickr, Twitter or Facebook. These are in fact the same market that your Silverlight/CodePlex/web framework/.NET MVC groups are trying so hard to crack into, and that your marketing and PR people are hunting for. Listen to them.

There are people out there who haven't upgraded their offices past Outlook 2000 because of issues with the rendering engine. Forwarding messages on, in particular, often completely breaks the layout of the message.

For those of us in web development, I don't particularly care how you put the HTML into an email - what I care about is how my HTML is displayed. Having to use 90s-era HTML to ensure that customers using Outlook can see my email the way it was meant to be displayed is frustrating, and I feel there are accessibility issues with using hacky table-based layouts in email. (I do and will always include a plain-text version, but many people don't.) I don't dispute the power of the Word engine in Outlook for composing messages, but that's not what this campaign is about.
This campaign is about receiving nicely formatted email - and being assured that the email you're sending is going to render the way you want it to for the receiver viewing the message. If I send an email to someone with nice graphics and formatting, I just have to cross my fingers that the receiver can view it correctly. It's not like a web page where I can test it in all browsers - and I don't have time to test all email clients to see if the message renders OK. Unfortunately, if the message doesn't render nicely, I come off looking unprofessional, all because a simple standard cannot be agreed upon. Email should be easy to send and easy to receive, and I shouldn't even have to worry about testing an email I want to send to my clients. By not supporting some kind of standard in message display (and standard HTML/CSS seems the most logical choice), we are not making any progress here.

As a person that does e-mail marketing, let me tell you that Outlook 2007 is a real disappointment. It's such a disappointment that, in fact, I refuse to upgrade, and still use Outlook 2003 because it has exactly what a lot of the people above me are asking for: a different engine for creating e-mails and another one for displaying them. The secret that is not being mentioned here is that your authoring engine could not author HTML that your very own rendering engine could render correctly. So you decided to ignore the world and replace the rendering engine. Even if we concede that there are no standards for email (debatable), if you wanted our respect and really cared for your clients, you could have started an initiative on creating those very standards that you claim are missing, instead of making life difficult for designers and diminishing the overall user experience.

I'm curious how many normal people really spend a lot of time generating rich HTML emails? In my experience the only rich HTML emails I receive are marketing emails.
Do normal people have real-world requirements for sending richly formatted emails? I'm genuinely curious.

I guess my first comment didn't go through? --- If "there is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability," then why not join in the discussion and help define email standards? I would think the Outlook team has that responsibility. Even though the Email Standards Project was created by Freshview, their "acid test" is unbiased and tests each email client against a set of CSS properties that have been around for quite a while. Outlook isn't alone in its poor compliance; Gmail and Lotus Notes test poorly too. So, this isn't a problem exclusive to Microsoft.

You have covered authoring emails, but as designers we're more concerned with how emails look when they are received. If you're going to let us use HTML, then let us use standards-compliant HTML. Shouldn't HTML be the same whether it's in a browser, on a mobile phone, or in an email client? Isn't that why the W3C came up with the HTML standards? I think your problem is that you're separating web-page HTML from email HTML when they should be one and the same. HTML is HTML.

"Here are some images that show some of the rich e-mail that our customers can send, without having to be a professional HTML web designer." Yes, and those formatted messages can only be viewed correctly in Outlook, whereas most of the features you tout (SmartArt, charts) don't even carry over properly into Entourage 2008 (Microsoft's own Mac email client), let alone the email clients used by thousands.

All of this would not be such an issue were Microsoft not attempting to defend a regression in functionality. Outlook 2003 contains more complete support for web standards than 2007 or 2010 - the basic functionality of rendering formatted emails has gone backwards.
"There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability." This is true. There is no "eHTML" standard (perhaps there should be?). However, Microsoft claims to support HTML. Not "we provide partial support for some HTML tags from the HTML 3.2 standard". Please. Acknowledge that what happened from Outlook 2003 to Outlook 2007 was a step backwards, rather than packaging this as some sort of win for your locked-in customers.

As Alex has raised, it is the rendering that is important. Dan has suggested that the issue is a marketing campaign by an email service provider. That is far, far from the truth. The majority of software systems will use the Internet in some way and as part of that will use email, most likely HTML-formatted email. Even Microsoft's Xbox communication has problems with Outlook 2007. When Microsoft makes an email "authoring" decision that causes rendering to come out WRONG, this has a flow-on effect to the software industry and the wider customer base. Because Microsoft (and Outlook) have such significant roles in the market, it generally means cost for other software providers to provide obscure compatibility with the Microsoft product. As the world continues to embrace a "digital economy", more and more of our business and personal life activities move online. It is very sad that Microsoft, the market leader, cannot produce a product that embraces the concept of a COMPLETE user experience... authoring and viewing. Microsoft, this is a message from your customers! They were appalled by the failure of Outlook 2007 to properly render HTML and are equally appalled by the decision to fail to address the serious shortfall after a further three years.
I think this post misses the point of the objections - the problem is not directly related to the fact that Word is the editor/rendering engine, nor does anyone seem to be disputing the fact that by using Word as an editor you can create interesting content. The point is that Word is a low-fidelity renderer of some aspects of standards-based HTML and CSS, and that is disappointing to many, given Microsoft's drive to fully support standards elsewhere. Given the fact that Microsoft has a strong standards-based browser in IE8, and has the editing tools in Expression Web, Visual Studio, etc., it is unfortunate that Word is still the editor of choice for this format of email in Outlook.

There's gotta be a better way than just using Word to render e-mails. At least with Outlook 2003, we had IE as a rendering engine and could get some nice things done with HTML e-mails. Why not do a hybrid? Use Word for composing messages and have a more standards-based rendering engine (maybe built from IE8?) for reading HTML e-mails.

The Email Standards Project is the authority right now. No one else seems to care. Is the WaSP not an authority? The W3C? I hope the door is not closed on this issue...

Why don't you guys create some sort of trigger that allows email marketers to take advantage of the Trident rendering engine, and make the default email setting use Word? When 'Joe Corporate' composes an email, it would use MS Word. Then, when we send out an email, we can place some HTML code in our email which will use Trident rather than Word on the recipient's side.

I have always used Microsoft's email clients; my all-time favorite is Windows Mail. I understand Office's vision of making it possible for non-specialized people to produce all sorts of content, but I wouldn't like that to be the cause of a step backwards. Great blog, by the way.
Seriously, how hard would it be for a third party to write and release an Outlook add-in that changes the email rendering engine to use IE, or even better a WebKit release? If MS aren't going to play ball on this, perhaps somebody else should come to the party.

I don't personally mind that Word is used to create e-mail in Outlook. In fact, I can appreciate its ability to allow the easy creation of rich text messages. My issue specifically is with the use of Word to read e-mail. I find Word to be far less accessible as a way to read e-mail than Outlook 2003, Outlook Express and other mail-reading software. Word just causes more problems when reading mail with our specialized screen readers such as JAWS, System Access and Window-Eyes. Please, at a minimum, consider allowing us to adjust a setting that would let us decide how we wish to read our e-mails in Outlook. Thank you.

I actually agree with the web-standards compliance debate. Outlook should absolutely be sending compliant message bodies. There's nothing wrong with using Word as the editor, because it is really a great editor. Hands down... no arguments there whatsoever. But what we're talking about is the standards-compliant output. We would love to have Word as the editor, but we are asking for Outlook to send e-mail with web-standards-compliant message bodies. Thanks for listening to the feedback! We really hope you take action on the outpouring of feedback from industry leaders. Ed B.

I don't think my last message made it past moderation (take from that what you will). I'll just leave it with saying I agree completely with what Travis and the vast majority of people are saying on here - please stop standing in the way of progress in the world of web standards and make Outlook 2010 render emails with standard HTML and CSS. My job is hard enough debugging for IE6 without this kind of stuff too.

Thanks for the blog post... but I think it's time for a change. Please, guys.
"As an example, here are some images that show some of the rich e-mail that our customers can send, without having to be a professional HTML web designer." This is fine, but why do you have to make the jobs of professional HTML web designers more difficult? As mentioned many times, what is so hard about letting people use Word to compose their emails, but still rendering emails based on web standards?

I do hope you will use a proper rendering engine (e.g. IE), rather than Word, for displaying emails in Outlook. It's silly not to properly support CSS in this day and age.

Thank you for taking the time to write a measured response to the Twitter campaign, but I have to take issue with your assumption that Word is the best tool for composing emails. Do you have any metrics for the percentage of users who might send emails containing graphs, SmartArt and other "power features" vs. those just using it as a simple text editor? All I can remember from moving from Outlook 2000 to 2007 was that, aside from the adding of these power features, the very basics of email composition (not even HTML rendering) went downhill. I could no longer interleave the auto-indented original mail with my un-indented replies, as the formatting tools wouldn't seem to allow it. Nor could I copy and paste blocks of text around, or even delete paragraph breaks, without blocks being erroneously re-styled with neighbouring spacing and layout rules. In my opinion, the use of a powerful word processor with a hierarchical styling system is not the best way to quickly compose emails and responses in a relatively small window. I am in no doubt that "the easy option" (from your software engineering and testing standpoint) of Word for composition and Word for rendering produces consistent results in an Outlook-only environment.
However, surely IE8's rendering engine would produce higher-fidelity results for emails received from corporate environments running other software such as Notes, Evolution, Thunderbird or GMail for Domains? Surely users of Outlook will (correctly) blame Outlook for inaccurate display of emails received from external sources, and this will reflect badly on Microsoft and the Office suite?

Whilst the fixoutlook.org site may concentrate on what you may quickly brush aside as frivolous HTML marketing emails, it is also concerned with rich emails from e-commerce web apps, such as order confirmation emails, printable offer forms and the like. I spend some of my life as a web developer creating the latter kind of rich HTML mail, and even after taking into account the need to put on my 1990s HTML hat and use lowest-common-denominator tables and font tags, I continue to spend around 30% of my time dealing with rendering quirks and bugs from Word/Outlook 2007. Not only is this infuriating for people in my line of work, but other web companies might not have the money or resources to devote effort to working around Word, so emails will go out broken for your users. And trying to work around the problem by composing HTML emails for mass consumption using Word to begin with is not a feasible solution either, as not only are the tools and design expressiveness lacking, but other mail clients and webmail systems (including Windows Live Mail) will take issue with the odd, non-compliant CSS and markup generated.

Your final paragraph comes across as rather inconsiderate to the large percentage of web developers who would stand behind the Email Standards Project. Perhaps there is no official "email industry" support, whatever you might define that as, but can you really so brazenly ignore the fact that you are significantly increasing the workload of any web developer whose site sends out a rich email to a user?
The level of standards support we ask for is simply something similar to that given by the browsers in common use today, so there is not such a disparity in the accuracy of display of HTML content on the web and HTML content over email. I've seen you say elsewhere that including two rendering engines in Outlook would be needlessly resource-intensive, but would using the IE engine installed on the system really be such a strain on modern hardware, whilst having the Word engine loaded to take over when the appropriate HTML namespaces were detected in an Outlook-to-Outlook message? Thank you for taking the time to read my comment, and please consider commissioning some metrics on rich email use before dismissing this Twitter campaign as the uninformed anti-Microsoft bandwagon that the tone of your blog post seems to suggest you believe it to be.

One thing I have to ask is why Word was ever designed to create HTML anyway. It's a word processor, not web design software. If Microsoft customers want to create web pages, there was FrontPage and now Expression Web. Why (literally) reinvent the wheel? How many Office customers use more than 50% of the functions of Word? Do you really think there are people out there creating quality web sites and pages with Word? Even from a layout standpoint, Word is terrible. So why would you choose to include HTML rendering in a word processor, and then essentially force all web designers to deal with using a word processor's HTML engine for rendering dynamic email? It just doesn't make sense.

I think Leo Davidson hit the nail on the head with his comments about Word code rendering in IE. Even if you completely take designers and developers out of the picture and focus on end users, the ones the switch to Word was for, they're not being served well by the current lack of standards support in Word. The code it produces is a nightmare and it simply doesn't render well in other email clients.
Wouldn't it be a win-win situation to improve that? You help your customers truly create visually stunning emails that THEY can send out and have received by everyone, not just fellow Outlook 2007/2010 users, and people sending HTML emails to them can also ensure that those emails remain stunning. No one is asking for the sun and the moon, or even the ability to add script (I think most everyone can agree that would be bad); we're just asking that, for the things you do support, it be standards compliant. Listen to the W3C: those are consensus-built standards!

(Written on a computer running Vista, composed in Word 2007 while listening to a Zune; not a Microsoft hater.)

As others have mentioned, a simple switch would be awesome. Leave Word as the default, but for those of us who are authoring *HTML* emails, the rendering engine should be IE, not Word. For those who have used Outlook (Word) to author their emails, the rendering engine should be Word.

I just can't understand why you'd do this. Word is a terrible HTML renderer, and everybody knows it. There's got to be some kind of compromise we can all come to. The WWW was held up for years thanks to the shoddy renderer in IE6, and doing the same with email is just incredibly sad, not to mention irresponsible. Word just isn't up to the task at this point; it needs a lot of love (which has already been given to IE, no less) before it will be. :)

I'm disappointed that some common ground cannot be achieved here. I'm certain there is a technical rationale for not allowing FULL web-standards rendering in email; I can think of at least a few malicious or otherwise unscrupulous techniques that would allow you to do things that would be a disservice to the end user.
I think the solution for all involved, however, is to use a subset of CSS rendering for email: modify the rendering engine to only allow certain selectors and attributes (layout and visual formatting, but no @import, no url(), no :before/:after content, etc.). Internet Explorer already has a 'zoned' security model; how much more difficult would it be to add 'email' to the list of IE zones?

I really don't know about any of that technical stuff; I just hope to be able to see the HTML correctly, not Outlook 2007's weird display! No matter what the Outlook group says (I appreciate you taking the time to explain it here), we all know it's a problem. I'm just wondering: have you Outlook developers never encountered the annoying HTML display problem? So many newsletters I get show up ugly. Or did you guys not use Outlook yourselves?

Just to hammer the message home: I work for a web design agency. We often make HTML email campaigns for our clients. It doesn't matter to me, or anyone else in the same position as me, what engine you use for creating HTML. What matters is the rendering. HTML is a standard. CSS 2.1 is a standard. And Outlook 2007 can't cope with them at all. There shouldn't need to be links in emails saying "if this email is mangled, click here and be taken to a browser, any browser, it's got to be better than this."

As for you lovely people saying HTML email is the devil: text-only versions can be sent alongside HTML emails; that is what Campaign Monitor does. And you can set up your email client in such a way that it will use a text version if available. Thanks.

People don't get it, apparently...

1. You compose a message in Word and it gets converted to HTML. If you want CSS etc., turn off Word editing (it's a pretty quick option to find; give it a try) and write it from scratch.
2. Viewing HTML in Word is better than in a browser because it prevents JavaScript etc. from running.
If you open up a spam email and an image loads or JavaScript runs, that's how they know it's a live email address.
3. People are acting like this is something new... this is the way it's been for nearly a decade and now you're taking issue... please understand what you're babbling about before you "join the cause".
4. This is a giant marketing campaign to get a true browser built in so they can take advantage of #2 in this list. You're a pawn... you've been duped... thanks for playing.

Let's assume there is no accepted standard. I disagree, as does your browser team and many thousands of web professionals, but let's put that aside for a moment. Don't you have a responsibility to, at the very least, continue to support your *own* rendering standards as per Outlook 2000? See the differences here:

Not going forward is one (problematic) thing, but continuing a giant leap backwards seems extremely unfortunate, and will worsen the user experience for both Outlook users and email authors.

If this is going to be the case, then please fix Word, which will then fix Outlook. There are broad standards, and for the web to work properly they must be followed. Companies are starting to see the failings of Outlook 2007; just as Internet Explorer was fixed up (mostly), they MUST be fixed in Outlook 2010.

In my opinion, using Word to render and compose HTML emails is the worst idea ever. Even though I like Outlook, I use Windows Live Mail for both business and personal mail because it simply works better (not sure what it uses for composing and rendering). Why not use Internet Explorer's rendering engine for displaying HTML emails and a modified version of Visual Studio's designer for authoring? Then we'll finally get closer to being able to compose more standards-compliant emails, which will allow one to create a nice-looking email that is a LOT smaller in size, because instead of using 500k worth of useless markup one can achieve the same result with a little bit of CSS and be done.
Another nice feature would be to allow Base64-encoded images to be embedded inside emails. This would make it possible to create engaging emails that don't rely on an active Internet connection, or on the images always being available on a remote server. It would also eliminate the need to block all images by default, because with Base64-encoded images spammers can't do their tricks that use malicious PHP scripts as the source for their images, tracking roughly how many people read their junk email and whatever else they can do. Even allowing some extremely basic JavaScript could be done in a safe manner by only allowing an extremely specific array of functionality, such as rollover images, show/hide layers and other useful things that could improve the email experience rather than completely crippling it by using Word for HTML editing.

We hear and read about interoperability, openness and standards compliance, but then we get things like using Word: a program that is incompatible with everything else in the first place, and also light years away from producing compatible and standards-compliant anything.

I understand that Outlook should make it easy for users to write HTML email. But why can't you see the problems that using Word for rendering HTML emails causes, resulting in a poorer user experience? Even despite the best efforts of many to ensure that emails do render correctly in Outlook, even emails that are authored in Outlook frequently do not. Surely this is a concern? Having users able to *write* HTML email is handy, for some users, some of the time. Being able to *read* email they have received from third parties is essential for all users, all of the time.

Maybe, instead of arguing over whether or not Outlook should use Word for authoring/rendering, what if Word could actually generate and render standards-compliant HTML? In my experience, it seems that MS Office products consistently live in their own little world of non-standards compliance.
Perhaps this is a marketing strategy to keep customers coming back. But is there any reason Word cannot generate valid, compliant HTML? Quite often, clients will come to me with copy for an email, but they have done it in Word, and so it is quite difficult to make it so the rest of the world can see it. If MS Office is all about efficiency and productivity, why not make it efficient and productive for ALL aspects of the world, not just the intra-company conversations, memos and reports?

No CSS support? But you do have some sort of Microsoft corporate standard, don't you? If Windows Live Hotmail supports a good healthy number of CSS selectors and properties, Outlook SHOULD be consistent and support those as well. You work at the same company called Microsoft, right? Or perhaps there is a lot of bitter rivalry with the Windows Live Hotmail/Mail group? Well played, Microsoft.

But if the power of Word is all in the interface, why can't the final product be standards-compliant? Hovering toolbars could work just as well in an HTML authoring program as they would in Word. SmartArt graphics are just that (graphics) and should end up as nothing more than an image in the final e-mail (as I suspect they already do). I understand Microsoft would want to use existing code from its products to promote its ecosystem. At the very least, standards-compliant authoring should be an option. Word in Outlook is a disaster; combining the two creates headaches when one program crashes the other.

This would be a moot point if Word, with all its power as an HTML editing tool, actually supported a larger set of the CSS standards. As a generator of emails to be viewed by the largest range of recipients, I don't want to be restricted to 1990s HTML, and as a recipient of emails from a wide variety of senders, I don't think it's reasonable that supporting HTML emails from Outlook should require a heap of obsolete or complex formatting handlers.
The fact that an online marketing firm is pushing for standards compliance in a product which will be widely used is not automatically a reason to dismiss the request. I would contend that most of the demand for consistent display across email programs comes from people who have a professional interest of some kind, but this is a growing segment of email as journalism and the like become increasingly electronic. Why should the consumers of formatted emails have to pay more for content because a large segment of the recipient market is locked into a non-standard formatting grammar?

Do your worst, Microsoft; I'm still only reading plain-text email.

I agree that Word is arguably the most powerful way to create professional, interesting email... however, I was wondering if there are any other programs that can be used to render email in Outlook.

Copy-pasting MS Word HTML causes a client of mine to break her site frequently. When MS Word produces standard XHTML like everyone else, then I won't complain.

I feel I have little new to add, except my own voice, one more among the thousands who have already spoken up in support of standards and against using Word to render e-mails. I work for a company that sends out seven regular e-mails a week to thousands of recipients, not to mention all the automatically generated, transactional e-mails. These recipients are opening our e-mails with such a wide variety of e-mail clients that if we want them to render correctly in any inbox, we must abide by the lowest common denominator, and that is Outlook. Please, please, raise yourself above this level.

Going from Outlook 2003 to 2007 was a significant hit in the ability of the client to display existing content AND to co-exist with other e-mail systems. If this were still 1993, when e-mail only traveled inside the office, never touching another company, your argument here would make sense.
But it's not: Outlook should use a halfway decent HTML parser, because that's part of what it means to be a decent e-mail client these days.

"Word enables Outlook customers to write professional-looking and visually stunning e-mail messages." Is there really a reason for that? I've never received a supposedly visually stunning email from an Outlook user in my life, unless you're referring to the ones that have images, word art and emoticons strewn haphazardly through them. HTML emails are not designed in Outlook by your standard customers. They are designed by us, using standards that are accepted across the entire internet, with the only notable exception of your software.

Exchange and Outlook are hyper-proprietary. That's why so many organizations are moving away from both products. While there may not be a consensus on how much HTML is appropriate for an email, it doesn't make sense to become more proprietary. Here's my suggestion: all of HTML 4.01 is appropriate, with no scripting or plugins. Stop trying to make email a Word doc. I don't use PowerPoint to make a website.

I could not agree more with everyone else voicing their opinion above that all we want is a rendering engine that has some level of support for standards that have been around for over a decade. The logic in the response by Microsoft reminds me, to a degree, of the decisions US auto makers made years ago to ignore industry trends and consumer insights. They stuck to their short-sighted beliefs and held on to their history, and now they are in a difficult position. As the comments have stated, we don't care how you create your emails; just render them according to the now well-established web standards so we can all save a lot of time and money. Otherwise it's time to start weaning all the networks I admin off of Outlook, and maybe Office in general. It's already happening anyway, without any influence from me.
I already put in a good chunk of support time dealing with users having attachment problems with malformed Outlook HTML emails, and then I do my other job and build email marketing templates that take a lot of extra time because Outlook can't handle CSS. It's hard to believe that you don't see the value and opportunity that Outlook is missing, especially as we are finally seeing IE6 use fading.

Email that isn't plain text, or doesn't start with the plain-text equivalent of whatever screwed-up rich-text scheme the crack-smoking programmers have devised, should be quickly and silently discarded as "content-free junk". If you cannot convey your message in plain text, then the problem is that you cannot write (and likely cannot think). The solution is to learn to write, not to clutter up your communication with irrelevant eye candy. That isn't to say eye candy isn't nice, but email isn't the place for it. (Also, when you reply, trim what you don't reply to, don't top-post, and indicate the difference between your reply and what you're quoting by inserting '> ' before the quoted material.)

There are some inaccuracies in this post which need to be addressed. Firstly, whilst Freshview are indeed one of the world's foremost providers of email marketing software, this makes them an extremely experienced and knowledgeable group of people; that they are service providers is irrelevant. Microsoft are service providers too; does this mean we should ignore what you have to say about IT because you may be biased? Secondly, whilst Freshview are heavily involved, they are not the Email Standards Project. Thirdly, with over 20,000 tweets recorded in ONE DAY, perhaps it's time Microsoft sat up and listened to what people have to say, lest they continue to lose market share to other vendors who have listened AND have been able to come to the party with consensus on these issues.
It's a dire shame: 2009 was the year Microsoft finally resolved over a decade of antagonism with the web design community by releasing IE8 (for which I sincerely commend you), only to then work towards a HUGE step backwards in the rendering of its email software. This makes no sense.

Creating mails with Word is also a problem. It brings big overhead. The majority of mails are plain text or have some basic formatting. Using Word to create e-mail is like driving a Ferrari to buy bread and milk at the local store. Then, on the other side, when mail comes in, Outlook starts Word up again. It is too time-consuming and standards-non-compliant. What a mess.

It looks like this blog is the victim of a marketing campaign as well. I guess no one could fill up a blog as well as e-mail marketers. I've been on both ends of web development. I'm trying to remember a time when a designer submitted his work to me via e-mail with the expectation that it was supposed to be a useful design or site prototype. No, it never happened. I'm dumbstruck to imagine that one would. I wouldn't do that, either. This "web designer" feedback is unremarkable. Of course, the problem is that users are becoming educated enough to avoid clicking hyperlinks in e-mail messages. Since they're not going to the marketer, the marketer is finding another way to come to them. Outlook is fine. It's working for corporations and small business, and for the individual users that are comfortable with it, I think it's working for them as well.

I'm a Mac user and small-time developer, and I quite honestly don't give a damn if Microsoft clients can't read simple HTML 4.0 emails. Too bad for them. I can't be bothered jumping through hoops to go back in time to the mid 90s. I provide a plain-text version of the email for primitive clients like Outlook. As time goes on, I think Microsoft will have to do a better job rendering standards rather than relying on their slowly diminishing stature.
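The plain-text fallback that the commenter above describes is standard MIME: a multipart/alternative message carries both a text/plain and a text/html part, and each client picks the richest part it can render. A minimal sketch in Python (addresses and content here are made up):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# multipart/alternative: the receiving client chooses the best part it
# can display, so text-only clients fall back to the plain version.
msg = MIMEMultipart("alternative")
msg["Subject"] = "June newsletter"
msg["From"] = "news@example.com"   # placeholder address
msg["To"] = "reader@example.com"   # placeholder address

plain = MIMEText("Our June news, in plain text.", "plain")
html = MIMEText("<html><body><h1>Our June news</h1></body></html>", "html")

# Order matters: least-rich part first, richest part last.
msg.attach(plain)
msg.attach(html)

print(msg.get_content_type())  # multipart/alternative
```

A client that cannot (or will not) render the HTML part simply shows the plain part, which is why sending both costs senders almost nothing.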
I feel that Microsoft made a good step forward by focusing on web standards with the release of Internet Explorer 8, only to take two steps back by using the Word HTML rendering in Outlook 2010. Please embrace web standards for Outlook 2010 as well!

Can you not see you are going backwards? Outlook 2000 rendered e-mails better than Outlook 2007. As a web developer, it is a nightmare coding e-mail campaigns for Outlook 2007 and above because of the poor standards support. If you at least implement some of the suggested fixes, we will all be happy. This is basic HTML/CSS that you used to support, so please do so again.

I think Microsoft should help provide a real standard for email, not simply wait for one. In addition, everything created using your "familiar and powerful tool", if I may quote your post, could respect the best practices used by professionals for creating emails. We could then avoid another "specific hack for Microsoft products" (i.e. IE6 and 7) that web developers use every day.

You really said nothing with this post. All you needed to say was:

> There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability.

Then the debate is about how good or bad a renderer you are using, instead of what standards you are trying to force upon people.

Why not solve this issue the same way you solved backwards compatibility in IE8: with a META tag? Using a META tag, someone sending a specially designed HTML mail could get it to render using the IE8 rendering engine, with Word rendering used if the META tag is not present.

I think it is fine to use the Word engine as the editor for Outlook, but you should make sure it outputs compliant HTML. This is also very important when using Word to author blogs (a feature that I actually use).

For me, Microsoft does not need to make email marketers happy. Normal users do not "design" their email.
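For what it's worth, the META-tag proposal above could be made concrete along the lines of IE8's X-UA-Compatible switch. To be clear, no such mechanism exists in Outlook; the tag name and the dispatch helper below are entirely hypothetical, just to show what the commenter is asking for:

```python
# Hypothetical opt-in tag, modeled on IE8's X-UA-Compatible meta switch.
# Outlook does not support anything like this; it is a sketch of the idea.
OPT_IN_TAG = '<meta http-equiv="X-Email-Compatible" content="IE=8">'

def pick_renderer(html: str) -> str:
    """Fictional dispatch: use the browser engine for mail that opts in,
    and fall back to the Word engine for everything else."""
    if "X-Email-Compatible" in html:
        return "trident"   # IE rendering engine
    return "word"          # default Word engine

designed = f"<html><head>{OPT_IN_TAG}</head><body>Campaign...</body></html>"
casual = "<html><body>Hi Bob, see attached.</body></html>"

print(pick_renderer(designed))  # trident
print(pick_renderer(casual))    # word
```

The appeal of this design is that mail composed in Outlook itself would never carry the tag, so Outlook-to-Outlook fidelity would be untouched while professionally authored mail could opt in to standards rendering.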
I've always thought that the HTML e-mail feature set was too large, making it a viable method of advertising (a.k.a. SPAM!). There's no real reason to support fonts. If the e-mail client has a nice default font, then by using strong, em and so on, the meaning can be conveyed. Can't really get rid of links; people need those (boy, would it be nice if we could). Attachments are necessary, but inline images probably aren't. Inline images attract people to style their emails for marketing purposes, but people sharing photos can (and usually do) send them as attachments. Images are already restricted because they cause a visit to a particular URL, which can be identified for tracking purposes. I would like to see images removed from the "HTML Email Specification", which apparently needs defining. *wonders if the W3C already defined it, but can't be bothered to look*

I have made several systems which send out HTML e-mail, and the problem is always making it look good when opened in Outlook. I'm not overestimating if I say that at least 50% of the time is spent tweaking the format to please Outlook. The article above does not convince me at all that using Word as the editor/renderer is a good idea. PLEASE give us standard HTML!

It's pretty simple, really: proper HTML should just render properly in Outlook. Saying there are no definitive rules about what standards to support is a cop-out. The HTML standard is pretty mature, guys. CSS has been around a long time. HTML and CSS are not intrinsic security risks. Rendering them properly has been achieved by numerous browsers, except by one of the most popular email applications on the planet. Seriously, it's about utility for your users, not about simplifying things for developers. Properly formatted HTML means that things display as they were intended, which improves the user experience.
Even if you're not impressed by the outcry on Twitter and on this site, please just go back to user-experience first principles, which say that properly rendered HTML will improve users' experience of email and, therefore, of your product.

I love Microsoft's work, but on this issue I really have to agree with the vast majority of comments: we need to be able to read emails as if in a browser window. Perhaps a healthy compromise would be something like a document preview (same as a PDF attachment, for example): click a tab at the top of the message and see it in the same area, rather than opening up the browser separately. Add a setting along the lines of "Always show in preview mode from Sender X". That way you keep the Word engine, and readers can easily get the email with a decent look. I'm no technical guru, but surely there's a better solution.

Thanks for replying to the outcry among developers about your decision to use Word as the HTML rendering engine for Outlook. If you don't want two engines in Outlook, then please fix the HTML rendering engine in Word for displaying e-mails. You're bringing out a new version of your software, right? There's got to be progress in your software for basic things like rendering HTML in e-mail.

Why not just fix Word to support HTML and CSS web standards? This would help in so many areas of everyday usage. Pasting content from the web into a Word document would work better. Pasting Word content into CMS systems would work better. Sending artwork from Word to other email clients would work better. Maybe this campaign should be called FixWord.org! If we're stuck with Word as an email render engine, then please FIX WORD to support the web HTML/CSS standards that the ENTIRE REST OF THE WORLD USES. 'Nuff caps.

To all web developers: I don't care about web standards in MY FRIGGIN EMAIL!!!!!! If I want to see a promotion, I click a link to your site... Uh...
I don't care how you implement composing and rendering, but it must be interoperable, i.e. read and write correctly messages that follow the relevant standards. There are lots of clients other than Outlook out there (Mac users, anyone?). Rendering using Word must not be required for correct rendering. This means:

* standards-compliant HTML rendering
* better MIME support (i.e. not making a mess of PGP-signed messages; this does not require PGP support, only decent MIME support so that the text content is displayed)
* not making a mess of encodings, i.e. not marking windows-1252 as iso-8859-1 (Outlook Express does it correctly, so why does Outlook make a mess?)

All this is the bare minimum for interoperability. I stopped using Outlook because of these issues. *It's not a decent mail client any more*; it has become only a client for internal, tightly integrated, Exchange-based enterprise messaging.

Just adding my voice to the many. I don't use Outlook, but let it be clear that this is very damaging to newsletters, campaigns and other HTML content that could be sent to people's inboxes. There is a consensus and agreed standard for HTML content in email, and it is that it shouldn't use the Word rendering engine.

The manner in which HTML e-mail is rendered in Outlook 2007 represents a significant step back in terms of accessibility for blind and visually impaired users of Outlook. Using IE to render HTML in 2003 worked well for screen reader users, and the 2007 method of doing so was a regression. To have full access to HTML e-mail, a screen reader user has to take the secondary step of viewing the e-mail in the browser, a step that takes additional time and keystrokes that no one else is made to suffer. With the supposed "commitment" to accessibility that Microsoft likes to tout, this continued persistence will lose me and many others as customers.
With many other solutions providing the functionality that Outlook does, it is unlikely that I will upgrade to 2010 or continue to use 2007, as it directly affects my level of productivity.

As a developer, I spend a lot of time each week trying to explain to customers why the clean, efficient, modern code and look-and-feel we create for their website can't be duplicated in their email marketing. It just doesn't make sense. I think Microsoft has responded well to feedback over the past years and has worked hard to create good, standards-compliant browsers in IE7 and IE8. Why are you moving forwards in this area and backwards in email? As for "there is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability": come off it!

As a Microsoft .NET developer of 8 years, the more I learn about Microsoft, the less I am inclined to continue to use or recommend their technologies. I've recently been fortunate enough to use more open-source technologies and feel that the future is with them, for the sole reason that they evolve. Rendering HTML correctly is not just an Outlook problem but also an IE (the most popular browser in the world) problem. I hope, for the sake of Microsoft, that you listen to the developers, because we now hold the power to direct usage. It is simply not good enough to say "we do it this way and you deal with it". Sooner or later developers will not tolerate this, and you will lose.
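To put some substance behind the charset complaint in an earlier comment (windows-1252 bytes labelled as iso-8859-1): the two encodings agree on most characters, but Word's "smart quotes" live in the 0x80-0x9F range, which iso-8859-1 assigns to invisible control characters. The same bytes therefore decode cleanly under one label and garble under the other, exactly as described. A quick Python demonstration:

```python
# "Smart quotes" as Word produces them, encoded in windows-1252.
raw = b"\x93Hello\x94"

# Decoded with the charset the message should declare, we get curly quotes:
right = raw.decode("cp1252")
assert right == "\u201cHello\u201d"   # "Hello" in curly quotes

# Decoded as iso-8859-1 (the label Outlook reportedly applies), the same
# bytes become C1 control characters instead of quotes:
wrong = raw.decode("iso-8859-1")
assert wrong != right
print(repr(wrong))  # '\x93Hello\x94'
```

This is why a mislabelled message that happens to contain only ASCII looks fine, while anything with typographic punctuation breaks in strict clients.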
Who knows, maybe MS will give us Outlook for free, so we can test emails before we send them out. :) MS is the best, if only they could make VS to work 250% faster then it works now, i would not have to spend 1 000$ on new pc just to work normal with VS 2008. But there is no reason to complain, as long as they give us MSDN subscribtions, we get for small fee all software that would cost us like 100 000$ per year. They are not that bad, you only have to understand that they need to render it with word so that all those cool stuff made in word would show up correctly. Did you ever see how clients make emails? How stupid 90% of PC users are? What all they do and in which ways? They are c/p gods, they copy 10 different encodings in one email :D and you want IE to render that? They call MS and ask why does not autocad drawing show up as image in email, etc... Its not MS that sucks, it's those 90% stupid users that suck. MS, congratulations on trying to make milions of people happy, while each one of them has different demands. And don't mind this so-called web developers and e-mail campaign designers (is that a real job?!?), real developers will always understand your problems and appriciate your efforts - being a developer, either web or desktop, is not easy job, we are not in DOS anymore, it's not about 16 colors anymore, it's so much more, and each month it advances. You are doing great job. (from IE fan - using it since 1995 and loving it, altrough, you did wait to long between IE6 and IE7) I'm not an IT guy or a web designer etc... just a guy who uses Microsoft Office. Now that I have a new laptop with Vista (another story all together) and Office 2007, I'm finding that features I had in older versions no longer exist. In Outlook, just as one example and since this is the topic here, there is no "Out of Office Assistant" anymore. You have to be on an Exchange Server. Why? Independent people can't be out of the office? Now I'm scared to ever upgrade again! 
As to what end users want (leaving aside a marketing department's campaign mail), is it really little more than emboldening, underlining, spell checking, image insertion and document attachment? Isn't that what 99% of users actually do with email? Why do I need Word for that, really? Where's the requirement? I understand that you want people to use your products, and that you wish to continue selling more products, but driving spurious use cases to try to bolster an application's use is a bit much. Don't just foist "features" onto end users and then tell them it's good for them.

As to Microsoft's commitment to interoperability in documents and web standards compatibility, your scorecard is poor; you could do a whole lot better. Being interoperable and standards-compliant doesn't just mean you can send your MS-application-produced files to another MS application user. Come on, get on with the job of real interoperability and adherence to standards. That's a hard job in itself. But if you make a real commitment there, you won't get these reactions, and you'll find that people will really appreciate it.

As a REAL Outlook user, I'm glad it supports "simple" HTML. I have no desire for CSS or anything else that is not needed in regular office email correspondence. Even if the renderer were improved, what would happen when I click on reply and start editing? If the editor could not support everything the renderer could, the display of the email would change drastically. I don't feel one bit sorry for email spammers.

Nobody should be using the SmartArt and charting tools in the body of an e-mail; how can you be sure that the recipient not only has Outlook, but the same version as you? Not to mention the large size such messages become, and that many rules-based filters just block them.

You caved and gave us standards for IE8. Please do the same for Outlook.
I guess the impact of the failure of the "embrace, extend, extinguish" strategy in the IE team hasn't been felt in the Office team yet.

In our company, two thirds of our computers run XP and one third are Macintoshes running Leopard, with one Windows server and 4 Linux servers. We use Outlook 2003 and Entourage, but we're evaluating Apple Mail since it will support Exchange, and we're also evaluating Windows 7. To make a long story short, it's all about compatibility! We're not going to upgrade unless it's a better product and it works. Windows Mail is simple to use and it works; the only reason we don't use Windows Mail is that it lacks Exchange support, and Entourage 2008 is the best of the lot (pity you didn't make a PC version). Both passed the acid test set by the Email Standards Project, which begs the question: why not use IE8's Trident, or better still a modern rendering engine like WebKit, for Outlook? Whose idea was it to use Word to render HTML in Outlook 2010? Come on, it's a word processor. If this is a tactical decision to lock in customers, it makes no sense. That's the reason why in our company we use Java not C# (by the way, I actually like C#), we use Apache not IIS, etc. Think of the number of businesses using branded HTML (plus CSS) for marketing campaigns. What will end up happening is that people will not upgrade, which has already been a problem for Office 2007, or worse, they will switch to a different email client or switch platforms.

Keep up the good work, guys, and PLEASE don't listen to spammers and their thinly disguised "we're about the standards" campaign...

Respect: Internet standards, web users and designers. Stop: alienating millions to make profit.

The number of emails I get that do not display correctly in Outlook 2007 is a disgrace. Is it the email author's fault? Oh no, it can't be, because when I click on the link that says "If you can't read this email correctly, click here to read it on our website," it looks fine. This whole shambles needs a re-think.
I thought I had detected a change of direction in Microsoft now that Bill Gates had stepped down; perhaps this has not reached the Office team yet. Oh, and I'm not anti-Microsoft - I have made the company I work for 100% Microsoft-software-based where possible: Windows Server/client, SQL Server, Exchange, Commerce Server, Dynamics NAV, Office 2007, etc. - so you can see I am a pretty good advocate of Microsoft. The mess that is Office 2007, and in particular Outlook 2007, has seriously had me thinking about alternatives. I have never, ever seen an email with SmartArt or charts embedded in it. Please leave Word for composing and displaying communications intended for print, and let Outlook use a standards-based model.

I'm good with using Word as the editor in Outlook. But I would LOVE IE8 to RENDER (display) the mail. I think you missed the point. Let's hope you get it.

Your rendering choices are so different to everyone else's. MS used IE rendering previously, which, while not perfect, was much better than the 'Word in Outlook' approach. Your MS customers want their emails to look the same everywhere, also in other, non-MS email clients. Your stance is unhelpful. It's also not like we are actually asking much. We want you to deliver a product that displays our craftsmanship in a professional way. We want more, and we want better, please. The way Internet Explorer has embraced standards since version 7 has been very helpful and inspiring to the community.

I always find Outlook utterly frustrating when we are sending out our HTML emails, because it is the only client in which they do not display well. It is a shame that they work in an older version of Outlook and yet fail in the most current one. You need to take a step forward, not stay in the past, and allow the world to see emails in the same way. One other point: security is not an issue in CSS, so supporting it in a useful and meaningful way would just make our day.
"it’s the best e-mail authoring experience around" What about viewing experience? How can you release e-mail software in XXI century that cannot display CSS-formatted messages properly? There are many useful cases it's needed :/ I hope Microsoft will fix another problem - why isn´t it possible to use Word for individual ad-emailing INCLUDING attachments? Since Office 2000 users wait for that - maybe Office 2010 will help to make that possible! :s ababiec - you're the first person I've seen on here who, like me, is a user, not a marketer/web designer. Most businesses leave online images switched off by default for their users, so we purposely *break* marketing emails to protect ourselves from spam. We don't care what marketing messages look like for the most part. Understandably, marketers do. I agree that the rendering should be standards-compliant as far as can be made reasonably secure. And using IE8 for rendering would be great *if it worked seamlessly*. We had separate editing/rendering engines with Outlook 2003 and before, at least those of us who used Word as our editor did. We were using IE6 to render, and guess what? IT SUCKED. Relatively simple emails used to suffer from rendering inconsistencies. (Don't even get me started on IE rich text editing. Numbered lists in particular used to make me want to throw things.) Yes, ideally the IE rendering engine should have been fixed. But in the real world where resources are limited and features are prioritised, I'd rather have a good experience for the basic emails from my colleagues which *do* occasionally contain embedded objects. The marketers are a special-interest group. While their points are valid, they are *not* representative of your user population. So, Microsoft, show us you have nothing to hide; tell us what the telemetry says. How many emails aren't rendering properly? What percentage of users even allow images to display? 
I know this argument isn't about external images, but it would be one indicator of how many users care about rendering fidelity for marketing mails.

The arguments for using Word for rendering HTML mails are somewhat skewed towards the editor functions. I think for _display_ of emails, MS should use the browser - preferably even the default browser, and not just IE. You cannot edit the preview window anyway. However, as an email editor, the arguments given are much more convincing. By the way, are you aware that you CAN show any HTML mail from Outlook 2007 using IE? All you have to do is open the mail and choose Actions > Other Actions > View in Browser. It should be a snap for MS to supply an automatic feature within Outlook that does exactly this as a preference (skipping the display of the email in Word, of course). Again, there is no point in using Word to display an email in your inbox. Fix it!

Please get with the program and support email HTML standards. You probably haven't seen these, or you'd be behind them 100%, but you can find more information here: Email is about communication, and all communication is based on standards; otherwise we could not understand one another. Many people do not understand why you would use your funky word processor to interpret HTML. Please reconsider your decision prior to the release of your new software. I'm sure you can fit this new feature in before your release date. Thank you.

"There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability." Why does there even need to be one? It's HTML, for crying out loud. Are you by any chance one of those seasoned prevaricators that sits on the W3C HTML working group? This kind of obfuscated language is intended to sound like it highlights a problem, but all it does is obscure the fact that there is NO problem, and throw a feint to try and distract people.
If u can't understand English, someone will help u geeks out... just like the following post... so don't worry.. :D

The whole point of fixoutlook.org is about the rendering, not the authoring. Don't get me started about every middle-manager's wet dream of sending bloated Word docs with fancy (and useless) 3D bar graphs to their entire staff, but that's not what this campaign is about. It's about asking Microsoft to provide a rendering engine for Outlook that will properly render HTML with CSS. How hard can that be? This statement reeks of smug contempt for people who try to make proper HTML e-mails that are as lightweight as possible. And, TechieBird, "in the real world where resources are limited and features are prioritised"? Please, this is Microsoft we're talking about. It should not be beyond them to create/implement a decent HTML rendering engine.

Microsoft and standards... Sort out the standard. I am astounded. For years I was blissfully ignorant of what Microsoft was doing with most all their products, but that stopped the day our company decided to stop snail-mailing clients their info packets and send them via e-mail. The e-mails weren't super large or complex or image-laden, but yet Outlook obliterated the W3C-validated code. Then the tricky education of what Outlook supports and doesn't support began. It's basically a game of "if it was 1993 and I was coding this, how would I do it?" It doesn't matter that this anti-Word-rendering effort was started by a "maker of 'e-mail marketing campaign' software." Microsoft's blatant disregard for web standards is shameful and affects us all. HTML e-mails are not going away; in fact their use is only going to grow as more and more companies choose not to print and mail.
The question now is: do we want e-mails that are low in file size, quick to load and easily read by everyone, including those with disabilities, or do we stick with 1993 technology until Microsoft's decline is far enough along that the entire planet couldn't care less about their stupid decisions?

Integrating Word's HTML engine in Outlook. That's really smart... Go Microsoft, go. If you don't get it, you don't get it. Dumb until the end... Go Microsoft, go. Why not use the IE8 rendering engine? It doesn't comply with standards, but it's a lot better than Word... It should be a no-brain decision, but even these ones you don't get... Maybe you have become too big and too blind... My sincere advice: be humble... Cya

If you want to use Word to render e-mail in Outlook, that's fine. But use IE to render HTML in Word. And with all of the resources available to the Office team, it is not credible that you can't manage to make Word produce proper markup. Your attitude at present is fuelled at best by laziness, at worst by contempt for your customers. I love Office, and I love Outlook, but this is nothing less than unacceptable. Oh, and HTML should be used for e-mail composition, not a subset thereof.

I agree, the Word layout structure is one that I would rather do without, and many times when there are formatting issues, my first troubleshooting step is to change this to HTML or plain text just to avoid the hassle. I agree, you're allowing IE8 to be optionally installed with Win 7; why not allow this feature in Outlook 2010 to be disabled or removed? Obviously, we are supporting the product because it is the only one in the market that does what it does; why not sympathize with the end user and let them have this preference? I mean, they paid $100 for Outlook (2007), why not...

Right, there is no standard for HTML in mails. Most of the time, it's difficult to know if the mail will be displayed correctly (as designed) on the client computer.
That's why I tell my users not to write a mail that can only be understood correctly if it's well displayed! If they need to apply styles, they attach a file. I laugh when I see the examples! Yes, it's great... when you have Outlook or Word... But for other users? Do you think people will migrate to Office to read mails? I'm not sure they want to, or can.

As a developer of a CMS, rather than an e-mail marketer, I'll throw my perspective in. Web apps need to be able to send out e-mails, and those e-mails need to be styled appropriately to match that web app, i.e. branding. Therefore HTML/CSS is needed to achieve what people will expect, and it is needed to a level beyond what regular people might need when sending/receiving e-mails. Outlook 2007 is awful, and even if there is no 100% agreement on how to do things, 99% of what HTML and CSS does can be made to work in all e-mail clients other than Outlook 2007. This creates a particular problem, and basically means that you can't achieve certain things unless you are willing to invest in implementing a from-the-ground-up design for e-mails - and even then, it's severely limited. Now taking things further, our software needs to be able to embed web content within e-mails, to summarise new things, or for various other reasons. For example, in a newsletter we'll want to show news summaries, and those news summaries will be styled in standard XHTML and CSS - in fact we need to embed our CMS's CSS, because it needs to be rendered using a common subsystem that controls how content relating to the website should display. This kind of thing should not break - it's not rocket science - but currently Outlook cannot handle mildly elaborate CSS scenarios. Writing a whole parallel set of CSS just for e-mail is absolutely crazy, and very limiting if any website content author has done precision design using non-stock styles. Now how can MS fix this?
All you need to do is something like allow a new meta tag like <meta name="email_standards" content="1" /> or whatever. If you get an e-mail like that, render it using Trident. If someone starts a reply to it, re-render it using Word if you feel you have to - at that point it's not so bad, as it's already been read.

The standard of the output from Word has always been a matter of derision, and the recent spat over 'open' formats has not changed anyone's opinion for the better. Stating that there is no current standard is disingenuous at the very least; there are a number of standards for HTML, all of which have been abused by MS in the past, and most of which have been adopted by ALL other email engines.

That's ridiculous! Sure, technically there's no email HTML standard, but we web designers want to be able to design emails like we design web pages... with STANDARDS-BASED HTML markup. Saying there's no email HTML standard is just an excuse. For us web developers, our issue isn't that you're using Word to create the emails, it's that you're using its *rendering engine* to *display* the emails. If you want to use Word for message composition, fine, but please use IE's rendering engine for displaying emails!

Utterly disingenuous, as many before me have already pointed out. @ababiec - if you're not a developer, what do you care what's going on under the hood? Do you even understand what you're talking about with your "CSS or anything else not needed" comment? If you're a "REAL user" (a developer isn't real?) using Outlook to compose, presumably you neither know nor care what kind of HTML is generated, so long as your message looks right at the other end. And that's the point people are trying to make. Word-generated HTML is such an unholy mess that it doesn't render properly in other clients, and email from sources other than Outlook/Word often doesn't render properly in Outlook.
This is more short-sighted monopoly-think, which imagines giving a horrible experience to users of non-MS products will coerce them into the MS camp. With the usage stats of non-IE browsers steadily climbing, I say good luck with that...

jevoenv said: "It's not all about e-mail marketing either. These days I am at MSFT and even our PA can't get the formatting of e-mails to wish co-workers a happy birthday come out correctly." How about this: "Happy Birthday." Why do you feel the need for fancy graphics, bizarre fonts or precise positioning to deliver a simple message? Many email USERS hate the crap that marketers - and those who wish to emulate their ways - feel they have to put on our screens. If you've got a simple message, deliver it simply. If you've got something more complex, put it on a web server and mail me a link. I'll ignore that just as easily, but won't hate you as much.

I like standards and Microsoft :-) Hey, just let them merge ;-)

Microsoft team, I am not much of a web developer; I'm more of an end user. I have no axe to grind with MS; I am not a Linux disciple. It's fine with me that Microsoft makes gobs of money, and I do not feel the need to vent my spleen about bugs/crashes/perceived monopolies, etc. I love using Outlook to compose email messages, but really, how many people create graphs and SmartArt IN the email? Wouldn't they be much more likely to compose stuff like that in Word (say, in a report), then cut and paste? (Thank you for the ability to do that, BTW!) If you're addressing richness just from the email sender, you're addressing only half the user experience. Those of us who send email also RECEIVE email.

As a marketer who uses email marketing programs, I want the message I spent hours on to look as good in someone's inbox as it does in my web-based editor. And as clever as the Business Contact Manager Home module is, I will never use it for email marketing just to address rendering issues in Outlook email.
CAN-SPAM puts the fear of the devil into marketers (as it was designed to do), ISPs limit email traffic, and the features of a dedicated email marketing program make it much easier to manage lists and track results than BCM. Thanks for addressing the concerns, Microsoft. I love your product, but like all of us, you've still got some work to do!

While you're at it, can you stop calling your proprietary character set "ISO-8859-1"? I'm sick of seeing odd characters pooped all over my email! Thanks.

There are two main issues here. Using Word as the HTML composer means that HTML emails sent through Outlook will have badly formed HTML, and so are likely to render badly in other email clients if people make full use of the possibilities available to them - unless the quality of HTML output from Word 2010 is better by several orders of magnitude than from previous versions of Word. Netscape 4 could output generally valid and concise HTML more than 10 years ago, but every incarnation of Outlook produces code that is more bloated and less standards-compliant than before. Given the web industry's accepted move towards standards, it is unbelievable that email is moving in the opposite direction. You say that this is to facilitate rich-content emails between Outlook users - but (a) only the tiniest minority of users will make use of this, and (b) why do we have to put up with Word continuing to output such abysmal HTML?

But the MORE IMPORTANT problem is not to do with sending but with receiving. The majority of email users are not using Outlook, so anyone creating emails to send to a general audience has to accommodate a variety of different email clients. Most of these have some reasonable level of support for web standards - except Outlook/Word, which fails to recognise a huge number of standard and common HTML elements.
This means that anyone using Outlook and receiving an HTML email from an external source is likely to get a substandard rendering of the message - is that what you want? For every email that comes in to look wrong? I am sure that if you did a straw poll of Outlook users, you wouldn't find one in a thousand who supports that state of affairs, but that is what you are giving them. WHY?!

I'm a web developer, and regularly part of my job is to build HTML emails for clients. Clients are well aware of what a properly built HTML email can do for their branding. Unfortunately, if they see their designs in Outlook 2007 they're shocked and blame the developer. I put a lot of effort into developing my HTML emails to render properly in a range of email clients and webmail services, but Outlook 2007 is impossible. Besides the rules imposed on us (no background images, no positioning, etc.) we now also have to deal with a plethora of HTML rendering bugs that just can't be solved. For example, font-family has to be declared on every single table element, paddings can only be applied to td elements, and what on earth is going on with the crazy column spacing bug?!?! () Please please please... don't leave us developers out. I understand that it's ideal to have the same product render and edit... but if that's the case then FIX THE WORD RENDERING ENGINE TO THE STANDARDS OF IE8! I cannot tell my clients that they can no longer have graphical, multi-columned HTML emails because the market leader no longer supports them... they'll laugh in my face... this technology should be progressing, not regressing.

Ever since Microsoft moved rendering in Outlook to the MS Word platform, email has been problematic for the significant part of the internet that does not use Outlook or Outlook Express. There are different email programs because people make choices. Do we all drive Buicks? Buicks are great cars with lots of room and creature comforts.
Should we build out highways to accommodate Buicks because GM says it is best? Oh, wait. GM is going bankrupt because the market shifted and the world's economy tanked. Likewise with email. Outside of a corporate environment I have not used Outlook, yet I have a significant bit of influence without it. What of the tens of millions of others that also decline to use Outlook - should we have to deal with the bloatware-crafted emails from Outlook? Should our emails not appear as designed, simply because one vendor says it's "Built Like a Rock"? Follow the standards like the rest of the world, or change them for the whole world. Didn't we already have this debate with IE?

Standards simplify things for everyone who handles the marketing and technical aspects of managing e-mail. Simplifying web and e-mail design only stands to make your customers more money in the long run, which seems like something you'd want to do, since it would help them to justify paying for Office upgrades.

No doubt, Word makes a great e-mail composer. But that's not the point. The point is that it should also fully support HTML/CSS e-mail when viewing e-mail. Someone has missed the point of fixoutlook.org here.

@TechieBird I agree that having separate engines is a pain. But have you ever seen an HTML file that Word created? It's filled with a lot of useless code that doesn't even render properly. The same thing happens when someone using Outlook 07 sends an email with HTML objects (like SmartArt, etc.) to someone with Outlook 03, or any other email client. And please, please understand the difference between spam and the email that you actually want - like transactional email from an online merchant (perhaps notifications that your package has been sent, or that your hotel room has been reserved), or new information from companies that you're interested in and have signed up for personally (retail, arts, etc.).
Keep in mind that marketers are part of every business, and business is Microsoft's bread and butter, so we're far from a special interest group. The bigger issue here (and the reason that people are supporting the Email Standards Project) is that MS should adhere to the VERY widely accepted HTML standards already in place. However, if they're too concerned that doing that will affect security, then why not work with a standard that has been accepted by several other major email providers, including Yahoo, Gmail, and others? A lot of this back and forth from this blog and others is starting to sound like the mess that came from the IE6 team, until Firefox started taking serious market share from them.

Our text editing application is better at rendering HTML than our Internet browser... what!?!?! And if the Word-generated mails are turned into HTML anyway, why not keep rendering using IE? Fact is that while IE can display any Word HTML perfectly, the same sure cannot be said the other way around, so what Microsoft is really saying is: we don't care about rendering, just as long as we can handle mails between Outlook clients. And ababiec, not all HTML mails are spam...

Creating email newsletters is one of the services we offer to our clients. We do this by creating a "web standard" design that complies with the majority of email systems. Afterwards we downgrade (!!) our design for the only system that does not follow standards: MS Outlook. More than ever, clients are not willing to pay for this extra work. The result is a growing number of email campaigns that use standard HTML + CSS. Users of MS Outlook will not get the best viewing experience, but that's a choice MS made for them; we and the rest of the design community should not have to cover for MS's mistakes.

This is typical Microsoft hubris and dominance. They want to make it such that every non-Microsoft email program now has to support MS Word-format XML in order to display the message.
It's one more step towards their new document standard that they keep wanting to push on everyone without consensus approval. Do what I'm going to do: if you get a message you can't read, and know it's not spam, keep replying back that you can't read it. Even if you can partly read it but part of the message is messed up - reply back that you can't read it. If enough of us do this, Microsoft will be screwed.

I gave up on MS products a long time ago. It was around the time when they started the WGA thing. Now I am running Debian stable, and nothing can get me back.

Not allowing the multiple benefits of web standards to sway the decision on Outlook 2010 was plain wrong. Mistakes are good for us, one way or another, when we learn from them and work to correct them. Please allow one benefit from this mistake to be the appropriate Microsoft person/people deciding to adopt web standards for Outlook 2010 HTML rendering, and in doing so, make friends across the business and design (and many other) communities. Thanks, cheers, -Alan.

From my perspective the argument is as follows: certainly MS wants to enable its end users to create graphically pleasing emails via Outlook - which is just fine by me. However, one must admit that in real (business) life, only a few users are really using this capability beyond some basic formatting (bold, font, colour). Most messages in my mailbox that want to make a graphic impact are machine-created ones. Either they are mailing campaigns from people/companies that I invited to mail me. Or - and this is the key point for me - they are "systems communications", where you receive automatically generated mails. And that is where MS is really missing the point. Why make it hard for developers to provide a decent customer experience - even if it is just a mail reply message when you signed up for just another account somewhere? It definitely is a legitimate aim to provide decent capabilities for branding.
And I do not see that MS is making any good progress here. So: please divide between editing/creating and rendering... O.

It must be nice having your head so far up your @$$. HTML emails should follow HTML standards. There. I've simplified it for you.

"There is no widely-recognized consensus in the industry" - please define the number of individuals required before the Outlook team recognizes the de jure standard to be a consensus. 18,000+ individuals tweeted about this in 24 hours. How many do you think you might get before the release of 2010? This isn't a twit-head-only campaign. Most people join Twitter and send only one post. Guess how many of us can be convinced to join Twitter just to respond to this issue? Does your team really want to create a grass-roots campaign that casts negative light on your products?

"There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability." Really? So when an email has a multi-part message in which the type is "text/html", you somehow are unbound by HTML standards? Please stop confusing me.

As has been said repeatedly here, the issue is what engine is used to render the HTML in Outlook. What's the harm in using the IE engine? The benefits seem clear.

We don't need all of these features. They are rarely useful. Keep it simple. If you need this type of presentation, then use Word or PowerPoint! 90% of emails can suffice as plain text or standard HTML! Please make standard HTML/CSS available in Outlook!

Instead of being confrontational with your end users, why not try to open a dialog? You're trying to provide a solution for people, but ignoring their point by not opening a dialog with them only serves to alienate those people (even if it's not all of them). This reduces your sales rather than increasing them. Why can't there be a compromise where you support properly formatted HTML emails and render them with IE8's engine?
Keep Word as the way emails in Outlook are composed, and output HTML as you see fit, but when an email is received, why not switch over to IE8 for the rendering? I don't see that addressed in your post.

IE8 would have a hissy fit if it had to parse Word-generated emails: only Word can render Word HTML properly. Luckily, I think it's pretty rare for people to bother with anything beyond bulleted lists in normal emails - who could be arsed to generate a pie chart in an email, for example? So it's only an issue for people sending out emails with complicated layouts and lots of images, i.e. for marketing purposes. It's a better idea to send out plain text or very simple emails anyway - most corporate email environments will block images. And if rendering complicated layouts is so fraught with danger because the main email client uses a weird rendering engine, then, weirdly enough, MS are doing us all a favour.

Ian Muir said: "Additionally, implementing the full standard for HTML, XHTML or CSS would open up all kinds of fun new tools for spammers." Way to prove you don't know what you're talking about. HTML, XHTML and CSS are all just harmless plain text. There is no code executing in them, only tags to be interpreted by a renderer. JavaScript execution would be a problem, but nobody is suggesting Outlook allow JavaScript to execute, and disabling its execution in an embedded IE instance should be trivial.

Given that Outlook 2007 will be in use for many (over 10) years to come, whatever is done with Outlook 2010, emails will still have to be kept simple enough for Outlook 2007 to display.

Microsoft wants users to be able to create emails using a Word-like interface and expect them to be rendered consistently by Outlook recipients. Web designers want to be able to use standard HTML to build emails and have them rendered correctly in Outlook. So why don't you support both? Emails sent from Outlook should have a different content type, e.g. Content-Type: text/html+word.
Outlook can then choose a rendering engine (Explorer or Word) based on the content type. Anyone see a feasibility issue here?

All or nothing is a fallacy. The first fallacy is that Microsoft Word could not be made to create standards-based documents. The Office family has started using XML, and in doing so has stuck somewhat closely to the standards. The tools available for standards HTML are there, or could be there, by supporting the software community. The second fallacy is that this must be all or nothing. Go ahead and default to a Word HTML generator that is not using the standards, but give your users who care a chance to stay your users, rather than trying every other app hoping we can find something better. Overall I like Outlook a lot. It has its issues, but any program that does everything it does is going to have some issues; I get that. About 20% of my business is training people to use Outlook, and I would love to be able to set my clients up with something that will work with standards. I hope that Microsoft will act on this and help to move everyone forward. You see, if more people can use Office, they will use Office. Creating islands of isolation is not the way to build market share.

I work for a $14 mil nation-wide not-for-profit. Outlook's proprietary features have caused much embarrassment when members of my staff tried to use them in emails to non-Microsoft-using colleagues, friends and, as we are a not-for-profit, donors. Microsoft has caused my organization to appear unprofessional, and because of this, it is now our company's policy that RTF is turned off in all of our clients. My staff only cares about the rich editing features to the extent that everyone in the world can view their effects. Mr. Kennedy, please listen to the comments on the many posts here and fix your product. If you won't fix your product, then shift the email paradigm before Google beats you to it.
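One commenter above suggests labelling Word-composed mail with a distinct content type and letting Outlook pick a rendering engine from it. A minimal sketch of that dispatch, assuming the hypothetical subtype "text/html+word" (the commenter's suggestion, not a registered MIME type):

```python
# Hypothetical dispatch table for the commenter's proposal; "text/html+word"
# is their suggested label for Word-composed mail, not a registered MIME type.
ENGINE_BY_TYPE = {
    "text/html+word": "word",   # Word-composed mail keeps Word's renderer
    "text/html": "trident",     # standard HTML mail gets the browser engine
    "text/plain": "none",       # plain text needs no HTML rendering
}

def engine_for(content_type: str) -> str:
    """Choose a rendering engine from a Content-Type header value."""
    # Drop parameters such as "; charset=utf-8", normalise case, look up.
    base = content_type.split(";", 1)[0].strip().lower()
    return ENGINE_BY_TYPE.get(base, "none")
```

Under this scheme, mail labelled "text/html; charset=utf-8" would go to the browser engine, while mail Outlook labelled itself would keep the Word renderer, so neither side's rendering breaks.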
It does not matter that it is being done by an e-mail marketing company; we need standards for e-mail! What about when you send to people who are not using Outlook? Microsoft is not the only game in town; if you keep going this way, like you did with IE, you are going to find yourself losing customers! WAKE UP MICROSOFT!!!

Most email clients use web browser components to render emails, and therefore get support for all relevant parts of the web standards. Why can Outlook not use MSHTML to render emails? Posting inline replies to HTML emails doesn't make any sense, so who cares if whatever rich text editor you use (Word in this case) can't open it? And while I'm here, can we please have the broken quoting fixed in plain text emails, so people don't have to use things like (or send emails that are a complete mess).

Deja vu? This sounds all too much like the arguments made against changing standards support in IE6. It started when a Microsoft product was found to handle things in a nonstandard manner that the larger community of developers found troublesome. We're now on the second phase of the process, where the software maintainers hold that their method and strategy is fine and proper, even if it breaks their fundamental compatibility with the rest of the community. Now we'll move on to the last phase, where enough of an outcry is generated that a secondary application like Thunderbird or Gmail begins to get better PR and market share. Well after this happens, Microsoft will begin to implement better compatibility in order to appease the community and retain their user base. It's sad that this has to play out again so soon after the IE6 debacle. One would hope that losing 20% of a monopolistic market would be enough of a jolt to change the corporate culture at Microsoft to recognize the importance of standards on the internet, but this post by William belies that hope.

William, your product is a communications tool, and digital communications rely on standards.
Despite your claim to the contrary, there is a standard for email rendering: HTML and CSS. Yahoo, GMail, OS X Mail, Thunderbird, and Hotmail all render consistently using this standard, leaving Outlook as the IE6-like outlier. If you wish to use Word to make the editing portion of your user experience better, that's fantastic, and I encourage you to do so. Simply make the results obey standard HTML/CSS rules, and use a real HTML/CSS renderer to display emails from others. The screen shots in this post are the best argument for taking Word out of email I've ever seen. I can't believe that you think it's a 'feature' that a user can use proprietary extensions to make 'rich' email messages. Email is about communication, not document creation, and compatibility is critically important. You'd think after the TNEF fiasco you'd start to value standards more. Obviously, you're not listening to us, but in my role as CTO I take the following actions:
1. Any emails with proprietary formats are discarded at my email server.
2. Any client _capable_ of generating invalid content, or _incapable_ of rendering standard content correctly, is banned.
I work at a consulting firm and we're a "Microsoft Gold Certified Partner" (if that makes you listen, M$), and I am primarily a consumer of Outlook, though occasionally I do create a web application where the client wants HTML email to be sent, though NEVER for a marketing campaign. Usually they just want their banner at the top and maybe a background gradient or something, just to keep their branding consistent. If Outlook could render the email with standards it would be very easy to provide this functionality. As it is, I usually have to rework what the client wanted to an extremely stripped design or just use plain text email. But I have a solution... provide a feature similar to IE8's meta tag rendering options. If you're not familiar, including the proper meta tag will instruct IE8 to render the page like IE7.
So if something in IE8 were to break your design, this is an easy way to put a stent in a site while you work out the CSS kinks. Outlook could look in the <head> section of an HTML email and check for a meta tag that instructs Outlook to render the HTML in the email as it would for IE8. If that meta tag isn't there, then use the Word rendering engine. That way, embedded objects and all the other Word proprietary features will be the default, but if the HTML email asks to be rendered in IE8 then it can be. The advantage of this is that the person composing the email would need to know how to turn this feature on. This way nothing needs to be stripped out of the Office suite and re-worked (expensive and time consuming). Instead just continue to add new and better functionality; isn't that the point of buying software in the first place? We all want new and better features; this seems like a no-brainer to me. Word still uses the old WinHelp hand mouse pointer for hyperlinks - this looks so ugly under Vista and Windows 7. Please finally use the system hand mouse pointer in Office 14 (also for the hyperlink in the About box). Why not use both? There are benefits in sending HTML e-mail newsletters through a service such as Campaign Monitor, and Word's rendering engine simply cannot handle rendering these pages properly. Personally I find I receive far more e-mails of this kind than Word-designed e-mails; however, I'm not in a medium or large corporate environment where this might be more common. This switch could be managed through the MIME types. Word-coded e-mails would have a MIME type of HTML/word or something like that, and Outlook would know to render the e-mail with the Word engine. If the e-mail uses the MIME type of HTML/text or whatever the typical HTML e-mail MIME type is, then the IE engine would be used.
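The MIME-type switch suggested just above could, in principle, amount to nothing more than inspecting the message's Content-Type header. A rough sketch in Python follows; the `text/html-word` subtype and the engine labels are invented purely to illustrate the commenter's idea, not anything Outlook actually supports:

```python
# Hypothetical renderer selection based on a message's Content-Type.
# The "text/html-word" subtype is invented for illustration.
from email import message_from_string

def pick_engine(raw_message: str) -> str:
    msg = message_from_string(raw_message)
    content_type = msg.get_content_type()  # e.g. "text/html"
    if content_type == "text/html-word":
        return "Word engine"   # Word-authored mail keeps Word rendering
    if content_type == "text/html":
        return "IE engine"     # everything else gets the browser engine
    return "plain text"

demo = "Content-Type: text/html; charset=utf-8\n\n<p>Hello</p>"
print(pick_engine(demo))  # -> IE engine
```

The point of the sketch is only that the dispatch itself is trivial; the hard part would be agreeing on how senders tag their mail.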
I should add that I don't care if Outlook can actually generate proper W3C compliant code, all I care about is the ability to properly render the e-mail sent to me. I'm going to continue to use third parties to manage mail lists and send newsletters so Outlook's ability to design and generate code is irrelevant. Damien BRUN made a simple and compelling argument above. If Outlook is to support HTML, it should be standard HTML (even if it needs to be a subset). Using tables for layout is no longer standard. The argument: "because there's no standard specifically for email, we're going to ignore all standards" is very, very weak. We already have to design special CSS and even whole pages to support IE (good job on improving IE7 and 8 though). Sending out emails to customers should be at least that easy. A third rendering option in Outlook is just another annoyance that, if you followed standards (or used your IE8 engine) could be avoided. We're not complaining that Word is a terrible editor; but having Outlook react differently to universally-standard HTML makes it a pain to keep a uniform appearance independent of the user's email client. This is the right decision, Outlook should continue using the word engine... ababiec completely misses the point though. No one is complaining about Word being used to CREATE emails in Outlook. That's all well and good. Where the problem lies is with Word being what is used to DISPLAY the emails that are received. This is a problem because Word does not understand CSS, a key component in how a modern, standards-compliant email is designed. This failure prevents emails from looking the way the author intended them. Trying to skew this as a problem that only exists for "zomg dirty evil email marketers" (whoever the hell that is supposed to represent) completely misses the point and devalues the conversation. 
This is about the content-rich emails YOU want to receive and YOU signed up for from YOUR favorite brands and companies, and what MSFT has done since Outlook 2007 is to prevent those emails from displaying properly by using a rendering engine designed for word processing, not for web pages. See, emails today use the same code as web pages, and the logical thing to do is to use your webpage rendering engine (IE), not your word processing engine (Word). Failure to do this has added an extra burden on email designers everywhere to create a regressive version of what their email could otherwise be, just to ensure it displays somewhat reasonably in Outlook in addition to the rest of the email clients that use proper rendering engines. Can you have it display email from other users of Outlook using the Word engine and email from other services using the IE one? Outlook is the number 1 problem e-mail client for us. For some reason, MSFT turned off the ability to submit forms from Outlook. We have an application that sends notifications to users and allows them to respond with a form submission. It works fine in every single mail client EXCEPT Outlook. Basically, we're just recommending that our clients switch to Thunderbird or some other decent MUA. Outlook is just garbage. ababiec, Techibird, I work for a public K-12 school district. Though we do not use e-mail for marketing purposes, our administrative staff, teachers, and coaches use a listserv-type system to communicate with parents on a group basis. Our system has the option of sending HTML email, plain text, or both. Some, though certainly not all, of our users spend a good deal of time composing the emails that will be sent out to a large audience in our community. I would love to see support for basic HTML/CSS rendering according to web standards in Microsoft's bleeding-edge email client, as well as other market-leading clients & webmail etc.
If for no other reason than to have the communication to parents display in a professional and consistent way. That said, we use Group Policy to turn images off by default in Outlook. However, for an e-mail one is interested in, our users have the option of displaying the images. I suppose we are indeed a special-interest group as well, not really representative of business users in general, but here we are. A very good reason to finally switch away from Outlook. Especially with the growing support for Exchange Server through alternative clients... Of course you believe Word is the best tool - it's your tool. The problem is that it is not standards-compliant with the rest of the web. Not surprising given the MS track record in this area. "charting tools, SmartArt, and richly formatted tables for our professional customers" Why do you need these things in an email when you could attach a Word document? Do people really want to compose a chart in an email client and send it? Why listen to us, though, when you can just develop products like a drunken monkey and create such software gems as Vista? Great job on that by the way! Keep up the good work! The problem is not the authoring experience, but rather the display of HTML e-mails. Word is much worse than IE6 at displaying HTML (and why shouldn't it be; it's a word processor, not a browser). Just use a locked-down version of the IE8 engine for rendering HTML e-mail. And by all means continue to use Word for authoring - and make it produce proper HTML. I don't understand why you had to ruin a working experience in Outlook 2007. I am not a marketer, but an Outlook and Word user. I am actually a big fan of MS Office products; they're ubiquitous for a reason, they're high-quality products. MS's argument here is that customers are used to Word and prefer to use rich tools to compose emails; no counterpoint there. Word (even though it has more overhead) is a decent way to compose emails...
but that does not make Word a good renderer for email. Why not use Word to compose and IE to render? What's the drawback there... give the users the flexibility they deserve in presenting their information the way they want to present it. While I do get spam, and if that breaks I don't care, I also subscribe to marketing emails that I would like to see unbroken. MS, please don't push users further towards using web-based productivity software... won't you hate to lose customers over something silly like how emails render? And users will switch over something silly like that! This just makes no sense: there is a subset of HTML for email, it's called web standards. Just as always, you're behind everyone. Microsoft won't listen unless they feel threatened by some sort of competition. So, here's what I do. All email messages have a header explaining that, if the recipient is using a Microsoft product, then they probably can't see the message properly. I have a filter on my inbox that returns to sender any mail created using Microsoft products along with a short explanation. On some of my web sites I redirect IE users to a more basic version of the site complete with an explanation as to why this is happening. These steps (especially the inbox filter) have an interesting effect on some people. A few get in touch and start asking questions. I tell them the truth as I understand and experience it. It is amazing how many of them end up using Thunderbird/Firefox/OpenOffice after this conversation. This is a shame because I would prefer it if Microsoft listened, got it right, and all my clients (and me) used nothing but Microsoft. Support would be so much easier. Until I can rely on Microsoft products to interoperate with the rest of the world, this won't be possible. We all have to take a stand, vote with our wallets, use any opportunity to spread the word(!) and create some sort of impetus for Microsoft to just LISTEN! Simon. You guys have got to be kidding me.
No standards for email? Then why does Outlook Express, now called Internet Mail, support modern CSS? Why does Apple Mail support modern CSS? Why does Thunderbird support modern CSS? Why does Microsoft's own XBox division not bother to format emails for Outlook 07 users? There's a reason the Office division is viewed derisively at Microsoft, and this is a good example of why. Rather than admit that your decision is based on something idiotic like trying to cater to users who use Outlook to send email to other people using Outlook, you make up some ridiculous nonsense about a lack of standards. If standards didn't exist, why do HTML emails look nearly the same across all email clients except for those that use WORD as their rendering engine? I was at a web standards conference a couple of years ago and a rep from Microsoft sat at our table at breakfast and asked what Microsoft could do to gain more acceptance from the standards community. It's unfortunate that only ONE product division seemed to take those sentiments to heart. Heed the collective cry* of anguish over your misanthropic decision to continue to use Word to render email in Outlook 2010. Please, listen to your customers and use a proper rendering engine that supports the basic HTML and CSS standards set down by the W3C. These standards have been in place for many, many years now, and it is up to you to bring your software up to meet those standards as best you can. Anything less is both a flagrant disregard for your customers, and a grave lack of ambition from the Outlook engineering team. *Over 21,000 as of writing this comment -> I have no desire for CSS or anything else that -> is not needed in regular office -> Even if the renderer was improved, what would -> it do when I click on reply and started editing? -> If the editor could not support everything the -> renderer could, the display of the email would -> change drastically. 
Is Outlook there to let marketing people send me fancy-looking emails, or to let me communicate with my co-workers? 99% of important emails I read in Outlook come from within the company; most internet emails display with no problems. E.g. I never had an email from Gmail fail to display nicely in Outlook. Why don’t marketing people just keep the message (text) and use a tool like Gmail to design and send their emails? Also: to the "text email only" purists out there, you probably also think the internet should only be text-based web pages. A lot of people LIKE to receive nice-looking emails. That's what multipart messages are for, so if you want plain text you can have it. That's not the issue here. How about HTML 4 and CSS 2.1? How's that for standards? Clearly, not everyone agrees that Word is the best way to author e-mail content, and I can't imagine anyone would say Word's HTML rendering capabilities are "great." While I understand that it needs to be easy for people who aren't web developers to author rich e-mail, just about anyone who does e-mail campaigns needs the same cross-platform compatibility that we pretty much have in browsers now. It's been far too long that developers have had to struggle to make designs (email or web) work in MS products, and, frankly, well-designed HTML e-mails that look great in most clients look terrible in Outlook, which is bad for end users. Can you not have a DOCTYPE switch or HTTP-EQUIV META tag like IE has to decide whether the e-mail should be rendered via Word or via Trident? By default, you can use Word rendering, and those of us that know what we are doing can turn on Trident / IE rendering with a flag. This only seems fair given that we all have to spend a lot of time (and someone has to pay for it, like your end users that pay designers to make e-mail campaigns) making great HTML e-mails look marginally good in Outlook.
Outlook is becoming the new IE6, and it's pretty clear that Microsoft doesn't actually care about interoperability, despite making statements otherwise, and certainly doesn't care about developers. Is there not some middle ground like my suggestion above that can benefit everyone without hurting anyone? Nice to know that as Microsoft moves forward with Outlook, they seem to go in reverse. I work in a large company that, of course, uses the Microsoft Office suite. To date, I do not believe that I have EVER seen a single email from any other fellow employee using Microsoft Outlook that contained native Word-based SmartArt, Charting, or Tables (except perhaps some tables that were created in either MS Word or MS Excel that get pasted into an email either as a Word-based table object or HTML-based table object... I'm not entirely sure). If users want to send something that clearly depends upon Word-based rendering, then they simply ATTACH a Word-based document with a clearly identified Word filename extension. On the other hand, I have seen thousands of emails that utilize HTML rendering or HTML-based tables. It cannot be disputed, at least with a clear conscience, that any HTML-renderer-based email object or mechanism would have far broader acceptance than a Word-specific email object. Even Microsoft's tools such as Word, Excel, and PowerPoint support conversion to HTML-based forms. Wouldn't it be a much more universally accepted solution to allow users that choose to do so to author their email content in whatever application they wanted... and then support HTML-based cut-n-paste operations into Outlook? That seems far more sensible to the vast majority of users. Please do not let the misguided FTC and European Union actions of the past intimidate Microsoft away from the sensible use of HTML rendering in other applications such as Microsoft Outlook.
After reviewing these blog entries, I don't see a good reason why Microsoft can't use Internet Explorer to render emails in Outlook. Security should be a non-issue; I mean, IE is supposedly secure enough to browse the web, so it ought to be able to handle spam. I'm in end-user support, not marketing. Believe it or not, there are actually people out there who like to read their spam (one guy even prints out his travel offer emails) and if they don't look right, I hear about it. Use Word for composition, and IE 8 for display, problem solved. No one cares how Microsoft solves this problem - whether you fix the Word engine or use the IE8 renderer. I fully sympathize with the concept that a Word-based UI makes it easier for MS' users to author rich emails. But rich emails are based on HTML, and HTML is "a sanctioned standard or an industry consensus"... just 'meet the bar' of the standard. I can't imagine how the biggest software company in the world could accept any less of themselves. Why on earth would anybody want to use HTML in an email message? My experience is that it is only ever spammers and scammers who want to do this. I read all my emails in plain text. If the sender can't get their message across in plain text, then I don't need to know. And for all those talking about a standard, there is one already. It's called RFC 822. The nasty conclusion we can all figure out from this is... Word doesn’t compose HTML in any remotely standard way. It's full of inside tricks and shivs to make it do Office-only crap and account for backward compatibility going back 20 years. If Word’s HTML was even remotely easy to normalize – even by post-process – Microsoft would eagerly do it. This shows that Word HTML really is THAT bad. (And William's "answer" totally misses the point.
The point is about rendering, not composition.) Judging by the official response to the campaign initiated by Campaign Monitor (and supported by industry professionals using a wide variety of ESPs), I'm guessing Microsoft won't pay any additional heed to these comments. But it does seem to me that the official response is a clear indication that William Kennedy, Corporate Vice President, Office Communications and Forms Team, Microsoft Corporation (or the PR person who wrote this response) either doesn't understand the problem or truly doesn't care. Forcing the industry to take a step backwards in compliance to use table-based layout simply to render email properly -- when they could use the IE rendering engine -- is the business equivalent of taking your ball and going home. Hopefully the exodus from Windows to other operating systems (or, at the least, to other email clients) will continue to the point where Redmond is forced to at least consider the argument from a perspective other than the status quo. Personally, I feel like the campaign was interesting and worth it, and a nice example of how new technologies can be used to give voice to a disparate group. Too bad, in this case, it was a proverbial tree falling in an empty forest... no one was there to hear it. /Jim It's not about authoring emails, you ninnies; it's about displaying the incoming emails with any semblance of standards compliance. Why does Microsoft continue to fracture internet standards? This is why every web developer hates IE. Go ahead and keep Word for composing emails if you must, but use a true web engine for rendering the incoming emails. How do users of Outlook without Office do HTML mail? Either they can't, or somehow there's a way to switch Word editing off in this scenario. Lots of time is spent on design and compatibility. Why make it harder? We want our designs to appear as they should! Lame and disappointing campaign execution by 'Let's Fix It'.
While I agree with the cause, using Twitter Corporation's proprietary service as the only means for an internet user to submit feedback is just as bad/backward as using HTML tables in Outlook. That's like, um, forcing people to use Microsoft Corporation's technology (READ: Outlook) as the sole means to manage email communication. So let's see: No web site feedback form was constructed for this cause. Meaning that no web designer was hired to design a form---even a (gasp!) table-based one!---or CSS out an email and/or contact page, etc., to collect the data, nor were any database professionals used to set up a system to parse the collected feedback and organize/collate it for the organizers. Nope. Just passed off to a single company: Twitter---that you are stuck having to use as the sole means to communicate this campaign via the (open?) web. This is so late-1990s, when 'AOL Keyword: National Geographic' was on the cover of that pretty yellow magazine instead of the more sensible and open '' they use now. Backward we go... just like Outlook 2010. There is nothing wrong with using Twitter as an additional means, like Facebook, etc., to communicate and enhance a web site. But it should not be the _only_ means, just like using IE should not be the only means to access the web. That said, can you folks please make Outlook 2010 work better with CSS? Chris We have already run into this problem with a newsletter that was generated by a vendor for internal distribution. It doesn't display properly in Outlook 2007, though it does in IE7 as well as Firefox, Outlook 2003, Outlook Express, and even Windows Mail (Vista). If you try to edit it in Outlook 2007 you can get it to display properly, but the resulting e-mail is now 400k instead of 17k. Outlook should display web content exactly the same way as web browsers do. All I want to do is float a div. While there is no technical HTML standard, it is a function widely used.
Can you explain why a globally used platform such as Outlook has decided not to support this frequently applied HTML code? Mail messages should not be in Word-rendered HTML; in fact, they should not be in HTML at all. Mail messages should be in plain ASCII. GP> Nope, they don't get it. They just don't get it. I respectfully disagree. There is an informal consensus where most e-mail clients support at least some HTML layout features. There is certainly a consensus that tables should not be used for layout. Unless the opinion of the Outlook team is that e-mails should not contain layout at all? E-mail is moving beyond messages between individuals and is now seen as a multimedia communications channel. To cripple Outlook 2010 by considering rich formatting only as a way for people to slightly embellish their individual messages is to hold the product back from the way people and organizations are using e-mail. @TechieBird, ababiec: This isn't just marketers - many of my business clients have expressed disappointment at the limitations of e-mail stationery. Email is essentially an internet experience. Increasingly, email creation and dissemination is performed exclusively through the internet. Rather than embracing the standard language and construction conventions of the internet, Microsoft is sticking to a format that has repeatedly been proven to be incompatible with online use. Not only is this disappointing from the standpoint of someone who would like to be able to use the full range of HTML tools to create emails, it seems like Microsoft is shooting itself in the foot from a business standpoint, especially given the rise of internet-based hand-held devices. Microsoft is clearly focusing on the priorities of someone whacking together 'professional-looking' HTML emails here and not the experience of people receiving them...
While in one sense I can understand them thinking of the needs of those who might want to create such emails, I doubt this would be 100% of Outlook users - on the other hand, almost all users will use the software to receive HTML email, and using Word to render it is a crappy solution. They are failing not only the designers and developers who want to create user-friendly newsletters, but also their customers who want to receive them. To spout complete tripe about 'subsets of HTML' and a lack of industry consensus or standards is at best delusional and at worst downright dishonest. There are W3C standards for HTML and CSS. These are applicable to browsers and email programs alike. Just because Microsoft does not feel they should apply to their software does not make the known and commonly accepted standards simply not exist. Of all companies, Microsoft should not be accusing groups such as the Email Standards Project (which may include companies but also includes individuals and users with no affiliation to Freshview) of pushing their own agenda or interests. If anything, that is the pot calling the kettle black. And a final note... "Word has always done a great job of displaying the HTML which is commonly found in e-mails around the world" Perhaps on Mars, but on planet Earth, it ignores the most common HTML and CSS standards and makes HTML emails which work in almost every other email program look, quite frankly, like a pig's breakfast. How is it that one of the largest and most successful companies is unable to manage what a large number of smaller companies have already achieved? The main problem is that Word doesn't look the same when you are composing it as when you receive it in the IE display engine. So your solution is to replace the valid display engine (IE) with an invalid Word HTML display engine? That seems a little backwards to me. Open up to real standards, not your own internal walled-off standards.
You are starting to back yourself into a corner with these sorts of moves. Why not concentrate on updating the HTML composing portion of Word so that it emits (renders?) valid HTML that will be displayed correctly in an IE8-type display engine. (or other valid HTML renderer eg. gecko etc) Otherwise Outlook becomes a walled garden.. and as more businesses run email marketing campaigns (yes not spam ones) the CEOs are going to start wondering why their carefully constructed emails look fine in every other email client, but look like crap in Outlook. That's when Outlook is going to get tossed out... "But in the real world where resources are limited and features are prioritised," Yet somehow Mozilla, Google, Opera, and Apple have all managed to make wonderfully featured browsers with full current standards compliance, with several aspects of HTML5 and CSS3... Microsoft has no less of a resource supply than any of them. What's their excuse for being the ONLY ONES still stuck in 2000? The rest of the internet has moved on, and we're busy yelling back at them, trying to help them catch up, but they continue to ignore us. If Microsoft actually fixed IE's rendering engine and made it work right in Outlook, I'd have one less major reason to be mad at them. @TechieBird: You're missing the point. Yes, your emails may look right to you. Know why? Because some poor developer spent days ripping his hair out trying to make it look right. That's the problem. Emails continue to look right because we continue to build them using horrid techniques, as they're the only ones that work on all clients. All we want is for Microsoft to fix the rendering engine in Outlook so we can build proper HTML that will still look right in all the other clients (which already understand standards), and also look right in Outlook. Whether or not fixoutlook.org is sponsored by Campaign Monitor is irrelevant. What *is* relevant is that Microsoft does not support web standards. 
And this needs to be fixed regardless of who is pushing the agenda. ababiec / TechieBird - if all you want to do is send and receive internal email then the current Microsoft stance is fine for you. But Outlook needs to work for all users, not just internal corporate email. The problem is that if someone sends you an email that wasn't created in Outlook, whether that's:
- an external supplier that you _do_ want to hear from
- a website you've registered with sending a welcome email
- or any other email client on the planet
then there is a fair chance that email could look scrappy with the current Outlook rendering just because basic CSS attributes are not supported. This is especially the case when emails that rendered fine in Outlook 2003 were suddenly broken in Outlook 2007, seemingly without any warning. Having to add outdated attributes just to do the same thing CSS is already designed to do makes no sense (e.g. align="right" for images instead of being able to use a class that sets "float:right"). How can there be a security issue in supporting one and not the other? All that is being asked is that Word 2010 include support for a few basic _standard_ CSS rendering attributes that should be just as simple to support on the editing side; Word already knows how to render the things that CSS describes with floats and padding/margins, it just decides to ignore converting these at the moment. Please continue ignoring the mass-mailers on this, Microsoft. In fact, please start a counter-tweet campaign so I can voice my solidarity with you against the tweeters. While bold text and bulleted lists are useful, email should never look like a web page. There should never be navigation bars, sidebars, or anything else that requires CSS-based or even table-based layouts. It's been stated before and I totally agree: I don't care if you want to use Word to AUTHOR an email, but the HTML that it generates is awful.
The end result is passing emails between Outlook and any other client is a hassle -- Anything I send from my Mac to Outlook recipients gets mangled (on their end) and vice-versa. So go with my blessings: use Word to author emails, just please update the HTML it outputs to something that every other program in the rest of the world can understand properly. And if you're not going to use IE8 to render emails, help Word interpret the good, clean HTML that every other program in the rest of the world generates. As for standards, they DO exist. There are a number of very, very basic CSS commands that pretty much every other email program supports and Outlook doesn't. As a number of people have stated, standards for HTML exist as outlined by W3C -- if you're using HTML in email, shouldn't it adhere to the same standards?! (As a sidenote, you want people to use Word to create HTML -- shouldn't the HTML it creates adhere to W3C standards and basic web design best practices also?) To all the people saying this is about making life easier for marketers, you're wrong. Look at what having standards on the web has done for us all: more quality websites that look better, work better on all browsers, and can do more. If Microsoft makes life easier for marketers, they'll be making life easier for consumers also. Microsoft has a unique opportunity here: as the maker of one of the most widely-used email clients, they have the opportunity to lead the charge to universal standards for email that make EVERYONE'S lives better. Instead, they're going with their own proprietary stuff that creates bloated, junky emails. It's just sad. I think MS should update either the Word engine or give users flexibility to use standard HTML engine (from browser) for email display. It drives me nuts when the email cannot display the correct CSS tags because stupid word cannot decipher it right. Please, please, please support CSS rendering. 
I am a developer and writer, and I support increasing standardization between web and email rendering. As for Outlook, I switched from 2007 BACK to 2003 partly because of the lack of composing and rendering choices (plus performance). I have nothing to do with marketing email -- just writing user documentation for software.

"Word has always done a great job of displaying the HTML which is commonly found in e-mails around the world." No it hasn't. The consensus is to support HTML as a whole, not just a subset, and to add on support for CSS as well. "The 'Email Standards Project' does not represent a sanctioned standard or an industry consensus in this area. Should such a consensus arise, we will of course work with other e-mail vendors to provide rich support in our products." If by "industry consensus" you mean "Microsoft's consensus," you're right. Yahoo!, Apple, Google, and other email client developers are all supporting web standards. The Email Standards Project does represent the consensus of the *WEB DESIGN* industry.

I found this issue interesting, because I recently encountered this problem in Outlook 2007 when rendering HTML generated by Microsoft's own software. Automatically e-mailed reports from SQL Server Reporting Services are not properly displayed in Outlook 2007 - the tables are compressed horizontally. This annoyance was reported to me by some of my colleagues. I still use Outlook 2003 and the reports are rendered correctly in my e-mail.

Reading all this makes me smile. It justifies my stance that: a) I really like only getting text emails, not HTML; b) I really, really hate email in general. Funny that twitter folks are complaining about email.

I am also an Outlook user, from 2000, to XP, 2003 to 2007. I would much rather Microsoft adhere to open standards than 'cripple' their software in ways to achieve their marketing goals. People who know me know that I don't use that term lightly! Come on Microsoft, you are better than this!
I love Word. I love Outlook. But please *do* follow web standards. Don't do another IE out of Outlook, where IE "extended" the web standards. Compliance is key. Prove to us that you really rock at MS!

I'm with ababiec and TechieBird. E-mail is not the web. It's an entirely different medium. Saying that "it's not about how it's composed but about how it's rendered" is like saying "It doesn't matter that an airplane must take off from an airport. It should be able to land on any piece of concrete the pilot wishes, including the Interstate, and still deliver the passengers." A hammer should never be used to drive a screw. Frankly, anything a mail client can do to _disrupt_ mail marketing is 100% OK by me. If I want the information, I'll use a web browser to read the marketer's web page.

Well, after looking at the campaign efforts I thought Microsoft would take the user experience stuff seriously and work on improving it. We don't care what rendering engine you use; if at the end of the day Outlook can display what a web-based mail client can, then the job is done. It's so sad that Outlook to this day does not support images as backgrounds, which I feel would enable lots of rich new experiences when sending creative mails. Siddharth Menon, Borget Solutions

There's some merit to Word generating OUTGOING emails. This allows additional functionality and user familiarity for many. However, I would like to see RECEIVED emails rendered with the Explorer engine by default. Wouldn't this solve our problem?

You have got to be kidding me. You are sacrificing web standards in the name of MS Word graphs and clip art? Give me a break. How about you (Microsoft) stop being the least common denominator when it comes to standards? Is abusing standards the hill you want to die on? Jeez! You are holding EVERYONE back by doing this. What a reliable disappointment Microsoft has become. P.S. You do not need MS Word to enable a WYSIWYG editor...
you have the ASP.NET custom control already developed in the AJAX Control Toolkit!

As has been said at length above, the complaint is about Outlook continuing to use Word to RENDER HTML, not Word as an editor to create messages. It's nothing but disingenuous to claim otherwise. Most people making the complaint are not spammers. Lots of people send out HTML email; lots of businesses large and small send 100% legitimate newsletters, etc. Asking for proper rendering of HTML email (and YES, HTML IS A STANDARD) is not something that only spammers want.

99% of my contacts send me plain-text messages and that's why I switched Outlook to show me text-only messages by default. I don't care about HTML messages. HTML belongs inside websites and not inside e-mails.

Why not have a flag or something? A flag that says, "this was made with the Word engine," and when Outlook sees that it uses its handy Word engine renderer. Otherwise, it uses IE8. Surely I'm the 400123125th person (or so) to suggest such a 'solution' so I'm guessing it's not actually a solution. Would be curious to hear why not.

The Outlook team wants to thank everyone who has responded to this post and the online campaign around Outlook and Word. We value your feedback and have read and logged every comment on this page. At this time, we believe that the unique and relevant perspectives and opinions of this community have been stated and appropriately noted, and rest assured we will continue to read and record any additional feedback made, though it will not be published. Dev Balasubramanian, Outlook Product Manager

The power of Word? Are you kidding me? Open any web page today in Word and it can't even retain the exact layout, let alone generate clean HTML. Is Word only for designing from blank HTML pages and editing existing pages created in Word? If not, why is the renderer stuck in the 90s? Please improve Office's horrible renderer and bring it up to date so it at least retains the layout of modern web pages.
http://web.archive.org/web/20090627004005/http:/blogs.msdn.com/outlook/archive/2009/06/24/the-power-of-word-in-outlook.aspx
I'm building an app (Ruby 1.9.3, Rails 3.2.2). The app has teams, and in my teams_controller I want to have a method called apply_to_coach_team, with a form that allows you to apply to coach a new team. The problem is, every time I click the link to go to the form I get the error:

    Couldn't find Team with id=apply_to_coach_team

The app seems to be going to the teams#show method, and looking for a team with the id above.

In routes.rb I have:

    match "/teams/apply_to_coach_team" => "teams#apply_to_coach_team", :as => :apply_to_coach_team

My teams_controller has just an empty method:

    def apply_to_coach_team
    end

and my view is just a stub for now:

    <h1> apply to coach new team </h1>

My link to this page looks like this:

    <%= link_to "apply to coach new team", apply_to_coach_team_path, class: "btn btn-mini btn-action" %>

Can anyone tell me what simple mistake I'm making, please? Thanks in advance --.
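The question doesn't show the rest of routes.rb, but the symptom (teams#show receiving id=apply_to_coach_team) is the classic sign that a `resources :teams` declaration appears before the custom `match` line, so the request is captured by the `GET /teams/:id` route first. A hedged sketch of one fix, using the route and action names from the question (the `MyApp` application name is invented), is to declare the action as a collection route inside the resource block:

```ruby
# config/routes.rb (Rails 3.2 syntax) -- assumes the app also declares
# resources :teams, which the question does not show.
MyApp::Application.routes.draw do
  resources :teams do
    collection do
      # Routes GET /teams/apply_to_coach_team to teams#apply_to_coach_team,
      # and is matched before the member route GET /teams/:id.
      get :apply_to_coach_team
    end
  end
end
```

Note that the URL helper then becomes `apply_to_coach_team_teams_path`, so the link in the view would need to change. Alternatively, keep the existing `match` line but move it above `resources :teams`: Rails matches routes in the order they are defined, and the first match wins.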
https://grokbase.com/t/gg/rubyonrails-talk/1258sqhw0m/rails-simple-noob-question-about-routes
Introduction to the DWARF Debugging Format
Michael J. Eager, Eager Consulting
February, 2007

It would be wonderful if we could write programs that were guaranteed to work correctly and never needed to be debugged. Until that halcyon day, the normal programming cycle is going to involve writing a program, compiling it, executing it, and then the (somewhat) dreaded scourge of debugging it. And then repeat until the program works as expected.

It is possible to debug programs by inserting code that prints values of various interesting variables. Indeed, in some situations, such as debugging kernel drivers, this may be the preferred method. There are low-level debuggers that allow you to step through the executable program, instruction by instruction, displaying registers and memory contents in binary.

But it is much easier to use a source-level debugger, which allows you to step through a program's source, set breakpoints, print variable values, and perhaps a few other functions such as allowing you to call a function in your program while in the debugger. The problem is how to coordinate two completely different programs, the compiler and the debugger, so that the program can be debugged.

Translating from Source to Executable

The process of compiling a program from human-readable form into the binary that a processor executes is quite complex, but it essentially involves successively recasting the source into simpler and simpler forms, discarding information at each step until, eventually, the result is the sequence of simple operations, registers, memory addresses, and binary values which the processor actually understands. After all, the processor really doesn't care whether you used object-oriented programming, templates, or smart pointers; it only understands a very simple set of operations on a limited number of registers and memory locations containing binary values.

As a compiler reads and parses the source of a program, it collects a variety of information about the program, such as the line numbers where a variable or function is declared or used. Semantic analysis extends this information to fill in details such as the types of variables and arguments of functions. Optimizations may move parts of the program around, combine similar pieces, expand inline functions, or remove parts which are unneeded. Finally, code generation takes this internal representation of the program and generates the actual machine instructions. Often, there is another pass over the machine code to perform what are called "peephole" optimizations that may further rearrange or modify the code, for example, to eliminate duplicate instructions.

All in all, the compiler's task is to take the well-crafted and understandable source code and convert it into efficient but essentially unintelligible machine language. The better the compiler achieves the goal of creating tight and fast code, the more likely it is that the result will be difficult to understand.

During this translation process, the compiler collects information about the program which will be useful later when the program is debugged. There are two challenges in doing this well. The first is that, in the later parts of this process, it may be difficult for the compiler to relate the changes it is making to the program to the original source code that the programmer wrote. For example, the peephole optimizer may remove an instruction because it was able to switch around the order of a test in code that was generated by an inline function in the instantiation of a C++ template. By the time it gets its metaphorical hands on the program, the optimizer may have a difficult time connecting its manipulations of low-level code to the original source which generated it.

The second challenge is how to describe the executable program and its relationship to the original source with enough detail to allow a debugger to provide the programmer useful information. At the same time, the description has to be concise enough so that it does not take up an extreme amount of space or require significant processor time to interpret. This is where the DWARF Debugging Format comes in: it is a compact representation of the relationship between the executable program and the source, in a way that is reasonably efficient for a debugger to process.

The Debugging Process

When a programmer runs a program under a debugger, there are some common operations which he or she may want to do. The most common of these is setting a breakpoint to stop the debugger at a particular point in the source, either by specifying the line number or a function name. When this breakpoint is hit, the programmer usually would like to display the values of local or global variables, or the arguments to the function. Displaying the call stack lets the programmer know how the program arrived at the breakpoint in cases where there are multiple execution paths. After reviewing this information, the programmer can ask the debugger to continue execution of the program under test.

There are a number of additional operations that are useful in debugging. For example, it may be helpful to be able to step through a program line by line, either entering or stepping over called functions. Setting a breakpoint at every instance of a template or inline function can be important for debugging C++ programs. It can be helpful to stop just before the end of a function so that the return value can be displayed or changed. Sometimes the programmer may want to bypass execution of a function, returning a known value instead of what the function would have (possibly incorrectly) computed. Some debuggers allow the programmer to call functions in the program being tested. One might want a debugger (or some other program analysis tool) to keep track of whether certain sections of code had been executed or not.

There are also data-related operations that are useful. For example, displaying the type of a variable can avoid having to look up the type in the source files. Displaying the value of a variable in different formats, or displaying memory or a register in a specified format, is helpful.

There are some operations which might be called advanced debugging functions: for example, being able to debug multi-threaded programs, or programs stored in read-only memory. In the not-so-distant past, debugging programs that had been optimized would have been considered an advanced feature.

The task of a debugger is to provide the programmer with a view of the executing program in as natural and understandable a fashion as possible, while permitting a wide range of control over its execution. This means that the debugger has to essentially reverse much of the compiler's carefully crafted transformations, converting the program's data and state back into the terms that the programmer originally used in the program's source. The challenge for a debugging data format like DWARF is to make this possible, and even easy.

Debugging Formats

There are several debugging formats: stabs, COFF, PECOFF, OMF, IEEE-695, and three versions of DWARF, to name some common ones. I'm not going to describe these in any detail. The intent here is only to mention them to place the DWARF Debugging Format in context.

The name stabs comes from symbol table strings, since the debugging data were originally saved as strings in Unix's a.out object file's symbol table. Stabs encodes the information about a program in text strings. Initially quite simple, stabs has evolved over time into a quite complex, occasionally cryptic and less-than-consistent debugging format. Stabs is not standardized nor well documented.¹ Sun Microsystems has made a number of extensions to stabs, and GCC has made other extensions while attempting to reverse engineer the Sun extensions. Nonetheless, stabs is still widely used.

COFF stands for Common Object File Format and originated with Unix System V Release 3. Rudimentary debugging information was defined with the COFF format, but since COFF includes support for named sections, a variety of different debugging formats, such as stabs, have been used with COFF. There are many variations of COFF, including XCOFF (used on IBM RS/6000), ECOFF (used on MIPS and Alpha), and the Windows PECOFF. The most significant problem with COFF is that, despite the Common in its name, it isn't the same in each architecture which uses the format. Documentation of these variants is available to varying degrees, but neither the object module format nor the debugging information is standardized.

PECOFF is the object module format used by Microsoft Windows, beginning with Windows 95. It is based on the COFF format and contains both COFF debugging data and Microsoft's own proprietary CodeView or CV4 debugging data format.

OMF stands for Object Module Format and is the object file format used in CP/M, DOS and OS/2 systems, as well as a small number of embedded systems. OMF defines public name and line number information for debuggers, and can also contain debugging data for Microsoft CV, IBM PM, or other debuggers. OMF only provides the most rudimentary support for debuggers.

IEEE-695 is a standard object file and debugging format developed jointly by Microtec Research and HP in the late 1980's for embedded environments. The debugging format is block structured, which corresponds to the organization of the source better than other formats. It became an IEEE standard in 1990 and was widely adopted within the embedded sector, where it continues to be used today. Although the original standard is readily available from IEEE, Microtec made a number of extensions to support C++ and optimized code which are poorly documented, and the IEEE standard was never revised to incorporate the Microtec and other changes. Despite being an IEEE standard, in many ways IEEE-695 is more like the proprietary formats.

A Brief History of DWARF²

DWARF 1 - Unix SVR4 sdb and PLSIG

DWARF originated with the C compiler and sdb debugger in Unix System V Release 4 (SVR4), developed by Bell Labs in the mid-1980s. Although the original DWARF had several clear shortcomings, the Programming Languages Special Interest Group (PLSIG), part of Unix International (UI), decided to standardize the SVR4 format with only minimal modification, and documented the DWARF generated by SVR4 as DWARF Version 1 in 1992.

DWARF 2 - PLSIG

The PLSIG continued on to develop and document extensions to DWARF to address several issues, the most important of which was to reduce the amount of data that were generated. There were also additions to support new languages such as the up-and-coming C++ language. DWARF Version 2 was released as a draft standard in 1993, but a final standard was never released. Shortly after the PLSIG released the draft, in an example of the domino theory in action, fatal flaws were discovered in Motorola's 88000 microprocessor and Motorola pulled the plug on the processor. That resulted in the demise of Open88, a consortium of companies that were developing computers using the 88000. Open88 in turn was a supporter of Unix International, the sponsor of PLSIG, and its collapse contributed to UI being disbanded. When UI folded, all that remained of the PLSIG was a mailing list and a variety of ftp sites that had various versions of the DWARF 2 draft standard.

With no standards body behind the draft, several organizations independently decided to extend DWARF 1 and 2. Some of these extensions were specific to a single architecture, but others might be applicable to any architecture. Unfortunately, the different organizations didn't work together on these extensions, and documentation on the extensions is generally spotty or difficult to obtain. Or, as a GCC developer might suggest, the extensions were well documented: all you have to do is read the compiler source code. DWARF was well on its way to following COFF and becoming a collection of divergent implementations rather than being an industry standard.

DWARF 3 - Free Standards Group

Despite several online discussions about DWARF on the PLSIG email list (which survived under X/Open, later Open Group, sponsorship after UI's demise), there was little impetus to revise (or even finalize) the document until the end of 1999. At that time, there was interest in extending DWARF to have better support for the HP/Intel IA-64 architecture, as well as better documentation of the ABI used by C++ programs. These two efforts separated, and the author took over as Chair for the revived DWARF Committee. Following more than 18 months of development work and the creation of a draft of the DWARF 3 specification, the standardization hit what might be called a soft patch.

The DWARF Committee became the DWARF Workgroup of the Free Standards Group in 2003. The committee (and this author in particular) wanted to insure that the DWARF standard was readily available, and to avoid the possible divergence caused by multiple sources for the standard. Active development and clarification of the DWARF 3 Standard resumed early in 2005, with the goal of resolving any open issues in the standard. A public review draft was released to solicit public comments in October, and the final version of the DWARF 3 Standard was released in January, 2006.

After the Free Standards Group merged with Open Source Development Labs (OSDL) to form the Linux Foundation, the DWARF Committee returned to independent status and created its own web site at dwarfstd.org.

DWARF Overview

Most modern programming languages are block structured: each entity (a class definition or a function, for example) is contained within another entity. Each file in a C program may contain multiple data definitions, multiple variable definitions, and multiple functions. Within each C function there may be several data definitions, followed by executable statements. A statement may be a compound statement that in turn can contain data definitions and executable statements. This creates lexical scopes, where names are known only within the scope in which they are defined. To find the definition of a particular symbol, you first look in the current scope, then in successive enclosing scopes until you find the symbol. There may be multiple definitions of the same name in different scopes.

DWARF follows this model in that it is also block structured. Each descriptive entity in DWARF (except for the topmost entry, which describes the source file) is contained within a parent entry and may contain children entities. If a node contains multiple entities, they are all siblings. The DWARF description of a program is a tree structure, similar to the compiler's internal tree, where each node can have children or siblings. The nodes may represent types, variables, or functions.

This is a compact format, where only the information that is needed to describe an aspect of a program is provided. The format is extensible in a uniform fashion, so that a debugger can recognize and ignore an extension, even if it might not understand its meaning. (This is much better than the situation with most other debugging formats, where the debugger gets fatally confused attempting to read modified data.) DWARF is also designed to be extensible to describe virtually any procedural programming language on any machine architecture, rather than being bound to only describing one language, or one version of a language, on a limited range of architectures.

While DWARF is most commonly associated with the ELF object file format, it is independent of the object file format: it can be, and has been, used with other object file formats. All that is necessary is that the different data sections that make up the DWARF data be identifiable in the object file or executable. DWARF does not duplicate information that is contained in the object file, such as identifying the processor architecture or whether the file is written in big-endian or little-endian format.

Figure 1 shows C's classic hello.c program, together with a simplified graphical representation of its DWARF description.

hello.c:
    1: int main()
    2: {
    3:     printf("Hello World!\n");
    4:     return 0;
    5: }

DIE – Compilation Unit
    Dir      = /home/dwarf/examples
    Name     = hello.c
    LowPC    = 0x0
    HighPC   = 0x2b
    Producer = GCC

    DIE – Subprogram
        Name     = main
        File     = hello.c
        Line     = 2
        Type     = int
        LowPC    = 0x0
        HighPC   = 0x2b
        External = yes

    DIE – Base Type
        Name     = int
        ByteSize = 4
        Encoding = signed integer

Figure 1. Graphical representation of DWARF data

Debugging Information Entry (DIE)

Tags and Attributes

The basic descriptive entity in DWARF is the Debugging Information Entry (DIE). A DIE has a tag, which specifies what the DIE describes, and a list of attributes which fill in details and further describe the entity. A DIE (except for the topmost) is contained in, or owned by, a parent DIE, and may have sibling DIEs or children DIEs. Attributes may contain a variety of values: constants (such as a function name), variables (such as the start address for a function), or references to another DIE (such as for the type of a function's return value). For clarity, the examples below use the tag and attribute names defined in the DWARF standard: the names of tags are all prefixed with DW_TAG, and the names of attributes with DW_AT. In the actual DWARF data, a reference to a DIE is the offset from the start of the compilation unit where the DIE can be found; in this and the following examples, the DIEs are simply labeled sequentially. References can be to previously defined DIEs, or to DIEs which are defined later.

Types of DIEs

DIEs can be split into two general types: those that describe data, including data types, and those that describe functions and other executable code.

Describing Data and Types

Most programming languages have sophisticated descriptions of data. There are a number of built-in data types, pointers, various data structures, and usually ways of creating new data types. Since DWARF is intended to be used with a variety of languages, it abstracts out the basics and provides a representation that can be used for all supported languages. The primary types, built directly on the hardware, are the base types. Other data types are constructed as collections or compositions of these base types.

Base Types

Every programming language defines several basic scalar data types. For example, both C and Java define int and double. While Java provides a complete definition for these types, C only specifies some general characteristics, allowing the compiler to pick the actual specifications that best fit the target processor. Some languages, like Pascal, allow new base types to be defined, for example, an integer type which can hold integer values between 0 and 100. One compiler might implement this as a single byte, another might use a 16-bit integer, while a third might implement all integer types as 32-bit values no matter how they are defined.

With DWARF Version 1 and other debugging formats, the compiler and debugger are supposed to share a common understanding about whether an int is 16, 32, or even 64 bits. This becomes awkward when the same hardware can support different size integers, and such implicit assumptions make it difficult to have compatibility between different compilers or debuggers, or even between different versions of the same tools, since different compilers make different implementation decisions for the same target processor.

DWARF base types provide the lowest-level mapping between the simple data types and how they are implemented on the target machine's hardware. This makes the definition of int explicit for both Java and C, and allows different definitions to be used, possibly even within the same program.

Figure 2a shows the DIE which describes int on a typical 32-bit processor: a signed binary integer occupying four bytes. The attributes specify the name (int), an encoding (signed), and a size in bytes (4).

DW_TAG_base_type
    DW_AT_name = int
    DW_AT_byte_size = 4
    DW_AT_encoding = signed

Figure 2a. int base type on 32-bit processor

Figure 2b shows a similar definition of int on a 16-bit processor.

DW_TAG_base_type
    DW_AT_name = int
    DW_AT_byte_size = 2
    DW_AT_encoding = signed

Figure 2b. int base type on 16-bit processor

The base types allow a number of different encodings to be described, including address, binary integer, character, fixed point, floating point, and packed decimal. There is still a little ambiguity remaining: for example, in a processor which supports both 32-bit and 64-bit floating point values following the IEEE-754 standard, the encodings represented by "float" are different depending on the size of the value. The actual encoding for a floating point number is not specified; it is determined by the encoding that the hardware actually supports.

Figure 3 describes a 16-bit integer type named word that is stored in the upper 16 bits of a four-byte word. The bit size attribute specifies that the value is 16 bits wide, and a bit offset of zero says that the value starts at the high-order bit of the word. (This is a real-life example, taken from an implementation of Pascal that passed 16-bit integers in the top half of a word on the stack.)

DW_TAG_base_type
    DW_AT_name = word
    DW_AT_byte_size = 4
    DW_AT_bit_size = 16
    DW_AT_bit_offset = 0
    DW_AT_encoding = signed

Figure 3. 16-bit word type stored in the upper 16 bits of a 32-bit word

Figure 4 describes an integer variable named x. A named variable is described by a DIE which has a variety of attributes, one of which is a reference to a type definition: the DW_TAG_variable DIE for x gives its name and a type attribute which refers to the base type DIE for int.

<1>: DW_TAG_base_type
         DW_AT_name = int
         DW_AT_byte_size = 4
         DW_AT_encoding = signed
<2>: DW_TAG_variable
         DW_AT_name = x
         DW_AT_type = <1>

Figure 4. DWARF description of "int x"

Type Composition

DWARF uses the base types to construct other data type definitions by composition: a new type is created as a modification of another type. Figure 5 describes px, a pointer to an int, on our typical 32-bit machine. The DW_TAG_variable DIE for px gives its name and refers to a DW_TAG_pointer_type DIE. This DIE defines a pointer type: it specifies that its size is four bytes and, in turn, references the int base type. Each of these descriptions looks very much like the description of a simple variable. Once we have created a base type DIE for int, any variable in the same compilation can reference the same DIE.³

<1>: DW_TAG_variable
         DW_AT_name = px
         DW_AT_type = <2>
<2>: DW_TAG_pointer_type
         DW_AT_byte_size = 4
         DW_AT_type = <3>
<3>: DW_TAG_base_type
         DW_AT_name = int
         DW_AT_byte_size = 4
         DW_AT_encoding = signed

Figure 5. DWARF description of "int *px"

These type DIEs can be chained together to describe more complicated declarations, such as "const char **argv", which is described in Figure 6. (Declaring a variable const just says that you cannot modify the variable without using an explicit cast.)

<1>: DW_TAG_variable
         DW_AT_name = argv
         DW_AT_type = <2>
<2>: DW_TAG_pointer_type
         DW_AT_byte_size = 4
         DW_AT_type = <3>
<3>: DW_TAG_pointer_type
         DW_AT_byte_size = 4
         DW_AT_type = <4>
<4>: DW_TAG_const_type
         DW_AT_type = <5>
<5>: DW_TAG_base_type
         DW_AT_name = char
         DW_AT_byte_size = 1
         DW_AT_encoding = unsigned

Figure 6. DWARF description of "const char **argv"

Arrays

Array types are described by a DIE which gives the type of the array elements. The index for the array is represented by a subrange type that gives the lower and upper bounds of each dimension. This allows DWARF to describe both C-style arrays, which always have zero as the lowest index, and arrays whose indices can have any value for the low and high bounds. The array DIE also has an attribute which defines whether the data is stored in row major order (as in C) or column major order (as in Fortran).

Structures, Classes, Unions, and Interfaces

Most languages allow the programmer to group data together into structures (called struct in C and C++, and record in Pascal). Each of the components of the structure generally has a unique name and may have a different type, and each occupies its own space. C and C++ have the union, and Pascal has the variant record, which are similar to a structure except that the components occupy the same memory locations. The Java interface has a subset of the properties of a C++ class, since it may only have abstract methods and constant data members. True to its heritage, DWARF uses the C/C++ terminology and has DIEs which describe struct, class, union, and interface types. We'll describe the class DIE here; the others have essentially the same organization.

The DIE for a class is the parent of the DIEs which describe each of the class's members. Each class has a name and possibly other attributes. If the size of an instance is known at compile time, then it will have a byte size attribute. C++ allows the programmer to specify whether a member is public, private, or protected; these are described with the accessibility attribute. C and C++ also allow bit fields as class members. These are described with a bit offset from the start of the class instance to the leftmost bit of the bit field, and a bit size that says how many bits the member occupies.

Variables

Variables are generally pretty simple: they have a name which represents a chunk of memory (or a register) that can contain some kind of a value. The kind of values that the variable can contain, as well as restrictions on how it can be modified (e.g., whether it is const), are described by the type of the variable. What distinguishes variables is where the variable is stored and its scope.

Besides simple variables, DWARF describes formal parameters and constants. A formal parameter represents values passed to a function. A constant is used to describe languages that have true named constants as part of the language. (C does not have constants as part of the language.)

In the simplest of cases, a variable is stored in memory and has a fixed address.⁴ Most variables have a location attribute that describes where the variable is stored, telling the debugger where to find it.

The scope of a variable defines where the variable is known within the program and is, to some degree, determined by where the variable is declared. In C, variables declared outside a function have either global or file scope; this allows different variables with the same name to be defined in different files without conflicting. Variables declared within a function or block have function or block scope. DWARF documents where the variable is declared in the source file with a (file, line, column) triplet.

Some languages, like C or C++ (but not Pascal), allow a variable to be declared without defining it. This implies that there should be a real definition of the variable somewhere else, hopefully somewhere that the compiler or debugger can find.

Notes

1. The author once wrote an extensive document describing the stabs generated by Sun Microsystems' compilers; it was never widely distributed.
2. The name DWARF is something of a pun, chosen tongue firmly in cheek, since it was developed along with the ELF object file format. The name may be an acronym for "Debugging With Attributed Record Formats", although this is not mentioned in any of the DWARF standards.
3. Some compilers define a common set of type definitions at the start of every compilation unit; others only generate the definitions for the types which are actually referenced in the program.
4. Well, maybe not a fixed address, but one that is a fixed offset from where the executable is loaded; the loader relocates references to addresses within an executable.

Michael Eager is Principal Consultant at Eager Consulting (), specializing in development tools for embedded systems. He was a member of PLSIG's DWARF standardization committee and has been Chair of the DWARF Standards Committee since 1999. Michael can be contacted at eager@eagercon.com. © Eager Consulting, 2006, 2007.
scribe the const or volatile attributes. Although each language has its own terminology (C++ calls the components of a class mem <3>: DW_TAG_pointer_type bers while Pascal calls them DW_AT_byte_size = 4 fields) the underlying organiza DW_AT_type = <4> tion can be described in DWARF. as well as arrays in Pascal or Ada. or in row major order (as in C or C++). describing a variable declaration provides a Each class has a name and possi description of the variable without actually scribe more complex data types. C++ reference type. class in C++. DWARF splits variables into three cate gories: constants. Figure 5. as in Figure 4.. It also allows different functions or compilations to reference the same variable. the source line information and the lo cation attributes are not shown. Figure 8a shows the source for strndup. DWARF description of variables a. DWARF doesn't totyped and gives the low and high PC val describe the calling conventions for a func ues for the routine. Figure 8b lists the DWARF generated for this file. and variable c is at offset –12 within the current function's stack frame. Other variables may require some what more complicated computations to lo cate the data. Variable a has a fixed location in memory. and accesses to the pro cessor's memory or registers. The DIEs for these variables follow the parameter DIEs. after all func tions. the location at tribute is the offset. In Figure 8b. that is.many variables. or a list of memory ranges if the function does not occupy a contiguous set of memory addresses. DIE <5> describes the function strndup. while displaying its value as an unsigned integer. and an attribute which indicates whether the subprogram is external. Although this great flexibility is seldom used in practice. 3: { The frame base attribute is a lo 4: register int b. Introduction to the DWARF Debugging Format 6 Michael J. This DIE has a name. such as a structure where some data are stored in memory and some are stored in registers. 
Most variables have a location attribute that describes where the variable is stored. In the simplest of cases, a variable is stored in memory and has a fixed address. Other variables may require somewhat more complicated computations to locate the data: a local variable may be allocated on the stack, and locating it may be as simple as adding a fixed offset to a frame pointer; the variable may be stored in a register; variables may be dynamically allocated, and locating them requires some (usually simple) computation. A variable that is a member of a C++ class may require more complex computations to determine the location of the base class within a derived class.

Figure 7 shows the DIEs for the three variables named a, b, and c in fig7.c:

    fig7.c:
    1: int a;
    2: void foo()
    3: {
    4:    register int b;
    5:    int c;
    6: }

    Figure 7. DWARF description of variables a, b, and c.
    <1>: DW_TAG_subprogram
             DW_AT_name = foo
    <2>:     DW_TAG_variable
                 DW_AT_name = b
                 DW_AT_type = <4>
                 DW_AT_location = (DW_OP_reg0)
    <3>:     DW_TAG_variable
                 DW_AT_name = c
                 DW_AT_type = <4>
                 DW_AT_location = (DW_OP_fbreg: -12)
    <4>: DW_TAG_base_type
             DW_AT_name = int
             DW_AT_byte_size = 4
             DW_AT_encoding = signed
    <5>: DW_TAG_variable
             DW_AT_name = a
             DW_AT_type = <4>
             DW_AT_external = 1
             DW_AT_location = (DW_OP_addr: 0)

Variable a has a fixed location in memory, variable b is in register 0, and variable c is at offset -12 within the current function's stack frame. Although a was declared first in the source, the DIE to describe it is generated later, after all functions. In an object file, the location attribute for a contains address 0; the actual location will be filled in by the linker, along with an appropriate relocation table entry, so that at run time the location attribute contains the actual memory address. (The loader relocates references to addresses within an executable.) As in previous examples, the source line information is not shown.

Location Expressions

DWARF provides a very general scheme to describe how to locate the data represented by a variable. A DWARF location expression contains a sequence of operations which tell a debugger how to locate the data. The operations are evaluated by a simple stack machine, with a wide range of arithmetic operations, tests and branches within the expression, accesses to the processor's memory or registers, and calls to evaluate other location expressions. There are even operations used to describe data which is split up and stored in different locations, such as a structure where some data are stored in memory and some are stored in registers. Although this great flexibility is seldom used in practice, the location expression should allow the location of a variable's data to be described no matter how complex the language definition or how clever the compiler's optimizations.

Describing Executable Code

Functions and Subprograms

DWARF treats functions that return values and subroutines that do not as variations of the same thing, and describes both with a subprogram DIE. This DIE has a name, a source location triplet, and an attribute which indicates whether the subprogram is external, that is, visible outside the current compilation. A subprogram DIE gives the low and high memory addresses that the subprogram occupies, if it is contiguous, or a list of memory ranges if the function does not occupy a contiguous set of memory addresses. The low PC address is assumed to be the entry point for the routine unless another one is explicitly specified. The value that a function returns is given by the type attribute; subroutines that do not return values (like C void functions) do not have this attribute. DWARF doesn't describe the calling conventions for a function: that is defined in the Application Binary Interface (ABI) for the particular architecture.

There may be attributes that help a debugger to locate the subprogram's data or to find the current subprogram's caller. The frame base attribute is a location expression that computes the address of the stack frame for the function; the return address attribute is a location expression that specifies where the address of the caller is stored. These are useful since some of the most common optimizations that a compiler might do are to eliminate the instructions that explicitly save the return address or frame pointer.

The parameters that may be passed to a function are represented by formal parameter DIEs, which are in the same order as the argument list for the function, although DIEs to define types used by the parameters may be interspersed. If a parameter is optional or has a default value, these are represented by attributes; parameters such as Ada in/out parameters have a variable parameter attribute. A function may also define variables that are local to it; the DIEs for these variables follow the parameter DIEs. Many languages allow nesting of lexical blocks. These are represented by lexical block DIEs, which in turn may own variable DIEs or nested lexical block DIEs.

Here is a somewhat longer example. Figure 8a shows the source for strndup.c, a function in gcc that duplicates a string. Figure 8b lists the DWARF generated for this file.

    Figure 8a. Source for strndup.c.
     1: #include "ansidecl.h"
     2: #include <stddef.h>
     3:
     4: extern size_t strlen (const char*);
     5: extern PTR malloc (size_t);
     6: extern PTR memcpy (PTR, const PTR, size_t);
     7:
     8: char *
     9: strndup (const char *s, size_t n)
    10: {
    11:   char *result;
    12:   size_t len = strlen (s);
    13:
    14:   if (n < len)
    15:     len = n;
    16:
    17:   result = (char *) malloc (len + 1);
    18:   if (!result)
    19:     return 0;
    20:
    21:   result[len] = '\0';
    22:   return (char *) memcpy (result, s, len);
    23: }
    Figure 8b. DWARF description for strndup.c.
    <1>: DW_TAG_base_type
             DW_AT_name = int
             DW_AT_byte_size = 4
             DW_AT_encoding = signed
    <2>: DW_TAG_typedef
             DW_AT_name = size_t
             DW_AT_type = <3>
    <3>: DW_TAG_base_type
             DW_AT_name = unsigned int
             DW_AT_byte_size = 4
             DW_AT_encoding = unsigned
    <4>: DW_TAG_base_type
             DW_AT_name = long int
             DW_AT_byte_size = 4
             DW_AT_encoding = signed
    <5>: DW_TAG_subprogram
             DW_AT_sibling = <10>
             DW_AT_external = 1
             DW_AT_name = strndup
             DW_AT_prototyped = 1
             DW_AT_type = <10>
             DW_AT_low_pc = 0
             DW_AT_high_pc = 0x7b
    <6>:     DW_TAG_formal_parameter
                 DW_AT_name = s
                 DW_AT_type = <12>
                 DW_AT_location = (DW_OP_fbreg: 0)
    <7>:     DW_TAG_formal_parameter
                 DW_AT_name = n
                 DW_AT_type = <2>
                 DW_AT_location = (DW_OP_fbreg: 4)
    <8>:     DW_TAG_variable
                 DW_AT_name = result
                 DW_AT_type = <10>
                 DW_AT_location = (DW_OP_fbreg: -28)
    <9>:     DW_TAG_variable
                 DW_AT_name = len
                 DW_AT_type = <2>
                 DW_AT_location = (DW_OP_fbreg: -24)
    <10>: DW_TAG_pointer_type
             DW_AT_byte_size = 4
             DW_AT_type = <11>
    <11>: DW_TAG_base_type
             DW_AT_name = char
             DW_AT_byte_size = 1
             DW_AT_encoding = signed char
    <12>: DW_TAG_pointer_type
             DW_AT_byte_size = 4
             DW_AT_type = <13>
    <13>: DW_TAG_const_type
             DW_AT_type = <11>

DIE <2> shows the definition of size_t as a typedef of unsigned int (DIE <3>). This allows a debugger to display the type of formal argument n as a size_t, while displaying its value as an unsigned integer. DIE <5> describes the function strndup: it is external, prototyped, and gives the low and high PC values for the routine. The function returns a pointer to char, described in DIE <10>; since that DIE is generated later, DIE <5> also has a sibling pointer to DIE <10>. The formal parameters and local variables of the routine are described in DIEs <6> to <9>, children of the subprogram DIE, in the same order as in the source.

Compilation Unit

Most interesting programs consist of more than a single file. Each source file that makes up a program is compiled independently and then linked together with system libraries to make up the program. DWARF calls each separately compiled source file a compilation unit.

The DWARF data for each compilation unit starts with a Compilation Unit DIE. This DIE contains general information about the compilation, including the directory and name of the source file, the programming language used, a string which identifies the producer of the DWARF data, and offsets into the DWARF data sections to help locate the line number and macro information. If the compilation unit is contiguous (i.e., it is loaded into memory in one piece), then there are values for the low and high memory addresses for the unit. If it is not contiguous, then a list of the memory addresses that the code occupies is provided by the compiler and linker. This makes it easier for a debugger to identify which compilation unit created the code at a particular memory address.

The Compilation Unit DIE is the parent of all of the DIEs that describe the compilation unit. Generally, the first DIEs will describe data types, followed by global data, then the functions that make up the source file. The DIEs for variables and functions are in the same order in which they appear in the source file.

Data Encoding

Conceptually, the DWARF data that describes a program is a tree. Each DIE has a type (called its TAG) and a number of attributes; each attribute is represented by an attribute type and a value. A DIE may have a sibling and several DIEs that it contains. Unfortunately, stored naively, this is not a very dense encoding, and the DWARF data is unwieldy. DWARF Versions 2 and 3 offer several ways to reduce the size of the data which needs to be saved with the object file.

The first is to "flatten" the tree by saving it in prefix order. Each type of DIE is defined to either have children or not. If a DIE cannot have children, the next DIE is its sibling. If a DIE can have children, then the next DIE is its first child, and the remaining children are represented as the siblings of this first child. This way, links to the sibling or child DIEs can be eliminated. If the compiler writer thinks that it might be useful to be able to jump from one DIE to its sibling without stepping through each of its children DIEs (for example, to jump to the next function in a compilation), then a sibling attribute can be added to the DIE.

A second scheme to compress the data is to use abbreviations. Instead of storing the value of the TAG of the DIE and the attribute-value pairs, only an index into a table of abbreviations is stored, followed by the attribute values. Each abbreviation gives the tag value, a flag indicating whether the DIE has children, and a list of attributes with the type of value each expects. Figure 9 shows the abbreviation for the formal parameter DIE used in Figure 8b, and how DIE <6> is actually encoded. (The encoded entry also includes the file and line values, which are not shown in Figure 8b.) This is a significant reduction in the amount of data that needs to be saved, at some expense in added complexity.

    Figure 9. Abbreviation entry and encoded form.
    Abbrev 5: DW_TAG_formal_parameter   [no children]
        DW_AT_name       DW_FORM_string
        DW_AT_decl_file  DW_FORM_data1
        DW_AT_decl_line  DW_FORM_data1
        DW_AT_type       DW_FORM_ref4
        DW_AT_location   DW_FORM_block1

    Encoded DIE <6>:
        abbreviation code 5, name "s" (with terminating NUL),
        file, line, type DIE offset, location (fbreg + 0)

Many compilers generate the same abbreviation table, and the same base type DIEs, for every compilation. Less commonly used are features of DWARF Version 3 which allow references from one compilation unit to the DWARF data stored in another compilation unit or in a shared library, so that common data can be saved in one place and referenced by each compilation unit, rather than being duplicated in each.
Other DWARF Data

Line Number Table

The DWARF line table contains the mapping between the source lines (for the executable parts of a program) and the memory addresses that contain the code that corresponds to the source. In the simplest form, this can be looked at as a matrix with one column containing the memory addresses and another column containing the source triplet (file, line, and column). If you want to set a breakpoint at a particular line, the table gives you the memory address where to store the breakpoint instruction. Conversely, if your program has a fault (say, using a bad pointer) at some location in memory, you can look for the source line that is closest to that memory address.

DWARF has extended this with added columns to convey additional information about a program. A flag marks whether an address is the beginning instruction of a statement or the beginning of a basic block. It may be useful to identify the end of the prolog or the start of the epilog of a function, so that the debugger can stop after all of the arguments to a function have been loaded, or before the function returns. Some processors can execute more than one instruction set; there is another column that indicates which instruction set is in use at the specified address.

As a compiler optimizes the program, it may move instructions around or remove them. The code for a given source statement may not be stored as a contiguous sequence of machine instructions, but may be scattered and interleaved with the instructions for other nearby source statements.

As you might imagine, if this table were stored with one row for each machine instruction, it would be huge. DWARF compresses the data by encoding it as a sequence of instructions called a line number program. These instructions are interpreted by a simple finite state machine to recreate the complete line number table. The finite state machine is initialized with a set of default values. Each row in the table is generated by executing one or more opcodes of the line number program. The opcodes are quite simple: add a value to the machine address, increment or decrement the source line number, set the column number, set a flag, and so on. A set of special opcodes combine the most common operations (incrementing the memory address and incrementing or decrementing the source line number) into a single opcode. Generally, if a row of the line number table would have the same source triplet as the previous row, then no instructions are generated for it.

Figure 10 lists the line number table for strndup.c. Notice that only the machine addresses that represent the beginning instruction of a statement are stored, and that the compiler did not identify the basic blocks in this code. This table is encoded in just 31 bytes in the line number program.

    Figure 10. Line Number Table for strndup.c.
    File 0: strndup.c
    File 1: stddef.h

    Address  File  Line  Col  Stmt  Block  End  Prolog  Epilog  ISA
    0x0      0     42    0    yes   no     no   no      no      0
    0x9      0     44    0    yes   no     no   no      no      0
    0x1a     0     44    0    yes   no     no   no      no      0
    0x24     0     46    0    yes   no     no   no      no      0
    0x2c     0     47    0    yes   no     no   no      no      0
    0x32     0     49    0    yes   no     no   no      no      0
    0x41     0     50    0    yes   no     no   no      no      0
    0x47     0     51    0    yes   no     no   no      no      0
    0x50     0     53    0    yes   no     no   no      no      0
    0x59     0     54    0    yes   no     no   no      no      0
    0x6a     0     54    0    yes   no     no   no      no      0
    0x73     0     55    0    yes   no     no   no      no      0
    0x7b     0     56    0    yes   no     yes  no      no      0

Macro Information

Most debuggers have a very difficult time displaying and debugging code which has macros. The user sees the original source file, with the macros, while the code corresponds to whatever the macros generated. DWARF includes a description of the macros defined in the program. This is quite rudimentary information, but can be used by a debugger to display the values for a macro or possibly translate the macro into the corresponding source language.
Call Frame Information

Every processor has a certain way of calling functions and passing arguments, usually defined in the ABI. In the simplest case, this is the same for each function, and the debugger knows exactly how to find the argument values and the return address for the function. For some processors, there may be different calling sequences depending on how the function is written, for example, if there are more than a certain number of arguments. There may be different calling sequences depending on operating systems. Compilers will also try to optimize the calling sequence to make code both smaller and faster. One common optimization is for a simple function which doesn't call any others (a leaf function) to use its caller's stack frame instead of creating its own. Another optimization may be to eliminate a register which points to the current call frame. Some registers may be preserved across the call while others are not.

While it may be possible for the debugger to puzzle out all the possible permutations in calling sequence or optimizations, it is both tedious and error-prone. A small change in the optimizations and the debugger may no longer be able to walk the stack to the calling function. The DWARF Call Frame Information (CFI) provides the debugger with enough information about how a function is called so that it can locate each of the arguments to the function, locate the current call frame, and locate the call frame for the calling function. This information is used by the debugger to "unwind the stack," locating the previous function, the location where the function was called, and the values passed.

Like the line number table, the CFI is encoded as a sequence of instructions that are interpreted to generate a table. There is one row in this table for each address that contains code. The first column contains the machine address, while the subsequent columns contain the values of the machine registers when the instruction at that address is executed. Like the line number table, if this table were actually created it would be huge. Luckily, very little changes between two machine instructions, so the CFI encoding is quite compact.

ELF Sections

While DWARF is defined in a way that allows it to be used with any object file format, it's most often used with ELF. Each of the different kinds of DWARF data is stored in its own section; the names of these sections all start with ".debug_". For improved efficiency, most references to DWARF data use an offset from the start of the data for the current compilation. This avoids the need to relocate the debugging data, which speeds up program loading and debugging.

The ELF sections and their contents are:

    .debug_abbrev    Abbreviations used in the .debug_info section
    .debug_aranges   A mapping between memory addresses and compilation units
    .debug_frame     Call Frame Information
    .debug_info      The core DWARF data containing DIEs
    .debug_line      Line Number Program
    .debug_loc       Location lists
    .debug_macinfo   Macro descriptions
    .debug_pubnames  A lookup table for global objects and functions
    .debug_pubtypes  A lookup table for global types
    .debug_ranges    Address ranges referenced by DIEs
    .debug_str       String table used by .debug_info

Summary

So there you have it: DWARF in a nutshell. Well, not quite a nutshell. The basic concepts for the DWARF debug information are straightforward. A program is described as a tree with nodes representing the various functions, data, and types in the source in a compact language- and machine-independent fashion. The line table provides the mapping between the executable instructions and the source that generated them. The CFI describes how to unwind the stack. There is quite a bit of subtlety in DWARF as well, given that it needs to express the many different nuances for a wide range of programming languages and different machine architectures. Future directions for DWARF are to improve the description of optimized code so that debuggers can better navigate the code which advanced compiler optimizations generate.

The complete DWARF Version 3 Standard is available for download without cost at the DWARF website (dwarf.freestandards.org). There is also a mailing list for questions and discussion about DWARF; instructions on registering for the mailing list are on the website.

Acknowledgements

I want to thank Chris Quenelle of Sun Microsystems and Ron Brender, formerly of HP, for their comments and advice about this paper. Thanks also to Susan Heimlich for her many editorial comments.
Generating DWARF with GCC

It's very simple to generate DWARF with gcc: simply specify the -g option to generate debugging information. The ELF sections can be displayed using objdump with the -h option.

    $ gcc -g -c strndup.c
    $ objdump -h strndup.o

    strndup.o:     file format elf32-i386

    Sections:
    Idx Name             Size      VMA       LMA       File off  Algn
      0 .text            0000007b  00000000  00000000  00000034  2**2
                         CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
      1 .data            00000000  00000000  00000000  000000b0  2**2
                         CONTENTS, ALLOC, LOAD, DATA
      2 .bss             00000000  00000000  00000000  000000b0  2**2
                         ALLOC
      3 .debug_abbrev    00000073  00000000  00000000  000000b0  2**0
                         CONTENTS, READONLY, DEBUGGING
      4 .debug_info      00000118  00000000  00000000  00000123  2**0
                         CONTENTS, RELOC, READONLY, DEBUGGING
      5 .debug_line      00000080  00000000  00000000  0000023b  2**0
                         CONTENTS, RELOC, READONLY, DEBUGGING
      6 .debug_frame     00000034  00000000  00000000  000002bc  2**2
                         CONTENTS, RELOC, READONLY, DEBUGGING
      7 .debug_loc       0000002c  00000000  00000000  000002f0  2**0
                         CONTENTS, READONLY, DEBUGGING
      8 .debug_pubnames  0000001e  00000000  00000000  0000031c  2**0
                         CONTENTS, RELOC, READONLY, DEBUGGING
      9 .debug_aranges   00000020  00000000  00000000  0000033a  2**0
                         CONTENTS, RELOC, READONLY, DEBUGGING
     10 .comment         0000002a  00000000  00000000  0000035a  2**0
                         CONTENTS, READONLY
     11 .note.GNU-stack  00000000  00000000  00000000  00000384  2**0
                         CONTENTS, READONLY

Printing DWARF using Readelf

Readelf can display and decode the DWARF data in an object or executable file. The options are:

    -w              display all DWARF sections
    -w[liaprmfFso]  display specific sections:
        l   line table
        i   debug info
        a   abbreviation table
        p   public names
        r   ranges
        m   macro table
        f   debug frame (encoded)
        F   debug frame (decoded)
        s   string table
        o   location lists

The DWARF listing for all but the smallest programs is quite voluminous, so it would be a good idea to direct readelf's output to a file and then browse the file with less or an editor such as vi.
A quick and dirty vehicle entry/start sketch for Arduino

Quick and dirty sketch for the Elechouse PN532 board I have, in I2C mode, for starting the car. Or anything else you want to do really. Code mostly yoinked from the elechouse PN532 examples folder, ringUid references yoinked from John's. Suggestions, additions, changes?

#include <EEPROM.h>

/**************************************************************************/
/*!
    This example will attempt to connect to an ISO14443A
    card or tag and retrieve some basic information about it
    that can be used to determine what type of card it is.

    Note that you need the baud rate to be 115200 because we need to
    print out the data and read from the card at the same time!
*/
/**************************************************************************/

int triggerPin1 = 2;
int triggerPin2 = 3;
int triggerPin3 = 4;

// choose SPI or I2C or HSU
#if 0
  #include <SPI.h>
  #include <PN532_SPI.h>
  #include "PN532.h"

  PN532SPI pn532spi(SPI, 10);
  PN532 nfc(pn532spi);
#elif 0
  #include <PN532_HSU.h>
  #include <PN532.h>

  PN532_HSU pn532hsu(Serial1);
  PN532 nfc(pn532hsu);
#else
  #include <Wire.h>
  #include <PN532_I2C.h>
  #include <PN532.h>

  PN532_I2C pn532i2c(Wire);
  PN532 nfc(pn532i2c);
#endif

void setup(void) {
  pinMode(triggerPin1, OUTPUT);
  pinMode(triggerPin2, OUTPUT);
  pinMode(triggerPin3, OUTPUT);

  Serial.begin(115200);
  Serial.println("Hello!");

  nfc.begin();

  uint32_t versiondata = nfc.getFirmwareVersion();
  if (! versiondata) {
    Serial.print(versiondata);
    Serial.print("PN53x key scanner board not online");
    while (1); // halt
  }

  // Got ok data, print it out!
  Serial.print("Found key scanner board PN5"); Serial.println((versiondata>>24) & 0xFF, HEX);
  Serial.print("Firmware ver. "); Serial.print((versiondata>>16) & 0xFF, DEC);
  Serial.print('.'); Serial.println((versiondata>>8) & 0xFF, DEC);

  // Set the max number of retry attempts to read from a card.
  // This prevents us from waiting forever for a card, which is
  // the default behaviour of the PN532.
  nfc.setPassiveActivationRetries(0xFF);

  // configure board to read RFID tags
  nfc.SAMConfig();

  Serial.println("Waiting for a valid card");
}

void loop(void) {
  boolean success;
  uint8_t uid[] = { 0, 0, 0, 0, 0, 0, 0 }; // buffer to store the returned UID
  uint8_t uidLength;                       // UID length (4 or 7 bytes depending on card type)

  // Wait for a ring/card; times out after the retry count set above
  success = nfc.readPassiveTargetID(PN532_MIFARE_ISO14443A, &uid[0], &uidLength);

  if (success) {
    // Display some basic information about the card
    Serial.println("Found an ISO14443A card");

    // Build a hex string from the UID bytes so it can be compared below
    // (as in the elechouse examples)
    String ringUid = "";
    for (uint8_t i = 0; i < uidLength; i++) {
      ringUid += String(uid[i], HEX);
    }

    if (ringUid == "xxxxxxxxxxx" || ringUid == "xxxxxxxxxxxx") { // put your authorised UIDs in here
      Serial.println("PERMISSION GRANTED, DOOR UNLOCKED");
      digitalWrite(triggerPin1, HIGH); // triggers unlock output for central locking
      delay(1000);                     // waits for a second
      digitalWrite(triggerPin1, LOW);  // removes triggered output
      digitalWrite(triggerPin3, LOW);  // removes triggered alarm output
    }
    else if (ringUid == "xxxxxxxxxxxx") { // put your authorised UID in here
      Serial.println("PERMISSION GRANTED, SYSTEMS TRIGGER");
      digitalWrite(triggerPin2, HIGH); // sets latching relay trigger, i.e. ignition mode in car to allow
                                       // button start - button start must disable reads until button
                                       // turns engine off again
      delay(1000);                     // waits for a second
      digitalWrite(triggerPin2, LOW);  // removes latching relay trigger
      digitalWrite(triggerPin3, LOW);  // removes triggered alarm output
    }
    else {
      Serial.println("I'M AFRAID I CAN'T ALLOW THAT DAVE");
      digitalWrite(triggerPin3, HIGH); // trigger latching relay to alarm system
      delay(1000);                     // waits for a second
    }
  }
}

Hi! Could you tell me more about your relay setup? I see how it works for the ignition, but how do you do the actual central start of the car? And how will you power your Arduino - from the 12 V battery with a buck regulator?

Hi mate, the Arduino will be powered by a small transistor regulator at 5 volts from the main car supply. Switching for the ignition is via latching relays of the type used in remote car start kits; they can be purchased separately in high-amperage-rated units. Cranking the engine at present will be done via a pushbutton under the dashboard to trip a high-current relay.
This could be added to another relay driven from the Arduino (and do away with the pushbutton) at a later date, but that will require a major re-think of the code to allow proper state switching.

Okay, that is a really cool project :D For my first car starter prototype I thought about a reader hidden in the gear knob (with a clear plastic 3D-printed housing), or a start-up module that simply clips on while driving (hands-free kit style). But that project will have to wait for lack of time ^^' - maybe it can inspire someone though!

That's certainly an interesting idea you've got there! So the reader unit would be inside the gear knob and car usage would be 'allowed' when the ring is present on the knob? For mine, because there is still the chance that you could switch off the car by accident, I intend to place the reader in an out-of-the-way position where I won't normally be putting my hands. I'm looking at possibly having two reader units, one on the door at the handle position and one up on the dashboard. Though the reader units and the Arduino modules are cheap enough that I could simply go with two complete units, one for the door and one for the vehicle ignition.

The gear knob reader would just allow the start of the car; a button is used for the starter. But for more security I will use the OBD2 connection to know the RPM of the car, so you could stop the car only if it is in the stopped state. And of course I use a bistable relay so the car doesn't stop in case of an electronics problem. Why not use an MSP430 chip? It's compatible with the PN532 and RC522 Arduino libraries via Energia. It's the cheapest solution and an ultra-low-power technology :) You could put a wake-up button in the handle and use a little battery too (fewer wires, so easier to connect).

Interesting, I'll have to look into that one! Ok, I finally got around to doing something more with my NFC Ring car starter.
This is just a short proof-of-concept video showing the changing states via an RGB LED. I'll work on re-writing the code next so that I can do a little more with just a single ring inlay, and close the relay for the starter motor properly - at the moment as it's set up it pulses; I'll alter it so that a single inlay can change mode (on/off) and then start the engine when held. Here are some pictures, including the ugliness of my wiring. :-) Your ring and your reader work really well. I have a PN532 reader from AliExpress (a clone for $14) and the read is not effective (I have to rub the ring against the antenna to try to read). But maybe it's the ring tag. What is the maximum current in your relay? I use a 30A relay because I don't know the instantaneous current at contact closure - maybe you know that? Hi @MrStein, the latching relay is 30V/3A for the contacts; it's triggering the vehicle's relay-switched ignition state so it should be fine for that. The momentary relay is 30V/5A, triggering the vehicle starter relay. It's bigger than the latching relay, but it's the smallest I had in my parts bin! I have noticed that my PN532 sometimes doesn't like to read one of my inlays and almost requires rubbing at times, but the other inlay works fine at a distance. - johnyma22 NFC Ring Team This is awesome, keeping a close eye on this! :) lol, thanks for the vote of confidence there! I've modified the hardware side of things a little bit since I took those photos, adding a transistor to trigger the starter relay (the relay has a 12 volt coil and the Arduino wasn't coping with that directly from an output) and a couple of filter capacitors on the in and out sides of the regulator to smooth things out a little - there were mild voltage fluctuations. Nothing major, but it's always a good idea to do that and keep things clean.
The starter output from D2 is now running through one side of the latching relay on to the trigger transistor for the momentary relay coil, after I decided that the best way to lock out engine cranking was in hardware. This way the starter can only be cranked once the vehicle ignition is switched on via the latching relay. D2 can still be triggered if software allows it, but it won't go any further than the latching relay if that relay is in the off state. At the moment I'm investigating methods to trigger D2 and keep it high while the ring is present until the engine is running. This is getting a little trickier than I'd first planned and might end up needing an extra input from the vehicle to exit the engine cranking portion of the program when normal running is achieved. For those who are interested, in the main unit I'm using:

- Panasonic DS2E-SL2-DC5V latching relay
- LZ-12H-K momentary relay
- MPSA42 NPN transistor
- L7805CV regulator
- 10uF electrolytic capacitors, x2
- Microduino equivalent to Arduino Pro Mini 5V
- PN532 NFC board

This is basically all stuff that I had floating around in my parts bin for random projects; you can substitute things you have as long as they're equivalent. The latching relays I bought for around $10 each from Mouser, everything else is a leftover from something else! The Arduino portion of this could also be pretty much any Arduino-compatible board; I'm using the Microduino because I had them and they were small and convenient. The Elechouse PN532 reader is in my opinion non-negotiable. They're the only board I've had real success with reading the rings; most others are far too picky and have trouble reading the rings. Either way, this just shows that you can build something usable out of very little! Get cracking, people, there are projects waiting for you to build them! Good job!
Don't forget some flyback diodes ("free wheel diode" in French, I don't know how to say that in English ^^') to discharge the magnetic energy of the inductance (relay coils). Oh yes, good catch @MrStein, I'll do that now. I had totally forgotten that one! (Usually it was me telling the apprentices to do that with relays!) I usually refer to it as a clamping diode; flyback or freewheel diode is perfectly correct as well though. So, I've progressed a bit in the last 5 days - my code is still just as awful but I've got the prototype unit to the point where I've test installed it. Following are some photos and a short video of the starter in action! First I removed the housing for the steering column and located the ignition barrel (I've done all this for other things previously). Next, look at the back of the barrel and find the wires coming out; this bundle is what you need to link into. I've hooked in after the connector into the main loom; I recommend you hook in before that though. Tapping into the loom is fairly simple. This soldering is not my best, but it will hold for now. Upside down with a gas soldering iron in a cramped space is difficult. Tape up the joins as you go, as neatly as possible. You really don't want to short out against other wires or metal parts of the car; it can cause all sorts of issues. And here we are, this is the connector for the points I'm now going to be controlling from the car starter unit. The wire doesn't need to be excessively heavy because power is mostly switched via relays in this vehicle already. And here is a short video of the starter in action!
Everyone should note that the steering lock is currently intact and so you still require the key in order to drive anywhere - I'm looking at options for removal; I'll see if I can retrofit an electric steering lock from something else and then remove the entire ignition barrel permanently. On my other car this won't be an issue as it has never had a steering lock. I've got a minor code update: I've re-written it to reflect what it's actually doing and made it so that a single inlay can switch through all three states - on, cranking, off. It's still extremely rough and ready, and if anyone else wants to do it better then go for it, I'd welcome the input.

int triggerPin1 = 2;
int triggerPin2 = 3;
int triggerPin3 = 4;
int carRunState = 0;

// (setup and ring-reading code as in the earlier sketch - the scanned UID ends up in ringUid)

if (ringUid == "4db5e2b62880" || ringUid == "48fc02b62880" || ringUid == "4c9232b62880") {
  Serial.println("Key accepted!");
}
if ((ringUid == "4db5e2b62880" || ringUid == "48fc02b62880" || ringUid == "4c9232b62880") && (carRunState == 0)) {
  Serial.println("PERMISSION GRANTED, SYSTEMS ON");
  digitalWrite(triggerPin3, HIGH); // sets latching relay trigger on for ignition mode in car to allow start
  delay(200);                      // waits briefly
  digitalWrite(triggerPin3, LOW);  // removes latching relay trigger
  carRunState++;
}
else if ((ringUid == "4db5e2b62880" || ringUid == "48fc02b62880" || ringUid == "4c9232b62880") && (carRunState == 1)) {
  Serial.println("ENGINE CRANKING");
  digitalWrite(triggerPin1, HIGH); // triggers output for engine starting
  delay(2000);                     // cranks for two seconds
  digitalWrite(triggerPin1, LOW);  // removes triggered output
  Serial.println("ENGINE CRANKING COMPLETE");
  carRunState++;
}
else if ((ringUid == "4db5e2b62880" || ringUid == "48fc02b62880" || ringUid == "4c9232b62880") && (carRunState > 1)) {
  Serial.println("SYSTEM OFF, SLEEP TIME");
  digitalWrite(triggerPin2, HIGH); // trigger latching relay to reset for vehicle OFF
  delay(200);                      // waits briefly
  digitalWrite(triggerPin2, LOW);  // removes latching relay reset trigger
  delay(3000);
  carRunState = 0;
}
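As an editorial aside (not from the thread itself): the three repeated UID comparisons and the manual state stepping invite a small refactor. Here is a minimal plain-C sketch of how they could be factored into helpers - the UID list is a placeholder, and the function names are mine:

```c
#include <string.h>

/* Placeholder list of authorised ring UIDs - substitute your own. */
static const char *authorised_uids[] = {
    "4db5e2b62880", "48fc02b62880", "4c9232b62880"
};
static const int n_authorised = 3;

/* Returns 1 if the scanned UID matches any authorised ring, else 0. */
int is_authorised(const char *ring_uid) {
    for (int i = 0; i < n_authorised; i++) {
        if (strcmp(ring_uid, authorised_uids[i]) == 0) {
            return 1;
        }
    }
    return 0;
}

/* Steps the run state: 0 (off) -> 1 (ignition on) -> 2 (cranked) -> back to 0. */
int next_run_state(int car_run_state) {
    return (car_run_state >= 2) ? 0 : car_run_state + 1;
}
```

With helpers like these, each branch of the if/else chain only needs a single is_authorised(ringUid) check and one carRunState = next_run_state(carRunState) call, so adding a new ring means touching only the list.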
13 September 2012 15:44 [Source: ICIS news] TORONTO (ICIS)--Canada-based chemicals producers ran their plants at an average capacity utilisation rate of 80.1% in the second quarter, compared with 79.9% in the first, a statistics agency said on Thursday. Year on year, Canada's second-quarter chemical plant utilisation rate was up by 3.5 percentage points from 76.6% in the 2011 second quarter, according to data from Statistics Canada. Meanwhile, in the plastics industry, plant capacity utilisation fell to 77.1% in the second quarter, from 77.8% in the first. In the 2011 second quarter, that sector's capacity utilisation was 72.1%. In the rubber industry, second-quarter plant capacity utilisation was 81.6%, compared with 86.0% in the 2012 first quarter and 84.6% in the 2011 second quarter. In the 2011 second quarter, the overall manufacturing plant capacity utilisation rate was 78
I am reading the book called The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie. I cannot understand the function that is written in the book for generating a pseudo-random number. It is like this:

unsigned long int next = 1;

int rand(void)
{
    next = next * 1103515245 + 12345;
    return (unsigned int)(next / 65536) % 32768;
}

void srand(unsigned int seed)
{
    next = seed;
}

I also tried myself, but I only came up with the following observations:

65536 is the maximum 16-bit unsigned value plus 1
32768 is the maximum 16-bit signed value plus 1

But I am not able to figure out the whole process. This is the book written by the legends and I want to understand this book. Please, if anybody can help me to figure out this problem I will feel very fortunate.

Pseudo-random number generators are a very complex subject. You could study them for years, and get a PhD on it. As commented, read also about linear congruential generators (it looks like your code is the example from some C standard). In C on POSIX systems, you have random(3) (and also lrand48(3), sort-of obsolete); in C++11 you have <random>. The /65536 operation might be compiled as >>16, a right shift of 16 bits. The %32768 operation could be optimized as a bitmask (same as & 0x7fff) keeping the 15 least significant bits.

This doesn't have an accepted answer yet, so let's try one. As noted by Basile Starynkevitch, what is implemented here is a pseudo-random number generator (RNG) from the class of linear congruential generators (LCGs). These in general take the form of a sequence

X := (a * X + c) mod m

The starting value for X is called the seed (same as in the code). Of course c < m and a < m, and often also c << a. The multiplier a and modulus m are usually chosen so that the whole sequence does reasonably well in the spectral test, but you probably don't have to care about that to understand the basic mode of operation. If you are a little bit into number theory, you will probably see that the sequence repeats after a while (it is periodic).
Random numbers are generated by first seeding X with a starting seed. For each generated number, the sequence is cycled and a subset of the bits of X is returned. In the code from the question, a = 1103515245, c = 12345, and m is implicitly pow(2, 8 * sizeof(unsigned long)) by virtue of unsigned integer wraparound. These are also the values ISO/IEC 9899, i.e. the C language standard, suggests.

With this known, the first pitfall is probably this statement:

return (unsigned int)(next / 65536) % 32768;

Kernighan and Ritchie probably thought that using only simple arithmetic is more readable and more portable than using bit masks and bit shifts. The above is equivalent to

return (unsigned int)(next >> 16) & 0x7fff;

which selects bits 16-30 from next. You get back a pseudo-random number in the range [0, 32767]. The bit range is also the one suggested in the C standard.

WARNING: It is well known that this LCG, while widely deployed because it is noted in the standard, does not produce very good pseudo-random numbers (the version in glibc is even worse). In particular, it is absolutely unsafe to use for cryptographic applications. With so few random bits, I would not even use it for any Monte Carlo method, because results may be severely skewed by the quality of the RNG. So in short: Try to understand it: yes, you are welcome. Use it for anything: no.
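To make the shift/mask equivalence concrete, here is a small self-contained C sketch (the constants are the ones quoted above; the function and variable names are mine) with the generator written both ways:

```c
/* State for the division/modulus version, as in the question. */
static unsigned long next_a = 1;

int rand_divmod(void) {
    next_a = next_a * 1103515245 + 12345;
    return (unsigned int)(next_a / 65536) % 32768;
}

/* Same generator using a right shift and a bitmask instead:
   /65536 is >>16 for unsigned values, and %32768 keeps the low 15 bits. */
static unsigned long next_b = 1;

int rand_shiftmask(void) {
    next_b = next_b * 1103515245 + 12345;
    return (unsigned int)(next_b >> 16) & 0x7fff;
}
```

Calling both in lockstep produces identical sequences; with seed 1 the first value from either is 16838, and every value stays in [0, 32767].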
UPDATE: Part 1.5: Managed Metadata in SharePoint 2010 - some notes on the "why" Part 2: ECM platform enhancements - Enterprise Content Types, Content Organizer, Scalability etc.

I want to focus on Managed Metadata first as it will be such a key ECM building block in SharePoint 2010.

Background

In SharePoint 2007, metadata was a huge blind spot – many organizations have a fundamental requirement to only allow certain ‘approved’ terms from a central list to be used as metadata. Broadly, the options were:

- a plain old textbox

Frequently, metadata terms are in a hierarchy, which counts some of those options out. Otherwise the first and last options were lame/unsuitable across large deployments, and I can practically guarantee that any vendor or custom solution out there wouldn’t be as rich as a proper baked-into-SharePoint implementation. And this is what we’ve now got in SharePoint 2010 with the “Managed Metadata” capability – I wouldn’t say it covers all of the bases, but it can be extended easily. In my talk I joked that I couldn’t bear to do a talk without any code, and so showed how a notable hole in the metadata framework can be plugged in 10 minutes flat by using the Microsoft.SharePoint.Taxonomy namespace. More on this later. A key thing to note is that the new Managed Metadata field now exists by default on many core content types such as ‘Document’ – so it’s right there without having to explicitly add it to your content.

SharePoint 2010 - Creating the central taxonomy

An organisation’s taxonomy is defined in the Term Store Management Tool – this is part of the Managed Metadata service application, and can be accessed either from Central Administration or from within Site Settings. Permissions are defined within the Term Store itself. For my demo I “borrowed” the taxonomy from a popular UK electrical retailer, and added the terms manually (but note you can also import from CSV).
The following image shows the different types of node used to structure and manage a SharePoint 2010 taxonomy, and also the options available to manage a particular term:

Adding site columns - making terms available for use

In order for authors to be able to use the terms on a document library, a column needs to be created (most likely on the appropriate content types) of type ‘Managed Metadata’. There are 2 key steps here:

- Mapping the column to the area of the taxonomy which contains the terms we wish to use for this field. Some notes on this:
  - The node selected is used as the top-level node – if it has children, these values can also be used in this field.
  - Site collections can optionally define their own term sets at the column level (i.e. leverage the authoring experience you’re about to see, but not just for organization-wide term sets) rather than use the central one. This is labelled as ‘Customize your term set’ in the image above, and allows terms to be added when this radio button is selected.
- Specifying whether ‘Fill-in’ choices are allowed (shown on the lower part of the above image):
  - First thing to note is that ‘fill-in’ choices are only possible when the ‘Submission Policy’ of the linked parent term set is set to ‘Open’. This provides centralized master control to override the local setting on the column.
  - When the “Allow ‘Fill-in’ choices” option on the column is set to ‘Yes’, we specify that authors can add terms into the taxonomy as they are tagging items - in taxonomic terms, this model is known as a folksonomy, meaning it is controlled by end users/community rather than centrally defined.
- I can imagine some document libraries may use both types (traditional taxonomy and folksonomy). One column is understood to be more controlled, the other free and easy. With some custom dev work on the search side, it would probably be possible (definitely if you have FAST) to weight the more controlled field higher than the folksonomy field in search queries – thus providing the best combination of tagging and “searchability”.

The end user experience – web browser

Now that we have a managed metadata site column, when a user is tagging a document in an appropriate library they can either get a ‘type-ahead’ experience where suggestions will be derived from the allowed terms:

..or they can click the icon to the right and use a picker to select (e.g. if they don’t know the first letters to type):
In my television specifications demo, I added some phoney terms ‘Plasma Super’ and ‘Plasma Ultra’ to my preferred term of ‘Plasma’, and showed that in the user experience the synonyms show up (indented) in the type-ahead, but cannot actually be selected – the preferred term of ‘Plasma’ will always end up in the textbox: In case you’re curious as to equivalent picker experience, this shows synonyms in a ‘tooltip’ kind of way when you hover over the term. - Multi-lingual – for deployments in more than one language, the metadata framework fully supports the SharePoint 2010 MUI (Multi-lingual User Interface), meaning that if the translations have been defined, users can tag items in the language tied to the locale of the current web. The underlying association is the same as the value actually stored in the SharePoint field is partly made up of the ID. - Taxonomy management – as shown in the term store screenshot way above, terms can be copied, reused (so a term can exist in multiple locations in the taxonomy tree without being a duplicate i.e. in a ‘polyhierarchy’ fashion – a common requirement for some clients), deprecated (so no new assignments of the term can occur), merged and moved etc. In short, the types of operation you’d expect to need at various times. - I’d add a note that these are possible against terms in the taxonomy – the parent node types of term set and group (in ascending order) logically don’t have the same options, so if you make the beginner’s mistake of creating a term set when you really wanted a term with a hierarchy of child terms underneath, you have some retyping to do as you can’t restructure by demoting a term set to a term. The key is simply understanding the different node types and ideally having more brain cells than I do. - Descriptions – minor point, but big deal. Add a description to a term to provide a message to users (in a tooltip) about when and how to use a term. 
This can be used to disambiguate terms or otherwise guide the user e.g. “This tag should only be used for Sony, not Sony Bravia models”. - Delegation/security – permissions to manage the taxonomy are defined at the group level (top-level node), so if you wish to have different departments managing different areas of the tree, you can do this if you create separate groups. Related to this, each term set can be allocated a different owner and set of stakeholders – this isn’t security partitioning, but does provide a place to specify who is responsible and who should be informed of changes at this level (in a RACI kind of way). - User feedback – if the term set has a contact e-mail address defined, a ‘Send feedback’ mailto link appears in the term picker, thus providing a low-tech but potentially effective way of users suggesting terms or providing feedback on existing terms. - Social – a user’s tagging activity will be shown in their activity feed No doubt I’ve missed some – add a comment if any spring to mind please! Extending the metadata framework – adding approval So there are some great features in the framework, but one thing that seems to be ‘missing’ is the idea of being able to approve terms before they make it into the central taxonomy. So perhaps we want to allow regular users to add terms into the taxonomy quite easily, but only if they are approved by a certain user/group - this would give a nice balance between a centrally-controlled taxonomy and a true folksonomy. I put the word ‘missing’ in quotes just now because quite frankly, it’s pretty trivial to build such a thing based on a SharePoint list and that’s just what I did in my talk. I’m sure more thought would need to go into it for production, but probably not much more. All we really need is to set up a list somewhere, add some columns, and add an event receiver. 
Adding an item to my list looked like this – I need to specify the term to add and also the parent term to add it under (using a managed keywords column mapped to the base of my taxonomy, meaning terms can be added anywhere): Then I just need some event receiver code to detect when an item is approved, and then add it to the term store: 1: public class TaxonomyItemReceiver : SPItemEventReceiver 2: { 3: public override void ItemUpdated(SPItemEventProperties properties) 4: { 5: if (properties.ListItem["Approval Status"].ToString() == "0") 6: { 7: string newTerm = properties.ListItem.Title; 8: TaxonomyFieldValue parentTerm = properties.ListItem["Parent term"] as TaxonomyFieldValue; 9: 10: TaxonomySession session = new TaxonomySession(properties.Web.Site); 11: TermStore mainTermStore = session.TermStores[0]; 12: Term foundTerm = session.GetTerm(new Guid(parentTerm.TermGuid)); 13: Term addedTerm = foundTerm.CreateTerm(newTerm, session.TermStores[0].DefaultLanguage); 14: mainTermStore.CommitAll(); 15: } 16: 17: base.ItemUpdated(properties); 18: } 19: } My code simply finds the term specified in the ‘Parent term’ column, then adds the new term using Term.CreateTerm() in Microsoft.SharePoint.Taxonomy. Note the use of the TaxonomyFieldValue wrapper class – this is just like the SPFieldLookupValue class you may have used for lookup fields, as terms are stored in the same format with both an ID and label so this class wraps and provides properties. Once this code has run, the term has been added to the store and is available for use throughout the organization – perhaps the best of both worlds. Amusingly, when we got to the “soooo, did it work?” bit in my talk the demo gods mocked me and the type-ahead on the term picker waited a full 10 seconds before the term came in, leading to a big “ooof……[pause]…..woohoo!” from the audience which capped off a hugely fun talk (for me at least). 
Next time: other ECM enhancements such as Enterprise Content Types, Content Organizer, Scalability etc.

18 comments:

Excellent article thank you, looking forward to my official briefings from MS in the new year, but your early insights are much appreciated. Thanks for the great article. Excellent overview and manual. Ben Chris, Have you tried to pull the data from the managed metadata column via an event receiver and pass it to another column type? @Matty, No, can't say I have - but I wouldn't expect any issues, outside of the usual event receiver things you might run into (e.g. having to disable event firing if you're updating the same item in the receiver) etc. Chris. Great article, 2010 certainly stepped up on metadata management! Well demonstrated. Excellent Chris. Did you find any way to add custom properties to terms, e.g. some type of ID (to stay connected to any LOB category system), synonyms or links to related terms? @SharePointFrank, Well there are a couple of aspects to this - terms in the term store can have synonyms, as I showed with the 'Plasma' and 'PlasmaSuper'/'PlasmaUltra' example in this article. However remember you get the semantics of synonyms here - they are visible to the user and cannot be selected; the whole point is to steer users towards preferred terms. I suspect what you're looking for is really the Term.CustomProperties collection. You'd have to use the API to read/write into this, but this will allow you to store whatever you like against your terms. HTH, Chris.
Unfortunately I am only getting results returned when "Line of business" is searched for, not "LOB". Thoughts? Cheers, Alan

Alan, I haven't looked at this specifically (though I can completely see where you're coming from), but my theory is that there is no native integration between synonyms in Managed Metadata and synonyms in search. What happens if you add the same synonyms to the search thesaurus file? (I'm assuming you're using Enterprise Search within SharePoint rather than FAST - the latter handles synonyms differently.) Here's some info on adding items to the thesaurus file for SharePoint 2007; I'm guessing it's much the same. Let us know what you find :) Chris.

Thanks, great article about the Taxonomy functionality.

Chris, this was a great post. We just added managed metadata and term store functionality to our capture product, and your explanation helped me configure demos and dig into the technology. Thanks!!!

Hi Chris, wanted to get a quick heads up on the taxonomy and metadata tools within SharePoint 2010, having looked at them a while back. Interesting that you can import the taxonomy from CSV; this could really help us in getting a taxonomy started and engaging users quickly. Cheers.

Really useful. Great article, very helpful, thanks. Slightly off topic, but is it possible to have branching logic within the metadata, i.e. if you answer question 1 yes, you are presented with metadata choices 2 and 3, but if you answer question 1 no, you are presented with metadata choices 4 and 5? Thanks for any thoughts.

Thanks Chris. Is there any way to block certain terms through folksonomy? I don't want end users to create a term which is similar to the taxonomy (centrally managed).

Excellent article. Thank YOU!

Thanks for sharing, great article.
Differential Equations and Error of Estimations

So I'm tasked with the following two questions:

For the initial value problem y' = y sin(x), y(0) = 1, first find the exact solution. Then make log-log plots of the error versus n (the number of steps) at x = pi for Euler's method and the fourth-order Runge-Kutta method.

Repeat the above exercise for the IVP y' = y + sin(x), y(0) = -0.48.

So we have a defined function for finding the Euler estimate:

    def EulerMethod(xstart, ystart, xfinish, nsteps, f):
        '''
        Returns a list of x and y values for the initial value problem
        y' = f, y(xstart) = ystart, up to x=xfinish, using n steps of
        Euler's Method.

        EXAMPLE:
        var('x,y')
        f(x,y) = -y + cos(x)
        EulerMethod(0,1,2,4,f)
        [(0, 1),
         (0.500000000000000, 1),
         (1.00000000000000, 0.938791280945),
         (1.50000000000000, 0.739546793407),
         (2.00000000000000, 0.405141997537)]

        #If you just want to get the last value, you could do:
        sol = EulerMethod(0,1,2,4,f)
        sol[-1]
        (2.00000000000000, 0.405141997537)
        '''
        sol = [ystart]
        xvals = [xstart]
        h = N((xfinish-xstart)/nsteps)
        for step in range(nsteps):
            sol.append(sol[-1] + h*f(xvals[-1],sol[-1]))
            xvals.append(xvals[-1] + h)
        return zip(xvals,sol)

As written by the teacher. That is all we are given; there is no function for RK4, so I assume we are to use the built-in Sage RK4 estimation. Anyways, I have no idea what to do. I'm absolutely lost. Can anyone please help me? Massive thanks in advance.
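Not from the original thread, but a sketch of the underlying math may help frame the exercise. Separating variables gives dy/y = sin(x) dx, so the exact solution is y = exp(1 - cos x) and y(pi) = e^2. A plain-Python version of Euler's method and classical RK4 (no Sage needed; the function names here are mine) then shows the errors at x = pi shrinking roughly like 1/n and 1/n^4, which are the slopes a log-log plot of error versus n would reveal:

```python
import math

def euler(f, x0, y0, xf, n):
    # Fixed-step Euler: y_{k+1} = y_k + h*f(x_k, y_k).
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, x0, y0, xf, n):
    # Classical fourth-order Runge-Kutta with fixed step h.
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

f = lambda x, y: y * math.sin(x)
exact = math.exp(2.0)   # y(pi) = exp(1 - cos(pi)) = e^2

for n in (10, 100, 1000):
    e_err = abs(euler(f, 0.0, 1.0, math.pi, n) - exact)
    r_err = abs(rk4(f, 0.0, 1.0, math.pi, n) - exact)
    print(n, e_err, r_err)   # Euler error ~ 1/n, RK4 error ~ 1/n**4
```

Feeding the (n, error) pairs to Sage's or matplotlib's loglog plot should then give lines of slope about -1 and -4 respectively.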
Red Hat Bugzilla – Full Text Bug Listing

Description of problem:
After a process such as prelink changes the contents of an executable file, pidof will no longer show the process id unless arg0 of the process exactly matches the full path of the executable. This worked properly on RHEL5 (at least RHEL5.5) but doesn't work correctly on RHEL6.1.

Version-Release number of selected component (if applicable):
At least sysvinit-tools-2.87-4.dsf.el6.x86_64, but it also still fails in the latest upstream version, sysvinit-2.88dsf.

How reproducible:
One way is with the squid package, which has a parent process with an arg0 of /usr/sbin/squid and a child process with an arg0 of "(squid)".

Steps to Reproduce:
1. Install the squid package on a system that didn't have it, or downgrade+upgrade it so it gets a new executable.
2. As root or the squid user run /sbin/pidof /usr/sbin/squid
3. As root run prelink with /etc/cron.daily/prelink
4. Repeat step 2; you'll see only one pid.

Actual results:
The process that doesn't have an arg0 of /usr/sbin/squid doesn't show up in step 4. This is especially a problem for squid because the pid it stores in squid.pid is the child process, the one that doesn't show up, so pidof cannot be used to match that.

Expected results:
The same process ids should appear in steps 2 & 4.

Additional info:
I have reported this to the upstream supplier and given them a suggested simple patch.

Thank you for the report. I have tried to reproduce this with this:

    #!/bin/bash
    yum -y remove squid
    yum -y localinstall squid-3.1.4-1.el6.x86_64.rpm
    yum -y localinstall squid-3.1.10-1.el6.x86_64.rpm
    /etc/init.d/squid start
    /sbin/pidof /usr/sbin/squid
    /etc/cron.daily/prelink
    /sbin/pidof /usr/sbin/squid

but pidof showed me two pids in both calls. You wrote that you don't have this problem in RHEL5. Can you please try whether this also occurs with the RHEL5 version of pidof on RHEL6? There is a possibility that it is caused by some regression in sysvinit.

Lukáš, thanks for trying it.
I just made a new RHEL6.2 instance on Amazon EC2 and found out that squid is no longer a good example package, because its binaries have been made "not prelinkable". If I run 'prelink -p' after /etc/cron.daily/prelink, that's what it says about /usr/sbin/squid. I haven't found exactly why or when that change happened, but it appears it was probably made into a Position Independent Executable, most likely for security reasons. Unfortunately, offhand I don't know another example application to demonstrate this problem, one that has a second process where arg0 is changed. I can tell you how to simulate what prelink does to executables that are prelinkable, however. You could do these steps as root:

    yum install squid
    /etc/init.d/squid start
    /sbin/pidof /usr/sbin/squid    # prints 2 process ids
    cp /usr/sbin/squid /usr/sbin/squid.copy
    rm -f /usr/sbin/squid
    mv /usr/sbin/squid.copy /usr/sbin/squid
    /sbin/pidof /usr/sbin/squid    # doesn't print any process ids

I know that there was a regression in sysvinit, because when I compiled the current sysvinit on RHEL5, it exhibited the same problem until my patch was applied. I tried copying a RHEL5 pidof binary to RHEL6 and it didn't work at all; it probably would need to be recompiled, but I'm not yet convinced that it's worth the time to try that. - Dave

In this example the behavior of pidof is the same on RHEL5 and RHEL6. I am starting to think that this is not a bug but a feature: when I call pidof on a concrete binary I want to know its pids, not those of any other binary, even one with the same name in the same path. Another question is calling pidof with only the name of the program ("pidof squid"); it seems that this in all cases returns only one pid, and it is not the one in the .pid file.

I was surprised to find you're right that my test case in comment #5 causes the same behavior of pidof on RHEL5 & RHEL6. pidof after prelink on RHEL5, however, does still work even though it changes the inode & size of the binary.
The test case I gave you was bad. Change it instead to eliminate the "rm -f /usr/sbin/squid" step. Then on RHEL5 it does the expected thing and prints both process ids. The mv step then prompts to overwrite; just answer "y" or use mv -f to avoid the prompt.

I don't think the behavior I observed for pidof on RHEL6 is a feature, it is a bug. A big use for the pidof command is for /etc/init.d scripts to find out whether or not a copy of the same program is already running, and they need to know any process running with the same path, not just the same binary file.

This exercise has now shown me what the real difference is between RHEL5 and RHEL6. It isn't that pidof has regressed, it is just that it hasn't kept up with changes to the kernel. With RHEL6, doing a copy of a running binary and then overwriting the binary with mv -f causes the /proc/NNNN/exe symlink to have " (deleted)" appended to the name of the destination. That doesn't happen on RHEL5 unless you do a rm -f of the running binary in between. pidof, without my patch, can't cope with that difference.

With this modification I can reproduce the difference, but I have tried this with a simpler binary and the result was the same on RHEL5 and 6:

    [root@rhel5 x]# cat a.c
    #include <unistd.h>
    int main() {while (1) sleep(3600);}
    [root@rhel5 x]# gcc a.c -o a
    [root@rhel5 x]# ./a &
    [1] 12159
    [root@rhel5 x]# cp a b
    [root@rhel5 x]# mv -f b a
    [root@rhel5 x]# readlink /proc/12159/exe
    /root/x/a (deleted)

I think there is probably a bug in the RHEL5 kernel causing the symlink to stay the same. I will discuss it with the kernel team. About the usage of pidof in initscripts, you can use pidofproc from /etc/init.d/functions.

With your simple binary, whether or not the " (deleted)" shows up on RHEL5 appears to depend on whether the filename you copy to has a single character. I tried "b", "c", "A", which all showed " (deleted)", but "a.copy" or "a2" did not show " (deleted)".
- Dave

A patch for this has now been put into the HEAD of the upstream source, and I tested it on RHEL6.2 and it works.

The bug is still there, and the clvmd process is affected. Proposed patch:

    --- a/src/killall5.c	2013-07-12 07:05:25.000000000 +0000
    +++ b/src/killall5.c	2013-07-12 07:13:05.342450210 +0000
    @@ -328,7 +328,12 @@
     		if (readlink(path, p->pathname, PATH_MAX) == -1) {
     			p->pathname = NULL;
     		} else {
    +			char *ptr = NULL;
     			p->pathname[PATH_MAX-1] = '\0';
    +			ptr = strstr(p->pathname, " (deleted)");
    +			if (ptr) {
    +				*ptr = '\0';
    +			}
     		}
     
     		/* Link it into the list. */

Vladislav, in what version is the bug, and what version is your patch for? It does not appear to match the current upstream HEAD at. Dave

Dave, that is for EL6's sysvinit-2.87.

======================================
Verified in version: sysvinit-tools-2.87-6.dsf.el6.i686 PASSED
======================================

    # /root/aaa 3600 &
    [4] 28006
    # cp /root/aaa /root/bbb
    # rm /root/aaa
    rm: smazat běžný soubor „/root/aaa“? y
    # mv /root/bbb /root/aaa
    # pidof /root/aaa    # --->> there are pids ---> OK
    28006 27829 27566
    # rpm -Uvh sysvinit-tools-2.87-6.dsf.el6.i686.rpm
    Připravuji... ########################################### [100%]
    balíček sysvinit-tools-2.87-6.dsf.el6.i686 je již nainstalován
    # pidof /root/aaa    # --->> there are pids ---> OK
    28006 27829 27566
    #

======================================
Reproduced in version: sysvinit-tools-2.87-5.dsf.el6.i686 FAIL
======================================

    # /root/aaa 3600 &
    [3] 27829
    # cp /root/aaa /root/bbb
    # rm /root/aaa
    rm: smazat běžný soubor „/root/aaa“? y
    # mv /root/bbb /root/aaa
    # pidof /root/aaa    # --->> there are no pids ---> BAD
    # rpm -Uvh /root/sysvinit-tools-2.87-5.dsf.el6.i686.rpm
    Připravuji... ########################################### [100%]
    balíček sysvinit-tools-2.87-5.dsf.el6.i686 je již nainstalován
    # pidof /root/aaa    # --->> there are no pids --->.
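The heart of the fix, stripping the " (deleted)" marker, is easy to mirror elsewhere. A hedged Python sketch of the same idea (the function names are mine; executable_path is Linux-only since it reads /proc, and this version trims a trailing marker where the C patch truncates at the first strstr match):

```python
import os

DELETED_MARKER = " (deleted)"

def strip_deleted(target):
    # Mirror of the sysvinit patch: drop the marker the kernel appends
    # to /proc/<pid>/exe once the on-disk binary has been replaced, so
    # the path again matches what was passed on the pidof command line.
    if target.endswith(DELETED_MARKER):
        target = target[:-len(DELETED_MARKER)]
    return target

def executable_path(pid):
    # Resolve the exe symlink of a live process (Linux-only).
    return strip_deleted(os.readlink("/proc/%d/exe" % pid))

print(strip_deleted("/root/x/a (deleted)"))   # → /root/x/a
print(strip_deleted("/usr/sbin/squid"))       # → /usr/sbin/squid
```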
On Apr 20, 2012, at 12:09 PM, Richard Wackerbarth wrote:

> As for the "opaque" hash, the order is not important. The order of the inputs
> is arbitrary. It needs to be fixed and published so that multiple encoders
> will derive the same hash as that generated by another encoder.

Right. I've updated the description of bug 985149 to be explicit about the proposal. I like Permalink-Hash as the header name.

> If the List ID is made a visible part of the message identifier, then it is
> creating a separate namespace for each list. Here the order may have
> implications when viewed in the context of other uses.
>
> Here, we might wish to be able to have only one copy of the message in the
> archive and/or the distribution channels even when that message gets
> cross-posted to multiple lists.

Note that RFC 5064 defines the Archived-At header. IMO, this would be the appropriate place to add any list-specific namespace discriminator. Also, RFC 2369 defines the List-Archive header, which could contain the base URL to the archiver, including the List-ID information.

> The one thing that does need to be visible is the designation of the revision
> of the hashing algorithm. Otherwise, without that visible indicator, there is
> no way to recreate a "stable" value if a rehashing needs to be performed.

Yep, see the bug for details. Below is an example in Python code.

Cheers,
-Barry

    >>> from email import message_from_string as mfs
    >>> msg = mfs("""\
    ... To: mylist at example.com
    ... Message-ID: <foo>
    ...
    ... """)
    >>> from hashlib import sha1
    >>> from base64 import b32encode
    >>> bare_msgid = msg['message-id'][1:-1]
    >>> bare_msgid
    'foo'
    >>> msg['List-ID'] = '<mylist.example.com>'
    >>> bare_listid = msg['list-id'][1:-1]
    >>> bare_listid
    'mylist.example.com'
    >>> h = sha1(bare_msgid)
    >>> h.update(bare_listid)
    >>> permalink_hash = b32encode(h.digest())
    >>> permalink_hash
    'FW7VLQIZV3P6O64PL7OGLM5Y3RUBQZ4F'
    >>> msg.add_header('Permalink-Hash', permalink_hash)
    >>> msg['List-Archive'] = '{}'.format(bare_listid)
    >>> msg['list-archive']
    ''
    >>> msg['Archived-At'] = '{}/{}'.format(msg['list-archive'], permalink_hash)
    >>> msg['archived-at']
    ''
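The interactive session above is Python 2. A hedged Python 3 sketch of the same computation as a reusable function (the function name is mine; hashlib needs explicit bytes in Python 3, and the angle brackets are stripped just as the session does with [1:-1]):

```python
from base64 import b32encode
from hashlib import sha1

def permalink_hash(message_id, list_id):
    # Fixed, published input order: bare Message-ID first, then bare
    # List-ID, so every encoder derives the same hash for a message.
    h = sha1(message_id.strip("<>").encode("utf-8"))
    h.update(list_id.strip("<>").encode("utf-8"))
    # SHA-1 digests are 20 bytes, so base32 yields exactly 32 chars.
    return b32encode(h.digest()).decode("ascii")

print(permalink_hash("<foo>", "<mylist.example.com>"))
```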
6:30: I awaken, and put on some clothes. Since I will be under anesthesia, I have consumed nothing, solid or liquid, since the previous night.

7:00: I arrive at the office, and begin filling out insurance forms and waivers. I am then taken to the room where the operation will take place.

7:15: The operation begins by inserting an IV into my left arm, loaded with a saline and sodium brevitol solution. After the injection, unconsciousness comes on in about ten seconds.

7:45: The operation is completed and I am moved to a recovery room.

8:00: I awaken, feeling no pain. When I attempt to stand, I become extremely dizzy. Both walking and talking become difficult.

8:45: I arrive at home, and place two large bags of refrigerated peas on each side of my jaw. I also take an antibiotic, washing it down with Gatorade, which is when I learn that the lower part of my jaw, extending from the lips to the chin, is numb. After consuming this repast, I fall asleep.

After a few cycles of sleeping/drinking/drugging/icing/sleeping (approx. 3 hours), I feel well enough to move around. I take some pain medication and begin to relax. The area between my lower lip and chin is still numb, and the back edge of my jawline by the ear is mildly painful to the touch.

5:35: I begin noding my dental surgery experience on Everything.

5:38: I pause for thirty minutes to apply an ice pack.

6:00: I take another antibiotic. When drinking some water, I notice that sensations on the right side of my tongue differ. While water over the other areas of my tongue feels cool, the water on the right side of my mouth feels warm and tastes sulfurous. I'm feeling tired now, so I go to bed.

Don't believe the hype. Wisdom teeth are a hoax.

#include "e2stddisclaimer.h"

Day of Surgery

If you received a general anesthetic (i.e., you went all the way under), have someone else drive you home, then lie down with your head elevated until all effects of anesthesia have disappeared.
Just put some calm music on and enjoy the trip. Again, effects vary from individual to individual, and you may feel drowsy for several hours. DO NOT operate a motor vehicle or any mechanical equipment for at least 12 hours after the surgery. Don't node, either ;-).

Do not disturb the surgical area. You probably got gauze to bite down on. Keep it in, and keep firm pressure on the sockets (if it hurts, you're biting too hard). This is very important! Blood clots need to form down in the sockets, and if you knock them loose or prevent them from forming, you're going to be in for a few painful weeks.

After the first hour or so you've been home, you can change the gauze. You probably still won't be able to feel anything, so be careful! Gently remove the bloody gauze, and then just pack a few pieces of fresh gauze back in. Gently! But be sure the gauze is going over the sockets, and not being clenched between your remaining molars. Steady pressure is what it takes to form clots. If you change the gauze after another hour and there still seems to be a bit of blood, soak a tea bag in moist water, squeeze it dry, wrap it between gauze, then bite down on that for 30 minutes. The tannin will aid in the healing. If after the first two hours you feel the bleeding is still severe, please call your doctor/dentist/oral surgeon (see disclaimer above)!

Swelling is to be expected, and usually peaks around 48 hours after the surgery. To reduce swelling, wrap cold packs, ice bags, or bags of frozen vegetables in a towel and apply to your face near the surgical site, 20 minutes on, 20 minutes off. I recommend alternating either side of the face in this manner. Put the ice pack on your bed and lay your cheek on it. After 20 minutes, roll over. After an hour, change the ice and check the gauze. Keep this up for 2 days (during the day only) and you'll be in good shape.
On the third day, switch to heat, either in chemical hot packs, hot water bottles, microwaved moist towels, or electric blanket-type heating pads. By now the swelling should be going down, so use heat until you get tired of it. If you were prescribed medicine for swelling (lucky S.O.B....), take it as directed. You may have a few bruises, but they'll go away after a week. Your jaw muscles will feel tight and you may have difficulty opening your mouth. This too should subside within a week or two. Use lip balm (like Chap-Stick) to keep your lips from cracking.

Resist the urge to probe the sockets with your tongue, finger or anything else! Although it may seem convenient to do so, do not suck anything through a straw! Do not smoke cigarettes (or anything else; now could be that excuse to quit you've been looking for...). Do not spit (let it drain out of your mouth). If you play any wind instrument, woodwind or brass, give it a rest for at least a week. Ignore the urgings of your band director or fellow musicians, and forget about festival, contest, or the Friday night football game. It's not worth sacrificing your health! All these actions will put pressure on the sockets and could dislodge the clots, which will put you in a world of pain.

Diet

A few hours after surgery, once you've changed the gauze and made sure there's no severe bleeding, you'll probably be feeling a little hungry. Eat anything that fits the following descriptors: Soft, Bland, Nourishing, Mushy, Pureed, Liquid, Cool, Lukewarm. Good examples are pudding, ice cream, gelatin, vegetable or fruit puree. Bananas, soups. One food I particularly enjoyed was a combination between a virgin daiquiri and a fruit smoothie: Fill up a blender with ice, and crush it finely. Add any of the following: frozen orange juice concentrate, kool-aid mix, fresh fruits, ice cream. Blend some more. Pour into a cup; eat with a spoon. Goes down easy and the cold will soothe the pain and swelling.
Avoid anything really hot (caliente) or spicy (picante). Avoid anything that will force you to chew or move your mouth too much. Avoid anything with small, hard pieces that could get stuck in the sockets, like nuts or popcorn. As the days go on, gradually progress to more solid foods as you feel comfortable chewing them. Remember, proper nourishment is essential to the healing process!

Pain and Medications

The local anesthetic the doctor gave you (probably Novocain or Lidocaine) will wear off after a few hours, and then your mouth is going to hurt. If you were prescribed a painkiller like vicodin, now's the time to start taking it, preferably with your first meal. Resist the urge to take it any more often than the bottle recommends; you'll just run out faster and on day three you'll be sorry. You'll have trouble getting a prescription like that renewed, as any painkiller strong enough for oral surgery is likely to be habit-forming. If you feel the standard dose is not enough, supplement it with an analgesic like aspirin or Tylenol. Taking your pain medication with food and water will help lessen any nausea. Some painkillers are likely to mess up birth control pills; ask your doctor.

If you wear any orthodontic appliances (you have all the luck, dontcha?), put them in as soon as you get home. Putting them in after the swelling kicks in will not be fun.

For the Next Few Days

Keep your mouth clean! You don't want to be fighting an infection along with the rest of the pain from surgery. The day after surgery the blood clots in your sockets should be in place, and you can start rinsing your mouth with warm salt water. Mix 1/2 teaspoon NaCl with 1 cup H2O. Rinse gently every 2 hours for the first week, then 3 times a day for the next 2 weeks. Also begin your normal toothbrushing routine the day after surgery, as long as it's comfortable to do so. Remember not to spit! Let the salt water/toothpaste drain out of your mouth. Don't swoosh water around in your mouth either.
Roll your head around. You'll feel silly, but you'll be better off in the long run.

When you want to go back to work/school or whatever your normal daily routine consisted of is up to you. I stayed home for three days. I know guys who were back on the job the day after. I know other people that had it so bad they stayed home for two weeks. Use your best judgement, and be careful.

Bad stuff:

Dry sockets. If you go ahead and smoke, or do any of the other stuff I said not to, or maybe just have bad luck, around the third day you'll probably lose a blood clot out of one of the sockets, exposing the soft tissue and nerves to the air. This will hurt. You'll feel the pain spiking down your jaw and back to your ear. Call your doctor.

Skin discoloration: Probably a bruise. Aspirin and heat will help it clear up. If you had an IV, there may be swelling and chemical irritation in the vein.

Numbness: If your lower wisdom teeth are taken out (and they usually are no matter what, for a number of reasons), you may experience some loss of sensation in your lips or jaw (after the Novocain has worn off). This is because the main nerve that supplies sensation to your mouth runs right beneath the roots of your teeth, and it may be damaged in the extraction. It usually disappears after a few weeks. It may continue for months. In rare cases the damage is permanent. As always, call your doctor if you have any questions.

And in the end...

After two or three weeks your sockets should be healed enough that you can safely use your tongue to poke out bits of food that will get stuck down in them. Go easy in them, and remember if you don't like this you can always go back to the saltwater rinses. After two months you'll forget the holes are even there, and after 3 or 4 months they'll be completely closed over. YMMV.

That's pretty much the drill I went through when I had all four of my wisdom teeth out. Read wisdom teeth and getting my wisdom teeth pulled for other noders' experiences.
But only trust the advice of a trained medical professional!

December 17th - cry.

Alright, I just got back into the office after having visited my brand new Oral Surgeon, who I was referred to by my trusted dentist. I was, of course, referred to him for extraction of my wisdom teeth. Dr. Douglas Vincelli seems to be a very competent doctor, and he has a nicely run office. The people there, including himself, are very in touch with the reality associated with oral surgery. So, I thank my dentist for the good referral. So far, everything sounds good, right? Well, now I'm scared. The doctor gave me three choices for my state during the activity.

Choice one: local anesthetic. Awake, but feeling no pain. Well, frankly, I think the knowledge and sight of someone working in your mouth like that is probably going to be the worst part, so no, I don't like this option.

Choice two: half asleep, half awake. Drugged to the point where I don't know and don't care about what is going on to me. Hm... that sounds good... but I tend to think of myself as pretty perceptive, so I might notice blood dripping out of my mouth and some guy drilling around in there and stuff. No matter how drugged I am.

Choice three: general anesthetic. Oh, what's that? I'll be completely asleep, and not have a clue what's going on? Sleep? I like sleep. Sleep it is.

I take a look at the panoramic x-ray that the doctor has. Holy shit! Here's teeth, teeth, teeth... they all look fine... all straight in a little army of teeth. I'm amazed by how deep the roots go, being a bit more than two times the size of the teeth. No wonder my teeth feel strong, I think. I wonder, so, where are the wisdom teeth I need to get removed? Well, right at the end of the line... Holy shit! The x-ray shows that on my bottom row of teeth, right next to the molars that look huge, are even bigger teeth. They're like 110% the size of my big molars... and they're sideways.
The tops of the wisdom teeth sit at a 45 degree angle to the rest of the teeth. A collision looks imminent... and this x-ray was taken September 11, 2001. (It took that long to get an appointment for this consultation. :()

So the doctor does a quick inspection of my mouth, and then begins to run down a list of the possible side-effects, some of which include permanent loss of feeling in some areas of my mouth. He explains the odds of these things happening as 0.5% of people having numbness caused by bruising a nerve during the operation, and 99% of those people recovering completely (within a few weeks to a few months). Hm, 5 out of a hundred thousand people have permanent numbing... and four teeth being removed... I don't really like those odds, but I can live with them.

Finally we move onto the scariest part of the appointment. The billing. Luckily, I have 100% dental coverage for the minor dental surgery that this is. Still, being completely anesthetized has its cost: the bill is $110 CND for this consultation, $942 CND for the teeth removal, and $175 CND for the anesthetic. $1227 CND... not as bad as I feared. I need to pay this up front, but they expect my insurance will recover all but a couple hundred dollars. I guess it's affordable, but I would prefer to spend that money on something else. I mean, you know, paying to have some people induce pain in me isn't my idea of a good time. If it has to be something oral related, I'd much prefer to spend a few hundred dollars on white chocolate brownies rather than having teeth removed.

The worst part is still yet to come, as I am to find out. And I'm not talking about the removal. I talk to the lady there to make an appointment for my wisdom teeth removal. I ask for a date as soon as possible, of course, because this waiting is terrible. December 28, 2001 is the date I'm given. Gah! I didn't expect it to be that soon!
I waited 4 months for this consultation, and now I don't even have one pay check between me and my removal. Doh! Well, now I really need that Christmas bonus, because I have a large expenditure coming up... even if my insurance company reimburses me, it's still a lot of money out of my pocket.

December 20th - The Christmas bonus has come, and I can afford the surgery. Phew.

"... because the skull and jaw of modern man is much smaller than that of our ancestors, most people do not have enough room for wisdom teeth..." (1)

That's right. Wisdom teeth are a visible sign of evolution. It looks like the human lower jaw is shrinking over time. This leaves less room for those wisdom teeth. Some people never get wisdom teeth, and there are children born today that aren't even born with the buds for them. Some say this is because people today are maturing faster. (2) Some say the early maturation isn't evolution but better nutrition. (2) Others say that the increased chewing that ancient man had to do stimulated the length of the lower jaw. (3) I didn't know that Cocoa Puffs were so nutritious! Better nutrition or evolution, wisdom teeth cause problems and the lucky kiddies of today and tomorrow won't have to deal with them.

(1) (2) (3)

After an Extraction

For the first twenty-four hours:

No Smoking
No Alcohol
No Hot or Hard Food or Drink
No Exercise or Exertion

Gauze

Bite on the piece of gauze in your mouth for a good half hour. This helps a blood clot to form. Some bleeding is normal, but if it starts again later, bite on another piece of gauze.

Pain

As your anaesthetic wears off (you were anaesthetised, weren't you...?) take some paracetamol or similar. Your dentist will advise you. It is important not to take aspirin, as it thins the blood, hindering formation of that blood clot I mentioned.

Salty Water

The following day, start to rinse with salt water. Stir a teaspoonful of salt into a glass of water and rinse the area around the socket regularly.
This will help prevent any infection occurring.

What Happens if I Smoke, Drink, Eat Hard Food?

As a smoker, your blood flow is going to be poor to start with, and it will be more likely that your body will have trouble forming a clot over the socket. The blood clot is important because it seals the wound from the bacteria and stuff in your mouth, and allows the wound to heal. If you smoke cigarettes after the extraction, a blood clot may not form at all, and that is called dry socket. Dry socket is agonisingly painful and smells dreadful. You don't want it to happen to you.

If you drink alcohol, you may cause inflammation of the wound. Besides, you're not supposed to drink alcohol when you've had anaesthetic plus painkillers.

If you eat hard food you may get something stuck in the socket or dislodge the blood clot. If you eat hot food you may burn the wound (or, if you're still numb, the rest of your mouth). Remember, you have an open wound in your mouth. Would you pour coke in a bullet wound on your leg? No? Well, be careful what you pour on the extraction wound in your mouth.

Why Are You Telling Me This Stuff? (A Disclaimer)

I work as a Dental Assistant in Australia. Giving post-operative care and instructions is part of my daily work. I assist in, on average, one extraction a day and I see perhaps one dry socket per month. These instructions are fairly generic and apply to every extraction we do. However, I am not a dentist nor am I your dental assistant; you should follow the instructions your own dental professionals give you after an extraction. If in doubt or pain, consult your dental professional.

Having your wisdom teeth pulled? Consider this!

SLEEP SITTING UP!

Why? It lets the fluids drain out of your head while you sleep, instead of the liquids pooling, which would happen if you lay down. This results in greatly reduced swelling. Greatly reduced swelling means greatly reduced pain on your part.
I had only one of the pain pills that was prescribed to me after the surgery. I didn't take any more because I didn't need them.

How? Sleeping sitting up means, very likely, sitting in a chair when you fall asleep. The first night after the surgery, this should be no problem at all. You should be so drugged up that falling asleep, no matter what position, will be easy. If you aren't fortunate enough to have a comfortable la-z-boy type chair to fall asleep in, I recommend stacking up pillows on your bed and sitting against them. If that doesn't work, sleep against a wall and use pillows to make it comfortable.

When? The first three or so nights should be all you need to get past the swelling. By the third night, I couldn't really see any swelling, and I was tired of the chair, so I returned to my bed and used the pillow stacking method for the next couple of nights. I was back to sleeping normally by a week after the surgery.

Who? This was recommended to me by the physical trainer for the sports teams at my high school. He taught my Spanish class and was out for (only) a couple of days due to having his wisdom teeth pulled. He touted sleeping sitting up as a method for speeding up recovery.

DISCLAIMER: I am not a health professional. This is entirely conjecture on the part of my onetime teacher, plus my own experiences. Consult with your doctor or orthodontic surgeon before you do this. All I can say is that I did it, I am currently 9 days after having had my wisdom teeth pulled, and I was never in any great pain.
http://everything2.com/title/Wisdom+teeth?showwidget=showCs1136289
CC-MAIN-2016-07
refinedweb
3,599
82.85
This chapter will focus on the case control structure of C programming. This control statement lets you make a decision from a set of choices; it is called switch. Switch control is characterized by the case and default keywords. In this control structure, the switch keyword is followed by an integer expression. The result of the expression is matched against the constants specified with each case. As soon as a match is found, the statements following that case are executed. If none of them matches, then the statements following default are executed. In other words, switch provides multiple branching capability to a program. The format of switch-case is as follows:

switch (expression_to_decide)
{
    case value_1:
        block_1;
        break;
    case value_2:
        block_2;
        break;
    default:
        block_default;
        break;
}

Explanation: Here,
- switch, case, break, default are keywords
- expression_to_decide is the expression which decides the value with which each case value is to be matched. This can be an integer or character constant.
- block_1, block_2, block_default, etc. are the respective bodies corresponding to each case value.

Example:

/* Program to illustrate switch-case statement */
#include <stdio.h>
#include <conio.h>

void main()
{
    char myname[20];
    int code;
    printf("Enter the code\n");
    printf("Enter 1 to print your first name\n");
    printf("Enter 2 to print your last name\n");
    scanf("%d", &code);
    switch (code)
    {
        case 1:
            printf("My first name is debo\n");
            break;
        case 2:
            printf("My last name is sen\n");
            break;
        default:
            printf("I don't want to print\n");
            break;
    }
    getch();
}

Snapshots of the sample program are as follows:

Figure – C Program example switch-case statement

Figure – C Program example switch-case statement compiled output

Figure – Output of C Program example switch-case statement

With this we conclude this chapter. The next chapter is dedicated to sequential statements. Thank you.
https://wideskills.com/c-tutorial/08-c-switch-case-stucture
CC-MAIN-2021-21
refinedweb
279
55.13
[Jim Fulton] ... From reading the source, they don't seem to fit the use case very well:

- They are registered with the TM and are called for subsequent transactions until they are unregistered. This is not what we want here. We want hooks to be called only if the current transaction commits. We want to throw the hooks away at the end of the transaction.

It's not obvious how to make this work with synchronizers. (I suppose the synchronizer could save the transaction it cares about and unregister itself if it sees another. This is a lot of bother.) It's curious to consider the ways in which this fails to work:

"""
import transaction

class OneShot:

    def __init__(self, hook, *args, **kws):
        self.hook = hook
        self.args = args
        self.kws = kws
        self.txnmgr = transaction.manager
        self.txnmgr.registerSynch(self)

    def beforeCompletion(self, txn):
        self.hook(*self.args, **self.kws)

    def afterCompletion(self, txn):
        self._remove()

    def _remove(self):
        try:
            self.txnmgr.unregisterSynch(self)
        except KeyError:
            pass
"""

That is, OneShot(hook, ...) tries to work exactly the same way as your beforeCommitHook(hook, ...). The difference is that beforeCompletion() fires whether the transaction commits or aborts. Actually it can check txn.status to decide.
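A toy model makes the contrast concrete: a one-shot before-commit hook belongs to a single transaction and is discarded with it, which is exactly what a registered synchronizer does not do. The sketch below is illustrative only; the class and method names are invented for the illustration and are not the real transaction package API:

```python
class ToyTransaction:
    """Minimal stand-in for a transaction that owns its own one-shot hooks."""

    def __init__(self):
        self._before_commit = []  # (hook, args, kws) tuples, thrown away with the txn

    def add_before_commit_hook(self, hook, *args, **kws):
        self._before_commit.append((hook, args, kws))

    def commit(self):
        # Hooks fire only on commit; they die with this transaction,
        # so nothing needs to be unregistered for later transactions.
        for hook, args, kws in self._before_commit:
            hook(*args, **kws)
        self._before_commit = []

    def abort(self):
        # On abort the hooks are simply dropped, never called -- the behavior
        # the OneShot synchronizer above cannot provide, since its
        # beforeCompletion() fires on abort too.
        self._before_commit = []


calls = []
txn = ToyTransaction()
txn.add_before_commit_hook(calls.append, "committed")
txn.commit()

txn2 = ToyTransaction()  # a fresh transaction starts with no hooks
txn2.commit()
print(calls)  # ['committed']
```

The key property is that no unregistration bookkeeping is needed at all: hook lifetime equals transaction lifetime.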
https://www.mail-archive.com/zodb-dev@zope.org/msg00031.html
CC-MAIN-2018-22
refinedweb
192
62.64
GSA SER sends some queries to Google; that's the reason we sometimes face a captcha entry for Google search.

- Unable to filter websites by metrics like Page Rank and OBL (outbound links) before building links from them.
- May not build links from your specific imported link list; it displays the error message "No search engine matches".
- Proxies, captcha services, and indexing services are expensive, but these are common with every such software.
- We can't filter by Domain Authority or by the EDU & gov category.
- Google can penalize with the Penguin algorithm.

Pros:
- Integrates with captcha services.
- Integrates with ping & premium indexer services.
- You can search & filter URLs based on PA and Page Rank, and export them.
- Displays whether a URL is indexable or not, do-follow/no-follow, Page Rank, and the inbound and outbound links of the page.
- Automatically registers & verifies accounts for you with a POP3 email (Live mail is preferred over Gmail).
- We cannot build links based on Page Rank.
- You can schedule & pause GSA SER after some seconds (like 8400) or some number of links (like 50).
- We can check the submitted & GSA-verified lists.
- Import your competitor's backlinks and build them as yours. If you have any list you can import it, but it includes other links too by searching relevant websites; you may deselect categories to search.
- Able to search related websites using keywords and build comments from those.
- Unlimited projects: each URL/website counts as 1 project.

Other services you must include:
- Proxies
- Indexing services
- Decaptcha services
- A PC / VPS server to host the software virtually.

GSA SER Tutorials & Tips

Options: Filter rules:
- You can filter the list from PR1 to PR9 to post your link.
- Outbound links too.
- Types of links to create.

Settings: Data: check/uncheck "use verified links from other projects" too.
Project >> Modify global project: a global project applies to all URLs/projects in GSA.

Search engine selection: a country-targeted search engine performs well for finding a related niche, but it gets blocked quickly because all queries are sent to one search engine; there are 200+ search engines based on countries, plus global ones like ask.com. If you want to build links from more related niches, then select "at least 1 keyword included in the referring page".

How to improve GSA's work? Think about how many links per minute (LPM) GSA can build. To improve this:
- Submit a verified link list.
- Choose the best search engines to harvest links.
- Use private & fast proxies, plus decaptcha and indexer services.
- Set a high timeout.

Example: a GSA LPM of 1500 links per minute.

Tips: if you build links fast, you have to keep up the same velocity, otherwise you may get caught by a Google update. Make sure to filter by PA and OBL (outbound links); this is most important. Next tip: neglect Google; build links in 3 tiers.

GSA video tutorials 2016. Build links from:
- Article directories
- Document sharing
- Wiki sites
- Web 2.0
- Forum profiles
- Social bookmarking
- General blogs
- Blog comments
- URL shorteners
- Trackbacks
- Video promotion for videos (adult too)
- URL redirect referrer
- Directory submission
- RSS directories

Increase LPM in GSA:
- Skip the Bing and MSN search engines; no Google penalties apply to other search engines' results.
- Analyze successful submissions by engines/platforms: Options >> Advanced >> Tools >> Stats >> Verified/successful submissions, and then deselect low-success-rate engines.
- Use the global list for every project, submitted & verified only. Remove duplicate domains from the global list.
- Save submitted & verified links as a global list.
- Semi-dedicated proxies give more speed.
- The number of keywords matters: with fewer keywords, SER uses only footprints, like Scrapebox. Also make sure to check the "use keywords to find target URLs" project option.
- A higher number of projects means higher LPM.
- More threads increase GSA performance.
- A captcha service; GSA & others using a self-created verified list.
Article directories, wiki, RSS, blog comments, Drupal 7, Rocketer.
- Use keywords as anchors instead of a single anchor.
- Analyze competitor backlinks and post to them.
- Import other URLs, identify them, and save as a new file.
- Selecting categories: ask user / skip / select random.
- Deselect all engines to build links only from target URLs.
- Import target URLs into a global project (all verified / submitted / identified / failed links).
- Click on Projects >> Stats >> Submitted links today >> Export to CSV.
- Select search engines by country.
- Scrape URLs like Scrapebox: add custom footprints, add URLs online by crawling a website, or import from file.

Status colors:
- Green = submitting
- Blue = verification
- Orange = searching for new targets
- Red = something is wrong (not used for now)

You can import target URLs from the global site list at once for all projects; then try the "use site list" type and set it to:
- identified: 1
- submitted: 2
- verified: 4
- failed: 8
- skip adding: 16

You can add the values to use more than one (e.g. submitted + verified = 2 + 4 = 6).

Scrape proxies: Options >> Submissions >> Use proxies; set the query time to search engines; disable banned proxies.

Common messages:
- "No engine matches": GSA was unable to identify the platform of the website.
- "Already parsed": already submitted; increase the HTML timeout to 30-120.
- "Not from all": nothing found on the target URL.

Adding footprints to GSA & scraping URLs: Options >> Advanced >> Tools >> Search online for footprints >> Add predefined footprints.

Make sure your Target URL Cache is cleared – Highlight all of your projects – Right-click the selection, then select Modify Project, and click Delete Target URL Cache.

Linking from only EDU & gov domains: open your project, go to Options and scroll down until you find "Skip sites with the following words in the URL/Domain". Check that option, then look to the right and you'll see a button called "Create Domain Filter". Click that and choose "Must Have Filter" – then choose .gov, .edu from the list of domain extensions. – See more at: GSA Search Engine Ranker Best Practices Guide – Set Up Your GSA SER today!
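The site-list type values listed above (identified: 1, submitted: 2, verified: 4, failed: 8, skip adding: 16) behave like bit flags; a short sketch, purely an illustration and not part of GSA itself, shows how the documented values combine and how a combined value can be decoded:

```python
# Site-list type values as documented for GSA SER.
IDENTIFIED, SUBMITTED, VERIFIED, FAILED, SKIP_ADDING = 1, 2, 4, 8, 16

def combine(*flags):
    """OR the flags together, e.g. submitted + verified = 2 + 4 = 6."""
    value = 0
    for flag in flags:
        value |= flag
    return value

def has_flag(value, flag):
    """Check whether a combined value includes a given flag."""
    return value & flag == flag

combo = combine(SUBMITTED, VERIFIED)
print(combo)                      # 6
print(has_flag(combo, VERIFIED))  # True
print(has_flag(combo, FAILED))    # False
```

Because each value is a distinct power of two, any sum of them decodes back unambiguously into its component flags.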
- Harvest competitor backlinks.
- Import existing backlinks for a Tier 2 project.

GSA SER Comparison

Scrapebox: price $97 instead of $197. Same features as GSA, but it scrapes fast and is very popular in the market. You can scrape links by footprints, post comments, do mass link building, harvest with the search engine harvester, check links, and harvest & use free proxies. It is a manual blog poster like Comment Blaster, and removes duplicates like GSA.

XENUKE X SCR: builds tier 1 links very fast with a verified list, but is unable to scrape links like Scrapebox and GSA. Higher price: a 14-day free trial is available, but the monthly subscription of $147 is the worst thing.

Magic Submitter: best for web 2.0; 1/3 the price of Xenuke.

Ultimate Demon: posts to multiple platforms, but you have to import the list.

No Hands SEO: only automates trackbacks and web 2.0, like GSA.

Xrumer: 14-day free trial available, very popular. Automated proxies and decaptcha, but it is difficult to understand; same features as GSA SER.

GSA Cleanup and Maintenance

It's almost 6 months completed with GSA SER and Scrapebox. (Scrapebox is not much used because of the manual process.) But I run SER daily to build 1000 to 1500 links for 29 projects/websites, with no improvement in ranking. Also, I am not smart enough with the tool, and lazy.

Clearing the site list (global list): Options >> Advanced tab >> Site list >> Tools >> Remove Duplicates; then Tools >> Cleanup / remove non-working URLs.
http://theonlineking.com/gsa-search-engine-ranker-review/
CC-MAIN-2017-47
refinedweb
1,142
63.39
In this example we generate a map of brain regions that are coactivated with a seed ROI--i.e., regions in which studies that report activity in the seed regions are also likely to report activity. We define the ROI using two different approaches: (a) by feeding in an image file containing the ROI, and (b) by passing x/y/z coordinates and asking Neurosynth to grow a sphere. To begin, let's import the modules we need:

from neurosynth.base.dataset import Dataset
from neurosynth.analysis import network

Now we load a Dataset object from a file. We're assuming here that we've previously created a Dataset and saved it; if you haven't done that, take a look at create_a_new_dataset_and_load_features_example.py in this folder.

dataset = Dataset.load('dataset.pkl')

Now we're ready to define our seed ROI, which we can do in two different ways. First, let's define a seed using an existing image file. The file can be any image file (most commonly, in Nifti format) that standard neuroimaging packages can read (in our case, we're using NiBabel behind the scenes). Neurosynth will include all voxels with value > 0 in the mask, so if you have multiple ROIs with different labels (i.e., voxel values) in the image, make sure to isolate the one you want in a separate image before running this analysis.

roi = 'my_roi_mask.nii.gz'

And now that we have an ROI, we're ready to run a coactivation analysis:

network.coactivation(dataset, roi, threshold=0.1, outroot='coactivation_from_image')

And we're done! It's that simple. The network.coactivation() function does everything for you in one shot, and spits out a bunch of images reflecting various kinds of information related to our coactivation analysis. Actually, network.coactivation() is really just a wrapper around the standard meta-analysis processing stream Neurosynth implements. In practice, a coactivation analysis is just a meta-analysis where we're comparing all the studies that activate within an ROI to all the studies that don't.
So if you've used Neurosynth's meta-analysis functionality before (see some of the other examples), you'll get all of the same outputs. There are two arguments worth noting here: the threshold and the outroot prefix used for the output images.

Alternatively, instead of using an image to define our seed, we could also just pass coordinates explicitly and ask Neurosynth to grow a sphere around them. This is even simpler:

network.coactivation(dataset, [[0, 20, 28]], threshold=0.1, outroot='coactivation_from_coords', r=10)

The only difference here is we're passing x/y/z coordinates in to define the seed ROI (in this case, a voxel in dorsal ACC) rather than an image. We also need to pass a radius parameter (r) telling Neurosynth how big we want the spheres around each coordinate to be. In this case, we ask for 10 mm spheres. Note that we're not limited to one set of coordinates; we could have just as easily passed [[0, 20, 28], [-40, 20, 28]], which would tell Neurosynth to identify regions that coactivate with some combination of dorsal ACC and dorsolateral PFC. A seed doesn't have to be a single ROI; it could be any set of regions you like (and the same is true for ROIs defined using an image: any non-zero voxels will be included in the mask, so you can test for coactivation with entire networks if you want).
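To make "growing a sphere" concrete, here is a rough sketch of building a spherical mask around a coordinate with plain NumPy. This is only an illustration of the idea, not Neurosynth's internal implementation, and it works in voxel units on a toy grid rather than in millimeter brain space:

```python
import numpy as np

def sphere_mask(shape, center, r):
    """Boolean mask of all voxels within r units of center."""
    grids = np.indices(shape)  # one coordinate grid per axis
    dist2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return dist2 <= r ** 2

mask = sphere_mask((20, 20, 20), center=(10, 10, 10), r=3)
print(mask.sum())  # number of voxels inside the sphere
```

Passing several centers and OR-ing their masks together gives the multi-seed behavior described above.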
https://nbviewer.ipython.org/github/neurosynth/neurosynth/blob/0.3.5/examples/identify_regions_coactivated_with_ROI.ipynb
CC-MAIN-2022-33
refinedweb
574
53.21
How to add some graphic label to the glyph cell in font overview - RafaŁ Buchner last edited by gferreira

Hi Guys, I would like to add a system of marking the changes to my script. For different reasons, it cannot affect color marking (glyph.colorMark). The obvious solution for me would be drawing some graphic label on top of the glyph cells in the font overview. Is there any way to achieve that? Thanks in advance for your help!

there are some glyph cell view notifications:

glyphCollectionDraw is called and provides the collection view. You can draw on top of all the glyph cells.

glyphCellDraw is called for each glyph cell; you can draw additional data inside each cell, this is in glyph coordinates space. The notification provides the glyph, the glyphCell view and the rect of the cell.

hope this helps!!

I'm not sure if this way of dealing with the problem is proper: let's say that I have a list of glyph names that were somehow changed by the "Action". I'm labelling the glyphs from the list by:
- checking if the event's glyph is in that list
- refreshing the glyphs in that list with glyph.changed()

from mojo.events import addObserver, removeObserver
from mojo.drawingTools import *
import vanilla

class Test:
    def __init__(self):
        self.w = vanilla.FloatingWindow((200, 50), "TEST")
        self.w.bind("close", self.closeCB)
        self.w.open()
        self.w.button = vanilla.SquareButton((0, 0, -0, -0), "Action", callback=self.actionCB)
        addObserver(self, "drawCell", "glyphCellDraw")
        self.changedGlyphs = []
        self.f = CurrentFont()

    def drawCell(self, info):
        glyphName = info["glyph"].name
        if glyphName in self.changedGlyphs:
            fill(1, 0, 0)
            rect(0, 0, 10, 10)

    def actionCB(self, sender):
        self.changedGlyphs = ["a", "b", "c"]
        # making sure that cells will be labeled:
        # (are cells only labeled after the glyph.changed()?)
        for glyphName in self.changedGlyphs:
            self.f[glyphName].changed()

    def closeCB(self, sender):
        removeObserver(self, "glyphCellDraw")
        # cleaning the cells
        for glyphName in self.changedGlyphs:
            self.f[glyphName].changed()

Test()

Is it ok? Or is there a better way to achieve that?

@RafaŁ-Buchner makes sense to me. nice example for the docs :)

@gferreira Thanks, feel free to use it

I would store your data in the glyph.lib and draw from the lib data.

glyph.lib["com.rafalbuchner.toolName"] = "changedToSomething"

def drawCell(self, info):
    glyph = info["glyph"]
    data = glyph.lib.get("com.rafalbuchner.toolName")
    if data:
        fill(1, 0, 0)
        rect(0, 0, 10, 10)

the big advantage is the glyph gets proper updates when you set or change the data in the glyph.lib. good luck!!
https://forum.robofont.com/topic/578/how-to-add-some-graphic-label-to-the-glyph-cell-in-font-overview
CC-MAIN-2020-40
refinedweb
426
59.6
Authors: Eszter Schoell, Teresa Venezia, Dani Ismail, Joseph Russo

Team: Eszter Schoell, Teresa Venezia, Dani Ismail, Alejandra Jaimes and Joseph Russo are Data Science Fellows of the Data Science Bootcamp #2 (B002) – Data Science, Data Mining and Machine Learning – from June 1st to August 24th 2015. Teachers: Andrew, Bryan, Jason, Sam and Vivian. The post is based on the Kaggle Competition team project submitted on behalf of Eszter, Teresa, Dani, Joseph, and Alejandra.

Machine Learning with Brain-Wave Patterns

An EEG (electroencephalogram) is a non-invasive method that displays electrical activity in the brain. The challenge of the Grasp-and-Lift EEG Detection Kaggle project was to build a model to “identify when a hand is grasping, lifting, and replacing an object using EEG data that was taken from healthy subjects as they performed these activities.” Our team was attracted to the broader goal of the competition's sponsor, WAY Consortium, which was to better understand the relationship between EEG signals and hand movements in order to develop a “BCI [brain computer interface] device that would give patients with neurological disabilities the ability to move through the world with greater autonomy.”

Exploring the EEG Data

The data was collected from a study of 12 right-handed participants between the ages of 19 and 35. An EEG cap with 32 electrodes was placed on each subject’s head, and the signals were collected while the subject performed grasp-and-lift tasks in a series. During these trials, the object’s weight, surface friction, or both were changed. The subject’s task in each trial was to perform these sequential steps on an object: (1) reach for it, (2) grasp with the thumb and index finger, (3) lift, (4) hold for a couple of seconds, (5) place back on the support surface, (6) and release it, before returning the hand to a designated rest position.
The objective of this Kaggle competition was to detect the following 6 events that occurred during the grasp-and-lift tasks from the EEG data: (1) Hand Start, (2) First Digit Touch, (3) Both Start Load Phase, (4) Lift Off, (5) Replace, and (6) Both Released. For each subject, EEG data was recorded for 10 series of trials and approximately 30 trials within each series. Each observation was given a unique ID comprised of the subject, series, and frame. Each frame was 0.002 seconds (2ms). The training set contained the first 8 series (1-8) for each subject in data files and events files, respectively, totaling 17,985,754 frames or observations. The test set contained the last two series (9-10) and totaled 3,144,171 frames. To illustrate, Figure 1 below plots one channel (electrode Fp1) for one subject and one trial of the grasp-and-lift task:

Figure 1. Plotting Channel Fp1: Subject 1, Series 1, Frames 0 – 5000

The 6 events were provided for the training set as 6 columns with labels of either zero or one, depending on whether the corresponding event occurred within ±150ms (±75 frames). The events for the test set were not provided and had to be predicted. For this challenge, a perfect submission would predict a probability of one for the entire event window.

Preprocessing the Data

Given that deciphering signals to characterize brain activity requires expertise in signal processing, we searched the Kaggle forum to gain a better understanding of preprocessing and analyzing this type of data. Most helpful were the Python scripts authored by Alexandre Barachant, which we adjusted for our purposes to extract the important features for the best classification. First, we normalized the data per series to remove series-related effects. Next, because EEG signals are “noisy”, we needed to consider the best frequency band and channel selection (spatial filter) for classification.
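The per-series normalization step described here can be sketched as a z-score computed independently within each series, channel by channel. This is a simplified illustration (the actual project used Barachant's preprocessing scripts), assuming data shaped (frames, channels):

```python
import numpy as np

def normalize_per_series(series_list):
    """Z-score each series independently, channel by channel.

    series_list: arrays shaped (n_frames, n_channels). Normalizing
    within each series removes series-related offset and scale effects,
    since every recording session can have a different baseline.
    """
    normalized = []
    for data in series_list:
        mu = data.mean(axis=0)
        sigma = data.std(axis=0)
        sigma[sigma == 0] = 1.0  # guard against flat channels
        normalized.append((data - mu) / sigma)
    return normalized

rng = np.random.default_rng(0)
raw = [rng.normal(loc=50.0, scale=7.0, size=(1000, 32)),  # series 1
       rng.normal(loc=-3.0, scale=2.0, size=(1000, 32))]  # series 2
norm = normalize_per_series(raw)
print(norm[0].mean(), norm[0].std())  # approximately 0 and 1
```

After this step, both series live on the same scale even though their raw offsets (50 vs. -3) were very different.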
With respect to bandpass filtering, since our data involved hand movements, the [7, 30Hz] bandpass Butterworth filter was used. In order to do this, we installed the mne package used for EEG processing in Python. Figure 2 below illustrates pre- and post-filtered data for one channel.

Figure 2. Pre and Post Filtering [one channel before and after Butterworth filter]

Final Model Selection: Deep Learning with Neural Nets

Deep Learning

After experimenting with other models (see below: Earlier Models), we chose 'deep learning' as our final model, as it had the best AUC (0.91 on the test set). The initial incentive for choosing deep learning is simply because we were analyzing neural time-series data, and it would be fun to learn. We are heavily indebted to Tim Hochberg and the Python NNet script he shared on the site.

At the core of deep learning is a hierarchical framework of linked (stacked) neural networks. To be “deep”, there has to be more than one of the neural net layers (stages) between the input and output layers. Figure 3 presents what a neural network could be for a subset of our data (subject 1, CSP-preprocessed data, created using R packages neuralnet, caret and e1071). On the far left are the inputs, in this case the 4 CSP features, in the middle are 2 layers with 16 and 8 nodes, and on the far right the 6 events as outputs. Please compare this to the illustration (Figure 4) made by Michael Nielsen of a deep learning model with convolution and pooling layers.

Figure 3. Example neural net with 4 inputs, 6 outputs and 2 hidden layers.

Figure 4. Visualization of Deep Learning from Michael Nielsen.

One core concept of deep learning is to build complex functions from simple ones, and that each layer provides a non-linear function, or “feature transformation”, that enables complex feature generation. This modular hierarchy of combining multiple levels of functions leads to a hierarchy of feature abstraction at each layer.
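The forward pass of the small network in Figure 3 (4 CSP features in, hidden layers of 16 and 8 units, 6 event outputs) amounts to a chain of affine maps and non-linearities. The weights below are random placeholders just to show the shapes, not a trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
sizes = [4, 16, 8, 6]  # CSP features -> hidden -> hidden -> event outputs
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Each layer applies a non-linear 'feature transformation' to the last."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

batch = rng.normal(size=(5, 4))  # 5 frames of 4 CSP features each
scores = forward(batch)
print(scores.shape)  # (5, 6): one score per event, per frame
```

Training (back-propagation, loss choice, initialization) is the hard part discussed next; the forward structure itself is this simple.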
A striking example is provided by deep networks from the ImageNet model [Ref 1, Figure 2]. In this case, when observing the output of the sequential layers, the feature representation is indeed hierarchical: pixel -> edge -> “texton” or texture unit -> motif -> part -> object [Ref 2]. Another core concept is that the network should be able to build its own features, or representation of the data, relying on the incoming data itself, and not be hand-crafted by individuals for each source of data. For example, scale-invariant feature transform (SIFT) features are hand-crafted low-level features, versus having the network learn similar features, such as edge-detectors. This idea stems from the ability of different parts of the animal brain to be able to learn from different stimuli, e.g. the auditory cortex learns to see. This suggests that there may be just one learning algorithm used by the brain [Ref 3].

Deep neural networks have become the benchmark for performance, and are commercially used by numerous companies including Microsoft, Google, and Facebook [Ref 2]. These deep networks scale to large data sets with millions of examples that require optimization over billions of parameters. However, training a deep neural network to achieve performance is not trivial, as there are many variables to consider [Ref 3]:

- Architecture of the network
- Loss function (regression: squared error; classification: cross-entropy)
- Optimization of non-convex functions (unlike convex problems, non-convex functions have no guarantee of global minimization)
- Initialization of the network
- Supervised fine-tuning (back-propagation)

Our first model trained on the raw data (all 32 channels), down-sampled by 10 and run on each individual person for series 1 through 8. The performance on the test data (series 9 and 10) was an AUC of 0.89. We then thought to use the pre-processed data (normalized, Butterworth filtered, CSP features), down-sampled by 10, and the performance was an AUC of 0.53.
We therefore went back to the original data and trained on all data for each person (deep learning does better with more data). The AUC increased to 0.91. However, running this took about 6 hours on a MacBook Pro with 16GB memory. We now wanted to run the deep learning model in PySpark. Since SparkMllib does not have a deep learning algorithm, we hypothesized that the easiest thing to do would be to load all into an RDD, group by person (there are 12) and run the Python deep learning script on each subject in parallel. In other words, an 'embarrassingly parallel' problem. As a first step, we set a Spark context that would use all 4 available cores and 10GB of memory:

conf = SparkConf().setAppName("Simple App").setMaster("local[4]").set('spark.executor.memory', '10g')

We then pulled in a csv to create an RDD to be grouped by subject:

# load file as RDD
data = sc.textFile('filename.csv')

# Remove header (column names).
data = data.filter(lambda l: "subject" not in l)

# Write function and run to create key/value pairs,
# ignore first column which is index strings
def parsePoint(line):
    elements = line.split(",")
    key = elements[1]
    values = elements[2:]
    pairs = (key, values)
    return pairs

modeldata = data.map(parsePoint)

# Group by subject
groupdata = modeldata.groupByKey()

# Test that the information was as expected
framedata = groupdata.map(lambda x: pd.DataFrame.from_records(x[1]))
framedata.take(1)

TypeError: 'ResultIterable' object does not support indexing

As we couldn't check whether our grouping had gone as expected or not, we tried a different approach.
We grouped outside of PySpark and then pulled it in as an RDD directly with key/value pairs:

# Pull in as pandas data frame
pd_df = pd.read_csv('filename.csv')

# Extract unique keys from subject column
keys = pd_df.ix[:, 0].unique()

# Create list of lists to represent key/value pairs
data_ls = []
for i in keys:
    bools = pd_df.ix[:, 0] == i
    data_ls.append([i, [pd_df.ix[bools, 1:]]])

This creates the new object data_ls, which is a list of lists, with each sublist holding a subject and that subject's data. In other words, a key/value mapping as a list, which could then be pulled into Spark as an RDD having 12 partitions:

data_rdd = sc.parallelize(data_ls, 12)

Now, the next step will be to push the partitioned RDD through the deep learning. This requires changing the deep learning Python script to read input as a key/value data frame rather than pulling in csv's.

Earlier Models and Challenges with Spark Machine Learning Library (Mllib)

Logistic Regression

We chose Logistic Regression as it is a common choice for classification problems and would give us a benchmark for comparison with later models. As explained above, we used the filtered CSP data to build and test a model in both the Python scikit-learn package and then in the Spark machine learning library (SparkMLlib). Using scikit-learn, we trained a logistic regression model for each subject on a subset of the training data and then tested the model on the test data, checking accuracy by submitting to Kaggle. After some experimentation applying various models to the test data, it became clear that there were some important factors beyond parameter tuning that affected the model's performance. For example, since this was time series data, it was very important to sample the data sequentially rather than take a completely random sample. Additionally, scores were lower when all of the subjects' data was aggregated to train the model. It was necessary to train separate models on each subject in order to get acceptable training errors.
Ultimately, the scikit-learn model performed reasonably well with an AUC of 0.7. Notably, the AUC of a file submitted to Kaggle with every prediction set to zero was 0.5. We also tried to create a model using Spark with its integrated machine learning library (Mllib). Unfortunately, there are some limitations with the current Spark release that prevented us from submitting results to Kaggle. While it was possible to predict classification labels, it would not be until the next version of Spark that classification probabilities can be predicted, which is the form that Kaggle required in order to calculate AUC and evaluate results. However, we decided to go ahead and build a logistic model in Spark anyway for our own experience. We are currently able to train a model using Spark but there is still some debugging to be done since the model always predicts a zero when applied to a set of test data. One possible reason this is happening is that Spark returns an error when more than 10,000 rows of training data are used, which is less than 1% of the training data for each subject. This is an issue yet to be resolved before we can successfully create a Spark model to compare with scikit-learn.

Support Vector Machine

As a comparison to logistic regression, we decided to also run a support vector machine (SVM) model on the CSP features. Logistic regression is usually a better choice when the data is very noisy. In this case, the data were not particularly noisy and the separating hyperplanes may not be linear. As a first step, overlapping classes were removed. The entire training sample data set over all subjects and series has 17,985,754 data points (time frames). 478,939 of those were not uniquely classified (the data point fell within 2 events); this 2.6% was removed from the entire training sample. Using cross-validation with 3 folds, several parameters of the SVM model were tested using the grid search function in the scikit-learn package in Python.
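For reference, the RBF kernel that enters the grid search is K(x, y) = exp(-gamma * ||x - y||^2). The small sketch below computes a Gram matrix directly from that formula (an illustration, not scikit-learn's internals):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    # Squared distances via the expansion ||x||^2 + ||y||^2 - 2 x.y
    sq = (X ** 2).sum(axis=1)[:, None] + (Y ** 2).sum(axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X, gamma=0.001)
print(K)  # diagonal is exactly 1; off-diagonal shrinks with distance and gamma
```

With gamma as small as 0.001, distant points still look similar (K stays near 1), which is one reason a grid search varies gamma across orders of magnitude.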
Quadratic and cubic separating hyperplanes were tested, as well as the radial basis function (RBF) kernel. C, which shrinks the training error toward zero as it increases, was set at 1 and 10. Gamma was tested at 0.001 and 0.0001 for the RBF kernel. In order to run the model on a laptop (RAM = 16GB, SSD = 1TB), a subset of the first subject was used (280,000 out of data points). In order to run this faster, we decided to use Spark Mllib. However, the package currently only supports binary classification.

Conclusion: Strong Teamwork

When we embarked on this final project our hope was to use the tools that we learned during the Data Science Bootcamp from beginning to end. We are thrilled to report that we accomplished that goal! A critical ingredient to our success was teamwork. At the outset, we structured our project for optimal transparency. We agreed upon strategies for analyzing the data, visualizations, preprocessing, and machine learning. We developed a project workflow to manage discrete tasks and assign ownership. We created team GitHub, Dropbox, and Slack accounts to share code, documents, images, and most importantly, updates. We met daily to share progress reports, problem solving, and changes to workflow. We used both Python and R to accomplish our tasks. We relied on each other to help debug code and research other solutions. In the end, we were proud of the way we worked creatively, independently and as part of a team, and how we inspired each other to deliver. Go team!

References:
1. Zeiler, M. and Fergus, R. “Visualizing and Understanding Convolutional Networks”
2. Deep Learning: The Theoretician's Nightmare or Paradise? (LeCun, NYU, August 2012)
3. Bay Area Vision Meeting: Unsupervised Feature Learning and Deep Learning (Andy Ng)
https://nycdatascience.com/blog/student-works/team-oriented-grasp-and-lift-eeg-kaggle-competition/
CC-MAIN-2020-45
refinedweb
2,524
51.48
The String class contains many constructors that can be used to create a String object. The default constructor creates a String object with an empty string as its content. For example, the following statement creates an empty String object and assigns its reference to the emptyStr variable:

String emptyStr = new String();

The String class contains a constructor which takes another String object as an argument.

String str1 = new String();
String str2 = new String(str1); // Passing a String as an argument

Now str2 represents the same sequence of characters as str1. At this point, both str1 and str2 represent an empty string. We can also pass a string literal to this constructor.

String str3 = new String("");
String str4 = new String("Have fun!");

After these two statements are executed, str3 will refer to a String object, which has an empty string as its content, and str4 will refer to a String object, which has "Have fun!" as its content. The String class contains a length() method that returns the number of characters in the String object. The return type of the method length() is int. The length of an empty string is zero.

public class Main {
    public static void main(String[] args) {
        String str1 = new String();
        String str2 = new String("Hello");

        // Get the length of str1 and str2
        int len1 = str1.length();
        int len2 = str2.length();

        // Display the length of str1 and str2
        System.out.println("Length of \"" + str1 + "\" = " + len1);
        System.out.println("Length of \"" + str2 + "\" = " + len2);
    }
}

The code above generates the following result.
http://www.java2s.com/Tutorials/Java/Java_Data_Type/0190__Java_String_Create_Length.htm
Updated: September 2008

This section contains overviews, examples, and background information that will help you understand and use Visual Basic and Language-Integrated Query (LINQ).

- Describes the three stages in writing a basic LINQ query: obtaining the data source, writing the query, and executing the query.
- Describes the most common types of query operations and how they are expressed in Visual Basic.
- Describes the Visual Basic language features that support LINQ, such as anonymous types and type inference.
- Describes how data types are preserved or transformed when queries are written and executed.
- Step-by-step instructions for creating a Visual Basic LINQ project, adding a simple data source, and performing some basic query operations.
- Includes an overview of queries in LINQ and provides links to additional resources.
- Includes a selection of How-to topics for using LINQ with in-memory collections.
- Provides links to topics that explain the LINQ samples.
- Describes how you can use LINQ technology to access SQL databases just as you would access an in-memory collection.
- Provides links to topics that explain the LINQ technologies.
- Describes tools that are available in the Visual Studio environment for designing, coding, and debugging LINQ-enabled applications.
- Describes how Visual Basic supports LINQ to XML.
- Provides a conceptual overview of LINQ to DataSet.
- Explains the LINQ to SQL technology and provides links to topics that help you use LINQ to SQL.
- Describes the .NET Framework version, references, and namespaces that are required to create LINQ projects.
- Includes links to topics that explain how to use LINQ to XML, which provides the in-memory document modification capabilities of the Document Object Model (DOM), and supports LINQ query expressions.
- Provides links to topics that describe how to program with LINQ, including information about the standard query operators, expression trees, and query providers.
- Provides links to topics about using LINQ in C#.
Change History: September 2008 — Added link. Reason: Customer feedback.
http://msdn.microsoft.com/en-us/library/bb397910.aspx
04 July 2008 09:41 [Source: ICIS news]

LONDON (ICIS news)--The benchmark Black Sea urea price has reached a record high $700/tonne (€448/tonne) ahead of expected buying in India and Latin America, traders said late on Thursday.

Prilled urea prices have risen sharply from $661-662/tonne FOB (free on board) Yuzhny last Friday.

Ukrainian traders and distributor Agrofertrans sold prilled urea to traders at $680/tonne FOB for second-half July shipment. It has 15,000 tonnes left to sell for July and will target around $700/tonne FOB. The $700/tonne FOB mark has been exceeded for forward business.
http://www.icis.com/Articles/2008/07/04/9137719/Black-Sea-urea-hits-700tonne-record.html
Controlling Hardware with ioctls

A few years ago, I had a laptop that I used at work and at home. To simplify my network configuration and not have to change it manually depending on where I was, I decided to use DHCP in both places. It was the standard at work, so I implemented a DHCP server at home. This worked well except when I booted the system without it being plugged in to either network. When I did, the laptop spent a lot of time trying to find a DHCP server without success before continuing the rest of the startup process.

I concluded that an ideal solution to this lag time would be for the system to start with the Ethernet interface down and have it come up if, and only if, the cable was connected to a hub, that is, if I had a link light on the Ethernet interface. The best way to do this appeared to be having a shell script call a program whose return code would indicate whether a link had been established on a particular network interface. So began my quest for a method to determine the link status of my 10/100Base-T interface.

Not having done much low-level Linux programming, it took me a bit of time to discover that most of this type of interaction with device drivers usually is done through the ioctl library call (an abbreviation of I/O control), prototyped in sys/ioctl.h:

int ioctl(int, int, ...)

The first argument is a file descriptor. Because all devices in Linux are accessed like files, the file descriptor used usually is one that has been opened with the device to which you are interfacing as the target. In the case of Ethernet interfaces, however, the fd simply is an open socket. Apparently, no need exists to bind this socket to the interface in question.

The second argument is an integer that represents an identification number of the specific request to ioctl. The requests inherently must vary from device to device. You can, for example, set the speed of a serial device but not a printer device.
Of course, a specific set of commands exists for network interfaces. Additional arguments are optional and could vary from the ioctl implementation on one device to the implementation on another. As far as I can tell, a third argument always is present, and I have yet to find more than a third. This third argument usually seems to be a pointer to a structure. This allows the passing of an arbitrary amount of data in both directions, the data being defined by the structure to which the pointer refers, simply by passing the pointer.

A basic example of how ioctl works is shown in the following simple program that checks the status of one signal on a serial port:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <termios.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/ioctl.h>

int main(void)
{
    int fd, status;

    fd = open("/dev/ttyS0", O_RDONLY);
    if (ioctl(fd, TIOCMGET, &status) == -1)
        printf("TIOCMGET failed: %s\n", strerror(errno));
    else {
        if (status & TIOCM_DTR)
            puts("TIOCM_DTR is set");
        else
            puts("TIOCM_DTR is not set");
    }
    close(fd);
    return 0;
}

This program opens a tty (serial port) and then calls ioctl with the fd of the serial port, the command TIOCMGET (listed as get the status of modem bits) and a pointer to an integer into which the result is returned. The ioctl result then is checked to see whether an error was made in processing the request. If there are no problems, we check the values returned by anding them with TIOCM_DTR. This step yields true or false, nonzero or zero, respectively.

Using ioctl for Ethernet drivers is a similar process. The third parameter to ioctl calls for socket ioctl calls (where the fd is a socket handle) often is a pointer to an ifreq (interface request) structure. The type declaration for ifreq structures can be found in net/if.h.

Unfortunately, documentation for many of the ioctl interfaces is difficult to find, and there are at least three different APIs for accessing network interfaces. I originally wrote this program using the MII (media independent interface) method.
While writing this article, with the most recent kernel installed on my machine, however, I discovered I had to add the ETHTOOL method. After adding ETHTOOL, I then modified the program and wrote each interface method as a subroutine. The modified program tries one method, and if it fails, attempts the other. The third method predates the MII API, and I have not yet run into a machine on which I have needed it, so the code is not included.

The information on using the MII interface was acquired mainly by examining the mii-diag program () written by Donald Becker, which I found on Scyld Computing Corporation's Web site. This site also contains an excellent page () explaining the details of the MII status words that the ioctl functions may return. Here, however, I focus on the ETHTOOL interface because it is the newer method. The program and both interfaces are available from the Linux Journal FTP site at.

Information on using the ETHTOOL API also was acquired by scouring various pieces of source code, not the least of which was the source code for the network interface drivers themselves—particularly eepro100.c. Also helpful was an e-mail written by Tim Hockin that I found while Googling.

In writing my program, I set the default interface to eth0 unless a parameter was passed to the program. The interface ID is stored in ifname. Because the ioctl commands I use are specific to network interfaces, using some other device likely will cause a "cannot determine status" to be returned. Before calling ioctl we need a file handle, so we first must open a socket:

int skfd;

if (( skfd = socket( AF_INET, SOCK_DGRAM, 0 ) ) < 0 ) {
    printf("socket error\n");
    exit(-1);
}

In the standard try-to-check-for-all-errors C coding style, I placed this inside an if statement that simply prints an error and terminates the program, returning a -1 if the socket does not open properly.
For my purposes, I would rather report errors in determining status as a lack of a link rather than as a presence of one, so a found link is reported as 0 and not found is reported as 1.

The new ETHTOOL API for interfacing to the driver has made determining the status of the link much easier than did the previous method. ioctl was implemented for ETHTOOL interfaces such that there is now only ONE ioctl command, SIOCETHTOOL (which specifies that the call is an ETHTOOL command), and the data block passed then contains the specific subcommand for the ETHTOOL interface. The standard ioctl data structure (type ifreq) requires two additional items: the name of the interface to which the command should be applied and an address of a structure (type ethtool_value) in which to store the specific command as well as the returned information. The structures and most other information (including the commands available) are located in the ethtool.h header file.

The command that I needed was ETHTOOL_GLINK, documented as "get link status", which I stored in edata.cmd:

edata.cmd = ETHTOOL_GLINK;

The name of the interface and the address of the edata structure both need to be placed into the ifreq structure:

strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name)-1);
ifr.ifr_data = (char *) &edata;

At this point, all that remains is making the ioctl call, checking the return value (to be sure the command type was allowed) and checking the data in the structure pointed to by the returned pointer to see if the link is up or down:

if (ioctl(skfd, SIOCETHTOOL, &ifr) == -1) {
    printf("ETHTOOL_GLINK failed: %s\n", strerror(errno));
    return 2;
}
return (edata.data ? 0 : 1);

In this case, my code returns a 0 for link up, a 1 for link down and a 2 for undetermined or failure. This code allows me to call this function from my rc.local, bring the interface up and either call dhcpcd or pump to get an IP address only if the system is plugged in to a functioning hub/switch.
Here is the relevant section of rc.local:

/root/sense_link/sense_link | logger
if /root/sense_link/sense_link > /dev/null; then
    logger "Sensed link - dhcping eth0"
    /sbin/dhcpcd eth0
else
    logger "No link sense -- downing eth0"
    /sbin/ifconfig eth0 down
fi

First, the output of sense_link is sent to the system log. Then, if no link was sensed on eth0, or if it could not be determined, a message is written to the log and eth0 is taken down. If a link was sensed, dhcpcd is executed on eth0.

Once this is implemented, my rc.local file now executes quite quickly when no network cable is plugged in or when a DHCP server is active and found. The only time I still experience significant delays is if I am plugged in to a network where there is no active DHCP server. I haven't yet tried this code with my 802.11b card to see if it can detect a link on it before attempting to contact a DHCP server, because I usually only have the PCMCIA card plugged in when I am in a location that I know has a server. It would be an interesting experiment and a useful extension, however, for an interested party.

Lisa Corsetti presently is a software architect as well as the president of Anteil, Inc., a consulting firm that focuses on custom Web-based applications for various industries and government. Lisa received a BS in Electrical and Computer Engineering from Drexel University.
https://www.linuxjournal.com/article/6908?page=0,1&quicktabs_1=0
Created on 2017-10-02 19:42 by Scott Tucholka, last changed 2017-10-09 09:40 by matrixise. This issue is now closed.

Hello! Your bug report gives very little information for us to help you. Can you give details such as: your environment / setup, your code, expected result and full error message?

I am running Windows 10 Enterprise x64 and use Spyder (Python 3.6). This is my code:

import pandas as pd
import pandas_datareader as dr
dr.get_data_yahoo('AAPL')

I am expecting that the module will import and the get_data_yahoo will return results for 'AAPL'. This is my log:

Python 3.6.2 |Anaconda, Inc.| (default, Sep 19 2017, 08:03:39) [MSC v.1900 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.
IPython 6.1.0 -- An enhanced Interactive Python.

import pandas as pd
import pandas_datareader as dr
dr.get_data_yahoo('AAPL')
Traceback (most recent call last):
  File "<ipython-input-1-43a2f11394e3>", line 2, in <module>
    import pandas_datareader as dr
ModuleNotFoundError: No module named 'pandas_datareader'

Thanks

Hi,

Your bug is related to Pandas and not to Python 3.6. Maybe you need to post your issue to the bug tracker of Pandas. You need to install pandas-datareader:

pip install pandas-datareader

But it is not an issue with Python.

Have a nice day,
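The root cause here is the usual module-name/package-name mismatch: the module is imported as pandas_datareader but installed with pip as pandas-datareader. A small illustrative sketch (the require helper is my own, not part of the thread) that turns the bare ModuleNotFoundError into an actionable message:

```python
import importlib
import importlib.util

def require(module_name, pip_name):
    """Import module_name, or explain which pip package provides it."""
    if importlib.util.find_spec(module_name) is None:
        raise ImportError(
            f"{module_name} is not installed; run 'pip install {pip_name}'"
        )
    return importlib.import_module(module_name)

# In the reporter's terms (only works once the package is installed):
# dr = require("pandas_datareader", "pandas-datareader")
# dr.get_data_yahoo("AAPL")
```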
https://bugs.python.org/issue31666
Let me ask you a question... Who is the most successful investor of all time?

I'm guessing most of you are thinking Warren Buffett. But you're wrong. Buffett's annualized return (as measured by growth in book value) is "only" about 20% since he took over Berkshire Hathaway in the mid-1960s. That's easily ahead of the market average, but it's not even close to being the best.

You've probably never heard of him, but another money manager has handily beaten Buffett. His name is Jim Simons. He's the founder of Renaissance Technologies, and he has the world's best math and science minds on his payroll. Simons' Renaissance, which is not a name most investors are familiar with, is one of the most successful -- if not THE most successful -- investment managers of all time. Simons has averaged a 40% annual return since 1988.

That's a great return... and one I would be happy with. But Simons and Renaissance (along with most brokers and money managers) are missing out on one of the most powerful forces in the market. And no, I am not talking about dividend reinvestment or compounding. Both are powerful forces, for sure. No, I'm talking about the other side of the equation. What most investors miss today are the big winners -- the types of stocks that can post triple-digit returns in just a few short years.

Why are so many people -- including some of the world's brightest investing minds -- missing them? The answer is simple. They aren't looking for them. Sophisticated "quants" like Renaissance, for example, don't worry about the economy, industry dynamics or a stock's fundamentals... They look at pricing, studying not what a company does but how its price moves and how it is correlated to other predictable barometers.
Your stockbroker operates in a similar way. They use a standardized tool to assess your risk tolerance, calculate your "required rate of return" -- what you think you should be able to earn on your money in a year -- and then run something called a "Monte Carlo" simulation to put various hypothetical investments into a basket and calculate the odds of achieving the expected return. When the odds look promising, he or she pulls the trigger and invests your cash for you.

Does this approach work as well as Simons' models? For the most part, yes. But the problem is your broker is never going to shoot for the biggest returns. He's only going to seek to deliver your expected rate of return, which in most cases will be sub-market. But at the end of each year, top-performing stocks routinely post gains of 400%, 500%, even 1,000% and more. Those gains are all driven by the company's narrative -- the story of what it does, who it sells to, the demand for its product and, most important, what's next.

For most of the professional money management world, these big winners are nothing more than outliers -- something that can't be predicted by a statistical model, so they don't even try to find them. That's where I stand apart. As editor of StreetAuthority's Game-Changing Stocks newsletter, I try to zero in on these "Game-Changers" using a nose for news and a careful read of each company's narrative.

Of course, most of these types of stocks are highly speculative and risky ventures. But by following my 80/20 solution for allocating your portfolio, you maximize your chances of beating even the savviest investors. If you don't know about my 80/20 rule, here is the short explanation...

I suggest people invest 80% of their money in safe, reliable assets -- the kind that will allow you to meet your needs and feel confident knowing you can adequately provide for your family. The other 20% is dedicated to the home-run picks... the "Game-Changers."
Let me show you how this strategy works... Let's say you have $25,000 to invest. With $20,000, or 80% of your portfolio, you match the market's performance. Over the past decade the S&P 500 has returned about 7.95% on average per year, turning your $20,000 into $43,000 after just 10 years. Not bad, but look at what happens to that $5,000 if you invest in the Next Big Thing. Let's say that part of your portfolio returns 30% a year (there will be some big winners, but not everything will go up, so 30% is a good return to expect). After 10 years, that $5,000 would turn into about $69,000. And after 10 years, the original $25,000 you started with would turn into $112,000. But if you had simply put all of your money into the safe, reliable investments most brokers recommend, you would only have $53,700.

So now you know the power of the 80/20 rule; how do you find the kinds of stocks that will power that 20% of your money into outsize gains? That's where I come in. Every year I come out with my predictions for the Next Big Thing.

For example, in 2009 we told our readers to expect a big move in nanotechnology. We said, "This is an opportunity of enormous proportions." Our nanotech pick, Starpharma (SPHRY), shot up 293%. We claimed in 2010 that the "best sci-fi speculation of the year" would be a powerful technology called RFID, or radio-frequency identification... and that three stocks would skyrocket because of it. A year later, my recommended picks were up 42%... 89%... and 310%.

I've compiled a new research report for this year called The Hottest Investment Opportunities for 2014. In it, we talk about a company developing a new tool that will quickly replace the keyboard... the company profiting off of "the death of cash," plus nine other shocking predictions... In my latest report, I'll lay out the details of all of my predictions for the coming year -- and the stocks that will profit from these bold calls.
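The compounding arithmetic behind those dollar figures is easy to verify. A quick Python check, with rates and horizons taken from the article and rounding mine:

```python
def grow(principal, annual_rate, years):
    """Compound principal at annual_rate for the given number of years."""
    return principal * (1 + annual_rate) ** years

safe = grow(20_000, 0.0795, 10)       # the 80% slice at the market's ~7.95%
speculative = grow(5_000, 0.30, 10)   # the 20% slice at 30% a year
all_safe = grow(25_000, 0.0795, 10)   # everything at the market rate

print(f"80% slice: ${safe:,.0f}")                # about $43,000
print(f"20% slice: ${speculative:,.0f}")         # about $69,000
print(f"combined:  ${safe + speculative:,.0f}")  # about $112,000
print(f"all safe:  ${all_safe:,.0f}")            # about $53,700
```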
I've already told you about a few of these ideas in StreetAuthority Daily. But to get the full details on all of my ideas, and the stocks that could see triple-digit gains?
http://www.nasdaq.com/article/why-wall-street-geniuses-keep-missing-tripledigit-winners-cm278711
An Introduction to Python For Undergraduate Engineers/Python as a Calculator

From Wikibooks, open books for an open world

Firstly, let's just quickly see how you can do simple arithmetic in Python. The table below lists the basic mathematical operators and the associated code in Python.

For more complex arithmetic, Python contains a special module called math. This module contains an array of different functions (such as square root) and constants (such as pi). To use a module, we must first import it, like so:

import math

We can then use, for example:

math.sqrt(16)  # to find the square root of 16
math.pi        # to get the value of pi

Alternatively you can import modules in the following way:

from math import *

This will import everything from the math module directly, allowing us instead to call the functions as follows:

sqrt(16) to find the square root of 16.
pi to get the value of pi.
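A sketch of the basic operators the text refers to, alongside the math-module calls, assuming Python 3 semantics (under Python 2, 7 / 2 would give 3 instead of 3.5):

```python
import math

print(7 + 3)    # addition: 10
print(7 - 3)    # subtraction: 4
print(7 * 3)    # multiplication: 21
print(7 / 2)    # division: 3.5
print(7 // 2)   # floor division: 3
print(7 % 2)    # remainder: 1
print(2 ** 10)  # exponentiation: 1024

print(math.sqrt(16))      # 4.0
print(round(math.pi, 5))  # 3.14159
```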
http://en.wikibooks.org/wiki/An_Introduction_to_Python_For_Undergraduate_Engineers/Python_as_a_Calculator
Blogs are the savior of independent publishing, and the ability of most to allow commenting creates an intimate collaboration between performer and audience: read the blog's entry and any existing comments, and then add your own thoughts and opinions. What's most annoying, however, is needing to return on a regular basis to see if anyone has added additional comments, whether to the original posting or to your own follow-up.

With the RSS syndication format, you can monitor new blog entries in a standard way with any number of popular aggregators. Unfortunately, unless the site in question has provided its comments in RSS format also, there's not really a standard way for comments to be used, repurposed, or monitored. However, the more you read blogs and the comments themselves, you'll begin to see patterns emerge. Perhaps a comment always starts with "On DATE, PERSON said" or "posted by PERSON on DATE," or even plain old "DATE, PERSON." These answer to your needs: a script that uses regular expressions to check for various types of signatures can adequately tell you when new comments have appeared.

This script depends on being fed a file that lists URLs you'd like to monitor. These should be the URLs of the page that holds comments on the blog entry, often the same as the blog entry's permanent link (or permalink). If you're reading, for instance, and you've just commented on the "The Lazy GM" article, you'll add the following URL into a file named chkcomments.dat:

A typical first run considers all comments new—new to you and your script:

% perl chkcomments.pl
Searching...
* We saw a total of 5 comments (old count: unchecked).
* Woo! There are new comments to read!

You can also show the name, date, and contact information of each individual comment by passing the --verbose command-line option. This example shows the script checking for new comments:

% perl chkcomments.pl --verbose
Searching...
- July 23, 2003 01:53 AM: VMB (mailto:vesab@jippii.fi)
- July 23, 2003 10:55 AM: Iridilate (mailto:)
- July 29, 2003 02:46 PM: The Bebop Cow (mailto:blackcypress@yahoo.com)
... etc ...
* We saw a total of 5 comments (old count: 5).

Since no comments were added between our first and second runs, there's nothing new. But how did the script know how many comments there were in the first place? The answer, as I alluded to previously, is comment signatures. In HTML, every comment on Gamegrene looks like this:

On July 23, 2003 01:53 AM, <a href="mailto:vesab@jippii.fi">VMB</a> said:

In other words, it has a signature of On DATE, <a href="CONTACT">PERSON</a> said or, if you were expressing it as a regular expression, On (.*?), <a href="(.*?)">(.*?)<\/a> said. Keen observers of the script will have noticed this regular expression appear near the top of the code:

my @signatures = (
    { regex  => qr/On (.*?), <a href="(.*?)">(.*?)<\/a> said/,
      assign => '($date,$contact,$name) = ($1,$2,$3)' },

What about the assign line, though? Simply enough, it takes our captured bits of data from the regular expression (the bits that look like (.*?)) and assigns them to more easily understandable variables, like $date, $contact, and $name. The number of times our regular expression matches is the number of comments we've seen on the page. Likewise, the information stored in our variables is the information printed out when we ask for --verbose output.

If you refer back to the code, you'll notice two other signatures that match the comment styles on Dive Into Mark () and the O'Reilly Network () (and possibly other sites that we don't yet know about). Since their signatures already exist, we can add the following URLs to our chkcomments.dat file:

and run our script on a regular basis to check for new comments:

% perl chkcomments.pl
Searching...
* We saw a total of 5 comments (old count: 5).

Searching...
* We saw a total of 11 comments (old count: unchecked).
* Woo! There are new comments to read!

Searching ...
* We saw a total of 1 comments (old count: unchecked).
* Woo! There are new comments to read!

Searching...
* We saw a total of 9 comments (old count: unchecked).
* Woo! There are new comments to read!

The obvious way of improving the script is to add new comment signatures that match up with the sites you're reading. Say we want to monitor new comments on Harvard Weblogs (). The first thing we need is a post with comments, so that we can determine the comment signature. Once we find one, view the HTML source to see something like this:

<div class="date"><a href=""> Dave Winer</a> • 7/18/03; 7:58:33 AM</div>

The comment signature for Harvard Weblogs is equivalent to <a href="CONTACT">PERSON</a> DATE, which can be stated in regular expression form as date"><a href="(.*?)">(.*?)<\/a> • (.*?)<\/div>. Once we have the signature in regular expression form, we just need to assign our matches to the variable names and add the signature to our listings at the top:

    { regex  => qr/date"><a href="(.*)">(.*)<\/a> • (.*)<\/div>/,
      assign => '($contact,$name,$date) = ($1,$2,$3)' },
);

Now, just add the URL we want to monitor to our chkcomments.dat file, and run the script as usual. Here's an output of our first check, with verbosity turned on:

Searching...
- 7/18/03; 1:23:14 AM: James Farmer ()
- 7/18/03; 4:06:10 AM: Phil Wolff ()
- 7/18/03; 7:58:33 AM: Dave Winer ()
- 7/18/03; 6:23:14 PM: Phil Wolff ()
* We saw a total of 4 comments (old count: unchecked).
* Woo! There are new comments to read!

Save this script as chkcomments.pl:

#!/usr/bin/perl -w
use strict;
use Getopt::Long;
use LWP::Simple;

my %opts;
GetOptions(\%opts, 'v|verbose');

# where we find URLs. we'll also use this
# file to remember the number of comments.
my $urls_file = "chkcomments.dat";

# what follows is a list of regular expressions and assignment
# code that will be executed in search of matches, per site.
my @signatures = (
    { regex  => qr/On (.*?), <a href="(.*?)">(.*?)<\/a> said/,
      assign => '($date,$contact,$name) = ($1,$2,$3)' },
    # ... the other per-site signatures appear here ...
);

# open our URL file, and suck it in.
open(URLS_FILE, "<$urls_file") or die $!;
my %urls;
while (<URLS_FILE>) {
    chomp;
    my ($url, $count) = split(/\|%%\|/);
    $urls{$url} = $count || undef;
}
close (URLS_FILE);

# foreach URL in our dat file:
foreach my $url (keys %urls) {
    next unless $url; # no URL, no cookie.
    my $old_count = $urls{$url} || undef;

    # print a little happy message.
    print "\nSearching $url...\n";

    # suck down the data.
    my $data = get($url) or next;

    # now, begin looping through our matchers.
    # for each regular expression and assignment
    # code, we execute it in this namespace in an
    # attempt to find matches in our loaded data.
    my $new_count;
    foreach my $code (@signatures) {

        # with our regular expression loaded,
        # let's see if we get any matches.
        while ($data =~ /$code->{regex}/gism) {

            # since our $code contains two Perl statements
            # (one being the regex, above, and the other
            # being the assignment code), we have to eval
            # it once more so the assignments kick in.
            my ($date, $contact, $name);
            eval $code->{assign};
            next unless ($date && $contact && $name);
            print " - $date: $name ($contact)\n" if $opts{v};
            $new_count++; # increase the count.
        }

        # if we've gotten a comment count, then assume
        # our regex worked properly, spit out a message,
        # and assign our comment count for later storage.
        if ($new_count) {
            print " * We saw a total of $new_count comments".
                  " (old count: ". ($old_count || "unchecked") . ").\n";
            if ($new_count > ($old_count || 0)) {
                # joy of joys!
                print " * Woo! There are new comments to read!\n";
            }
            $urls{$url} = $new_count;
            last; # end the loop.
        }
    }
}
print "\n";

# now that our comment counts are updated,
# write it back out to our datafile.
open(URLS_FILE, ">$urls_file") or die $!;
foreach my $url (keys %urls) {
    print URLS_FILE "$url|%%|$urls{$url}\n";
}
close (URLS_FILE);
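The signature-matching idea is not tied to Perl. Here is a minimal Python sketch of the same Gamegrene signature (the regex is translated from the article's qr// form; the sample HTML is the comment shown earlier, with a space after the comma for readability):

```python
import re

# The Gamegrene comment signature, translated from the Perl qr// form.
SIGNATURE = re.compile(r'On (.*?), <a href="(.*?)">(.*?)</a> said')

html = 'On July 23, 2003 01:53 AM, <a href="mailto:vesab@jippii.fi">VMB</a> said:'

comments = SIGNATURE.findall(html)
for date, contact, name in comments:
    print(f"- {date}: {name} ({contact})")

print(f"* We saw a total of {len(comments)} comments")
```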
http://archive.oreilly.com/pub/h/993
Back up and restore your app on Windows Phone 8.1, Part 2: App data

This blog was written by Hector Barbera and Sean McKenna–Program Managers, Developer and Ecosystem Platform – Operating Systems Group

In the first of this pair of posts, we covered one of the biggest additions to Windows Phone's backup-and-restore functionality in the 8.1 release: the Start screen. In this post, we'll cover the other major addition: app data. App data often includes valuable settings, configurations, game state, and usage history, any of which can take a long time to recreate. When this data doesn't migrate to a new device, it's both an inconvenience for the user and a loss of hours of valuable engagement with your app. With the addition of app-data backup and restore in Windows Phone 8.1, that inconvenience and loss of engagement become things of the past.

Background

To ensure that your app works well with backup and restore, it's important to consider the full scope of app data that can be present on a phone. While most of an app's state resides in its local app-data container—also known as isolated storage—other content resides in other places such as coupons in the wallet, contacts in the address book, and event data in the calendar. Not all of this data is backed up in Windows Phone 8.1, so there's the potential for issues if your app has dependencies between what's backed up and what isn't. The goal of this post is to help you prepare your app to handle app-data restore.

The backup-and-restore feature itself is pretty straightforward. Assuming the user hasn't disabled app-data backup, here's how it works: once a day, when the phone is idle, connected to AC power, and connected to a Wi-Fi network, the backup and restore engine looks for changes across the device and syncs them to Microsoft OneDrive. This includes looking for changes in specific folders of backup-eligible apps.
By default, the set of eligible apps includes all Store apps targeting Windows Phone 8.1, including Windows Runtime apps and Windows Phone Silverlight apps. When the user sets up a new device and chooses to restore a backup, any app data included in that backup is restored as part of the system-initiated app installation. When an app is ready to launch for the first time, its app data is available in the same state as it was at the time of the backup. What you need to do The good news is that if you’re developing a Windows Phone 8.1 app and want to take advantage of app data backup, you don’t have to do much. The first step is simply to understand the options for storing data in the Windows Phone app data model, which is accessible through the Windows.Storage.ApplicationData class. Those options are: - Roaming. This container supports both unstructured data (files) and structured data (settings). Data stored here is eligible for roaming synchronization between the user’s devices (including synchronization between Windows and Windows Phone for universal apps with a shared identity). Roaming data may also be backed up under certain conditions (for example, when the user has disabled roaming) in order to capture the entirety of the app’s state. - Local. This is the default storage location. It also supports storage of both unstructured data (files) and structured data (settings). Note that for Windows Phone Silverlight apps, Local maps to the IsolatedStorage folder. All data in Local is backed up to the cloud. - LocalCache. This is identical to Local except that it is always excluded from backups. - Temporary. This container allows you to store unstructured data (files) that are excluded from backups. These files can be cleaned up by the system in the event of a low-storage situation. Best practices for backup Here are some tips for backing up your app data efficiently: - Use Local to store only data that can’t be regenerated without user input. 
  Examples of such data include app configuration, game progress, or user-generated content like voice recordings or typed notes. Because backed-up data counts against the user's OneDrive quota, don't use Local (or Roaming) for temporary files or data that can easily be recreated or downloaded as needed.
- Use LocalCache to store data that you want to preserve across app sessions but that you don't want to be backed up. LocalCache is intended for data that's important to your app but that shouldn't be replicated in OneDrive. There are several possible reasons to choose LocalCache:
  - Avoiding duplication—The data is already available in the cloud and can easily be downloaded as needed (for example, an e-book title or news article).
  - Privacy concerns—The data is confidential and should not leave the device.
  - Encryption—The data is encrypted with a device-based key and would be unusable when restored on a different device.
- Use the Temporary folder for data that you don't need to save between app sessions. The Temporary folder is eligible for cleanup whenever the device reaches a low-storage threshold.

Potential issues

Here are some situations to look out for as you back up and restore app data.

Running for the first time

There are some tasks that an app might need to do when running for the first time, such as asking for user credentials. With the introduction of app-data backup, your app can no longer assume that it has already run on a particular device based on the presence of certain data in its Local app data location. If you need to keep track of whether your app has already run on the current device, persist a local flag and store it in the LocalCache folder.

Encryption

When you store sensitive data locally on the phone, it's highly recommended that you use the Windows Data Protection API (DPAPI) to encrypt the data first. It's important to note, however, that DPAPI uses an encryption key that is based on the device that it's running on.
So if you try to decrypt that data after restoring it on a new device, the decryption operation will fail. If you're encrypting data with DPAPI, either store it in the LocalCache folder or be prepared to handle the decryption failure on a new device. If you're storing user names and passwords, use the PasswordVault object in the Windows.Security.Credentials namespace. If enabled by the user, the PasswordVault roams across all Windows devices, which means that it's available for use on a new phone following restore.

Content licenses

Content licenses present similar challenges to data encryption. If your app acquires licenses that are tied to a specific device, consider storing them in the LocalCache folder.

Testing your app with app-data backup and restore

If you store all of your app data in the Local folder, you can simulate a device restore using the Isolated Storage Explorer tool (ISETool) by following these steps. (Note that the ISETool does not currently support interaction with the Roaming, Temporary, or LocalCache folders, or the Local settings container.)

1. Deploy your app to your developer device or emulator using Microsoft Visual Studio.
2. Use the app and create the state you want to test on restore.
3. Close the app.
4. Using the Isolated Storage Explorer tool, copy your Local folder to your PC:
   - Using the emulator: ISETool.exe ts xd <your app's product ID> <path to an empty folder on your PC>
   - Using a physical phone: ISETool.exe ts de <your app's product ID> <path to an empty folder on your PC>
5. Uninstall the app. This clears your app's state.
6. Deploy your app again. If you want to test for hardware dependencies, try restoring to a device that's different from the one you used in step 4.
7.
Using the Isolated Storage Explorer tool, restore to your device the data that you backed up:
   - Using the emulator: ISETool.exe rs xd <your app's product ID> <path to an empty folder on your PC>
   - Using a physical phone: ISETool.exe rs de <your app's product ID> <path to an empty folder on your PC>
8. Launch your app and make sure all features work as you would expect following a device restore.

Opting out

While we hope that most app developers are pleased with the arrival of app-data backup and restore, we understand that there are reasons why you might want to opt out some or all of your data. The preferred method is to segment your data based on the data model described earlier—that is, using the LocalCache folder for data that you do not wish to store outside the device. This approach lets you take advantage of backup for some data while opting out content that might be sensitive or inappropriate for backup.

Of course, because the segmented app-data model is new for Windows Phone 8.1, you probably need to do some work before your data is cleanly separated. If you'd like to update your Windows Phone Silverlight app to target Windows Phone 8.1 without worrying about data migration just yet, you can simply opt out of backup altogether. Just use a flag in the Packaging tab of the WMAppManifest.xml file, as shown here.

Final notes

App-data backup and restore is a significant new feature in Windows Phone 8.1. Users who have spent significant time and energy engaging with your apps will now be able to carry that forward to new devices with little or no work on your part. We hope this post has helped you understand how backup and restore works and how your app can take advantage of it.

Updated November 7, 2014 11:29 pm

Join the conversation

It's a complete shame that WP does not have a system-wide backup/restore. As an end user, being hostage to every developer's good will is what this lack is being called across the internet: a fundamental flaw.
Type: Posts; User: antlet88

It's written in this form. Just the value and then hitting enter to the next line.

    15
    2.34
    2.43
    2.30
    2.29
    2.41
    2.42
    2.33

Hey it worked. I found the setting exclude from project and it builds just fine. I mean the program doesn't actually work but at least it runs. Haha. I think the numbers from the text file aren't...

Thanks everyone who didn't just yell at me for formatting. I'm sure once I become better at programming those things will come a little more natural. I'll try those recommendations tonight....

Okay, you're coming off kind of aggressive... Or maybe I'm getting the wrong impression. As I said, this is literally my first class in programming. I've never done it before. I just can't figure...

Sorry about that!!! I'll try it here. Ummm. No I don't think I compile the .txt file. It's just saved in the folder and then called upon in the coding.

    #include <iostream>
    #include <fstream>...

    1>------ Build started: Project: Lab 5, Configuration: Debug Win32 ------
    1>  data.txt
    1>c:\users\anthony\documents\visual studio 2010\projects\lab 5\lab 5\data.txt(1): error C2059: syntax error :...
Russ Cox
Brad Fitzpatrick
July 2020

This is a Draft Design, not a formal Go proposal, because it describes a potential large change that addresses the same need as many third-party packages and could affect their implementations (hopefully by simplifying them!). The goal of circulating this draft design is to collect feedback to shape an intended eventual proposal. This design builds upon the file system interfaces draft design.

There are many tools to embed static assets (files) into Go binaries. All depend on a manual generation step followed by checking in the generated files to the source code repository. This draft design eliminates both of these steps by adding support for embedded static assets to the go command itself.

There are many tools to embed static assets (files) into Go binaries. One of the earliest and most popular was github.com/jteeuwen/go-bindata and its forks, but there are many more, including (but not limited to!):

Clearly there is a widespread need for this functionality. The go command is the way Go developers build Go programs. Adding direct support to the go command for the basic functionality of embedding will eliminate the need for some of these tools and at least simplify the implementation of others.

It is an explicit goal to eliminate the need to generate new Go source files for the assets and commit those source files to version control. Another explicit goal is to avoid a language change. To us, embedding static assets seems like a tooling issue, not a language issue. Avoiding a language change also means we avoid the need to update the many tools that process Go code, among them goimports, gopls, and staticcheck.

It is important to note that as a matter of both design and policy, the go command never runs user-specified code during a build. This improves the reproducibility, scalability, and security of builds. This is also the reason that go generate is a separate manual step rather than an automatic one.
Any new go command support for embedded static assets is constrained by that design and policy choice. Another goal is that the solution apply equally well to the main package and to its dependencies, recursively. For example, it would not work to require the developer to list all embeddings on the go build command line, because that would require knowing the embeddings needed by all of the dependencies of the program being built. Another goal is to avoid designing novel APIs for accessing files. The API for accessing embedded files should be as close as possible to *os.File, the existing standard library API for accessing native operating-system files.

This design adds direct support for embedded static assets into the go command itself, building on the file system draft design. That support consists of:

- A new //go:embed comment directive naming the files to embed.
- A new embed package, which defines the type embed.Files, the public API for a set of embedded files. The embed.Files type implements fs.FS from the file system interfaces draft design, making it directly usable with packages like net/http and html/template.
- Changes to go/build and golang.org/x/tools/go/packages to expose information about embedded files.

A new package embed, described in detail below, provides the type embed.Files. One or more //go:embed directives above a variable declaration of that type specify which files to embed, in the form of a glob pattern. For example:

    package server

    // content holds our static web server content.
    //go:embed image/* template/*
    //go:embed html/index.html
    var content embed.Files

The go command will recognize the directives and arrange for the declared embed.Files variable (in this case, content) to be populated with the matching files from the file system. The //go:embed directive accepts multiple space-separated glob patterns for brevity, but it can also be repeated, to avoid very long lines when there are many patterns.
The glob patterns are in the syntax of path.Match; they must be unrooted, and they are interpreted relative to the package directory containing the source file. The path separator is a forward slash, even on Windows systems. To allow for naming files with spaces in their names, patterns can be written as Go double-quoted or back-quoted string literals.

If a pattern names a directory, all files in the subtree rooted at that directory are embedded (recursively), so the above example is equivalent to:

    package server

    // content is our static web server content.
    //go:embed image template html/index.html
    var content embed.Files

An embed.Files variable can be exported or unexported, depending on whether the package wants to make the file set available to other packages. Similarly, an embed.Files variable can be a global or a local variable, depending on what is more convenient in context.

Note the following restrictions:

- A pattern may not contain a .. path element.
- A pattern may not contain a . path element (to match everything in the current directory, use *).
- Patterns never match files like .git/* or symbolic links (or, as noted above, empty directories).
- It is an error for a //go:embed directive to appear except before a declaration of an embed.Files. (More specifically, each //go:embed directive must be followed by a var declaration of a variable of type embed.Files, with only blank lines and other //-comment-only lines between the //go:embed and the declaration.)
- It is an error to use //go:embed in a source file that does not import "embed" (the only way to violate this rule involves type alias trickery).
- It is an error to use //go:embed in a module declaring a Go version before Go 1.N, where N is the Go version that adds this support.
- It is permitted to use //go:embed with local variables declared in functions.
- It is permitted to use //go:embed in tests.
- It is permitted to declare an embed.Files without a //go:embed directive. That variable simply contains no embedded files.

The new package embed defines the Files type:

    // A Files provides access to a set of files embedded in a package at build time.
    type Files struct { … }

The Files type provides an Open method that opens an embedded file, as an fs.File:

    func (f Files) Open(name string) (fs.File, error)

By providing this method, the Files type implements fs.FS and can be used with utility functions such as fs.ReadFile, fs.ReadDir, fs.Glob, and fs.Walk.

As a convenience for the most common operation on embedded files, the Files type also provides a ReadFile method:

    func (f Files) ReadFile(name string) ([]byte, error)

Because Files implements fs.FS, a set of embedded files can also be passed to template.ParseFS, to parse embedded templates, and to http.HandlerFS, to serve a set of embedded files over HTTP.

The go command will change to process //go:embed directives and pass appropriate information to the compiler and linker to carry out the embedding. The go command will also add six new fields to the Package struct exposed by go list:

    EmbedPatterns      []string
    EmbedFiles         []string
    TestEmbedPatterns  []string
    TestEmbedFiles     []string
    XTestEmbedPatterns []string
    XTestEmbedFiles    []string

The EmbedPatterns field lists all the patterns found on //go:embed lines in the package's non-test source files; TestEmbedPatterns and XTestEmbedPatterns list the patterns in the package's test source files (internal and external tests, respectively). The EmbedFiles field lists all the files, relative to the package directory, matched by the EmbedPatterns; it does not specify which files match which pattern, although that could be reconstructed using path.Match. Similarly, TestEmbedFiles and XTestEmbedFiles list the files matched by TestEmbedPatterns and XTestEmbedPatterns. These file lists contain only files; if a pattern matches a directory, the file list includes all the files found in that directory subtree.
In the go/build package, the Package struct adds only EmbedPatterns, TestEmbedPatterns, and XTestEmbedPatterns, not EmbedFiles, TestEmbedFiles, or XTestEmbedFiles, because the go/build package does not take on the job of matching patterns against a file system.

In the golang.org/x/tools/go/packages package, the Package struct adds one new field: EmbedFiles lists the embedded files. (If embedded files were added to OtherFiles, it would not be possible to tell whether a file with a valid source extension in that list—for example, x.c—was being built or embedded or both.)

As noted above, the Go ecosystem has many tools for embedding static assets, too many for a direct comparison to each one. Instead, this section lays out the affirmative rationale in favor of each of the parts of the design. Each subsection also addresses the points raised in the helpful preliminary discussion on golang.org/issue/35950. (The Appendix at the end of this document makes direct comparisons with a few existing tools and examines how they might be simplified.)

It is worth repeating the goals and constraints mentioned in the background section; in particular, the go command does not run user code during go build.

The core of the design is the new embed.Files type annotated at its use with the new //go:embed directive:

    //go:embed *.jpg
    var jpgs embed.Files

This is different from the two approaches mentioned at the start of the preliminary discussion on golang.org/issue/35950. In some ways it is a combination of the best parts of each.

The first approach mentioned was a directive along the lines of //go:genembed Logo logo.jpg that would be replaced by a generated func Logo() []byte function, or some similar accessor. A significant drawback of this approach is that it changes the way programs are type-checked: you can't type-check a call to Logo unless you know what that directive turns into. There is also no obvious place to write the documentation for the new Logo function.
In effect, this new directive ends up being a full language change: all tools processing Go code have to be updated to understand it.

The second approach mentioned was to have a new importable embed package with standard Go function definitions, but the functions are in effect executed at compile time, as in:

    var Static = embed.Dir("static")
    var Logo = embed.File("images/logo.jpg")
    var Words = embed.CompressedReader("dict/words")

This approach fixes the type-checking problem—it is not a full language change—but it still has significant implementation complexity. The go command would need to parse the entire Go source file to understand which files need to be made available for embedding. Today it only parses up to the import block, never full Go expressions. It would also be unclear to users what constraints are placed on the arguments to these special calls: they look like ordinary Go calls but they can only take string literals, not strings computed by Go code, and probably not even named constants (or else the go command would need a full Go expression evaluator).

Much of the preliminary discussion focused on deciding between these two approaches. This design combines the two and avoids the drawbacks of each. The //go:embed comment directive follows the established convention for Go build system and compiler directives. The directive is easy for the go command to find, and it is clear immediately that the directive can't refer to a string computed by a function call, nor to a named constant.

The embed.Files type is plain Go code, defined in a plain Go package embed. All tools that type-check Go code or run other analysis on it can understand the code without any special handling of the //go:embed directive. The explicit variable declaration provides a clear place to write the documentation:

    // jpgs holds the static images used on the home page.
    //go:embed *.jpg
    var jpgs embed.Files

(As of Go 1.15, the //go:embed line is not considered part of the doc comment.)
The explicit variable declaration also provides a clear way to control whether the embed.Files is exported. A data-only package might do nothing but export embedded files, like:

    package web

    // Styles holds the CSS files shared among all our websites.
    //go:embed style/*.css
    var Styles embed.Files

In the preliminary discussion, a few people suggested specifying embedded files using a new directive in go.mod. The design of Go modules, however, is that go.mod serves only to describe information about the module's version requirements, not other details of a particular package. It is not a collection of general-purpose metadata. For example, compiler flags or build tags would be inappropriate in go.mod. For the same reason, information about one package's embedded files is also inappropriate in go.mod: each package's individual meaning should be defined by its Go sources. The go.mod is only for deciding which versions of other packages are used to resolve imports. Placing the embedding information in the package has benefits that using go.mod would not, including the explicit declaration of the file set, control over exportedness, and so on.

It is clear that there needs to be some way to give a pattern of files to include, such as *.jpg. This design adopts glob patterns as the single way to name files for inclusion. Glob patterns are common to developers from command shells, and they are already well-defined in Go, in the APIs for path.Match, filepath.Match, and filepath.Glob. Nearly all file names are valid glob patterns matching only themselves; using globs avoids the need for separate //go:embedfile and //go:embedglob directives. (This would not be the case if we used, say, Go regular expressions as provided by the regexp package.)

In some systems, the glob pattern ** is like * but can match multiple path elements. For example, images/**.jpg matches all .jpg files in the directory tree rooted at images/.
This syntax is not available in Go's path.Match or in filepath.Glob, and it seems better to use the available syntax than to define a new one. The rule that matching a directory includes all files in that directory tree should address most of the need for ** patterns. For example, //go:embed images instead of //go:embed images/**.jpg. It's not exactly the same, but hopefully good enough. If at some point in the future it becomes clear that ** glob patterns are needed, the right way to support them would be to add them to path.Match and filepath.Glob; then the //go:embed directives would get them for free.

In order to build files embedded in a dependency, the raw files themselves must be included in module zip files. This implies that any embedded file must be in the module's own file tree. It cannot be in a parent directory above the module root (like ../../../etc/passwd), it cannot be in a subdirectory that contains a different module, and it cannot be in a directory that would be left out of the module (like .git).

Another implication is that it is not possible to embed two different files that differ only in the case of their file names, because those files would not be possible to extract on a case-insensitive system like Windows or macOS. So you can't embed two files with different casings, like this:

    //go:embed README readme

But //go:embed dir/README other/readme is fine.

Because embed.Files implements fs.FS, it cannot provide access to files with names beginning with .., so files in parent directories are also disallowed entirely, even when the parent directory named by .. does happen to be in the same module.

The preliminary discussion raised a large number of possible transformations that might be applied to files before embedding, including: data compression, JavaScript minification, TypeScript compilation, image resizing, generation of sprite maps, UTF-8 normalization, and CR/LF normalization.
It is not feasible for the go command to anticipate or include all the possible transformations that might be desirable. The go command is also not a general build system; in particular, remember the design constraint that it never runs user programs during a build. These kinds of transformations are best left to an external build system, such as Make or Bazel, which can write out the exact bytes that the go command should embed.

A more limited version of this suggestion was to gzip-compress the embedded data and then make that compressed form available for direct use in HTTP servers as gzipped response content. Doing this would force the use of (or at least support for) gzip and compressed content, making it harder to adjust the implementation in the future as we learn more about how well it works. Overall this seems like overfitting to a specific use case.

The simplest approach is for Go's embedding feature to store plain files and let build systems or third-party packages take care of preprocessing before the build or postprocessing at runtime. That is, the design focuses on providing the core functionality of embedding raw bytes into the binary for use at run time, leaving other tools and packages to build on a solid foundation.

A popular question in the preliminary discussion was whether the embedded data should be stored in compressed or uncompressed form in the binary. This design carefully avoids assuming an answer to that question. Instead, whether to compress can be left as an implementation detail.

Compression carries the obvious benefit of smaller binaries. However, it also carries some less obvious costs. Most compression formats (in particular gzip and zip) do not support random access to the uncompressed data, but an http.File needs random access (ReadAt, Seek) to implement range requests. Other uses may need random access as well. For this reason, many of the popular embedding tools start by decompressing the embedded data at runtime.
This imposes a startup CPU cost and a memory cost. In contrast, storing the embedded data uncompressed in the binary supports random access with no startup CPU cost. It also reduces memory cost: the file contents are never stored in the garbage-collected heap, and the operating system efficiently pages in necessary data from the executable as that data is accessed, instead of needing to load it all at once.

Most systems have more disk than RAM. On those systems, it makes very little sense to make binaries smaller at the cost of using more memory (and more CPU) at run time. On the other hand, projects like TinyGo and U-root target systems with more RAM than disk or flash. For those projects, compressing assets and using incremental decompression at runtime could provide significant savings.

Again, this design allows compression to be left as an implementation detail. The detail is not decided by each package author but instead could be decided when building the final binary. Future work might be to add -embed=compress as a go build option for use in limited environments.

Other than support for //go:embed itself, the only user-visible go command change is new fields exposed in go list output. It is important for tools that process Go packages to be able to understand what files are needed for a build. The go list command is the underlying mechanism used now, even by golang.org/x/tools/go/packages. Exposing the embedded files as a new field in the Package struct used by go list makes them available both for direct use and for use by higher level APIs.

In the preliminary discussion, a few people suggested that the list of embedded files could be specified on the go build command line. This could potentially work for files embedded in the main package, perhaps with an appropriate Makefile. But it would fail badly for dependencies: if a dependency wanted to add a new embedded file, all programs built with that dependency would need to adjust their build command lines.
In the preliminary discussion, a few people pointed out that developers might be confused by the inconsistency that //go:embed directives are processed during builds but //go:generate directives are not. There are other special comment directives as well: //go:noinline, //go:noescape, // +build, //line. All of these are processed during builds. The exception is //go:generate, because of the design constraint that the go command not run user code during builds. The //go:embed directive is not the special case, nor does it make //go:generate any more of a special case. For more about go generate, see the original proposal and discussion.

The new embed package provides access to embedded files. Previous additions to the standard library have been made in golang.org/x first, to make them available to earlier versions of Go. However, it would not make sense to use golang.org/x/embed instead of embed: the older versions of Go could import golang.org/x/embed but still not be able to embed files without the newer go command support. It is clearer for a program using embed to fail to compile than it would be to compile but not embed any files.

Implementing fs.FS enables hooking into net/http, text/template, and html/template, without needing to make those packages aware of embed. Code that wants to change between using operating system files and embedded files can be written in terms of fs.FS and fs.File and then use os.DirFS as an fs.FS or use a *os.File directly as an fs.File.

An obvious extension would be to add to embed.Files a ReadFileString method that returns the file content as a string. If the embedded data were stored in the binary uncompressed, ReadFileString would be very efficient: it could return a string pointing into the in-binary copy of the data.
Callers expecting zero allocation in ReadFileString might well preclude a future -embed=compress mode that trades binary size for access time, which could not provide the same kind of efficient direct access to raw uncompressed data. An explicit ReadFileString method would also make it more difficult to convert code using embed.Files to use other fs.FS implementations, including operating system files. For now, it seems best to omit a ReadFileString method, to avoid exposing the underlying representation and also to avoid diverging from fs.FS.

Another extension would be to add to the returned fs.File a WriteTo method. All the arguments against ReadFileString apply equally well to WriteTo. An additional reason to avoid WriteTo is that it would expose the uncompressed data in a mutable form, []byte instead of string.

The price of this flexibility—both the flexibility to move easily between embed.Files and other file systems and also the flexibility to add -embed=compress later (perhaps that would be useful for TinyGo)—is that access to data requires making a copy. This is at least no less efficient than reading from other file sources.

In the preliminary discussion, one person asked about making it easy to write embedded files back to disk at runtime, to make them available for use with the HTTP server, template parsing, and so on. While this is certainly possible to do, we probably should avoid that as the suggested way to use embedded files: many programs run with limited or no access to writable disk. Instead, this design builds on the file system draft design to make the embedded files available to those APIs.

This is all new API. There are no conflicts with the compatibility guidelines. It is worth noting that, as with all new API, this functionality cannot be adopted by a Go project until all developers building the project have updated to the version of Go that supports the API. This may be a particularly important concern for authors of libraries.
If this functionality ships in Go 1.15, library authors may wish to wait to adopt it until they are confident that all their users have updated to Go 1.15.

The implementation details are not user-visible and do not matter nearly as much as the rest of the design. A prototype implementation is available.

A goal of this design is to eliminate much of the effort involved in embedding static assets in Go binaries. It should be able to replace the common uses of most of the available embedding tools. Replacing all possible uses is a non-goal. Replacing all possible embedding tools is also a non-goal. This section examines a few popular embedding tools and compares and contrasts them with this design.

One of the earliest and simplest generators for static assets is github.com/jteeuwen/go-bindata. It is no longer maintained, so now there are many forks and derivatives, but this section examines the original.

Given an input file hello.txt containing the single line hello, world, go-bindata hello.txt produces 235 lines of Go code. The generated code exposes this exported API (in the package where it is run):

    func Asset(name string) ([]byte, error)
        Asset loads and returns the asset for the given name. It returns an error
        if the asset could not be found or could not be loaded.

    func AssetDir(name string) ([]string, error)
func RestoreAsset(dir, name string) error RestoreAsset restores an asset under the given directory func RestoreAssets(dir, name string) error RestoreAssets restores an asset under the given directory recursively This code and exported API is duplicated in every package using go-bindata-generated output. One benefit of this design is that the access code can be in a single package shared by all clients. The registered data is gzipped. It must be decompressed when accessed. The embed API provides all this functionality except for “restoring” assets back to the local file system. See the “Writing embedded assets to disk” section above for more discussion about why it makes sense to leave that out. Another venerable asset generator is github.com/rakyll/statik. Given an input file public/hello.txt containing the single line hello, world, running statik generates a subdirectory statik containing an import-only package with a func init containing a single call, to register the data for asset named "hello.txt" with the access package github.com/rakyll/statik/fs. The use of a single shared registration introduces the possibility of naming conflicts: what if multiple packages want to embed different static hello.txt assets? Users can specify a namespace when running statik, but the default is that all assets end up in the same namespace. This design avoids collisions and explicit namespaces by keeping each embed.Files separate: there is no global state or registration. The registered data in any given invocation is a string containing the bytes of a single zip file holding all the static assets. Other than registration calls, the statik/fs package includes this API: func New() (http.FileSystem, error) New creates a new file system with the default registered zip contents data. It unzips all files and stores them in an in-memory map. 
func NewWithNamespace(assetNamespace string) (http.FileSystem, error) NewWithNamespace creates a new file system with the registered zip contents data. It unzips all files and stores them in an in-memory map. func ReadFile(hfs http.FileSystem, name string) ([]byte, error) ReadFile reads the contents of the file of hfs specified by name. Just as ioutil.ReadFile does. func Walk(hfs http.FileSystem, root string, walkFn filepath.WalkFunc) error Walk walks the file tree rooted at root, calling walkFn for each file or directory in the tree, including root. All errors that arise visiting files and directories are filtered by walkFn. As with filepath.Walk, if the walkFn returns filepath.SkipDir, then the directory is skipped. The embed API provides all this functionality (converting to http.FileSystem, reading a file, and walking the files). Note that accessing any single file requires first decompressing all the embedded files. The decision in this design to avoid compression is discussed more above, in the “Compression to reduce binary size” section. Another venerable asset generator is github.com/GeertJohan/go.rice. It presents a concept called a rice.Box which is like an embed.Files filled from a specific file system directory. Suppose box/hello.txt contains hello world and hello.go is: package main import rice "github.com/GeertJohan/go.rice" func main() { rice.FindBox("box") } The command rice embed-go generates a 44-line file rice-box.go that calls embedded.RegisterEmbeddedBox to register a box named box containing the single file hello.txt. The data is uncompressed. The registration means that go.rice has the same possible collisions as statik. The rice embed-go command parses the Go source file hello.go to find calls to rice.FindBox and then uses the argument as both the name of the box and the local directory containing its contents.
This approach is similar to the “second approach” identified in the preliminary discussion, and it demonstrates all the drawbacks suggested above. In particular, only the first of these variants works with the rice command: rice.FindBox("box") rice.FindBox("b" + "o" + "x") const box = "box" rice.FindBox(box) func box() string { return "box" } rice.FindBox(box()) As the Go language is defined, these should all do the same thing. The limitation to the first form is fine in an opt-in tool, but it would be problematic to impose in the standard toolchain, because it would break the orthogonality of language concepts. The API provided by the rice package is: type Box struct { // Has unexported fields. } Box abstracts a directory for resources/files. It can either load files from disk, or from embedded code (when `rice --embed` was ran). func FindBox(name string) (*Box, error) FindBox returns a Box instance for given name. When the given name is a relative path, it’s base path will be the calling pkg/cmd’s source root. When the given name is absolute, it’s absolute. derp. Make sure the path doesn’t contain any sensitive information as it might be placed into generated go source (embedded). func MustFindBox(name string) *Box MustFindBox returns a Box instance for given name, like FindBox does. It does not return an error, instead it panics when an error occurs. func (b *Box) Bytes(name string) ([]byte, error) Bytes returns the content of the file with given name as []byte. func (b *Box) HTTPBox() *HTTPBox HTTPBox creates a new HTTPBox from an existing Box func (b *Box) IsAppended() bool IsAppended indicates whether this box was appended to the application func (b *Box) IsEmbedded() bool IsEmbedded indicates whether this box was embedded into the application func (b *Box) MustBytes(name string) []byte MustBytes returns the content of the file with given name as []byte. panic’s on error. 
func (b *Box) MustString(name string) string MustString returns the content of the file with given name as string. panic’s on error. func (b *Box) Name() string Name returns the name of the box func (b *Box) Open(name string) (*File, error) Open opens a File from the box If there is an error, it will be of type *os.PathError. func (b *Box) String(name string) (string, error) String returns the content of the file with given name as string. func (b *Box) Time() time.Time Time returns how actual the box is. When the box is embedded, it’s value is saved in the embedding code. When the box is live, this methods returns time.Now() func (b *Box) Walk(path string, walkFn filepath.WalkFunc) error Walk is like filepath.Walk() Visit for more information type File struct { // Has unexported fields. } File implements the io.Reader, io.Seeker, io.Closer and http.File interfaces func (f *File) Close() error Close is like (*os.File).Close() Visit for more information func (f *File) Read(bts []byte) (int, error) Read is like (*os.File).Read() Visit for more information func (f *File) Readdir(count int) ([]os.FileInfo, error) Readdir is like (*os.File).Readdir() Visit for more information func (f *File) Readdirnames(count int) ([]string, error) Readdirnames is like (*os.File).Readdirnames() Visit for more information func (f *File) Seek(offset int64, whence int) (int64, error) Seek is like (*os.File).Seek() Visit for more information func (f *File) Stat() (os.FileInfo, error) Stat is like (*os.File).Stat() Visit for more information type HTTPBox struct { *Box } HTTPBox implements http.FileSystem which allows the use of Box with a http.FileServer. e.g.: http.Handle("/", http.FileServer(rice.MustFindBox("http-files").HTTPBox())) func (hb *HTTPBox) Open(name string) (http.File, error) Open returns a File using the http.File interface As far as public API, go.rice is very similar to this design. The Box itself is like embed.Files, and the File is similar to fs.File. 
This design avoids HTTPBox by building on HTTP support for fs.FS. The Bazel build tool includes support for building Go, and its go_embed_data rule supports embedding a file as data in a Go program. It is used like: go_embed_data( name = "rule_name", package = "main", var = "hello", src = "hello.txt", ) or go_embed_data( name = "rule_name", package = "main", var = "files", srcs = [ "hello.txt", "gopher.txt", ], ) The first form generates a file like: package main var hello = []byte("hello, world\n") The second form generates a file like: package main var files = map[string][]byte{ "hello.txt": []byte("hello, world\n"), "gopher.txt": []byte("ʕ◔ϖ◔ʔ\n"), } That’s all. There are configuration knobs to generate string instead of []byte, and to expand zip and tar files into their contents, but there’s no richer API: just declared data. Code using this form would likely keep using it: the embed API is more complex. However, it will still be important to support this //go:embed design in Bazel. The way to do that would be to provide a go tool embed that generates the right code and then either adjust the Bazel go_library rule to invoke it or have Gazelle (the tool that reads Go files and generates Bazel rules) generate appropriate genrules. The details would depend on the eventual Go implementation, but any Go implementation of //go:embed needs to be able to be implemented in Bazel/Gazelle in some way.
https://go.googlesource.com/proposal/+/master/design/draft-embed.md
On Tue, Oct 28, 2008 at 11:59:50AM +1100, Brian May wrote:
> If we rename plink in putty (I think that is what you are asking?), that
> is going to make our version of putty inconsistent with every other
> putty package out there. This program is often used by scripts, they
> will break too.

Hi Brian, Steffen, and everybody,

while what was written above about plink in putty is also true for plink in plink, I agree that in this case it would be better to keep Putty's and change Plink's. I had a discussion with Upstream (that I forgot to CC to Steffen), in which he acknowledged that they unfortunately did not think about possible issues in an extended namespace when deciding on a name, as their program started as an internal project first. He proposed 'snplink' for Debian, as it nicely summarises what the tool does: linking SNPs.

Have a nice day,

--
Charles Plessy
Debian Med packaging team, Tsurumi, Kanagawa, Japan
http://lists.debian.org/debian-devel/2008/10/msg00715.html
Collection of streams. This forms a collection of message streams for a software component and contains one stream per message importance level. A facility is intended to be used by a software component at whatever granularity is desired by the author (program, name space, class, method, etc.) and is usually given a string name that is related to the software component which it serves. The string name becomes part of messages and is also the default name used by Facilities::control. All message streams created for the facility are given the same name, message prefix generator, and message sink, but they can be adjusted later on a per-stream basis. The C++ name for the facility is often just "mlog" or "logger" (appropriately scoped) so that code to emit messages is self documenting. The name "log" is sometimes used, but can be ambiguous with the ::log function in math.h. Thread safety: This object is thread-safe except where noted. Definition at line 1579 of file Message.h. Construct an empty facility. The facility will have no name and all streams will be uninitialized. Any attempt to emit anything to a facility in the default state will cause an std::runtime_error to be thrown with a message similar to "stream INFO is not initialized yet". This facility can be initialized by assigning it a value from another initialized facility. Definition at line 1594 of file Message.h. Construct a new facility from an existing facility. The new facility will point to the same streams as the existing facility. Create a named facility with default destinations. All streams are enabled and all output goes to file descriptor 2 (standard error) via unbuffered system calls. Facilities initialized to this state can typically be used before the C++ runtime is fully initialized and before Sawyer::initializeLibrary is called. Definition at line 1609 of file Message.h. References Sawyer::Message::FdSink::instance(). Assignment operator. 
The destination facility will point to the same streams as the source facility. Returns true if called on an object that has been constructed. Returns true if this is constructed, false if it's allocated but not constructed. For instance, this method may return false if the object is declared at namespace scope and this method is called before the C++ runtime has had a chance to initialize it. Thread safety: This method is not thread-safe. Definition at line 1630 of file Message.h. Returns a stream for the specified importance level. It is permissible to do the following: Thread safety: This method is thread-safe. Definition at line 1647 of file Message.h. Return the name of the facility. This is a read-only field initialized at construction time. Thread safety: This method is thread-safe. Renames all the facility streams. Invokes Stream::facilityName for each stream. If a name is given then that name is used, otherwise this facility's name is used. Thread safety: This method is thread-safe. Cause all streams to use the specified destination. This can be called for facilities that already have streams and destinations, but it can also be called to initialize the streams for a default-constructed facility. Thread safety: This method is thread-safe.
http://rosecompiler.org/ROSE_HTML_Reference/classSawyer_1_1Message_1_1Facility.html
Get a string of characters from standard input #include <stdio.h> char *gets( char *buf ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The gets() function gets a string of characters from the stdin stream, and stores them in the array pointed to by buf until end-of-file is encountered or a newline character is read. Any newline character is discarded, and the string is NUL-terminated. The gets() function is similar to fgets(), except that gets() operates with stdin, has no size argument, and replaces a newline character with the NUL character. A pointer to buf, or NULL when end-of-file is encountered before reading any characters or a read error occurred (errno is set). #include <stdio.h> #include <stdlib.h> int main( void ) { char buffer[80]; while( gets( buffer ) != NULL ) { puts( buffer ); } return EXIT_SUCCESS; } ANSI, POSIX 1003.1 errno, feof(), ferror(), fopen(), getc(), fgetc(), fgets(), puts(), ungetc()
http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/g/gets.html
Advertising Bug ID: 77965 Summary: -Wduplicated-cond should find duplicated condition / identical expressions of form "a || a" or "a && a" Product: gcc Version: 7.0 Status: UNCONFIRMED Keywords: diagnostic Severity: normal Priority: P3 Component: c Assignee: unassigned at gcc dot gnu.org Reporter: burnus at gcc dot gnu.org Target Milestone: --- First, I wonder whether "-Wduplicated-cond" should be enabled by -Wextra (or even -Wall). Secondly, it only warns for "if (a) ... else if (a) ...". However, it would be also useful to warn for "(a || a)" and "(a && a)" as such code is easily written by copy'n'paste. [Possibly, instead of -Wduplicated-cond it should be used with -Wtautological-compare?] By comparison, cppcheck finds this issue and outputs: (style) Same expression on both sides of '||'. Example: #include <assert.h> int foo(int x) { assert (x == 5 || x == 5); return (x == 5 || x == 5) ? 1 : 0; }
https://www.mail-archive.com/gcc-bugs@gcc.gnu.org/msg510019.html
Explore the Java API for JSON Processing

JSON, which stands for JavaScript Object Notation, is a text-based, open-standard, language-independent data exchange format, primarily used to serialize/deserialize and transmit data over a network connection. JSON can have many other uses as well, but in the arena of data exchange it performs functions similar to XML's. Java already has API support for XML, and JSON soon picked up interest. Java EE 7 embraced it into the core library and standardised much of its functionality through Java Specification Request (JSR) 353. However, even prior to this core incorporation, there were several third-party libraries to process, transform, serialize/deserialize or generate JSON data, such as Json-lib, fastjson, Flexjson, Jettison, Jackson, and so forth. Each has its uses under different scenarios. So Java actually never lacked support for JSON processing. But here, our specific interest lies in the core API group.

JSON and XML

When we talk of JSON, one question comes to mind: Is it better than XML? In view of data interchange, they seem to perform similar functions. They may be similar, but they are not the same; given their different origins, their features are not directly comparable. JSON is purely a data format. It is excellent for data interchange where readability for humans as well as machines is the primary concern. Now, what does readability to a machine mean? Web servers and Web pages exchange data, and JavaScript is rich in processing Web page logic; most Web pages are built on it. JSON (JavaScript Object Notation) is close to JavaScript, which means that Web pages that use JavaScript can easily consume data given in JSON format. XML, on the other hand, is a bit alien where JavaScript is concerned and requires more cryptic parsing. Also, JSON syntactically uses fewer characters to represent data, which keeps the data size small and compact.
For example, in JSON an item's type (String/Number/nested JSON object) can be inferred syntactically. This lessens the parser's effort to realize that id is a number and firstName a string. A simple piece of JavaScript illustrates the idea:

obj = JSON.parse(jsonObj);
obj.id == 101; // true
obj.firstName == "Kenneth"; // true

In the case of XML, we may want to write the JavaScript in the following manner:

obj = parseThatXMLPlease();
ppl = obj.getChildren("person");
p = ppl[0];
p.getChildren("id")[0].value() == 101 // true
p.getChildren("firstname")[0].value() == "Kenneth" // true

JSON Data Format

{
  "id": 101,
  "firstName": "Kenneth",
  "lastName": "Garcia",
  "address": {
    "street": "950 Tower Lane",
    "city": "Foster City",
    "state": "CA"
  }
}

XML Data Format

<?xml version="1.0"?>
<person id="101">
  <firstname>Kenneth</firstname>
  <lastname>Garcia</lastname>
  <address>
    <street>950 Tower Lane</street>
    <city>Foster City</city>
    <state>CA</state>
  </address>
</person>

XML (eXtensible Markup Language) is a language, a sophisticated language in its own right, to be precise. Using it as a data exchange format is just one of its side-effects. XML can organize structuring information into metadata; an XML Schema can disambiguate the data structure without breaking any of its structural norms; an XSL transformation document can modify an XML document; and there are many more features. JSON, though picking up some of these features, such as JSONPath for querying or JSON-Schema for validation, is not designed to have these characteristics. The point is, each has its own applicability and specific purpose. So, if comparison still creeps in, the basis of discrimination should be what we need and what would be more appropriate under the present circumstances.

Java API for JSON

The Java API for JSON processing primarily performs four functions: parsing, serialize/deserialize, transform, and query. There are two programming models for JSON processing, just as there are for XML (DOM and SAX).
One is called the object model and the other is called the streaming model.

Object Model

The object model (JSON-P Model API) generates the data structure in a tree format. The tree is immutable and resides in memory. This makes it flexible to traverse and analyse. Because JSON output is generated by navigating through the tree, the memory footprint is high and performance lags in comparison to the streaming API. This is a specific concern when dealing with large volumes of data. One of the APIs of this model is the builder-pattern class JsonObjectBuilder. This class provides several overloaded add() methods that can be used to add object properties along with their values to the JSON data. Some commonly used APIs in this model are as follows:

A Quick Example

package org.mano.example;

import java.io.InputStream;
import java.net.URL;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class JsonDemo {
    public static void main(String[] args) {
        try {
            URL url = new URL("");
            InputStream in = url.openStream();
            JsonReader reader = Json.createReader(in);
            JsonObject jobj = reader.readObject();
            System.out.println("Time: " + jobj.getJsonString("time"));
            System.out.println("Elapsed " + jobj.getJsonNumber("milliseconds_since_epoch") + " milliseconds since epoch");
            System.out.println("Date: " + jobj.getJsonString("date"));
        } catch (Exception ex) {
            System.out.println(ex);
        }
    }
}

Streaming Model

The streaming model (JSON-P Streaming API), on the other hand, uses an event-based parser and reads sequentially from a stream. The parser generates events and stops as soon as an element is encountered. The element then can be processed or discarded through Java code before moving to the next element. It is faster and more memory-efficient, but the tradeoff is that it cannot access a specific JSON property directly like the object model can, due to the sequential reading. In the streaming model, the JsonGenerator class is the main API to generate JSON data.
This data then can be written into a stream by using several overloaded write() methods. The write() method adds object properties and their values to the JSON data. Other APIs of this model are as follows:

A Quick Example

package org.mano.example;

import java.io.InputStream;
import java.net.URL;
import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.json.stream.JsonParser.Event;

public class JsonDemo {
    public static void main(String[] args) {
        try {
            URL url = new URL("");
            InputStream in = url.openStream();
            JsonParser parser = Json.createParser(in);
            while (parser.hasNext()) {
                Event event = parser.next();
                if (event == Event.KEY_NAME) {
                    switch (parser.getString()) {
                        case "time":
                            parser.next();
                            System.out.println("Time: " + parser.getString());
                            break;
                        case "milliseconds_since_epoch":
                            parser.next();
                            System.out.println("Elapsed " + parser.getLong() + " milliseconds since epoch");
                            break;
                        case "date":
                            parser.next();
                            System.out.println("Date: " + parser.getString());
                            break;
                    }
                }
            }
        } catch (Exception ex) {
            System.out.println(ex);
        }
    }
}

Conclusion

JSON started as an alternative to XML. Soon, developers became interested due to its simple structure and easy operability. RESTful Web services extensively use JSON to format data in their request and response processes. In view of data interchange, especially when the consumer is a Web page, JSON seems a rational choice for developers.
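The object model's JsonObjectBuilder mentioned above chains overloaded add() calls, roughly Json.createObjectBuilder().add("id", 101).add("firstName", "Kenneth").build(). To keep this page's sketch runnable without the javax.json jar, here is a pure-JDK toy with the same chained-builder shape; the class is invented for illustration and does no escaping or nesting, so it is not the real API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Toy stand-in for JsonObjectBuilder: same chained add() style,
// restricted to flat string/number members.
public class Main {
    private final Map<String, String> fields = new LinkedHashMap<>();

    public Main add(String name, String value) {       // string member
        fields.put(name, "\"" + value + "\"");
        return this;
    }

    public Main add(String name, long value) {         // number member
        fields.put(name, Long.toString(value));
        return this;
    }

    public String build() {                            // serialize to JSON text
        return fields.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":" + e.getValue())
                .collect(Collectors.joining(",", "{", "}"));
    }

    public static void main(String[] args) {
        String json = new Main()
                .add("id", 101)
                .add("firstName", "Kenneth")
                .add("lastName", "Garcia")
                .build();
        System.out.println(json);
    }
}
```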
http://www.developer.com/java/ent/explore-the-java-api-for-json-processing.html
This would technically work, but bear in mind that you will still be charged for the I/O for reading and writing the backups on EBS. An alternative and possibly more cost-effective method you could consider is backing up to S3 - in this way you only pay for the storage that you use and not for provisioning a whole EBS volume. To follow on, EBS volumes may only be attached to one instance at a time. This may be an issue for you if you have multiple instances running at one time. Should the instance attached to the EBS volumes go down, you will need to remount these to another instance. Using S3, each instance now and in the future can have access to the S3 bucket.

Heroku's /tmp directories are unique to each dyno. So your Web Dyno saves a file in its /tmp directory, then your worker looks in its /tmp directory and cannot find it. The best option is likely refactoring your code (that way you aren't clogging up your Web Dyno's resources creating and writing files to disk). However, if you really want to avoid it, you could store your files temporarily on S3 [tutorial] or some other external storage mechanism.

I feel the following suggestions can help you optimize the code a bit, if not completely:

- Use initialization over assignment wherever possible.
- Prefer pre-increment over post-increment for better speed (believe me, it does make a difference).

Apart from that I think the code is just fine. There are some pros and cons of each DS... you gotta live with it. Happy Coding!

Yes, you can replace PHP with Dart and run it via Apache. See this article. There are also some libraries to enable connecting to MySQL (like this one). The Dart VM is unrelated to client access; the Dart VM on the server would only be used for server-side Dart. Client-side Dart is generally converted into JavaScript using dart2js (via pub build), and this will work with all modern browsers.
According to the bitnami wiki it is in /opt/megastack/apps - here you create a folder with your app and Apache serves it in a vhost you configure. It's good practice to check the /etc/httpd/ folder for the configuration files and DocumentRoot. Note: make sure that you remove the index.html file you find by default in the folder. The index.html has a higher priority in being served than the index.php in the bitnami stack configuration.

No, SQL Azure does not support filetable, nor filestreams. You can store your files in Azure Blob Storage (see How to use the Windows Azure Blob Storage Service) and store metadata about files (name, type, URL location) in SQL Azure DB. For a list of SQL Server feature limitations in Windows Azure SQL Database refer to Azure SQL Database Transact-SQL information. For a list of ALTER DATABASE options supported by Windows Azure SQL Database refer to ALTER DATABASE (Transact-SQL).

You are getting symbols instead of an image since you are trying to send binary data without specifying what that data is. Add this header to your renderPicture.php file:

header('Content-Type: image/png');

And it will return the desired png image. Cheers

In his Guru Of The Week #28 column, Herb Sutter uses a union, but it's less robust than Boost's efforts. Boost's aligned_storage solves the gory details for you. If you look at its implementation, you'll see it uses MSVC's __alignof or GCC's __alignof__ as well as another template: type_with_alignment.

From my own codebase, I once used (derived from the GOTW link above):

#if defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 150020706)
# pragma warning(push)
# pragma warning(disable: 4371)
#endif // #if (defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 150020706)
From my own codebase, I once used (derived from the GOTW link above): #if defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 150020706) # pragma warning(push) # pragma warning(disable: 4371) #endif // #if (defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 150020706) union AlignedStorage { char storage[sizeof(T)]; int16 dummy0; int32 dummy1; int64 dummy2; float dummy3; double dummy4; For Anyone whos having the same issue i solved the problem the following way: $url = ''; $fp = fopen(''); $ch=curl_init(); curl_setopt($ch, CURLOPT_HTTPHEADER, array('X-Auth-Token:sometoken')); curl_setopt($ch, CURLOPT_PUT, true); curl_setopt($ch, CURLOPT_INFILE, $fp); //this was the main one that i was missing curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_HEADER, TRUE); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT"); This seemed to resolve it and upload the file from the url to object storage server.. It is a logic bug in your code. First off, only newer versions of Windows use 5000–65534 as ephemeral ports. Older versions used 1025-5000 instead. You are creating multiple sockets that are explicitly bound to random ephemeral ports until you have bound a socket that is within 10 ports less than your target port. However, if any of those sockets happen to actually bind to the actual target port, you ignore that and keep looping. So you may or may end up with a socket that is bound to the target port, and you may or may not end up with a final port value that is actually less than the target port. After that, if port happens to be less than your target port (which is not guaranteed), you are then creating more sockets that are implicitly bound to different random available ephemera OK, I found a solution to this issue. 
Under linux it's not necessary, but under windows I discovered that if you are neither binding nor connecting, you must have transmitted something before you make the call to asynch_recieve_from(), the call to which is included within my this->asynch_receive() method. My solution, make a dummy transmission of an empty string immediately before making the asynch_receive call under windows, so the modified code becomes: m_socketPtr->set_option(boost::asio::socket_base::broadcast(true)); // If no local port is specified, default parameter is 0 // If local port is specified, bind to that port. if(localPort != 0) { boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort); m_socketPtr->bind(localEndpoint); } i Yes, the micro instances always use EBS root, so you don't have to do anything special. If you 'stop' your instance, and later start, it's moving your hard drive to another computer and rebooting. If you terminate your instance, your EBS drive will be fine as long as "delete EBS drive on termination" is not set on your drive. When you use other instances, you'll have to verify the AMI type. Not all AMIs are available in all combinations: EBS vs ephemeral 64 bit vs 32bit PVM vs HVM (Everything is PVM except the really high-end Compute Cluster) When you first get started in the cloud, EBS is a big deal. But as you get to be a cloud expert, you'll prefer non-EBS instances. EBS will only be used on a few servers, like your database or your syslog server. Most of your app should be stat Looks like the command line equates to <devicename>=<blockdevice>. So we should be able to do that in fog in one of a couple ways. The model version using your values would be something like: compute = Fog::Compute.new(...) 
compute.servers.create(
  :block_device_mapping => [
    { 'deviceName' => '/dev/sdb', 'virtualName' => 'ephemeral0' },
    { 'deviceName' => '/dev/sdc', 'virtualName' => 'ephemeral1' },
    { 'deviceName' => '/dev/sdd', 'virtualName' => 'ephemeral2' },
    { 'deviceName' => '/dev/sde', 'virtualName' => 'ephemeral3' },
  ],
  :image_id => 'ami-xxxxxxxx'
)

Or the lower-level, more direct path might look like:

compute.run_instances(
  'ami-xxxxxxxx', 1, 1,
  :block_device_mapping => [
    { 'deviceName' => '/dev/sdb', 'virtualName' => 'ephemeral0' }
  ]
)

Not a bug - but a change to the Windows 2012 AMI configuration. Previously, you could add ephemeral volumes, but it wasn't clear how many you could add. Now, to "fix" that, they've put them all there to show how many you can have, and then you remove the ones you don't want. i.e. - in most cases - all of them. I'm not convinced it's a particularly good change, but that's the answer I got from support.

You have two options to enable ephemeral storage on an instance:

1. Enable it at launch. Can be done in the console or command line tools.
2. Enable it when you register a snapshot as an AMI. By default it will not map ephemeral storage; this is something you have to explicitly enable.

It's a limitation of the network protocols. Both TCP & UDP, for instance, have 16-bit source and destination ports. Even if you could increase the number of ports, no one could address them.

- (void)viewDidLoad
{
    [super viewDidLoad];
    BarViewController *bar = [[BarViewController alloc] init];
    // does this assignment create a "strong" reference i.e. increase retain count by 1?

NO, you set the delegate as weak so the compiler will not strongly point to any object (in this case SELF) when bar.delegate is assigned. bar is a local variable so it will automatically retain the BarViewController object as long as the function does not return (increasing the count by 1 when assigned and deleting it when the function ends), or as long as you don't set it to NIL.
If you do not specify the delegate var to be weak, then yes, you will increase the retain count of SELF by 1 and you might end up with a retain cycle. That is why delegates should always be weak.

    bar.delegate = self;
    }
}

It's hard to change that behavior; you'd have to override all built-in search commands (/, n / N, *, #, etc.) and any custom (plugin) mappings. If this is bothering you, maybe :set nowrapscan is worth a try. You can then still "manually" wrap via gg / G, which will soon go into your muscle memory, yet keep you alert.

It's hard to speculate. A lot of errors come from changes to dependencies or permissions. To see the problem with a specific object, use this query:

SELECT *
FROM All_Errors
WHERE Name = '<invalid object name>'
AND Owner = '<invalid object owner>'

The <invalid object name> and <invalid object owner> (meaning the schema) must be in uppercase unless the object name was defined using double-quotes and a non-uppercase name, for example CREATE TABLE "BadDecision" AS .... To see errors for all invalid objects, do this:

SELECT *
FROM All_Errors
WHERE (Owner, Name) IN (
  SELECT Owner, Object_Name
  FROM All_Objects
  WHERE status = 'INVALID')

I am assuming you are using the gcc compiler. I just gave that a little testing and my valgrind did report the errors only when I had the program built with -O0. With optimizations on (-O3) valgrind did not report the errors. I suppose that would be because the compiler is able to remove the uninitialized jump because the value never changes and is removed entirely. Since you are doing this for exercise, you might try to figure out which compiler options work for you. In case you don't know yet what that is, look for "Debug" or "Release" build and switch to "Debug". HTH, Moose

You can try

load (scifac);
declare("`", alphabetic);
expr: (-8*m5*sin(Te)*Te`^2*L1^2-8*m4*sin(Te)*Te`^2*L1^2);
gcfac(expr);

(%o2) - 8 (m5 + m4) sin(Te) Te`^2 L1^2

I use ` (backtick) not ' (apostrophe). Maybe optimize can be useful.
git merge drivers exist to efficiently detect and resolve potential conflicts between changes in two histories. Its smudge and clean filters exist to apply and remove changes that are needed only in your worktree, that don't belong in a published history. What you're doing is a smudge, a local-only change nobody else needs. merge is a conflict-resolver, not a smudger. You've already got the code to apply your local changes even if nothing else has changed upstream, you can run that as a smudge filter directly by feeding it the same file as both inputs.

From my observation: the empty NSArray is basically a singleton instance. You can't create memory leaks in this way, because your app will always have a reference to the empty array "singleton". This will show you that all empty arrays point to the same memory address:

NSArray *array1 = [NSArray array];
NSArray *array2 = [NSArray arrayWithArray:array1];
NSArray *array3 = [NSArray arrayWithArray:@[]];
NSArray *array4 = @[];
NSArray *array5 = [@[] copy];
NSArray *array6 = [[NSArray alloc] initWithArray:[[NSArray alloc] initWithArray:@[]]];
NSLog(@"%p", array1);
NSLog(@"%p", array2);
NSLog(@"%p", array3);
NSLog(@"%p", array4);
NSLog(@"%p", array5);
NSLog(@"%p", array6);

No matter where in your application lifecycle you log the address of the empty array, it will always be the same.

So there are at least two pieces to this question: The first is if you are running in IIS in classic mode versus integrated mode. Classic mode will make things behave like IIS 6, where everything is an ISAPI filter, including ASP.NET itself. Integrated mode takes advantage of the fact that IIS 7 was rewritten from the ground-up and now uses modules instead. Secondly, the short answer of why IIS knows how to forward a URL to ASP.NET is the Routing Module in the IIS 7+ pipeline; ISAPI filters are now part of the ISAPI Filters Module.
For a visual description of how the IIS 7+ pipeline works from a Routing/URL-Rewriting perspective, read IIS URL Rewriting and ASP.NET Routing. So the good news is if you are very much attached to the ISAPI filter approach you can use the classic mode of IIS.

Use nohup to prevent child processes from being killed when the terminal closes.

spawn nohup /usr/bin/firefox

I assume there's more to the script, since there's no need to use Expect just to start firefox.

There are attributes like layout_toLeftOf or layout_below. Use them with the id of the referenced checkbox. For example:

<CheckBox
  android:id="@+id/checkbox_bike"
  android:layout_width="wrap_content"
  android:layout_height="wrap_content"
  android:text="@string/Bike" />

<CheckBox
  android:id="@+id/checkbox_car"
  android:layout_width="wrap_content"
  android:layout_height="wrap_content"
  android:text="@string/Car"
  android:layout_below="@id/checkbox_bike" />

See the attribute android:layout_below="@id/checkbox_bike" I added to the second checkbox. A short explanation: A relative layout is used to place views in "relation" to each other. Without any information they are ...

but don't know if the commented line is needed

No. Not because it's obvious but because it happens to be the default. From MSDN: Property Value One of the CommandType values. The default is Text. Having said that, it certainly is not bloat-code and I might include it, just to make the code a little less ambiguous to read.

If the main thread needs to be messaged synchronously during the operation, I'm wondering if there is a canonical solution that an operation subclasser can implement to prevent this type of deadlocking. There is. Never make a synchronous call to the main queue. And a follow-on: Never make a synchronous call from the main queue. And, really, it can be summed up as Never make a synchronous call from any queue to any other queue. By doing that, you guarantee that the main queue is not blocked.
Sure, there may be an exceptional case that tempts you to violate this rule and, even, cases where it really, truly, is unavoidable. But that very much should be the exception because even a single dispatch_sync() (or NSOpQueue waitUntilDone) has the potential to deadlock.

Timer executes in the EDT and the SwingWorker does work in other threads. I really like this Swing Worker example.

You could do something like this:

void MyLog(NSString *format, ...)
{
  va_list argList;
  va_start(argList, format);
  NSString *message = [[NSString alloc] initWithFormat:format arguments:argList];
  va_end(argList);

  // send to TestFlight
  TFLog(@"%@", message);

  // TODO: save to my log
  // { your code here }
}

And then call MyLog instead of TFLog.

Try something like this....

$json_string = file_get_contents('url');
$json_array = json_decode( $json_string, TRUE );
print '<pre>';
print_r( $json_array );

There are a couple of different ways to do this, but the basics are... Get the json string from source, decode it ( the TRUE in the json decode will decode it into an associative array ) and then you should be able to treat it like any array you want to. Good Luck.

DbContext is not thread-safe, so making it static is not a good thing for server code. The overhead of creating a DbContext is low, so I do not see why we have to avoid making it an instance variable. It should be.

Best way to know how to proceed would be to do a test and load a bunch of fake images, and makes sure you don't drop frames. Even if you use the GPU it might not be fast enough on your device, so recommend testing regardless.

Have all the cells you need available in your table view controller. In your table view's datasource methods (i.e. numberOfSections, numberOfRowsInSection, cellForRowAtIndexPath) you can set all the correct numbers and data for the two states. When the segmented control changes, you can do two things.
Either you simply reloadData, as was suggested before, but then you do not get any animations. Or you use insertRowsAtIndexPaths:withRowAnimation: and deleteRowsAtIndexPaths:withRowAnimation: to get the animation effect. E.g. showing sections 1-2 in case 1 and sections 3-6 in case 2 (section 0 being the top section); each section has 2 rows:

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
  return _control.selectedSegmentIndex == 0 ? 3 : 4;
}

- (NSInteger)tableView:

Easy enough, to be honest...

$numOfEmployees = 8;
$concatArray = array();
foreach($attendance as $k => $v) {
  $concatArray[] = array("employeeID" => $k, "attendance" => $v, "remark" => $remark[$k]);
}

If your attendance table is in anything like SQL, you could go about it with foreach on the above mentioned concatArray:

foreach($concatArray as $key => $value) {
  $remark = $value['remark']; // sanitize remark for database
  $query = "INSERT INTO employees (ID, Date, Status, Remarks) VALUES ({$value['employeeID']}, NOW(), {$value['attendance']}, '{$value['remark']}');";
  // perform the actual query
}

Keep in mind that this is a very general example, making a lot of assumptions about the database. As Frits notes, the remark field should be sanitized.

You can install things on an external hard drive, but you cannot run them there. You can only run them on the computer to which you've attached the external hard drive. In other words, you need a computer to run a LAMP stack, and if you have a computer, it doesn't matter if the computer is using an internal or external hard drive. The only difference between an internal hard drive and an external drive is that external hard drives are often (but not always) slower.
http://www.w3hello.com/questions/In-a-LAMP-stack-what-are-some-obvious-ways-to-utilize-ephemeral-SSD-storage-closed-
Julie Lerman
Published: April 2011
Download the code for this article
Video demonstrating this article

Entity Framework Code First modeling workflow. I'll demonstrate code first DataAnnotations with a simple pair of classes: Blog and Post.

public class Blog
{
  public int Id { get; set; }
  public string Title { get; set; }
  public string BloggerName { get; set; }
}

You can read the whitepaper, Building an MVC 3 App with Code First and Entity Framework 4.1, which demonstrates using these classes with Entity Framework 4.1 Code First in an MVC 3 application. I'll use the same application to demonstrate Data Annotations in action. As they are, the Blog and Post classes conveniently follow code first convention and required no tweaks to help EF work with them. But you can also use the annotations to provide more information to EF (and to MVC 3) about the classes and the database that they map to.

Entity Framework relies on every entity having a key value (aka EntityKey) that it uses for tracking entities. One of the conventions that code first depends on is how it implies which property is the key in each of the code first classes. That convention is to look for a property named "Id" or one that combines the class name and "Id", such as "BlogId". In addition to EF's use of the EntityKey, the property will map to a primary key column in the database. The Blog and Post classes both follow this convention. An integer key property that follows this convention also becomes an identity key in the database by default.

The Required annotation tells EF that a particular property is required. Adding Required to the Title property will force EF and MVC to ensure that the property has data in it.

[Required]
public string Title { get; set; }

With no additional code or markup changes in the application, my existing MVC application will perform client side validation, even dynamically building a message using the property and annotation names.
Figure 1

With MVC's client-side validation feature turned off, you will get the same response, except that it will be a result of server-side validation. Entity Framework will perform the validation on the Required annotation and return the error to MVC which will display the message.

The Required attribute will also affect the generated database by making the mapped property non-nullable. Notice that the Title field has changed to "not null". Prior to that, the default length was 128, as is the length of Title.

Figure 2

Code first convention dictates that the Blogs table will include the properties contained in its BlogDetail property. By default, each one is preceded with the name of the complex type, BlogDetail, as you can see in Figure 3.

Figure 3: The Blogs database table containing columns that map to a complex type

Another interesting note is that although the DateCreated property was defined as a non-nullable DateTime in the class, the relevant database field is nullable. You must use the Required annotation if you wish to affect the database schema.

The ConcurrencyCheck annotation allows you to flag one or more properties to be used for concurrency checking in the database when a user edits or deletes an entity. If you've been working with the Entity Data Model, this aligns with setting a property's ConcurrencyMode:

[ConcurrencyCheck]
public string BloggerName { get; set; }

Concurrency checking depends on having access to the original value of the property. If the context instance that originally queried for a blog stays in memory, it will retain knowledge of the original values of the blog along with its current values. But in a disconnected application, such as the MVC application being used, you'll need to remind the context of that original value. You can do that using the db.Entry property method before calling SaveChanges. In the following Edit postback method of the sample MVC application, the UI has returned the original value of the blogger name in the originalName parameter.
Then it is applied to the DbContext's tracking entry by setting the Property's OriginalValue as you can see in the line of code just before SaveChanges is called.

[HttpPost]
public ActionResult Edit(int id, string originalName, Blog blog)
{
  try
  {
    db.Entry(blog).State = System.Data.EntityState.Modified;
    db.Entry(blog).Property(b => b.BloggerName).OriginalValue = originalName;
    db.SaveChanges();
    return RedirectToAction("Index");
  }
  catch
  {
    return View();
  }
}

You can also use the Table attribute when code first is creating the database for you if you simply want the table name to be different than the convention.

An important database feature is the ability to have computed properties. If you're mapping your code first classes to tables that contain computed properties, you can use the DatabaseGenerated annotation. By default, a key property that is an integer will become an identity key in the database. That would be the same as setting DatabaseGenerated to DatabaseGenerationOption.Identity. If you do not want it to be an identity key, you can set the value to DatabaseGenerationOption.None.

Figure 4 shows the constraint in the database that creates a relationship between InternalBlogs.PrimaryTrackingKey and Posts.BlogId.

Figure 4
https://msdn.microsoft.com/en-us/data/gg193958
on 4/29/03 10:56 AM Stephan Michels wrote:

>> Okay, the namespaces already exist and it's only names, so it
>> would be the easiest to keep the existing pattern, but please
>> avoid creating namespace variants based on version numbers.
>
> I agree here, you don't gain a benefit using a version within
> a namespace.

Contracts need to be uniquely identified. There is no way out of this. There are three simple ways to uniquely identify an XML markup:

1) versioning the namespace (as we do in the sitemap, for example)
2) a version="" attribute and methodology (as done in XSLT)
3) a version="" pseudoattribute in a PI (as XML itself does)

In my personal quest against PIs, I rule #3 out. The other two remain. I personally believe that #2 is simply too complex for what we need. So #1 remains.

> The namespaces should only help to get an information,
> where these elements come from. Moreover, using versions allows to
> use different versions of the same element within a document.
>
> So here my -1,

We need to be able to clearly identify a contract. Without a versioning system, we can't. So, either you propose an alternative unique identification of the contract (not of their families!) or your -1 is useless because it abolishes existing practices without providing an alternative to something that is already in place.

-- Stefano.
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200304.mbox/%3C3EAFE0D9.2090005@apache.org%3E
In this section you will learn about the conversion of numeric String data into long in Java. Converting a string to long is similar to converting a string to int in Java; if you can convert a string to an integer, then you can convert a string to long by the same procedure. You need to know that long is a primitive type which is wrapped by the Long wrapper class, and String is an object that is a sequence of characters.

Converting String to long using the valueOf() method: You can convert a string to long using the valueOf() method of the java.lang.Long class, which accepts a string as its argument and returns an equivalent Long value. If you provide the wrong input - by wrong input we mean a non-numeric string, or a value which is beyond the range of long - then a NumberFormatException is thrown.

long value = Long.valueOf("1234");

Converting String into long using parseLong(): parseLong() is another method from the java.lang.Long class which converts a String into a long value. parseLong() also throws NumberFormatException if the String provided cannot be converted into a long. It also supports an overloaded method that accepts a radix, so you can parse octal or hexadecimal string values. parseLong() is widely used for conversion from String to long in Java. parseLong() also accepts a negative value, indicated by a minus sign as the first character of the string.

A simple example using both these methods:

public class StringToLong {
  public static void main(String args[]) {
    // Conversion using the parseLong() method
    long value = Long.parseLong("1234");
    System.out.println("Using parseLong() -> " + value);

    // Conversion using the valueOf() method
    String str = "1234";
    long val = Long.valueOf(str);
    System.out.println("Using valueOf() -> " + val);
  }
}

Output from the program:
http://www.roseindia.net/java/java-conversion/java-StringToLong.shtml
Yes and no. Take the 2010 source zip file (IronPython-2.6.1-Src-Net40.zip) from and run:

  Msbuild /t:rebuild /p:configuration="V4 Release" IronPython4.sln

from the src directory. If you have only .NET 4.0 installed (that is, no VS 2010 installation), it should work. If on the other hand you have VS 2010 installed and try building this from an SDK command prompt you'll likely end up with some error messages about "The type or namespace name 'Utils' does not exist in the namespace 'System.Dynamic' (are you missing an assembly reference?)". If the latter is your case, the underlying issue has already been fixed and we should have the IronPython_Main branch on CodePlex setup with the VS 2010 solution within the next couple of weeks.

Dave

-----Original Message-----
From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Max Yaffe
Sent: Monday, April 26, 2010 1:53 PM
To: users at lists.ironpython.com
Cc: Jimmy Schementi
Subject: [IronPython] IronPython Source & VS2010

Can IronPython 2.6.1 be compiled in VS 2010 - release version?

Thanks,
Max
https://mail.python.org/pipermail/ironpython-users/2010-April/012680.html
The utility of WMI classes is huge and there are a lot of WMI classes to interact with, which is of course an awesome thing. But since there are so many WMI classes, sometimes the challenge is figuring out which class you will need for your specific requirement. Today, let's discuss a WMI class which you will need for interacting with the registry - and the class is StdRegProv.

A little bit of detail on the StdRegProv class before we start: StdRegProv has been a part of the Root\Default WMI namespace. However, starting from Windows Vista it is also a part of the Root\CIMV2 namespace. Therefore, one can create an instance of the StdRegProv class from the Root\CIMV2 or Root\Default namespace. Consider using the Root\Default namespace if you want your script to be compatible with older versions of the Windows Operating System as well.

There is one more thing I want to focus your attention on about this class. You will need the following specific numeric values to interact with specific registry hives. Below are the hives and their corresponding numeric values:

Name                  Value
HKEY_CLASSES_ROOT     2147483648
HKEY_CURRENT_USER     2147483649
HKEY_LOCAL_MACHINE    2147483650
HKEY_USERS            2147483651
HKEY_CURRENT_CONFIG   2147483653

Not to digress, but while writing these values I just realized that this one makes for an excellent example of use of a HashTable, doesn't it! Just create a HashTable with the keys as the hive names and assign the corresponding number as the value of each key. Below is an example of what I am talking about:

$regkey = @{
  "HKEY_CLASSES_ROOT"   = 2147483648;
  "HKEY_CURRENT_USER"   = 2147483649;
  "HKEY_LOCAL_MACHINE"  = 2147483650;
  "HKEY_USERS"          = 2147483651;
  "HKEY_CURRENT_CONFIG" = 2147483653
}

Now, it indeed becomes easy to access the numeric value of any hive (e.g. $regkey["HKEY_CLASSES_ROOT"] ). Having understood the important pieces of the StdRegProv class, now let's proceed with looking at some useful methods of this class. For this post, I am using the class from the Root\Default namespace. Following are the two ways in which we can create a reference to this WMI class:

1. $regClass = [WMIClass]"root\default:StdRegProv"

2. $regClass = Get-WmiObject -List StdRegProv -Namespace root\default
If you have any doubts on the above 2 methods of referencing a WMI class please refer to my "Introduction to WMI" () blog post. The advantage of using Get-WMIObject over the [WMIClass] type accelerator is that Get-WMIObject has a -ComputerName parameter. That means you can fetch registry information from a remote machine very easily using the Get-WMIObject cmdlet. That is:

Get-WmiObject -List StdRegProv -Namespace root\default -ComputerName RemoteComputerName

Using a Get-Member on the $regClass will list all the static members of the class. Formatting this output in a tabular format and wrapping the contents will list all the parameters which are expected by each method as well ( $regClass | Get-Member | Format-Table -Wrap ).

We can use any of the Get.. methods in order to get a value from the registry. Let's say we want to retrieve a String value from the registry; in that case we will use the GetStringValue method. In the example below, we are reading the value of the ExecutionPolicy key in the Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell folder. As a reminder, $regkey is a HashTable we created earlier and HKEY_LOCAL_MACHINE is one of the keys of the HashTable.

$regClass.GetStringValue($regkey["HKEY_LOCAL_MACHINE"], "Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell", "ExecutionPolicy")

In the output, ReturnValue 0 tells us that the operation was successful and sValue is the actual value. If you want to read just the output, you can Select sValue from the output as shown in the example above.

Just like you can read different types of values using this class, you can read the keys and the values as well using this class. Example:

$regClass.EnumKey($regkey["HKEY_LOCAL_MACHINE"], "Software\Microsoft\PowerShell\1\ShellIds") | Select-Object -ExpandProperty sNames

sNames is an array returned by EnumKey, hence we chose to use the ExpandProperty parameter on sNames to display all the Keys on a particular path. (Not sure what the ExpandProperty parameter does? Check out my blog post on the ExpandProperty parameter). Just like you have EnumKey, there is also an EnumValues method, which lists the value names in a particular key. I think this post would remain incomplete if I didn't mention the Invoke-WMIMethod cmdlet.
Since we are calling static methods on the WMI class, we can also use Invoke-WMIMethod to call all the methods we have discussed in this post. I want to give you one example of using Invoke-WMIMethod (to call the EnumValues method):

Invoke-WmiMethod -Namespace root\default -Class StdRegProv -Name EnumValues -ArgumentList $regkey["HKEY_LOCAL_MACHINE"], "Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell"

Just like we could call EnumValues, we can call any of the methods we have discussed in this blog post. According to me, one of the most important advantages offered by these cmdlets (Get-WMIObject and Invoke-WMIMethod) is that you have a -ComputerName and a -Credential parameter. That means you can connect to a remote machine using different credentials when you are using these WMI cmdlets.

Hopefully, this post gave you a good enough idea of how to interact with the registry on a local or remote machine using WMI. Using this class you can create as well as delete keys or values. Doing a Get-Member along with a Format-Table -Wrap on the output will give you a good enough understanding of leveraging the different methods further. Just keep in mind that whenever you see the hDefKey (Int32) parameter for any of these methods, you are supposed to pass the numeric value of a registry hive to that method. This approach should be useful if you want to interact with the registry on remote machines and don't have PowerShell v2 Remoting enabled on your remote machine yet. If you have PowerShell v2 Remoting enabled, it's just about using the *-Item and *-ItemProperty related cmdlets to interact with the Registry instead of doing this!
http://blogs.msdn.com/b/vishinde/archive/2012/09/16/interacting-with-system-registry-using-wmi.aspx
The QBuffer class is an I/O device that operates on a QByteArray. More...

All the functions in this class are reentrant when Qt is built with thread support.

#include <qbuffer.h>

Inherits QIODevice.

List of all member functions.

QBuffer is used to read and write to a memory buffer. It is normally used with a QTextStream or a QDataStream. QBuffer has an associated QByteArray which holds the buffer data. The size() of the buffer is automatically adjusted as data is written.

The constructor QBuffer(QByteArray) creates a QBuffer using an existing byte array. The byte array can also be set with setBuffer(). Writing to the QBuffer will modify the original byte array because QByteArray is explicitly shared.

Use open() to open the buffer before use and to set the mode (read-only, write-only, etc.). close() closes the buffer. The buffer must be closed before reopening or calling setBuffer().

A common way to use QBuffer is through QDataStream or QTextStream, which have constructors that take a QBuffer parameter. For convenience, there are also QDataStream and QTextStream constructors that take a QByteArray parameter. These constructors create and open an internal QBuffer.

Note that QTextStream can also operate on a QString (a Unicode string); a QBuffer cannot.

You can also use QBuffer directly through the standard QIODevice functions readBlock(), writeBlock(), readLine(), at(), getch(), putch() and ungetch().

See also QFile, QDataStream, QTextStream, QByteArray, Shared Classes, Collection Classes, and Input/Output and Networking.

If you open the buffer in write mode (IO_WriteOnly or IO_ReadWrite) and write something into the buffer, buf will be modified. Example:

    QCString str = "abc";
    QBuffer b( str );
    b.open( IO_WriteOnly );
    b.at( 3 );                 // position at the 4th character (the terminating \0)
    b.writeBlock( "def", 4 );  // write "def" including the terminating \0
    b.close();
    // Now, str == "abcdef" with a terminating \0

See also setBuffer().

Returns this buffer's byte array.
See also setBuffer(). Does nothing (and returns FALSE) if isOpen() is TRUE. Note that if you open the buffer in write mode (IO_WriteOnly or IO_ReadWrite) and write something into the buffer, buf is also modified because QByteArray is an explicitly shared class. See also buffer(), open(), and close(). Returns -1 if an error occurred. See also readBlock(). Reimplemented from QIODevice. This convenience function is the same as calling writeBlock( data.data(), data.size() ) with data. This file is part of the Qt toolkit. Copyright © 1995-2007 Trolltech. All Rights Reserved.
http://idlebox.net/2007/apidocs/qt-x11-free-3.3.8.zip/qbuffer.html
Consuming REST APIs In React With Fetch And Axios

Consuming REST APIs in a React application can be done in various ways, but in this tutorial, we will be discussing how we can consume REST APIs using two of the most popular methods known as Axios (a promise-based HTTP client) and the Fetch API (a browser in-built web API). I will discuss and implement each of these methods in detail and shed light on some of the cool features each of them has to offer.

APIs are what we can use to supercharge our React applications with data. There are certain operations that can't be done on the client-side, so these operations are implemented on the server-side. We can then use the APIs to consume the data on the client-side. APIs consist of a set of data that is often in JSON format with specified endpoints. When we access data from an API, we want to access specific endpoints within that API framework. We can also say that an API is a contractual agreement between two services over the shape of request and response. The code is just a byproduct. It also contains the terms of this data exchange.

In React, there are various ways we can consume REST APIs in our applications; these include using the JavaScript inbuilt fetch() method and Axios, which is a promise-based HTTP client for the browser and Node.js.

Note: A good knowledge of ReactJS, React Hooks, JavaScript and CSS will come in handy as you work your way throughout this tutorial.

Let's get started with learning more about the REST API.

What Is A REST API

A REST API is an API that is structured in accordance with the rules of REST. REST stands for "Representational State Transfer". It consists of various rules that developers follow when creating APIs.
The Benefits Of REST APIs

- Very easy to learn and understand;
- It provides developers with the ability to organize complicated applications into simple resources;
- It is easy for external clients to build on your REST API without any complications;
- It is very easy to scale;
- A REST API is not language or platform-specific, but can be consumed with any language or run on any platform.

An Example Of A REST API Response

The way a REST API is structured depends on the product it's been made for — but the rules of REST must be followed. The sample response below is from the Github Open API. We'll be using this API to build a React app later on in this tutorial.

{
  "login": "hacktivist123",
  "id": 26572907,
  "node_id": "MDQ6VXNlcjI2NTcyOTA3",
  "avatar_url": "",
  "gravatar_id": "",
  "url": "",
  "html_url": "",
  "followers_url": "",
  "following_url": "{/other_user}",
  "gists_url": "{/gist_id}",
  "starred_url": "{/owner}{/repo}",
  "subscriptions_url": "",
  "organizations_url": "",
  "repos_url": "",
  "events_url": "{/privacy}",
  "received_events_url": "",
  "type": "User",
  "site_admin": false,
  "name": "Shedrack akintayo",
  "company": null,
  "blog": "",
  "location": "Lagos, Nigeria ",
  "email": null,
  "hireable": true,
  "bio": "☕ Software Engineer | | Developer Advocate🥑|| ❤ Everything JavaScript",
  "public_repos": 68,
  "public_gists": 1,
  "followers": 130,
  "following": 246,
  "created_at": "2017-03-21T12:55:48Z",
  "updated_at": "2020-05-11T13:02:57Z"
}

The response above is from the Github REST API when I make a GET request to the following endpoint. It returns all the stored data about a user called hacktivist123. With this response, we can decide to render it whichever way we like in our React app.

Consuming APIs Using The Fetch API

The fetch() API is an inbuilt JavaScript method for getting resources from a server or an API endpoint. It's similar to XMLHttpRequest, but the fetch API provides a more powerful and flexible feature set.
It defines concepts such as CORS and the HTTP Origin header semantics, supplanting their separate definitions elsewhere.

The fetch() API method always takes in a compulsory argument, which is the path or URL to the resource you want to fetch. It returns a promise that points to the response from the request, whether the request is successful or not. You can also optionally pass in an init options object as the second argument. Once a response has been fetched, there are several inbuilt methods available to define what the body content is and how it should be handled.

The Difference Between The Fetch API And jQuery Ajax

The Fetch API is different from jQuery Ajax in three main ways, which are:

- The promise returned from a fetch() request will not reject when there's an HTTP error, no matter the nature of the response status. Instead, it will resolve the request normally; if the response status code is a 400 or 500 type code, it'll set the ok status to false. A request will only be rejected either because of network failure or if something is preventing the request from completing.
- fetch() will not allow the use of cross-site cookies, i.e. you cannot carry out a cross-site session using fetch().
- fetch() will also not send cookies by default unless you set the credentials in the init option.

Parameters For The Fetch API

resource
This is the path to the resource you want to fetch; this can either be a direct link to the resource path or a request object.

init
This is an object containing any custom setting or credentials you'd like to provide for your fetch() request. The following are a few of the possible options that can be contained in the init object:

method
This is for specifying the HTTP request method, e.g. GET, POST, etc.

headers
This is for specifying any headers you would like to add to your request, usually contained in an object or an object literal.
body
This is for specifying a body that you want to add to your request: this can be a Blob, BufferSource, FormData, URLSearchParams, USVString, or ReadableStream object.

mode
This is for specifying the mode you want to use for the request, e.g., cors, no-cors, or same-origin.

credentials
This is for specifying the request credentials you want to use for the request; this option must be provided if you consider sending cookies automatically for the current domain.

Basic Syntax for Using the Fetch() API

A basic fetch request is really simple to write, take a look at the following code:

fetch('')
  .then(response => response.json())
  .then(data => console.log(data));

In the code above, we are fetching data from a URL that returns data as JSON and then printing it to the console. The simplest form of using fetch() often takes just one argument, which is the path to the resource you want to fetch, and then returns a promise containing the response from the fetch request. This response is an object.

The response is just a regular HTTP response and not the actual JSON. In order to get the JSON body content from the response, we'd have to change the response to actual JSON using the json() method on the response.

Using Fetch API In React Apps

Using the Fetch API in React apps is the normal way we'd use the Fetch API in JavaScript; there is no change in syntax. The only issue is deciding where to make the fetch request in our React app. Most fetch requests, or any HTTP request of any sort, are usually done in a React Component. This request can either be made inside a Lifecycle Method if your component is a Class Component or inside a useEffect() React Hook if your component is a Functional Component.

For example, in the code below, we will make a fetch request inside a class component, which means we'll have to do it inside a lifecycle method.
In this particular case, our fetch request will be made inside a componentDidMount lifecycle method because we want to make the request just after our React component has mounted. import React from 'react'; class MyComponent extends React.Component { componentDidMount() { const apiUrl = ''; fetch(apiUrl) .then((response) => response.json()) .then((data) => console.log('This is your data', data)); } render() { return <h1>my Component has Mounted, Check the browser 'console' </h1>; } } export default MyComponent; In the code above, we are creating a very simple class component that makes a fetch request to the API URL and logs the final data from that request into the browser console after the React component has finished mounting. The fetch() method takes in the path to the resource we want to fetch, which is assigned to a variable called apiUrl. After the fetch request has completed, it returns a promise that contains a response object. Then we are extracting the JSON body content from the response using the json() method; finally, we log the final data from the promise into the console. Let's Consume A REST API With Fetch Method In this section, we will be building a simple React application that consumes an external API using the Fetch method. The application will display all the repositories, and their descriptions, that belong to a particular user. For this tutorial, I'll be using my GitHub username; you can also use yours if you wish. The first thing we need to do is to generate our React app by using create-react-app: npx create-react-app myRepos The command above will bootstrap a new React app for us. As soon as our new app has been created, all that's left to do is to run the following command and begin coding: npm start If our React app was created properly, we should see this in our browser window when we navigate to localhost:3000 after running the above command.
In your src folder, create a new folder called components. This folder will hold all of our React components. In the new folder, create two files titled List.js and withListLoading.js. These two files will hold the components that will be needed in our app. The List.js file will handle the display of our repositories in the form of a list, and the withListLoading.js file will hold a higher-order component that will be displayed while the fetch request we will be making is still ongoing. In the List.js file we created inside the components folder, let's paste in the following code: import React from 'react'; const List = (props) => { const { repos } = props; if (!repos || repos.length === 0) return <p>No repos, sorry</p>; return ( <ul> <h2 className='list-head'>Available Public Repositories</h2> {repos.map((repo) => { return ( <li key={repo.id}> <span className='repo-text'>{repo.name} </span> <span className='repo-description'>{repo.description}</span> </li> ); })} </ul> ); }; export default List; The code above is a basic React list component that displays the data, in this case the repositories' names and their descriptions, in a list. Now, let me explain the code bit by bit. const { repos } = props; Here we are extracting the repos prop that is passed into the component. if (!repos || repos.length === 0) return <p>No repos, sorry</p>; Here, all we are doing is making a conditional statement that renders a message when there are no repos to show, either because the prop is missing or because the list we got from the request is empty. return ( <ul> <h2 className='list-head'>Available Public Repositories</h2> {repos.map((repo) => { return ( <li key={repo.id}> <span className='repo-text'>{repo.name} </span> <span className='repo-description'>{repo.description}</span> </li> ); })} </ul> ); Here, we are mapping through each of the repositories that will be provided by the API request, extracting each repository's name and description, and displaying each of them in a list.
export default List; Here we are exporting our List component so that we can use it somewhere else. In the withListLoading.js file we created inside the components folder, let's paste in the following code: import React from 'react'; function WithListLoading(Component) { return function WithLoadingComponent({ isLoading, ...props }) { if (!isLoading) return <Component {...props} />; return ( <p style={{ textAlign: 'center', fontSize: '30px' }}> Hold on, fetching data may take some time :) </p> ); }; } export default WithListLoading; The code above is a higher-order React component that takes in another component and then returns some logic. In our case, our higher-order component will check whether the current isLoading prop of the component it takes is true or false. If the current isLoading value is true, it will display the message Hold on, fetching data may take some time :). As soon as isLoading changes to false, it'll render the component it took in. In our case, it'll render the List component.
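The wrapper logic above does not depend on React at all. Here is a minimal sketch of the same higher-order pattern using plain functions, with strings standing in for components and JSX (the names and returned strings are illustrative, not part of the app):

```javascript
// A plain-function sketch of WithListLoading: the wrapper short-circuits
// with a loading message until isLoading turns false, then delegates.
function withLoading(render) {
  return function ({ isLoading, ...props }) {
    if (isLoading) return 'Hold on, fetching data may take some time :)';
    return render(props);
  };
}

// A stand-in for the List component: takes props, returns "rendered" output.
const renderList = ({ repos }) => `Showing ${repos.length} repos`;
const guarded = withLoading(renderList);

console.log(guarded({ isLoading: true, repos: [] }));
console.log(guarded({ isLoading: false, repos: ['a', 'b', 'c'] })); // Showing 3 repos
```

Swap the returned strings for JSX and you are back at the WithListLoading component above.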
In your App.js file inside the src folder, let's paste in the following code: import React, { useEffect, useState } from 'react'; import './App.css'; import List from './components/List'; import withListLoading from './components/withListLoading'; function App() { const ListLoading = withListLoading(List); const [appState, setAppState] = useState({ loading: false, repos: null, }); useEffect(() => { setAppState({ loading: true }); const apiUrl = ``; fetch(apiUrl) .then((res) => res.json()) .then((repos) => { setAppState({ loading: false, repos: repos }); }); }, [setAppState]); return ( <div className='App'> <div className='container'> <h1>My Repositories</h1> </div> <div className='repo-container'> <ListLoading isLoading={appState.loading} repos={appState.repos} /> </div> <footer> <div className='footer'> Built with{' '} <span role='img' aria-💚</span>{' '} by Shedrack Akintayo </div> </footer> </div> ); } export default App; Our App.js is a functional component that makes use of React Hooks for handling state and side effects. If you're not familiar with React Hooks, read my Getting Started with React Hooks Guide. Let me explain the code above bit by bit. import React, { useEffect, useState } from 'react'; import './App.css'; import List from './components/List'; import withListLoading from './components/withListLoading'; Here, we are importing all the external files we need and also the components we created in our components folder. We are also importing the React Hooks we need from React. const ListLoading = withListLoading(List); const [appState, setAppState] = useState({ loading: false, repos: null, }); Here, we are creating a new component called ListLoading by wrapping our List component with the withListLoading higher-order component. We are then creating our app state, with a loading flag and a repos value, using the useState() React Hook.
useEffect(() => { setAppState({ loading: true }); const user = ``; fetch(user) .then((res) => res.json()) .then((repos) => { setAppState({ loading: false, repos: repos }); }); }, [setAppState]); Here, we are initializing a useEffect() React Hook. In the useEffect() hook, we are setting our initial loading state to true; while this is true, our higher-order component will display a message. We are then creating a constant variable called user and assigning it the API URL we'll be getting the repositories data from. We are then making a basic fetch() request like we discussed above; after the request is done, we set the app loading state to false and populate the repos state with the data we got from the request. return ( <div className='App'> <div className='container'> <h1>My Repositories</h1> </div> <div className='repo-container'> <ListLoading isLoading={appState.loading} repos={appState.repos} /> </div> </div> ); } export default App; Here we are basically just rendering the component returned by our higher-order component and filling the isLoading and repos props with their state values. Now, we should see this in our browser while the fetch request is still being made, courtesy of our withListLoading higher-order component: Now, when the fetch request has completed successfully, we should see the repositories displayed in a list format as below: Now, let's style our project a little bit. In your App.css file, copy and paste this code.
@import url(''); :root { --basic-color: #23cc71; } .App { box-sizing: border-box; display: flex; justify-content: center; align-items: center; flex-direction: column; font-family: 'Amiri', serif; overflow: hidden; } .container { display: flex; flex-direction: row; } .container h1 { font-size: 60px; text-align: center; color: var(--basic-color); } .repo-container { width: 50%; height: 700px; margin: 50px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.3); overflow: scroll; } @media screen and (max-width: 600px) { .repo-container { width: 100%; margin: 0; box-shadow: none; } } .repo-text { font-weight: 600; } .repo-description { font-weight: 600; font-style: bold; color: var(--basic-color); } .list-head { text-align: center; font-weight: 800; text-transform: uppercase; } .footer { font-size: 15px; font-weight: 600; } .list { list-style: circle; } So in the code above, we are styling our app to look more pleasing to the eyes, we have assigned various class names to each element in our App.js file and thus we are using these class names to style our app. Once we’ve applied our styling, our app should look like this: Now our app looks much better. 😊 So that’s how we can use the Fetch API to consume a REST API. In the next section, we’ll be discussing Axios and how we can use it to consume the same API in the same App. Consuming APIs With Axios Axios is an easy to use promise-based HTTP client for the browser and node.js. Since Axios is promise-based, we can take advantage of async and await for more readable and asynchronous code. With Axios, we get the ability to intercept and cancel request, it also has a built-in feature that provides client-side protection against cross-site request forgery. 
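Because Axios calls return promises, they compose naturally with async/await. Here is a minimal sketch of that style, with a made-up stub standing in for axios.get() so it runs without a network; the URL string and repo names are illustrative only:

```javascript
// A stub request mimicking the shape of an axios response ({ data: ... }).
const fakeAxiosGet = (url) =>
  Promise.resolve({ data: { url, repos: ['repo-1', 'repo-2'] } });

// async/await reads top-to-bottom instead of chaining .then() calls.
async function loadRepos() {
  const response = await fakeAxiosGet('https://api.github.com/users/your-username/repos');
  return response.data.repos;
}

loadRepos().then((repos) => console.log(repos.length)); // logs 2
```

With a real axios.get() in place of the stub, the function body stays exactly the same.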
Features Of Axios - Request and response interception - Streamlined error handling - Protection against XSRF - Support for upload progress - Response timeout - The ability to cancel requests - Support for older browsers - Automatic JSON data transformation Making Requests With Axios Making HTTP requests with Axios is quite easy. The code below is basically how to make an HTTP request. // Make a GET request axios({ method: 'get', url: '', }); // Make a Post Request axios({ method: 'post', url: '/login', data: { firstName: 'shedrack', lastName: 'akintayo' } }); The code above shows the basic ways we can make a GET and POST HTTP request with Axios. Axios also provides a set of shorthand methods for performing different HTTP requests. The methods are as follows: axios.request(config) axios.get(url[, config]) axios.delete(url[, config]) axios.head(url[, config]) axios.options(url[, config]) axios.post(url[, data[, config]]) axios.put(url[, data[, config]]) axios.patch(url[, data[, config]]) For example, if we want to make requests similar to the example code above but with the shorthand methods, we can do it like so: // Make a GET request with a shorthand method axios.get(''); // Make a Post Request with a shorthand method axios.post('/signup', { firstName: 'shedrack', lastName: 'akintayo' }); In the code above, we are making the same requests as we did above, but this time with the shorthand methods. Axios provides flexibility and makes your HTTP requests even more readable.
For example, we can make multiple requests to the GitHub API using the axios.all() method like so: axios.all([ axios.get(''), axios.get('') ]) .then(response => { console.log('Date created: ', response[0].data.created_at); console.log('Date created: ', response[1].data.created_at); }); The code above makes the requests in the array simultaneously, in parallel, and returns the response data; in our case, it will log to the console the created_at object from each of the API responses. Let's Consume A REST API With Axios Client In this section, all we'll be doing is replacing the fetch() method with Axios in our existing React application. All we need to do is to install Axios and then use it in our App.js file for making the HTTP request to the GitHub API. Now let's install Axios in our React app by running either of the following: With NPM: npm install axios With Yarn: yarn add axios After installation is complete, we have to import axios into our App.js. In our App.js we'll add the following line to the top of the file: import axios from 'axios' After adding that line to our App.js, all we have to do inside our useEffect() is to write the following code: useEffect(() => { setAppState({ loading: true }); const apiUrl = ''; axios.get(apiUrl).then((repos) => { const allRepos = repos.data; setAppState({ loading: false, repos: allRepos }); }); }, [setAppState]); You may have noticed that we have now replaced the Fetch API with the Axios shorthand method axios.get to make a GET request to the API. axios.get(apiUrl).then((repos) => { const allRepos = repos.data; setAppState({ loading: false, repos: allRepos }); }); In this block of code, we are making a GET request; the promise it returns contains the repos data, and we assign that data to a constant variable called allRepos. We are then setting the current loading state to false and passing the data from the request to the repos state variable.
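Under the hood, the axios.all() call shown earlier behaves like Promise.all(): it resolves only once every request in the array has resolved. The pattern can be sketched with stub promises standing in for the two API calls (the created_at dates are made up):

```javascript
// Stub promises standing in for two axios.get() calls; each resolves to an
// object shaped like an axios response ({ data: ... }).
const repoRequest = Promise.resolve({ data: { created_at: '2019-01-01' } });
const userRequest = Promise.resolve({ data: { created_at: '2015-06-15' } });

// Resolves only once BOTH stubs have resolved, mirroring axios.all().
Promise.all([repoRequest, userRequest]).then((responses) => {
  responses.forEach((response) => {
    console.log('Date created:', response.data.created_at);
  });
});
```

The same pattern works with real axios.get() calls in place of the stubs.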
If we did everything correctly, we should see our app still render the same way without any change. So this is how we can use the Axios client to consume a REST API. Fetch vs Axios In this section, I will be listing out certain features and then talk about how well Fetch and Axios support them. Basic Syntax Both Fetch and Axios have very simple syntaxes for making requests. But Axios has an upper hand because it automatically converts a response to JSON, so when using Axios we skip the step of converting the response to JSON, unlike fetch() where we'd still have to convert the response to JSON. Lastly, Axios's shorthand methods make specific HTTP requests easier to write. Browser Compatibility One of the many reasons why developers prefer Axios over Fetch is that Axios is supported across major browsers and versions, unlike Fetch, which is only supported in Chrome 42+, Firefox 39+, Edge 14+, and Safari 10.1+. Handling Response Timeout Setting a timeout for responses is very easy to do in Axios by making use of the timeout option inside the request object. In Fetch, it is not that easy to do this. Fetch provides a similar feature by using the AbortController() interface, but it takes more time to implement and can get confusing. Intercepting HTTP Requests Axios allows developers to intercept HTTP requests. HTTP interceptors are needed when we need to change HTTP requests from our application to the server. Interceptors give us the ability to do that without having to write extra code. Making Multiple Requests Simultaneously Axios allows us to make multiple HTTP requests with the use of the axios.all() method (I talked about this above). fetch() provides the same feature with the use of the Promise.all() method; we can make multiple fetch() requests inside it.
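The timeout comparison above boils down to racing a request against a timer. Here is a hedged sketch of that idea, with a deliberately slow stub promise standing in for a real fetch() or Axios call; this is the behavior Axios's timeout option gives you out of the box, and roughly what AbortController lets you build around fetch():

```javascript
// Race any promise against a timer, rejecting if the timer fires first.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('Request timed out')), ms);
  });
  // Clear the timer either way so the process can exit cleanly.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// A slow stub request (200 ms) raced against a 50 ms timeout.
const slowRequest = new Promise((resolve) => setTimeout(resolve, 200, 'done'));
withTimeout(slowRequest, 50).catch((err) => console.log(err.message)); // Request timed out
```

Note that this race alone does not cancel the underlying request; for fetch(), actually aborting the network call is what AbortController's signal is for.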
Conclusion Axios and fetch() are both great ways of consuming APIs, but I advise you to use fetch() when building relatively small applications and to make use of Axios when building large applications, for scalability reasons. I hope you enjoyed working through this tutorial; you can always read more on consuming REST APIs with either Fetch or Axios from the references below. If you have any questions, you can leave them in the comments section below and I'll be happy to answer every single one. - The supporting repo for this article is available on GitHub. Related Resources - "REST API Structure," - "Understanding And Using REST APIs," Zell Liew - "CORS," - "HTTP Headers," - "Fetch API," Mozilla Developer Network - "Using Axios And React," Paul Halliday - "How To Make HTTP Requests Like A Pro With Axios," Faraz Kelhini
For one, you can set up a multi-project solution and implement the UI as a C# or VB UI project, and code the rest of the application logic in F#. This works so well that it is hard to argue that you could do better unless you had in turn the ability to design the UI with F# code behind. But after you start making incremental changes to your UI project, various small issues start to crop up. One such issue is the accessibility of certain UI elements that you need for wiring event handlers; these need to be manually made public in your UI library project. In this article, you will see two different approaches for making your UI coding easier and still retain all your application code in F#. In the first approach, you will develop a DSL to describe UI elements and their layout, and then implement a tool that generates F# code from this specification. While this has a sizeable gain over writing ordinary UI code in F# it also has some drawbacks: any errors you make are not discovered until you generated F# code and tried to compile it. In the second part of the article, to remedy the problems in the first approach, you need to develop a computation expression notation for building UI code directly in F#. Besides retaining a succinct declarative style, this also has the added benefit of all code being in F#—so you get type safety and the discovery of errors is instantaneous. #light open System open System.Windows.Forms [<STAThread>] let _ = let form = new Form(Text="Example", Width=400, Height=300) new Button(Text="Click me...") |> form.Controls.Add new Button(Text="And me...", Left=80) |> form.Controls.Add form |> Application.Run Listing 1 shows an example (Notepad.form) of a DSL that is able to support all the above. This snippet defines two forms, an empty one (AboutForm) and a more complex main form (MainForm). 
In this DSL, you can introduce a new control by the plus (+) symbol, give its type (for instance, Form) and name (for instance, MainForm) followed by an optional set of constructor arguments (for instance, mFile("&File")), and an optional set of property assignments in the form of <minus><property><space><value> (for instance, -Width 400). Property values can be of the usual types, with a "code" type being something extra; this can be given inside brackets as a string (for example, look at the shortcut properties of most menu items). Child controls can be nested inside the block delimited by braces, with an optional with <property> prefix that lets you designate the parent control property to which the child nodes are added (this property should support the Add method to add a new item). By default, this property is assumed to be Controls. Occasionally, you need the ability to specify that certain child controls are assigned to a different property of the parent control; this you can do with an optional as <property> suffix after giving the control's name. For example, the MainForm form has two child controls: a MainMenu control that is added as the form's Menu, and a Panel that is added to the form's Controls list. Each control in this DSL is named, but there are times when a given name is irrelevant. For example, above you used the same name for separator menu items. By using +: to introduce a new control, you can essentially cause it not to be exposed, thus its name becomes irrelevant. Furthermore, as you might have spotted with the Click event handler for the exit menu item, by using ++<event-name> you can introduce a new event handler for the parent control and specify its code as a string. Without further ado on the syntax, you can proceed to create an AST definition to hold the necessary information for the above DSL.
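To make the rules above concrete, here is a plausible reconstruction of what a fragment of the Notepad.form snippet could look like. The exact listing is not reproduced in this text, so treat the control names, shortcut value, and handler code as illustrative only:

```
+Form MainForm -Text "Notepad" -Width 400 -Height 300 {
    +MainMenu mMenu as Menu {
        +MenuItem mFile("&File") {
            +MenuItem mOpen("&Open") -Shortcut [Shortcut.CtrlO]
            +:MenuItem sep("-")
            +MenuItem mExit("E&xit") {
                ++Click "Application.Exit()"
            }
        }
    }
    +Panel pMain
}
+Form AboutForm -Text "About"
```

Every syntactic element shown maps to a rule from the paragraph above: + introduces an exported control, +: an unexposed one, -Property value assigns properties (with bracketed code values for shortcuts), as Menu overrides the parent property, and ++Click attaches an event handler given as a string.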
Your FormType.fs file looks like the following: #light namespace IntelliFactory.Tools.FormGen module Types = type comp = | Component of bool * string * (string * par list) * string option * prop list * string * comp list | EventHandler of string * string and par = | IntPar of int | FloatPar of float | StringPar of string | BoolPar of bool and prop = | IntProp of string * int | FloatProp of string * float | StringProp of string * string | BoolProp of string * bool | CodeProp of string * string The lexer (FormLexer.fsl) follows a "typical" FsLex definition, except that you added the ability to lex comments and strings with proper escape characters. You can easily add position information also to each token to recover from later semantic errors with an exact position; this was not added in the implementation as shown in Listing 2. The parser (FormParser.fsy) is equally straightforward and without much explanation you can find it in Listing 3. At this point, all you have left is a pretty printer that outputs F# code for an AST in your DSL. Here is FormPrint.fs: #light namespace IntelliFactory.Tools.FormGen module Print = open System.IO open Types exception DuplicateName of string let rec collect_exports env = function | Component (export, ty, (id, pars), _, props, _, comps) -> let env' = if export then if Set.exists (fun (id', _) -> id = id') env then raise (DuplicateName id) Set.add (id, ty) env else env List.fold_left (fun env comp -> collect_exports env comp) env' comps | EventHandler _ -> env Now you can tie it all together: in the main F# module (formgen.fs) call the lexer/parser with an input file, and pretty print the result into an F# code file (see Listing 5). fsyacc FormParser.fsy fslex FormLexer.fsl This completes the formgen tool. To test it, take the Notepad snippet you saw earlier and run the formgen tool on it to generate Notepad.fs. 
Then add this file to a new F# project, add references to System.Drawing and System.Windows.Forms, and create a new F# file with the following: #light open System.Windows.Forms open System [<STAThread>] let _ = let form = Notepad.CreateMainForm () form.mAbout.Click.Add (fun _ -> Notepad.CreateAboutForm().AboutForm.Show()) form.MainForm |> Application.Run
Record and Play 3D Printed Robotic Arm using Arduino Record and Play 3D Printed Robotic Arm using Arduino Robotic Arms have proved themselves useful and more productive in many applications where speed, accuracy and safety is required. But to me, what’s more than that is these things are cool to look at when they work. I have always wished for a robotic arm that could help me with my daily works just like Dum-E and Dum-U that Tony stark uses in his lab. These two bots can be seen helping him while building the Iron man suits or filming his work using a video camera. Actually Dum-E has also saved his life once……. and this is where I would like to stop it because this is no fan Page. Apart from the fictional world there are many cool real world Robotic Arms made by Fanuc, Kuka, Denso, ABB, Yaskawa etc. These robotic arms are used in Production line of automobiles, mining plants, Chemical industries and many other places. So, in this tutorial we are going to build our own Robotic Arm with the help of Arduino and MG995 Servo motors. The Robot will have a total of 4 Degree of Freedom (DOF) excluding the gripper and can be controlled by a potentiometer. Apart from that we will also program it to have a Record and play feature so that we can record a motion and ask the Robotic Arm to repeat it as many times as we require it. Sounds cool right!!! So lets start building…. Material Required - Arduino Nano - 5 MG-995 Servo Motor - 5-Potentiometer - Perf Board - Servo horns - Nuts and Screws Note: The body of the robotic arm is completely 3D Printer. If you have a printer you can print them using the given design files. Else, uses the 3D model provided and machine your parts using wood or acrylic. If you don’t have anything then you can just use cardboards to build simple Robotic Arm. 3D Printing and Assembling the Robotic Arm The most time consuming part in building this robotic Arm is while building its body. 
Initially I started by designing the body using Solidworks, but later realised that there are many awesome designs readily available on Thingiverse and there is no need to re-invent the wheel. So I went through the designs and found that the Robotic Arm V2.0 by Ashingwill work perfectly with our MG995 Servo Motors and would exactly suit our purpose. So get to his Thingiverse page (link given above) and download the model files. There are totally 14 parts which has to be printed and the STL files for all of them can be downloaded from Thingiverse page. I used the Cura 3.2.1 Software from Ultimaker to slice the STL files and my TEVO tarantula 3D printer to print them. If you want to know more on 3D printer and how it works you can read this article on Beginners Guide to Getting Started with 3D Printing. Luckily none of the parts have over hanging structures so supports are not needed. The Design is pretty plain and hence can be easily handled by any basic 3D printer. Approximately after 4.5 hours of printing all the parts are ready to be assembled. The assembly instructions are again neatly explained by Ashing itself and hence I am not going to cover it. One small tip is that you would have to sand/file the edges of the parts for the motors to fit in. All the motors will fit in vey snug with a little bit of mechanical force. Have patience and use a file to create room for the motors if they seem a bit tight. You would need like 20 numbers of 3mm bolts to assemble the Robotic ARM. As soon as mounting a motor make sure it can rotate and reach the desired places before screwing it permanently. After assembling, you can proceed with extending the wires of the top three servo motors. I have used the male to female wires to extend them and bring it to circuit board. Make sure you harness the wires properly so that they do not come into your way while the Arm is working. Once assembled my robotic Arm Looked something like this in the picture below. 
Circuit Diagram The MG995 Servo motors operate with 5V and the Arduino board has a 5V regulator with it. So creating the circuit is very easy. We have to connect 5 Servo motors to Arduino PWM pins and 5 Potentiometers to the Arduino Analog pins to control the Servo Motor. The circuit diagram for the same is given below. For this circuit I have not used any external power source. The Arduino is powered through the USB port and the +5v pin on the board is used to power the potentiometer and the Servo motors. In our Robotic Arm at any given instance of time only one servo motor will be in motion hence the current consumed will be less than 150mA which can be sourced by the on-board voltage regulator of the Arduino Board. We have 5 Servo motor and 5 potentiometers to control them respectively. These 5 potentiometers are connected to the 5 Analog pins A0 to A4 of the Arduino board. The Servo motors are controlled by PWM signals so we have to connect them to the PWM pins of Arduino. On Arduino Nano the pins D3,D5,D6,D9 and D11 only supports PWM, so we use the first 5 pins for our servo motors. I have used a perf board to solder the connections and my board looked something like this below when completed. I have also added a barrel jack to power the device through battery if required. However it is completely optional. If you are completely new with Servo motors and Arduino, then you are recommended to read the Basics of Servo motor and Controlling Servo with Arduino article before you proceed with the project. Programming Arduino for Robotic Arm Now, the fun part is to program the Arduino to allow the user to record the movements made using the POT and then play it when required. To do this we have to program the Arduino for two modes. Once is the Record mode and the other is the Play mode. The user can toggle between the two modes by using the serial monitor. The complete program to do the same can be found at the bottom of this page, you can use the program as it is. 
But further below I have explained the program with small snippets for you to understand. As always we begin the program by adding the required header files. Here the Servo.h header file is used to control the servo motors. We have 5 Servo motors and hence 5 objects are declared giving each motor a name. We also initialise the variables that we will be using in the progam. I have declared them all as global but you can change their scope if you are interested in optimising the program. We have also declared an array called saved_data which as the name states will save all the recorded movements of the Robotic ARM. #include <Servo.h> //Servo header file //Declare object for 5 Servo Motors Servo Servo_0; Servo Servo_1; Servo Servo_2; Servo Servo_3; Servo Gripper; //Global Variable Declaration int S0_pos, S1_pos, S2_pos, S3_pos, G_pos; int P_S0_pos, P_S1_pos, P_S2_pos, P_S3_pos, P_G_pos; int C_S0_pos, C_S1_pos, C_S2_pos, C_S3_pos, C_G_pos; int POT_0,POT_1,POT_2,POT_3,POT_4; int saved_data[700]; //Array for saving recorded data int array_index=0; char incoming = 0; int action_pos; int action_servo; Inside the void setup function we begin the Serial communication at 9600 baud rate. We also specify the pin to which the Servo motors are attached to. Here in our case we have used the pins 3,5,6,9 and 10 which is specified using the attach function. Since the setup function runs during the start-up we can use it to set our Robotic arm in a start position. So I have hardcoded the position value for all five motors. These hardcoded values can be changed according to your preference later. 
At the end of setup function we print a serial line asking the user to press R or P to do the corresponding action void setup() { Serial.begin(9600); //Serial Monitor for Debugging //Decalre the pins to which the Servo Motors are connected to Servo_0.attach(3); Servo_1.attach(5); Servo_2.attach(6); Servo_3.attach(9); Gripper.attach(10); //Write the servo motors to intial position Servo_0.write(70); Servo_1.write(100); Servo_2.write(110); Servo_3.write(10); Gripper.write(10); Serial.println(“Press ‘R’ to Record and ‘P’ to play”); //Instrust the user } I have defined a function called Read_POT which reads the analog values of all the 5 potentiometers and maps it to the Servo position values. As we know the Arduino has a 8-bit ADC which gives us an output from 0-1023 but the servo motors position values ranges from only 0-180. Also since these servo motors are not very precise it is not safe to drive them to the extreme 0 end or 180 end so we set 10-170 as our limits. We use the map function to convert 0-1023 to 10-170 for all the five motor as shown below. void Read_POT() //Function to read the Analog value form POT and map it to Servo value { POT_0 = analogRead(A0); POT_1 = analogRead(A1); POT_2 = analogRead(A2); POT_3 = analogRead(A3); POT_4 = analogRead(A4); //Read the Analog values form all five POT S0_pos = map(POT_0,0,1024,10,170); //Map it for 1st Servo (Base motor) S1_pos = map(POT_1,0,1024,10,170); //Map it for 2nd Servo (Hip motor) S2_pos = map(POT_2,0,1024,10,170); //Map it for 3rd Servo (Shoulder motor) S3_pos = map(POT_3,0,1024,10,170); //Map it for 4th Servo (Neck motor) G_pos = map(POT_4,0,1024,10,170); //Map it for 5th Servo (Gripper motor) } Recording Mode Code In the recording mode the user has to control the bot using the Potentiometers. Each POT corresponds to a individual motor, as the pot is varied we should save the position of the motor and the motor number inside the saved_dataarray. Let’s see how that is achieved using the Record function. 
Eliminating the Jitter Problem with Servos

When working with these servo motors, one common problem that everyone might come across is that the motors might jitter while working. There are many solutions for this problem; first you have to sort out whether the problem is with the control circuitry of the motor or with the value of position that is written to the servo motor. In my case I used the serial monitor and found that the value of servo_pos is not left constant and sometimes jitters up/down randomly. So I programmed the Arduino to read the POT values twice and compare both values. The value will be taken as valid only if both values are the same, else the value will be discarded. Thankfully this solved the jitter problem for me. Also make sure that the POT is mounted firmly (I soldered it) to the analog pin of the Arduino; any loose connection will also cause jitters.

The variables P_x_pos are used to save the old values, and then the x_pos values are read and mapped again using the Read_POT function discussed above.

```cpp
Read_POT(); //Read the POT values for 1st time

//Save it in a variable to compare it later
P_S0_pos = S0_pos;
P_S1_pos = S1_pos;
P_S2_pos = S2_pos;
P_S3_pos = S3_pos;
P_G_pos  = G_pos;

Read_POT(); //Read the POT value for 2nd time
```

Now, we have to control the position of the servo motor if the value is valid. After controlling it we also have to save the motor number and motor position in the array. We could have used two different arrays, one for the motor number and the other for its position, but to save memory and complexity I have combined both of them by adding a differentiator value to the position value before saving it in the array.

```cpp
if (P_S0_pos == S0_pos) //If 1st and 2nd value are same
{
  Servo_0.write(S0_pos); //Control the servo

  if (C_S0_pos != S0_pos) //If the POT has been turned
  {
    saved_data[array_index] = S0_pos + 0; //Save the new position to the array.
                                          //Zero is added for the zeroth motor (for understanding purposes)
    array_index++; //Increase the array index
  }

  C_S0_pos = S0_pos; //Save the previous value to check if the POT has been turned
}
```

The differentiator value for Servo_0 is 0, for Servo_1 it is 1000, for Servo_2 it is 2000, similarly for Servo_3 it is 3000 and for the Gripper it is 4000. The lines of code in which the differentiator is added to the position value and saved to the array are shown below.

```cpp
saved_data[array_index] = S0_pos + 0;    //Zero is added for the zeroth motor (for understanding purposes)
saved_data[array_index] = S1_pos + 1000; //1000 is added for 1st servo motor as differentiator
saved_data[array_index] = S2_pos + 2000; //2000 is added for 2nd servo motor as differentiator
saved_data[array_index] = S3_pos + 3000; //3000 is added for 3rd servo motor as differentiator
saved_data[array_index] = G_pos  + 4000; //4000 is added for 4th servo motor as differentiator
```

Playing Mode Code

After the user has recorded the movements in saved_data, he can toggle to the play mode by entering 'P' in the serial monitor. Inside the play mode we access each element saved in the array, split the value to get the motor number and motor position, and control the motors accordingly. We use a for loop to navigate through every element of the array, up to the number of values that were saved in it. Then we use two variables, action_servo and action_pos, to get the number of the servo motor to be controlled and its position respectively. To get the number of the servo motor we divide by 1000, and to get the position we need the last three digits, which can be obtained by taking a modulus. For example, if the value saved in the array is 3125, then it means that the 3rd motor has to be moved to the position of 125.
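The split described above is just integer division and modulus. A small sketch (Python here, mirroring the 3125 example from the text) shows the round trip:

```python
# The "differentiator" packs servo number and position into one integer:
# value = servo * 1000 + position; unpacking is division and modulus.
def pack(servo, pos):
    return servo * 1000 + pos

def unpack(value):
    return value // 1000, value % 1000

v = pack(3, 125)
print(v)          # 3125
print(unpack(v))  # (3, 125)
```

This works because a servo position is always at most three digits (10-170), so the thousands digit is free to carry the motor number.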
```cpp
for (int Play_action=0; Play_action<array_index; Play_action++) //Navigate through every saved element in the array
{
  action_servo = saved_data[Play_action] / 1000; //The leading digit of the array element gives the servo number
  action_pos   = saved_data[Play_action] % 1000; //The last three digits of the array element give the servo position
```

Now all that is left to do is use the servo number and move that servo to the obtained position value. I have used a switch case to select the corresponding servo motor and the write function to move it to that position. The switch case is shown below.

```cpp
  switch(action_servo){ //Check which servo motor should be controlled
    case 0: //If zeroth motor
      Servo_0.write(action_pos);
      break;
    case 1: //If 1st motor
      Servo_1.write(action_pos);
      break;
    case 2: //If 2nd motor
      Servo_2.write(action_pos);
      break;
    case 3: //If 3rd motor
      Servo_3.write(action_pos);
      break;
    case 4: //If 4th motor
      Gripper.write(action_pos);
      break;
  } //end of switch
} //end of for loop
```

Main Loop Function

Inside the main loop function, we only have to check what the user has entered through the serial monitor and execute the record mode or the play mode accordingly. The variable incoming is used to hold the user's choice. If 'R' is entered, Record mode will be activated, and if 'P' is pressed, Play mode will be executed, via the if conditional statements shown below.

```cpp
void loop() {
  if (Serial.available() > 1) //If something is received from serial monitor
  {
    incoming = Serial.read();
    if (incoming == 'R')
      Serial.println("Robotic Arm Recording Started......");
    if (incoming == 'P')
      Serial.println("Playing Recorded sequence");
  }

  if (incoming == 'R') //If user has selected Record mode
    Record();

  if (incoming == 'P') //If user has selected Play Mode
    Play();
}
```

Working of the Record and Play Robotic Arm

Make the connection as shown in the circuit diagram and upload the code that is given below.
Power your Arduino Nano through the USB port of your computer and open the serial monitor; you will be welcomed with the intro message. Now enter R in the serial monitor and press enter. Note that at the bottom of the serial monitor "Newline" should be selected. Once entered, the bot will get into Recording mode and you will see the following screen. The information shown here can be used for debugging: the numbers starting from 69 are the current positions of servo motors 0 to 4, and the index values are for the array size. Note that the array that we are using has a limit of 700, so we have to complete recording the movements before we exceed that limit.

After the recording is completed, we can enter P in the serial monitor and press enter, and we will be taken to the Play mode, where the serial monitor will display the following. Inside the Play mode the robot will repeat the same movements that were done in the recording mode. These movements will be executed again and again until you interrupt it through the serial monitor. The complete working can be found in the video linked at the bottom of the page.

Read More Information: Record and Play 3D Printed Robotic Arm using Arduino
https://duino4projects.com/record-and-play-3d-printed-robotic-arm-using-arduino/
It'd be great to keep current panels at the same time slot. I'd suggest 14:00 UTC... Keeping the following in mind: the operating principle for effective participation is to allow access across disabilities, across country borders, and across time. Feedback on tooling and meeting timing is welcome.

The Authentication Panel has been incubating the Solid-OIDC specification for the last several years. Today was an important milestone in that the panel has voted to promote Solid-OIDC from a purely "editors draft" document to a Community Group Draft: roughly equivalent to a FPWD. A repository tag was created with the version number of this draft. This is all in preparation for a summertime target of ~CR status. At present, the Solid-OIDC draft specification is available via a GitHub Pages workflow. I would like to discuss how we can start moving these drafts into the namespace, presumably under oidc. Ideally, we will have an automated process via GH, but for now a manual process should suffice. What would be the best way to proceed here?

@acoburn If you'd like to publish the CG-DRAFT (with links referring to each other and possibly including links to the ED) then please make a PR of the final HTML and scripts/media to the solid/specification repository. On a related note, you may want to note solid/solid-oidc#98

@jeff-zucker I think we can make that more clear. The understanding/agreement was that 2xx responses MUST include the Link header with pim:Storage. Normally - or is it just in my head? - 4xx - more interestingly 401/403 - shouldn't include information in the headers that reveals the semantics of the target resource. However, there are/may be exceptions in that in order to enable a feature (for clients) with minimal information, e.g., discovery of the root container/storage through the URL path hierarchy, we'd need the Link header with pim:Storage irrespective of the status code.
The way I interpret the current text - that I wrote - is that the header is always included. I'd like to hear from server/application implementers. Is your server exposing the header at all times or limited to certain requests/responses? For Storage locations besides the root URL path ( / after authority:port) I can't see how clients can work out that a URI is allocated to identify the storage.

webid and the app by client_id

WebID-TLS is defined in, which is independent of Solid. It is also an editor's draft, so the same caveats as with the WebID spec itself apply. The Solid Protocol spec refers to WebID-TLS non-normatively at Historically, there have been implementations of this. I am not aware of any that are in active development, but that doesn't mean that there are not any.

@jeff-zucker Can you have the app pick up the definitions from the specs, e.g., That grabs skos:Concepts from the Solid Protocol and WAC specs:

```sparql
CONSTRUCT
FROM <>
FROM <>
WHERE {
  ?s a <> .
  ?s ?p ?o .
}
```

(based on the TRs but can use the EDs even) Can get more fancy if you want to grab skos:ConceptScheme, skos:Collection, and whatever else. In fact, we can start the whole discovery from :sparkles: where it links to all work items / technical reports.
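The pim:Storage header check under discussion boils down to inspecting the Link response header; a rough sketch (Python, pure string handling, no HTTP - the header value below is illustrative):

```python
# Does a Link response header advertise the resource as a Solid storage?
# Uses the standard pim/space vocabulary IRI for Storage.
PIM_STORAGE = "http://www.w3.org/ns/pim/space#Storage"

def advertises_storage(link_header):
    # Very loose Link-header scan - enough for the sketch, not a full
    # RFC 8288 parser.
    return any(
        PIM_STORAGE in part and 'rel="type"' in part
        for part in link_header.split(",")
    )

header = '<%s>; rel="type"' % PIM_STORAGE
print(advertises_storage(header))  # True
```

A client probing up the URL path hierarchy would apply a check like this to each response until the storage root is found.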
https://gitter.im/solid/specification?at=6255bf440466b352a4638ae3
How To Make C2B Payment Using Africa's Talking Payment API with Node.js(Express). This is a step by step tutorial on how to implement Africa's Talking (AT) client to business (C2B) payment API to your Node.js(express) application. Check out the whole project from GitHub. Prerequisite - Visual Studio Code or any IDE of your choice. - Basic knowledge of Node.js. - Africa's talking sandbox test environment. - Postman. How to get started. Login to your AT account and generate API key. To use the AT Payment API you need an API key that is unique and generated from your personal account. If you don't have an account create one here. Once you are logged in, click on the sandbox button. It will redirect you to AT dashboard. Click on settings, then API key. You will see the Enter password text box, input your account password and click on generate. Make sure you save your API key, since you wont be able to see it on subsequent visits. Create a new file and name it .env . This is where we will store our environment variables. In this case the apikey, username and port. Make sure you use the current generated apikey, use sandbox as the apps username. Use "sandbox" for development in the test environment. Initialize node using npm init or yarn init. After initializing node you should be able to see node modules and package.json file in your project root directory. We will need to add some dependencies for some modules to work. Copy either of the two commands to the console in your root directory. npm install express africastalking body-parser dotenv or yarn add express africastalking body-parser dotenv. To handle HTTP POST request in express.js version 4 and above we install body-parser middleware. body-parser extracts the entire body portion of an incoming request stream and exposes it on req.body. dotenv loads environment variables from the .env file. 
apiKey="your api key" username="your username" port="your port" We will be using ES6+ syntax and therefore we will have to add babel. To learn more about babel click here. npm i @babel/core babel-cli @babel/preset-env babel-watch --save-dev or yarn add @babel/core babel-cli @babel/preset-env babel-watch --dev. Now we will create the C2B payment function. Create a file in your root directory and name it mobile-payment.js. Copy the below code to your file. import express from "express"; import dotenv from "dotenv"; const router = express.Router(); dotenv.config(); const credentials = { apiKey: process.env.apiKey, username: process.env.username }; // Get payments service const AfricasTalking = require('africastalking')(credentials); // Initialize the SDK const payments = AfricasTalking.PAYMENTS; // Initiate the payment router.post("/",(req,res)=>{ const options = { //Set the product name productName: req.body.productName, phoneNumber: req.body.phoneNumber, currencyCode: req.body.currencyCode, amount: req.body.amount, }; payments.mobileCheckout(options) .then (result =>{ console.log(result); res.json(result) }).catch (err => { console.log(err); res.json(err.toString()); }); }) export default router; dotenv.config() method reads the environment variables. const credentials authenticates our application. const AfricasTalking imports the Africa's talking module and const payment accesses the AT payment API .The router.post payment route sends a request(req) to the AT payment. Create another file and name it app.js in the same directory(root directory). 
Copy the following code to this file; import express from "express"; import 'babel-polyfill'; import router from "./mobile-payment"; import dotenv from "dotenv"; import bodyParser, { json } from "body-parser"; dotenv.config(); const app = express(); const port = process.env.port || 3000; app.listen(port, ()=> console.log(`Listening from ${port}`)); app.use(bodyParser.json()); app.use("/v3", router); Note when importing your mobile payment file the number of dots depend on where you saved your file. We then set our live server to run on the port set on the .env file or port 3000. Our application listens from the AT API passing the whole body through bodyParser.json() Add the following scripts to your package.json "build": "babel app.js --out-dir build", "start": "babel-watch app.js" Your package.json file should look like this; { "name": "payment", "version": "1.0.0", "description": "", "main": "app.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "build": "babel app.js --out-dir build", "start": "babel-watch app.js" }, "author": "", "license": "ISC", "dependencies": { "africastalking": "^0.5.2", "body-parser": "^1.19.0", "dotenv": "^8.2.0", "express": "^4.17.1" }, "devDependencies": { "@babel/core": "^7.12.1", "@babel/preset-env": "^7.12.1", "babel-cli": "^6.26.0", "babel-watch": "^7.0.0", "make-runnable": "^1.3.8" } } Now lets run our application. Run npm start in your root terminal. You should be able to see in your console Listening from port ${your Port}. Open postman, fill in the body as you can see below and run your application. You should receive payment notification in your sandbox environment. Note, no notification will be sent to your mobile phone. There you go. We have used the AT C2B payment API successfully. Happy coding!!
https://developers.decoded.africa/how-to-make-using-africas-talking-airtime-api-with-nodejs-express/
CC-MAIN-2020-50
refinedweb
848
63.15
I have an array of integer r = [ 242 302 377 ..., 1090 225 203]. I would like to count the occurrences of 242 in r array. I used the count method like this: asd = r.count(242) print asd but it gives me error AttributeError: 'numpy.ndarray' object has no attribute 'count'. How to solve this? Simple example: If you get to work with a list type of a structure supporting .count() function you cann apply: list.count(x) where list is a type of a listable collection, count() is a function taking a single argument x which identifies which element to check for occurrences Else you could try and apply something like: counter = 0 for x in list: if x == 1: counter += 1 print('Counter: ', counter) where list is a listable collection; There's no such thing as a 'tab separated array'. The display of r is consistent with it being a numpy array (as is the error message). It may have been loaded from a tab separated CSV. In any case, count is a list method, not an array one. Either convert it to a list, or use one of the iterative solutions. There is an array bincount. Since your array appears to be integers in a reasonable range, e.g. 0-1000), it might apply here. Make a sample array: In [147]: r=np.random.randint(0,1000,2000) In [148]: r Out[148]: array([170, 754, 151, ..., 115, 299, 879]) Its str display is: In [166]: print(r) [170 754 151 ..., 115 299 879] This probably confuses Python programmers who don't know about numpy. bincount finds the count for all values in the range: In [152]: np.bincount(r) Out[152]: array([4, 1, 2, 1, 1, 5, 4, 3, 2, 1, 1, 1, 3, 3, 2, 4, 3, 2, 1, 1, 0, 1, 4, ... 1, 3, 0, 2, 1, 2, 3, 1, 2, 3, 3]) I probably should have used np.bincount(r,minlength=1000). 
The 7th value in that count list is 4, so let's select that: In [176]: np.bincount(r,minlength=1000)[6] Out[176]: 4 I can use count if I first convert r to a list: In [177]: r.tolist().count(6) Out[177]: 4 The iterative solutions also work, but are slower: def foo(a,v): my_count=0 for i in r: if (i==v): my_count+=1 return my_count In [178]: foo(r,6) Out[178]: 4 time tests: In [180]: timeit foo(r,6) 1000 loops, best of 3: 983 us per loop In [181]: timeit len([i for i in r if i==6]) 1000 loops, best of 3: 985 us per loop In [182]: timeit r.tolist().count(6) 10000 loops, best of 3
http://www.devsplanet.com/question/35266161
CC-MAIN-2017-04
refinedweb
459
82.14
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Correct way to get database name in Python? Hi guys, What is the correct way to get the database name from the current database in Python? I've foud the field dbname in the model ir.logging and tried to fetch it like this: def _get_db_name(self,cr,uid, vals,context=None): attach_pool = self.pool.get("ir.logging") test = attach_pool.search(cr, uid, [('dbname', '!=', ' ')]) return test _defaults = { 'name': _get_db_name, } But the default that gets filled in is always [] so what am I doing wrong here? Thanks, Yenthe Hi Yenthe, It is simple to get database name. You can get the database name from cr (Cursor object). cr.dbname will give you the name of the database which is related to that cr (Cursor object). I hope it will help you. Hmm it seems that you should never try to overcomplicate things. Thank you Emipro! By the way, what else can you exactly access with cr by default? Hi, You can see your self just follow the path : odoo=>openerp=>sql_db.py=>class Cursor. In this class you can see all available methods and members which we can use directly from cr. About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/correct-way-to-get-database-name-in-python-81149
CC-MAIN-2017-22
refinedweb
246
77.13
This page is generated from commits to the ease.js repository. This news may also be viewed offline the NEWS file. If you would like to subscribe to release notes, please subscribe to the info-easejs mailing list. For release notes on all GNU software (including ease.js), please subscribe to info-gnu. Support for trait class supertype overrides 22 Oct 2015 Traits can now override methods of their class supertypes. Previously, in order to override a method of some class `C` by mixing in some trait `T`, both had to implement a common interface. This had two notable downsides: 1. A trait that was only compatible with details of `C` could only work with `C#M` if it implemented an interface `I` that declared `I#M`. This required the author of `C` to create interfaces where they would otherwise not be necessary. 2. As a corollary of #1---since methods of interfaces must be public, it was not possible for `T` to override any protected method of `C`; this meant that `C` would have to declare such methods public, which may break encapsulation or expose unnecessary concerns to the outside world. Until documentation is available---hopefully in the near future---the test cases provide detailed documentation of the behavior. 
Stackable traits work as you would expect: ```javascript var C = Class( { 'virtual foo': function() { return 'C'; }, } ); var T1 = Trait.extend( C, { 'virtual abstract override foo': function() { return 'T1' + this.__super(); }, } ); var T2 = Trait.extend( C, { 'virtual abstract override foo': function() { return 'T2' + this.__super(); }, } ); C.use( T1 ) .use( T1 ) .use( T2 ) .use( T2 ) .foo(); // result: "T2T2T1T1C" ``` If the `override` keyword is used without `abstract`, then the super method is statically bound to the supertype, rather than being resolved at runtime: ```javascript var C = Class( { 'virtual foo': function() { return 'C'; }, } ); var T1 = Trait.extend( C, { 'virtual abstract override foo': function() { return 'T1' + this.__super(); }, } ); var T2 = Trait.extend( C, { // static override 'virtual override foo': function() { return 'T2' + this.__super(); }, } ); C.use( T1 ) .use( T1 ) .use( T2 ) .use( T2 ) .foo(); // result: "T2C" ``` This latter form should be discouraged in most circumstances (as it prevents stackable traits), but the behavior is consistent with the rest of the system. Happy hacking. Alias `constructor` member to `__construct` 15 Sep 2015 This allows ease.js classes to mimic the structure of ES6 classes, which use `constructor` to denote the constructor. This patch simply aliases it to `__construct`, which ease.js handles as it would normally. 
To that note, since the ES6 `class` keyword is purely syntatic sugar around the prototype model, there is not much benefit to using it over ease.js if benefits of ease.js are still desired, since the member definition syntax is a feature of object literals: ``` // ease.js using ES6 let Person = Class( { _name: '', // note that __construct still works as well constructor( name ) { this._name = ''+name; }, sayHi() { return "Hi, I'm " + this.getName(); }, // keywords still work as expected 'protected getName'() { return this._name; } } ); // ES6 using `class` keyword class Person { // note that ES6 will _not_ make this private _name: '', constructor( name ) { this._name = ''+name; }, sayHi() { return "Hi, I'm " + this.getName(); } // keywords unsupported (you'd have to use Symbols) getName() { return this._name; } } // ES3/5 ease.js var Person = Class( { _name: '', __construct: function( name ) { this._name = ''+name; }, sayHi: function() { return "Hi, I'm " + this._name; }, 'protected getName': function() { return this._name; } } ); ``` As you can see, the only change between writing ES6-style method definitions is the syntax; all keywords and other features continue to work as expected. Various ES3-related bugfixes for bugs introduced by v0.2.3 7 Aug 2014 GNU ease.js remains committed to supporting environments as far back as ES3; unfortunately, this is important due to the popularity of older IE versions (IE<=8). Btw, ease.js runs on IE 5.5, in case you still need that. ;) But please don't use a proprietary web browser. Indeed, this is why the breaks were introduced in the first place: I neglected to run the browser-based test suite on the proprietary Microsloth browsers until after the v0.2.3 release, because I do not own a copy of Windows; I had to run it at work. But, regardless---my apologies; I'll be more diligent. Global no longer uses root as alt object prototype 7 Aug 2014 This is a bugfix; the bug was introduced in v0.2.3. 
Initially, the implementation created a new object with the root object as its prototype, taking advantage of ECMAScript's native overrides/fallthroughs. Unfortunately, IE<=8 had a buggy implementation, effectively treating the prototype as an empty object. So, rather than alt.Array === root.Array, alt.Array === undefined. The fix is simply to reference discrete objects. method.super references now ES3-compatible 7 Aug 2014 This is a bugfix; the bug was introduced in v0.2.3. In ECMAScript 5, reserved keywords can be used to reference the field of an object in dot notation (e.g. method.super), but in ES3 this is prohibited; in these cases, method['super'] must be used. To maintain ES3 compatiblity, GNU ease.js will use the latter within its code. Of course, if you are developing software that need not support ES3, then you can use the dot notation yourself in your own code. This does not sway my decision to use `super`---ES3 will soon (hopefully) become extinct, and would be already were it not for the terrible influence of Microsloth's outdated browsers. Corrected test broken by Node.js v0.10.27 6 Aug 2014 Specifically, aae51ecf, which introduces deepEqual changes for comparing argument objects---specifically, this change: ```c if ((aIsArgs && !bIsArgs) || (!aIsArgs && bIsArgs)) return false; ``` Since I was comparing an array with an arguments object, deepEqual failed. While such an implementation may confuse users---since argument objects are generally treated like arrays---the distinction is important and I do agree with the change. Interface.isInstanceOf will account for interop compatibility 6 Aug 2014 This is a bug fix. If the provided object's constructor is an ease.js type, then the conventional rules will apply (as mentioned in the test docblock and in the manual); however, if it's just a vanilla ECMAScript object, then the interop compatibility checks will be used instead. 
The manual already states that this is the case; unfortunately, it lies---this was apparently overlooked, and is a bug. Subtype ctor guarantees with parent __mixin or __construct 27 Jul 2014 A solution for this problem took a disproportionally large amount of time, attempting many different approaches, and arriving still at a kluge; this is indicative of a larger issue---we've long since breached the comfort of the original design, and drastic refactoring is needed. I have ideas for this, and have already started in another branch, but I cannot but this implementation off any longer while waiting for it. Sorry for anyone waiting on the next release: this is what held it up, in combination with my attention being directed elsewhere during that time (see the sparse commit timestamps). Including this ordering guarantee is very important for a stable, well-designed [trait] system. Initial implementation of parameterized traits 28 May 2014 This is an important feature to permit trait reuse without excessive subtyping---composition over inheritance. For example, consider that you have a `HttpPlainAuth` trait that adds authentication support to some transport layer. Without parameterized traits, you have two options: 1. Expose setters for credentials 2. Trait closure 3. Extend the trait (not yet supported) The first option seems like a simple solution: ```javascript Transport.use( HttpPlainAuth )() .setUser( 'username', 'password' ) .send( ... ); ``` But we are now in the unfortunate situation that our initialization procedure has changed. This, for one, means that authentication logic must be added to anything that instantiates classes that mix in `HttpPlainAuth`. We'll explore that in more detail momentarily. More concerning with this first method is that, not only have we prohibited immutability, but we have also made our code reliant on *invocation order*; `setUser` must be called before `send`. What if we have other traits mixed in that have similar conventions? 
Normally, this is the type of problem that would be solved with a builder, but would we want every configurable trait to return a new `Transport` instance? All that on top of littering the API---what a mess! The second option is to store configuration data outside of the Trait acting as a closure: ```javascript var _user, _pass; function setCredentials( user, pass ) { _user = user; _pass = pass; } Trait( 'HttpPlainAuth', { /* use _user and _pass */ } ) ``` There are a number of problems with this; the most apparent is that, in this case, the variables `_user` and `_pass` act in place of static fields---all instances will share that data, and if the data is modified, it will affect all instances; you are therefore relying on external state, and mutability is forced upon you. You are also left with an awkward `setCredentials` call that is disjoint from `HttpPlainAuth`. The other notable issue arises if you did want to support instance-specific credentials. You would have to use ease.js' internal identifiers (which is undocumented and subject to change in future versions), and would likely accumulate garbage data as mixin instances are deallocated, since ECMAScript does not have destructor support. To recover from memory leaks, you could instead create a trait generator: ```javascript function createHttpPlainAuth( user, pass ) { return Trait( { /* ... */ } ); } ``` This uses the same closure concept, but generates new traits at runtime. This has various implications depending on your engine, and may thwart future ease.js optimization attempts. The third (which will be supported in the near future) is prohibitive: we'll add many unnecessary traits that are a nightmare to develop and maintain. 
Parameterized traits are similar in spirit to option three, but without creating new traits each call: traits now support being passed configuration data at the time of mixin that will be passed to every new instance: ```javascript Transport.use( HttpPlainAuth( user, pass ) )() .send( ... ); ``` Notice now how the authentication configuration is isolated to the actual mixin, *prior to* instantiation; the caller performing instantiation need not be aware of this mixin, and so the construction logic can remain wholly generic for all `Transport` types. It further allows for a convenient means of providing useful, reusable exports: ```javascript module.exports = { ServerFooAuth: HttpPlainAuth( userfoo, passfoo ), ServerBarAuth: HttpPlainAuth( userbar, passbar ), ServerFooTransport: Transport.use( module.exports.ServerFooAuth ), // ... }; var module = require( 'foo' ); // dynamic auth Transport.use( foo.ServerFooAuth )().send( ... ); // or predefined classes module.ServerFooTransport().send( ... ); ``` Note that, in all of the above cases, the initialization logic is unchanged---the caller does not need to be aware of any authentication mechanism, nor should the caller care of its existence. So how do you create parameterized traits? You need only define a `__mixin` method: Trait( 'HttpPlainAuth', { __mixin: function( user, pass ) { ... } } ); The method `__mixin` will be invoked upon instantiation of the class into which a particular configuration of `HttpPlainAuth` is mixed into; it was named differently from `__construct` to make clear that (a) traits cannot be instantiated and (b) the constructor cannot be overridden by traits. A configured parameterized trait is said to be an *argument trait*; each argument trait's configuration is discrete, as was demonstrated by `ServerFooAuth` and `ServerBarAuth` above. Once a parameterized trait is configured, its arguments are stored within the argument trait and those arguments are passed, by reference, to `__mixin`. 
Since any mixed in trait can have its own `__mixin` method, this permits traits to have their own initialization logic without the need for awkward overrides or explicit method calls. FallbackSymbol added 6 Jul 2014 This is the closest we will get to implementing a concept similar to symbols in pre-ES6. The intent is that, in an ES5 environment, the caller should ensure that the object receiving this key will mark it as non-enumerable. Otherwise, we're out of luck. The symbol string is pseduo-randomly generated with an attempt to reduce the likelihood of field collisions and malicious Math.{floor,random} overwrites (so long as they are clean at the time of loading the module). Super method now provided on method override wrapper 2 May 2014 This allows invoking any arbitrary method of a supertype. This is needed, for example, if some method `foo` is overridden, but we wish to call the parent `foo` in another method; this is not possible with __super: var C = Class( { 'virtual foo': function() { return 'super'; } } ); var SubC = C.extend( { 'override foo': function() { return 'sub'; }, superfoo: function() { return this.foo.super.call( this ); } } ); SubC().superfoo(); // 'super' Obviously, __super would not work here because any call to __super within SubC#superfoo would try to invoke C@superfoo, which does not exist. Vanilla ECMAScript interop patches 29 Apr 2014 Now that ease.js is a GNU project, it has much broader reach than before. Since its very existence is controversial, it would be wise (and polite) to provide a means for others to integrate with libraries written using ease.js without being forced to use ease.js themselves. Further, ease.js users should be able to build off of the work of other libraries that do not use ease.js. This set of changes introduces a number of interoperability improvements, documented in the new manual chapter ``Interoperability''. 
Since it is documented in the manual, this commit message will not go into great detail; I wish to only provide a summary.

Firstly, we now have the concept of interface compatibility; while ease.js classes/etc must still conform to the existing interface requirements, the rules are a bit more lax for other ECMAScript objects to permit interoperability with type-checking ease.js code. For example:

```javascript
var I   = Interface( { foo: [ 'a' ] } ),
    obj = { foo: function( a ) {} };

Class.isA( I, obj ); // true
```

This is also a powerful feature for implementing interfaces around existing objects, as a preemptive interface check (rather than duck typing).

Prototypally extending ease.js classes is potentially problematic because the constructor may perform argument validations (this is also an issue in pure prototypal code). As a solution, all classes now have a static `asPrototype` method, which defers constructor invocation, trusting that the prototype constructor will do so itself.

Aside from a few bug fixes, there is also a more concise notation for private members to allow prototypal developers to feel more at home when using GNU ease.js: members prefixed with an underscore are now implicitly private, which will satisfy the most common visibility use cases. I do recognize that some (mostly in the Java community) use underscore *suffixes* to denote private members, but I've noticed that this is relatively rare in the JS community; I have therefore not included such a check, but will consider it if many users request it.

There are many more ideas to come, but I hope that this will help to bridge the gap between the prototypal and classical camps, allowing them to cooperate with as little friction as possible.

Miscellaneous performance enhancements
17 Apr 2014

These are the beginning of some smaller performance optimizations brought on by the v8 profiler.
This includes removal or movement of over-reaching try/catch blocks and more disciplined argument handling, neither of which can be compiled into machine code (permanently, at least).

This also removes some unneeded code, adds some baseline performance test cases, and begins generic performance test output and HTML generation which will be used in the future for more detailed analysis. This is just a starting point; there's more to come, guided by profiling. The trait implementation needs some love and, since its development is not yet complete, that will be optimized in the near future. Further, there are additional optimizations that can be made when ease.js recognizes that certain visibility layers are unneeded, allowing it to create more lightweight classes.

Performance enhancements will also introduce the ability to generate a ``compiled'' class, which will generate a prototype that can be immediately run without the overhead of processing keywords, etc. This will also have the benefit of generating code that can be understood by static analysis tools and, consequently, optimizers. All in good time.

9 Apr 2014

Copyright for the GNU ease.js project has been assigned to the Free Software Foundation. This allows the FSF to enforce the project licenses, which is something that I lack the time and money to do. It further ensures, through the contract I signed with the FSF, that all distributions of GNU ease.js, and all derivatives, will always ``be on terms that explicitly and perpetually permit anyone possessing a copy of the work to which the terms apply, and possessing accurate notice of these terms, to redistribute copies of the work to anyone on the same terms'' and that the project ``shall be offered in the form of machine-readable source code''.
Consequently, any contributors to the project (aside from changes deemed to be trivial) will be required to assign copyright to the FSF; this puts GNU ease.js on firm legal ground to prevent complications in enforcement. Contributors can rest assured that the code they contribute will always remain free (as in freedom).

I thank Donald Robertson III of the FSF for his help and guidance during this process.

Private methods are no longer wrapped
20 Mar 2014

This is an exciting performance optimization that seems to have eluded me for a surprisingly long time, given that the realization was quite random.

ease.js accomplishes much of its work through a method wrapper---each and every method definition (well, until now) was wrapped in a closure that performed a number of steps, depending on the type of wrapper involved:

1. All wrappers perform a context lookup, binding to the instance's private member object of the class that defined that particular method. (See "Implementation Details" in the manual for more information.)
2. This context is restored upon returning from the call: if a method returns `this', it is instead converted back to the context in which the method was invoked, which prevents the private member object from leaking out of a public interface.
3. In the event of an override, this.__super is set up (and torn down).

There are other details (e.g. the method wrapper used for method proxies), but for the sake of this particular commit, those are the only ones that really matter.

There are a couple of important details to notice:

- Private members are only ever accessible from within the context of the private member object, which is always the context when executing a method.
- Private methods cannot be overridden, as they cannot be inherited.

Consequently:

1. We do not need to perform a context lookup: we are already in the proper context.
2. We do not need to restore the context, as we never needed to change it to begin with.
3.
this.__super is never applicable.

Method wrappers are therefore never necessary for private methods; they have therefore been removed. This has some interesting performance implications.

While in most cases the overhead of method wrapping is not a bottleneck, it can have a strong impact in the event of frequent method calls or heavily recursive algorithms. There was one particular problem that ease.js suffered from, which is mentioned in the manual: recursive calls to methods in ease.js were not recommended because it (a) made two function calls for each method call, effectively halving the remaining call stack size, and (b) tail call optimization could not be performed, because recursion invoked the wrapper, *not* the function that was wrapped.

By removing the method wrapper on private methods, we solve both of these problems; now, heavily recursive algorithms need only use private methods (which could always be exposed through a protected or public API) when recursing to entirely avoid any performance penalty by using ease.js.

Running the test cases on my system (your results may vary) before and after the patch, we have:

BEFORE:
0.170s (x1000 = 0.0001700000s each): Declare 1000 anonymous classes with private members
0.021s (x500000 = 0.0000000420s each): Invoke private methods internally

AFTER:
0.151s (x1000 = 0.0001510000s each): Declare 1000 anonymous classes with private members
0.004s (x500000 = 0.0000000080s each): Invoke private methods internally

This is all the more motivation to use private members, which enforces encapsulation; keep in mind that, because use of private members is the ideal in well-encapsulated and well-factored code, ease.js has been designed to perform best under those circumstances.

Preliminary support for traits as mixins
15 Mar 2014

This has turned out to be a very large addition to the project---indeed, with this release, its comprehensiveness remains elusive, but this is a huge step in the right direction.
Traits allow for powerful methods of code reuse by defining components that can be ``mixed into'' classes, almost as if the code were copied and pasted directly into the class definition. Mixins, as they are so called, carry with them the type of the trait, just as implementing an interface carries with it the type of the interface; this means that they integrate into ease.js' type system such that, given some trait T that mixes into class C and an instance of C, it will be true that Class.isA( T, inst ).

The trait implementation for GNU ease.js is motivated heavily by Scala's implementation of mixins using traits. Notable features include:

1. Traits may be mixed in either prior to or following a class definition; this allows coupling traits tightly with a class or allowing them to be used in a decorator-style manner prior to instantiation.

2. By mixing in a trait prior to the class definition, the class may override methods of the trait:

   Class( 'Foo' ).use( T ).extend( { /*...*/ } )

   If a trait is mixed in after a class definition, then the trait may instead override the functionality of a class:

   Class( 'Foo', { /*...*/ } ).use( T )

3. Traits are stackable: by using the `abstract override' keyword combination, a trait can override the concrete definition of its parent, provided that the abstract definition is implemented by the trait (e.g. by implementing a common interface). This allows overrides to be mixed in any order. For example, consider some class Buffer that defines an `add' method, accepting a string. Now consider two traits Dup and Upper:

   Buffer.use( Dup ).use( Upper )().add( "foo" )

   This would result in the string "FooFoo" being added to the buffer. On the other hand:

   Buffer.use( Upper ).use( Dup )().add( "foo" )

   would add the string "Foofoo".

4. A trait may maintain its own private state and API completely disjoint from the class that it is mixed into---a class has access only to public and protected members of a trait and vice versa.
This further allows a class and trait to pass messages between one another without having their communications exposed via a public API. A trait may even communicate with other traits mixed into the same class (or its parents/children), given the proper overrides.

Traits provide a powerful system of code reuse that solves the multiple inheritance problems of languages like C++, without introducing the burden and code duplication concerns of Java's interfaces (note that GNU ease.js does support interfaces, but not multiple inheritance). However, traits also run the risk of encouraging overly rich APIs and complicated inheritance trees that produce a maintenance nightmare: it is important to keep concerns separated, creating classes (and traits) that do one thing and do it well. Users should understand the implications of mixing in traits prior to the class definition, and should understand how decorating an API using mixins after a class definition tightly couples the trait with all objects derived from the generated class (as opposed to the flexibility provided by the composition-based decorator pattern). These issues will be detailed in the manual once the trait implementation is complete.

The trait implementation is still under development; outstanding tasks are detailed in `README.traits`. In the meantime, note that the implementation *is* stable and can be used in the production environment. While documentation is not yet available in the manual, comprehensive examples and rationale may be found in the trait test cases.

Happy hacking!

Support for stacked mixins
6 Mar 2014

The concept of stacked traits already existed in previous commits, but until now, mixins could not be stacked without some ugly errors. This also allows mixins to be stacked atop of themselves, duplicating their effect. This would naturally have limited use, but it's there.

This differs slightly from Scala.
For example, consider this ease.js mixin:

   C.use( T ).use( T )()

This is perfectly valid---it has the effect of stacking T twice. In reality, ease.js is doing this:

- C' = C.use( T );
- new C'.use( T );

That is, each call to `use' creates another class with T mixed in. Scala, on the other hand, complains in this situation:

   new C with T with T

will produce an error stating that "trait T is inherited twice". You can work around this, however, by doing this:

   class Ca extends T
   new Ca with T

In fact, this is precisely what ease.js is doing, as mentioned above; the "use.use" syntax is merely shorthand for this:

   new C.use( T ).extend( {} ).use( T )

Just keep that in mind.

Added support for weak abstract methods
26 Jan 2014

This adds the `weak' keyword and permits abstract method definitions to appear in the same definition object as the concrete implementation. This should never be used with hand-written code---it is intended for code generators (e.g. traits) that do not know if a concrete implementation will be provided, and would waste cycles duplicating the property parsing that ease.js will already be doing. It also allows for more concise code generator code.

Began implementing composition-based traits
23 Jan 2014

As described in <>.

The benefit of this approach over definition object merging is primarily simplicity---we're re-using much of the existing system. We may provide more tight integration eventually for performance reasons (this is a proof-of-concept), but this is an interesting start.

This also allows us to study and reason about traits by building off of existing knowledge of composition; the documentation will make mention of this to explain design considerations and issues of tight coupling introduced by mixing in of traits.

GNU ease.js
20 Jan 2014

ease.js is now part of the GNU project; this merges in changes that led up to the submission being accepted and additional cleanup thereafter.
More information will be included in the release announcement (which will be included in the 0.2.0 tag), and relicensing rationale is included in the commit that introduced the license change. The release announcement will also include a small essay I have been authoring with the help and input of RMS about the importance of making JavaScript code free.

Happy GNU year!

Altered license templates for combined files with section 7 exception
6 Jan 2014

As suggested by RMS in The JavaScript Trap: <>

This does increase the size of the minified file a bit---the header is now about 1kB of uncompressed text (which will hopefully compress nicely with the rest of the minified file). That said, ease.js will be continuing to grow in size, bandwidth is becoming less and less important, and the license is very important, especially in our goal to spread the philosophy of software freedom.

Relicensed under the GPLv3+
20 Dec 2013

This project was originally LGPLv3+-licensed to encourage its use in a community that is largely copyleft-phobic. After further reflection, that was a mistake, as adoption is not the important factor here---software freedom is. When submitting ease.js to the GNU project, it was asked if I would be willing to relicense it under the GPLv3+; I agreed happily, because there is no reason why we should provide proprietary software any sort of edge. Indeed, proprietary JavaScript is a huge problem since it is automatically downloaded onto the user's PC generally without them even knowing, and is a current focus for the FSF. As such, to remain firm in our stance against proprietary JavaScript, relicensing made the most sense for GNU.

This is likely to upset current users of ease.js. I am not sure of their number---I have only seen download counts periodically on npmjs.org---but I know there are at least a small number.
These users are free to continue using the previous LGPL'd releases, but with the understanding that there will be no further maintenance (not even bug fixes). If possible, users should use the GPL-licensed versions and release their software as free software.

Here comes GNU ease.js.

'this' now properly binds to the private member object of the instance for getters/setters
19 Jan 2013

Getters/setters did not get much attention during the initial development of ease.js, simply because there was such a strong focus on pre-ES5 compatibility---ease.js was created for a project that strongly required it. Given that, getters/setters were not used, since those are ES5 features. As such, I find that two things have happened:

1. There was little incentive to provide a proper implementation; even though I noticed the issues during the initial development, they were left unresolved and were then forgotten about as the project lay dormant for a while.

2. The project was dormant because it was working as intended (sure, there are still things on the TODO-list feature-wise). Since getters/setters were unused in the project for which ease.js was created, the bug was never found and so never addressed.

That said, I am now using getters/setters in a project with ease.js and noticed a very odd bug that could not be explained by that project's implementation. Sure enough, it was an ease.js issue and this commit resolves it.

Now, there is more to be said about this commit. Mainly, it should be noted that MemberBuilder.buildGetterSetter, when compared with its method counterpart (buildMethod), is incomplete---it does not properly address overrides, the abstract keyword, proxies or the possibility of method hiding. This is certainly something that I will get to, but I want to get this fix out as soon as I can.
Since overriding ES5 getters/setters (rather than explicit methods) is more likely to be a rarity, and since a partial fix is better than no fix, this will likely be tagged immediately and a further fix will follow in the (hopefully near) future.

(This is an interesting example of how glaring bugs manage to slip through the cracks, even when the developer is initially aware of them.)

Added `proxy' keyword support
2 May 2012

The concept of proxy methods will become an important, core concept in ease.js that will provide strong benefits for creating decorators and proxies, removing boilerplate code and providing useful metadata to the system. Consider the following example:

```javascript
Class( 'Foo',
{
    // ...

    'public performOperation': function( bar )
    {
        this._doSomethingWith( bar );
        return this;
    },
} );

Class( 'FooDecorator',
{
    'private _foo': null,

    // ...

    'public performOperation': function( bar )
    {
        return this._foo.performOperation( bar );
    },
} );
```

In the above example, `FooDecorator` is a decorator for `Foo`. Assume that the `performOperation()` method is undecorated and simply needs to be proxied to its component --- an instance of `Foo`. (It is not uncommon that a decorator, proxy, or related class will alter certain functionality while leaving much of it unchanged.) In order to do so, we can use this generic, boilerplate code:

```javascript
return this.obj.func.apply( this.obj, arguments );
```

which would need to be repeated again and again for *each method that needs to be proxied*. We also have another problem --- `Foo.performOperation()` returns *itself*, which `FooDecorator` *also* returns. This breaks encapsulation, so we instead need to return ourself:

```javascript
'public performOperation': function( bar )
{
    this._foo.performOperation( bar );
    return this;
},
```

Our boilerplate code then becomes:

```javascript
var ret = this.obj.func.apply( this.obj, arguments );
return ( ret === this.obj ) ? this : ret;
```

Alternatively, we could use the `proxy' keyword:

```javascript
Class( 'FooDecorator2',
{
    'private _foo': null,

    // ...

    'public proxy performOperation': '_foo',
} );
```

`FooDecorator2.performOperation()` and `FooDecorator.performOperation()` both perform the exact same task --- proxy the entire call to another object and return its result, unless the result is the component, in which case the decorator itself is returned.

Proxies, as of this commit, accomplish the following:

- All arguments are forwarded to the destination
- The return value is forwarded to the caller
- If the destination returns a reference to itself, it will be replaced with a reference to the caller's context (`this`).
- If the call is expected to fail, either because the destination is not an object or because the requested method is not a function, a useful error will be immediately thrown (rather than the potentially cryptic one that would otherwise result, requiring analysis of the stack trace).

N.B. As of this commit, static proxies do not yet function properly.

Added signchk tool
18 Apr 2012

This tool can help to ensure that commits have not been falsely authored. For example, if you receive an ease.js repository from a friend, there is no way to verify that a commit from "Mike Gerwitz" is actually a commit from myself unless it has been signed using my private key. This additional check will help to ensure the integrity of the repository.

Please note that automated systems should *not* invoke this utility directly from this repository, unless it is invoked using a previously trusted commit. Otherwise, an attacker need only alter the script to completely evade the check.

Updated package.json to remove deps and correct license
13 Dec 2011

Growing much closer to releasing. Hopefully in the next couple of days; I just don't want to rush it. Though, at the same time, I've been noticing projects popping up with very similar / exact names to this one. A project named "ease" was added to the npm repository and another "ease.js" is on GitHub, although it's made no progress. As such, I want to ensure I reserve the name in npm.
I've been testing the new library at work and noticed only a couple minor issues, primarily due to misuse of the library. Looking good.

Switched to Closure Compiler
6 Dec 2011

This is nothing against uglify. Rather, here's the story on this:

Commit e4cd1e fixed an error that was causing minified files to break in IE. This was due to how IE interprets things, not how UglifyJS was minifying them. Indeed, Closure Compiler had the exact same problem.

The decision to move to Closure Compiler was due to a variety of factors, which came down to primarily feature set and tests. Closure Compiler is well tested and maintained. It also includes a number of additional, beneficial features. UglifyJS is an excellent project and I would recommend it to anyone, but it is not tested (no unit tests; it's tested by ensuring common libraries like jQuery run after minification). It is, however, significantly faster.

It's likely that, in the future, once I add autoconf for the build process to configure certain settings, I will add UglifyJS as an option. I'm sure many people would prefer that, especially those who dislike Java and do not wish to have it installed. Hopefully those that do decide to install Java will go with openjdk, not Oracle's proprietary implementation.

Fixed __self assignment for FF
4 Dec 2011

This little experience was rather frustrating. Indeed, it would imply that the static implementation (at least, accessing protected and private static members) was always broken in FF. I should be a bit more diligent in my testing. Or perhaps it broke in a more recent version of FF, which is more likely.

The problem seems to be that we used defineSecureProp() for an assignment to the actual class, then later properly assigned it to class.___$$svis$$. Of course, defineSecureProp() makes it read-only, so this failed, causing an improper assignment for __self, breaking the implementation.
As such, this probably broke in newer versions of FF and worked properly in older versions. More concerning is that the implementations clearly differ between Chromium and Firefox. It may be that Firefox checks the prototype chain, whereas Chromium (v8, specifically) will simply write to that object, ignoring that the property further down the prototype chain is read-only.

[#25] Finished refactoring MemberBuilder/MethodTest and removed inc-member_builder-common (no longer needed)
26 Oct 2011

Finally feels like things are starting to come together.

It's rather interesting looking back. Each time I begin writing a piece of software, I think to myself, "This is the best way to do it." Well, generally. Perhaps the implementation could have been better, but I may not have had the time. However, the general concept remains. Each time I look back months later, I find that I disagree with certain decisions. I find certain implementations to be messy or poorly constructed. Or perhaps I was just being lazy to begin with. Whatever the case, it is comforting. It shows that one is continuing to learn and evolve.

Now, in the case of ease.js, we're working with a number of different factors in regards to my perception of prior code quality. Primarily, I'm looking at a basic implementation (in this case, I'm referring to test cases) that served as a foundation that could be later evolved. I didn't have the time to devote to a stronger solution. However, since the project has evolved so far past my original expectations, a more sophisticated solution is needed in order to simplify the overall design. That is what happened here. Of course, we're also looking at a year's worth of additional, intimate experience with a language.

Regardless of the reason, I love to see software evolve. Especially my own. It's as if I'm watching my child grow. From that, I can get a great deal of satisfaction. One shouldn't expect perfection. But one should certainly aim for it.
Added very basic formatted output and failure tolerance for test case
10 Oct 2011

The one year anniversary of the beginning of the ease.js project is quickly approaching. I find myself to be not quite where I had expected many months ago, but find that the project has evolved so much further than I had ever originally anticipated. My main motivation behind the project continues to be making my life at work easier, while providing an excellent library that others can hopefully benefit from. If anything, it's a fascinating experiment and clever hack around JavaScript.

Now I find myself with a newborn child (nearly four weeks old), who demands my constant attention (and indeed, it is difficult to find the desire to put my attention elsewhere). Still - I am a hacker. Software is my passion. So the project must move forward.

I also find myself unwilling to create a blog for ease.js. I feel it's inappropriate for a project that's in its (relative) infancy and does not have much popularity (it has never been announced to anyone). As such, I feel that commit messages will serve my purpose as useful journal entries regarding the status of the project. They will also be interesting easter eggs for those who would wish to seek them out for additional perspective on the project. (Granted, one could easily script the discovery of such entries by examining the absurd length of the commit message...perhaps the git log manpages would be useful).

So. Let's get back to the project.

ease.js is currently going through a strong refactoring in order to address design issues that have begun to creep up as the project grew. The initial design was a very simple one - a "series of modules", as it was originally described in a CommonJS sense, that would provide features of a classical Object-Oriented system. It would seem ironic that, having a focus on classical Object-Oriented development, one would avoid developing the project in such a paradigm.
Instead, I wished to keep the design simple (because the project seemed simple), more natural to JS developers (prototypal) and performant (object literals do not have the overhead of instantiation). Well, unfortunately, the project scope has increased drastically due to the success of the implementation (and my playfulness), the chosen paradigm has become awkward in itself and the performance benefit is indeed a micro-optimization when compared with the performance of both the rest of the system and the system that will implement ease.js as a framework. You can only put off refactoring for so long before the system begins to trip over itself and stop being a pleasure to work with. In fact, it's a slap in the face. You develop this intricate and beautiful system (speaking collectively and generally, of course) and it begins to feel tainted. In order to prevent it from developing into a ball of mud - a truly unmaintainable mess - the act of refactoring is inevitable, especially if we want to ensure that the project survives and is actively developed for any length of time. In this case, the glaring problem is that each of the modules are terribly, tightly coupled. This reduces the flexibility of the system and forces us to resort to a system riddled with conditionals. This becomes increasingly apparent when we need to provide slightly different implementations between environments (e.g. ES5/pre-ES5, production/development, etc and every combination). Therefore, we need to decouple the modules in order to take advantage of composition in order to provide more flexible feature sets depending on environment. What does this mean? We need to move from object literals for the modules to prototypes (class-like, but remember that ease.js exists because JS does not have "classes"). A number of other prototypes can be extracted from the existing modules and abstracted to the point where they can be appropriately injected where necessary. 
Rather than using conditions for features such as fallbacks, we can encapsulate the entire system in a facade that contains the features relevant to that particular environment. This will also have the consequence that we can once again test individual units rather than systems. At the point of this commit (this entry was written before any work was done), the major hurdle is refactoring the test cases so that they do not depend on fallback logic and instead simply test specific units and skip the test if the unit (the prototype) is not supported by the environment (e.g. proxies in a pre-ES5 environment). This will allow us to finish refactoring the fallback and environment-specific logic. It will also allow us to cleanly specify a fallback implementation (through composition) in an ES5 environment while keeping ES5 detection mechanisms separate. The remaining refactorings will likely be progressive. This all stemmed out of the desire to add the method hiding feature, whose implementation varies depending on environment. I want to get back to developing that feature so I can get the first release (v0.1.0) out. Refactoring can continue after that point. This project needs a version number so it can be used reliably.
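As a rough plain-JavaScript illustration of that facade idea --- with hypothetical names, not the actual ease.js internals --- environment detection can happen once, at composition time, leaving call sites free of conditionals:

```javascript
// Hypothetical sketch: select an environment-specific strategy once,
// then compose it into a facade; callers never branch on the environment.
function Es5PropertyStrategy()
{
    this.name = 'es5';
}

function FallbackPropertyStrategy()
{
    this.name = 'fallback';
}

function MemberFacade( strategy )
{
    this._strategy = strategy;
}

MemberFacade.prototype.describe = function()
{
    return 'using ' + this._strategy.name + ' members';
};

// Detection happens exactly once, at composition time:
var supportsEs5 = ( typeof Object.defineProperty === 'function' );

var facade = new MemberFacade(
    supportsEs5 ? new Es5PropertyStrategy() : new FallbackPropertyStrategy()
);

console.log( facade.describe() );
```

A pre-ES5 fallback strategy slots in the same way, which also means tests can inject either strategy explicitly rather than relying on the environment.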
https://www.gnu.org/software/easejs/news.html
Introduction

Automated differentiation was developed in the 1960's but even now does not seem to be that widely used. Even experienced and knowledgeable practitioners often assume it is either a finite difference method or symbolic computation when it is neither. This article gives a very simple application of it in a machine learning / statistics context.

Multivariate Linear Regression

We model a dependent variable as linearly dependent on some set of independent variables in a noisy environment:

$$y^{(i)} = \boldsymbol{\theta}^\top \boldsymbol{x}^{(i)} + \epsilon^{(i)}$$

where $i$ runs from 1 to $n$, the number of observations; the $\epsilon^{(i)}$ are i.i.d. normal with mean $0$ and the same variance $\sigma^2$: $\epsilon^{(i)} \sim \mathcal{N}(0, \sigma^2)$. For each $i$, $\boldsymbol{x}^{(i)}$ is a column vector of size $m$ and $\boldsymbol{\theta}$ is a column vector also of size $m$. In other words:

$$p(y^{(i)} \mid \boldsymbol{x}^{(i)}; \boldsymbol{\theta}) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\left(y^{(i)} - \boldsymbol{\theta}^\top \boldsymbol{x}^{(i)}\right)^2}{2\sigma^2}\right)$$

We can therefore write the likelihood function given all the observations as:

$$\mathcal{L}(\boldsymbol{\theta}; \boldsymbol{x}, \boldsymbol{y}) = \prod_{i=1}^{n} p(y^{(i)} \mid \boldsymbol{x}^{(i)}; \boldsymbol{\theta})$$

In order to find the best fitting parameters we therefore need to maximize this function with respect to $\boldsymbol{\theta}$. The standard approach is to maximize the log likelihood which, since log is monotonic, will give the same result. Hence maximizing the likelihood is the same as minimizing the (biased) estimate of the variance:

$$\frac{1}{n}\sum_{i=1}^{n} \left(y^{(i)} - \boldsymbol{\theta}^\top \boldsymbol{x}^{(i)}\right)^2$$

We can define a cost function:

$$\mathcal{J}(\boldsymbol{\theta}) = \frac{1}{2n}\sum_{i=1}^{n} \left(y^{(i)} - \boldsymbol{\theta}^\top \boldsymbol{x}^{(i)}\right)^2$$

Clearly minimizing this will give the same result. The constant $1/2$ is to make the manipulation of the derivative easier. In our case, this is irrelevant, as we are not going to derive the derivative explicitly but use automated differentiation.

In order to minimize the cost function, we use the method of steepest ascent (or in this case descent): with learning rate $\gamma$,

$$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k - \gamma \nabla \mathcal{J}(\boldsymbol{\theta}_k)$$

Implementation

Some pragmas to warn us about potentially dangerous situations.

> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults #-}

> module Linear where

Modules from the automatic differentiation library.

> import Numeric.AD
> import Numeric.AD.Types

> import qualified Data.Vector as V

Some modules from a random number generator library as we will want to generate some test data.
> import Data.Random ()
> import Data.Random.Distribution.Normal
> import Data.Random.Distribution.Uniform
> import Data.RVar

Our model: the predicted value of $y$ given the observations $\boldsymbol{x}$ and the parameters $\boldsymbol{\theta}$.

> yhat :: Floating a =>
>         V.Vector a ->
>         V.Vector a -> a
> yhat x theta = V.sum $ V.zipWith (*) theta x

For each observation, the "cost" of the difference between the actual value of $y$ and its predicted value.

> cost :: Floating a =>
>         V.Vector a ->
>         a ->
>         V.Vector a
>         -> a
> cost theta y x = 0.5 * (y - yhat x theta)^2

To find its gradient we merely apply the operator grad.

> delCost :: Floating a =>
>            a ->
>            V.Vector a ->
>            V.Vector a ->
>            V.Vector a
> delCost y x = grad $ \theta -> cost theta (auto y) (V.map auto x)

We can use the single observation cost function to define the total cost function.

> totalCost :: Floating a =>
>              V.Vector a ->
>              V.Vector a ->
>              V.Vector (V.Vector a)
>              -> a
> totalCost theta y x = (/l) $ V.sum $ V.zipWith (cost theta) y x
>   where
>     l = fromIntegral $ V.length y

Again taking the derivative is straightforward.

> delTotalCost :: Floating a =>
>                 V.Vector a ->
>                 V.Vector (V.Vector a) ->
>                 V.Vector a ->
>                 V.Vector a
> delTotalCost y x = grad f
>   where
>     f theta = totalCost theta (V.map auto y) (V.map (V.map auto) x)

Now we can implement steepest descent.

> stepOnce :: Double ->
>             V.Vector Double ->
>             V.Vector (V.Vector Double) ->
>             V.Vector Double ->
>             V.Vector Double
> stepOnce gamma y x theta =
>   V.zipWith (-) theta (V.map (* gamma) $ del theta)
>   where
>     del = delTotalCost y x

> stepOnceStoch :: Double ->
>                  Double ->
>                  V.Vector Double ->
>                  V.Vector Double ->
>                  V.Vector Double
> stepOnceStoch gamma y x theta =
>   V.zipWith (-) theta (V.map (* gamma) $ del theta)
>   where
>     del = delCost y x

Let's try it out. First we need to generate some data.
> createSample :: Double -> V.Vector Double -> IO (Double, V.Vector Double)
> createSample sigma2 theta = do
>   let l = V.length theta
>   x <- V.sequence $ V.replicate (l - 1) $ sampleRVar stdUniform
>   let mu = (theta V.! 0) + yhat x (V.drop 1 theta)
>   y <- sampleRVar $ normal mu sigma2
>   return (y, x)

We create a model with two independent variables and thus three parameters.

> actualTheta :: V.Vector Double
> actualTheta = V.fromList [0.0, 0.6, 0.7]

We initialise our algorithm with arbitrary values.

> initTheta :: V.Vector Double
> initTheta = V.replicate 3 0.1

We give our model an arbitrary variance.

> sigma2 :: Double
> sigma2 = 0.01

And set the learning rate and the number of iterations.

> nSamples, nIters :: Int
> nSamples = 100
> nIters = 2000

> gamma :: Double
> gamma = 0.1

Now we can run our example. For the constant parameter of our model (known in machine learning as the bias) we ensure that the corresponding "independent variable" is always set to 1.

> main :: IO ()
> main = do
>   vals <- V.sequence $
>           V.replicate nSamples $
>           createSample sigma2 actualTheta
>   let y  = V.map fst vals
>       x  = V.map snd vals
>       x' = V.map (V.cons 1.0) x
>       hs = iterate (stepOnce gamma y x') initTheta
>       update theta = V.foldl f theta $ V.zip y x'
>         where
>           f theta (y, x) = stepOnceStoch gamma y x theta
>   putStrLn $ show $ take 1 $ drop nIters hs
>   let f = foldr (.) id $ replicate nSamples update
>   putStrLn $ show $ f initTheta

And we get quite reasonable estimates for the parameter.

ghci> main
[fromList [-8.34526742975572e-4,0.6024033722648041,0.69483650585735]]
fromList [7.095387239884274e-5,0.6017197904731632,0.694335002002961]

15 thoughts on "Regression and Automated Differentiation"

Nice example. You might want to try the `gradientDescent` routine, which should take over the optimisation bit. Also, it is a one-liner to goose up the model from something linear into something fancier like a multilayer perceptron: `fmap atanh . linearMap w2 . fmap atanh . linearMap w1 …`

Yes, I was going to do logistic regression next, followed by the multi-layer perceptron. It's something of a mystery why the Machine Learning community seems to focus on backpropagation when they could just apply automated differentiation and gradient descent. Also, I was aware of the gradientDescent routine, so I should use it for the logistic regression example.

You may also be able to use one of the other gradientDescent variants, e.g. conjugateGradientDescent, or the Halley version depending on your problem. We've begun actively exploring adding more gradient descent techniques (BFGS, etc.) to the library of late.

I've always viewed AD as symbolic differentiation where the compiler does the symbolic manipulation: is there something wrong with that view?

Ganesh, yes. With automatic differentiation we don't push symbols even in the compiler; we use the partial derivatives of each primitive operation and either collapse them as we go (in forward mode) or build a tree with just the partial derivatives with regard to the particular arguments it was given (in reverse mode). Note we only store the partial derivatives in that tree with respect to the actual inputs given; we don't store a tree node with the actual operation. There is no need to store a node that says this thing is 'Sin' or '*'. This makes a big difference compared to symbolic differentiation. In the absence of a 'let' in the grammar for your symbolically differentiated form, that difference is actually asymptotic in the time it takes to evaluate. Symbolic solutions without sharing are based on the expanded tree, and AD solutions using forward mode can get sharing on the values with no extra effort. The nice thing about using "automatic" differentiation as opposed to a purer symbolic approach is that as you go you have the 'primal' answer, so if you need to use conditional branching based on that primal answer, etc., it still just works.
You can derive symbolic differentiation from automatic differentiation, at least for a given code path, by just using a traced numeric type and lifting it into AD; in reverse mode you have to use observable sharing or other nastiness to cheat, which is of course open to symbolic approaches as well, so in practice this distinction may seem a bit pedantic, but it still makes a huge difference in implementation strategy and use.

"There is no need to store a node that says this thing is 'Sin' or '*'." Doesn't your source program (manipulated by the compiler) contain those nodes?

In the definition for 'cos' for each mode we wind up with something like:

cos = lift1 cos (negate . sin)

where lift1 is a function that defines how to compute the answer as normal and its partial derivative with respect to its input. If we were using a mode like a simple 1-step forward mode like

data Dual a = Dual a a

then

lift1 f df (Dual b db) = Dual (f b) (df b * db)

In a truly symbolic story, I'd be building up some kind of tree:

data Symbolic a = Val a | Cos (Symbolic a) | Symbolic a :*: Symbolic a | …

cos = Cos

Here, the source program still exists, yes, so as I said, the distinction is a bit pedantic, but I also get the full power of the source language. I view automatic differentiation as working on the quotient of the information exposed to an arbitrary symbolic differentiator that is actually interesting to the problem of differentiation: that tree of derivatives, not the operations that got me there. There is a relationship between them that is even nodded to in the wikipedia article for AD, though, and with the addition of symbolic numeric types the line becomes even more blurry.

I guess it's just a question of emphasis: I think that AD is a special case of symbolic differentiation where the machinery of the differentiation ends up being managed by the compiler; the AST the compiler manipulates does have the kind of representation embodied by Symbolic.
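The 1-step forward mode sketched in the comment above can be made concrete. Here is a small runnable illustration (in Python rather than the thread's Haskell, purely so it stands alone; the Dual pairing and the lift1-style rule for cos mirror the snippets above):

```python
import math

class Dual:
    """Forward-mode AD value: carries (primal, derivative) together."""
    def __init__(self, val, deriv):
        self.val = val      # f(x), the ordinary answer
        self.deriv = deriv  # f'(x), accumulated via the chain rule

    def __mul__(self, other):
        # Product rule: (u * v)' = u' v + u v'
        return Dual(self.val * other.val,
                    self.deriv * other.val + self.val * other.deriv)

def cos(d):
    # Analogue of `cos = lift1 cos (negate . sin)`:
    # the primitive's partial derivative is composed with the incoming one.
    return Dual(math.cos(d.val), -math.sin(d.val) * d.deriv)

x = Dual(0.5, 1.0)   # seed the input with dx/dx = 1
y = cos(x) * cos(x)  # f(x) = cos(x)^2, so f'(x) = -2 sin(x) cos(x)
print(y.val, y.deriv)
```

Note that at no point is a 'Cos' or '*' node stored; only the numeric partial derivatives flow through, which is exactly the distinction the comment is drawing.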
I mostly find it a choice of where the operation happens. One can view symbolic as working on the initial representation as a syntax tree and automatic as working on the final representation in terms of composing primitive operations with known Jacobians, where now the only choice remaining is which order to do the composition. Even that description in the wikipedia article acknowledges that if you pull far enough back to the origin mathematics, you get something morally symbolic out where you initially produce the code, but by pushing it down into the code, we just calculate the derivative for the value you are using while we calculate f(x), rather than plug it into an explicit formula for f'(x).

Pingback: Logistic Regression and Automated Differentiation | Idontgetoutmuch's Weblog

You juggle effortlessly two ideas close to my researcher's heart: machine learning and functional programming (though I've never implemented ML in FP, but I reckon there's plenty of opportunity here). So if I understand correctly, your AD library requires (invertible, terminating) functions with known Jacobians as respectively the first and second argument to lift1 (how is the operator that maps a manifold to its tangent bundle called?). Moreover, where can I find some MWEs of your AD library? I've cabal installed it for the time being.

I am not sure to whom you are addressing your comment. The AD library was written by (I think) Edward Kmett and Barak Pearlmutter. I just used it in my blog article on Extended Kalman Filters (). I have probably used it in other places. I don't think the functions need to be invertible, just differentiable. Hope this helps.

To supply a full history of "ad" in Haskell: way back in the day Pearlmutter and Siskind had written a ton of beautiful papers on "Lambda the Ultimate Backpropagator", etc. Pearlmutter and Siskind had used a fresh tag supply to get this safety in stalingrad, their compiler for a scheme-like with built-in AD.
Björn's adaptation let them approximate this in Haskell, at least in a setting where the combinators are properly nested. Some things like 'flip grad' ceased to be well-typed, so some flexibility was lost, and rank-n types can be more painful to work with, but correctness was preserved. Somewhere along the way Conal Elliott wrote a nice article on Beautiful Differentiation describing forward mode, packaging up the sort of folklore of the time on AD. Barak Pearlmutter wrote "fad" with Bjorn Buckwalter and Jeffrey Mark Siskind, which worked on a lazy forward mode derivative tower. The difference between Barak et al. and Conal's encoding is that Björn had borrowed a trick from an article by Chung-Chieh Shan on his blog to avoid perturbation/sensitivity confusion problems. I built a reverse mode AD library "rad" based loosely on its API, and stole Björn's trick for using quantifiers to "brand" your infinitesimals in response to a stack overflow question. To do this I used Andy Gill's work on "observable sharing" from Kansas Lava, so my reverse mode was rather non-standard and instead based on a topological sort of the graph rather than a linear tape. I then went off and built a framework for doing reverse mode via tapes (Wengert lists), various sparse and dense variants on forward mode, etc., hiding the choice of mode behind the quantifier Chung-Chieh introduced. Anthony Cowley helped me scale the "ad" package up to hundreds of thousands of variables to better support his work on computer vision and motion planning for robots. However, the decision of how to "hide the mode" ultimately turned out to be limiting the end user's ability to define new primitive numeric operations, so we redesigned the library and `ad 4` was released after a considerable overhaul by me and Alex Lang.

Invertibility is definitely not required. Termination is only required if you look at it. 😉

Thank you Dominic and Edward for the clarifications and inspiring work.
Unfortunately I cannot get these examples to work due to a build error in one of random-fu's dependencies (I already filed a bug report), but I will soon try the AD library.
https://idontgetoutmuch.wordpress.com/2013/04/26/regression-and-automated-differentiation-4/
Ticket #4554 (tab-separated export)

columns: id, summary, reporter, owner, description, type, status, component, version, severity, resolution, keywords, cc, stage, has_patch, needs_docs, needs_tests, needs_better_patch, easy, ui_ux

id: 4554
summary: Newsforms: {{ field.error }} in HTML Template is HTML (ul) but should be Text (plain)
reporter: djangoproject.com@…
owner: adrian

description:

Using the newforms library I do something like that in my html template:

{{{
{% for field in form %}
{% if field.errors %}{{ field.errors }}{% endif %}
{{ field }}
{% if field.help_text %}{{ field.help_text }}{% endif %}
{% endfor %}
}}}

Which renders fine as long as no field has errors. If the user made an error, and field.errors for that reason contains an error message to display, ''{{ field.errors }}'' renders to something like this: ''
https://code.djangoproject.com/ticket/4554?format=tab
Opened 13 years ago. Closed 13 years ago. Last modified 12 years ago.

#853 closed defect (worksforme): Django has high start costs, weight

Description

My biggest issue with Django right now is that it's so hard and heavy to get started. Here are all the administrative (non-code-writing) things you have to do:

- Install Django.
- Run startproject, which creates six things.
- Create a database.
- Update several variables to use the database.
- Update your PYTHONPATH variable.
- Update your DJANGO_SETTINGS_MODULE variable.
- Run init.
- Run startapp, creating six more things.
- Update INSTALLED_APPS.
- Run install.

10 steps, 12 files. For comparison, here are the only two necessary steps:

- Install Django.
- Give Django the URL to an existing database (a la SQLObject) and a prefix so it doesn't reuse any tables.

Imagine how much simpler that tutorial document would be. Imagine how many more people would just start using Django, instead of putting it off or giving up. Imagine how people would start using bits and pieces of Django everywhere, instead of just their website. I'll follow by posting tickets with more concrete suggestions, but I'd like to propose this sort of simplicity as a new design principle. You shouldn't have to fit your code into Django; Django should fit into your code.

Change History (6)

comment:1 Changed 13 years ago by

comment:2 Changed 13 years ago by

As a practical matter, I think the tutorial shouldn't use startproject and startapp then. It definitely frustrates and confuses people who are just starting with Django in my experience (n=2). I should be able to create one file with the URL conf and views and another file with the database model and just have that work. Then the tutorial should show people how to grow this into an app inside a project when/if that becomes necessary. But the more nonsense like this you put up front, the more you drive people away.
comment:3 Changed 13 years ago by

OK, here's some sample code that should work under the lightweight Django:

import web, db

urls = (
    (r'^foo/$', 'index'),
)

def index(request):
    return HttpResponse("This is a test.")

if __name__ == '__main__':
    web.run()

The web module is a simplified interface to all the common Django stuff. db is where the models for the app are stored. web.run detects whether it's being called as CGI, FCGI, or command line and does the right thing (i.e. runserver for the command line) and gets the urls from its calling module. I'm currently working on web.py -- it looks like it'll require some changes to be able to avoid the DJANGO_SETTINGS_MODULE and ROOT_URLS stuff.

comment:4 Changed 13 years ago by

aaronsw, I think most of your issues would be far better discussed on the mailing lists at this stage. You make several emotive appeals ("driving away", "giving up"), etc. etc. - these really aren't going to sway anyone as to the technical merits. So please bring these issues up on the mailing list, rather than doing a bunch of work and then being surprised that no one agrees with your premises.

comment:5 Changed 13 years ago by

Yeah, I agree with rjwittams -- it'd be great if you brought it up on the mailing list.

comment:6 Changed 13 years ago by

Closing this ticket because it's broad rather than having a specific bug.

Uhm - the startproject and startapp and stuff are only needed when you build _new_ projects - and in those cases you will have to write the actual code. There is no way around that when writing something new - it can't just magically spring into existence. Complaining about startproject and startapp to provide a basic layout is rather weird, especially since they are only there as helpers - you are not required to use them, but it "tells you how to do things" (which is a good thing to do for a framework!).
The other steps aren't really new steps - replacing the requirement to create a database with "point to an existing database" is rather silly, as that existing database needs to come into existence anyway - it's not a step in "getting Django running" that you need to create a database at all - it's a requirement of "using any database" that you need to create it first.

Getting Django to start (for a _new_ project): that's actually not bad for a framework with a strong notion of difference between "project" and "app" - one of the big niceties of Django, as you can easily port and reuse applications between different projects. Mind, this is a tutorial for _beginners_ - you give them basic tools and tell them what they do and give them guidance.

Advanced hackers' way to start a Django project: can't see how that really is a big burden. Yes, I often create my project and app structures by hand.
https://code.djangoproject.com/ticket/853
FEP Alert Timing

Forefront Endpoint Protection (FEP) provides actionable alerts on security events to desktop administrators. The time it takes to receive these alerts depends on multiple settings. The following information explains the alert process and the associated timing for alerts to be delivered to the administrator.

Each FEP alert originates as an event on a FEP client computer. When malware is detected on a FEP client computer, the following actions occur:

- Event ID 1116 is logged in the event log of the computer on which the malware is detected, and the malware is suspended. Depending on the action results, the FEP client logs one of the following events:
  - Event ID 1117 is logged when an action is taken.
  - Event ID 1118 is logged when an action fails.
  - Event ID 1119 is logged when an action experiences a non-critical failure.
- If no manual action is taken by the end user after a prescribed period of time, Event ID 1117 is logged.
- The FEP client calls the Desired Configuration Management (DCM) agent. The DCM agent evaluates the baselines assigned to the client computer and creates an XML report in the form of a state message.
- The FEP client creates a report in the .\root\Microsoft\SecurityClient WMI namespace by using the Malware and AntimalwareInfectionStatus classes.
- By default, state messages are sent to the Configuration Manager management point every 15 minutes. If the client recently reported a state message, the next state message will not be sent until the next 15-minute period.
- This state message is stored in the Configuration Manager database.
- On the instance of SQL Server that hosts the Configuration Manager database, a SQL Server Agent job transfers data from the Configuration Manager database to the FEP reporting database. The SQL Server job is named FEP_GetNewData_FEPDW_XXX, where XXX is your Configuration Manager site code. This job runs every 15 minutes; however, the load on your database server might increase the time it takes for this job to complete.
- The Forefront Endpoint Monitoring service checks the FEP reporting database for events that trigger an alert. The service appears on the task list of the database server as FEPSrv and checks the database every 2 minutes.
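Adding up the polling intervals above gives a rough upper bound on the end-to-end alert delay. A back-of-the-envelope sketch (this assumes each stage just misses its polling window and that processing time is negligible; real delays also depend on server load):

```python
# Each stage polls independently, so in the worst case the event
# just misses each window in turn and waits a full interval at each hop.
STATE_MESSAGE_INTERVAL_MIN = 15  # client -> Configuration Manager management point
SQL_JOB_INTERVAL_MIN = 15        # FEP_GetNewData SQL Server Agent job
MONITOR_INTERVAL_MIN = 2         # FEPSrv database check

worst_case_min = (STATE_MESSAGE_INTERVAL_MIN
                  + SQL_JOB_INTERVAL_MIN
                  + MONITOR_INTERVAL_MIN)
print(worst_case_min)  # upper-bound latency in minutes
```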
https://technet.microsoft.com/en-us/library/gg675286.aspx
For purposes of this question, let's say there is a table schema foo.bar.baz, and we have created a cursor object using the following boilerplate:

import snowflake.connector
ctx = snowflake.connector.connect(...)
cur = ctx.cursor()

With that cursor object, we can put the whole dot-delimited schema into a query like so:

cur.execute('''
select *
from foo.bar.baz
''')

and have no issues, but we wouldn't be able to do:

cur.execute('''
select *
from %(tbl)s
''', {'tbl': 'foo.bar.baz'})

Doing that throws this type of error:

ProgrammingError: 001011 (42601): SQL compilation error: invalid URL prefix found in: foo.bar.baz

I'm guessing this is because the dots are SQL identifiers and not strings, but I don't see any workaround in the Snowflake documentation. Does anyone know how this could be done without having to change the connection object?

Answer

In a FROM clause, the syntax

TABLE( { string_literal | session_variable | bind_variable } )

can be used:

select * from TABLE(%(tbl)s)
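Putting the answer back into the question's Python snippet, the bound parameter is simply wrapped in TABLE(...). A small sketch (the helper name is mine, and cur is assumed to be an open Snowflake cursor as in the question):

```python
def dotted_table_query(columns="*"):
    # The dotted name travels as an ordinary bound string parameter;
    # Snowflake resolves it to a table identifier inside TABLE(...).
    return "select {} from TABLE(%(tbl)s)".format(columns)

# Hypothetical usage, mirroring the question's cursor:
# cur.execute(dotted_table_query(), {"tbl": "foo.bar.baz"})
```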
https://www.tutorialguruji.com/python/how-to-specify-a-dot-separated-table-as-a-parameter-into-the-sql-query/
I, for one, see the power of this feature and welcome it back into the fold. We are moving in a direction that would either 1) place our own VTIs under the diag namespace (ugly), 2) ship a slightly modified form (ugly), or 3) petition to get this added back in.

The VTI API should have the ability to:

- get at the full qualifier list of the given query (right now, it gets a partial list depending on what operators are used)
- make IQualifier a notify API, not a "you must do all filtering" API (see enhancement 703)
- indicate joins when the VTI can quickly build a hashed lookup on an identifier. In other words, if I have a data structure that is indexed somehow, and the VTI is exposing that data, and it is then joined with another similar VTI, can the engine take advantage of an index?
- expose the VTI in the metadata of the database
- keep or augment the current syntax without removing the ability to parameterize a VTI

Thanks... maybe I'll think of more!

-----Original Message-----
From: Rick Hillegas [mailto:Richard.Hillegas@Sun.COM]
Sent: Thursday, November 17, 2005 7:26 PM
To: Derby Development
Subject: Re-enabling VTIs

I have logged enhancement request 716 to track the re-enabling of customer-defined Virtual Table Interfaces (VTIs). A little history follows:

Cloudscape used to let users include arbitrary customer-written ResultSets in the FROM lists of queries. This provided a very powerful ability to impose a tabular face on external data and to join relational data with external data streams. It is hard to overestimate the usefulness of this feature. Today, Derby still ships a number of diagnostic VTIs. However, customer-written VTIs have been disabled. The parser raises an error when it sees a VTI that doesn't live in Derby's diagnostic packages.

I would like to develop a plan for addressing the concerns which led to the disabling of this feature. I would like to see us put this power back in our customers' hands. I can imagine that the non-standard nature of VTI declaration was an issue. Are there other issues? Has anyone given some thought to what a more acceptable API would look like?

Thanks,
-Rick
http://mail-archives.apache.org/mod_mbox/db-derby-dev/200511.mbox/%3CF418C1909C57A14E9456C8BEA88563455B705E@mailbox1.mosol.com%3E
I am about to enable more test cases in ConcurrencyTest, which relies on getting a lock timeout to verify a lock conflict. The problem is that the default setting for lock timeout is 60 seconds, and the test uses a lot of time waiting for the timeout. I can make the test go 10 times faster by reducing derby.locks.waitTimeout (and with all test cases enabled it used 350 secs in the embedded framework).

With the current harness, I could add a ConcurrencyTest_app.properties file and set it there. However, this test is now part of _Suite (a pure JUnit suite). One quick fix for me would be to create a _Suite_app.properties. This would cause all JUnit tests in the suite to run with the same properties. Another quick fix would be to remove it from the _Suite and put it into the "old-harness" suites.

However, if the Derby community is interested in completely replacing the old test harness in the future, it would be good to have a pure JUnit solution to this. One solution could be to create a TestSetup, which can configure the database for a JUnit Test and delete the database once the test is complete, i.e.:

public class DBSetup extends BaseJDBCTestSetup {
    public void setUp() {
        // create and configure db
    }
    public void tearDown() {
        // delete db
    }
}

Other thoughts?

-- Andreas
http://mail-archives.apache.org/mod_mbox/db-derby-dev/200608.mbox/%3C44E3061B.3060505@sun.com%3E
09 July 2010 4 comments Django

About a month ago I added a new feature to django-static that makes it possible to define a function that all files of django-static go through. First of all, a quick recap. django-static is a Django plugin that you use from your templates to reference static media. django-static takes care of giving the file the optimum name for static serving and, if applicable, compresses the file by trimming all whitespace and what not. For more info, see The awesomest way possible to serve your static stuff in Django with Nginx.

The new, popular kid on the block for CDN (Content Delivery Network) is Amazon Cloudfront. It's a service sitting on top of the already proven Amazon S3 service, which is a cloud file storage solution. What a CDN does is register a domain for your resources such that, with some DNS tricks, users of this resource URL download it from the geographically nearest server. So if you live in Sweden you might download myholiday.jpg from a server in Frankfurt, and if you live in North Carolina, USA you might download the very same picture from Virginia, USA. That ensures that the distance to the resource is minimized. If you're not convinced or sure about how CDNs work, check out THE best practice guide for faster webpages by Steve Souders (it's number two).

A disadvantage with Amazon Cloudfront is that it's unable to negotiate with the client to compress downloadable resources with GZIP. GZIPping a resource is considered a bigger optimization win than using a CDN. So, I continue to serve my static CSS and Javascript files from my Nginx but put all the images on Amazon Cloudfront.

How to do this with django-static? Easy: add this to your settings:

DJANGO_STATIC = True
...other DJANGO_STATIC_... settings...
# equivalent of 'from cloudfront import file_proxy' in this PYTHONPATH DJANGO_STATIC_FILE_PROXY = 'cloudfront.file_proxy' Then you need to write that function that get's a chance to do something with every static resource that django-static prepares. Here's a naive first version: # in cloudfront.py conversion_map = {} # global variable def file_proxy(uri, new=False, filepath=None, changed=False, **kwargs): if filepath and (new or changed): if filepath.lower().split('.')[-1] in ('jpg','gif','png'): conversion_map[uri] = _upload_to_cloudfront(filepath) return conversion_map.get(uri, uri) The files are only sent through the function _upload_to_cloudfront() the first time they're "massaged" by django-static. On consecutive calls nothing is done to the file since django-static remembers, and sticks to, the way it dealt with it the first time if you see what I mean. Basically, when you have restarted your Django server the file is prepared and checked for a timestamp but the second time the template is rendered to save time it doesn't check the file again and just passes through the resulting file name. If this is all confusing you can start with a much simpler proxy function that looks like this: def file_proxy(uri, new=False, filepath=None, changed=False, **kwargs): print "Debugging and learning" print uri print "New", new, print "Filepath", filepath, print "Changed", changed, print "Other arguments:", kwargs return uri The function to upload to Amazon Cloudfront is pretty straight forward thanks to the boto project. 
Here's my version: import re from django.conf import settings import boto _cf_connection = None _cf_distribution = None def _upload_to_cloudfront(filepath): global _cf_connection global _cf_distribution if _cf_connection is None: _cf_connection = boto.connect_cloudfront(settings.AWS_ACCESS_KEY, settings.AWS_ACCESS_SECRET) if _cf_distribution is None: _cf_distribution = _cf_connection.create_distribution( origin='%s.s3.amazonaws.com' % settings.AWS_STORAGE_BUCKET_NAME, enabled=True, comment=settings.AWS_CLOUDFRONT_DISTRIBUTION_COMMENT) # now we can delete any old versions of the same file that have the # same name but a different timestamp basename = os.path.basename(filepath) object_regex = re.compile('%s\.(\d+)\.%s' % \ (re.escape('.'.join(basename.split('.')[:-2])), re.escape(basename.split('.')[-1]))) for obj in _cf_distribution.get_objects(): match = object_regex.findall(obj.name) if match: old_timestamp = int(match[0]) new_timestamp = int(object_regex.findall(basename)[0]) if new_timestamp == old_timestamp: # an exact copy already exists return obj.url() elif new_timestamp > old_timestamp: # we've come across the same file but with an older timestamp #print "DELETE!", obj_.name obj.delete() break # Still here? That means that the file wasn't already in the distribution fp = open(filepath) # Because the name will always contain a timestamp we set faaar future # caching headers. Doesn't matter exactly as long as it's really far future. headers = {'Cache-Control':'max-age=315360000, public', 'Expires': 'Thu, 31 Dec 2037 23:55:55 GMT', } #print "\t\t\tAWS upload(%s)" % basename obj = _cf_distribution.add_object(basename, fp, headers=headers) return obj.url() Moving on, unfortunately this isn't good enough. You see, from the time you have issued an upload to Amazon Cloudfront you immediately get a full URL for the resource but if it's a new distribution it will take a little while until the DNS propagates and becomes globally available. 
Therefore, the URL that you get back will most likely yield you a 404 Page not found if you try it immediately. So to solve this problem I wrote a simple alternative to the Python dict() type that works roughly the same except that myinstance.get(key) will depend on time. 1 hour in this case. So it works something like this: >>> slow_map = SlowMap(10) >>> slow_map['key'] = "Value" >>> print slow_map['key'] None >>> from time import sleep >>> sleep(10) >>> print slow_map['key'] "Value" And here's the code for that: from time import time class SlowMap(object): """ >>> slow_map = SlowMap(60) >>> slow_map[key] = value >>> print slow_map.get(key) None Then 60 seconds goes past: >>> slow_map.get(key) value """ def __init__(self, timeout_seconds): self.timeout = timeout_seconds self.guard = dict() self.data = dict() def get(self, key, default=None): value = self.data.get(key) if value is not None: return value value, expires = self.guard.get(key) if expires < time(): # good to release self.data[key] = value del self.guard[key] return value else: # held back return default def __setitem__(self, key, value): self.guard[key] = (value, time() + self.timeout) With all of that ready willing and able you should now be able to serve your images from Amazon Cloudfront simply by doing this in your Django templates: {% staticfile "/img/mysprite.gif" %} To test this I've deployed this technique on my money making site code guinea pig Crosstips. Go ahead, visit that site and use Firebug or view the source and check out the URLs used for the images. They look something like this: If you want to look at my code used for Crosstips download this file. It's pretty generic to anybody who wants to achieve the same thing. Have fun and happy CDN'ing! Here's a screenshot of the wonderful Amazon AWS Console Follow @peterbe on Twitter is there a way to upload all the FileFields and the ImageFields to the Amazon Cloudfront using this app? 
That's one thing I need to do in the near future.

There's an app called django-storage which I've used in another project to upload FileFields to Amazon S3. If it doesn't have Cloudfront support yet, that package would be the best place to start.

Thanks, I'll look into it.

Nice post and explanation. I just came across django-queued-storage, which has a slightly different approach to storing data on Cloudfront: Not sure if the celery scheduled task responsible for pushing to Cloudfront provides all the functionality of your SlowMap, though.
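The post doesn't show the {% staticfile %} tag itself (it lives in the downloadable Crosstips file), but the naming scheme implies a helper along these lines. The function name and the use of the file's mtime as the timestamp are my guesses, not Peter's actual code:

```python
import os


def timestamped_basename(filepath):
    """Turn '/path/to/mysprite.gif' into 'mysprite.<mtime>.gif'.

    Using the file's modification time as the timestamp means that editing
    the file produces a brand-new object name, which is what makes the
    far-future Cache-Control/Expires headers in the upload code safe.
    """
    basename = os.path.basename(filepath)
    stem, ext = basename.rsplit('.', 1)
    mtime = int(os.stat(filepath).st_mtime)
    return '%s.%d.%s' % (stem, mtime, ext)
```

A template tag would then call something like this on the local static file and hand the result to _upload_to_cloudfront(), caching the returned CDN URL behind a SlowMap.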
06 February 2008 12:26 [Source: ICIS news]

LONDON (ICIS news)--European purified terephthalic acid (PTA) producer BP hopes to resume full production at its two plants in Belgium.

Accordingly, force majeure declared on feedstock paraxylene (PX) last Thursday was lifted at midnight, added the spokesperson.

The two plants, based in Geel, were originally closed for maintenance, but plans to restart them were thwarted by industrial action last week. However, the industrial action ended over the weekend and BP is currently concentrating on returning to normal operating rates, the spokesperson added. The total combined capacity of the plants exceeds 1m tonnes/year.

PX spot prices were stable at $1,040-1,070/tonne (€707-723/tonne) FOB (free on board) Rotterdam ahead of BP's announcement, according to global chemical market intelligence service ICIS pricing. PX contract negotiations in Europe have only just got under way following confirmation that the PX contract in Asia rolled over from January at $1,100/tonne CFR (cost and freight) Asia.

"It's too early to comment since

"I expect to see parity with

"There is already a premium for European PX and I see room for an improvement," said another large PX consumer. "There should be a reduction [from January's settlement of €793/tonne]."

($1 = €0.68)

Julia Meehan contributed to this story

For more on PTA