text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
As per this post I am currently going through the process of creating some simple game creation tools using HTML5, more specifically using the YUI 3 library as well as the EaselJS canvas library. This post illustrates the very skeleton upon which we are going to create our app. YUI3 provides a full MVC framework which you can use to create your application so I decided to make use of it. The end result of this code is remarkably minimal, it just creates a single page web application with different views representing different portions of the UI. Specifically, we will create a top zone where the menu will go, a left hand area where the level editing will occur, then a right hand panel which will change contextually. I also created a very simple data class, to illustrate how data works within the YUI MVC environment. First off, if you have never heard of MVC, it is the acronym of Model View Controller. MVC is a popular design practice for separating your application in to logically consistent pieces. This allows you to separate your UI from your logic and your logic from your data ( the last two get a little gray in the end ). It adds a bit of upfront complexity, but makes it easier to develop, maintain and test non-trivial applications… or at least, that’s the sales pitch. The simplest two minute description of MVC is as follows. The Model is your application’s data. The View is the part of your application that is responsible for displaying to the end user. The Controller part is easily the most confusing part, and this is the bit that handles communications between the model and view, and is where you actual “logic” presides. We aren’t going to be completely pure in this level in this example ( MVC apps seldom are actually ), as the Controller part of our application is actually going to be a couple pieces, you will see later. For now just realize, if it aint a view and it aint a model, it’s probably a controller. It is also worth clarifying that MVC isn’t the only option. There is also MVVM ( Model-View-ViewModel ) and MVP ( Model-View-Presenter ), and semantics aside, they are all remarkably similar and accomplish pretty much the same thing. MVC is simply the most common/popular of the three. Put simply, it will look initially more complex ( and it is more complex ), but this upfront work makes life easier down the road, making it generally a fair trade off. Alright, enough chatter, now some code! The code is going to be split over a number of files. A lot of the following code is simply the style I chose to use, and is completely optional. It is generally considered good practice though. At the top level of our hierarchy we have a pair of files, index.html and server.js. server.js is fairly optional for now, I am using it because I will (might?) be hosting this application using NodeJS. If you are running your own web server, you don’t need this guy, and won’t unless we add some server-side complexity down the road. index.html is pretty much the heart of our application, but most of the actual logic has been parted out to other parts of the code, so it isn’t particularly complex. We will be looking at it last, as all of our other pieces need to be in place first. Now within our scripts folder, you will notice two sub-folders models and views. These predictable enough are where our models and views reside. In addition, inside the views directory is a folder named templates. This is where our moustache templates are. Think of templates like simple HTML snippets that support very simple additional mark-up, allowing for things like dynamically populating a form with data, etc. If you’ve ever used PHP, ASP or JSP, this concept should be immediately familiar to you. If you haven’t, don’t worry, our templates are remarkably simple, and for now can just be thought of as HTML snippets. The .Template naming convention is simply something I chose, inside they are basically just HTML. If you are basing your own product on any of this code, please be sure to check out here, where I refactored a great deal of this code, removing gross hacks and cleaning things up substantially! Let’s start off with our only model person.js, which is the datatype for a person entry. Let’s look at the code now: person.js YUI.add('personModel',function(Y){ Y.Person = Y.Base.create('person', Y.Model, [],{ getName:function(){ return this.get('name'); } },{ ATTRS:{ name: { value: 'Mike' }, height: { value: 6 }, age: { value:35 } } } ); }, '0.0.1', { requires: ['model']}); The starting syntax may be a bit jarring and you will see it a lot going forward. The YUI.add() call is registering ‘personModel’ as a re-usable module, allowing us to use it in other code files. You will see this in action shortly, and this solves one of the biggest shortcomings of JavaScript, organizing code. The line Y.Person = Y.base.create() is creating a new object type in the Y namespace, named ‘person’ and inheriting all of the properties of Y.Model. This is YUI’s way of providing OOP to a relatively un-OOP language. We then define a member function getName and 3 member variables name, height and age, giving each of the three default values… just cause. Of course, they aren’t really member variables, they are entries in the object ATTRS, but you can effectively think of them as member variables if you are from a traditional OOP background. Next we pass in a version stamp ( 0.0.1 ), chosen pretty much at random by me. Next is a very important array named requires, which is a list of all the modules ( YUI, or user defined ) that this module depends on. We only need the model module. YUI is very modular and only includes the code bits you explicitly request, meaning you only get the JavaScript code of the classes you use. So that is the basic form your code objects are going to take. Don’t worry, it’s nowhere near as scary as it looks. Now let’s take a look at a view that consumes a person model. That of course would be person.View.js. Again, the .View. part of that file name was just something I chose to do and is completely optional. person.View.js YUI.add('personView',function(Y){ Y.PersonView = Y.Base.create('personView', Y.View, [], { initializer:function(){ var that=this, request = Y.io('/scripts/views/templates/person.Template',{ on:{ complete:function(id,response){ var template = Y.Handlebars.compile(response.responseText); that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height']))); } } }); }, render:function(){ return this; } }); }, '0.0.1', { requires: ['view','io-base','personModel','handlebars']}); Just like with our person model, we are going to make a custom module using YUI.add(), this one named ‘personView’. Within that module we have a single class, Y.PersonView, which is to say a class PersonView in the Y namespace. PersonView inherits from Y.View and we are defining a pair of methods, initializer() which is called when the object is created and render() which is called when the View needs to be displayed. In initializer, we perform an AJAX callback to retrieve the template person.Template from the server. When the download is complete, the complete event will fire, with the contents of our file in the response.responseText field ( or an error, which we wrongly do not handle ). Once we have our template text downloaded, we “compile” it, which turns it into a JavaScript object. The next line looks obscenely complicated: that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height']))); A couple things are happening here. First we are using that because this is contextual in JavaScript. Within the callback function, it has a completely different value, so we cached the value going in. Next we get the property container that every Y.View object will have, and set it’s HTML using setHTML(). This is essentially how you render a view to the screen. The parameter to setHTML is also a bit tricky to digest at first. Essentially the method template() is what compiles a moustache template into actual HTML. A template, as we will see in the moment, may be expecting some data to be bound, in this case name, age and height which all come from our Person model. Don’t worry, this will make sense in a minute. Our render method doesn’t particularly do anything, just returns itself. Again we specify our modules dependency in the requires array, this time we depend on the modules view, io-base, personModel and handlebars. As you can see, we are consuming our custom defined personModel module as if it was no different than any of the built-in YUI modules. It is a pretty powerful way of handling code dependencies. Now let’s take a look at our first template. person.Template <div style="width:20%;float:right"> <div align=right> <img src= </div> <p><hr /></p> <div> <h2>About {{name}}:</h2> <ul> <li>{{name}} is {{height}} feet tall and {{age}} years of age.</li> </ul> </div> </div> As you can see, a template is pretty much just HTML, with a few small exceptions. Remember a second ago when we passed data in to the template() call, this is where it is consumed. The values surrounded by {{ }} ( thus the name moustache! ) are going to be substituted when the HTML is generated. Basically it looks for a value by the name within the {{ }} marks and substitutes it into the HTML. For example, {{name}}, looks for a value named name, which it finds and substitutes it’s value mike in the results. Using templates allows you to completely decouple your HTML from the rest of your application. This allows you to source out the graphic work to a designer, perhaps using a tool like DreamWeaver, then simply add moustache markup for the bits that are data-driven. What you may be asking yourself is, how the hell did the PersonView get it’s model populated in the first place? That’s a very good question. In our application, our view is actually going to be composed of a number of sub-views. There is a view for the area the map is going to be edited in, a view for the context sensitive editing will occur ( currently our person view ), then finally a view where our menu will be rendered. However, we also have a parent view that holds all of these child views, sometimes referred to as a composite view. This is ours: editor.View.js YUI.add('editorView',function(Y){ Y.EditorView = Y.Base.create('editorView', Y.View, [], { initializer:function(){ var person = new Y.Person(); this.pv = new Y.PersonView({model:person}); this.menu = new Y.MainMenuView(); this.map = new Y.MapView(); }, render:function(){ var content = Y.one(Y.config.doc.createDocumentFragment()); content.append(this.menu.render().get('container')); var newDiv = Y.Node.create("<div style='width:100%'/>"); newDiv.append(this.map.render().get('container')); newDiv.append(this.pv.render().get('container')); content.append(newDiv); this.get('container').setHTML(content); return this; } }); }, '0.0.1', { requires: ['view','io-base','personView','mainMenuView','mapView','handlebars']}); The start should all be pretty familiar by now. We again are declaring a custom module editorView. This one also inherits from Y.View, the major difference is in our initializer() method, we create a Y.Person model, as well as our 3 custom sub-views, a PersonView, a MainMenuView and a MapView ( the last two we haven’t seen yet, and are basically empty at this point ). As you can see in the constructor for PersonView, we pass in the Y.Person person we just created. This is how a view gets it’s model, or at least, one way. Our render() method is a bit more complicated, because it is responsible for creating each of it’s child views. First we create a documentFragment, which is a chunk of HTML that isn’t yet part of the DOM, so it wont fire events or cause a redraw or anything else. Basically think of it as a raw piece of HTML for us to write to, which is exactly what we do. First we render our MainMenuView, which will ultimately draw the menu across the screen. Then we create a new full width DIV to hold our other two views. We then render the MapView to this newly created div, then render the PersonView to the div. Finally we append our new div to our documentFragment. Finally we set our view’s HTML to our newly created fragment, causing all the views to be rendered to the screen. Once again, we set a version stamp, and declare our dependencies. You may notice that we never had to include personModel, this is because personView will resolve this dependency for us. Lets quickly look at each of those other classes ( mainMenuView and mapView ) and their templates, although all of them are mostly placeholders for now. mainMenu.View.js YUI.add('mainMenuView',function(Y){ Y.MainMenuView = Y.Base.create('mainMenuView', Y.View, [], { initializer:function(){ var that=this, request = Y.io('/scripts/views/templates/mainMenu.Template',{ on:{ complete:function(id,response){ var template = Y.Handlebars.compile(response.responseText); //that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height']))); that.get('container').setHTML(template()); } } }); }, render:function(){ return this; } }); }, '0.0.1', { requires: ['view','io-base','handlebars']}); mainMenu.Template <div style="width:100%">This is the area where the menu goes. It should be across the entire screen</div> map.View.js YUI.add('mapView',function(Y){ Y.MapView = Y.Base.create('mapView', Y.View, [], { initializer:function(){ var that=this, request = Y.io('/scripts/views/templates/map.Template',{ on:{ complete:function(id,response){ var template = Y.Handlebars.compile(response.responseText); that.get('container').setHTML(template()); //that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height']))); } } }); }, render:function(){ return this; } }); }, '0.0.1', { requires: ['view','io-base','handlebars']}); map.Template <div style="width:80%;float:left"> This is where the canvas will go </div> Now, we let’s take a quickly look at server.js. As mentioned earlier, this script simply provides a basic NODEJS based HTTP server capable of serving our app. server.js var express = require('express'), server = express(); server.use('/scripts', express.static(__dirname + '/scripts')); server.get('/', function (req, res) { res.sendfile('index.html'); }); server.listen(process.env.PORT || 3000); I wont really bother explaining what’s going on here. If you are going to use Node, there is a ton of content on this site already about setting up a Node server. Just click on the Node tag for more articles. Finally, we have index.html which is the heart of our application and what ties everything together and this is the file that is first served to the users web browser, kicking everything off. index.html <!DOCTYPE html> <html> <head> <title>GameFromScratch example YUI Framework/NodeJS application</title> </head> <body> <script src=""></script> <script src="/scripts/models/person.js"></script> <script src="/scripts/views/person.View.js"></script> <script src="/scripts/views/map.View.js"></script> <script src="/scripts/views/mainMenu.View.js"></script> <script src="/scripts/views/editor.View.js"></script> <script> YUI().use('app','editorView', function (Y) { var app = new Y.App({ views: { editorView: {type: 'EditorView'} } }); app.route('/', function () { this.showView('editorView');//,{model:person}); }); app.render().dispatch(); }); </script> </body> </html> This sequence of <script> tags is very important, as it is what causes each of our custom modules to be evaluated in the first place. There are cleaner ways of handling this, but this way is certainly easiest. Basically for each module you add, include it here to cause that code to be evaluated. Next we create our actual Y function/namespace. You know how we kept adding our classes to Y., well this is where Y is defined. YUI uses an app loader to create the script file that is served to your clients browser, which is exactly what YUI.use() is doing. Just like the requires array we passed at the bottom of each module definition, you pass use() all of the modules you require, in this case we need the app module from YUI, as well as our custom defined editorView module. Next we create a Y.App object. This is the C part of MVC. The App object is what creates individual views in response to different URL requests. So far we only handle one request “/”, which causes the editorView to be created and shown. Finally we call app.render().dispatch() to get the ball rolling, so our editorView will have it’s render() method called, which will in turn call the render method of each of it’s child views, which in turn will render their templates… Don’t worry if that seemed scary as hell, that’s about it for infrastructure stuff and is a solid foundation to build a much more sophisticate application on top of. Of course, there is nothing to say I haven’t made some brutal mistakes and need to rethink everything! Now, if you open it up in a browser ( localhost:3000/ if you used Node ), you will see: Nothing too exciting as of yet, but as you can see, the menu template is rendered across the top of the screen, the map view is rendered to the left and the Person view is rendered on the right. As you can see from the text, the data from our Person model is compiled and rendered in the resulting HTML. You can download the complete project archive right here. Design Programming
https://gamefromscratch.com/creating-game-creation-tools-using-html5-the-basic-yui-mvc-app-framework/
CC-MAIN-2021-04
refinedweb
2,998
65.52
Hello , Jimmy Christian. The easiest way is to use an existing message that's already in the Ensemble message store. Locate the MessageBodyId value in the header tab of the message viewer, and execute the following commands in the same namespace as the production: Substitute the numeric Message Body ID for BodyID, the package name for your class for Package.Name, the method for MethodName and the identifier you want to test with for Identifier. The method you mentioned appears to return a string, so you should see the value displayed once you press enter on the 2nd command. Thank you Jeffrey. This looks perfect. I am going to test this and let you know. That seems very interesting. I wasn't aware of such method to import file or refer to a message in the file. Pretty interesting ! Will test this out today and post . Just something i noticed is we have assigned the datatype to messages in tMsg object, which is after the fact of referring it to a filename.ext. So in this case if the message does not match schema/doctype , below command would fail. JEFF > set tMsg.DocType="2.3.1:ORU_R01" Correct? If the message you wish to test is saved in a separate file outside of Ensemble, you can create a new message object with the following: You'll need to set the DocType manually; for messages already received by Ensemble, that was likely taken care of by the business service: Something to keep in mind is that methods expecting a message class as an argument (ex. EnsLib.HL7.Message) work on message objects rather than strings. The tMsg variable created by both the %OpenId() and ImportFromFile() methods are such objects. These objects have a rich set of methods for inspecting, transforming and otherwise working with messages. The DocType is not automatically set by the ImportFromFile() method. You would need to set it to whatever DocType (i.e. DocTypeCategory:DocTypeName) is required to properly parse the message. In a Production, the Business Service that receives messages uses the value in the Message Schema Category field to set the DocType for subsequent processing, and if both a DocTypeCategory and DocTypeName are present it will override the automatic selection of the DiocTypeName determined by the HL7 message's MSH:9 value. Thank you Jeffrey. I was able to pass the value to my function and worked very well. Glad to see such a rich set of methods and properties in the Enslib.HL7.Message which can be helpful in future. I appreciate your help and assistance. Regards, Jimmy Christian. To leave a comment or answer to post please log in Please log in
https://community.intersystems.com/post/studio-terminal-testing-custom-method
CC-MAIN-2020-16
refinedweb
448
64.91
Lars Knoesel <address@hidden> wrote on 07 Jan 2006 22:10:23 GMT: > Hi, > I have a little comment problem within java-mode (GNU Emacs 21.4.1). > public class AClass > { > public AClass() > { > try { > // This is a comment and after reaching the end > of line the comment is broken. > ... > How can I solve this problem? Er, what is the problem, exactly? I think you're saying that as you type a (longish) comment, it gets broken automatically (which is probably what you want) but the new line doesn't automatically get the comment opener "//". Do you still get the problem if you start Emacs without your own configuration? (Start Emacs with the command line # "emacs --no-site-file -q" .) If you can't sort this out, please do one of two things: (i) Upgrade to the newest CC Mode, available from <>; or (ii) Use C-c C-b to get a dump of your configuration, then post that here. > Many thanks in advance. > Best regards, > Lars -- Alan Mackenzie (Munich, Germany) Email: address@hidden; to decode, wherever there is a repeated letter (like "aa"), remove half of them (leaving, say, "a").
http://lists.gnu.org/archive/html/help-gnu-emacs/2006-01/msg00205.html
CC-MAIN-2015-48
refinedweb
190
70.53
#include <deal.II/base/function_lib.h> d-quadratic pillow on the unit hypercube. This is a function for testing the implementation. It has zero Dirichlet boundary values on the domain \((-1,1)^d\). In the inside, it is the product of \(1-x_i^2\) over all space dimensions. Providing a non-zero argument to the constructor, the whole function can be offset by a constant. Together with the function, its derivatives and Laplacian are defined. Definition at line 141 of file function_lib.h. Constructor. Provide a constant that will be added to each function value. The value at a single point. Values at multiple points. Gradient at a single point. Gradients at multiple points. Laplacian at a single point. Laplacian at multiple points.
http://www.dealii.org/developer/doxygen/deal.II/classFunctions_1_1PillowFunction.html
CC-MAIN-2015-06
refinedweb
123
63.36
In the post ‘Forcing evaluation in Haskell‘ , I described how to fully evaluate a value to get around Haskell’s lazy evaluation. Since then, I’ve found myself using the following snippet a lot: import Control.Parallel.Strategies import Test.BenchPress bench 1 . print . rnf This snippet fully evaluates a value and prints how long it took to do so. I regularly use it on the Programming Praxis problems to see where the bottleneck lies in my algorithm.. A short example: import Test.StrictBench main = bench [1..10000000 :: Integer] This code would give 2890.625 ms as output. For the rest of the documentation I refer you to the Hackage page. The source code is pretty simple: module Test.StrictBench (bench, benchDesc, time) where import Control.Parallel.Strategies import Test.BenchPress hiding (bench) import Text.Printf bench :: NFData a => a -> IO () bench = (putStrLn . (++ " ms") . show =<<) . time benchDesc :: NFData a => String -> a -> IO () benchDesc s = (putStrLn . printf "%s: %s ms" s . show =<<) . time time :: NFData a => a -> IO Double time = fmap (median . fst) . benchmark 1 (return ()) (const $ return ()) . const . putStr . (`seq` "") . rnf Nothing complicated, but a nice convenience library that I’ll be using from now on. Tags: benchmark, evaluation, force, Haskell, library, rnf, strict, strictbench
https://bonsaicode.wordpress.com/2009/06/07/strictbench-0-1/
CC-MAIN-2015-32
refinedweb
205
62.64
Eugene, On 25.07.05, Eugene M. Minkovskii wrote: >? Just use the graph.data.list class (instead of graph.data.file). If your list of tuples is called alist, you can use something like g.plot(graph.data.list(alist, x=1, y=2)) HTH, Jörg On Mon, Jul 25, 2005 at 07:56:20PM +0400, Eugene M. Minkovskii wrote: " Perhaps this is foolish question, sorry. "=20 " Can I feed data into the graph.plot() function without saving it " to the file, directly from python? "=20 " I have list of pairs [(x,y)...] and I don't like saving it into " the temporary files and then feed file into your function. I hope " you do something to feed data without saving? "=20 " Sorry again -- I'm newbie in your interface. No-no !! I find an anwer, by myself. Thanks --? Sorry again -- I'm newbie in your interface. -- André, On 22.07.05, Andre Wobst wrote: > (I got the Michaels mail much later then it was sent due to mail > delivery delays by sourceforge.) > > On 21.07.05, Michael Schindler wrote: > > The best solution (which is also not failsave because the circle, when > > constructed, does not know anything about the linewidth) is to insert > > a small straight line into the circle (about 0.02 degrees) > > > > moveto_pt(x+radius,y), arc_pt(x, y, radius, 0.01, 359.99), > > closepath() > > > > With this solution I have not found any artefacts. > > I very much like this solution, although we (and our users) may always > step into the same issue when building some other paths containing an > arc and we don't provide a PyX solution for that ... But then it's not our but their problem. > the user has to > deal with the issue himself. But it's not really our fault. (You have > to remember, that there is something wired in the PostScript spec, > that the circle is described by its center and a radius and the > leading moveto needs to be done at the tangent ... which might lead to > any kind of rounding errors. That's the whole problem.) > > However, I would use an arcepsilon of 0.1, since the absolute position > compared to the precision of the radius, might be different in several > digits. Suppose you do: > > from pyx import * > > c = canvas.canvas() > c.stroke(path.circle(123.456789, 123.456789, 1.23456789), [style.linewidth(1.5)]) > c.writeEPSfile("circlebug", bboxenlarge=1) > > would be bad using 0.01 as arcepsilon. However 0.1 is still fine. And > and replacing a 0.2 degrees part of the circle by a straight line > isn't that bad. So the proposed fix is: I think that's really not problematic. The only thing is that the code might look a bit weird, but that's ok - there is a reason for that. > andre@...:~/python/pyx/pyx$ cvs diff -u path.py > Index: path.py > =================================================================== > RCS file: /cvsroot/pyx/pyx/pyx/path.py,v > retrieving revision 1.219 > diff -u -r1.219 path.py > --- path.py 14 Jul 2005 20:08:32 -0000 1.219 > +++ path.py 22 Jul 2005 15:28:55 -0000 > @@ -1255,8 +1255,8 @@ > > """circle with center (x, y) and radius in pts""" > > - def __init__(self, x, y, radius): > - path.__init__(self, moveto_pt(x+radius,y), arc_pt(x, y, radius, 0, 360), closepath()) > + def __init__(self, x, y, radius, arcepsilon=0.1): > + path.__init__(self, moveto_pt(x+radius,y), arc_pt(x, y, radius, arcepsilon, 360-arcepsilon), closepath()) Fine, looks good. > Jörg, I couldn't reach you (I've talked to Michael), but I think you > like it too, don't you? I'm just checking in this patch to be released > with 0.8.1 ... That patch looks ok for me. Jörg Hi, On 24.07.05, John Owens wrote: > First, what a nice release; everything transitioned very nicely. Over all the release seems to work well, but we've also got some reports about problems. While those were easy to fix, we'll have a 0.x.1 release within the next days. I'm always sorry about those subreleases, but it turns out that we need them, since we want to have a more or less well-working release available to our users while working on the next release. You know, this always takes some time (about halve a year currently). >.) The idea is to use a graph style, where the attributes are "changing" when used several times. I do know, that we lack documentation for that and the concept is not yet really stablized. The point is, that it's a graph-only feature, since it is needed there, but we need to think about making it a general concept. Lets briefly explain the idea: suppose you have a "changing style aware" stoke method in the canvas (which we don't yet have). Then you could do: c.stroke([p1, p2], [attr.changelist([color.rgb.red, color.rgb.green])]) to stroke two paths p1 and p2. The first in red, the second in green. In the graph the style change is even non-local to the plot method call which we could tell a missdesign, but I always got told that I should do it that way: mylinestyle = graph.style.line([attr.changelist([color.rgb.red, color.rgb.green])]) g.plot(d1, [mylinestyle]) g.plot(d2, [mylinestyle]) (The important thing is, that *all* style instance given in the plot list are equal ... the list itself can be a different instance. Then all styles are "changed" as they naturally do in g.plot([d1, d2], [mylinestyle]).) Now, there are some attributes in the graph styles (and in the axis painter BTW, where we use the changeable attributes feature as well -- it even occured there the first time), where we use changeable attributes as well, although they are not stroke/fill styles. In that sence we're not talking about changeable attributes, but changeable *something*. To make the long storry short, the following code shows how this feature is intended to be used: import random from pyx import * mysymbols = attr.changelist([graph.style._crosssymbol, graph.style._plussymbol, graph.style._squaresymbol, graph.style._trianglesymbol, graph.style._circlesymbol, graph.style._diamondsymbol]) # unfortunately, the following does not work until we switch to # staticmethods for the symbols, i.e. abandon to support python 2.1 # # mysymbols = attr.changelist([graph.style.symbol.cross, # graph.style.symbol.plus, # graph.style.symbol.square, # graph.style.symbol.triangle, # graph.style.symbol.circle, # graph.style.symbol.diamond]) mystyle = attr.changelist([color.rgb.red, color.rgb.green, color.rgb.blue]) mysymbolstyle = graph.style.symbol(symbol=mysymbols, symbolattrs=[mystyle]) def data(): return graph.data.list([(i, random.random()) for i in range(11)], x=1, y=2) g = graph.graphxy(width=8) g.plot(data(), [mysymbolstyle]) g.plot(data(), [mysymbolstyle]) g.plot(data(), [mysymbolstyle]) # you could also do: # g.plot([data() for i in range(3)], [mysymbolstyle]) g.writeEPSfile("ownsymbols") Note, that the iteration thing is not finished and not yet well integrated into all parts of the attribute system. For example, the following thing does not work: mystyle = deco.stroked([attr.changelist([color.rgb.red, color.rgb.green, color.rgb.blue])]) i.e. you can't use changeable attributes inside of decorators. André -- by _ _ _ Dr. André Wobst / \ \ / ) wobsta@..., / _ \ \/\/ / PyX - High quality PostScript figures with Python & TeX (_/ \_)_/\_/ visit First, what a nice release; everything transitioned very nicely..) Instead I'm doing something that I think is pretty kludgey: [graph.style.symbol(gSymbol.getSymbol(), symbolattrs=[deco.filled, deco.stroked([gSymbol.getColor()])])]) which uses a class Symbol (below, with getSymbol() and getColor() member functions). I would be very happy if someone could direct me toward writing better code than what I've encapsulated in this class. This seems like a function which ought to be built in. I think the "color.palette.Rainbow" is probably a good start, but it is not documented at all in the manual. I would also appreciate a little better explanation of the "attrs"/changelist objects ("barattrs", "symbolattrs", etc.) which don't appear to be documented either (sorry if I missed them). "changecross" etc. also appear to fall in this category. It seems like these are nice lists that could be used to step through, but I'd like to see a little more explanation of how it's used, since I haven't found a good example. Thanks! This is a wonderful package. Keep up the great work. JDO class Symbol: def __init__(self): self.symbols = (graph.style.symbol.cross, graph.style.symbol.plus, graph.style.symbol.square, graph.style.symbol.triangle, graph.style.symbol.circle, graph.style.symbol.diamond, ) self.sidx = 0 self.colors = (color.rgb.red, color.rgb.green, color.rgb.blue, ) self.cidx = 0 def getSymbol(self): s = self.symbols[self.sidx] self.sidx += 1 if self.sidx > len(self.symbols): self.sidx = 0 return s def getColor(self): c = self.colors[self.cidx] self.cidx += 1 if self.cidx > len(self.colors): self.cidx = 0 return c ____________________________________________________ Start your day with Yahoo! - make it your home page
https://sourceforge.net/p/pyx/mailman/pyx-user/?viewmonth=200507&viewday=25
CC-MAIN-2018-13
refinedweb
1,512
68.67
A First Look at ObjectSpaces in Visual Studio 2005 Dino Esposito Wintellect February 2004 Applies to: Microsoft® Visual Studio® 2005 Microsoft® ADO.NET Microsoft® SQL Server™ 2000 SQL Language Summary: ObjectSpaces is one of the most interesting new features in Microsoft Visual Studio 2005 (formerly code-named "Whidbey"). An Object/Relational mapping tool fully integrated with Microsoft ADO.NET and Microsoft .NET technologies, ObjectSpaces puts an abstraction layer between your business tier and the raw-data tier where the physical connection to the data source is handled. You think about and design application features using objects, and ObjectSpaces reads and writes data across a variety of data sources using SQL statements. (15 printed pages) Note This article is based on the Visual Studio Whidbey preview code that was distributed at the Microsoft Professional Developers Conference in October 2003. Contents Introduction ObjectSpaces "In Person" Mapping Tables to Classes Getting Data Objects Persisting Changes Working with Object Graphs Delay Loading Benefits of ObjectSpaces Introduction When designing the Data Access Layer (DAL) of a .NET-based application, architects normally have two options for setting up a bidirectional communication infrastructure between the DAL itself and the business and presentation layers. The first option is to write classes that move data in and out of the data source using Microsoft ADO.NET objects. The second possibility entails using classes that abstract the schema of the underlying tables, and optionally add logic and application-specific capabilities. In both cases, the application moves structured data through the tiers, while reading and writing data using SQL commands. In the former scenario, extra code is required only to bind fields and tables to the user-interface elements—a task that data-binding features in .NET-based applications greatly simplify. The main drawback is there is a lot of SQL to be written by hand that, especially when the size of the application grows, becomes very complex to write and maintain. Overall, the design of the application is data-centric. Opting for a more abstract and object-oriented model is not a route to take lightheartedly, either. In this case, you have a strong business logic layer, but need an extra layer to be able to persist the object model to the underlying storage medium. This extra layer would completely shield you from dealing with the syntax of the storage medium. The bottom line is that an approach based on objects is elegant and neat on paper, but may consume significant time in reality if you have to write it from scratch. However, when large applications are involved that manage tightly interrelated, hierarchical data, thinking in terms of objects is terribly helpful and often it represents your only safe way out. For this reason, Object/Relational Mapping (O/RM) tools have existed for a long time, and from several vendors. An O/RM system allows you to persist an existing object model to a given storage medium. You should use an O/RM system because you have an object model to persist. You shouldn't create an object model in order to use an O/RM system. You define how your classes map to physical tables and columns and the O/RM system will take care of querying and updating the underlying tables for you. You work with an object-oriented interface and have the O/RM transpose your high-level language into raw SQL calls. At its core, this is just what ObjectSpaces is and does. ObjectSpaces "In Person" ObjectSpaces is an O/RM framework embedded in the .NET Framework in Visual Studio 2005. It provides a suite of classes that deal with SQL Server 2000 and SQL Server 2005 tables in reading and writing. Client modules invoke ObjectSpaces classes in lieu of ADO.NET classes and pass them data packed in .NET custom classes. The ObjectSpaces engine translates object queries into SQL queries and then translates modifications to your objects into modifications to the underlying tables. Data that is fetched is processed and stored in instances of .NET classes before being returned to the callers. The figure below provides a 10,000 feet view of ObjectSpaces in the context of an application. Figure 1. The overall architecture of an application based on ObjectSpaces By design, business rules express the logic of the application and govern the interaction of the various entities that form the problem's domain. Business rules are formulated in terms of objects that closely match specific business entities such as customers, orders, invoices and the like, not general-purpose data containers such as DataSets and DataTables. ObjectSpaces lets developers concentrate on business entities and design application reasoning in terms of objects instead of data streams. ObjectSpaces requires a preliminary effort to map classes to data tables. After that, the ObjectSpaces engine deals with the data source and largely shields you from the details of the interaction. As a result, you design your application using a flexible, reusable, and extensible object-oriented paradigm; at the same time, you keep your data in a relational data store. The ObjectSpaces architecture sits in-between the application logic and the data source and enables the developer to manage data without a deep knowledge of the underlying physical schema. By using ObjectSpaces, you persist objects to a data source and retrieve objects from a data source without having to write any SQL code. ObjectSpaces is part of Visual Studio 2005, and for the time being supports only two data sources—SQL Server 2000 and SQL Server 2005. Mapping Tables to Classes The root class of the ObjectSpaces architecture is ObjectSpace. The ObjectSpace class handles communication with the data source and governs the query and retrieve activity that occurs on the data source. ObjectSpace is responsible for persisting objects to tables and for instantiating objects out of the results of a query. In order to work, the ObjectSpace class needs a mapping schema and an ADO.NET connection object. The mapping schema can be a static resource such as an XML file; alternatively, it can be dynamically built through the interface of the MappingSchema object. The MappingSchema object determines which fields and tables will be used to persist the object data, and from which fields and tables the state of an object will be retrieved. The following code snippet demonstrates how to instantiate an object space. The connection object is a plain SqlConnection object that contains the parameters to connect to the specified instance of SQL Server. The mapping schema is divided in three parts: relational schema definition (RSD), object schema definition (OSD), and mapping schema, which links the previous two schemas together. For your convenience, you can store each schema component to a distinct XML file. In this case, only the mapping schema file must be passed to the ObjectSpace constructor. The mapping schema will then implicitly reference the relational and object schema. The following listing shows the mapping schema file that binds together a few tables in the Northwind database (rsd.xml file) and an object schema defined in the osd.xml file. <m:MappingSchema xmlns: <m:DataSources> <m:DataSource <m:Schema <m:Variable </m:DataSource> <m:DataSource <m:Schema </m:DataSource> </m:DataSources> <m:Mappings> <m:Map <m:FieldMap <m:FieldMap <m:FieldMap <m:FieldMap </m:Map> </m:Mappings> </m:MappingSchema> The <Mappings> section of the XML document above defines the bindings between a database table and a .NET class. In particular, the binding is set between the Customers table and the Customer class. The SourceField attribute indicates a table column; the TargetField indicates a class property. For example, the CustomerID column is bound to the Id property. The relational schema of the source data table is shown below. <rsd:Database <r:Schema <rsd:Tables> <rsd:Table <rsd:Columns> <rsd:Column <rsd:Column <rsd:Column <rsd:Column </rsd:Columns> <rsd:Constraints> <rsd:PrimaryKey <rsd:ColumnRef </rsd:PrimaryKey> </rsd:Constraints> </rsd:Table> </rsd:Tables> </r:Schema> </rsd:Database> As you can see, the schema selects only a few columns out of the Customers table. The column list includes the primary key. The schema is merely an XML description of the table view of interest. The object schema definition is an XML file that looks like the listing below. <osd:ExtendedObjectSchema <osd:Classes> <osd:Class <osd:Member <osd:Member <osd:Member <osd:Member </osd:Class> </osd:Classes> </osd:ExtendedObjectSchema> The file describes a class like the one shown below. In summary, the mapping information above instructs the ObjectSpaces system to serialize and deserialize the contents of a Samples.Customer object to and from a given set of columns in the Customers table. Let's see how to read and write data using the Customer class. To start out, have a look at the methods defined on the ObjectSpace class. (See Table 1.) Table 1. Methods of the ObjectSpace class The methods can be divided in three logical groups—transactional, reading, and writing. Methods such as BeginTransaction, Commit, and Rollback belong to the first category. Their role and implementation is straightforward and rather self-explanatory. Reading methods are Resync, GetObject, GetObjectReader, and GetObjectSet. The GetXXX methods return data packed into instances of the mapped class(es). They differ on the number of data objects retrieved and on the state of the underlying connection. The GetObject and GetObjectSet method retrieve their data by calling GetObjectReader internally. GetObject extracts only the first object out of the result set and throws an exception if multiple data objects are returned. This method broadly maps to the IDbCommand's ExecuteScalar method. GetObjectReader returns a stream of objects—an instance of the ObjectReader class—much like a data reader does with rows in plain ADO.NET. When the method returns, the connection is busy and is closed as soon as you close the ObjectReader. I'll provide an example of this in a moment. This method is similar to the IDbCommand's ExecuteReader method. Finally, GetObjectSet returns a disconnected collection of data objects retrieved by the query. The return type is ObjectSet. The method is the ObjectSpaces counterpart to the data adapter's Fill method. How does the mapping between application objects and data source types take place? Let's have a look at the signature of the various methods. The first parameter is a Type object which identifies the data object to work with. Here's a code snippet. The system creates a query object based on the specified query string and runs it against the data source. Internally, the GetObjectSet gets a stream of results and processes individual rows loading data into fresh instances of the specified type. Finally, a collection of those objects is returned and the connection is closed. It is important to point out that the objects returned by all GetXXX methods are automatically tracked for changes by the ObjectSpace engine. It is not necessary to attach them to the ObjectSpace engine using the StartTracking method. More on this later. The Resync method takes a single object, or a collection of objects, and refreshes their data by running a new query to collect up-to-date information. The method takes a second parameter—a Depth enum value—which indicates if the object specified is updated, or if related object data (i.e., child objects) is also updated. Writing-related methods are StartTracking, MarkForDeletion, and PersistChanges. StartTracking identifies the object as a persistent object. When the method is invoked, the object is given a state value, is added to the context of the ObjectSpace system, and is tracked for changes. The tracked state of an object is broadly equivalent to the RowState property of a DataRow object in an ADO.NET table. The state indicates whether the object should be added to the data source, deleted from it, or just updated during the next call to PersistChanges. An object marked as unchanged will not be processed. StartTracking must be used to insert new objects in the data store, but this requirement should be gone by the time Visual Studio 2005 ships. MarkForDeletion modifies the state of the object so that it will be deleted the next time. MarkForDeletion is equivalent to the Delete method of the ADO.NET DataRow object. PersistChanges starts a batch update process and persists all ongoing changes held in memory to the data source. You may pass a single object to the method, or a collection of objects. The PersistChanges method implicitly starts a local transaction before performing the updates at the data source. If the update fails, the transaction is rolled back and an exception is thrown. Otherwise, the transaction is committed and the updates are written to the data source. Let's take a closer look at these methods and the overall ObjectSpaces architecture by working out an example. Getting Data Objects To take advantage of ObjectSpaces in your Visual Studio 2005 applications, you need to reference a couple of assemblies—System.Data.ObjectSpaces and System.Data.SqlXml. The latter is needed because it is internally referenced by the primary ObjectSpaces assembly. For your convenience, you can also import the System.Data.ObjectSpaces namespace in the source. Make sure you have the mapping schema files in the same directory as the executables, and then use the following code. Dim ConnString As String = "..." Dim conn As New SqlConnection(ConnString) Dim os As New ObjectSpace("map.xml", conn) Dim query As New ObjectQuery(GetType(Customer), "Id = 'ALFKI'") Dim reader As ObjectReader = os.GetObjectReader(query) For Each c As Customer In reader CustID.Text = c.Id CustName.Text = c.Name CustCompany.Text = c.Company CustPhone.Text = c.Phone Next reader.Close() The code is part of a Windows Forms application that displays a form with a few text boxes—customer ID, name, company, and phone number. The query to run is represented by an instance of the ObjectQuery class. The class constructor needs a Type object and a query string. The type object specifies the business object to use to exchange information with the data source. An essential point to note here is that the query string must be written according to the persistent fields of the object (fields or properties), not the columns in the database. The query string above returns Customer objects in which the Id property equals ALFKI. Queries for object data are written in a new query language named OPath. As the name suggests, OPath is similar to XPath and enables you to specify queries for objects in object-oriented syntax. The query is passed to the GetObjectReader method and a stream-based ObjectReader object is returned. The contents of an object reader can be enumerated using a For..Each construct. The reader component returns ready-made instances of the specified type, Customer. Once you have each database row mapped to an instance of a user-defined class, binding properties to user-interface elements is a child's game. The figure below demonstrates a simple form populated using the above code. Figure 2. A Windows form filled using ObjectSpaces classes To read the contents of an object reader, you can also resort to the Read method, which would position your code on the subsequent object. (This approach is not really different from using the Read method on an ADO.NET data reader object.) The object reader must be closed when you're done with it. Objects returned by the GetObjectReader method are automatically tracked by the ObjectSpace system, and there's no need to add them to the ObjectSpace context. Persisting Changes The PersistChanges method is responsible for writing back to the data source any object that is currently being tracked by the ObjectSpace object. Objects retrieved through GetXXX methods are automatically tracked, but what about new objects? Here's how to add a new object to the tracking system. You create and fill a new object, and then add it to the ObjectSpaces context through a call to the StartTracking method. The call requires an initial state for the object to identify it as new to the data source, or as an object that already exists. The InitialState enum counts two items: Inserted and Unchanged. When PersistChanges is called on the object, a new row is added to the data source only if the initial state equals Inserted. It is interesting to notice that the Customer class in the above example has a Fax property. However, this property is not mapped to the Fax column in the underlying Customers Northwind table. (See the map.xml listing above.) As a result, when the above change is persisted, a new record is added to the Customers table. Figure 3. Changes to the Customers table inducted by the ObjectSpaces system Existing applications that make use of batch update and DataSet objects find in the pair ObjectSet class and PersistChanges method a direct counterpart. ObjectSpaces guarantees that all the objects added to an ObjectSet object are automatically tracked. You work with an instance of the ObjectSet class in much the same way you work with a disconnected DataSet. When you're done, instead of updating through a data adapter, you call the PersistChanges method. Working with Object Graphs ObjectSpaces can also handle hierarchical data and graphs of objects. A typical scenario is when a one-to-many relationship exists between the Customer and the Order object. Consider a Customer class enhanced as follows. The new Orders property will contain an array of orders: all the orders issued by the given customer. The code that processes the data retrieved looks like the following: There are two elements that make this code differ significantly from any other similar-looking code we considered so far. The most important aspect is hidden from view here—a new mapping schema is used that is aware of a join relationship between the Customers and Orders data. The constructor of the ObjectQuery class is different, too. In this case, the query object is built from three parameters: the type of the object to return, the OPath query string, and a third string known as the span. A span is a comma-separated string that identifies related objects that will be returned by the query. Specifying a span value of "Orders" ensures that the Order objects related to any Customer object are returned by the query as well. All orders are automatically packed into the Orders property. As for the mapping schema, the <DataSource> section undergoes some changes and now includes a new <Relationship> node. <m:DataSource <m:Schema <m:Variable <m:Variable <m:Relationship <m:FieldJoin </m:Relationship> </m:DataSource> The <Relationship> node defines the inner join between Customers and Orders on the common CustomerID field. In the relational schema file (*.rsd), the description of the Orders table can't lack a foreign key constraint. The object schema file now features a new node named <ObjectRelationships>. The node describes the type of relationship set between a parent class (Customer) and a child class (Order), both defined in the OSD resource. Figure 4 demonstrates a sample console application that makes use of spans to retrieve hierarchical data. Figure 4. Hierarchical data retrieved through ObjectSpaces Delay Loading To improve performance and memory use in parent/child relationships, ObjectSpaces provides a facility known as "delay loading". It works both for one-to-many and one-to-one relationships. The idea is that child objects are loaded in memory and on demand only at the time that they are requested. Two new objects are involved with this functionality. The ObjectList provides delay loading for a one-to-many relationship; the ObjectHolder provides delay loading for a one-to-one relationship. Both objects can be considered as container objects for the data to load on demand. ObjectList exposes the actual object through the InnerList property. ObjectHolder exposes the delay-loaded object through the InnerObject property. Both properties are declared as type Object. To gain the benefits of strongly typed programming, you typically wrap the delay-loaded object with an ad hoc property of a particular type. In the following code snippet, the property Orders is implemented through an internal member of type ObjectList. The get accessor of the property casts InnerList to its actual type, OrderList. Delay-loaded properties must be properly mapped to ObjectSpaces. You use the LazyLoad attribute in the OSD mapping file. Additionally, the OSD needs to reflect that the relationship is based on the private member, not the public member. Data associated with the delay-loaded property is retrieved the first time the property is programmatically accessed. This happens transparently to your code. Note that ObjectSpaces does not implicitly refresh the contents of a delay-loaded property once it has been retrieved. To force a refresh of all related objects for a specified delay-loaded property, you use the Fetch method on the ObjectEngine object. Benefits of ObjectSpaces Although far from being complete, ObjectSpaces qualifies as one of the most interesting new features in Visual Studio 2005. It is an Object/Relational mapping tool fully integrated with ADO.NET and .NET technologies. It puts an abstraction layer between your business tier and the raw-data tier where the physical connection to the data source is handled. You think about and design application features using objects, and ObjectSpaces does the dirty work of reading and writing data across a variety of data sources (currently it works only with SQL Server 2000 and 2005) using SQL statements. ObjectSpaces plugs into an application as easily as any other .NET assembly. The programming model of ObjectSpaces is aligned to ADO.NET as much as possible. ObjectSpaces, and any other similar framework, adds some performance overhead to any application and requires that developers get acquainted with its programming model. Today, quite a few points appear critical for the success of ObjectSpaces. In no particular order, they are: the efficiency of the tracking system, the lack of ad hoc mapping tools, the quality of SQL code, and perhaps the overall performance. Mapping tools to automate the creation of schemas will be available by the time Visual Studio 2005 ships, if not in the Beta 1 timeframe. A user's sample mapper utility was demonstrated at PDC and can be found here. Changes to the tracking system are in the works today just to improve its overall performance and effectiveness. By the way, in Visual Studio 2005, we're talking about a product that today is far from being released and that has not even entered a public beta testing phase. I want to emphasize that this is only the beginning. Let's wait and see; it sounds terrifically promising. About the Author Dino Esposito is a trainer and consultant based in Rome, Italy. A member of the Wintellect team, Dino specializes in ASP.NET and ADO.NET and spends most of his time teaching and consulting across Europe and the United States. In particular, Dino manages the ADO.NET and .NET Framework courseware for Wintellect and writes the "Cutting Edge" column for MSDN Magazine.
https://msdn.microsoft.com/en-us/library/ms971512.aspx
CC-MAIN-2015-32
refinedweb
3,821
55.13
tittjrk &fbtt. 4 JV a' , roST-OrKICR AT C.UTIIRir, OK , A SUoin i ASS MTTIv OM III (IF I'll I ICA1IUN ! HANRIROII AlKRl'K ami jnjimMM GUTIIRIR, OKLAHOMA, TUESDAY MOTJSL 0, SKPTKM BUR 5, 185)3. NO. 230. ft! ? KnTBRKD AT TUB iw.nfciaWliiai"jS'.'i jl H. 1 . r, K . C 11 i..l 1 I S In till .1 V dM m 1 "y ,.;v.u BL J ft OF THE FIGHT PEBBf ''V IAVE3 COMMITS SUICIDE HIS PRISON CBLL. UIHS HE WAS Q3U JUCD TO D3ATH Trjtslril .equrl In III" Mr? it Itirntiliy ur.lir M.tt-ry- n I. -Hurl In Ills ' Wife th" l'illU ll tlltterly AtliciM II M I'.rmemtors Worn Out n i I "!Tiiiintd. llf I.Ivji I p. rhNvwi, Col.. 1 h i bJr Urate On.raf Mm. Jo luiUctl suicide III S'pt I -Dp T the i mviit I po.s l!nni' lt.ii ii tby, coin lnscell tit tli" lounty in r'atttrduV night, piest.injh'y by Viking pot hoi i lie u.i found stiff ttnd cold in dea' Ii Sum! iv in it mug by tht "trusty" who has I) en .11 111 for lllm. On "his ii'i)H v. is founil llic fo'lowing It'll m io tueliron . or 1) ioi DfcnVc.ll. Col , n 'i IK ir Mr I'lcise ilo not hold un uiiti,is u.i nw niniliis Thi eauso of di al h ji v 1 r t I nil un follows "Dtnd from li uuon Wor.i uut K tousled ' uursi pitifully T 't II ATI IIFH 111 AN IS M 1) The corf so was iuit' colli w lion fount Ni direct evidence of suicide ai wihle, but the above letter tells tho story There weie also letters to Mis Unives, wife of the doctor; to .Ittiler C. ws and tin address to tins public. 1 hat tne pi isoner hail long con tempi 1 ted taking his own life. Is uviclcnt front the (Kite of thu letter, August '.i, lust. Anotlu r loltct was addressed to the jailer It rend ns follows AC..2 IHBt It would kn .1 ntntn busy 10 follow HtiM-ns and uabluahin ly pcildli-i or two, I o iner, ni'i I daily pro.i n to 1 c n Mass ihu 1 its Mate the Conniitiiul Me in r tliu lit s whlih 111! ul 1 i the pi ors O-ie u iit in uud tin v uru I nus a un mbi r of "I iliial sOLily iiml of l so 11 IT I in vor mideapi 1 itiou to 1 lio Khodu Island Milo Medical soi n ty for aum n Ion My U that lnc n.tliy, en 1 side of 1 1 Mrbut I coward u i on v 1 dear, 1 i line tlie full riioipt. showing iinn-el t'e 1 l.iti of Mil liar ,it tin i Malt un" ou . lur out It All ltv will 'I'll' puhllr 810 u 1 1 en 1 I111111 iuo Mlianous 1 s 1 u N 01 Mi un I cm 110 ix 11 1 n . 11 iiilu.. him I must t iku t 1 1 10 pi t it fur my wife uud 1 ,1 ttl uthor I '1 H Tt III It GllU l.s I -. I ittir In tho I'iiIiIK. tier win h lr T 'llutdiir Wished ; n to fie public 1. . and 1-- a - 101 o In .tves 11 'ii In . 11 1 iUMTYiJAIlSlHENtTliB, lc355322SSS5SSl 1 jHiUiuunBjaB'iLfnf' IJEwitj; )i ysiib w3MpRMMtDiPto.lK' SvMiaV. I Htf (pTMIV VHH 1 WOlm BWti WnV nrvr"or 01 r . 1 tM-i ifmhf 1 t'oUS. !Ull 01 UuyI.t I null who never Ii km I of dirty mi 1 end wbo waa aun juon In mo ott At 1 n vie found overytliinv n 1 1 lutoly under Ills tontr. 1 1 baJIUTa. too court offlcia.-. 1 the court the dopuilt 1udo nnd Uo Jury (Sui learned that no man wa lean he had Urit agreed fromlaea of political prerii eclvtd wore ireoly offci ed n Forty years tto a ma rminoctli'Ut . ud my ftvth ovrt some land. 'I In- sou Urn tvn, and pld tho t, tho uaua) manner of t-ui not know until louii iiftci formod since the trial aoim recclrol political uppomtn a, d Hojio ar ' profsastuu j I lie Jur.' In cis- vbere li C). These tUinr are i What posslb.o iiinor 0 atalnit Sfvuna baked l nurchatsiblo jtiryf oni ul im faeodod 300) am iitr ilt t 1 1 1 I . 1 . 1 111 n 1 -ill t o i Si lau ami 1 1 niitn, a ilit. miv . himself, 1 1 1 Hi 1 lit , I 1 1 f 'IlO.M d ' I I 1 I l. - 1 1 r 1 mi"! l1 1 ' 11 1 1 tnu 1 rkil tin Inpurabli ' I t .' trh w h I 11 1 li j 1 v ' p ki i 1 1 1 ' ' li 'in ; n t a -. u .1 1 'I .! - 1 1 I I JLUX 1 1 Pit 11 njr have - in 111 itven, . 1 tin 1 upm il a.K ili " noun u Il.. r truiKi! 'i I 1 1I1 ajurtj nil 1 (I that .Oil WllBOkS! Informc-l of hor husbaiul's death shortly before noon She wau at tlie house of Attorney Thomas Macon, who 1ms so nbly defended her hus band, where she. has been stopping for aoino lime past The poor woman was deeply alTeetod by tho news, and foi a time nobody could comfort her. Accompanied by Mrs. Macon she hurried to the jail, only to llnd that the body had been taken to tho coro ner' office. The new r of tho removal of the roinains caused another alTcet Inff scone, and tho poor woman sat in a da?c for some time. Then she was led to the apartments of Jailor Crews, wlicro she remained for some time, lnOhntng and crying lailer Crews, in an Interview, indig n intlv denies that Graves committed suicide. He says hut? the doctor died of a broken heart, and to Uho tho jail er's words, "was murdered by the attorneys for the state, who have liar rassed the old man to death." About tho first of August County Commissioner Twomblcy went Hast to seo the w itnesscs for the prosecution and ascertain whether or not they would attend tho trial. .lust before Mr. Twomblny's return, tho doctor, in an interview", exhibited symptoms of beirg distuibcd over tho results of Mr Twombly's tiip. He said in an Intctvlow, that he believed tho prose cution would bring here a lot of wit nesses to slander him, and ho said it would bo onlv fair for the county board topty the expenses of his wit ness s if they paid the expenses of the witnesses for tho prosecution Ah is well known, Dr. (Iravcs was in prison awaiting his second trial for the alleged murder of Mrs. .Tosephino llarnaby of Providence, who, at tho lime of 'her death was visiting friends in Denver. bho died Apiil 10. 1801. On April !l sho drank from a bottle of w liisky that had come by mail from ISoston and was labeled: "Wish you a happy Xow Year. Please uc-ept this line old whisky from your friends in tho woods." Tho w liisky contained a solution of nrsonio. Dr. Graves was accused of sonditig tho bottle. After one of the most famous trials in tho criminal an nals of this country, Dr. Graves was convicted of murder in the first de gree and sentenced to be hanged The supreme court giunted him it new tiial which was to have begun the latter pars of this month DiUrlct Attorney Stewim Tulln. Attornoy Stevens has prepared a length statement to tho public in de fense of his conduct in the Graves case, and gives out the new evidence secured on his recent trip to the n.ist. Tin most important is a letter, the yvrt twot WH taflfiwty s tnai ono nimuay jHTHenttJrlRjf,tlie month of Nbvcm- Mlwtf lSMllHtiwM sitting tn the smoic fWfi&Vt$!fflu. k Square station of UStfMJBWrfHXJlt? in ail, 111 Jiusiou, BiiPflLiM.H.!iMiuu' tii) mil said if ho 4 111- I i. 11.1 -A-t.A .. f.. 1.1... iji'l J vvuillil will ii uuii; mi iiim n t lie wiuiii give nun a ,iun icr, nun what he wrote, ho sa'8, was ex iutH tli same, woid tr word, as what was sent with tlat bottle of wh.shv to Mrs. Harntbr. When Mr. Stevens was last, in ompany wltli Comniissiouer Twom)ley, the two visitctl Canton ami talted with llrcs lyn, the writer of lite letter, at lougth. His description of tho man who had tho noto written tallied with that of Dr. Graves, and they had no doubt but that Mr. llrcslyn would be able to identify Gmes Although "he It ter n Mra. On.?' liM not been flv thr p ', oonttwits arc feo". ' . i . 1 tut .W. tl'.1A LI- B' tn i i' coi iiiim tho )itM moni , 'm , ,!4 probably ithut neiv irootl 1 ti.lt to him. lie therefore ii 1 leu irrnJ hi, life and tenia His irV- n,ml ...1 iaother what ho was pouet-Mfi of in ordtsr SIGHT. si.t,tt--rTwR; rvjn. voonHEts threatens THE SILVER SENATORS. WANTS A VOTE OH REPEAL BILL The sllicr Men II. M n Coiifurrnpc 11111I Ilcrlilo to rrolonir tliii, IlelMiln nil I.tinc us riHsllilc'-Slr rcTr In trniliicr-i 11 "oiv Sitbtruniury hrliemn Tlio JlnrttT Wiirit- I up on the Halo. TUB L'.OYS IM DL'JE. Tliey r Mnrrlitni; (in liiillnnnpolln In the (ir.iiul Itnitiiiipineiit. IxPiv.VAi'Ol.is, Ind , Sept I. In dlfliinpollis IR in holiday nttire to wel come tlie votonms of the Grand Army of the Hopublio nnd their friends Kor the past tlirco wooks he cltiiotm' ov octttlvo coininitteo lias been nctlvelr at work nndof an executive iKMird composed of members of tho Cninmorclttl club, making the nritinfrements for the greatest event In thu hlstorr of the city The completion of these arrangements 1ms been marked br tin exh'bitlon of at tentHn to dotails which has lieen pro duetivw of by frho best piuparotl oity in which the untionnl encamp incut Imis ever beonOiehl. V ! EXPRESS MESSENGER CHAP MAN MURDERED. Wasiiixotov, Sopt 4 --A wnmiitu was given In the Unll 1 States senate Stnl lll'HnV flint (lint urn lalfvlif tuiilitt Ind to force tho sonnto tr a roto on the wM l,orhaP l,rovo l!18 mosl''i-"-stroiis repeal bill. The warn'ng was given TWO KILLED, FORTY INJURED. Dl-introtn strout Cnr Arclilont In CI11- In lint t Six Hurt Ilrjniii Itcruii-ry, CiNris.vATi, Ohio, Sept- 4 What BOLD HOLD-UP ON THE 'FRISCO. I'olleil In "llii-lr Mtonipt In t'upliire t'lN- l-un-iiN stlf ,n lll-jlinrjiiieii Ituli thi. 1'u-iseni-nM All llmlr l'lirlii- liln ultiulili.', Alioiit 1,011(1 In All, 'lliKoii Mtiiinil Vnl- lo., Kiiiu, tho Sioiic. by Mr oorhoes of Indiana, chair man of tho llnance coiimittee, in tho form of a notice that ho would ask the senate Monday to consider it mo tion to begin tho tlail sessions at 11 o'clock each morning, instead of at noon "1 have a sort of an old fttslilonctf idea," said Mr Voorln-ts in giving the notice, "that wo shotiil always Bttb mit to the will of the najoiity and for Unit reason, I will ask for a vote of the senate on this preposition." Instantly tho silver senators con strued this into meatiug that early meeting and long se-sions would be tho rule and that whin the spf echos of the opposition slioulrl be exhausted, 11 demand for a vote would lid made and if necessary an appeal to cloturo rule be made Asa result thev held a hurried confeienco nnl their plan of action will ,be to always hnvo tt man prepatcd for a bpoecl; s'o that there may be no dangerous Intorvnl in tho debate, similar to tint of Friday afternoon. , Tho sil'-er men will lot allow a vote to bo tnken until tliey nre unable to hold the floor any longer. That is the point to which the light will finally bo brought, for the men from the silver states will never consent to a vote beiug reached in any other wav. It Is doubtful if the Democratic majority nouid cvei consent to the adoption of a cloture rule, no matter how lony the talk may be strung out. What the fieo coinage men seek to accomplish bv delay was indicated by Mr. Vnuco in his speech He urged the advoeatis of trco sil 1 r to hold on a little whili 1 mger and spoke of tho iinpioti'incnt already goin , on in bus iness timing lout I lie t 1 ntr. Uho silver men hope tlltft-tf th , can delay a voto long enough the 1 oiiditiou of the country wlU bkwjufticiently im proved to weaiten tho demands of tho ficople upon the senate, for action nnd 11 this way they ran Anally got tho advocates of lepeal to consent to somo sort of 11 compromise. A NEW SUBTRCASURY BILL Introduced liy Mr. I'rlli'r Hut Not Ito.ltl tin 1'roilsluiK. Washinoon, Siopt J. Sonator PcfTor of Kansas has introduced a subtreaa- ury bill. It was not lead but was ro- 1 the judlelary oonnilUee. It street cnr accident that over happened took place in this city tst evening An Arondalo car packed with peoplo dashed down tt hill at frightful speed, left tho 1,1 aelc, bioko a telegnph polo and sho. lto tt saloon, w ret king both Itself and tho structure it struck As a result of the collision two people aro dead, six aro injured beyond recovery and nearly forty more are hurt, many of them dnuguroiisly. RESULTS OF THE TORNADO. I'ully linn Tlminuilit l.li I. ml In tlm Storm Sucpt Illitrlit. Cnm.K8iox, S. C, Sept 4 lteports from the storm swept distiicts In crease in horror I'ully 1,000 lives were lost In nenrly all of tho churches of Charleston collections wcto taken up .vest onlay for the suf ferers by tliu storm, and a huge sum was realized. Thu pastors of tho colored churches have tailed a mass meeting to raise a relief fund. 'I tin Mute Until. Tin. WsiiiiT)N Sept 4 Secrotnry Carlisle, Speaker Crisp, Hoprosentu-tlvo-DeWitt, Warner of N'ew York: Representative Hall of Missouri, nnd Represent itive Oaltrs of Alabama, held a conference at tho treasury de partment looking to tho repeal of tho ten per cent tax on state banks. It was ono of a series of such confer ences which hits had this subject under consideration President ( lereland Is represented us favoring tho proposi tion to repeal the bnnk law if a meas ure can be framed which will ofTsc1. thu diffcultles in the way of a rehabil itation of sta'c banks. Tho plan suggested is to repeal the tnx on btitte liiuiktf and provide fin them tt uniform currency printed and issued by tho general govnrnmenl un-u uu 1. ue eiiiHBcs 01 saie ami ac ceptable bands nnd securities properly guaranteed by statesornniniclpalitioR. AroliliUluip Irelind Sptiki- J Ciiicaoo, Sept I. The world's Inbor congress last night was crowded. Tho chief attraction was an address by ArchbiRhop Ireland. The famous archbishop was enthusiastically le ceived and frequently his remarks woro interrupted by round after round of applau&o, 1 thi 1 tn t .V. I The i. it, tl. 'all 1. lUc Jawla. 1. St !-, k 1 sec 1 ot tcveral titttltut on amend I i iltii that they might live In comfort Crt ;iol:c ooncrbss. of th tieasmv I tnt uid w c to the .le jommii menu t n,000,()0 in pajx-r for ea;h loO.oOO InUabitautm, or in otber words at th rate of 960 rr onpliav nils money '. in "sou or th lud .0 und jury. It aln.i, o.al'ud to r tKi ? , , L .- ... ,.n1 .1 . ..1. Hit.. .... .1 tl.,1 mm lull wiwi; m 1 . 14 . . 1 t , ti. ., - .. he wim ".Jajlnufor mi slake The eupnune to.iri 11 luiurjuo tn thf noi emphatic, iwtlnii' il inttrr woids id nouueed the trU1 unt ir 1 t and din d and ordered 11 n mil ' imt. nmr found 1 roof Unit n Uu ira w. outtoouly one Yiiiiuss 11 1 iltv.t of aoiuer m i" n th n sounded -as to li h r In . .. - '' In r Buuimonotl m iu , 1 - tl ta lav-ir ,-le-.cu, nad thla s .line I m.mis ball Ilia The iurv w is 1 11 kr cti.u.i "' mf nf n unrf itllliins ul 1 11 ttl til Mld 1 . '11 hunt b ono 1 1 tin n 1 . ' 11 -"- . .. . . ana arounu umu 1 r u ui 1 jTor m ntU i.nd m littlo, ieraed n ' whlata eooertaiMi 1 1 have tow not h ii t-tevena. in tin 1 lie aeknowlodoil ' and control to d dareaolda.Moai. tut .1 new lrl.il 1 ' tiauMaVtM to lurulsi for. od to remain 1 i uu try aummor until 1 .iir.l l b.okon dou 'I r. w dmtriit r 1 taclji aupolntcd. hut 1 relrti HUi more, ani raacia aa he J ' or.' l! o 1 INVI "T . m ect wrltv in m 11 '.n- ' lncas,1) 1 !, I 11 1 I m,uisiil In in I 1 titll 1 . e. lilt liy 1 1 111 vinous una I 1 t lonn r trial md 1 orlc IielcjTKte i'ull train and l Ki i.j. . i wmmittee ou fur the thlu. con. . vening at lhc Ujnd . . chairman .,, .., ratlfittd the work dono b . , .ri.oir selected tho following olll . ,, Uiatrman, Judge Vf,..;""; to supremo Profoaaor unt l .1 1 11 fin Pan porui 4 SI, f1-'. t, tenner bil s A h deliver rrtTi-:' and no Interest 1 its use b' tlie The ttati s recc prohibited f.om I an interest t liar, to be n.strlbnted , t!0, etc., lemtl i of tli! st .rl t ro ( cot 1 1 Ja it-1 t-inp 1 ii v i.retir fill I. US ill AlVfcW- , 91 .Mt... l. tittkit, !, (tolotiil.ia w' -s. '"V I . ' .' 7 .'"r".- ,rWn N, ft.I. ...ii.j 1, .. ! II 1111 T ,!- ."- .. w .. N'1 "ii I IIiCkjIc'mi ner it llJW of cn-.l' 1 li id nJi.i ri'l to a nun p I 11 ' ' t ' tin mil' . 11 1 uud o in1 1 11 ion I 1 t-d. anil imi sltt'o I . I'uu retulncd 1 1 d 1 huv Lh . ti I ,i) iron h Ut n In t II t at s-, to il Qii 1 1 i mi inhiic 1 1. ti iniint SI. icti 1 ' . t0 up si 1 u ii as the jutln ii in t what 1 t to 1 At the prvi li 1 un 1 oat to f tho co uimi i " liooawinu K Krancia i.eilurnlrul0,'or night w.ilUnl . . it 1 oft the aiuruny of the ti im auJ was klllod. P,tfor" sence was not dlscorerod un "' u" , tl u ...ftl.M1 "Ills I"" .. 'itr ti i' l'a. lani1 lout ody waa win H flril gies peal wUl- la' bten. . . - im.1 t nti CM MaeBeaf. ,,, .,.,. to ii. .11, . uiiu a tr 1 la MB - . - . . . . , . lw M..i..,.l """M lie attor uauix .iu,riT w..,. . IIJmi H ill thm cijol and dot'j ' PoSfm' . 1 i prou.Uly Induce ihi.n I'onup 1 r J 1.11 Im 1 . t u I 1 a I 1.1 I im I Uu.1" .iJilirti 1 1 iiuiri ThnnhoJci itfl'SCsf .'! at v TiiiM,rl l v atiw&uu u ayaaillTformed i' 3Horaui niditli JrtStBh IftthcrV ,fs8IlI, " J .1, 10 niinaienco s.i 11 ll he 1 to ue ,1 o( lids whole inis tte I .11 I 1110 lallalran.eth I I I 1 ituiilnuo the hi-lit' I tl ,1 , . 1c al 1 ,hl I 1 11 inn ai. 1 ud 1 1 ' 11 i not lur I I , 1 .''ir is enr d.s.rai a wi r 01 in tui.1 ( ... lurli iHll Kll IH T ! "..."u "'" - ,rn ,1, . Illl'lUiiwa ..--- .- ,w "' V4rat Kiaeti n,sriiif ,. . -Ou fTortt n . ih ra i In? iinwtc1 iM , . 1. ' . !? vt I ) 1 aio'hpor' ot has luadt. 1. n a r'jlatl rlso ' c'riuuu.' 8nf1 taken i' e m w the -t 11 ito -ii COUltll I j 1 1 ill' . .1 inatnUr 1 jh S bill to it.o uu.. yi p. 4m . pared a Oil L.3 W. of tho t i.i nrtra. 0tc e ro- iw under Vw York h in per- U lias itM.-r in ek, t .- lion in the lious- trod ui- 11 lor IrM ahldt, WAitHitoTitN sept t- Con Georne W I itiiian of unnr-'ercssmaii chairm.1' .' fie cunurtv''' ,nc new IliioiahiaudaBd twit anions " ' -,,, ..inner notoi' ' 111 t red of it 1 ' ' Vlort ne m n!niun " - au. IP la-t; ., ' - ...... tr . ii ineui fflfor ,uIU'rW'IBOli " J . 1 .pibeexpenoiua. f " V" '...ill. .vDubrlKn VZ: m. or deed '' j write it the r c uiut -w -- I -1.. ierlo.1 'l' I '-' "" Vld iind icre 1 1.1 i'P I up ,. 1 11 1 P' iui lied lit i V uJtff 1 I chant 1 pared ante 'rf c n rci. COUUtl Ii me 1. -i atre on jriu 4la cries I as pre tt ulcli will rcceivt a favor--t ti m h , oonimilico, pro 't.- 1 er vd'nlssion to Amor t 1.:' oi ps built in foreign f'rfjf-i-ree of to le iliHrgfd eneial piveriitne u g tlm moi ey tli' i v vs o! .'.I'.i per unnuiu. Tho presuici.t of ' United States and national treasurer, onescuttor and tiro intuibe-H nf tho house of ropiesenlativos uie to be a committee, to see each state shall choose uomm tactions to givo bond for secure handling of 1 money' rccoivod, tho bond to be ap-1 proved by tho governoi of tho state. I 1 his iiionoy is not to be lent on landed s ctirlty of less than an undoubted j viluo ofSJ.dOO for every fl.OW iasttcd aod no 0110 pel sou is to loceive more ' th.iii S'J.OUJ. Corporations uto not al- 1 lowed to loud the money The time for which money is lent is sixteen years, but ono fourth of the total amount ia to be naideverv four viinm lnimiu.i m to became duo iiniiuaHy ufter earning it No fees or couunii.iau8 nre to bo charged to one soliciting or procuring a loan. All lands and improvements forfeited for non-payment of principal or interest aro to go into public do main. Other money than metal now outstanding is to bo called into tho troa ury and destroyed. The aecrc mr of the treasury is required to pn i 5,000.000 fifty-eon t bills nnd the sanit number of twenty-fivo cent bilU, to b. Eold by postmasters. Amendment bovouteon prohibits the OawKnii IC1111 , Sept I. Three men, 0110 masked, hold up 'Frisco passenger train St. 2, at Mound Valley, Kan., nt 3:13 o'clock yesterday morning, shot nnd Instantly killed liprcss Messen ger Chnrlos A. Chapman, lobbed near ly all of the passengers and escaped. Mound Valley s a littlo station only fifteen miles from the Indian Terri tory line. The bandits aro now pro bably safe within that retreat of out laws. W hen tho train pulled into Mound Valley two of the bandits boarded tho engine and ono remained upon the platform. A moment later tho colored porter stepped from the train to usslst it lady in getting on A Winchester was thrust into his face and ho was told to throw up his hands. Instead of complying he 1 ushed tho woman into the car and locked the door Conductor Mills, knowing nothing of the potter's experience, came forward from n rear car and met with tho same locoptlon. lie, too, refused to comply with the icquest to tlnow up his hands and ran back to the sleeper. Meanwhile Messenger Chapman had loft his car, whether to escape or to notify the passengers will never bo known, for he had gone but a few yaids when ho was . iscovored by tho outlaws upon tho engine who opened lire upon him. Only two shots were filed, but one ball from n Winchester crashed into his brain and ho stag-! gored and fell beside the track, dead Then the outlaws commanded the I engineer to pull out and mil until he was told to stop One mile and a half down tho load ho was told to check up and dismounting, tho men pro ceeded to rob the train In killing I hnpmnn they had shut themselves on' of the Wells-Fargo safe, however, for it was locked and successfully re sisted their blows with a coal pick Foiled in their nttotnpt to loot tho safe the bandits turned their attention to the passengers. tth tho e ceptlon of those m tho sleeper, uvorf mun an I woman "was robbed. Montr , watches, jawelpf, hits, coat, ati'i oven a bottle ft wlilsky was taken It is estimated thai fully 31, 000 in 0,11) and va liable ymif, secured. Then, learlnf the tram, ' the men iliappiurfd in Hie darkm c ! It is probable that tin 3- had li j s,., ju waiting and rode for the ter-unty Tho train va ran lml t M nd city, wncro 1 iiipiuau a uenu i a us recovered, anil then journey. io unTurumnn iiiesarng lived at .Idplin. lie va 4 vum nl,l and leave a wife to whom l.i, waa recently marri.srt hen tut ' 1 1 1 I't-acher) this place a poTtSc wa 11 tul up nnd started in pa suit of b ul'. . .- imi .en is llt.le hope tnat th y wilt l i-in mred. c lefof police of I'al.tUi. .. a- I the llt'gro torter of u 1 tho iraiut .ad w, re armed, and they offered no lesistanue vhaterer All of the pa-aengera interview etl aay that the lohir dianlaved a ennl- for I iiited States semTtor "ini aSSf '2. WM R"PU'- remarkable and ("rrirm vj OUR SHOES TALK nntl ttie nt)caW 1unn s of misi nll fa ItraiKl ttf f)Miuit . mi ih t ir mir imr. 11 dmsn t p i , -. .-u u ,i fH)txenr lien tuir-. m .il m m'mh iiU'aHanl as a at aiion Our ii U im It otir in iten an- u11, cinu .nut mi It 01rll w 1 mi-i 1 1 Hi. (NKkMlMMik. Don'i 1 w I Hi th. th il . 1 ihiikmI) tu "in t' i 1 wii it 1I1 me tl4 '1' tl! lis Hill lilt 111 s Hid t 'u li U u .air slim s in.1 ear iiothhir t Nr EISENSCHMIDT & HETSCH. FAlrlF U jL Sl J JimI 2Lm.a If 109 HARRISON AVENUES1 ? Everything In the DRUC T WALL FA1JRH Prescriptions Filled Day or Night CiUl .U) AT GOBI , A, G, HIXOH, Prop'r. W TI I 1 l'HONi; CONNKCTI0N.-W ich Sece3nd Hand Stor 5-i Jo Mall war M'rnlt 11 1 Strrator. 8TRf.AT1.1s, 111., Sent, 4 On Satur day n glit terrible tf-eek occurred in . Jie otaktriA ot tin It , -.,, H j In , o" itnJ f- I the d alb .Vr..v' v ' 'uw ' seriiius 11111 lea u a ilc?en n nr'i r ' sons. 'J lie iilino s 'S liry ai d Nor b am branch ol tha bin.uig.ua nt u tbrougji a bridge IiibiIU' Allnae.l Aniblttiilt. Toiek, Kan. Nipt 4 It is said 'S21JH-ts4jhtv- t'r lflfiji9will be a ean- ui'iuie iur 1 linen states senator luii .' succeed W IVftVr, and as a ate iiing htone to Jus Miia tonal aspira ti'm.wUl le a caa tdate for tho lie pnl icau nominat o'fYor governor next )" 1 ho Hanii' Hu.o (ilir I'raiposU. I.ommiw, Sept 1, That the homo rule bill will be re looted by the house that 1 of lords in short order ia certain, nnd that an appeal to tie country will follow is equally cerltin, but that Mr. tilailstane will live to lend it second light for Ireland is 114 probablo. New Goods &taH3mi-MGCi Seeour Gasoline Stoves beat. Sold light DOW STJS pairing oi Gas.Hii,.; bt A. H. RiC i BigtiWi r ' ialt. :rOMD, Fi"- R Couiproiiilaeil Witl llepniltorj. Kkvaiia. Mo , Sept i.' It is reliably leported that a couiroinlae has been made lctwoen tho dree tors and de positors of tho defuuc Hnrtley bank ing company of .larva Springs. Tho diioctor have turned over $l?,tKM) to pay the depositors. I! imi 11 tlier to the Ltopln'a I'urty. Dkkvkii, CoL, Sopt 1. The Kocky Mountain News pubiihas a letter from T. M. Patterson its propnotor, now in Washington, announcing its political allegiance lurunftor to tho People's party on aoeoint of the silver issue. import 01 any puuiic money in it nre '"mu,u' ruining war at 1 vate or incorporated bnnk other lliiyi jl l"t been suuaUiiliuteil troasury or sub-treas- M 1 I lia hall 1 i 1' 1 U" 1' plllill .lure ltni tllv a 4:l Vuo 1 tie nil 11 MUttiknu lor 11 lliirelur. ' vMHIiit Mo., hept 4. - otspXen ".j sjbea, a well known farmer, was us- "" ' taken for a buiglar ten miles soutlvf nut ' bedAlia at 1 o'clock yesterday 1031,. ill '"'I ing, and was shot 111 the heat ly J. nank Hollaway, who Is also a fira- li,, mi ,ir. Tlie woui'(tis not jaiai. .1 a-o.-, h la '. c, I'eoble Ml ml ml, lluv VuKI inst ' ,i' K.cip,, WAy.VM.n,rt 7t i. 1 to for juiightj y i&capi d 00 .11 safely .-pt i.'St. lntnrt.t.l .1. ... tnlghiandnn ruction. it iwthi.werliall (aifen .lb tho national uries. Amendment eighteen provides for lb,, fro., pnlnAtr,. ..f l.nl, rv.1.1 .! .it ' .... .... bW.H.t-U WA "U.l fUll WUll ftl vor and in ordir to carry out this great ork additional mints aro to bo established near tlm uiines. AliienditMMit ID prohibits aubtreas urcrs from buying gold or silver or re ceiving gold or silver for deposit and issuing substitute money therefor. Amendment 20 divides the national tieasury into two separate depart ments one to receive all revenue due tho government and disburse, and the other to issue, and distribute money to the states and rcdeqm mutilated bills. At WurU on the llulo. Wasiuxuto.v, Sopt 4 The house returned wearily to the debate ou the rnies baturttay morning, not over moiubers,bjjtng on th floor wjieji speakeryrqppo.d thr gavuL ,' M. Itoiluct T Shot. 1 A Ills Sopt 4 J. . 1, Ing u m , n,v ijj,, , ,, , I , jr , 1 ; ,1 Trooin Orileroillto Itohjr, HLKiiAiir, Ind, Sit 4. Recent rumors of 11 coming war at Uoby have and the Klklmrt and houth l!eldmlllta;y oom- paiueo liavo beon ordatod to lontle vousat I,aportu. 1 Mlsmiurl r.iulllo mot Hurueil. llooxvit.i.B, Mo.Sojt. 1. Tho Mis souri Pnclflo depot liurned to tho ground at 9 o'clock ye erday morning It Is suppos d to lutv loen set on bro by a trump. Loss, ostluutod at Sl.OiH). NEWS NOVES. actiili'lfe oirVn nd at fW "wwAacss. THP EOUAL SUFPclACIBTSi. Xliey Adopt n l'latrncin for the Umiias CumpaiKii at 1891. , Kaasab Cirv, Kan., Sspt A The uausus rtqual suffiage oiivuntlon bus adopted tho following platform Wlicroaft, The Momon lr. copveoiwn as semklod In Kansis City, Kan., rcoonlzo und belluto that the sutimtsulon of the equal sut fra e 11t11011d1ne.it at tho present tlmo l.s on ev olution nnd not u ruiolutlon, that It Is Btuiply one liure step In the progress of il II gororn- linorii, und thallt Is in 11 eplrlt or holpfulnuss iiuu uu. uuiu iiiimui iuii nu uai( ino support of tho men of Hits uiubo thcruforo bo it KiHoluil Tuut inasimiuh us thiroaro intho auflriko rinks women ot 110 pollllcal parties and uomen of no poiitkul affiliations ar.d aluo women of nil churches and uomon of no church nnd wheroas, those-m omen urea unit it thoir demand lor the ballot and aro norkiiu toji'thor tor their common causo theroforo, bo lt further Itesolted. That wo declare It 10 bo tho do tormlned polio) of tho Kansas Hquul hudravo Misooiatlon to I'ouline ln uork for the umetiil luent Htnctly to arguments and propupnmla fur tho enfranchtsoment of nomea It Is not expectod, nor will lt hi nskud of hi women of tho seiT.il partlu.s that Vc; khould coaso their actllltles und their .iiulous work for their respouthu pur tie tot imi most emphatically Mutu that all xpouUora uuil workers, under tho ausplcos of the umnndmeut campalun committee, shall ro train Irom ar.'Uinont for or rercremo to their party Issuea Inasniuch as we reconlie tbo present 1 rials und t.ie slgnlllcancu thereof nnd the relation or this moiument to the political partle thoretore do It Itemiived, That all pollttcul p&rtlus of tho stale hall bo and hereby aro asked to embody In tbeir county and state platforms expres sion faiorlng the adoption of tbo pending amendments neaolied. That we extend to the Republi cans l'opull8tH and I'rohlbltlonints of thoiu lounties which haio adopted uneiiultocal equil NuSraxo planks In their platform our hoartT thanks nnd congratulations upon their sagacity aud iiroarounho ioaitlon. The rally ilomil this oicnhu with on ad draw by busau II Anthony, Mrs T J Smith, Mrs. I.eae and others P UHvUwm HEATJQlAlUiV fci)IL88 AND SPXs?$rr- SADDLES FROM $ yevrm Td!8iu Chi uu 1115 vee our Mai 1 .ir inembfir the number, 111, 11., . IiaillaOil CAPITAL CITY BOOK fi Rniilaa m 7 ,1 c m m a k u trf n n ii. r, -m uHaaVav TBftf itn'il nu wtrtpryu" UVl.lt.l l 11 lilje f SAUNDh -c" c Hon. J. Proctor Knott is spoken of ub a sueoesaor of Mr Wount as Minis tor Ui Hawaii. The Hitltimoro and Ohio railroad has decided upon a tonper cent reduc tion of snlaiios of employes receiving more than $130 per mou,th St Joseph, orso on. rue thej Ula" l0U Pr 'nontli tlTifSlP-, was sneedfnff hU ih lako road the animal belted, throwing ," ' lion aft and orushlngblji 1 skull. lJ BEADLEJS BLOCK. I A full line of Books. SuavnrhriS3t mb, bSsbKsbbT 5i aavrvr Supplies alwftjf tfll hi H.A. BOYLE, Propri in. CUv LOOK HERE I I Am Here Attempted to Strut 11 l'ortuue. San Kit vxcisco, Cal., Sept. -1 The burned steamer San Juan, on ior lust trip from Hon? Kong to Manilla, had on board 200,000 ounccsof silver wortii about SS50, 000 all of which disappeared. No signs of it can bo found by divers working on tho wreck Chief JJa gincer Webb and a number of ie bluuiuor's olllcers huve beenni.-i.l and great oxeltcrooM pr. vuils ,,1 v- ico. thoJMJillllplneK '' - W ' over tlie tinfeovcry of was fuunu 1 m m Sin wnia, "i. , . iiiua If you uro in want of tbo Colobratod CJgsfHlMU Sale, o ind liurgiar rrooi, . iMn want of the ColojjaJ Aowrtooi K- - or I'iri If you aro' Homo Sewing Mnohinty If you uro in want of MF King of Scae. S , awl Trtcycl'eifc a, Kowir, th Oriel, Warwick. tVaiPM King. WeTOftfraM. t Traveleu thJ&r Vail an4 the Boad Qtv rotalJ,comoafla get my prices, .t Khi 1. . to Stay! r'ire or Huiglar Proof fefli ir"lg9 gt-H. k n .lehen, JrCCl 1UD SHIVELY BROS - -aIlM ' m m l..l nnrllLSnn.ll..liirl mid , ".' " V. - UOlie, U one ,, h'8w;;"-:rp' r W"U' banjlecoPIc i ,.,. r' -nicin. Cut out this ad ver- , i . .einent and send with your order. tt.ia.iAun r i & vanHIie BEST in tlie CJJ vt 'Address The PerniH.' t f-ouls, M l'lrst-aluss livurv bara ,(Sslar a4 Or improved facilities for rfyini pas-euers bet' always ready to start Ht ki i ti 1h t i ' . r LJ2. riaJLaJOsiHBin B 1 1 fWll'll UIBBBHBBJIi. lr" tarn. ? m xml | txt
http://chroniclingamerica.loc.gov/lccn/sn86063952/1893-09-05/ed-1/seq-1/ocr/
CC-MAIN-2017-09
refinedweb
6,212
83.36
SYNOPSIS #include <nng/transport/tls/tls.h> int nng_tls_register(void); DESCRIPTION The tls transport provides communication support between nng sockets across a TCP/IP network using TLS v1.2 on top of TCP. Both IPv4 and IPv6 are supported when the underlying platform also supports it. The protocol details are documented in TLS Mapping for Scalability Protocols. Depending upon how the library was built, it may be necessary to register the transport by calling nng_tls_register(). Availability The tls transport depends on the use of an external library. As of this writing, mbedTLS version 2.0 or later is required. URI Format This transport uses URIs using the scheme tls+tcp://, followed by an IP address or hostname, followed by a colon and finally a TCP port number. For example, to contact port 4433 on the localhost either of the following URIs could be used: tls+tcp://127.0.0.1:4433 or tls+tcp://localhost:4433. A URI may be restricted to IPv6 using the scheme tls+tcp6://, and may be restricted to IPv4 using the scheme tls+tcp4://. When specifying IPv6 addresses, the address must be enclosed in square brackets ( []) to avoid confusion with the final colon separating the port. For example, the same port 4433 on the IPv6 loopback address ('::1') would be specified as tls+tcp://[::1]:4433. The special value of 0 ( INADDR_ANY) can be used for a listener to indicate that it should listen on all interfaces on the host. A short-hand for this form is to either omit the address, or specify the asterisk ( *) character. For example, the following three URIs are all equivalent, and could be used to listen to port 9999 on the host: tls+tcp://0.0.0.0:9999 tls+tcp://*:9999 tls+tcp://:9999 The entire URI must be less than NNG_MAXADDRLEN bytes long. Socket Address When using an nng_sockaddr structure, the actual structure is either of type nng_sockaddr_in (for IPv4) or nng_sockaddr_in6 (for IPv6). Transport Options The following transport options are available. Note that setting these must be done before the transport is started.
https://nng.nanomsg.org/man/v1.2.2/nng_tls.7.html
CC-MAIN-2020-10
refinedweb
347
57.06
javax.namin.ReferenceMathew Blackberry Aug 7, 2003 2:09 AM I have used an example to get an MBean to register a class with JNDI. however when I try and get the class using Object fa = (Object) ic.lookup("fileManager"); System.out.println(fa.getClass()); the output is javax.naming.Reference. I have used an example I found on the Web to register the class with JNDI. ------------------------------- InitialContext rootCtx = new InitialContext(); Name fullName = rootCtx.getNameParser("").parse(JndiName); Name parentName = fullName; if (fullName.size() > 1) parentName = fullName.getPrefix(fullName.size() - 1); else parentName = new CompositeName(); Context parentCtx = createContext(rootCtx, parentName); Name atomName = fullName.getSuffix(fullName.size() - 1); String atom = atomName.get(0); NonSerializableFactory.rebind(parentCtx, atom, fileAction); --------------------- Am I on the right Track? I would have expected to do something like the following. FileAction fa = (FileAction) ic.lookup("fileManager"); fa.test(); but this throws ClassCastException. Thanks in advance. Mat 1. Re: javax.namin.ReferenceMathew Blackberry Aug 7, 2003 10:44 PM (in response to Mathew Blackberry) I have found that this works fine locally. i.e from EJB but not from a separate application. This is fine, as this is what I want it for anyway. 2. Re: javax.namin.ReferenceJon Barnett Aug 8, 2003 12:10 AM (in response to Mathew Blackberry) You're binding to a JNDI namespace that is not accessible externally. You have the same problem if you try to locate a DataSource from outside the JBoss JVM. 3. Re: javax.namin.ReferenceMathew Blackberry Aug 8, 2003 6:29 PM (in response to Mathew Blackberry) Is <Statement 1> --------- FileAction fa = (FileAction) ic.lookup("fileManager"); fa.test(); --------- any different than <Statement 2> --------- FileAction fa = new FileAction(); fa.test(); --------- Obviously the top one is already instantiated by the MBean. However at the moment I am a little confused by the difference in the two statements above. How does the first statement (if it indeed does) get around using file i/o from within an EJB? (assuming fa.test() copies the contents of one file and creates a new one). Does the second statement also get around this. Or do neither of them? Do I need to try another way. Although I should add that it does work at the moment using JNDI the way I want it too and does not have any problems deploying or running. I am currently using 3.0.6 would I get errors when deploying on 3.2.x or 4.0? Thanks Mat 4. Re: javax.namin.ReferenceJon Barnett Aug 8, 2003 8:14 PM (in response to Mathew Blackberry) Essentially, the object that performs the file I/O should not be created within the EJB container. Ideally, some might say the file I/O establishment should also not be performed in the container - opening and closing the connection is performed outside the container. So you have a file connection object factory that receives a request from the EJB for a file connection object. You manufacture the object (outside the container), establish the connection to the file and pass the object to the EJB. The EJB does it's work and signals it has finished (by returning the object). The return of the object involves clean-up such as closing the file connection - performed outside the container. The difference in the two conditions you provide is that you are instantiating the object within the container in the second instance - however, you might not handle the cleanup so well in your implementation as there is no explicit return of the object so to speak. You might want to consider the Pool implementation (source branch called pool) - provided in the JBoss source. It is a good example of a managed object with explicit external manufacture and explicit return. You probably don't want all the pooling aspects, but the factory has the important features for your instance. 5. Re: javax.namin.ReferenceMathew Blackberry Aug 15, 2003 5:15 AM (in response to Mathew Blackberry) Thanks for your help on this one Jon. In the end I used org.apache.commons.pool as I could not find the org.jboss.pool.ObjectPool etc in any of the jar files that come with JBoss 3.0.6. I found it similar to the jboss implementation (at least for the features I needed). I downloaded the source as you reccomended and found it very helpful. I also determined that the pooling features could benefit my application. Appreciate the help Thanks Mat 6. Re: javax.namin.ReferenceJon Barnett Aug 15, 2003 6:03 AM (in response to Mathew Blackberry) Glad it worked for you. It shouldn't matter which example you use - just the JBoss one has the JBoss programming style heritage. The actual classes for the object pool are no longer included in the binary distributions - from 3.0.x on. However, it exists in the source - probably as an example implementation. Since it extends the ServiceMBean and the object pool implementation is self-contained, it makes a nice, complete, non-trivial example. The built JAR is called jboss-pool.jar and would live in the server/instance/lib directory. This information is just in case anyone reading this wonders what we are talking about. But, as I said, as long as you had an example for implementation that is understandable - that was the real objective.
https://developer.jboss.org/thread/74994
CC-MAIN-2019-47
refinedweb
888
57.87
Lesson 1 - Introduction to software architectures Software design Software architectures and dependency injection Introduction to software architectures There's a big fuss over the dependency injection. This design pattern is used by larger frameworks to pass dependencies within the application. Understanding DI is not quite trivial, it took me quite a while to start using it at least intuitively and I understood the pattern completely not before I implemented it myself in my framework. In this software design course, I'll guide you through different ways of passing dependencies within the application, we'll attempt to use various mechanisms to pass dependencies step by step and come to a conclusion that DI is the only right option. We'll, of course, show examples and describe them in detail. This course puts the DI into the context of other bad dependency management practices such as Singletons or ServiceLocators and explains their disadvantages. This is why the course is different from other separate articles here on the network, where different ways of passing dependencies are described separately, but the context disappears, making it very difficult to understand a fundamentally simple principle. At the end of the course, we'll even show a minimalist implementation of our own DI container in just 50 lines. This course assumes you know the fundamentals of the object-oriented programming. Dependency Injection (DI) Although DI is used mainly in web frameworks, it's not limited to web applications. The problem of sharing dependencies between objects exists in every object-oriented application even though it has no more than 10 classes. Without DI, design problems escalate with the increasing number of classes. As an example, I chose a simple web application in PHP, but the DI is, of course, used the same way in Java (beans) or in C# .NET or, eventually, in other languages. You'll certainly find this knowledge useful during job interviews, I can confirm that they ask about it almost everywhere. It's a kind of knowledge advanced programmers have. What is a dependency? Information systems usually consist of a large amount of code. For the comparison, the ICT.social system has several hundreds of thousands of lines of code. In order to understand such code, we divide information systems into objects. ICT.social is composed of hundreds of objects that are gathered in namespaces (packages). According to the SRP (Single Responsibility Principle), each object should be responsible only for one area. The resulting application is then composed of a larger number of smaller objects that communicate with each other. By this division, we'll get readable and reusable components. We can compose other applications from existing universal objects. In monolithic applications written in a single file, it's often a problem to use any part anywhere else and to even read the code at all. SRP is related to other principles: - High Cohesion - The responsibility for a certain area of the application (e.g. user management) should be concentrated to one place, in the minimum number of classes (e.g. UserManagershould be responsible for users). - Low Coupling - The responsibility should be assigned to objects so that they have to communicate with as few other objects as possible. Since objects focus only on a small part of the application, they logically need to use the features of other objects from time to time. E.g. the UserManagerwill not normally communicate with a car manager, it shouldn't have any reason to do so. There are few other principles how to divide responsibility, but they are not the subject of this course. It happens in every object-oriented application that an object needs to communicate with another object. That's how dependencies are created. Of course, we don't want objects to be created over and over again, but to pass just one single instance of each dependency (sometimes referred to as services) to where it's needed. For example, we'd create an instance of the Db database class once and then pass it to all the objects that need it to communicate with the database, such as to the UserManager class. We'd certainly think of many similar services needed by other objects, besides the database, these are e.g. email senders, loggers, the currently logged user, and so on. Dependency is when an object needs to access other objects. Sometimes we talk about composing objects, when one object uses several other objects to function. How it all began In order to understand all the DI benefits, let's take it from the beginning. I've already mentioned that we'll write examples in the PHP language. It has a standard C-like syntax, so it should be readable for the vast majority of programmers. I think it's also great for examples, because you can very easily project the principles into your programming language. Unstructured code In early applications, people used to connect to the database at the beginning of a file, executed an SQL query, rendered some HTML, printed data using the programming language, and then rendered the rest of the page. Such an application looked like this: <?php $database = new PDO('mysql:host=localhost;dbname=testdb;charset=utf8mb4', 'name', 'password'); $cars = $database->query("SELECT * FROM cars")->fetchAll(); ?> <table> <?php foreach ($cars as $car) : ?> <tr> <td><?= htmlspecialchars($car['licenseplate']) ?></td> <td><?= htmlspecialchars($car['color']) ?></td> </tr> <?php endforeach ?> </table> We can see that SQL is mixed with PHP (the PDO database library) and with HTML code. All this in a single file. In this example, the chosen strategy seems to be correct. However, when there are tens of thousands of lines in a single file and we'll try to debug an SQL query mixed with a code for button styling, we'll surely find out that this is not the way we can create a real commercial application. Even ICT.social worked this way in the past, but it's been quite a long time ago. We were in a high school and we had no experience with software architecture yet. Even though we can split the application into multiple files, we'll still never have complete control over it. All the files will have access to all data from the other files. This creates global couplings and we'll start to overwrite our data and identifiers uncontrollably. One can just choose the same function name or variable name as was used in some another file. And it's very easy to do so, trust me Fortunately, in modern languages, a similar non-object-oriented code can't be even written anymore. We wouldn't solve the problem by splitting the code into multiple functions either. Functions, unlike object methods, have no context. Even if we create multiple files, each with several functions, we'll not avoid function name collisions. PHP solved the problem with the huge number of its internal functions by giving them all really ugly and long names that are mostly even not standardized. For example, most PHP array functions start with array_, but we get the length of an array by the count() function. When functions don't belong to objects, we'll get lost in their names after a while and. We'll have hard times sharing data between them, which will result either in using dangerous global variables or in passing too many parameters. Let's show one more bad example from the Wordpress content management system, probably one of the both worst-designed and popular applications in PHP. See what happens if you write an application with completely no architecture: Only a small demonstration of global functions shows that some start with "the", probably colliding with others without "the", some with "wp_", some with "get_". Urgh. Hopefully, I have convinced you that functions need to be bound to objects to avoid name collisions. Multiple objects can then easily have a function with the same name, it can be easily determined which function to call according to the object and our IDE also offers function lists automatically when typing their names. We don't have to remember them anymore and don't need silly cheat sheets like the one in the picture above. It's a shame that some people are not programming this way yet. Let's shake it off and leave the non-object-oriented world. The example listing cars will follow us through the entire course and we'll demonstrate all the possible ways of passing dependencies on it until we get to the Inversion of Control and the Dependency Injection, which is one of the IoC techniques. Object-oriented code The object-oriented programming solves the problems of "having it all in one file" or of "lots of strangely named functions with many parameters" (and brings other benefits like inheritance, etc.). But we already know that we have to thing about how to divide the application into objects and mostly, how they will communicate with their dependencies. I suppose that you know the basics of OOP, if not, please complete the OOP course for your programming language first, see the navigation menu. We'll continue in the next lesson, Monolithic and two-tier architecture, where we'll introduce different object architectures. No one has commented yet - be the first!
https://www.ict.social/software-design/software-architectures-and-dependency-injection/introduction-to-software-architectures/
CC-MAIN-2019-09
refinedweb
1,549
53
I'm literally figuring out StimulusJS right now on an app, and I was hoping you would do a screencast on it soon and save me some time! Awesome timing :) Hit me up with your questions! I'm still learning it too, but the good news is that it's pretty straightforward so there isn't _too_ much to learn. I'm compiling a list of problems I'm running into as I stumble my way through it! I use a lot of CoffeeScript classes to interact with my app and avoid spaghetti code, like you show in.... One issue I ran into already is that it seems the `data-controller="..."` attribute cannot have underscores (_) in the controller name. For example, `app/javascript/packs/transaction_record_controller.js` with `data-controller="transaction_record"` will not work, but once you remove the underscore it works fine. I couldn't find any documentation on this, and I can't think of anything my app is doing that would cause a conflict, so I'm assuming it's the way Stimulus works. I've been slowly converting several of my CoffeeScript classes into Stimulus controllers, and so far I've found it to be a great way to take care of stuff like adding a datepicker to an AJAX form, etc... This section in the docs mentions that you can only use hyphens in the controller name in your html. It maps to either a hyphenated or underscore controller.js filename though.... And yeah I think this is a nice clean way of refactoring those CS classes. You no longer have to deal with managing event listeners and can purely focus on the code that runs for each event. 👍 Chris, one question that would be awesome for you to cover is handling lists of elements. For instance, I'm working on a notifications system right now (based off of some of your episodes!), and I want to have a data-controller to maintain list-level actions (like mark all as read) but also individual elements (such as mark an individual notification as read/unread, click the notification to view associated record, etc). Stimulus has been updating their docs and they now have a section that talks about multiple data-controllers for lists, but I'm not sure what the "standard" is for high-level list actions vs individual item actions (if that makes sense). Great intro! What impresses me is how basecamp remains an incubator for code concepts thereby battle testing it before releasing to the wild. While programming is a joy for many of us, it is also how we make a living. I appreciated some of DHH's comments about stimulus and I truly appreciate their pragmatic views. I also am impressed how they rethink conventions like using DOM for state, managing a monolith, and supporting the progressive web development. This looks to make a nice addition to the toolset. Have a link to those comments by DHH about stimulus? Would love to hear his thinking. Thanks! Thanks for the interesting video. For a project of mine I used AngularJS (the original 1.x branch) in a similar way basecamp now proposes to use StimulusJS. What I liked about AngularJS in this context was the ability to offer form validation and to go as deep as I want it to go on demand. Right now what I would love to see in further stimulus releases would be something to handle form validation out of the box and some other convenience things every app needs. I get the feeling that Basecamp won't build out those kinds of features and will keep this generic, but it is the perfect opportunity to build a library on top of stimulus to make validations easy I would imagine. Brilliant! How would you structure your JS controllers? would every page have its own JS controllers? Thanks! Love your videos! Basically just one controller per feature. You should never do page specific JS or CSS because that means you can't move your features to different pages on your site later on which inevitably always happens. But couldn't we put our JS code in a utilities folder and import from there? But I guess that means we are separating it by feature anyway. :D Yes, you can definitely do that. Maybe I'd do that for different things like "admin" area components and so on. For newbies the application.js has changed so visit the official site:... Looks like if you've loaded version >=1 of Stimulus, there are a few changes in the installation. (See handbook at... Setup in application.js should look like: import { Application } from "stimulus" import { definitionsFromContext } from "stimulus/webpack-helpers" const application = Application.start() const context = require.context("./controllers", true, /\.js$/) application.load(definitionsFromContext(context)) import { Application } from 'stimulus' import { definitionsFromContext } from 'stimulus/webpack-helpers' const application = Application.start() const controllers = require.context('./controllers', true, /\.js$/) application.load(definitionsFromContext(controllers)) import { Applicatoin } from 'stimulus' import { autoload } from 'stumulus/webpack-helpers' const application = Application.start() const contsrollers = require.context("./controllers", true, /\.js$/ ) import { Application } from "stimulus" import { definitionsFromContext } from "stimulus/webpack-helpers" const application = Application.start() const context = require.context("./controllers", true, /\.js$/) application.load(definitionsFromContext(context)) import { Application } from "stimulus" import { autoload } from "stimulus/webpack-helpers" const application = Application.start() const controllers = require.context("./controllers", true, /\.js$/) application.load(autoload(controllers)) Awesome, as usual! Join 22,346+ developers who get early access to new screencasts, articles, guides, updates, and more.
https://gorails.com/forum/0-javascript-and-css-asset-pipeline-stimulus-js-framework-introduction
CC-MAIN-2019-35
refinedweb
917
56.96
public class PrimeNumbers { public static void main (String b[]) { int count = 1; int x = 3; while (count < 1000000) { boolean a=true; int q = (int)Math.sqrt(x); //We can simply stop division once we hit the square root of x. Otherwise, x would have been divisible by a number less that the square root of x for (int y=3; y <= q; y += 2) //The only even prime number is 2. Can increment x by 2, so x can only be odd { if (x%y==0) // Primes can be divided by an even number if and only if they are even, but x cannot be even ==> y can be incremented by 2 too ==> y can only be odd. { a=false; break; } } if (a) { System.out.println(x); ++count; } x += 2; } } } Also, I wanted to make it so you could put in a command line input to specify how many prime numbers you wanted to add, I was thinking like adding. "(String args[])" to the main method and making it something like "while count < args[0]{ as the counter. Any help would be greatly appreciated, thanks
https://www.dreamincode.net/forums/topic/148535-prime-number-program-analysis-question/
CC-MAIN-2018-26
refinedweb
185
74.93
Post your Comment J2ME Java Editor plugin J2ME Java Editor plugin Extends Eclipse Java Editor support ing J2ME Polish directives, variables and styles. Edit java files using Editor Editor sir i want to develop a menu programe in java so sir i want to a file menu which have open dialog and save dialog and in edit menu have find option Eclipse Wiki Editor Plugin Eclipse Wiki Editor Plugin This Eclipse plugin is a simple personal or project Wiki... workspace * Link to Java source code in the current project path simply Eclipse Plugin- Editor Eclipse Plugin- Editor  ...; Eclipse Wiki Editor This Eclipse plugin is a simple personal or project Wiki... Resource Bundle Editor Eclipse plugin for editing Easy Eclipse Plugin Easy Eclipse Plugin An EasyEclipse Plugin is a plugin that we have prepackaged for you to add to an EasyEclipse Distribution. An EasyEclipse Plugin may work TinyOS Plugin for Eclipse : the editor plugin the environment plugin Know click for more... TinyOS Plugin for Eclipse  ... be a cumbersome task, especially if you're coming from the java-world and thus text editor in java text editor in java hi friends, i want to do a mini project in programmers editor with syntax based coloring in java. i have no idea about this project and i dont know the java lang also.from now only i have to learn. please ANS.1 Editor ANS.1 Editor ASNEditor provides an Eclipse platform (Version 3.0.1) editor plugin for the ASN.1 (Abstract Syntax Notation One) formal language. It presents Text Editor with Syntax Highlighting Text Editor with Syntax Highlighting How to write a java program for text editor with syntax highlighting jsp:plugin in jsp jsp:plugin in jsp What is the jsp:plugin action ? This action lets you insert the browser-specific OBJECT or EMBED element needed to specify that the browser run an applet using the Java plugin Toby's PL/SQL Editor Toby's PL/SQL Editor What is the PL/SQL Editor? The PL/SQL editor is a plugin... easily develop and test PL/SQL code. Download You can download the plugin at sourceforge. The change log for the editor can be seen here. If you want, you jsp plugin implementation - Applet jsp plugin implementation Hi, I have implemented the jsp plugin..., Java BooksStore's Java Bookstore Unable to start plugin.   Boneclipse-logging Boneclipse-logging This was my very first Eclipse plugin. It's a useful little plugin that adds submenu to the popup context menu of the Java Editor, "Add Logging". Under this, you can chose to automatically Sysdeo Tomcat Launcher Plugin Sysdeo Tomcat Launcher Plugin Plugin features Starting and stopping...) Adding Java Projects to Tomcat classpath Setting Tomcat JVM Dojo Editor Example Dojo Editor Example In this example, you will learn about the dojo editor. Editor: This is a Dijit's Rich Text editor, Dijit.Editor, is a text box that designed to look Resource Bundle Editor Resource Bundle Editor ResourceBundle Editor is an Eclipse plugin for editing... with. Supported Eclipse versions are 3.x and 2.x (up to 0.6.0). The plugin user SortIt Editor SortIt Editor This Eclipse plugin adds sorting to the Edit menu. You can... the plugin and unzip it into your plugins directory. The zip file contains jsp plugin implementation - Applet jsp plugin implementation Hi, I have implemented the code in my program, but it is not executing. Unable to load the plugin. Please download the plugin to continue. I have created the classes Editor help required. - Java Server Faces Questions Editor help required. I am having problem with editor used to design JSP page in JSF applications. I want to open JSP page with Web Page Editor... are they? Please Help me.... Thanks In advance How to create a Java Runtime Editor - Swing AWT How to create a Java Runtime Editor Hi, I am working on a requirement in which I need to integrate a java runtime editor in swing application... want to generate an editor using java swing, try the following code: import Afae - An All-purpose Editor for Eclipse Afae - An All-purpose Editor for Eclipse  ... All-purpose Editor. It is a group of plugins for Eclipse that do the following... plugin for posting to your blog * Keystroke comments * An image viewer JSP Plugin JSP Plugin Syntax: <jsp: plugin type = "bean |applet" code... of the java class file which will be executed by the plug-in. The use .class Eclipse Plugin-Rich Client Applications Eclipse Plugin-Rich Client Applications.... It is available as an Eclipse plugin and makes use of the JFace/SWT APIs.../editor, with special emphasis on the OpenGIS standards for internet GIS Sub-Editor for Java Magazine Sub-Editor for Java Magazine  ... Description: Sub Editor for a Java Magazine,: Sub-Editor who will work for our... Editor Java Magazine Please write back with detailed resume NRG JavaScript Editor NRG JavaScript Editor JSEclipse is one of the best and most popular Javascript plugin for the Eclipse environment. Its benefits are visible from the simplest tasks Numerical Gecko plugin Numerical Gecko plugin Numerical Gecko is an Eclipse Plugin based on the popular Java tool Numerical Chameleon. Its purpose is to help Java... with an efficient, easy to use conversion utility. The plugin supports bi maven fortify plugin a fortify report of a java source code using maven tool? Regards, Swagatika Post your Comment
http://www.roseindia.net/discussion/20437-J2ME-Java-Editor-plugin.html
CC-MAIN-2015-32
refinedweb
898
55.95
in the puppet private (you can use a random generated password, no strict guideline for it) and public repos. - Example 1 - (plus actual data in the private repo, see 1edf14c0) - - Example 2 - eventstreams-internal (T269160) - (plus the actual data in the private repo, see 6689496aand 376c92ad) - - Add a Kubernetes namespace: - Example 1 - (ignore the change to calico/default-kubernetes-policy) - Example 2 - eventstreams-internal (T269160) - At this point, you can safely merge the change (after somebody from Service Ops validates it of course). Please do it though when you have time to run the following command, to avoid impacting other people rolling out changes later on. - The first thing to do is to work in staging, updating the admin config. - On deploy1002: sudo -i; cd /srv/deployment-charts/helmfile.d/admin/staging/; kube_env admin staging; ./cluster-helmfile.sh -i apply - The command above should show you a change in namespaces/quotas/etc.. related to your new service. If this is not the case (for example, you also see other changes) ping somebody from the Service Ops team! There might be some work waiting to be applied. - Then you can proceed to deploy the new service to staging for real. Don't worry for TLS (if needed) since in staging it will be added a default config for your service auto-magically. Different thing is Production, but there is a step later on about it :D - On deploy1002: cd /srv/deployment-charts/helmfile.d/services/YOUR-SERVICE-NAME-HERE; helmfile -e staging -i apply - The magic command above will show a diff related to the new service, make sure that everything looks fine and then hit Yes to proceed. - You should now be able to test your new service in staging! You can use the handy endpoint http(s)://staging.svc.eqiad.wmnet:$YOUR-SERVICE-PORTto quickly test if everything works as expected. - Now you can move to Production! - Create certificates for the new service, if it has an HTTPS endpoint (remember that this step for staging is automatically handled for you, but for production it is not). - If the new service requires specific secrets, commit them to /srv/private/hieradata/role/common/deployment_server.yaml -/YOUR-SERVICE-NAME-HERE; helmfile -e codfw -i apply - On deploy1002: cd /srv/deployment-charts/helmfile.d/services/YOUR-SERVICE-NAME-HERE;
https://wikitech-static.wikimedia.org/w/index.php?title=Kubernetes&direction=prev&oldid=578504
CC-MAIN-2022-05
refinedweb
386
55.34
Making your first figure¶ Welcome to PyGMT! Here we’ll cover some of basic concepts, like creating simple figures and naming conventions. All modules and figure generation is accessible from the pygmt top level package: import pygmt Creating figures¶ All figure generation in PyGMT is handled by the pygmt.Figure class. Start a new figure by creating an instance of this class: fig = pygmt.Figure() Add elements to the figure using its methods. For example, let’s start a map with an automatic frame and ticks around a given longitude and latitude bound, set the projection to Mercator ( M), and the figure width to 8 inches: fig.basemap(region=[-90, -70, 0, 20], projection="M8i", frame=True) Now we can add coastlines using pygmt.Figure.coast to this map using the default resolution, line width, and color: To see the figure, call pygmt.Figure.show: Out: <IPython.core.display.Image object> You can also set the map region, projection, and frame type directly in other methods without calling gmt.Figure.basemap: fig = pygmt.Figure() fig.coast(shorelines=True, region=[-90, -70, 0, 20], projection="M8i", frame=True) fig.show() Out: <IPython.core.display.Image object> Saving figures¶ Use the method pygmt.Figure.savefig to save your figure to a file. The figure format is inferred from the extension. Note for experienced GMT users¶ You’ll probably have noticed several things that are different from classic command-line GMT. Many of these changes reflect the new GMT modern execution mode that will be part of the future 6.0 release. A few are PyGMT exclusive (like the savefig method). The name of method is coastinstead of pscoast. As a general rule, all ps*modules had their psprefix removed. The exceptions are: psxywhich is now plot, psxyzwhich is now plot3d, and psscalewhich is now colorbar. The arguments don’t use the GMT 1-letter syntax (R, J, B, etc). We use longer aliases for these arguments and have some Python exclusive names. The mapping between the GMT arguments and their Python counterparts should be straight forward. Arguments like regioncan take lists as well as strings like 1/2/3/4. If a GMT argument has no options (like -Binstead of -Baf), use a Truein Python. An empty string would also be acceptable. For repeated arguments, such as -B+Loleron -Bxaf -By+lm, provide a list: frame=["+Loleron", "xaf", "y+lm"]. There is no output redirecting to a PostScript file. The figure is generated in the background and will only be shown or saved when you ask for it. Total running time of the script: ( 0 minutes 1.880 seconds) Gallery generated by Sphinx-Gallery
https://www.pygmt.org/latest/tutorials/first-figure.html
CC-MAIN-2020-34
refinedweb
442
67.76
Do you want to learn how to build a Laravel React CRUD (create, read, update, delete) application using Laravel and React JS? If so, I am here to help you out. This tutorial is clear and easy to read and follow along. We will build a Task Manager App, where users will be able to manage their daily tasks. So let's begin! prerequisite - Basic understanding of Laravel - Be able to create a fresh new Laravel project - Basic understanding of JavaScript/EcmaScript and React JS (or learn from here) assets every time you make a change (compulsory) npm run watch Connect Laravel app to mySql database First go to your database client, I am using sequel pro (mac). Create a new database called crud. Then make changes in .env // .env DB_DATABASE=crud DB_USERNAME=root DB_PASSWORD= Show React component in your laravel project In your project folder directory, go to resources/assets/js/components/Example.js There you see Example component, which comes by default with laravel. It is using bootstrap4 styling. All this html looking code is in fact JSX (mix of javascript and html). It is almost like HTML. One difference you will notice that class is being replaces with className. Last two lines shows you that if there is an html element with an id of example (could be anywhere in our laravel app views or blade templates), then that is where this react component will render. If you want to get a deeper understanding of Modern JavaScript/EcmaScript and React JS, make sure to checkout my course on Udemy. This course covers all the core concepts of Modern JavaScript and React JS so that you can get started comfortably in very short time. By the end you will learn to build a Twitter like Real time web app too. Use this link to get this course for only $9.99. Ok. enough self promotion :) Lets see how we can display this react component in our laravel app. But before that, lets generate login system so that it is done and ready for us to use. Since we are using npm run watch, lets leave this terminal window and instead open a new terminal window, get inside our project and run the following commands. // generate authentication php artisan make:auth // migrate php artisan migrate Now that we have auth system ready to use, lets begin by registering a new user. once done login to the application so that user is landed in the home page. Go to resources/views/home.blade.php Remove the existing markup and instead show the react component. For that we need to create a div with an id of example. Do the following. @extends('layouts.app') @section('content') <div class="container"> <div id="example"></div> </div> @endsection Beautiful. Now go to the browser and refresh the page. There you see Example component. That is react component we are rendering! Modifying React components file/folder structure We are off to a great start but I am sure you don't want to continue using Example component, right? Lets make some changes. Let's rename this component to App.js and also create a new file called index.js inside resources/assets/js which will be responsible for rendering and also holds routing system (we will wok on this later). Rename Example.jsto App.js In App.jschange the component name from Example to App - Then in index.js, use the div with id of rootinstead of example resources/assets/js/components/App.js import React, { Component } from 'react'; import ReactDOM from 'react-dom'; export default class App extends Component { render() { return ( <div className="container"> <div className="row justify-content-center"> <div className="col-md-8"> <div className="card"> <div className="card-header">App Component</div> <div className="card-body">I'm an App component!</div> </div> </div> </div> </div> ); } } resources/assets/js/index.js import React, { Component } from 'react'; import ReactDOM from 'react-dom'; // import App component import App from './components/App' // change the getElementId from example to app // render App component instead of Example if (document.getElementById('root')) { ReactDOM.render(<App />, document.getElementById('root')); } resources/assets/js/app.js (this is not the same App.js inside components folder) This app.js comes with laravel by default, which is inside assets/js folder. Here it is requiring Example component like so: require('./components/Example'); Change this to use index.js like so: require('./index'); resources/views/home.blade.php - Change the div id from exampleto root @extends('layouts.app') @section('content') <div class="container"> <div id="root"></div> </div> @endsection This is perfect setup. Now we can start building our awesome CRUD app. You can go to the browser and refresh to make sure everything is working. Creating a form in React component Now it is time for us to build a form in our React componnet App.js. Here is the code: export default class App extends Component { render() { return ( <div className="container"> <div className="row justify-content-center"> <div className="col-md-8"> <div className="card"> <div className="card-header">Create Task</div> <div className="card-body"> <form> <div className="form-group"> <textarea className="form-control" rows="5" placeholder="Create a new task" required /> </div> <button type="submit" className="btn btn-primary"> Create Task </button> </form> </div> </div> </div> </div> </div> ); } } Here, all we did is create a form with a textarea and a button. But this form does not do anything yet. Let's put some life into it. Make sure to have npm run watchrunning in your terminal. Otherwise you will not see the changes in the browser. You could also run npm run devto compile the JavasScript but that would not be practical because we want to see the changes immediately. Handling onChange event in React The first thing we need to do here is handle onChange event. When a user starts typing in textarea, We want to capture the input and store in react component's state. Then we need to take that input from the state and apply into form's textarea as it's value. This is also known as controlled component, we control the textarea's value based on the component state. In App component, create a constructor and create a local state object with two properties name and tasks. tasks value is an empty array for the moment. name property will hold the value of user input. As the input changes name property's value will change. Later we will make a post request to laravel backend with the value that is available in this name property. We have also created handleChange method that will get the user input using onChange event and set the state using setState method. Also we need to bind this method to the constructor using bind() method. If all this does not make sense then you need to refresh your knowledge of Modern JavaScript/EcmaScript. React is all about using modern JavaScript. I recommend you to check out my course that covers everything you need to know about modern JavaScript so that you are up and running with React in very short time. App.js export default class App extends Component { constructor(props) { super(props); this.state = { name: '', tasks: [] }; // bind this.handleChange = this.handleChange.bind(this); } // handle change handleChange(e) { this.setState({ name: e.target.value }); console.log('onChange', this.state.name); } render() { Inside App component's render method. I have added onChange and value to textarea. Also add maxlength to 255, this is an easy way to do a bit of validation directly in HTML. <textarea onChange={this.handleChange} value={this.state.name} className="form-control" rows="5" maxLength="255" placeholder="Create a new task" required /> Now if you type something in the textarea, you can see the output in the console. It changes as soon as user types in. I also recommend you to install react devtools chrome extension. Where you can see the state updating as soon as there is a change. Now we can make a post request to laravel with the value available in the state's name property right? But we have not done anything in the backend yet. So let's move on and work in the backend. Creating Laravel API endpoints / Models / Controllers / Routing - Create a Task model with migration php artisan make:model Task -m - Modify the migration - Add user_id and name(for tasks) public function up() { Schema::create('tasks', function (Blueprint $table) { $table->increments('id'); $table->integer('user_id')->unsigned()->index(); $table->string('name'); $table->timestamps(); }); } - Make the name field fillable in Task model // app/Task,php class Task extends Model { protected $fillable = ['name']; } - Define a relationship between User and Task model // Task.php class Task extends Model { protected $fillable = ['name']; public function user() { return $this->belongsTo(User::class); } } // User.php public function tasks() { return $this->hasMany(Task::class); } - Run the migration php artisan migrate - Create necessary routes // using resource route Route::resource('tasks', 'TaskController'); - Create TaskController php artisan make:controller TaskController --resource Creating Controller to return the json response in Laravel Lets begin by working on two methods index and store. This way we can send post request from React component to create a new post. We can also send the json response of all tasks to React component to display all tasks in the browser. For this, we have created index method. See the code below: app/Http/Controllers/TaskController.php <?php namespace App\Http\Controllers; use App\Task; use Illuminate\Http\Request; class TaskController extends Controller { // apply auth middleware so only authenticated users have access public function __construct() { $this->middleware('auth'); } public function index(Request $request, Task $task) { // get all the tasks based on current user id $allTasks = $task->whereIn('user_id', $request->user())->with('user'); $tasks = $allTasks->orderBy('created_at', 'desc')->take(10)->get(); // return json response return response()->json([ 'tasks' => $tasks, ]); } public function create() { // } public function store(Request $request) { // validate $this->validate($request, [ 'name' => 'required|max:255', ]); // create a new task based on user tasks relationship $task = $request->user()->tasks()->create([ 'name' => $request->name, ]); // return task with user object return response()->json($task->with('user')->find($task->id)); } public function show($id) { // } public function edit($id) { // } public function update(Request $request, $id) { // } public function destroy($id) { // } } Back to React component, App.js. Lets create onSubmit method. This method will submit the new task to '/tasks' route, whhich is an API endpoint to make post request. Post request from React Component to Laravel backend App.js // bind handleSubmit method this.handleSubmit = this.handleSubmit.bind(this); // create handleSubmit method right after handleChange method handleSubmit(e) { // stop browser's default behaviour of reloading on form submit e.preventDefault(); axios .post('/tasks', { name: this.state.name }) .then(response => { console.log('from handle submit', response); }); } Add onSubmit method to form <form onSubmit={this.handleSubmit}> Great! Now if you try creating a new task. It will save in the database and return a response, which you can see in the console. The returned response has data property which contains the task we just created. Not only that, it also contains user object because we returned the response with user in our TaskController store method. Awesome! Store the response data(tasks) in the react component state Once we successfully create a new task, we get the response. That response we can store in an array called tasks which is in react state (we created earlier tasks: [] ) Make the following changes to handleSubmit() method: handleSubmit(e) { // stop browser's default behaviour of reloading on form submit e.preventDefault(); axios .post('/tasks', { name: this.state.name }) .then(response => { console.log('from handle submit', response); // set state this.setState({ tasks: [response.data, ...this.state.tasks] }); // then clear the value of textarea this.setState({ name: '' }); }); } Here, we are using ... three dots (spread operator) to spread out the tasks in an array along with the existing tasks that are or will be available in the state and finally merge. Again, If you need to brush up your JavaScript skill, check out this course which I have recently published. Render the list of tasks in react component Now that we will have tasks in the local state as a response each time we create a new one, lets render them. (soon we will use React's lifecycle method to fetch all the tasks and display) - Create a r enderTasks()method right below handleSubmit()method in App.js. This method uses map function, which takes function as argument which returns each task available in the state. The arrow function is used here instead of regular function. Learn the core Modern JavaScript/EcmaScript here, if necessary :) // render tasks renderTasks() { return this.state.tasks.map(task => ( <div key={task.id} <div className="media-body"> <p>{task.name}</p> </div> </div> )); } - Bind renderTasks()method to the constructor this.renderTasks = this.renderTasks.bind(this); - Execute renderTasks() method right below the closing </form>tag inside render method: <hr /> {this.renderTasks()} WOW! Go to the browser and give it a try! Each time you create a new task, it is displayed right below the create form. Isn't this exciting? But there is bit more to do. As soon as you refresh the page, Its gone. That's because, It is based on each submit response. Lets fetch all the posts and render using React's lifecycle method. React lifecycle method - componentWillMount() There are few lifecycle methods made availabel for use to use by React JS. The most common ones are componentWillMount(), componentDidMount() etc. Feel free to read more about React's lifecycle method in the official documentaion. Just before the React component will mount or be ready for the user to see, we want to make a get request to get all the tasks from the backend. For that we can use componentWillMount() method. Lets do this! Create a method that will fetch all the tasks from backend (we dont need to bind this method to the constructor because it will not be used inside r ender() method. render() is also one of the lifecycle method). Below we are creating getTasks() method to get all tasks from backend. Then execute this method inside componentWillMount() lifecycle method. Write this bit of code right below the renderTasks() method in App.js // get all tasks from backend getTasks() { axios.get('/tasks').then(( response // console.log(response.data.tasks) ) => this.setState({ tasks: [...response.data.tasks] }) ); } // lifecycle method componentWillMount() { this.getTasks(); } Now go to the browser and see the awesome app you have build so far. All the tasks you created so far are available right below the create form. As you create more tasks, They are displayed immediately. Laravel React - Send delete request and update local state - It is time for us to implement delete feature. Firstly, we need to delete task from the local state and update the component without page reload. - Next, we need to send delete request to laravel backend with the current task id, so that it can be deleted. Create handleDelete() method right below componentWillMount() method in App.js // handle delete handleDelete(id) { // remove from local state const isNotId = task => task.id !== id; const updatedTasks = this.state.tasks.filter(isNotId); this.setState({ tasks: updatedTasks }); // make delete request to the backend axios.delete(`/tasks/${id}`); } Then bind this method to the constructor this.handleDelete = this.handleDelete.bind(this); Add the delete button to each task. We are returning each task using the renderTasks() method. Make the following changes in renderTasks(). Here we are simply using onClick event and using arrow function inside it to execute the handleDelete() method, passing the task id to delete. // render tasks renderTasks() { return this.state.tasks.map(task => ( <div key={task.id} <div className="media-body"> <p> {task.name}{' '} <button onClick={() => this.handleDelete(task.id)} Delete </button> </p> </div> </div> )); } Now for this to work, make sure to write some code in the destroy() method in laravel :) TaskController.php public function destroy($id) { Task::findOrFail($id)->delete(); } That's it! Go ahead, give it a try. This app is absolutely rocking right now! Now there is one more step to cover that is update, right? But update part is a bit tricky. So I will try to add update part some other time ok... Cheers! Please leave your comments below for any help or suggestion.
https://kaloraat.com/articles/laravel-react-crud-tutorial
CC-MAIN-2018-34
refinedweb
2,715
57.47
From: Robert Ramey (ramey_at_[hidden]) Date: 2004-06-27 21:05:45. c) the Main tree goes on as usual d) pending issues in the release branch are addressed by nagging the person responsible. e) the new version is releases with a number like 1.31.1 or maybe even 28 July 2004. f) any changes between branch and release should be just bug fixes which can then be merged back into the development tree.. I realize that this is not the most orthodox way of doing things. But I think it's really hard to get such a large number of people working independently on exactly the same schedule. >How come we have so many >accepted libraries that have not even been put in the CVS? Almost all libraries are accepted subject to some changes being made. Many such changes seem simple but end up rippling through the whole library. This takes time. Once a library is accepted, then one can get serious about making it pass with a larger number of compilers. This also takes time - a lot of time These changes end up having a ripple effect and can result it changes like moving things in namespaces and subdirectories. While this is going on, the system is broken and really not suitable to be subject to the daily testing routine. So it seems attractive to deal with all this stuff before one does the initial check in to CVS. This way one can start with a relatively clean start. Robert Ramey Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/06/67013.php
CC-MAIN-2022-40
refinedweb
276
75.1
Keys To Web 3.0 Design and Development When Using ASP.NET You can skip the following boring story as it's only a prelude to the meat of this post. As I've been sitting at my job lately trying to pull off my web development ninja skillz I feel like my hands tied behind my back because I'm there temporarily as a consultant to add features, not to refactor. The current task at hand involves adding a couple additional properties to key user component in a rich web application. This requires a couple extra database columns and a bit of HTML interaction to collect the new settings. All in all, about 15 minutes, right? Slap in the columns into the database, update the SQL SELECT query, throw on a couple ASP.NET controls, add some data binding, and you're done, right? Surely not more than an hour, right? Try three hours, just to add the columns to the database! The HTML is driven by a data "business object" that isn't a business object at all, just a data layer that has method stubs for invoking stored procedures and returns only DataTables. There are four types of "objects" based on the table being modified, and each type has its own stored procedure that ultimately proxies out to the base mix of hard-wired bindings to the lowest base control (properties) for some of the user's settings, and for most of the rest way of passing the data--I use a series of FindControl(..) invocation chains to get to the buried control and fetch the setting I wanted to add to the database and/or translate back out to the view. (I would have done better than to add more kludge, but I couldn't without being enticed to refactor, which I couldn't do, it's a temporary contract and the boss insisted that I not.) To top it all off, just the simple CRUD stored procedures alone are slower than an eye blink, and seemingly showstopping in code. It takes about five seconds to handle each postback on this page, and I'm running locally (with a networked SQL Server instance). The guys who architected all this are long gone. This wasn't the first time I've been baffled by the output of an architect who tries too hard to do the architectural deed while forgetting that his job is not only to be declarative on all layers but also to balance it with performance and making the developers' lives less complicated. In order for the team to be agile, the code must be easily adaptable. Plus the machine I was given is, just like everyone else's, a cheap Dell with 2GB RAM and a 17" LCD monitor. (At my last job, which I quit, I had a 30-inch monitor and 4GB RAM which I replaced without permission and on my own whim with 8GB.) I frequently get OutOfMemoryExceptions from Visual Studio when trying to simply compile the code. There are a number of reasons I can pinpoint to describe exactly why this web application has been so horrible to work with. Among them, The architecture violates the KISS principle. The extremities of the data layer prove to be confounding, and buring controls inside controls (compositing) and then forking instances of them are a severe abuse of ASP.NET "flexibility". OOP principles were completely ignored. Not a single data layer inherits from another. There is no business object among the "Business" objects' namespace, only data invocation stubs that wrap stored procedure execution with a transactional context, and DataTables for output. No POCO objects to represent any of the data or to reuse inherited code. Tables, not stored procedures, should be used in basic CRUD operations. One should use stored procedures only in complex operations where multiple two-way queries must be accomplished to get a job done. Good for operations, bad for basic data I/O and model management. Way too much emphasis on relying on Web Forms "featureset" and lifcycle (event raising, viewstate hacking, control compositing, etc.) to accomplish functionality, and way too little understanding and utilization of the basic birds and butterflies (HTML and script). Way too little attention to developer productivity by failure to move the development database to the local switch, have adequate RAM, and provide adequate screen real estate to manage hundreds of database objects and hundreds of thousands of lines of code. Admittance of the development manager of the sadly ignorant and costly attitude that "managers don't care about cleaning things up and refactoring, they just want to get things done and be done with it"--I say "ignorant and costly" because my billable hours were more than quadrupled versus having clean, editable code to begin see where my syntax or other compiler-detected errors are in my code additions (and I haven't been sleeping well lately so I'm hitting the Rebuild button and monitoring the Errors window an awful lot). Even as I study (ever so slowly) for MCPD certification for my own reasons while I'm at home (spare me the biased anti-Microsoft flames on that, I don't care) I'm finding that Microsoft end developers (Morts) and Microsofties (Redmondites) alike are struggling with the bulk of their own technology and are heaping up upon themselves the knowledge of their own infrastructure before fully appreciating the beauty and the simplicity of the pure basics. Fortunately, Microsoft has had enough, and they've been long and hard at the drawing board to reinvent ASP.NET with ASP.NET MVC. But my interests are not entirely, or not necessarily, MVC-related. All I really want is for this big fat pillow to be taken off of my face, and all these multiple layers of coats and sweatshirts and mittens and ski pants and snow boots to be taken off me, so I can stomp around wearing just enough of what I need to be decent. I need to breathe, I need to move around, and I need to be able to do some ninja kung fu. These experiences I've had with ASP.NET solutions often make me sit around brainstorming how I'd build the same solutions differently. It's always easy to be everyone's skeptic, and it requires humility to acknowledge that just because you didn't write something or it isn't in your style or flavor doesn't mean it's bad seem to grow in a box that is controlled by Rendmondites, with few artistic deviators rocking the boat. It's with the server-driven view management rather than smart clients in script and markup. It's with nearly all development frameworks that cater towards the ASP.NET crowd being built for IIS (the server) and not for the browser (the client). I intend to do my part, although intentions are easy, actions can be hard. But I've helped design an elaborate client-side MVC framework before, with great pride, I'm thinking about doing it again and implementing myself (I didn't have the luxury of exclusivity of implementation last time) and open sourcing it for the ASP.NET crowd. I'm also thinking about building a certain kind of ASP.NET solution I've frequently needed to work with (CRM? CMS? Social? something else? *grin* I won't say just yet), that takes advantage of certain principles. What principles? I need to establish these before I even begin. These have already worked their way into my head and my attitude and are already an influence in every choice I make in web architecture, and I think they're worth sharing. 1. Think dynamic HTML, not dynamically generated HTML. Think of HTML like food; do you want your fajitas sizzling when when it arrives and you have to use a fork and knife while you enjoy it fresh on your plate, or do you prefer your food preprocessed and shoved into your mouth like a dripping wet ball of finger-food sludge? As much as I love C#, and acknowledge the values of Java, PHP, Ruby on Rails, et al, the proven king and queen of the web right now, for most of the web's past, and for the indefinite future are the HTML DOM and Javascript. This has never been truer than now with jQuery, MooTools, and other (I'd rather not list them all) significant scripting libraries that have flooded the web development have control of it in your own predictable environment, in practice there are just too many stop-edit-retry cycles going on in server-oriented view management. And here's why that is. The big reason to move view to the client is because developers are just writing WAY too much view, business, and data mangling logic in the same scope and context. Client-driven view management nearly forces the developer to isolate view logic from data. In ASP.NET Web Forms, your 3 tiers are database, data+view mangling on the server, and finally whatever poor and unlucky little animal (browser) has to suffer with the resulting HTML. ASP.NET MVC changes that to essentially five tiers: the database, the models, the controller, the server-side view template,and finally whatever poor and unlucky little animal has to suffer with the resulting HTML. (Okay, Microsoft might be changing that with adopting jQuery and promising a client solution, we'll see.) Most importantly, client-driven views make for a much richer, more interactive UIX (User Interface/eXperience); you can, for example reveal/hide or enable/disable a set of sub-questions depending on if the user checks a checkbox, with instant gratification. The ASP.NET Web Forms model would have it automatically perform a form post to refresh the page with the area enabled/disabled/revealed/hidden depending on the checked state. The difference is profound--a millisecond or two versus an entire second or two. 2. Abandon ASP.NET Web Forms. RoR implements a good model, try gleaning from that. ASP.NET MVC might be the way of the future. But frankly, most of the insanely popular web solutions on the Internet are PHP-driven these days, and I'm betting that's because PHP is on a similar coding model as ASP classic. No MVC stubs. No code-behinds. All ability to drag-and-drop functionality onto a page and watch it go, and premier vender (Microsoft / Visual Studio / MSDN) support. But it's difficult to optimize, difficult to templatize, difficult to abstract away from business logic layers (if at least difficult in that it requires intentional discipline), and puts way too much emphasis on the lifecycle of the page hit and postback. Look around at the ASP.NET web forms solutions out there. Web Forms is crusty like Visual Basic is crusty. It was created for, and is mostly used for, corporate grunts who use B2B (business-to-business) or internal apps. The rest of the web sites who use neat technology, but it is absolutely NOT a one-size-fits-all platform any more than my winter coat from Minnesota is. So congratulations to Microsoft for picking up the ball and working on ASP.NET MVC. 3. Use callbacks, not postbacks. Sometimes a single little control, like a textbox that behaves like an auto-suggest combobox, just needs a dedicated URL to perform an AJAX query against. But also, in ASP.NET space, I envision the return of multiple <form>'s, with DHTML-based page MVC controllers powering them all, driving them through AJAX/XmlHttpRequest. Why? Clients can be smart now. They should do the view processing, not the server. The browser standard has finally arrived to such a place that most people have browsers capable of true DOM/DHTML and Javascript with JSON and XmlHttpRequest support. Clearing and redrawing the screen is as bad as 1980s BBS ANSI screen redraws. It's obsolete. We don't need to write apps that way. Postbacks are cheap; don't be cheap. Be agile; use patterns, practices, and techniques that save development time and energy while avoiding the loss of a fluid user experience. <form action="someplace" /> should *always* have an onsubmit handler that returns false but runs an AJAX-driven post. The page should *optionally* redirect, but more likely only the area of the form or a region of the page (a containing DIV perhaps) should be replaced with the results of the post. Retain your header and sidebar in the user experience, and don't even let the content area go isolate page posts from componentized view behavior. Further, <form runat="server" /> should be considered deprecated and obsolete. Theoretically, if you *must* have ViewState information you can drive it all with Javascript and client-side controllers assigned to each form. ASP.NET MVC can manage callbacks uniformly by defining a REST URL suffix, prefix, or querystring, and then assigning a JSON handler view to that URL, for example ~/employee/profile/jsmith?view=json might return the Javascript object that represents employee Joe Smith's profile. You can then use have a login page. If you don't want to put ugly Username/Password fields on the header or sidebar, use AJAX. Why? Because if a user visits your site and sees something interesting and clicks on a link, but membership is required, the entire user experience is inturrupted by the disruption of a login screen. Instead, fade out to 60%, show work that way; the flow support ask the user to get the latest version. Why? Supporting multiple different browsers typically means writing more than one version of a view. This means developer productivity is lost. That means that features get stripped out due to time constraints. That means that your web site is crappier. That means users will be upset because they're not getting as much of what they want. That means less users will come. And that means less money. So take on the "Write once, run anywhere" mantra (which was once Java's slogan back in the mid-90s) by writing W3C-compliant code, and leave behind only those users who refuse to update their favorite browsers, and you'll get a lot more done while reaching a broader market, if not now then very soon, such as perhaps 1/2 yr after IE 8 is released. Use Javascript libraries like jQuery to handle most of the browser differences that are left over, while at the same time being empowered to add a lot of UI functionality without postbacks. (Did I mention that postbacks are evil?) 6. When hiring, favor HTML+CSS+Javascript gurus who have talent and an eye for good UIX (User Interface/eXperience) over ASP.NET+database gurus. Yeah! I just said that! Why? Because the web runs on the web! Surprisingly, most employers don't have any idea and have this all upside down. They favor database gurus as gods and look down upon UIX developers as children. But the fact is I've seen more ASP.NET+SQL guys who halfway know that stuff and know little of HTML+Javascript than I have seen AJAX pros, and honestly pretty much every AJAX pro is bright enough and smart enough to get down and dirty with BLL and SQL when the time comes. Personally, I can see why HTML+CSS+Javascript roles are paid less (sometimes a lot less) than the server-oriented developers--any script kiddie can learn HTML!--but when it comes to professional web develop they are ignored WAY too much because of only that. The web's top sites require extremely brilliant front-end expertise, including Facebook, Hotmail, Gmail, Flickr, YouTube, MSNBC--even Amazon.com which most prominently features server-generated content but yet also full have been taken for granted by us as technologists to such an extent that we've forgotten how important it is to value it in our hiring processes. 7. ADO.NET direct SQL code or ORM. Pick one. Just don't use data layers. Learn OOP fundamentals. The ActiveRecord pattern is nice. Alternatively, if it's a really lightweight web solution, just go slow to say "enterprise" because, frankly, too many people assume the word "enterprise" for their solutions when they are anything but. Even web sites running at tens of thousands of hits a day and generating hundreds of thousands of dollars of revenue every month don't necessarily mean "enterprise". The term "enterprise" is more of a people management inference than a stability or quality effort. It's about getting many people on your team using the same patterns and not having loose and abrupt access to thrash the database. For that matter, the corporate slacks-and-tie crowd of ASP.NET "Morts" often can relate to "enterprise", and not even realize it. But for a very small team (10 or less) and especially for a micro ISV (developers numbering 5 or less) with a casual and agile attitude, take the word "enterprise" with a grain of salt. You don't need a gajillion layers of red tape. For that matter, though, smaller teams are usually small because of tighter budgets, and that usually means tighter deadlines, and that means developer productivity must reign right there alongside stability and performance. So find an ORM solution that emphasizes productivity (minimal maintenance and easily adaptable) and don't you dare trade routine refactoring for task-oriented focus as you'll end up just wasting everyone's time in the long run. Always include refactoring to simplicity in your maintenance schedule. Why? Why go raw with ADO.NET direct SQL or choose an ORM? Because some people take the data layer WAY too far. Focus on what matters; take the effort to avoid the effort of fussing with the data tier. Data management is less important than most teams seem to think. The developer's focus should be on the UIX (User Interface/eXperience) and the application functionality, not how to store the data. There are three areas where the typical emphasis on data management is agreeably important: stability, performance (both of which are why we choose adapt existing code easily. Again, this is why a proper understanding of OOP, how to apply it, when to use it, etc, is emphasized all the time, by yours truly. Learn the value of abstraction and inheritence and of encapsulating interfaces (resulting in polymorphism). Your business objects should not be much more than POCO objects with application-realized properties. Adding a new simple data-persisted object, or modifying an existing one with, say, a new column, should not take more than a minute of one's time. Spend the rest of that time instead on how best to impress the user with a snappy, responsive user interface. 8. Callback-driven content should derive equally easily from your server, your partner's site, or some strange web service all the way in la-la land. We're aspiring for Web 3.0 now, but what happened to Web 2.0? We're building on top of it! Web 2.0 brought us mashups, single sign-ons, and cross-site social do more. It still has to derive content from you, but in a callback-driven architecture, the content is URL-defined. As long as security implications are resolved, you now have the entire web at your [visitors'] disposal! Now turn it around to yourself and make your site benefit from it! If you're already invoking web services, get! Throw the client a bone and let it fetch the external resources on its own. 9. Pay attention to the UIX design styles of the non-ASP.NET Web 2.0/3.0 communities. There is such a thing as a "Web 2.0 look", whether we like to admit it or not; we web developers evolved and came up with innovations worth standardizing on, why can't designers evolve and come up with visual innovations worth standardizing on? If the end user's happiness is our goal, how are features and stable and performant code more important than aesthetics and ease of use? The problem is, one perspective of what "the Web 2.0 look" actually looks like is likely very different from another's or my own. I'm not speaking of heavy WAY more than they deserved to do, neither of them understood that I was joking. Or, at least, they didn't laugh or even smile.) No, but I am talking about the use of artistic elements, font choices and font styles, and layout characteristics that make a web site stand out from the crowd as being highly usable and engaging. Let's demonstrate, shall we? Here are some sites and solutions that deserve some praise. None of them are ASP.NET-oriented. - (ugly colors but otherwise nice layout and "flow"; all functionality driven by Javascript; be sure to click on the "tabs") - (ignore the ugly logo but otherwise take in the beauty of the design and workflow; elegant font choice) - (I really admire the visual layout of this JavaServer Pages driven site; fortunately I love the fact that they support ASP.NET on their product) - (these guys did a redesign not too terribly long ago; I really admire their selective use of background patterns, large-font textboxes, hover effects, and overall aesthetic flow) - (stunning layout, rock solid functionality, universal acceptance) - (a beautiful and powerful open source CMS) - (I don't like the color scheme but I do like the sheer simplicity - .. for that matter I also love the design and simplicity of) Now here are some ASP.NET-oriented sites. They are some of the most popular ASP.NET-driven sites and solutions, but their design characteristics, frankly, feel like the late 90s. - (one of the most popular CMS/portal options in the open source ASP.NET community .. and, frankly, I hate it) - (sign in and discover a lot of features with a "smart client" feel, but somehow it looks and feels slow, kludgy, and unrefined; I think it's because Microsoft doesn't get out much) - (it looks like a step in the right direction, but there's an awful lot of smoke and mirrors; follow the Community link and you'll see the best of what the ASP.NET community has to offer in the way of forums .. which frankly doesn't impress me as much as phpBB) - (my blog uses this, I like it well enough, but it's just one niche, and that's straight-and-simple blogs - (the ORM technology is very nice, but the site design is only "not bad", and the web site starter kit leave me shrugging with a shiver) Let's face it, the ASP.NET community is not driven by designers. Why? Why do ugly and disgusting piece of cow dung in the area of UIX (User Interface/eXperience). AJAX functionality is based on third party components that "magically just work" while gobs and gobs of gobblygook code on the back end attempts to wire everything together, and what AJAX is there is both rare and slow, encumbered by page bloat and server bloat. The front-end appearance is amateurish, and I'm disheartened as a web developer to work with it. Such seems to be the makeup of way too many ASP.NET solutions that I've seen. 10. Componentize the client. Use "controls" on the client in the same way you might use need to be able to have a manageable development workflow. ASP.NET thrives on the workflows of quick-tagging (<asp:XxxXxx) and drag-and-drop, and that's all part of the equation of what makes it so popular. But that's not all ASP.NET is good for. ASP.NET's greatest strengths are two: IIS and the CLR (namely the C# language). The quality of integration of C# with IIS is incredible. You get, take a look at. Talk about drag-and-drop coding for smart client-side applications, driven by a rich server back-end (Java). This is some serious have a URI, and that should be reflected (somehow) in the browser's Address field. Even if it's going to be impossible to make the URL SEO-friendly (because there are no predictable hyperlinks that are spiderable), the user should be able to return to the same view later, without stepping through a number of steps of logging in and clicking around. This is partly the very definition of the World Wide Web: All around the world, content is reflected with a URL. 12. Glean from the others. Learn CakePHP. Build a simple symfony site. Watch the Ruby On Rails screencasts and consider diving in. And have you seen Jaxer lately?! And absolutely, without hesitation, learn jQuery, which Microsoft will be supporting from here on out in Visual Studio and ASP.NET. Discover the plug-ins and try to figure out how you can leverage them in an ASP.NET environment. Why? Because you've lived in a box for too long. You need to get out and smell the fresh air. Look at the people as they pass you by. You are a free human being. Dare yourself to think outside the box. Innovate. Did you know that most innovations are gleaning from other people's imaginative ideas and implemenations, and reapplying them in your own world, using your own tools? Why should Ruby on Rails have a coding workflow that's better than ASP.NET? Why should PHP be a significantly more popular platform on the public web than ASP.NET, what makes it so special besides being completely free of Redmondite ties? Can you interoperate with it? Have you tried? How can the innovations of Jaxer be applied to the IIS 7 and ASP.NET scenario, what can you do to see something as earth-shattering inside this Mortian realm? How can you leverage jQuery to make your web site do things you wouldn't have dreamed of trying to do otherwise? Or at least, how can you apply it to make your web application more responsive and interactive than the typical junk you've been pumping out? You can be a much more productive developer. The whole world is at your fingertips, you only need to pay attention to it and learn how to leverage it to your advantage. And these things, I believe, are what is going to drive the Web 1.0 Morts in the direction of Web 3.0, building on the hard work of yesteryear's progress and making the most of the most powerful, flexible, stable, and comprehensive server and web development technology currently in existence--ASP.NET and Visual Studio--by breaking out of their molds and entering into the new frontier. - Login or register to post comments - 2701 reads - Printer-friendly version (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) rekna replied on Sat, 2008/10/11 - 4:47am After working on a project with ASP.NET and MS ajax, I've come to the conclusion myself that digging into javascript for better RIA experience is necessary. However, writing efficient javascript, avoiding memory leaks and dealing with cross browser issues requires quite some experience in javascript programming. jQuery and other javascript libraries make it more easy to program javascript and build RIA but still, there are a lot of pitfalls... ask 10 people using jQuery whether they have to unbind events of a DOM element, when replacing its HTML, in order to avoid memory leaks, I wonder how many will know the right answer. What is needed is some Best Practises ... look at all the javascript libraries documentation, most of them have good documentation for each method and object in their library. A description for some common web application scenario's is mostly missing. Mark Kamoski replied on Mon, 2008/11/17 - 3:45pm Hmmm. All this talk reminds me of something... ...oh yes... ...at the text notes... Tim Berners-Lee, inventor of the World Wide Web, has questioned whether one can use the term in any meaningful way, since many of the technology components of Web 2.0 have existed since the early days of the Web.[4][5]...."[4] ...so, given that, what EXACTLY is one talking about when one says "Web 3.0"??? (Just curious.) Thank you. -- Mark Kamoski
http://css.dzone.com/news/keys-to-web-30-design-and-deve
crawl-002
refinedweb
4,761
61.26
The Java platform contains three classes that you can use when working with character data: [1] [2] [3] Characters An object of Character type contains a single character value. You use a Character object instead of a primitive char variable when an object is requiredfor example, when passing a character value into a method that changes the value or when placing a character value into a data structure, such as a vector, that requires objects. public class CharacterDemo { public static void main(String args[]) { Character a = new Character('a'); Character a2 = new Character('a'); Character b = new Character('b'); int difference = a.compareTo(b); if (difference == 0) { System.out.println("a is equal to b."); } else if (difference < 0) { System.out.println("a is less than b."); } else if (difference > 0) { System.out.println("a is greater than b."); } System.out.println("a is " + ((a.equals(a2)) ? "equal" : "not equal") + " to a2."); System.out.println("The character " + a.toString() + " is " + (Character.isUpperCase(a.charValue()) ? "upper" : "lower") + "case."); } } The following is the output from this program: a is less than b. a is equal to a2. The character a is lowercase. The CharacterDemo program calls the following constructors and methods provided by the Character class: [1] The compareTo method was added to the Character class for Java 2 SDK v. 1.2. [a] Added to the Java platform for the 1.1 release. Replaces isSpace(char), which is deprecated. [b] Added to the Java platform for the 1.1 release. [c] Added to the Java platform for the 1.1 release. Replaces isJavaLetter(char), which is deprecated. [d] Added to the Java platform for the 1.1 release. Replaces isJavaLetterOrDigit(char), which is deprecated. Strings and String Buffers The Java platform provides two classes, String and StringBuffer, that store and manipulate stringscharacter. public class StringsDemo { public static void main(String[] args) { String palindrome = "Dot saw I was Tod"; int len = palindrome.length(); StringBuffer dest = new StringBuffer(len); for (int i = (len - 1); i >= 0; i--) { dest.append(palindrome.charAt(i)); } System.out.println(dest.toString()); } } The output from this program is: doT saw I was toD In addition to highlighting the differences between strings and string buffers, this section discusses several features of the String and StringBuffer classes: creating strings and string buffers, using accessor methods to get information about a string or string buffer, and modifying a string buffer. Creating Strings and String Buffers A string is often created from a string literala series of characters enclosed in double quotes. For example, when it encounters the following string literal, the Java platform creates a String object whose value is Gobbledygook. "Gobbledygook" The StringsDemo program uses this technique to create the string referred to by the palindrome variable: String palindrome = "Dot saw I was Tod"; You can also create String objects as you would any other Java object: using the new keyword and a constructor. The String class provides several constructors that allow you to provide the initial value of the string, using different sources, such as an array of characters, an array of bytes, or a string buffer. Table 24 shows the constructors provided by the String class. [a] The String class defines other constructors not listed in this table. Those constructors have been deprecated, and their use is not recommended. Here's an example of creating a string from a character array: char[] helloArray = { 'h', 'e', 'l', 'l', 'o' }; helloString = new String(helloArray); System.out.println(helloString); The last line of this code snippet displays: hello. You must always use new to create a string buffer. The StringBuffer class has three constructors, as described in Table 25. The StringsDemo program creates the string buffer referred to by dest, using the constructor that sets the buffer's capacity: String palindrome = "Dot saw I was Tod"; int len = palindrome.length(); StringBuffer dest = new StringBuffer(len); This code creates the string buffer with an initial capacity equal to the length of the string referred to by the name palindrome. This ensures only one memory allocation for dest because it's just big enough to contain the characters that will be copied to it. By initializing the string buffer's capacity to a reasonable first guess, you minimize the number of times memory must be allocated for it. This makes your code more efficient because memory allocation is a relatively expensive operation. Getting the Length of a String or a String Buffer Methods used to obtain information about an object are known as accessor methods. One accessor method that you can use with both strings and string buffers is the length method, which returns the number of characters contained in the string or the string buffer. After the following two lines of code have been executed, len equals 17: String palindrome = "Dot saw I was Tod"; int len = palindrome.length(); In addition to length, the StringBuffer class has a method called capacity, which returns the amount of space allocated for the string buffer rather than the amount of space used. For example, the capacity of the string buffer referred to by dest in the StringsDemo program never changes, although its length increases by 1 for each iteration of the loop. Figure 48 shows the capacity and the length of dest after nine characters have been appended to it. Figure 48. A string buffer's length is the number of characters it contains; a string buffer's capacity is the number of character spaces that have been allocated. The String class doesn't have a capacity method, because a string cannot change. Getting Characters by Index from a String or a String Buffer You can get the character at a particular index within a string or a string buffer by using the charAt accessor. The index of the first character is 0; the index of the last Figure 49: Figure 49. Use the charAt method to get a character at a particular index. The figure also shows that to compute the index of the last character of a string, you have to subtract 1 from the value returned by the length method. If you want to get more than one character from a string or a string buffer, you can use the substring method. The substring method has two versions, as shown in Table 26. [a] The substring methods were added to the StringBuffer class for Java 2 SDK 1.2. The following code gets from the Niagara palindrome the substring that extends from index 11 to index 15, which is the word "roar": String anotherPalindrome = "Niagara. O roar again!"; String roar = anotherPalindrome.substring(11, 15); Remember that indices begin at 0 (Figure 50). Figure 50. Use the substring method to get part of a string or string buffer. Searching for a Character or a Substring within a String The String class provides two accessor methods that return the position within the string of a specific character or substring: indexOf and lastIndexOf. The indexOf method searches forward from the beginning of the string, and lastIndexOf searches backward from the end of the string. Table 27 describes the various forms of the indexOf and the lastIndexOf methods. The StringBuffer class does not support the indexOf or the lastIndexOf methods. If you need to use these methods on a string buffer, first convert the string buffer to a string by using the toString method. Note The methods in the following class don't do any error checking and assume that their argument contains a full directory path and a file name with an extension. If these methods were production code, they would verify that their arguments were properly constructed. //); } } public class FilenameDemo { public static void main(String[] args) { Filename myHomePage = new Filename("/home/mem/index.html", '/', '.'); System.out.println("Extension = " + myHomePage.extension()); System.out.println("Filename = " + myHomePage.filename()); System.out.println("Path = " + myHomePage.path()); } } And here's the output from FilenameDemo: Extension = html Filename = index Path = /home/mem As shown in Figure 51, our extension method uses lastIndexOf to locate the last occurrence of the period (.) in the file name. Then substring uses the return value of lastIndexOf to extract the file name extensionthat. Figure 51. The use of lastIndexOf and substring in the extension method in the Filename class. Also, notice that the extension method uses dot + 1 as the argument to substring. If the period character (.) is the last character of the string, dot + 1 is equal to the length of the string, which is 1 larger than the largest index into the string (because indices start at 0). This is a legal argument to substring because that method accepts an index equal to but not greater than the length of the string and interprets it to mean "the end of the string." Comparing Strings and Portions of Strings The String class has several methods for comparing strings and portions of strings. Table 28 lists and describes these methods. [a] Methods marked with * were added to the String class for Java 2 SDK 1.2. public class RegionMatchesDemo { public static void main(String[] args) { String searchMe = "Green Eggs and Ham"; String findMe = "Eggs"; int len = findMe.length(); boolean foundIt = false; int i = 0; while (!searchMe.regionMatches(i, findMe, 0, len)) { i++; foundIt = true; } if (foundIt) { System.out.println(searchMe.substring(i, i+len)); } } } The output from this program is Eggs. The program steps through the string referred to by searchMe one character at a time. For each character, the program calls the regionMatches method to determine whether the substring beginning with the current character matches the string for which the program is looking. Manipulating Strings The String class has several methods that appear to modify a string. Of course, strings can't be modified, so what these methods really do is create and return a second string that contains the result, as indicated in Table 29. public class BostonAccentDemo { private static void bostonAccent(String sentence) { char r = 'r'; char h = 'h'; String translatedSentence = sentence.replace(r, h); System.out.println(translatedSentence); } public static void main(String[] args) { String translateThis = "Park the car in Harvard yard."; bostonAccent(translateThis); } } The replace method switches all the r's to h's in the sentence string so that the output of this program is: Pahk the cah in Hahvahd yahd. Modifying String Buffers As you know, string buffers can change. The StringBuffer class provides various methods for modifying the data within a string buffer. Table 30 summarizes the methods used to modify a string buffer. [a] Methods marked with * were added to the StringBuffer class for Java 2 SDK 1.2. public class InsertDemo { public static void main(String[] args) { StringBuffer palindrome = new StringBuffer( "A man, a plan, a canal; Panama."); palindrome.insert(15, "a cat, "); System.out.println(palindrome); } } The output from this program is still a palindrome: A man, a plan, a cat, a canal; Panama. [2] [2] Palindrome by Jim Saxe. With insert, you specify the index before which you want the data inserted. In the example, 15 specifies that "a cat, " is to be inserted before the first a in a canal. To insert data at the beginning of a string buffer, use an index of 0. To add data at the end of a string buffer, use an index equal to the current length of the string buffer or use append. If the operation that modifies a string buffer causes the size of the string buffer to grow beyond its current capacity, the string buffer allocates more memory. As mentioned previously, memory allocation is a relatively expensive operation, and you can make your code more efficient by initializing a string buffer's capacity to a reasonable first guess. Strings and the Compiler The compiler uses the String and the StringBuffer classes behind the scenes to handle literal strings and concatenation. As you know, you specify literal strings between double quotes: "Hello World!" You can use literal strings anywhere you would use a String object. For example, System.out.println accepts a string argument, so you could use a literal string there: System.out.println("Might I add that you look lovely today."); You can also use String methods directly from a literal string: int len = "Goodbye Cruel World".length(); Because the compiler automatically creates a new string object for every literal string it encounters, you can use a literal string to initialize a string: String s = "Hola Mundo"; The preceding construct is equivalent to, but more efficient than, this one, which ends up creating two identical strings: String s = new String("Hola Mundo"); //don't do this You can use + to concatenate strings: String cat = "cat"; System.out.println("con" + cat + "enation"); Behind the scenes, the compiler uses string buffers to implement concatenation. The preceding example compiles to: String cat = "cat"; System.out.println(new StringBuffer().append("con"). append(cat).append("enation").toString()); You can also use the + operator to append to a string values that are not themselves strings: System.out.println("You're number " + 1); The compiler implicitly converts the nonstring value (the integer 1 in the example) to a string object before performing the concatenation operation. Summary of Characters and Strings Use a Character object to contain a single character value, use a String object to contain a sequence of characters that won't change, and use a StringBuffer object to construct or to modify a sequence of characters dynamically. Refer to the tables listed in Table 31 for details about constructors and methods in these classes. public class Palindrome { public static boolean isPalindrome(String stringToTest) { String workingCopy = removeJunk(stringToTest); String reversedCopy = reverse(workingCopy); return reversedCopy.equalsIgnoreCase(workingCopy); } protected static String removeJunk(String string) { int i, len = string.length(); StringBuffer dest = new StringBuffer(len); char c; for (i = (len - 1); i >= 0; i--) { c = string.charAt(i); if (Character.isLetterOrDigit(c)) { dest.append(c); } } return dest.toString(); } protected static String reverse(String string) { StringBuffer sb = new StringBuffer(string); return sb.reverse().toString(); } public static void main(String[] args) { String string = "Madam, I'm Adam."; System.out.println(); System.out.println("Testing whether the following " + "string is a palindrome:"); System.out.println(" " + string); System.out.println(); if (isPalindrome(string)) { System.out.println("It IS a palindrome!"); } else { System.out.println("It is NOT a palindrome!"); } System.out.println(); } } The output from this program is: Testing whether the following string is a palindrome: Madam, I'm Adam. It IS a palindrome! Questions and Exercises: Characters and Strings
https://flylib.com/books/en/2.33.1/characters_and_strings.html
CC-MAIN-2020-10
refinedweb
2,405
55.24
I'm not sure, but I believe that what you want is already built into zenoss using the google maps api. I built this zenpack because we wanted to throw in isometric views of our buildings and specify where every switch and access point was on each floor. Not something google maps supports. hi rians, thx for the great Map.zenpack. Have installed it and uploaded a background, but how can i use the Map for monitoring? Is there a ducumentation available how to use? i added some devices but i can´t get any Status informations from device??? is it possible to generate more maps with your zenpack and order them into mainmap and downdrill submaps? How can i use the map for monitoring? i it possible to change it against normal Network Map? THX for support duck Currently there is no documentation available, and parts of the zenpack are a complete hack... If anybody wants to write some documentation or provide some patches, they are welcome The zenpack uses the location tree to determine what possible children can be added to the map. If you set the location of the devices to be at the same location the map is, then you should be able to use the dropdown box to add those devices.You can also add other locations, if you have created a map for them. The primary view of the finished product is a dashboard widget that you should be able to add. the devices that are up should show as green, the ones that are down show as red. hey rians, thx for your response... I have uploaded my customized Map (PNG) to Locations/wettersystem I have already set the devices to the Location "wettersystem" ..but what do you mean by setting the map to the same location?? How can I set the map to the same Location? Could you please explain me in more detail?? For what do i need to set/update "customMap/customMapImages?func=getembeddedimg"??? what does it mean? wenn i add some devices to the map i still can decide between the devices which i have defined to the location, but the the objects appear grey (no grenn or red ) altough they are online??? second: when i try to add a second customized MAP in location location/wettersystem/subsystems I always get the Error message ******************************************* what is wrong... what do i have to set? THX duck I apologize for the confusion. The map has an inherent location (because of what location you are at when you go to the Custom Map tab). In order to add devices to a map, they have to be set to the same location that the custom map is. One of the things that the zenpack install is supposed to do is set up the data object at every location, but for some reason, it frequently fails. As far as I can tell, its non-deterministic, and until somebody smarter than me can tell me why, I'm stuck with putting in checks that keep the class from butchering its own data. Thats what that error message is indicating. There is a hack in place that will attempt to fix it. Whenever you visit the custom map tab of the parent location, it checks all its child locations and attempts to (re)fix them if they have an odd state. If that doesn't work, then you can try something like the following in zendmd: from ZenPacks.USU.Map.CustomMap import CustomMapObject loc = dmd.getObjByPath('/zport/dmd/Locations/insert/path/to/location/here') loc.custom_map = CustomMapObject(loc) commit() In the map editor (the Custom Map tab), all objects are grey. I suppose I could make them take on the proper color, but I didn't bother. The finished rendered map is meant to be viewed on the zenoss dashboard. There should be a custom map portlet that you can add to your front page. If you want to imbed the map somewhere else, then add the custom map portlet, and then copy the address of the iframe that it uses. You can view it independantly or put it in another website if you wish. is there is any way to uninstall this zenpack because i cant remove it [zenoss@utg-zenoss-01 ZenPacks]$ zenpack --remove=ZenPacks.USU.Map ERROR: zenpack command failed. Reason: TypeError: remove() takes exactly 2 arguments (3 given) [zenoss@utg-zenoss-01 ZenPacks]$ I hava install the zenpack ,but how can i use the zenpack for monitoring? is there a ducumentation available how to use? Thats odd, the documentation only says it needs the two arguments. You can always try adding in an extra bogus argument in the __init__.py at the base of the zenpack around line 92 or so (depends on the version you have): def remove(self, app, bogusarg): ...... failing that, open up zendmd, and try to call it directly: from ZenPacks.USU.Map import ZenPack ZenPack.remove(dmd) If that still doesn't work, you can remove it manually by removing the customMapTab action for the factory_type_information attribute of each location, and deleting the custom_map attribute of each Location object as well. I would do some research on manually ripping out zenpacks themselves, but that is how you can clean out the zope database. wangleileo wrote: I hava install the zenpack ,but how can i use the zenpack for monitoring? is there a ducumentation available how to use? Currently, there is no documentation. The basic premise of this zenpack is that it allows you to create your own map, with your own background and your own icons to represent each object. It has a built-in map editor that uses ajax and the canvas tag that allows you to create the maps in your browser. You can then load the custom map dashboard widget to see the rendered maps. The dashboard widget will also allow you to click on submaps to parse through the Location tree. The default icon set that is included in the zenpack show things as green, yellow, red, or grey, depending on their state. The map editor itself always shows icons in grey. All I can say is that if you are curious, set up a new instance of zenoss and try it out. It is mostly intuitive. As the above posts show though, some of the install and uninstall code is unstable, mostly because that is probably the feature i use least . I will try to be available to answer any questions you may have about it though. Who knows. I may even find time to fix some of the install/uninstall issues. Part of the reason I started this thread was try and answer some questions I had about the install/uninstall process, and so far, nobody seems to know the answers, so I am pretty much in the dark. my edge interface is none,but how to set edge interface? Thats so that the map can draw arrows indicating the amount of traffic. At least one of the devices has to have interfaces on its OS tab in order for that to work. If it does, then I imagine you have found a bug. I should change it so that that option isn't available if there are no interfaces. Two question: First, is there is any way to specify the thickness level of the links? Second, I'm having trouble setting the edge interface. When i try to set it the dialogue appears to hang and it only display the loading dialogue. See the attached screen. It looks like it is attempting to load the available interfaces of the two attached sides. I would check to make sure that the os tab for the two connected devices has a valid list of interfaces. Can you tell me what the output of custom_map/customMapInterface?func=getInterfaces&node=Passport Barceloneta and custom_map/customMapInterface?func=getInterfaces&node=Gabinete Aires CETA when you add those to the end of your browser url? There aren't any options for changing the thickness of the lines, although it wouldn't be difficult to add. The backend code already supports it, it just needs to be in the user interface. ---With the output of custom_map: ---Both produce the same error see GetInterface-Errors.txt ---in Screenshot-Mozilla Firefox.png shows the output of /Locations/Barceloneta interfaces that shows the interfaces for the location Barceloneta ---In Passport Barceloneta Interfaces.png shows the device interfaces. As i see the function only display interfaces with descriptions on it. <thinkingoutloud> One of the most Irritating things about zenoss is that when your code throws a KeyError (or was it AttributeError?), the web interface reinterprets that as meaning that the page wasn't there and gives you useless error message that doesn't tell you which line of code actually threw the error, since it doesn't know if it is my code or zope code that did it. </thinkingoutloud> Basically, I need to know which line of code in the getInterfaces function is throwing the exception, and what the actual exception was. in zendmd, can you run the following code? dmd.Locations.Barceloneta.custom_map.getInterfaces('Gabinete Aires CETA') dmd.Locations.Barceloneta.custom_map.getInterfaces('Passport Barceloneta') Hopefully those lines will throw a real exception and actually give you some details of where the error occurs. As fare as the Barceloneta node itself, yes, it should be filtering out nodes without descriptions on it. I did this to filter out extra interfaces since our devices automatically tack on an interface description if you don't set one manually. it makes it nice to filter out the ones that that are automatically generated (virtual interfaces, vlans, etc). In the _getInterfacesByDevice method of CustomMap.py you can remove the line that says if i.description: If you want to get rid of this behavior. I think I will remove it in my repository as well.
http://community.zenoss.org/message/50502
CC-MAIN-2014-49
refinedweb
1,660
64.3
Today in this article we shall study resize the plots and subplots using Matplotlib. We all know that for Data Visualization purposes, Python is the best option. It has a set of modules that run on almost every system. So, in this small tutorial, our task is to brush up on the knowledge regarding the same. Let’s do this! Basics of Plotting Plotting basically means the formation of various graphical visualizations for a given data frame. There are various types in it: - Bar plots: A 2D representation of each data item with respect to some entity on the x-y scale. - Scatter plots: Plotting of small dots that represent data points on the x-y axis. - Histogram - Pie chart etc There are various other techniques that are in use in data science and computing tasks. To learn more about plotting, check this tutorial on plotting in Matplotlib. What are subplots? Subplotting is a distributive technique of data visualization where several plots are included in one diagram. This makes our presentation more beautiful and easy to understand the distribution of various data points along with distinct entities. Read more about subplots in Matplotlib. Python setup for plotting - Programming environment: Python 3.8.5 - IDE: Jupyter notebooks - Library/package: Matplotlib, Numpy Create Plots to Resize in Matplotlib Let’s jump to create a few plots that we can later resize. Code: from matplotlib import pyplot as plt import numpy as np x = np.linspace(0, 10, 100) y = 4 + 2*np.sin(2*x) fig, axs = plt.subplots() plt.xlabel("time") plt.ylabel("amplitude") plt.title("y = sin(x)") axs.plot(x, y, linewidth = 3.0) axs.set(xlim=(0, 8), xticks=np.arange(1, 8), ylim=(0, 8), yticks=np.arange(1, 8)) plt.show() Output: This is just a simple plot for the sine wave that shows the amplitude movement when time increases linearly. Now, we shall see the subplots that make things more simple. For a practice play, I am leaving codes for cos(x) and tan(x). See, if the code works or not. Code for cos(x): from matplotlib import pyplot as plt import numpy as np x = np.linspace(0, 10, 100) y = 4 + 2*np.cos(2*x) fig, axs = plt.subplots() plt.xlabel("time") plt.ylabel("amplitude") plt.title("y = cos(x)") axs.plot(x, y, linewidth = 3.0) axs.set(xlim=(0, 8), xticks=np.arange(1, 8), ylim=(0, 8), yticks=np.arange(1, 8)) plt.show() Output: Code for tan(x): from matplotlib import pyplot as plt import numpy as np x = np.linspace(0, 10, 100) y = 4 + 2*np.tan(2*x) fig, axs = plt.subplots() plt.xlabel("time") plt.ylabel("amplitude") plt.title("y = tan(x)") axs.plot(x, y, linewidth = 3.0) axs.set(xlim=(0, 8), xticks=np.arange(1, 8), ylim=(0, 8), yticks=np.arange(1, 8)) plt.show() Output: The figures in Matplotlib are having a predefined size layout. So, when we need to change their size then the plot class has a figure function. This function is responsible for making the view more relative to the screen. The user has the full right to edit the dimensions of the plot. We shall understand this with an example: Code: import random from matplotlib import pyplot as plt plt.figure(figsize = (5, 5)) x = [] y = [] plt.xlabel("X values") plt.ylabel("Y values") plt.title("A simple graph") N = 50 for i in range(N): x.append(random.randint(0, 10)) y.append(random.randint(0, 10)) plt.bar(x, y, color = "pink") plt.show() Output: Explanation: - In this code the first two lines of code import the pyplot and random libraries. - In the second line of code, we use the figure() function. In that, the figsize parameter takes a tuple of the height and width of the plot layout. - This helps us decide how much height we are giving. - The random function inserts random values from ranges 1 to 10 in each of the two lists x, y. - Then call the bar() function to create bar plots. Resizing Plots in Matplotlib This library is for creating subplots on a single axis or multiple axes. We can implement various bar plots on it. It helps to create common layouts for statistical data presentation. Using figsize Code example: from matplotlib import pyplot as plt import numpy as np fig, ax = plt.subplots(figsize = (6, 6)) p1 = ax.bar(ind, menMeans, width, yerr=menStd, label='Men') p2 = ax.bar(ind, womenMeans, width, bottom=menMeans, yerr=womenStd, label='Women') ax.axhline(0, color='grey', linewidth=0.8) ax.set_ylabel('Scores') ax.set_title('Scores by group and gender') ax.legend() # Label with label_type 'center' instead of the default 'edge' ax.bar_label(p1, label_type='center') ax.bar_label(p2, label_type='center') ax.bar_label(p2) plt.show() Output: Explanation: - The first two lines are import statements for modules. - Then we define two tuples for men and women distribution values. - To divide the graph the standard divisions are menStd and womenStd. - Then the width of each bar is set to 0.35. - We create two objects fig and ax of plt.subplot() function. - This function has one parameter figsize. It takes a tuple of two elements depicting the resolution of the display image (width, height). - Then we assign two variables p1 and p2 and call the bar() method using the ax instance. - Then lastly just assign the labels to x-y axes and finally plot them. Categorical plotting using subplots The categorical data – information with labels plotting is also possible with matplotlib’s subplot. We can use the figsize parameter here to divide the plots into many sections. Example: from matplotlib import pyplot as plt data = {'tiger': 10, 'giraffe': 15, 'lion': 5, 'deers': of presence of all the animals in a zoo') Output: Explanation: - At first, we create a dictionary of all the key-value pairs. - Then we create a list of all the keys and a separate list of all the values. - After that create a simple instance of the subplots() class. - To write the necessary parameters, we give 1 at first to declare the number of rows. 3 to declare the number of columns. Thus, there are three plots on a single column - Here, figsize is equal to (9, 3). - Then we place each plot on the axes. Using the list functionality, - ax[0] = bar graph - ax[1] = scatter plot - ax[2] = a simple line graph - These show the presence of all the animals in a zoo. Conclusion So, here we learned how we can make things easier using subplots. Using the figsize parameter saves space and time for data visualization. So, I hope this is helpful. More to come on this topic. Till then keep coding.
https://www.askpython.com/python-modules/matplotlib/resize-plots-and-subplots
CC-MAIN-2022-33
refinedweb
1,128
69.89
Surf Reports During the late 60's , I would call Ron Jon's {person to person] asking for "KAHUNA". Back then, you could hear the whole conversation between the operator and the party you were calling. The surf shop would reply "He's not here. he will be back between 2 and 3". Or "He will be back at 5 or 6", or "He was gone for the day". TRANSLATION: surf was 2 to 3 feet, or 5 to 6 feet, or "Gone for the day" meant there was no surf. The operator would then inform me that Kahuna was not there and would ask me if I would like to try back later. I would hang up and get my dime back. (Living on the gulf coast). That is how we got our surf reports!! Things have changed! The phone call now cost $.50 BIG SMILE................ And I don't know if anyone working at Ron Jons knows who Kahuna is. PS, In 1969 a former friend of mine borrowed my 9'6" Dewey Weber Harold Iggy model and took it to the east coast, (Cocoa Beach), got drunk and left it there. If you know anything about this board, please let me know! - Dave A few cocoa beach memories....... I learned to surf behind the Satellite Hotel across from Jamaica Drive. Johnny Cash used to play there. I remember... Late to little league games due to surfing, burying a board in the sand when on restriction, driving at 14 yrs old was legal. CC Riders, Surfboards Hawaii, Da Cat models. Cocoa Beach High class president in 8th, 9th and 10th grade, teen town, lightning storms. Surf trips with Dan Mode and friends to New Smyrna, little league catcher with Richard Manzo. Pithcing a perfect game, Mrs. Edgar's art class, the drunk french teacher, shells in the grass during football practice, Steve Petro playing center, Jeff Goldsmith throwing me touchdown passes, Canaveral Pier surf contest with Jude, Skip Fry, Ron Stoner and Dale Dodson staying at our house, knee pads, surfing behind boats, lots of good memories and times. Seems kinda hard to find it's soul now. O'Hare surfboards only ones, then some guy from Jersey moved down and opened a weird little surf shop in an old gas station, never really accepted by locals. Noseriding, noseriding and noseriding, driving on the beach, teaching a 15 yr old newcomer to surf who became Brevard County's first surfing fatality, taking our boat thru the canals to school and, and... "If I could find my way back home, where would I go?" R.Adams - David Carson - Davidcarsondesign.com Surfing Cocoa Beach in the 60's Yes the Cocoa Beach Pier, 4th and 16th streets, PAFB Pier and the O-Club were THE stops in the 60's. There was another place south of jetty park called "Stucks", (an area at the end of Central Avenue now occupied by Royal Mansions Resorts). I remember a large stand of Australian pines and a beach house positioned very close to the shore beyond the dune line. Surf there was usually small, but we always had to stop by to see if anyone needed to be pulled out of the soft and loose sand. Surf inside the jetties was rare and always risky. Climbing out on the rocks with a 10' plus board as waves crushed over the jetties was well worth the ride on the unique "lefts" into the port channel. It only took one wipeout for me to realize the greatest undertow I've ever experienced. Hanging onto my board was the only thing that saved me that day. When the wave crashed, the board and I were pulled under. My arms and legs were wrapped around the board like there was no tomorrow, (which was almost the case for me). You can't imagine the sensation of not knowing the direction to the surface. As I weakened, the board was pulled away from me. I remember grabbing my knees and bouncing around until I broke the surface with my face only to see another 5' wave crashing over me. Again I was pulled under until I thought my lungs would burst. - John Van Lear Doze Were Da Days... Dear BH2 ~ I spent the winters of '68-'69 and '69-'70 living in Cocoa Beach, mostly surfing the pier. Out of college, money spent, see no future, pay no rent. First year, I lived in my VW van. Lots of cold snaps that winter with many nights in the twenties. (Thank goodness global warming doesn't let that happen anymore.) Camped in the little pine woods just south of the Pier (where the Chateau high-rise condo now stands). Lots of others from all around the U.S. were there, and at Jetty Park too (free parking then!). The next winter I graduated to a frumpy little pink stucco efficiency two doors north of the Pier (also gone to high-rise now) for $25 a week. Had to boil the water and skim the minerals off, but otherwise it was a heckuva deal. Quite a few good surfing days that spring -- but real life eventually intervened. Loaded up the van and parked overnight along the Indian River in Titusville to watch Apollo 13 leave for the moon in April 11, 1970. As soon as it was out of sight, started the engine and drove home to Long Island, never to return (well, almost never)... But now after 30 years of practicing law (still trying to get it right), I have blown my meager savings on a little dump in the Groves -- so I can spend my twilight years once again surfing the Pier. :] So, starting this fall, when you Pier cowboys (and girls) see the old fart snowbird out in the water in the early a.m., think fondly of the "old days" you missed, and cut him some slack, okay? ;] - Russell Stein - Montauk, New York Canaveral Pier I started surfing at Cocoa Beach pier in 1968. We traveled south from DeLand, Fl. to camp out on the beach (unless we were run off by the CBPD). We were in high school at the time and always looked forward to the weekend and the great waves at the pier and Patricks. The first time I ever got tubed was at the pier! Riding my 9'10" Hansen. The best of times!! - Henry Coggins On to Trivia Page 3 On to Trivia Page 4 On to Trivia Page 5 On to Trivia Page 6 On to Trivia Page 7 On to Trivia Page 8 Back to Trivia Page 1 Back to the Main Page Anyone want to talk old surf story or have original trivia about Cocoa Beach and Brevard County? and let all these new kids know what it was REALLY like pre-internet days. Peace. BH2
http://www.angelfire.com/fl/boardheads2/trivia2.html
CC-MAIN-2016-07
refinedweb
1,148
80.62
This document describes the format of XRC resource files, as used by wxXmlResource. Formal description in the form of a RELAX NG schema is located in the misc/schema subdirectory of the wxWidgets sources. XRC file is a XML file with all of its elements in the namespace. For backward compatibility, namespace is accepted as well (and treated as identical to), but it shouldn't be used in new XRC files. XRC file contains definitions for one or more objects – typically windows. The objects may themselves contain child objects. Objects defined at the top level, under the root element, can be accessed using wxXmlResource::LoadDialog() and other LoadXXX methods. They must have name attribute that is used as LoadXXX's argument (see Object Element for details). Child objects are not directly accessible via wxXmlResource, they can only be accessed using XRCCTRL(). The root element is always <resource>. It has one optional attribute, version which, while optional, should always be set to the latest version. At the time of writing, it is "2.5.3.0", so all XRC documents should look like the following: The version consists of four integers separated by periods. The first three components are major, minor and release number of the wxWidgets release when the change was introduced, the last one is revision number and is 0 for the first incompatible change in given wxWidgets release, 1 for the second and so on. The version changes only if there was an incompatible change introduced; merely adding new kind of objects does not constitute incompatible change. <resource> may have arbitrary number of object elements as its children; they are referred to as toplevel objects in the rest of this document. Unlike objects defined deeper in the hierarchy, toplevel objects must have their name attribute set and it must be set to a value unique among root's children. The <object> element represents a single object (typically a GUI element) and it usually maps directly to a wxWidgets class instance. It has one mandatory attribute, class, and optional name and subclass attributes. The class attribute must always be present, it tells XRC what wxWidgets object should be created and by which wxXmlResourceHandler. name is the identifier used to identify the object. This name serves three purposes: Name attributes must be unique at the top level (where the name is used to load resources) and should be unique among all controls within the same toplevel window (wxDialog, wxFrame). The subclass attribute optional name of class whose constructor will be called instead of the constructor for "class". See Subclassing for more details. <object> element may – and almost always do – have children elements. These come in two varieties: <object>elements and are can be repeated more than once. The specifics of which object classes are allowed as children are class-specific and are documented below in Supported Controls. Example: Anywhere an <object> element can be used, <object_ref> may be used instead. <object_ref> is a reference to another named (i.e. with the name attribute) <object> element. It has one mandatory attribute, ref, with value containing the name of a named <object> element. When an <object_ref> is encountered, a copy of the referenced <object> element is made in place of <object_ref> occurrence and processed as usual. For example, the following code: is equivalent to Additionally, it is possible to override some parts of the referenced object in the <object_ref> pointing to it. This is useful for putting repetitive parts of XRC definitions into a template that can be reused and customized in several places. The two parts are merged as follows: <object_ref>are added to it. <object_ref>are scanned. If an element with the same name (and, if specified, the nameattribute too) is found in the referred object, they are recursively merged. <object_ref>that do not have a match in the referred object are appended to the list of children of the resulting element by default. Optionally, they may have insert_atattribute with two possible values, "begin" or "end". When set to "begin", the element is prepended to the list of children instead of appended. For example, "my_dlg" in this snippet: is identical to: There are several property data types that are frequently reused by different properties. Rather than describing their format in the documentation of every property, we list commonly used types in this section and document their format. Boolean values are expressed using either "1" literal (true) or "0" (false). Floating point values use POSIX (C locale) formatting – decimal separator is "." regardless of the locale. Colour specification can be either any string colour representation accepted by wxColour::Set() or any wxSYS_COLOUR_XXX symbolic name accepted by wxSystemSettings::GetColour(). In particular, the following forms are supported: Some examples: Sizes and positions can be expressed in either DPI-independent pixel values or in dialog units. The former is the default, to use the latter "d" suffix can be added. Semi-formally the format is: size := x "," y ["d"] where x and y are integers. Either of the components (or both) may be "-1" to signify default value. As a shortcut, empty string is equivalent to "-1,-1" (= wxDefaultSize or wxDefaultPosition). Notice that the dialog unit suffix "d" applies to both x and y if it's specified and cannot be specified after the first component, but only at the end. Examples: Similarly to sizes, dimensions are expressed as integers with optional "d" suffix. When "d" suffix is used, the integer preceding it is interpreted as dialog units in the parent window, otherwise it's a DPI-independent pixel value. This is similar to Size size, but for values that are not expressed in pixels and so doesn't allow "d" suffix nor does any DPI-dependent scaling, i.e. the format is just size := x "," y and x and y are just integers which are not interpreted in any way. String properties use several escape sequences that are translated according to the following table: By default, the text is translated using wxLocale::GetTranslation() before it is used. This can be disabled either globally by not passing wxXRC_USE_LOCALE to wxXmlResource constructor, or by setting the translate attribute on the property node to "0": Like Text, but the text is never translated and translate attribute cannot be used. An unformatted string. Unlike with Text, no escaping or translations are done. Any URL accepted by wxFileSystem (typically relative to XRC file's location, but can be absolute too). Unlike with Text, no escaping or translations are done. Bitmap properties contain specification of a single bitmap or icon. In the most basic form, their text value is simply a relative filename (or another wxFileSystem URL) of the bitmap to use. For example: The value is interpreted as path relative to the location of XRC file where the reference occurs. Bitmap file paths can include environment variables that are expanded if wxXRC_USE_ENVVARS was passed to the wxXmlResource constructor. Alternatively, it is possible to specify the bitmap using wxArtProvider IDs. In this case, the property element has no textual value (filename) and instead has the stock_id XML attribute that contains stock art ID as accepted by wxArtProvider::GetBitmap(). This can be either custom value (if the app uses app-specific art provider) or one of the predefined wxART_XXX constants. Optionally, stock_client attribute may be specified too and contain one of the predefined wxArtClient values. If it is not specified, the default client ID most appropriate in the context where the bitmap is referenced will be used. In most cases, specifying stock_client is not needed. Examples of stock bitmaps usage: If both specifications are provided, then stock_id is used if it is recognized by wxArtProvider and the provided bitmap file is used as a fallback. Style properties (such as window's style or sizer flags) use syntax similar to C++: the style value is OR-combination of individual flags. Symbolic names identical to those used in C++ code are used for the flags. Flags are separated with "|" (whitespace is allowed but not required around it). The flags that are allowed for a given property are context-dependent. Examples: One of the wxShowEffect values. Example: XRC uses similar, but more flexible, abstract description of fonts to that used by wxFont class. A font can be described either in terms of its elementary properties, or it can be derived from one of system fonts or the parent window font. The font property element is a "composite" element: unlike majority of properties, it doesn't have text value but contains several child elements instead. These children are handled in the same way as object properties and can be one of the following "sub-properties": All of them are optional, if they are missing, appropriate wxFont default is used. If the sysfont or inherit property is used, then the defaults are taken from it instead. Examples: inheritfor a font that gets used before the enclosing control is created, e.g. if the control gets the font passed as parameter for its constructor, or if the control is not derived from wxWindow. Defines a wxImageList. The imagelist property element is a "composite" element: unlike majority of properties, it doesn't have text value but contains several child elements instead. These children are handled similarly to object properties and can be one of the following "sub-properties": Example: This section describes support wxWindow-derived classes in XRC format. The following properties are always (unless stated otherwise in control-specific docs) available for windows objects. They are omitted from properties lists below. All of these properties are optional. This section lists all controls supported by default. For each control, its control-specific properties are listed. If the control can have child objects, it is documented there too; unless said otherwise, XRC elements for these controls cannot have children. Notice that wxAUI support in XRC is available in wxWidgets 3.1.1 and later only and you need to explicitly register its handler using to use it. A wxAuiManager can have one or more child objects of the wxAuiPaneInfo class. wxAuiPaneInfo objects have the following properties: A wxAuiNotebook can have one or more child objects of the notebookpage pseudo-class. notebookpage objects have the following properties: Each notebookpage must have exactly one non-toplevel window as its child. Example: Building an XRC for wxAuiToolBar is quite similar to wxToolBar. The only significant differences are: Refer to the section wxToolBar for more details. If both value and selection are specified and selection is not -1, then selection takes precedence. A wxBitmapComboBox can have one or more child objects of the ownerdrawnitem pseudo-class. ownerdrawnitem objects have the following properties: Example: No additional properties. The <item> elements have listbox items' labels as their text values. They can also have optional checked XML attribute – if set to "1", the value is initially checked. Example: Example: Additionally, a choicebook can have one or more child objects of the choicebookpage pseudo-class (similarly to wxNotebook and its notebookpage). choicebookpage objects have the following properties: Each choicebookpage has exactly one non-toplevel window as its child. The wxCommandLinkButton contains a main title-like label and an optional note for longer description. The main label and the note can be concatenated into a single string using a new line character between them (notice that the note part can have more new lines in it). wxCollapsiblePane may contain single optional child object of the panewindow pseudo-class type. panewindow itself must contain exactly one child that is a sizer or a non-toplevel window object. If both value and selection are specified and selection is not -1, then selection takes precedence. Example: No additional properties. No additional properties. No additional properties. wxDialog may have optional children: either exactly one sizer child or any number of non-toplevel window objects. If sizer child is used, it sets size hints too. Example: wxFrame may have optional children: either exactly one sizer child or any number of non-toplevel window objects. If sizer child is used, it sets size hints too. No additional properties. At most one of url and htmlcode properties may be specified, they are mutually exclusive. If neither is set, the window is initialized to show empty page. Example: Example: Additionally, a listbook can have one or more child objects of the listbookpage pseudo-class (similarly to wxNotebook and its notebookpage). listbookpage objects have the following properties: Each listbookpage has exactly one non-toplevel window as its child. A list control can have optional child objects of the listitem class. Report mode list controls (i.e. created with wxLC_REPORT style) can in addition have optional listcol child objects. The listcol class can only be used for wxListCtrl children. It can have the following properties (all of them optional): The columns are appended to the control in order of their appearance and may be referenced by 0-based index in the col attributes of subsequent listitem objects. The listitem is a child object for the class wxListCtrl. It can have the following properties (all of them optional): Notice that the item position can't be specified here, the items are appended to the list control in order of their appearance. wxMDIParentFrame supports the same properties that wxFrame does. wxMDIParentFrame may have optional children. When used, the child objects must be of wxMDIChildFrame type. wxMDIChildFrame supports the same properties that wxFrame and wxMDIParentFrame do. wxMDIChildFrame can only be used as immediate child of wxMDIParentFrame. wxMDIChildFrame may have optional children: either exactly one sizer child or any number of non-toplevel window objects. If sizer child is used, it sets size hints too. Note that unlike most controls, wxMenu does not have Standard Properties, with the exception of style. A menu object can have one or more child objects of the wxMenuItem or wxMenu classes or break or separator pseudo-classes. The separator pseudo-class is used to insert separators into the menu and has neither properties nor children. Likewise, break inserts a break (see wxMenu::Break()). wxMenuItem objects support the following properties: Example: Note that unlike most controls, wxMenuBar does not have Standard Properties, with the exception of style. A menubar can have one or more child objects of the wxMenu class. A notebook can have one or more child objects of the notebookpage pseudo-class. notebookpage objects have the following properties: Each notebookpage has exactly one non-toplevel window as its child. Example: wxOwnerDrawnComboBox has the same properties as wxComboBox, plus the following additional properties: No additional properties. wxPanel may have optional children: either exactly one sizer child or any number of non-toplevel window objects. A sheet dialog can have one or more child objects of the propertysheetpage pseudo-class (similarly to wxNotebook and its notebookpage). propertysheetpage objects have the following properties: Each propertysheetpage has exactly one non-toplevel window as its child. The <item> elements have radio buttons' labels as their text values. They can also have some optional XML attributes (not properties!): Example: A wxRibbonBar may have wxRibbonPage child objects. The page pseudo-class may be used instead of wxRibbonPage when used as wxRibbonBar children. Example: Notice that wxRibbonBar support in XRC is available in wxWidgets 2.9.5 and later only and you need to explicitly register its handler using to use it. No additional properties. wxRibbonButtonBar can have child objects of the button pseudo-class. button objects have the following properties: No additional properties. Objects of this type must be subclassed with the subclass attribute. No additional properties. wxRibbonGallery can have child objects of the item pseudo-class. item objects have the following properties: A wxRibbonPage may have children of any type derived from wxRibbonControl. Most commontly, wxRibbonPanel is used. As a special case, the panel pseudo-class may be used instead of wxRibbonPanel when used as wxRibbonPage children. A wxRibbonPanel may have children of any type derived from wxRibbonControl or a single wxSizer child with non-ribbon windows in it. Notice that wxRichTextCtrl support in XRC is available in wxWidgets 2.9.5 and later only and you need to explicitly register its handler using to use it. wxScrolledWindow may have optional children: either exactly one sizer child or any number of non-toplevel window objects. If sizer child is used, wxSizer::FitInside() is used (instead of wxSizer::Fit() as usual) and so the children don't determine scrolled window's minimal size, they only affect virtual size. Usually, both scrollrate and either size or minsize on containing sizer item should be used in this case. wxSimpleHtmlListBox has same properties as wxListBox. The only difference is that the text contained in <item> elements is HTML markup. Note that the markup has to be escaped: (X)HTML markup elements cannot be included directly: wxSimplebook is similar to wxNotebook but simpler: as it doesn't show any page headers, it doesn't use neither image list nor individual page bitmaps and while it still accepts page labels, they are optional as they are not shown to the user neither. So simplebookpage child elements, that must occur inside this object, only have the following properties: choicebookpage objects have the following properties: As with all the other book page elements, each simplebookpage must have exactly one non-toplevel window as its child. wxSpinCtrl supports the same properties as wxSpinButton and, since wxWidgets 2.9.5, another one: wxSpinCtrlDouble supports the same properties as wxSpinButton but value, min and max are all of type float instead of int. There is one additional property: This handler was added in wxWidgets 3.1.1. wxSplitterWindow must have one or two children that are non-toplevel window objects. If there's only one child, it is used as wxSplitterWindow's only visible child. If there are two children, the first one is used for left/top child and the second one for right/bottom child window. No additional properties. No additional properties. A toolbar can have one or more child objects of any wxControl-derived class or one of three pseudo-classes: separator, space or tool. The separator pseudo-class is used to insert separators into the toolbar and has neither properties nor children. Similarly, the space pseudo-class is used for stretchable spaces (see wxToolBar::AddStretchableSpace(), new since wxWidgets 2.9.1). The tool pseudo-class objects specify toolbar buttons and have the following properties: The presence of a dropdown property indicates that the tool is of type wxITEM_DROPDOWN. It must be either empty or contain exactly one wxMenu child object defining the drop-down button associated menu. Notice that radio, toggle and dropdown are mutually exclusive. Children that are not tool, space or separator must be instances of classes derived from wxControl and are added to the toolbar using wxToolBar::AddControl(). Example: A toolbook can have one or more child objects of the toolbookpage pseudo-class (similarly to wxNotebook and its notebookpage). toolbookpage objects have the following properties: Each toolbookpage has exactly one non-toplevel window as its child. A treebook can have one or more child objects of the treebookpage pseudo-class (similarly to wxNotebook and its notebookpage). treebookpage objects have the following properties: Each treebookpage has exactly one non-toplevel window as its child. The tree of labels is not described using nested treebookpage objects, but using the depth property. Toplevel pages have depth 0, their child pages have depth 1 and so on. A treebookpage's label is inserted as child of the latest preceding page with depth equal to depth-1. For example, this XRC markup: corresponds to the following tree of labels: A wizard object can have one or more child objects of the wxWizardPage or wxWizardPageSimple classes. They both support the following properties (in addition to Standard Properties): wxWizardPage and wxWizardPageSimple nodes may have optional children: either exactly one sizer child or any number of non-toplevel window objects. wxWizardPageSimple pages are automatically chained together; wxWizardPage pages transitions must be handled programmatically. Sizers are handled slightly differently in XRC resources than they are in wxWindow hierarchy. wxWindow's sizers hierarchy is parallel to the wxWindow children hierarchy: child windows are children of their parent window and the sizer (or sizers) form separate hierarchy attached to the window with wxWindow::SetSizer(). In XRC, the two hierarchies are merged together: sizers are children of other sizers or windows and they can contain child window objects. If a sizer is child of a window object in the resource, it must be the only child and it will be attached to the parent with wxWindow::SetSizer(). Additionally, if the window doesn't have its size explicitly set, wxSizer::Fit() is used to resize the window. If the parent window is toplevel window, wxSizer::SetSizeHints() is called to set its hints. A sizer object can have one or more child objects of one of two pseudo-classes: sizeritem or spacer (see wxStdDialogButtonSizer for an exception). The former specifies an element (another sizer or a window) to include in the sizer, the latter adds empty space to the sizer. sizeritem objects have exactly one child object: either another sizer object, or a window object. spacer objects don't have any children, but they have one property: Both sizeritem and spacer objects can have any of the following properties: Example of sizers XRC code: The sizer classes that can be used are listed below, together with their class-specific properties. All classes except wxStdDialogButtonSizer support the following properties: Unlike other sizers, wxStdDialogButtonSizer has neither sizeritem nor spacer children. Instead, it has one or more children of the button pseudo-class. button objects have no properties and they must always have exactly one child of the wxButton class or a class derived from wxButton. Example: In addition to describing UI elements, XRC files can contain non-windows objects such as bitmaps or icons. This is a concession to Windows developers used to storing them in Win32 resources. Note that unlike Win32 resources, bitmaps included in XRC files are not embedded in the XRC file itself. XRC file only contains a reference to another file with bitmap data. Bitmaps are stored in <object> element with class set to wxBitmap. Such bitmaps can then be loaded using wxXmlResource::LoadBitmap(). The content of the element is exactly same as in the case of bitmap properties, except that toplevel <object> is used. For example, instead of: toplevel wxBitmap resources would look like: wxIcon resources are identical to wxBitmap ones, except that the class is wxIcon. It is possible to conditionally process parts of XRC files on some platforms only and ignore them on other platforms. Any element in XRC file, be it toplevel or arbitrarily nested one, can have the platform attribute. When used, platform contains |-separated list of platforms that this element should be processed on. It is filtered out and ignored on any other platforms. Possible elemental values are: Examples: Usually you won't care what value the XRCID macro returns for the ID of an object. Sometimes though it is convenient to have a range of IDs that are guaranteed to be consecutive. An example of this would be connecting a group of similar controls to the same event handler. The following XRC fragment 'declares' an ID range called foo and another called bar; each with some items. For the range foo, no size or start parameters were given, so the size will be calculated from the number of range items, and IDs allocated by wxWindow::NewControlId (so they'll be negative). Range bar asked for a size of 30, so this will be its minimum size: should it have more items, the range will automatically expand to fit them. It specified a start ID of 10000, so XRCID("bar[0]") will be 10000, XRCID("bar[1]") 10001 etc. Note that if you choose to supply a start value it must be positive, and it's your responsibility to avoid clashes. For every ID range, the first item can be referenced either as rangename[0] or rangename[start]. Similarly rangename[end] is the last item. Using [start] and [end] is more descriptive in e.g. a Bind() event range or a for loop, and they don't have to be altered whenever the number of items changes. Whether a range has positive or negative IDs, [start] is always a smaller number than [end]; so code like this works as expected: ID ranges can be seen in action in the objref dialog section of the XRC Sample. The XRC format is designed to be extensible and allows specifying and loading custom controls. The three available mechanisms are described in the rest of this section in the order of increasing complexity. The simplest way to add custom controls is to set the subclass attribute of <object> element: In that case, wxXmlResource will create an instance of the specified subclass ( MyTextCtrl in the example above) instead of the class ( wxTextCtrl above) when loading the resource. However, the rest of the object's loading (calling its Create() method, setting its properties, loading any children etc.) will proceed in exactly the same way as it would without subclass attribute. In other words, this approach is only sufficient when the custom class is just a small modification (e.g. overridden methods or customized events handling) of an already supported classes. The subclass must satisfy a number of requirements: classattribute. class'Create() method (this is because XRC will call Create() of class, not subclass). In other words, creation of the control must not be customized. A more flexible solution is to put a placeholder in the XRC file and replace it with custom control after the resource is loaded. This is done by using the unknown pseudo-class: The placeholder is inserted as dummy wxPanel that will hold custom control in it. At runtime, after the resource is loaded and a window created from it (using e.g. wxXmlResource::LoadDialog()), use code must call wxXmlResource::AttachUnknownControl() to insert the desired control into placeholder container. This method makes it possible to insert controls that are not known to XRC at all, but it's also impossible to configure the control in XRC description in any way. The only properties that can be specified are the standard window properties. unknownclass cannot be combined with subclassattribute, they are mutually exclusive. Finally, XRC allows adding completely new classes in addition to the ones listed in this document. A class for which wxXmlResourceHandler is implemented can be used as first-class object in XRC simply by passing class name as the value of class attribute: The only requirements on the class are that Child elements of <object> are handled by the custom handler and there are no limitations on them imposed by XRC format. This is the only mechanism that works for toplevel objects – custom controls are accessible using the type-unsafe wxXmlResource::LoadObject() method. In addition to plain XRC files, wxXmlResource supports (if wxFileSystem support is compiled in) compressed XRC resources. Compressed resources have either .zip or .xrs extension and are simply ZIP files that contain arbitrary number of XRC files and their dependencies (bitmaps, icons etc.). This section describes differences in older revisions of XRC format (i.e. files with older values of version attribute of <resource>). Version 2.5.3.0 introduced C-like handling of "\\" in text. In older versions, "\n", "\t" and "\r" escape sequences were replaced with respective characters in the same matter it's done in C, but "\\" was left intact instead of being replaced with single "\", as one would expect. Starting with 2.5.3.0, all of them are handled in C-like manner. Prior to version 2.3.0.1, "$" was used for accelerators instead of "_" or "&". For example, was used in place of current version's (or "&File").
https://docs.wxwidgets.org/trunk/overview_xrcformat.html
CC-MAIN-2019-47
refinedweb
4,617
54.73
Thrift::Parser::Type - Base clase for OO types Returns a reference to the Thrift::IDL::Type that informed the creation of this class. Returns a reference to the Thrift::IDL object that this was formed from. Returns the simple name of the type. Returns the internal representation of the value of this object. Boolean; is a value defined for this object. my $object = $subclass->compose(..); Returns a new object in the class/subclass namespace with the value given. Generally you may pass a simple perl variable or another object in this same class to create the new object. Throws Thrift::Parser::InvalidArgument or Thrift::Parser::InvalidTypedValue. See subclass documentation for more specifics. Used internally. Overridden for complex type classes that require the IDL to inform their creation and schema (mainly Thrift::Parser::Type::Container). my $object = $class->read($protocol); Implemented in subclasses, this will create new objects from a Thrift::Protocol. $object->write($protocol) Given a Thrift::Protocol object, will write out this type to the buffer. Overridden in all most subclasses. if ($object_a->equal_to($object_b)) { ... } Performs an equality test between two blessed objects. You may also call with a non-blessed reference (a perl scalar, for instance), which will be passed to compose() to be formed into a proper object before the comparison is run. Throws Thrift::Parser::InvalidArgument. Used internally for the "equal_to" call. Implemented by the specific type. Returns the Thrift type id. Overridden in subclasses. Throws Thrift::Parser::Exception. Returns a POD formatted string that documents a derived class.>
http://search.cpan.org/~ewaters/Thrift-Parser-0.06/lib/Thrift/Parser/Type.pm
CC-MAIN-2017-13
refinedweb
253
52.05
I have been staring at this previous years exam question for hours now and I just can't get it working correctly. If anyone can help it would be greatly appreciated as I have an exam on Friday! Code java: import java.util.*; public class Huffman{ public static void main(String[] args) { Scanner in = new Scanner(System.in); System.out.print("Enter your sentence: "); String sentence = in.nextLine(); String binaryString=""; //this stores the string of binary code for(int i=0; i < sentence.length(); i++) { int decimalValue = (int)sentence.charAt(i); //convert to decimal String binaryValue = Integer.toBinaryString(decimalValue); //convert to binary for(int j=7;j>binaryValue.length();j--) { binaryString+="0"; //this loop adds in those pesky leading zeroes } binaryString += binaryValue+" "; //add to the string of binary } System.out.println(binaryString); //print out the binary int[] array = new int[256]; //an array to store all the frequencies for(int i=0; i < sentence.length(); i++) { //go through the sentence array[(int)sentence.charAt(i)]++; //increment the appropriate frequencies } PriorityQueue < Tree > PQ = new PriorityQueue < Tree >() ; //make a priority queue to hold the forest of trees for(int i=0; i<array.length; i++) { //go through frequency array if(array[i]>0) { //print out non-zero frequencies - cast to a char System.out.println("'"+(char)i+"' appeared "+array[i]+((array[i] == 1) ? " time" : " times")); //FILL THIS IN: //MAKE THE FOREST OF TREES AND ADD THEM TO THE PQ //create a new Tree //set the cumulative frequency of that Tree //insert the letter as the root node //add the Tree into the PQ } } while(PQ.size()>1) { //FILL THIS IN: //IMPLEMENT THE HUFFMAN ALGORITHM //when you're making the new combined tree, don't forget to assign a default root node (or else you'll get a null pointer exception) //if you like, to check if everything is working so far, try printing out the letter of the roots of the two trees you're combining } Tree HuffmanTree = PQ.poll(); //now there's only one tree left - get its codes //FILL THIS IN: //get all the codes for the letters and print them out //call the getCode() method on the HuffmanTree Tree object for each letter in the sentence //print out all the info } public class Tree implements Comparable<Tree> { public Node root; // first node of tree public int frequency=0; public Tree() // constructor { root = null; } public int compareTo(Tree object) { // if(frequency-object.frequency>0) { //compare the cumulative frequencies of the tree return 1; } else if(frequency-object.frequency<0) { return -1; //return 1 or -1 depending on whether these frequencies are bigger or smaller } else { return 0; //return 0 if they're the same } } String path="error"; public String getCode(char letter) { //FILL THIS IN: //How do you get the code for the letter? Maybe try a traversal of the tree //Track the path along the way and store the current path when you arrive at the right letter return path; //return the path that results } } class Node { public char letter; //stores letter public Node leftChild; // this node's left child public Node rightChild; // this node's right child } // end class Node } I have included the entire problem! Help with any part would be greatly appreciated!
http://www.javaprogrammingforums.com/%20algorithms-recursion/24084-huffman-algorithm-help-printingthethread.html
CC-MAIN-2015-48
refinedweb
536
50.16
I broke down my problem to the following example program xy8_block = [ {'pulse': {}}, ] class Dummy: def __init__(self, block=list(xy8_block)): self._block = block dumdum = Dummy() dumdum._block[0]['pulse']['test'] = 0 print(xy8_block) xy8_block dumdum._block xy8_block Instead of: def __init__(self, block=list(xy8_block)): Do: from copy import deepcopy def __init__(self, block=deepcopy(xy8_block)): When you do list(my_list), you do a shallow copy of your list, which means its elements are still copied by reference. In other words, as you correctly mentioned, xy8_block and dumdum._block do have different memory addresses. However, if you check memory addresses for xy8_block[0] and dumdum._block[0], you will see that they are the same. By using deepcopy, you copy the list and its elements' values, not their references. EDIT As wisely noted by @FMc, this will still make all instances' _block attributes to point to the same object, since the list that results from deepcopy is created in the method's definition, not in its execution. So here's my suggestion: from copy import deepcopy class Dummy: def __init__(self, block=None): self._block = block or deepcopy(xy8_block)
https://codedump.io/share/gmf7S2mkHKqN/1/python-list-at-module-level
CC-MAIN-2018-26
refinedweb
189
66.23
Class Inheritance Inheritance is a paradigm in programming languages that allows objects to share traits similarly to how a human child inherits a trait from a parent. If a father has green eyes, his son might inherit that trait. If a daughter has blonde hair, one of her parents or grandparents contributed the gene, allowing her to inherit the blonde hair. In a classical programming language, inheritance allows a family of objects to share behavior and properties between related classes of objects. So far you’ve learned about the structure of a single class. Being a classical language, Dart supports single inheritance on a class-by-class basis. Single inheritance means that a class can directly inherit from only one class at a time. You’ve already been using implicit inheritance. If no extended class is defined, all classes in Dart automatically extend the class Object. Dart implements inheritance when the extends keyword is placed after the class name declaration followed by a named identifier of the class which is to be inherited from. Over the next few sections, you’re going to define the taxonomy of a fleet of flying vehicles. You’ll start with the most generic and work your way to the most specific. Let’s create a base class of type Vehicle (Example 4.14). EXAMPLE 4.14 class Vehicle extends Object { void turnOn(){ print('--Turns On--'); } void turnOff(){ print('--Turns Off--'); } } So far this looks like the classes you’ve already been working with. You’ll notice that you’ve declared two methods, turnOn() and turnoff(). These actions are a common trait of almost any vehicle, so they go into the base class. You will notice that the constructor method is left off. Dart will implicitly provide a constructor with no parameters. Let’s evolve the Vehicle class by using inheritance and by defining a new class (Example 4.15). EXAMPLE 4.15 class Aircraft extends Vehicle { String name = "Aircraft"; String fuelType; String propulsion; int maxspeed; void goForward() { print('--$name moves forward--'); } } By using the keyword extends, you’re declaring that a class of Aircraft should acquire all the fields and default behavior provided in class Vehicle. You then define properties that would be common traits of all Aircraft. Let’s take a look at an implementation in Example 4.16. EXAMPLE 4.16 main() { Aircraft craft = new Aircraft(); //uses the implicit class constructor craft.turnOn(); //inherits from Vehicle craft.goForward(); //defined only in Aircraft craft.turnOff(); //inherits from Vehicle } //Output: //--Turns On-- //--Aircraft moves forward-- //--Turns Off-- Next, let’s assume that when speaking, a person wouldn’t say, “I’m going to go for a ride in my aircraft.” A listener might infer some of the behaviors associated with that statement, but it’s still vague. An aircraft can be many things. Instead, they would refer to something more concrete, such as a blimp or a plane. You don’t have enough details in the Aircraft class in Example 4.16, so you’re going to declare Aircraft as an abstract class and build out a more concrete implementation with a few different types of aircraft. Abstract Classes An abstract class in Dart is a class used to share behavior among descendent classes. Abstract classes cannot be directly instantiated. You’re going to prefix the existing class declaration using the abstract keyword. abstract class Aircraft extends Vehicle { ... ... } In the implementation from Example 4.16, on the line with new Aircraft(), there is now an error: 'Abstract classes cannot be created with a 'new' expression'. To fix this, let’s create some concrete implementations of Aircraft: EXAMPLE 4.17 class Blimp extends Aircraft { Blimp(int maxspeed) //explicit class constructor { this.maxspeed = maxspeed; //assigns values to superclass fields this.name = "Blimp"; //assigns values to superclass fields } } main() { Aircraft craft = new Blimp(73); craft.turnOn(); craft.goForward(); craft.turnOff(); } //Ouput: //--Turns On-- //--Blimp moves forward-- //--Turns Off— Example 4.17 creates a concrete class Blimp. Class Blimp inherits from Aircraft, which inherits from the Vehicle class. Blimp has an explicit class constructor that has a parameter of maxspeed. The class Blimp has a namespace that now includes the inherited fields from each of the parent classes.
http://www.peachpit.com/articles/article.aspx?p=2468332&seqNum=6
CC-MAIN-2018-39
refinedweb
700
56.96
How to write correctly typed React components with TypeScript Brian Neville-O'Neill Originally published at blog.logrocket.com on ・7 min read Written by Piero Borrelli✏️ If you are a software developer — especially if you write JavaScript — then you have probably heard of TypeScript. Hundreds of courses, forum discussions, and talks have been created regarding this technology, and interest is still growing. TypeScript is a strict, typed superset of JavaScript developed by Microsoft. It basically starts from the usual JavaScript codebase we all know and compiles to JavaScript files, while adding some very cool features along the way. JavaScript is a dynamically typed language, and love it or hate it, it can be a very dangerous behavior. In fact, it can cause subtle issues in our program when some entities are not used as intended. With TypeScript, we can avoid these kinds of errors by introducing static types. This mechanism will save us a lot of time in debugging since any type error will prevent you from running your code. And also note that usage of types is completely optional; you will be able to use it discretely whenever you think it’s necessary in your code. With TypeScript, you will also able to use the most recent ES6 and ES7 features with no need to worry about browser support. The compiler will automatically convert them to ES5, leaving you space to focus on more important parts of your project and saving time spent testing browser compatibility. Integrating TypeScript with other technologies As you may have intuited, TypeScript can be a true game-changer for your project, especially if you believe it will grow in size and you want to have the best options for managing it. At this point, you may be wondering how you can integrate TypeScript with another technology you’re using. In this case, the language itself comes in handy by providing support for many frameworks. In this guide, we are going to check out how this amazing language can be integrated into the most popular frontend framework out there: React. The React case TypeScript is at its best position right now when it comes to using it for React applications. You will be able to use it to make your products more manageable, readable, and stable. The integration has become extremely easy, and in this case, my advice for you is to set up your favorite environment in order to try out the examples proposed in this article. Once everything is set up, we can start exploring our new TypeScript + React integration. Typed functional components Functional components are one of the best-loved React features. They provide an extremely easy and functional way to render our data. These types of components can be defined in TypeScript like so: import * as React from 'react'; // to make JSX compile const sayHello: React.FunctionComponent<{ name: string; }> = (props) => { return <h1>Hello {props.name} !</h1>; }; export default sayHello; Here we are using the generic type provided by the official typings — React.FunctionComponent, or its alias React.FC — while defining the expected structure of our props. In our case, we are expecting a simple prop of type string that will be used to render a passed-in name to the screen. We can also define the props mentioned above in another way: by defining an interface using TypeScript, specifying the type for each one of them. interface Props { name: string }; const sayHello: React.FunctionComponent<Props> = (props) => { return <h1>{props.name}</h1>; }; Please also note that using React.FunctionComponent allows TypeScript to understand the context of a React component and augments our custom props with the default React-provided props like children. Typed class components The “old way” of defining components in React is by defining them as classes. In this case, we can not only manage props, but also the state (even if things have changed since the latest release of React 16). These types of components need to be extended from the base React.Component class. TypeScript enhances this class with generics, passing props and state. So, similar to what we described above, class components can be described using TypeScript like so: import * as React from 'react'; type Props {} interface State { seconds: number; }; export default class Timer extends React.Component<Props, State> { state: State = { seconds: 0 }; increment = () => { this.setState({ seconds: (this.state.seconds + 1) }); }; decrement = () => { this.setState({ seconds: (this.state.seconds - 1) }); }; render () { return ( <div> <p>The current time is {this.state.seconds}</p> </div> ); } } Constructor When it comes to the constructor function, you have to pass your props (even if there are none), and TypeScript will require you to pass them to the super constructor function. However, when performing your super call in TypeScript’s strict mode, you will get an error if you don’t provide any type specifications. That’s because a new class will be created with a new constructor, and TypeScript won’t know what params to expect. Therefore, TypeScript will infer them to be of type any — and implicit any in strict mode is not allowed. export class Sample extends Component<MyProps> { constructor(props) { // ️doesn't work in strict mode super(props) } } So we need to be explicit on the type of our props: export class Sample extends Component<MyProps> { constructor(props: MyProps) { super(props) } } Default props Default properties will allow you to specify the default values for your props. We can see an example here: import * as React from 'react'; interface AlertMessageProps { message: string; } export default class AlertMessage extends React.Component<Props> { static defaultProps: AlertMessageProps = { message: "Hello there" }; render () { return <h1>{this.props.message}</h1>; } } Typing context Typically, in a React applications, data is passed down to every component via props in a parent-to-children approach. However, it can sometimes become problematic for certain types of information (user preferences, general settings, etc.). The Context API provides an approach to avoid the need to pass data down to every level of a tree. Let’s check out an example of this using both JavaScript and TypeScript: Javascript const ThemeContext = React.createContext('light'); class App extends React.Component { render() { // Using a Provider to pass the current theme to the tree below. return ( <ThemeContext.Provider <Toolbar /> </ThemeContext.Provider> ); } } // Middle component doesn't need to pass our data to its children anymore function Toolbar(props) { return ( <div> <ThemedButton /> </div> ); } // React here will find the closest theme Provider above and use its value("dark") class ThemedButton extends React.Component { // contextType to read the current theme context static contextType = ThemeContext; render() { return <Button theme={this.context} />; } } TypeScript Using this feature with TypeScript simply means typing each Context instance: import React from 'react'; // TypeScript will infere the type of this properties automatically export const AppContext = React.createContext({ lang: 'en', theme: 'dark' }); We will also see useful error messages available in this case: const App = () => { return <AppContext.Provider value={ { lang: 'de', // Missing properties ERROR } }> <Header/> </AppContext.Provider> } Typing custom Hooks The ability for developers to build their custom Hooks is really one of the React’s killer features. A custom Hook will allow us to combine the core of React Hooks into our own function and extract its logic. This Hook will be easily shared and imported like any other JavaScript function, and it will behave the same as the core React Hooks, following their usual rules. To show you a typed custom Hook, I have taken the basic example from the React docs and added TypeScript features: import React, { useState, useEffect } from 'react'; type Hook = (friendID: number) => boolean; // define a status since handleStatusChange can't be inferred automatically interface IStatus { id: number; isOnline: boolean; } // take a number as input parameter const useFriendStatus: Hook = (friendID) => { // types here are automatically inferred const [isOnline, setIsOnline] = useState<boolean | null>(null); function handleStatusChange(status: IStatus) { setIsOnline(status.isOnline); } useEffect(() => { ChatAPI.subscribeToFriendStatus(props.friend.id, handleStatusChange); return () => { ChatAPI.unsubscribeFromFriendStatus(props.friend.id, handleStatusChange); }; }); return isOnline; } Useful resources Here I’ve compiled for you a list of helpful resources you can consult if you decided to start using TypeScript with React: - Official TypeScript docs - Composing React components using TypeScript - The latest React updates - A beginner guide to TypeScript Conclusion I strongly believe that TypeScript will be around for a while. Thousand of developers are learning how to use it and integrating it into their projects to enhance them. In our case, we learned how this language can be a great companion to write better, more manageable, easier-to-read React apps! For more articles like this, please follow my Twitter. How to write correctly typed React components with TypeScript appeared first on LogRocket Blog. Favorite Front-end/UI developer interview questions? Help me compile your favorite interview questions. You can use React.FCinstead of React.FunctionComponent. It is short synonym)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/bnevilleoneill/how-to-write-correctly-typed-react-components-with-typescript-32p1
CC-MAIN-2020-05
refinedweb
1,455
53.71
Web Component Management We are all using web components in our applications today, no matter what we call them. But how should we load them? Join the DZone community and get the full member experience.Join For Free Within the last 6 months, it felt like a good time to get on board properly with Web Components so I’ve been toying around with bits and pieces. I’ve been thinking about the ecosystem as a whole and I’ve also recently been creating a few elements. One thing that is really unclear to me is that there is no defined best practice for how to include styles and templates (HTML) with your custom element which means as a consumer of custom elements you are at the mercy of what the component developer thinks is best. Looking at early guidance, there are two ways: - Use <link rel="import">that is only supported by Chrome but allows you to bundle CSS, <template>s and other assets needed for element. - Go it alone and figure something out. When I was making the <air-horner> element I did ship a <link rel=import> file because it seemed like that was the only way to get it working with webcomponents.org but all it does is load the single JS file that encapsulates everything. Instead, I chose to have a single JS file ( <script src="air-horner.js"></script>) that you include in your page that defines and registers the custom element. The script file encapsulates the element logic, definition, and styling. I chose this route because I made one decision early on: By including my component, the consumer of the custom element should not have uncontrolled blocking requests emanate from my element. If something will block the render, then the consumer has decided to do it. This means I don’t have any external style sheets and I don’t have any external JS either. I don’t include a <link rel=stylesheet> in the template and I also don’t dynamically fetch a remote file. This constraint means that I have to think of a way to embed both the template used in my shadowDOM and they styles too without polluting the global scope. I chose to expose a template() function in my custom element that will create and cache a dynamically-created <template> element. This template element contains a <style> element and a root <div> that the contains the inline HTML of the element structure. get template() { if(this._template) return this._template; else { this._template = document.createElement('template'); let styles = document.createElement('style'); styles.innerHTML = `:host {} /* Lots of CSS*/`; let body = document.createElement('div'); body. <div class='inner'> <div class='center'></div> </div> </div>`; this._template.content.appendChild(styles); this._template.content.appendChild(body); return this._template; } } When the element is instantiated I stamp out the shadowDOM and then go to work on attaching functionality to the element DOM. const root = this.attachShadow({mode:'open'}); root.appendChild(this.template.content.cloneNode(true)); // Now attach handlers... This works well for the very first version of the element, but it is not entirely extensible. To let the user style the element I have to figure out a way to allow them to inject their own styles and maybe even their own custom HTML, or I can expose extension points via CSS variables. The latter method is quite easy but it pollutes CSS variable namespace and makes the “API” complex to document and hard to discover for developers, the former method, I have no good idea about how to do that in a consistent way. I really don’t want to see an ecosystem where we have to have complex bundling and deployment scripts just so I can drop some fancy elements on my page but I would like to see some commonality about how we include elements in our sites and apps. I don’t have answers at the moment, I only have questions: - What is a good solution for deploying Web Components? Should we push harder on getting support for <link rel=import>, or wait for better module loading via <script type=module>link? - Do we need to roughly agree on a model for encapsulating and loading templates and styles, or is it too early? - Given that HTML imports seem dead, is inlining the template in the element a reasonable solution? Are there better ways whilst not polluting the global scope? - Is the constraint of not making requests from my component the correct goal? If so are there better solutions to hosting all the required assets? I would love to get your thoughts. Update: Ali Afshar asked why I am using a template element when it is not in the DOM. It’s a good question, I don’t believe I needed to, but it was a nice way to group multiple elements in something that wasn’t a div. Published at DZone with permission of Paul Kinlan, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/web-component-management?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev
CC-MAIN-2022-33
refinedweb
842
63.09
Java and Python are two of the most popular programming languages as of 2019. Both are very powerful. However, these two languages are very different. Thus, most beginners and newbies get confused when it comes to making the choice between these two as their first programming language. Which one should be your first programming language? In this post, I will discuss which programming language is better to learn first and why. Well, let?s make it clear: in my opinion, Java would be a better first programming language than Python. Let?s discuss the reasons. Note: Here I will not discuss which language is better or which one is more powerful but only explain which one beginners without any prior programming experience should opt to learn first. Python Indentation vs Java Curly Braces The two languages follow a totally different style of separating the blocks of code. While Java uses curly braces ({}) like many other popular programming languages like C++, C#, and JavaScript, Python uses indentation. In Java: // Java versionpublic class MyFristDate { private String firstName = “Bobby”;public void sayHello() { System.out.println(“Hello, ” + firstName); }}myClassInstance = new MyFristDate();myClassInstance.sayHello(); In Python: # Python versionclass MyFristDate: def __init__(self, first_name): self.first_name = “Bobby” def say_hello(self): print(“Hello, ” + first_name)my_class_instance = MyFristDate()my_class_instance.say_hello() Many developers love Python indentation (disclaimer: I love it as well). However, code with curly braces would be better for beginners who are completely new in the field as these braces provide a much clearer view in a glimpse and also completely differentiate the blocks of code, which is easily understandable for newbies. Furthermore, it is also difficult for beginners who are just starting out, to work with indentations because if they mistakenly put the wrong space character, the whole program logic will go wrong. def say_hello(): print(“Bobby”) # I use tab to make indentation here print(“Creativity”) # I use space to make indentation here# Do you see the differences between tabs and spaces? Another common mistake of beginners is to write functions with hundreds of lines of code. Indentation would be hell with such mistakes as it?s too unreadable! (Obviously, you should not write hundreds of code lines for a function). Static vs Dynamic Type Nature The most important reason is that Java is a static-type programming language and Python is dynamic-type. With a static-type language, everything is declared explicitly while with a dynamic-type language, types are not. Sure, it makes the code longer and looks like a boilerplate. However, explicitness really helps beginners a lot! As Java code is very strict and explicit, beginners are completely aware of what is happening in the code. See the example of how you define a variable in Java below: int number = 10;char character = ‘a’; Here, even beginners will quickly understand that ?number? variable is of type int (short for integer) and ?character? variable is of type char (short for the character). As Python is a dynamic-type language that doesn?t include type declarations, everything is implicit or hidden. Thus, beginners are not well-aware which type of variable they have declared. It becomes troublesome later when something unexpected happens. See the example of how you define a variable in Python below: number = 10;character = ‘a’; The code is sure shorter but we just don?t see the type of the declared variables. Let?s look at how ?class? and ?function? are defined in both languages. // Java versionpublic class MyFristDate { private String firstName = “Bobby”; public void sayHello() { System.out.println(“Hello, ” + firstName); }}myClassInstance = new MyFristDate();myClassInstance.sayHello(); Above is the Java version. Let?s compare it with the Python version below: # Python versionclass MyFristDate: def __init__(self, first_name): self.first_name = “Bobby” def say_hello(self): print(“Hello, ” + first_name)my_class_instance = MyFristDate()my_class_instance.say_hello() In Java, it?s clear that we have a public class, which is accessible to a class in any package. In addition, ?firstName? variable is of type String and has the private scope. And finally, the function ?sayHello? is public with the return-type of void. As compared to Python, all of this information is hidden. Now, can you guess what return type ?say_hello? function has? Most Python beginners won?t be able to answer this, but Java beginners do. In short, it is better to first learn a static-type programming language like Java as compared to dynamic-type, because static-type languages let programmers understand the inner working of how coding is done in general. There are several advantages of Java or static-type programming languages as discussed further in the following. Less Room For Mistakes Java?s static-type nature enforces programmers to make fewer mistakes because it contains strict rules of coding and type-safety system which check everything at compile-time. As Java program does not execute even if there is only a single compile-time error (the error which comes before the program executes), programmers are bound to solve all errors in order to execute the program. Hence, there are fewer chances of runtime errors (the errors which come after the program executes). It?s a totally different story with Python and other dynamic-type languages, in which everything is checked at runtime and thus, the chance of facing runtime errors is much higher. This is important because runtime errors are far more difficult to catch and debug as compared to compile-time errors. With Python, beginners might need to spend more time debugging and fixing issues than with Java. Easy Code Analysis As you already guessed, Java code is very easy to analyze because, again, everything is declared explicitly, so beginners don?t spend much of their time in understanding what this bunch of code is doing in case if they need to analyze their own previously written programs or the code written by teammates, which is common when working in large companies. However, a beginner would have to spend much of his time analyzing the Python code because everything is hidden and it?s really hard to grasp what each line of code does. In short, Python programmers face difficulties when analyzing the code because of its dynamic nature. Easy Transition It?s very common for professional programmers to know multiple programming languages.. On the other hand, if you choose to learn Python first, you might have some transition issues because of the syntax. JavaScript, PHP, and many other popular programming languages don?t use indentation. In short, the transition from Java to Python or any other programming language is like no brainer, but opposite takes much time. Conclusion Java and Python, both are widely used programming languages, but Java is better to learn first than Python because of reasons below: - Static-type language is more explicit than dynamic-type ones: In Java, everything is declared explicitly (variables, functions, and classes). Hence, beginners are fully aware of what the code is about. Additionally, static-type allows beginners to easily understand the inner working of several programming concepts. However, in Python, everything is implicit that makes beginners sometimes unaware of what their code is about and easy to make mistakes unintentionally; - Fewer chances of unexpected runtime errors: Java consists of strict rules and strong type-safety system which enforce programmers to do fewer mistakes, as it checks Java code at compile-time. Therefore, with Java, there are fewer chances of unexpected runtime errors. As compared to Python, which checks code at runtime, developers might face lots of unexpected errors. It?s because everything is shown up at runtime in Python, which also makes it difficult to debug and analyze the code in Python, as compared to Java. - It is easier to make the transition from Java to Python or any other language, but the reverse is a bit difficult because of Python syntax is a bit different than most other popular programming languages.
https://911weknow.com/why-java-should-be-your-first-language-instead-of-python
CC-MAIN-2021-04
refinedweb
1,307
55.95
my program needs to have the user enter a string and have it output backwards, not in reverse order. This is what I have so far, any help would be appreciated. I was trying to use memmove but I think I'm going in the wrong direction. Thanks Carl. #include <stdio.h> #include <string.h> int main(void) { char string [ 80 ]; char *tokenPtr; printf( "enter text\n" ); gets( string ); tokenPtr = strtok( string, "" ""); while ( tokenPtr != NULL ) { printf( "%s\n", tokenPtr ); tokenPtr = strtok( NULL, "" "" ); } printf( "%s%s\n", "the text before memmove is:", string ); printf( "%s%s\n", "the text after memmove is:", memmove( string, &string[5], 10 )); system("PAUSE"); return 0; }
https://www.daniweb.com/programming/software-development/threads/209393/need-to-output-backwards-not-in-reverse
CC-MAIN-2018-34
refinedweb
110
71.85
# Queries in PostgreSQL. Nested Loop So far we've discussed [query execution stages](https://postgrespro.com/blog/pgsql/5969262), [statistics](https://postgrespro.com/blog/pgsql/5969296), and the two basic data access methods: [Sequential scan](https://postgrespro.com/blog/pgsql/5969403) and [Index scan](https://postgrespro.com/blog/pgsql/5969493). The next item on the list is join methods. This article will remind you what logical join types are out there, and then discuss one of three physical join methods, the Nested loop join. Additionally, we will check out the row *memoization* feature introduced in PostgreSQL 14. Joins ----- *Joins* are the primary feature of SQL, the foundation that enables its power and agility. Sets of rows (whether pulled directly from a table or formed as a result of an operation) are always joined together in pairs. There are several *types* of joins. * **Inner join.** An `INNER JOIN` (or usually just `JOIN`) is a subset of two sets that includes all the row pairs from the two original sets that match the *join condition*. A join condition is what ties together columns from the first row set with columns from the second one. All the columns involved comprise what is known as the *join key*. If the join condition is an equality operator, as is often the case, then the join is called an *equijoin*. A *Cartesian product* (or a `CROSS JOIN`) of two sets includes all possible combinations of pairs of rows from both sets. This is a specific case of an inner join with a TRUE condition. * **Outer joins.** The `LEFT OUTER JOIN` (or the `LEFT JOIN`) supplements the result of an inner join with the rows from the left set which didn't have a corresponding pair in the right set (the contents of the missing right set columns are set to NULLs). The `RIGHT JOIN` works identically, except the joining order is reversed. The `FULL JOIN` is the `LEFT JOIN` and the `RIGHT JOIN` combined. It includes rows with missing pairs from both sets, as well as the `INNER JOIN` result. * **Semijoins and antijoins.** The *semijoin* is similar to a regular inner join, but with two key differences. Firstly, it includes only the rows from the first set that have a matching pair in the second set. Secondly, it includes the rows from the first set only once, even if a row happens to have multiple matches in the second set. The *antijoin* includes only those rows from the first set that didn't have a matching pair in the second set. SQL does not offer explicit semijoin or antijoin operations, but some expressions (`EXISTS` and `NOT EXISTS`, for example) can be used to achieve equivalent results. All of the above are logical operations. For example, you can represent an inner join as a Cartesian product that retains only the rows that satisfy the join condition. When it comes to hardware, however, there are ways to perform an inner join much more efficiently. PostgreSQL offers several join *methods*: * Nested loop join * Hash join * Merge join Join methods are algorithms that execute the join operations in SQL. They often come in various flavours tailored to specific join types. For example, an inner join that uses the nested loop mode will be represented in a plan with a Nested Loop node, but a left outer join using the same mode will look like a Nested Loop Left Join node in the plan. Different methods shine in different conditions, and it's the planner's job to select the best one cost-wise. Nested loop join ---------------- The nested loop algorithm is based on two loops: an *inner* loop within an *outer* loop. The outer loop searches through all the rows of the first (outer) set. For every such row, the inner loop searches for matching rows in the second (inner) set. Upon discovering a pair that satisfies the join condition, the node immediately returns it to the parent node, and then resumes scanning. The inner loop cycles as many times as there are rows in the outer set. Therefore, the algorithm efficiency depends on several factors: * Outer set cardinality. * Existence of an efficient access method that fetches the rows from the inner set. * Number of repeat row fetches from the inner set. Let's look at examples. Cartesian product ----------------- A nested loop join is the most efficient way to calculate a Cartesian product, regardless of the number of rows in both sets. ``` EXPLAIN SELECT * FROM aircrafts_data a1 CROSS JOIN aircrafts_data a2 WHERE a2.range > 5000; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=0.00..2.78 rows=45 width=144) −> Seq Scan on aircrafts_data a1 outer set (cost=0.00..1.09 rows=9 width=72) −> Materialize (cost=0.00..1.14 rows=5 width=72) inner set −> Seq Scan on aircrafts_data a2 (cost=0.00..1.11 rows=5 width=72) Filter: (range > 5000) (7 rows) ``` The Nested Loop node is where the algorithm is executed. It always has two children: the top one is the outer set, the bottom one is the inner set. The inner set is represented with a Materialize node in this case. When called, the node stores the output of its child node in RAM (up to *work\_mem*, then starts spilling to disk) and then returns it. Upon further calls the node returns the data from memory, avoiding additional table scans. An inner join query might generate a similar plan: ``` EXPLAIN SELECT * FROM tickets t JOIN ticket_flights tf ON tf.ticket_no = t.ticket_no WHERE t.ticket_no = '0005432000284'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=0.99..25.05 rows=3 width=136) −> Index Scan using tickets_pkey on tickets t (cost=0.43..8.45 rows=1 width=104) Index Cond: (ticket_no = '0005432000284'::bpchar) −> Index Scan using ticket_flights_pkey on ticket_flights tf (cost=0.56..16.57 rows=3 width=32) Index Cond: (ticket_no = '0005432000284'::bpchar) (7 rows) ``` The planner realizes the equivalence and replaces the `tf.ticket_no = t.ticket_no` condition with `tf.ticket_no = constant`, essentially transforming the inner join into a Cartesian product. **Cardinality estimation.** The cardinality of a Cartesian product of two sets equals the product of cardinalities of the two sets: 3 = 1 × 3. **Cost estimation.** The startup cost of a join equals the sum of startup costs of its child nodes. The total cost of a join, in this case, equals the sum of: * Row fetch cost for the outer set, for each row. * One-time row fetch cost for the inner set, for each row (because the cardinality of the outer set equals one). * Processing cost for each output row. ``` SELECT 0.43 + 0.56 AS startup_cost, round(( 8.45 + 16.58 + 3 * current_setting('cpu_tuple_cost')::real )::numeric, 2) AS total_cost; ``` ``` startup_cost | total_cost −−−−−−−−−−−−−−+−−−−−−−−−−−− 0.99 | 25.06 (1 row) ``` The imprecision is due to a rounding error. The planner's calculations are performed in floating-point values, which are rounded down to two decimal places for plan readability, while I use rounded-down values as an input. Let's come back to the previous example. ``` EXPLAIN SELECT * FROM aircrafts_data a1 CROSS JOIN aircrafts_data a2 WHERE a2.range > 5000; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=0.00..2.78 rows=45 width=144) −> Seq Scan on aircrafts_data a1 (cost=0.00..1.09 rows=9 width=72) −> Materialize (cost=0.00..1.14 rows=5 width=72) −> Seq Scan on aircrafts_data a2 (cost=0.00..1.11 rows=5 width=72) Filter: (range > 5000) (7 rows) ``` This plan has a Materialize node, which stores rows in memory and returns them faster upon repeat requests. In general, a total join cost comprises: * Row fetch cost for the outer set, for each row. * One-time row fetch cost for the inner set, for each row (during which materialization is done). * (N−1) times the cost of repeat inner set row fetch cost, for each row (where N is the number of rows in the outer set). * Processing cost for each output row. In this example, after the inner rows are fetched for the first time, materialization helps us save on further fetch costs. The cost of the initial Materialize call is in the plan, the cost of further calls is not. Its calculation is beyond the scope of this article, so trust me when I say that in this case the cost of consecutive Materialize node calls is 0.0125. Therefore, the total join cost for this example looks like this: ``` SELECT 0.00 + 0.00 AS startup_cost, round(( 1.09 + 1.14 + 8 * 0.0125 + 45 * current_setting('cpu_tuple_cost')::real )::numeric, 2) AS total_cost; ``` ``` startup_cost | total_cost −−−−−−−−−−−−−−+−−−−−−−−−−−− 0.00 | 2.78 (1 row) ``` Parameterized join ------------------ Let's take a look at a more common example, one that does not simply reduce to a Cartesian product. ``` CREATE INDEX ON tickets(book_ref); EXPLAIN SELECT * FROM tickets t JOIN ticket_flights tf ON tf.ticket_no = t.ticket_no WHERE t.book_ref = '03A76D'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=0.99..45.67 rows=6 width=136) −> Index Scan using tickets_book_ref_idx on tickets t (cost=0.43..12.46 rows=2 width=104) Index Cond: (book_ref = '03A76D'::bpchar) −> Index Scan using ticket_flights_pkey on ticket_flights tf (cost=0.56..16.57 rows=3 width=32) Index Cond: (ticket_no = t.ticket_no) (7 rows) ``` Here, Nested Loop searches through the outer set (tickets) and for each outer row searches through the inner set (flights), passing down the ticket number `t.ticket_no` *as a parameter*. When the Index Scan node is called on the inner loop, it is called with the condition `ticket_no = constant`. **Cardinality estimation.** The planner estimates that the `t.book_ref = '03A76D'` filter will return two rows from the outer set (rows=2), and each of these two rows will have, on average, three matches in the inner set (rows=3). *The join selectivity* is the fraction of rows of a Cartesian product that remains after a join. This must exclude any rows that have NULLs in the columns that are being joined because the filter condition for a NULL is always false. The cardinality is estimated as the Cartesian product cardinality (that is, the product of cardinalities of two sets) multiplied by the selectivity. In this case, the cardinality estimate of the outer set is two rows. As for the inner set, no filters (except for the join condition itself) apply to it, so its cardinality equals the cardinality of the `ticket_flights` table. Because the two tables are joined using a foreign key, each row of the child table will have only one matching pair in the parent table. The selectivity calculation takes that into account. In this case, the selectivity equals the reciprocal of the size of the table that the foreign key references. Therefore, the estimate (provided that `ticket_no` rows don't contain any NULLs) is: ``` SELECT round(2 tf.reltuples (1.0 / t.reltuples)) AS rows FROM pg_class t, pg_class tf WHERE t.relname = 'tickets' AND tf.relname = 'ticket_flights'; ``` ``` rows −−−−−− 6 (1 row) ``` Naturally, tables can be joined without foreign keys. In this case, the join selectivity will equal the selectivity of the join condition. Therefore, a "universal" selectivity calculation formula for an equijoin (assuming a uniform data distribution) will look like this: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/2a0/791/804/2a0791804c5a859a7954fda547208c8b.png) *nd*1 and *nd*2 are the numbers of distinct join key values in the first and the second set. The distinct values statistics show that all the ticket numbers in the tickets table are unique (which is no surprise, as `ticket_no` is the primary key), but in `ticket_flights` each ticket has about four matching rows: ``` SELECT t.n_distinct, tf.n_distinct FROM pg_stats t, pg_stats tf WHERE t.tablename = 'tickets' AND t.attname = 'ticket_no' AND tf.tablename = 'ticket_flights' AND tf.attname = 'ticket_no'; ``` ``` n_distinct | n_distinct −−−−−−−−−−−−+−−−−−−−−−−−− −1 | −0.3054527 (1 row) ``` The estimate matches the estimate with a foreign key: ``` SELECT round(2 tf.reltuples least(1.0/t.reltuples, 1.0/tf.reltuples/0.3054527) ) AS rows FROM pg_class t, pg_class tf WHERE t.relname = 'tickets' AND tf.relname = 'ticket_flights'; ``` ``` rows −−−−−− 6 (1 row) ``` The planner supplements the universal formula calculation with most common value lists if this statistic is available for the join key for both tables. This gives the planner a relatively precise selectivity assessment for the rows from the MCV lists. The selectivity of the rest of the rows is still calculated as if their values are distributed uniformly. Histograms aren't used to increase the selectivity assessment quality. In general, the selectivity of a join with no foreign key may end up worse than that of a join with a defined foreign key. This is especially true for composite keys. Let's use the `EXPLAIN ANALYZE` command to check the number of actual rows and the number of inner loop calls in our plan: ``` EXPLAIN (analyze, timing off, summary off) SELECT * FROM tickets t JOIN ticket_flights tf ON tf.ticket_no = t.ticket_no WHERE t.book_ref = '03A76D'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=0.99..45.67 rows=6 width=136) (actual rows=8 loops=1) −> Index Scan using tickets_book_ref_idx on tickets t (cost=0.43..12.46 rows=2 width=104) (actual rows=2 loops=1) Index Cond: (book_ref = '03A76D'::bpchar) −> Index Scan using ticket_flights_pkey on ticket_flights tf (cost=0.56..16.57 rows=3 width=32) (actual rows=4 loops=2) Index Cond: (ticket_no = t.ticket_no) (8 rows) ``` The outer set contains two rows (actual rows=2), as expected. The inner Index Scan node was called twice (loops=2) and returned four rows on average each time (actual rows=4). The grand total is eight rows (actual rows=8). I switched the per-step timing off mainly to keep the output readable, but it's worth noting that the timing feature may slow the execution down considerably on some platforms. If we were to switch timing back on, however, we would see that the timings are averaged, like the row counts. From there, you can multiply the timing by the loop count to get the full estimate. **Cost estimation.** The cost calculation isn't much different from the previous examples. Here's our execution plan: ``` EXPLAIN SELECT * FROM tickets t JOIN ticket_flights tf ON tf.ticket_no = t.ticket_no WHERE t.book_ref = '03A76D'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=0.99..45.67 rows=6 width=136) −> Index Scan using tickets_book_ref_idx on tickets t (cost=0.43..12.46 rows=2 width=104) Index Cond: (book_ref = '03A76D'::bpchar) −> Index Scan using ticket_flights_pkey on ticket_flights tf (cost=0.56..16.57 rows=3 width=32) Index Cond: (ticket_no = t.ticket_no) (7 rows) ``` The repeat scan cost for the inner set is the same as the initial scan cost. The result: ``` SELECT 0.43 + 0.56 AS startup_cost, round(( 12.46 + 2 * 16.58 + 6 * current_setting('cpu_tuple_cost')::real )::numeric, 2) AS total_cost; ``` ``` startup_cost | total_cost −−−−−−−−−−−−−−+−−−−−−−−−−−− 0.99 | 45.68 (1 row) ``` Row caching (memoization) ------------------------- If you repeatedly scan the inner set rows with the same parameter and (consequently) get the same result every time, it might be a good idea to cache the rows for faster access. This became possible in PostgreSQL 14 with the introduction of the Memoize node. The Memoize node resembles the Materialize node in some ways, but it is tailored specifically for parameterized joins and is much more complicated under the hood: * While Materialize simply materializes every row of its child node, Memoize stores separate row instances for each parameter value. * When reaching its maximum storage capacity, Materialize offloads any additional data on disk, but Memoize does not (because that would void any benefit of caching). Below is a plan with a Memorize node: ``` EXPLAIN SELECT * FROM flights f JOIN aircrafts_data a ON f.aircraft_code = a.aircraft_code WHERE f.flight_no = 'PG0003'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=5.44..387.10 rows=113 width=135) −> Bitmap Heap Scan on flights f (cost=5.30..382.22 rows=113 width=63) Recheck Cond: (flight_no = 'PG0003'::bpchar) −> Bitmap Index Scan on flights_flight_no_scheduled_depart... (cost=0.00..5.27 rows=113 width=0) Index Cond: (flight_no = 'PG0003'::bpchar) −> Memoize (cost=0.15..0.27 rows=1 width=72) Cache Key: f.aircraft_code −> Index Scan using aircrafts_pkey on aircrafts_data a (cost=0.14..0.26 rows=1 width=72) Index Cond: (aircraft_code = f.aircraft_code) (12 rows) ``` First, the planner allocates *work\_mem* × *hash\_mem\_multiplier* process memory for caching purposes. The second parameter *hash\_mem\_multiplier* (1.0 by default) gives us a hint that the node searches the rows using a hash table (with open addressing in this case). The parameter (or a set of parameters) is used as the cache key. In addition to that, all keys are put in a list. One end of the list stores "cold" keys (which haven't been used in a while), the other stores "hot" keys (used recently). Whenever the Memoize node is called, it checks if the rows corresponding to the passed parameter value have already been cached. If they are, Memoize returns them to the parent node (Nested Loop) without calling the child node. It also puts the cache key into the hot end of the key list. If the required rows haven't been cached yet, Memoize requests the rows from the child node, caches them and passes them upwards. The new cache key is also put into the hot end of the list. As the cache fills up, the allocated memory might run out. When it happens, Memoize removes the coldest items from the list to free up space. The algorithm is different from the one used in buffer cache, but serves the same goal. If a parameter matches so many rows that they can't fit into the cache even when all other entries are removed, the parameter's rows are simply not cached. There's no sense in caching a partial output because the next time the parameter comes up, Memoize will still have to call its child node to get the full output. **The cardinality and cost estimates** here are similar to what we've seen before. One notable thing here is that the Memoize node cost in the plan is just the cost of its child node plus *cpu\_tuple\_cost* and doesn't mean much as such. The only reason we want the node in there is to reduce this cost. The Materialize node suffers the same unclarity: the "real" node cost is the *repeat scan cost*, which isn't listed in the plan. The repeat scan cost for the Memoize node depends on the amount of available memory and the manner in which the cache is accessed. It also depends significantly on the number of expected distinct parameter values, which determines the number of inner set scans. With all these variables at hand, you can attempt to calculate the probability of finding a given row in the cache and the probability of evicting a given row from the cache. The first value decreases the cost estimate, the other one increases it. The details of this calculation are irrelevant to the topic of this article. For now, let's use our favorite `EXPLAIN ANALYZE` command to check out how a plan with a Memoize node is executed. This example query selects all flights that match a specific flight path and a specific aircraft type, therefore the cache key will be the same for all the Memoize calls. The required row will not be cached upon the initial call (Misses: 1), but it will be there for all the repeat calls (Hits: 112). The cache itself takes up all of one kilobyte of memory in total. ``` EXPLAIN (analyze, costs off, timing off, summary off) SELECT * FROM flights f JOIN aircrafts_data a ON f.aircraft_code = a.aircraft_code WHERE f.flight_no = 'PG0003'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (actual rows=113 loops=1) −> Bitmap Heap Scan on flights f (actual rows=113 loops=1) Recheck Cond: (flight_no = 'PG0003'::bpchar) Heap Blocks: exact=2 −> Bitmap Index Scan on flights_flight_no_scheduled_depart... (actual rows=113 loops=1) Index Cond: (flight_no = 'PG0003'::bpchar) −> Memoize (actual rows=1 loops=113) Cache Key: f.aircraft_code Hits: 112 Misses: 1 Evictions: 0 Overflows: 0 Memory Usage: 1kB −> Index Scan using aircrafts_pkey on aircrafts_data a (actual rows=1 loops=1) Index Cond: (aircraft_code = f.aircraft_code) (15 rows) ``` Note the two zero values: Evictions and Overflows. The former one is the number of evictions from the cache and the latter one is the number of memory overflows, where the full output for a given parameter value was larger than the allocated memory size and therefore could not be cached. High Evictions and Overflows values would indicate that the allocated cache size was insufficient. This often happens when an estimate of the number of possible parameter values is incorrect. In this case the use of the Memoize node may turn out to be quite costly. As a last resort, you can disable the use of the cache by setting the *enable\_memoize* parameter to *off*. Outer joins ----------- You can perform a *left outer join* with a nested loop: ``` EXPLAIN SELECT * FROM ticket_flights tf LEFT JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id WHERE tf.ticket_no = '0005434026720'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop Left Join (cost=1.12..33.35 rows=3 width=57) Join Filter: ((bp.ticket_no = tf.ticket_no) AND (bp.flight_id = tf.flight_id)) −> Index Scan using ticket_flights_pkey on ticket_flights tf (cost=0.56..16.57 rows=3 width=32) Index Cond: (ticket_no = '0005434026720'::bpchar) −> Materialize (cost=0.56..16.62 rows=3 width=25) −> Index Scan using boarding_passes_pkey on boarding_passe... (cost=0.56..16.61 rows=3 width=25) Index Cond: (ticket_no = '0005434026720'::bpchar) (10 rows) ``` Note the Nested Loop Left Join node. For this particular query, the planner opted for a non-parameterized filtered join: this means that the inner row set is scanned identically every loop (which is why it's "stashed" behind the Materialize node), and the output is filtered at the Join Filter node. The cardinality estimate of an outer join is calculated in the same way as for an inner join, but the result is the larger value between the calculation result and the cardinality of the outer set. In other words, an outer join can have more, but never fewer rows than the larger of the joined sets. The cost estimate calculation for an outer join is entirely the same as for an inner join. Keep in mind, however, that the planner may select a different plan for an inner join than for an outer join. Even in this example, if we force the planner to use the nested loop join, we will notice a difference in the Join Filter node because the outer join will have to check for ticket number matches to get the correct result whenever there isn't a pair in the outer set. This will slightly increase the overall cost. ``` SET enable_mergejoin = off; EXPLAIN SELECT * FROM ticket_flights tf JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id WHERE tf.ticket_no = '0005434026720'; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop (cost=1.12..33.33 rows=3 width=57) Join Filter: (tf.flight_id = bp.flight_id) −> Index Scan using ticket_flights_pkey on ticket_flights tf (cost=0.56..16.57 rows=3 width=32) Index Cond: (ticket_no = '0005434026720'::bpchar) −> Materialize (cost=0.56..16.62 rows=3 width=25) −> Index Scan using boarding_passes_pkey on boarding_passe... (cost=0.56..16.61 rows=3 width=25) Index Cond: (ticket_no = '0005434026720'::bpchar) (9 rows) ``` ``` RESET enable_mergejoin; ``` *Right joins* are incompatible with nested loops because the nested loop algorithm distinguishes between the inner and the outer set. The outer set is scanned fully, but if the inner one is accessed using index scan, then only the rows that match the join filter are returned. Therefore, some rows may remain unscanned. *Full joins* are also incompatible for the same reason. Antijoins and semijoins ----------------------- Antijoins and semijoins are similar in the sense that for each row of the first (outer) set both algorithms want to find only *one* match in the second (inner) set. *The antijoin* returns all the rows of the first set that didn't get a match in the second set. In other words, the algorithm takes a row from the outer set and searches the inner set for a match, and as soon as a match is found, the algorithm stops the search and jumps to the next row of the outer set. Antijoins are useful for calculating the `NOT EXISTS` predicate. Example: find all the aircraft models without a defined seating pattern: ``` EXPLAIN SELECT * FROM aircrafts a WHERE NOT EXISTS ( SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code ); ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop Anti Join (cost=0.28..4.65 rows=1 width=40) −> Seq Scan on aircrafts_data ml (cost=0.00..1.09 rows=9 widt... −> Index Only Scan using seats_pkey on seats s (cost=0.28..5.55 rows=149 width=4) Index Cond: (aircraft_code = ml.aircraft_code) (5 rows) ``` The Nested Loop Anti Join node is where the antijoin is executed. This isn't the only use of antijoins, of course. This equivalent query will also generate a plan with an antijoin node: ``` EXPLAIN SELECT a.* FROM aircrafts a LEFT JOIN seats s ON a.aircraft_code = s.aircraft_code WHERE s.aircraft_code IS NULL; ``` *The semijoin* returns all the rows of the first set that got a match in the second set (no searching for repeat matches here either, as they don't affect the result in any way). Semijoins are used for calculating the `EXISTS` predicate. Let's find all the aircrafts with a defined seating pattern: ``` EXPLAIN SELECT * FROM aircrafts a WHERE EXISTS ( SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code ); ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop Semi Join (cost=0.28..6.67 rows=9 width=40) −> Seq Scan on aircrafts_data ml (cost=0.00..1.09 rows=9 widt... −> Index Only Scan using seats_pkey on seats s (cost=0.28..5.55 rows=149 width=4) Index Cond: (aircraft_code = ml.aircraft_code) (5 rows) ``` Nested Loop Semi Join is the node. In this plan (and in the anitjoin plans above) the `seats` table has a regular row count estimate (rows=149), while we know that we only need to get one row from it. When the query is executed, the loop will stop after it gets the row, of course. ``` EXPLAIN (analyze, costs off, timing off, summary off) SELECT * FROM aircrafts a WHERE EXISTS ( SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code ); ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop Semi Join (actual rows=9 loops=1) −> Seq Scan on aircrafts_data ml (actual rows=9 loops=1) −> Index Only Scan using seats_pkey on seats s (actual rows=1 loops=9) Index Cond: (aircraft_code = ml.aircraft_code) Heap Fetches: 0 (6 rows) ``` **The semijoin cardinality estimate** is calculated as usual, except that the inner set cardinality is set to one. As for the anitjoin selectivity, the estimate is also calculated as usual and then subtracted from 1. **The cost estimate** for both antijoin and semijoin is calculated with regards to the fact that only a fraction of the inner set rows is scanned for most of outer set rows. Non-equijoins ------------- The nested loop join algorithm can join row sets using any join condition. Naturally, if the inner set is a base table, the table has an index, the operator class of the index contains the join condition operator, then the inner set rows can be accessed very efficiently. That aside, any two sets of rows can be joined as a Cartesian product with a filtering join condition, and the condition here can be arbitrary. Here's an example: find pairs of airports that are located next to each other. ``` CREATE EXTENSION earthdistance CASCADE; EXPLAIN (costs off) SELECT * FROM airports a1 JOIN airports a2 ON a1.airport_code != a2.airport_code AND a1.coordinates <@> a2.coordinates < 100; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop Join Filter: ((ml.airport_code <> ml_1.airport_code) AND ((ml.coordinates <@> ml_1.coordinates) < '100'::double precisi... −> Seq Scan on airports_data ml −> Materialize −> Seq Scan on airports_data ml_1 (6 rows) ``` Parallel mode ------------- The nested loop join can run in the parallel mode. The parallelization is done at the outer set level and allows the outer set to be scanned by multiple worker processes simultaneously. Each worker gets a row from the outer set and then sequentially scans the inner set all by itself. Here's an example multiple-join query that finds all the passengers that have tickets for a specific flight: ``` EXPLAIN (costs off) SELECT t.passenger_name FROM tickets t JOIN ticket_flights tf ON tf.ticket_no = t.ticket_no JOIN flights f ON f.flight_id = tf.flight_id WHERE f.flight_id = 12345; ``` ``` QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Nested Loop −> Index Only Scan using flights_flight_id_status_idx on fligh... Index Cond: (flight_id = 12345) −> Gather Workers Planned: 2 −> Nested Loop −> Parallel Seq Scan on ticket_flights tf Filter: (flight_id = 12345) −> Index Scan using tickets_pkey on tickets t Index Cond: (ticket_no = tf.ticket_no) (10 rows) ``` The top-level nested loop join runs sequentially, as usual. The outer set contains a single row from the `flights` table that was fetched using a unique key, so the nested loop is efficient even despite the large number of rows in the inner set. The inner set is collected in the parallel mode. Each worker gets its own rows from `ticket_flights` and joins them with `tickets` in a nested loop.
https://habr.com/ru/post/698100/
null
null
4,879
64.51
Scene object with OdinEditor always mark scene as dirty. 1) Scene object with OdinEditor always mark scene as dirty. 2) 1 Create emtry scene. 2 Add new gameobject 3 Add script Test public class Test : MonoBehaviour { [SerializeField] private Sprite sprite; } 4 Add this script on object. 5 Save scene. 6 Scroll inspector of the object. 7 To see that scene is durty. 3) 4) 2017.1.1f1 5) 1.0.5.0 6) Windows 10 Thanks for reporting this - we'll check it out right away. Alright, we've resolved this bug. It will make it into any hotfix releases we make for 1.0.5.0. If you'd like a build with the fix included, please send me your Odin invoice id at tor@sirenix.net, and I'll send you a build right away. Thanks. I send Id on email.
https://bitbucket.org/sirenix/odin-inspector/issues/177/scene-object-with-odineditor-always-mark
CC-MAIN-2019-09
refinedweb
143
88.13
I'm using SpeedSearch in a popup showing a list of URLs (50+). Having '/' as a valid SpeedSearch filter would make selection easy - unfortunately '/' is not accepted. Is there a way to customize this? Taras I don't think so. Looks like it only accepts characters and digits. Why would you enter a "/"? AFAIK, the speedsearch checks if the typed text is a substring of any list element. Sascha Hello Sascha, Imagine a list popup containing: - ...[30 more namespaces]... - There's no way to quickly select the "" entry: I can't filter by '/o'. It's a pretty minor trivial limitation. However, for items that are not class names it would make sense to customize the allowed SpeedSearch characters, I think. -tt Hello Taras, >> I don't think so. Looks like it only accepts characters and digits. >> >> Why would you enter a "/"? AFAIK, the speedsearch checks if the typed >> text is a substring of any list element. OK, that makes sense. Wasn't obvious to me late at night ;) Yes, I'm also not sure why it's limited at all, especially because it doesn't seem to do any regex-matching like the speedsearch e.g. in the File Structure (ctrl-f12) popup does. I'd try filing a JIRA issue :) Sascha Hello Sascha, -tt
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206791635-Customizing-valid-SpeedSearch-characters?sort_by=votes
CC-MAIN-2020-10
refinedweb
216
78.14
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives Is anyone doing functional programming research on transparent (aka orthogonal) persistence? I've been reading a lot about the subject () and I've found that promising projects like Eros, Grasshopper, Texas, and others have either died or removed transparent persistence from their projects. I'm a contributor to an open-source transparent persistent OS () that's in the baby stages; I am interested in transparent persistence because of its potential to remove non-pure I/O from functional platforms. Geoff Good question. The E language has spent a LOT of time (decades) thinking about the issues; there's a very good summary overview here. Basically, orthogonal persistence plays havoc with the need to evolve the structures that are being persisted in non-trivial ways. Peter Sewell's group, working on Acute, is striving to bring type theory to bear on the problem, with some level of success, but most definitely not with orthogonal persistence. Where do Smalltalk's saved images fit into this picture? The image file is a snapshot that includes source code, compiled code, open windows, applications... (everything is an object...) So the code and data remain in synch - they are together in the snapshot. Open file handles and database connections and network connections, will probably fail the next time the image is invoked. (OTOH there are ways to handle this when the snapshot is created, and when the image is next invoked.) Lightweight Smalltalk processes are just objects, so they are saved in the snapshot and continue running the next time the image is invoked :-) Smalltalk images are usually portable - save on MS Win machine, copy the image file to Unix machine and continue working on Unix machine. This is misleading and is exactly the kind of thing, the eRights page was talking about. Upgrades are the issue. While fresh code is (likely) in synch, as upgrades are made things will fall out of synch. What happens when two upgrades want to change the same method incompatibly, or how does code that works with upgraded objects (say with an additional field) deal with old ones? On an unrelated point: "Everything is an object" is completely irrelevant to the behavior. An image is just a copy of the heap and the current machine state. Note (to Luke Gorrie): the eRights page linked by Paul Snively explicitly mentions Smalltalk. Persistent Erlang systems have been built with live upgrades and downgrades. I think a general answer to the problems you stated has to do with designing a system in a modular way using transactions to allow the programmer to define safe points where one module can be upgraded without breaking dependencies. Derek, do you really know enough about Smalltalk to state this or that is misleading? "What happens when two upgrades want to change the same method incompatibly" Do you have a specific example? "the eRights page... mentions Smalltalk" "Smalltalk, with its easy support for live upgrade, is not a counter-example. This support cannot be made reliable, and is instead designed for programmers-as-customers who know how to recover from inconsistencies." Be nice to know why it cannot be made reliable, wouldn't it. Isaac Gouy: Be nice to know why it cannot be made reliable, wouldn't it. Off the cuff, I'd imagine that they mean that Smalltalk's live upgrade allows... well... anything, so there's no protection against leaving the system in an inconsistent state. Personally, I find EROS' move away from orthogonal persistence more interesting, since it supposedly represented the "holy grail" of orthogonal persistence, i.e. whole-system snapshotting at the OS level with issues like open network connections, file descriptors, etc. dealt with. But I've been unsuccessful in finding a good single explanation of Shap's reasoning in doing so. Update: There's an interesting thread on the subject here. Update again: Mark Miller's post here sheds a little more light in the context of E. "Smalltalk's live upgrade allows... well... anything" It's Smalltalk - it can be changed to do as much or as little as required. If there exists a 'live upgrade' technique that protects against leaving the system in an inconsistent state, why would that technique not be usable in Smalltalk? Isaac Gouy: It's Smalltalk - it can be changed to do as much or as little as required. It's not clear whether that's a problem statement or a solution statement. :-) Isaac: If there exists a 'live upgrade' technique that protects against leaving the system in an inconsistent state, why would that technique not be usable in Smalltalk? I think precisely because it wouldn't be mandatory and some later "upgrade" might revert it, or pervert it, or... More prosaically, I imagine that they meant "Smalltalk's live upgrade can't be made reliable short of replacing it." If your point is that it's replacable, I doubt anyone would argue with you, but it leaves open the question as to what, exactly, you could replace it with that would be guaranteed reliable without violating some other property of Smalltalk that's considered fundamental. Of course, we'd both be better off just asking the ERights team what was meant. First, just to be clear, Smalltalk does have working orthogonal persistence, via the snapshot mechanism, and the comment on the quoted page is not a statement about Smalltalk's support for persistence. It is specifically about live upgrade, i.e., changing an existing class in the context of a running system, and having all outstanding instances now act according to the new class definition. When such a live upgrade happens, all instance representations are effectively converted at that time so that the value of each old instance variable is transfered to the location of new instance variable of the same name, if any, within the new object's representation. Any old instance variables not present in the new class definition are simply dropped. Any new instance variables not present in the old are initialized to null. First, the non-fatal issue: A representation-conversion should be correctness preserving, i.e., it should have the property that if the old instance conformed to its old class' invariants, then the converted instance will conform to its new class' invariants. The above conversion rule does this surprisingly often. Often, the programmer can carefully write the new class so that the above conversion will have this property. In cases where this is too hard, one could imagine enhancing the live upgrade mechanism so that the programmer can provide explicit conversion code as part of the new class definition. (Perhaps Smalltalk even has such a thing? I don't know.) However, live upgrade can't be made reliable in Smalltalk primarily for two reasons, both of which have to do with changing methods, and neither of which have to do with changing the representation of named classes: 1) Smalltalk is multi-threaded. (This happens to be mostly-cooperative multi-threading, but this makes no difference to the issue here.) There is no moment at which all threads are necessarily quiescent. Mutable objects are only required to satisfy their class' invariants when they're quiescent, i.e., not when they're in the midst of executing a method. Further, once a method is upgraded, how should stack frames in the midst of executing that method proceed? Should we also allow the programmer to provide code for converting old stack-frames in progress into corresponding new stack-frames in progress? Anyone think this is realistic? 2) Named classes are not the only definitions that have instances. A Smalltalk "block" is an anonymous lambda expression. A Smalltalk "block closure" is the closure resulting from evaluating the lambda expression -- it is an instance of its block. These are anonymous, so there's no reliable way to say, for a new version of a method, which of its contained blocks are the successors to which contained block, if any, from the old method. Without this information, Smalltalk can't know which new block the old block closures should be upgraded to be an instance of. IIUC, what Smalltalk actually does in both of these cases is to not upgrade the impossible representation: it allows old stack frames to run to completion on their old method code, even though this old code will now be interacting with new objects. Likewise, block closures are not upgraded. They continue to behave according to the code of their original blocks. For supporting rapid interactive development, these are probably the right choices. The point though is that, given the overall Smalltalk architecture - no quiescence points and anonynous lambda expressions - neither these nor any other possible choices will always preserve consistency across a live upgrade, no matter how careful the programmer. Erlang does live upgrade of a sort, separately per-process, at a quiescence point in the logic being upgraded within each process, and where the only state passed forward from the old code to the new are data - terms, not instances of old anonymous lambda expressions. E does not yet support live upgrade. It never has, and I wouldn't be surprised if it never does. However, two aspects of E make success more likely: like Erlang, E has natural quiescence points - per vat between turns. E does have anonymous closures, but E's syntax is biased towards encouraging the naming of any object definition expressions whose instances might still be live after their creation turn. (Closures needed for conventional immediate-sequential control structures don't survive their creation turn, and so won't need to be upgraded.) EROS itself is now CapRos, and its architecture remains designed to support orthogonal persistence. The EROS successor Coyotos initially differed from EROS primarily by dropping support for orthogonal persistence. But the problem this leads to, secure restart, is one they never solved. Instead, Coyotos is now expected to support orthogonal persistence, in much the same way EROS does. Regarding E, chapter 17 of my thesis explains more about E's approach to non-orthogonal persistence. The issues involved in recovery of distributed consistency following the crash-revival of a vat may be of interest. Be sure not to miss the discussion of post-crash consistency of KeyKOS device drivers in section 17.5. It is unclear whether E's revival mechanism lends any insight to the secure restart question that Coyotos would have had to face. The systems are sufficiently different that it's hard to translate these issues between them. Personally, I find EROS' move away from orthogonal persistence more interesting, since it supposedly represented the "holy grail" of orthogonal persistence EROS/CapROS will always be persistent. Coyotos, the followup project to EROS, will also very likely have transparent orthogonal persistence. Various paper's on that site explain why Shap was considering removing persistence. Personally, I never used the security features of KeyKOS. (although now, in the web world, I imagine they'd be generally useful -- being able to deploy a search engine without (a) giving away information about the engine's implemenation to the users and without (b) giving away information about the users' searches to the engine's owners was the sort of problem KeyKOS was designed to address). But I did use the persistence, maybe once or twice a week (when one lives in the same system image as the kernel developers, this sort of thing is bound to happen from time to time), and I was grateful for not losing any work. One of the more impressive demonstrations (from after I'd left) was after they'd included a hosted UNIX subsystem -- the first part was the usual demonstration that a workstation's attention span is no longer than its power cable, and the second part involved plugging the machine back in, and then moving the mouse around the screen. The desktop would initially restore with empty windows, but as each app was given focus, it'd be swapped right back in, displaying a pre-power failure state. Moving from that world to Windows 3.0 gave me plenty of opportunity to meditate upon the fact that all systems are fault-tolerant -- it's just that with some systems, it's the end user's job to tolerate the faults. Sorry for the delay, I've been in New Orleans the past week or so. I'm well aware that you can do this; the issue isn't that you can add the field the issue is with what value do you initialize it? In general, it may well not be possible to make a correct choice. As for conflicting upgrades, I don't have a specific example*, and don't really see why you need to ask. It's relatively trivial to generate one. The issue is pretty much exactly the same as CVS conflicts and the only general solution is the same, manual intervention to merge (perhaps by choosing one or the other) the changes. This is exactly what the eRights page was talking about. You cannot automate this in general and lay users may well not be competent to "recover from inconsistencies". Looking at some of your later responses and the how you are presenting your questions I think you see this as an "attack" against Smalltalk in particular. In one version of a response, I had a comment along the lines that this is not Smalltalk's fault, it is inherent in the problem. This is not a failure of Smalltalk. As I mentioned above, this is about the same problem in RCSes or SCCSes and the only such system I'm aware of that attempts all but the most basic automated merging is darcs, and of course that gets nowhere near "solving" the problem. * However, when I did play with Squeak, I certainly had such issues. I have never believed in transparent persistence because there are too many cases where it's either not clear how it should behave, and the programmer must specify it explicitly, or it's impossible to work at all. Examples: Serialization makes sense when programmers explicitly specify how a given type is serialized. The system can automate that in simple cases, but even then the default definition should be overridable in case it's not appropriate (e.g. part of object's state is a large cache which is better recomputed than stored, or it has an invariant which should be guaranteed even if the serialized data has been altered). There will always be some unserializable types (except in toy languages). How do Smalltalk's saved images deal with these issues? I'd like to respectfully disagree, Qrczak. TP may be hard to define if it is done at the application level and must interact with legacy systems, but I think it can be well-defined if it is implemented at the kernel. would you agree that it might be well-defined with the following caveats? 1. I am not considering per-application persistence, but rather TP on the entire OS itself. Therefore, closures, function references, etc. would remain intact both before and after a reboot. Also, this means that persistence is always global. 2. Open network connections would, upon a system restore, simply throw an exception, just as if a web server had returned an error during a transfer. Good network code has plenty of error-handling logic. This is more of a distributed-systems-fault-handling problem. 3. Files don't really exist in a TP system, since any block of memory has a guarantee of being persisted to the disk. 4. If a wrapped object had a valid memory representation at one point, why can't that simply be restored back after a reboot, just as if VRAM had paged that memory out and then back in? The whole point is to bring the system back to the same memory state so that you can program your code as if the system was not rebooted. 5. You do bring up a good point - caches, video memory, etc that should not be persisted. This could be implemented as a special "NoPersist" attribute in the language implementation. 6. Please clarify what you mean by an unserializable type. I understand that hands-free CROSS-MACHINE serialization is not always well-defined, but surely WHOLE-SYSTEM INTRA-MACHINE serialization is trivial? I mean if the entire RAM is brought back to an identical state? In conclusion, I agree with your statement in general, but not as an absolute - TP can be well defined if it is implemented at the bottom level - the kernel. What do you guys think? In general it works better when the boundary between what is saved/restored and what is acquired from the new reality is smaller. Shifting it to the OS level removes some places which are difficult to synchronize (e.g. references other processes running in the system), and adds other such places (hardware configuration which might change in the meantime, which was not a problem for per-application state saving). It doesn't allow to freeze an individual process and restart it on another machine. This means that application writers must still implement something explicitly if they want their application's state to be saved and restored elsewhere. In particular it's not a substitute for saving documents. But what I'm really worried about is that it throws away OS concepts that we are used to. It would have to bring tremendous advantages for people to accept that they no longer work with files, and it would have to redesign and redevelop substitutes for the plethora of developer applications and mechanisms: editors, compilers, version control systems, access restrictions on a multiuser system, backups, distributing the application over a network, searching for text in the source, finding differences between two versions of the source, typesetting documentation, temporary forking a part of the source to try something out and later reverting to the old version (without disrupting the evolution of other parts), moving sources between projects etc. In fact this is why I reject Smalltalk. Because it creates its own world, separate from the rest of the system, and insists in reimplementing everything in a way which is limited to Smalltalk rather than language-agnostic. It's lack is his first example of what's wrong with the state of technology that he uses to motivate his Feyerabend Project. I don't turn off my computer. I was thinking about this recently. The Newton was built on a language (NewtonScript) descended from Smalltalk via Self; in particular, everything in the system (including the entire system) was one great big object, and the OS took care of persisting everything. Basically, the thing was a Dynabook, but not in Smalltalk. It made for some great features on the Newton (e.g., it was possible to extend the apps); but it also meant that you were stuck in a walled garden. It was possible to back up your data to the desktop, but all that meant was that your desktop had a copy of the master NewtonScript object. "Transparent persistence" turned your data into an opaque blob. Not necessarily—I can't help but note that what you've described is the process of taking the entirety of your Newton environment and embedding it in another environment. What else would you expect but for it to be a blob? You're obviously not claiming that the Newton didn't interact with other systems because it obviously did. Nor, obviously, are you claiming that it was impossible to write Newton software that could marshal NewtonScript objects to and from other formats for serialization and/or persistence. Unfortunately, though, this does indeed seem to be what people really want in practice: they want to piecemeal their system in formats that are meaningful in the context of other systems with other design goals and other features, and from that point of view, "orthogonal persistence" is indeed—well—orthogonal: it doesn't solve that problem. It's been interesting reading the E language take on orthogonal persistence and the EROS/Coyotos take: E observes that it's basically impossible to do orthogonal persistence and upgrade at the language level without OS support; EROS observes that it's possible to do orthogonal persistence at the OS level but it seriously complicates the design and makes it unfamiliar to application developers and users; Coyotos observes that if you abandon orthogonal persistence at the OS level you get a simpler design that's both more amenable to formal verification and more familiar to developers and users, but where does that leave a language like E? Let's face it: these are hard problems! Nor, obviously, are you claiming that it was impossible to write Newton software that could marshal NewtonScript objects to and from other formats for serialization and/or persistence. It was possible; but the Newton seduced people into thinking it wasn't necessary. If you want to be able to get your data out of the persistent world, you still need to write an ordinary serializer; so you might as well use that serialized format for data storage, rather than winding up with incompatible representations. This is where people always seem to go astray: it isn't a value-neutral thing to say "since I need to serialize and export objects anyway, my orthogonal persistence should be done in those terms." It has ramifications in terms of how virtual memory works (if you have virtual memory); it has security ramifications (cf. E and EROS); it has semantic implicatons with respect to the software (does it even make sense to talk about serializing object X and not object Y?) etc. It's far from obvious that Newton did the wrong thing, but also far from obvious that they did all of the right things. I do maintain that one right thing is treating interoperation, e.g. serializing objects to some transport, and orthogonal persistence, e.g. snapshotting the entire system, as distinct tasks. Since the O/S takes over persistence, it is the O/S that must make sure that types match the data. So if a 'file' is 'opened' by an incompatible 'type', the system should throw an exception. It is assumed that the system must not only keep types, but versions of types also. For example, a Word 95 document should be handled by type /class 'WordDocument' v. 95, and a Word 2003 document should be handled by a type/class 'WordDocument' v. 2003. An alternative viewpoint, which I suggest is worthwhile, is to say instead of viewing the state of the persistent store from the point of view of the world, let's view the state of the world from the point of view of the persistent store. To do this, we can take the persistent store to have some native programming language, and we view the state of the store as being just the state of execution of a program in this language as it interacts with the world. An example of this is the GemStone database management system (DBMS). GemStone's native language is Smalltalk augmented with an operation to demark transaction boundaries. Another possiblility for the native language of a persistent store would be a concurrent constraint language. This has the advantage of declarativity. There is no need for an added construct to demark transactions, because concurrent constraint languages have natural transactions. That's why we call these languages concurrent. Which would be more suitable as the native language of a persistent store, a concurrent constraint logic language, or a pure lazy functional language augmented with an infinite supply of oracles? And if your answer is a concurrent constraint logic language, should it be eager or lazy? Is there even such a thing as a lazy concurrent constraint logic language? An example eager concurrent constraint logic language is ToonTalk. Another example is the Actor model of Hewitt and Agha. The Carlsson and Hallgren (1998) thesis on Fudgets discusses oracles. Is there even such a thing as a lazy concurrent constraint logic language? Curry or Alice are good starting points. And Oz too, of course. No, Oz is an imperative language. It has Cells, which are subject to assignment. This is a primitive in the language, not something built up from a functional base. So does Alice. In my experience Oz code limits the use of state just as much as ML code. No, Oz is an imperative language. It has Cells, which are subject to assignment. Of course Oz is an imperative language. Haskell is also an imperative language (because of unsafePerformIO). This is not bad because you can easily program without state in both languages if you want. The state is well-factored from the rest. It would be bad if the state were to contaminate everything (like in Java, in which for example all arguments to calls are implicitly stateful). I've just barely examined Oz, but it seems that state is allowed to be used anywhere. If so, there's a crucial difference from Haskell where a programmer can know that a function is pure just by looking at its type, as opposed to Oz where things are generally pure, but code has to be prepared for impure arguments, or ward them off with documentation. Of course, the distinction is minor compared to that with somehting like Java or C, where you much of the standard library involves state. I assume by "Haskell is also an imperative language" you mean Haskell isn't referentially transparent, which is as true as "Java is not memory-safe" (thinking of JNI). The convention in Haskell is that by using a function whose name starts with unsafe the programmer is either claiming to posses a proof of some function-dependent obligation, or waiving any right to any of the nice properties of the language, all the way up to memory-safety. The promise around unsafePerformIO is that the result of the argument computation doesn't actually depend on when or how often it is executed (or what is executing concurrently). The ST monad and runST are pretty neat because they actually prove that with the type system for a restricted set of computations using just mutable variables. a programmer can know that a function is pure just by looking at its type It is often useful to have a stateful implementation of a function that behaves as if it were stateless. For example, if you want to implement memoization from scratch (i.e., the language doesn't give it to you magically), then you need this. If the type system does not let the implementor of a function hide its statefulness, then I consider it too weak. If the type system does not let the implementor of a function hide its statefulness, then I consider it too weak. I would argue the opposite: a type system that lets you hide a function's statefulness is weaker in the sense that you cannot fully trust the type system. Pragmatic, sure; but weak? In any case I don't think this is about a property of the type system as much as it is about how the type system is used in practice by an implementation. Does Haskell's STRef "hide" the statefulness of a function? If so, in what sense can the type system not be trusted, and in what sense is this not a property of the type system? If not, in what sense does STRef not "hide" the statefulness of a function? STRef doesn't hide anything. The ST monad (and importantly runST) does, though. The example Peter gave was implementing memoization. A correct implementation of memoization is referentially transparent from the outside even though it is internally side-effecting. Thus you'd like to keep the state monads out of the picture for modularity reasons: I should be able to make a function memoizing without changing its interface. In GHC they work out around this "problem" by using unsafePerformIO. I think this is a great pragmatic compromise but my point was that allowing something like unsafePerformIO into the language weakens the type system in a formal sense. There's a lazy version of the ST monad, I imagine that can be used instead. Isn't the whole point that you don't want the interface to change so the state monadic style is inapplicable here? Essentially we want memo :: (a -> b) -> a -> b instead of memoST :: (a -> b) -> ST Memo a -> ST Memo b. GHC achieves this with unsafePerformIO. Depends on what you're memoising. In many cases there's a data structure of results from which you can retrieve the one you want, and you can use a lazy invocation of the ST monad to build that structure. A major reason ST exists is because its use does not need to be apparent in the type of a function. There is a function runST :: (forall s . ST s a) -> a, which lets you do calculations involving mutable state inside a pure function, so long as the calculation is actually referentially transparent, because it only uses mutable structures it creates itself. (the forall cleverly ensures this - essentially skolemization of s during type checking creates a unique brand for each ST computation - see Oleg's work under the heading Lightweight Static Capabilitites for similar enforcement of more interesting properties). What you can't do with ST, and perhaps what you meant if I am the one misunderstanding, is share a piece of mutable state between different applications of the function internally using ST. I think proving that a function which does share state like that is still referentially transparent is well beyond the Haskell type system (cue Oleg), thus falling back to iveProvenThisSafeMyselfSoPleasePerformIO. What you could do withST would be something like a mapMemo :: (a -> b) -> [a] -> [b], which behaves like map but internally uses a mutable hash-table to memoize between mapping different list elements. Yes, this is what I'm talking about. I realize that you can use runST without changing the external interface when you only need "internal" memoization. A good many type systems in use today aren't strong enough to type the ST monad or a near-equivalent correctly - and thus couldn't allow you to safely hide the fact a pure function has an internally stateful implementation. The ST monad is my friend! Though yes, it does leave us with reasons to want impredicative rank-n polymorphism. Curry isn't even referentially transparent. Referential transparency is considered desirable, isn't it? I'm still am not clear exactly why, but the FP community seem to think this property brings important benefits. Well, it gets hard to reason about a function if you call it with the same parameters 100 times and get 100 different results. Perhaps it's the ultimate refuge of the truly touched:"Insanity: doing the same thing over and over again and expecting different results." Software tests require repeatable results. When I did J2EE for a living, it was difficult to ensure that the state of the J2EE server was exactly the same on different test runs. With referential transparency, Some of the This has been written about on WardsWiki, see AdvantagesOfFunctionalProgramming, GraphReducer, and FpVsOo for some useful insights. The last one is a particular favorite of mine, it got me thinking about the nature of identity in FP vs OO. Alistair Bayley mentioned that pure FP has only value equality, and I was slighly more enlightened when I understood that. There are two in-progress articles that will hopefully be in the March issue of The Monad.Reader. One is an article on the nature of FP vs the nature of OO by Alistair Bayley, and one is on duality by Esa Pulkkinen. Those two articles fit together in that FP can be viewed as the dual of OO. Now that I've wandered far from the topic, maybe I should stop... --Shae Erisson - ScannedInAvian.com RT gives you more static guarantees about the behavior of a given piece of code. If a language has referential transparency, code that produces observable (for some definition of observable, YMMV) side-effects will have a distinct type (e.g. IO Bool in Haskell, *World -> (Int, *World) in Clean). AFAIK the same can't be expressed in languages without RT. IO Bool *World -> (Int, *World) AFAIK the same can't be expressed in languages without RT. Using abstract types (if the language supports them) you can express referentially transparent constructions even in a non-referentially transparent language: Pure ML. Without inspecting the source code of the SKI structure (i.e. just relying on the sig and the compiler) how can one be sure that using $ or S won't have side-effects? Also how useful is a structure like that if I can't inject any values in it? In the end of the day we have to create a sufficiently large pure language inside the pure structure/functor that is just like a kernel RT language. We could even create an abstract type in the kernel RT language to represent operations with side-effects and abstract operations for sequencing such operations, injecting values in it and primitives for opening files, sockets, etc.. Perhaps we should call it the IO monad ;). How far do we need to go until we end with an interpreter to an RT language inside an non-RT one? As languages are turing complete how does this embedding invalidate my earlier point? It just reinforces the idea that you need a static analysis phase to prove static properties of code and that if the type system is sufficiently expressive we can emulate features of another language/type system in it. Without inspecting the source code of the SKI structure (i.e. just relying on the sig and the compiler) how can one be sure that using $ or S won't have side-effects? That is a red herring and misses the point. But, I'll counter with an analogous question. Without inspecting the implementation, how can one be sure that computations expressed using the IO Monad *will* have the intended side-effects? Also how useful is a structure like that if I can't inject any values in it? (I'm surprised. A Haskell programmer should know better. ;-) You could just as well ask how useful is a Monad or an Arrow if you can't just inject new kinds of *computations* [Edit: I meant computations that have new kinds of side-effects] into it. How far do we need to go until we end with an interpreter to an RT language inside an non-RT one? You could say that the SKI structure already is such an interpreter. Indeed, that's the the idea. You can express RT computations in a non-RT language by creating a combinator library that doesn't allow you to create non-RT computations. If you think about, RT and non-RT languages are duals: In an RT language you express non-RT computations as combinations of primitive non-RT computations that are then interpreted. (That's what you do with Monads, Arrows, etc.) In a non-RT language you express RT computations as combinations of primitive RT computations that are then interpreted. (That's what you'd do with the SKI structure.) As languages are turing complete how does this embedding invalidate my earlier point? Who said anything about invalidating points? My point is that you *can* express RT constructs in a non-RT language. [Edit: Clarified some wordings. No intentional semantic changes.] That is a red herring and misses the point. But, I'll counter with an analogous question. Without inspecting the implementation, how can one be sure that computations expressed using the IO Monad *will* have the intended side-effects? I'm surprised. A SML programmer should know better. ;-) By analyzing the code's transitions using the language's semantics/type system and verifying that the desired side-effects do occur. As far as I'm concerned the language's type system/semantics is its definition. If a type is int -> int it does matter if the language offers RT out of the box or doesn't. If the language has RT I can be sure that all functions in the universe ever won't (modulo unsafePerformIO and friends) do anything other than diverge or produce a valid int (we could even restrict divergence if we wanted a la Charity). In a language without RT I need to analyze every function to see if it has any side-effects. int -> int unsafePerformIO my post I said that RT does give you additional static guarantees. If I start programming in a subset of SML that is proven to produce no side-effects I'm not programming in SML anymore. Also the full proof can't be checked by the type-system. Of course the type checker will verify that the abstract types are properly used but the kernel library that defines the semantics of these abstract operations can't be verified by the compiler. If you think about, RT and non-RT languages are duals I agree with you (using your definitions) that they have this dual nature.. By analyzing the code's transitions using the language's semantics/type system and verifying that the desired side-effects do occur. Precisely. You'd do the same with the SKI language (to be sure that side-effects do not occur). The SKI "language" (or any other pure language you'd want to use) can be (has actually already been) given a precise defition. The only difference is that the SKI language is implemented as a combinator library in SML rather than as a Haskell run-time written in Haskell and C (for example). If a type is int -> int it does matter if the language offers RT out of the box or doesn't. Indeed, the meaning of a type is always interpreted with respect to a type system. While both SML and Haskell have a type constructor -> they are not the same, because the underlying type systems are different. In a language without RT I need to analyze every function to see if it has any side-effects. a language without RT I need to analyze every function to see if it has any side-effects. Is this some kind of attempt at appeal to authority? Again, you can specify and use a combinator library like SKI to define RT functions so you don't have to. I can prove that you can't build non-RT values using the SKI structure and I don't need SPJ to do that. If I start programming in a subset of SML that is proven to produce no side-effects I'm not programming in SML anymore. That is just ridiculous. So, assuming that you apply the same definition to Haskell, then when you program in Haskell you always use the full language (including unsafePerformIO) at each point of the program or otherwise you are no longer programming in Haskell. Also the full proof can't be checked by the type-system. Of course the type checker will verify that the abstract types are properly used but the kernel library that defines the semantics of these abstract operations can't be verified by the compiler. The run-time system and compiler of a typical Haskell implementation that implements the various operations of the IO Monad is not verified by Haskell's type system to perform the desired side-effects.. It is trivial to see that you can't build non-RT functions using the SKI structure and the type system of SML guarantees it (through type abstraction). I'm not trying to prove that Haskell (or Clean or whatever) is superior, as I'm not trying to prove that any approach is superior. My point was very precise and limited, that RT gives you an additional static guarantee (and I'm not even trying to prove that this guarantee is useful or whatever). If I implement a SKI/Lambda Calculus/Haskell library on top of SML I will have the no side-effects guarantee only for code written using this library, all other features of SML still don't have this guarantee. This is a fact because we don't have RT on SML as a whole, just on this subset of SML. If you still feel that I'm misunderstanding you, please go ahead and explain it. I'll read your post, but I'm won't bother other LtU regulars with yet another answer (from me) because I can't say anything different than I already said. Non-RT and RT languages, as discussed here, are duals of each other. Neither really makes for more guarantees than the other. The difference is that in an RT language everything is RT except the non-RT interpreter on top while in a non-RT language everything is non-RT except the embedded RT interpreter. Using a Haskell like notation for types, you could present this as: In an RT language your program is a value of type NonRT a (for some a - usually ()) that gets interpreted by a non-RT interpreter. In a non-RT language your program can construct values of type RT a and interpret them with the RT interpreter (and the point is that such computations are guaranteed to be RT and be subject to further optimization, for example). In practise, languages like Haskell and Clean provide some sugar for the non-RT stuff (e.g. do-notation) and also include one (or more) non-RT interpreter(s) by default (because you couldn't really do anything useful without it) while languages like OCaml and SML do not. I can imagine adding some sugar (syntactic and perhaps more) to an ML like (non-RT) language to support RT, but such additions aren't formally necessary. Suppose we have a programming language which allows assignment of local variables only, including arguments. Wouldn't such a language pass the test of being referentially transparent? In this language, each subroutine's result would only depend on its input. Static guarantees can also be given in the context of C as well; tools like Lint do quite effective static analysis. On one hand for a language to be RT it is often assumed that every operation should be RT and assignment operator is clearly not. It has an side effect, even if just local. On the other hand, on the surface, this is exactly what Haskells state monad does. Isn't the point of RT to have results of functions depending only on their arguments? so why a restricted assignment operator is not RT? The point of RT is that the value of an expression shouldnt depend on any side effects. For functions this means that there value should only depend on ther arguments. For other expressions it means that there value should be constant. If a variable or reference could have its value changed by the (side) effect of an assignment operation, then it isnt constant and therefor not RT. What I mean is to restrict assignment to local variables and arguments - no assignment to variables of the outer scopes. This means that functions' results would depend only on their input. A sequence of functions which modify their arguments seems similar to binding computations using <- in Haskell. It sounds like RT to me, without the limitations of pure FP. Which "limitation" do you see pure functional programming having that is not shared by your scheme? For example, in-place quick sort is not possible (perhaps) with pure FPs, is it? A type system for bounded space and functional in-place update I'm sorry if I'm just being dense, but I don't see how it's possible with your scheme either. You can't mutate an array passed to quicksort, so you have to make a copy local to the function and mutate that, then return it (probably copying it if you intend it to be mutable in the caller). Quicksort is recursive, remember, so now each of the two recursive calls returns separately allocated arrays, so you have to allocate a third array to copy both of those into and return (where has to be copied yet again, if it is to be mutable in the caller). I guess you can get "in-place" (for a suitable definition of in-place, i.e. only making two whole copies of the original array ;)) if you implement your own stack to make quicksort recursive. But then I have to ask the dual of my question---what do you expect to gain by referential transparency? You now have a world where function calls and returns are expensive (all the copying), so you end up with huge function bodies which are as imperative as ever. Seems like a net loss to me. Getting a compiler to notice that expressions like x=f(x) can be optimized with call by reference seems like it would be pretty easy - much easier than trying to get it to notice that a functional quick sort implementation can be made to operate in place. Arguments would be modifiable...therefore there wouldn't be a need for copying the array. Greg Buchholz showed you why it won't work. If you don't copy arguments, they could get stored in a closure and subsequently modified, thus functions no longer depend on only their arguments (this is Greg's example, simplified a bit): (let* ([f (lambda (m) (lambda (n) (* (car m) n)))] [x (list 3)] [g (f x)]) (display (g 7)) (newline) (set-car! x 4) (display (g 7)) (newline)) It doesn't seem to violate your rules. Likewise if you don't copy return values, they could be aliased in a returned closure, and functions no longer depend on only their arguments: (let* ([f (lambda (m) (let ([m (list m)]) (cons m (lambda (n) (* (car m) n)))))] [x (f 3)] [g (cdr x)]) (display (g 7)) (newline) (set-car! (car x) 4) (display (g 7)) (newline)) You can copy your arguments and return values (Matt M thinks that a sufficiently smart compiler can sometimes avoid it). You can disallow higher-order functions (or dump lexical scoping)---ouch. Or, you can disallow mutable data structures (but then you can't write quicksort in place anyway....). Am I just missing something obvious? Closures should only store copy of values in them, not references to values, thus saving the problem of functions depending only on their arguments. If closures close over copies of values from outer scopes, then why on earth disallow assignment to those copies? As long as you have a sufficiently well defined meaning for "no assignment to variables of the outer scopes"... (define (foo number-list) (lambda (multiplier) (map (lambda (x) (* multiplier x)) number-list))) (let* ((n (list 1 2 3)) (bar (foo n))) (begin (display (bar 10)) (set-car! n 99) (display (bar 10)))) I think Felicia answered your question earlier: you asked whether the definition of RT was "functions" that depend only on their arguments. She answered that RT applied to "expressions". Are function calls the only kind of expressions in the language you're thinking of? I did point out the similarities with the state monad a bit up, and sure it is macro expressible. I guess it depends on from where you view it from the outside its RT as Achilleas says, but inside the function it is not. This is by the way also my view of the state monad. To me its just an implementation of an imperative language made in a funcional one. But perhaps it's a path worth taking, especially for people new to functional programming but with experience in imperative programming. Encoding the equivalent, using binding rather than assignment, is certainly a valuable learning exercise - and I sometimes use a similar style, though I don't actually shadow the names. There's no need for a new language there though. It would be far easier to tell a programmer "here is a programming a language like the one you used to code in, with the only exception that assignment is restricted" than "here is a programming language with no assignment". I can testify to that: I've shown and discussed with my colleagues pure functional programming, and they great trouble doing what they have done so far. On the other hand, a programming language with restricted assignment would work because they would continue to do something similar to what they did so far, with a small difference, but with a huge decrease in number of bugs. And just as easy to tell them that assignment works like reference values in Java - that is, binding. Talking about /mutable/ assignment comes later, and let/in starts to look rather a lot like C-style blocks with only one non-declaration statement. There is a nice subset of Oz that is both lazy and concurrent (see chapter 4 of CTM). Semantically, it is a lazy concurrent constraint logic language. What would we software people have done if the hardware people had (don't ask how) provided to us, from the start, only one level of memory? Assume it all cost the same, accessed at the same speed, and persisted until we programmed the CPU to erase it. How would the upgrade problems first have started to manifest themselves? What approaches would have been developed to solve them? After 40 years of evolution in that environment, what would software systems have come out looking like? After re-reading the thread, a question comes up to mind: why should transparent persistence be a feature of an O/S that goes down the kernel? couldn't it be implemented as a language runtime future? in other words, why do you need to build a whole new O/S? You don't have to. But eventually it might be iteresting to re-implement the OS in the new framework. Here's an approach I hadn't thought of (which may or may not have drawbacks, but is interesting): OK, looks as though Torsion (cited by the original poster in this thread) is using this same approach. But I found the above-cited explanation helpful anyway for understanding. I know at least one system using a similar but higher-level approach to incremental check-pointing. If you have an incremental two-space copying GC you can get this almost for free. The idea is to write objects out to disk as they are moved from one semi-space to another, using the memory barrier (which you already need for an incremental GC) to catch attempted mutations of objects in the old semi-space. When an attempted mutation of such an object is caught, the object is prematurely moved to the new semi-space (thus writing it to disk as a side-effect) before the object is mutated. The invariant (that mutations are never applied to objects in the old semi-space) ensures that the check-point is consistent. While we're talking about referential transparency, I see that Peter Sestoft has a copy of "Referential transparency, definiteness and unfoldability" available from his publications page now. (Thanks Peter!)
http://lambda-the-ultimate.org/node/526
CC-MAIN-2016-18
refinedweb
8,596
60.45
1.2 anton 1: \ environmental queries 1.1 anton 2: 1.31 ! anton 3: \ Copyright (C) 1995,1996,1997,1998,2000,2003.25 anton 19: \ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111, USA. 1.8 anton 20: 1.3 pazsan 21: \ wordlist constant environment-wordlist 1.1 anton 22: 1.26 anton 23: vocabulary environment ( -- ) \ gforth 24: \ for win32forth compatibility 25: 26: ' environment >body constant environment-wordlist ( -- wid ) \ gforth 1.20 crook 27: \G @i{wid} identifies the word list that is searched by environmental 1.18 crook 28: \G queries. 1.26 anton 29: 1.3 pazsan 30: 1.10 anton 31: : environment? ( c-addr u -- false / ... true ) \ core environment-query 1.20 crook 32: \G @i{c-addr, u} specify a counted string. If the string is not 33: \G recognised, return a @code{false} flag. Otherwise return a 34: \G @code{true} flag and some (string-specific) information about 35: \G the queried string. 1.2 anton 36: environment-wordlist search-wordlist if 37: execute true 38: else 39: false 40: endif ; 41: 1.15 jwilke 42: : e? name environment? 0= ABORT" environmental dependency not existing" ; 1.13 jwilke 43: 1.27 jwilke 44: : $has? environment? 0= IF false THEN ; 1.14 jwilke 45: 1.27 jwilke 46: : has? name $has? ; 1.14 jwilke 47: 1.2 anton. 1.7 anton 55: 8 constant ADDRESS-UNIT-BITS ( -- n ) \ environment 1.19 crook 56: \G Size of one address unit, in bits. 1.2 anton 57: 1.18 crook 58: 1 ADDRESS-UNIT-BITS chars lshift 1- constant MAX-CHAR ( -- u ) \ environment 59: \G Maximum value of any character in the character set 1.1 anton 60: 1.18 crook 61: MAX-CHAR constant /COUNTED-STRING ( -- n ) \ environment 62: \G Maximum size of a counted string, in characters. 1.3 pazsan 63: 1.18 crook 1.21 anton 77: \G True if @code{/} etc. perform floored division 1.18 crook 1.22 anton 92: \G Counted string representing a version string for this version of 93: \G Gforth (for versions>0.3.0). The version strings of the various 1.23 anton 94: \G versions are guaranteed to be ordered lexicographically. 1.1 anton 95: 1.18 crook 96: : return-stack-cells ( -- n ) \ environment 97: \G Maximum size of the return stack, in cells. 1.10 anton 98: [ forthstart 6 cells + ] literal @ cell / ; 99: 1.18 crook 100: : stack-cells ( -- n ) \ environment 101: \G Maximum size of the data stack, in cells. 1.10 anton 102: [ forthstart 4 cells + ] literal @ cell / ; 103: 1.18 crook 104: : floating-stack ( -- n ) \ environment 1.19 crook 105: \G @var{n} is non-zero, showing that Gforth maintains a separate 106: \G floating-point stack of depth @var{n}. 1.11 pazsan 107: [ forthstart 5 cells + ] literal @ 108: [IFDEF] float float [ELSE] [ 1 floats ] Literal [THEN] / ; 1.10 anton 109: 1.6 anton 110: 15 constant #locals \ 1000 64 / 111: \ One local can take up to 64 bytes, the size of locals-buffer is 1000 1.5 anton 112: maxvp constant wordlists 1.2 anton 113: 114: forth definitions 115: previous 1.1 anton 116:
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/environ.fs?annotate=1.31;sortby=log;f=h;only_with_tag=MAIN;ln=1
CC-MAIN-2022-33
refinedweb
526
71.71
In Python, an array is used to store multiple items of the same type together. All elements of an array must be of the same kind. Arrays are handled by a module named NumPy. Slicing an array means to print the elements of an array for a specified range using their index positions. In Python, index 0 means the first character. For example, the first character of the word PYTHONis P, and its index position is 0. Similarly, the last character is N, while its index is 5. When you want to slice an array, you must specify which index to start from and which index to stop at. For instance, we write a code like the one below: print(array1[0:4]) In the code above, we tell Python that it should help us slice and print the elements of an array called array1, starting from index 0 (i.e. the first character) to index 4, not inclusive. Now, let us write a simple code to illustrate how to slice an array. import numpy as np array1 = np.array([1, 2, 3, 4, 5, 6, 7]) print(array1[0:4]) We can see that we can slice the array array1, thereby printing elements from index 0 to index 4. Notice that the last element is 4, but that element is at index 3 of the given array. This explains why the end index element is not included in the slicing operation. If the last index in our code, which is index 4, is included, then the last element of the slice would be 5. When we specify the range of elements to be sliced, the start index is written before the colon :, while the end index is written after the colon. When you want the slicing to start or end at the first and last indexes, you need to leave the index values blank. In the example below, we slice from the beginning and last index of a given array. import numpy as np array1 = np.array([1, 2, 3, 4, 5, 6, 7]) # to start at the beginnig index and stop at index 4 (not included) print('beginning of the array to index 4 of the array is', array1[:4]) # to start at index 2 and end at the end of the array print('Index 2 to the end of the array is ', array1[2:]) You can see from the code above that a blank is left before or after the colon when we want to specify the beginning or end of an index. import numpy as np array1 = np.array([1, 2, 3, 4, 5, 6, 7]) # to slice from the index 2 from the end to index 1 from the end print('>>[-3:-1] is given as:', array1[-3:-1]) # using a step size of 2 and slicing from 0 index to 6 index print('>> [0:6:2] is given as: ', array1[0:6:2]) After we specify the startand endindex before and after the colon :, we can add another colon, and the value that comes right after this colon is the stepsize. For example, for [0:6:2], the step is 2. RELATED TAGS CONTRIBUTOR View all Courses
https://www.educative.io/answers/how-to-slice-an-array-in-numpy
CC-MAIN-2022-33
refinedweb
533
67.89
Provided by: manpages-dev_4.15-1_all NAME ioctl_ns - ioctl() operations for Linux namespaces DESCRIPTION Discovering namespace relationships The following ioctl(2) operations are provided to allow discovery (since Linux 4.9) Returns a file descriptor that refers to the owning user namespace for the namespace referred to by fd. NS_GET_PARENT (since Linux 4.9). Discovering the namespace type The NS_GET_NSTYPE operation (available since Linux 4.11) can be used to discover the type of namespace referred to by the file descriptor fd: nstype = ioctl(fd, NS_GET_NSTYPE); fd refers to a /proc/[pid]/ns/* file. The return value is one of the CLONE_NEW* values that can be specified to clone(2) or unshare(2) in order to create a namespace. Discovering the owner of a user namespace The NS_GET_OWNER_UID operation (available since Linux 4.11) can be used to discover the owner user ID of a user namespace (i.e., the effective user ID of the process that created the user namespace). The form of the call is: uid_t uid; ioctl(fd, NS_GET_OWNER_UID, &uid); fd refers to a /proc/[pid]/ns/user file. The owner user ID is returned in the uid_t pointed to by the third argument. This operation can fail with the following error: EINVAL fd does not refer to a user namespace. ERRORS Any of the above ioctl() operations can return the following errors: ENOTTY fd does not refer to a /proc/[pid]/ns/* file. CONFORMING TO Namespaces and the operations described on this page are a Linux-specific. EXAMPLE The example shown below uses the ioctl(2) operations described above to perform simple discovery of namespace relationships. The following shell sessions show various examples of the use of this program. Trying to get the parent of the initial user namespace fails, since it has no parent: $ ./ns_show /proc/self/ns/user p The parent namespace is outside your namespace scope Create a process running sleep(1) that resides in new user and UTS namespaces, and show that the new UTS namespace is associated with the new user namespace: $ unshare -Uu sleep 1000 & [1] 23235 $ ./ns_show /proc/23235/ns/uts u_show /proc/23235/ns/user p Device/Inode of parent_show /proc/self/ns/user p The parent namespace is outside your namespace scope sh2$ ./ns_show /proc/self/ns/uts u The owning user namespace is outside your namespace scope Program source /* ns_show fstat(2), ioctl(2), proc(5), namespaces(7) COLOPHON This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.ubuntu.com/manpages/bionic/en/man2/ioctl_ns.2.html
CC-MAIN-2021-39
refinedweb
437
54.22
GraphQL With Python In this day and age, if you've worked on anything internet related, you have probably come across the word API. This stands for Application Programming Interface. This basically defines the interactions between multiple software interfaces, be it online or offline. Now what is the first thing that comes to mind when you think of online APIs? REST right? Well, GraphQL is here to change that. What is GraphQL? As defined on their created by Facebook in 2012 and used by them for years before coming out as open-source in 2015, and is used by multiple companies, such as Github, Coursera, Pinterest amongst many others. Why Use GraphQL? It makes querying for data infinitely easier, by availing all the data a client would need without having to make many different endpoints as we would have to using REST. It also comes with it's own native editor, that can allow you to see what possible issues you may have with your queries called graphiQL, allowing you to test your queries quickly and efficiently. At the same time, with its native typing, it allows a client querying to know exactly what they are supposed to send, and if they do not, it offers very helpful and insightful tips as to how to fix it. Now that we've gotten that out of the way, let us find out how to implement a simple API using FastAPI, SQLAlchemy and MySQL. All you need for this is base level knowledge in Python, and a tiny bit of MySQL. Let's begin. Installation First things first, you need to install FastAPI. FastAPI is a relatively new API framework that was created by Sebastian Ramirez that he calls "starlette on steroids". Basically it takes the best parts of Starlette and Flask and groups them into a wonderful framework. To install it, simply go to your command line and input: pip install fastapi to use Python's native pip package installer to install it. Once it has been installed, follow it up by installing sqlalchemy to allow us to interact with the database. pip install sqlalchemy sqlalchemy is an ORM (Object-relational mapper) that abstracts the database and makes it easier to interact with, basically making writing queries less of a chore. Starting the Project We're going to be creating a simple API that allows a user to add a campaign, all people to contribute to it, and monitor how it has progressed. First things first, create a folder called 'lipiame' (the name of the product) with the following structure. ├── lipiame │ ├──models.py │ ├──schema.py | ├──settings.py models.py is where the sqlalchemy models are going to be, schema.py is where the graphql queries and mutations (more on these later) are going to live and settings.py is where the sqlalchemy connection is going to lie, among other things. Let's start with the settings.py file. from sqlalchemy import create_engine engine = create_engine('mysql+pymysql://root:@localhost/lipiame', pool_recycle=3600) This is what we use to connect to the database. The database binding uses pymysql, so if you do not have it, you can install it using pip: pip install pymysql The rest of the string is broken down like this: 'username:password@host/database'. In our case, seeing as it is running locally and therefore does not have a password, we leave that part blank. This is how models.py looks: from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, DateTime, Text, Date from settings import engine from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Campaign(Base): __tablename__ = 'campaign' id = Column(Integer, primary_key=True, autoincrement=True) campaign_name = Column(String(50)) amount_contributed = Column(Integer, default=0) user_email = Column(String(50)) Base.metadata.create_all(engine) What this does is create a table called 'campaign', with the campaign name, the amount that has been contributed so far (which we set as the default of 0) and the email it is associated to. At the bottom of the code block there is: Base.metadata.create_all(engine) which creates the tables in the database. We call engine from the settings file so we can actually access the table. Make sure the database exists before running this otherwise it will throw an error. Upon running this on your command line: python models.py the tables should be created. Now let's go to the heart of the project, in schema.py. Session = scoped_session(sessionmaker(bind=engine)) class CreateCampaign(graphene.Mutation): id = graphene.Int() campaign_name = graphene.String() user_email = graphene.String() class Arguments: campaign_name = graphene.String() user_email = graphene.String() def mutate(self, info, user_email, campaign_name): session = Session() campaign = Campaign(user_email=user_email, campaign_name=campaign_name) session.add(campaign) session.commit() return CreateCampaign( id=campaign.id, campaign_name=campaign.campaign_name, user_email=campaign.user_email, ) class Mutation(graphene.ObjectType): create_campaign = CreateCampaign.Field() class CampaignType(SQLAlchemyObjectType): class Meta: model = Campaign class Query(graphene.ObjectType): campaign = graphene.List(CampaignType, user_email=graphene.String()) def resolve_campaign(self, info, user_email): session = Session() campaign = session.query(Campaign).filter(Campaign.user_email == user_email).all() return campaign routes = [ Route('/', GraphQLApp(schema=graphene.Schema(query=Query, mutation=Mutation))) ] app = Starlette(routes=routes) It's a lot, but let's break it down. FastAPI graphQL runs on Starlette, hence all the imports from it, the Route sets which URL graphiQL is going to run on, and GraphQLApp is graphiQL itself. Graphene and graphene_sqlalchemy are necessary for the bindings to the tables in the database. We import the Campaign class from models that was the definition of the table. We import the engine, which is used to connect to the database. Session = scoped_session(sessionmaker(bind=engine)) This simply creates the session to enable threading safely so there won't be overlaps with variables from the database. Now before we look at our first mutation, let's define what it is. A mutation is basically anything that modifies data in a database. It's that simple. Now let's see how the first one looks. class CreateCampaign(graphene.Mutation): id = graphene.Int() campaign_name = graphene.String() user_email = graphene.String() These values represent what is to be returned once the values are added to the table, so in this case, it will return the record id, the name, and the user email associated to. class Arguments: campaign_name = graphene.String() user_email = graphene.String() This defines what the mutation accepts. In this case, the campaign name, and the user email. def mutate(self, info, user_email, campaign_name): This is where the variables are passed. Info is used in relays, but we won't use it in this case. session = Session() This is where the database connection is opened. campaign = Campaign(user_email=user_email, campaign_name=campaign_name) session.add(campaign) session.commit() Now this is the meat of the mutation. We use the Campaign model we created, and add the user_email and campaign_name that have been passed. We then add it to the session, and then commit it. return CreateCampaign( id=campaign.id, campaign_name=campaign.campaign_name, user_email=campaign.user_email, ) This is what is returned back to the front end, the id, campaign_name and user_email. class Mutation(graphene.ObjectType): create_campaign = CreateCampaign.Field() This is what defines the mutations that can be availed to the clients. If your created mutation is not put in this class, it will not be accessible. Now let's break down the query. class CampaignType(SQLAlchemyObjectType): class Meta: model = Campaign This class is what allows GraphQL to be able to access the Campaign table. class Query(graphene.ObjectType): campaign = graphene.List(CampaignType, user_email=graphene.String()) The class Query is where all queries are put. In this case, we call the query 'campaign', and pass the CampaignType class that we defined above for the query to work. We then define the input we expect to get from the client, in this case, user_email. def resolve_campaign(self, info, user_email): session = Session() campaign = session.query(Campaign).filter(Campaign.user_email == user_email).all() return campaign The query is called a 'resolver' hence why it is set up as 'resolve_campaign' here. We pass the user_email we get, and use that to query the table using sqlalchemy, specifically the user_email table, after which we return all the results that match the query. routes = [ Route('/', GraphQLApp(schema=graphene.Schema(query=Query, mutation=Mutation))) ] app = Starlette(routes=routes) With this, we define endpoint where we want our graphiQL editor. We then pass our Query and Mutation classes through the schema for them to available, and then run it via the app. Now let's run this and see it in action. First things first, install Uvicorn using pip. pip install uvicorn This is what we use to be able to run the code. Navigate to the folder with the schema.py and run it: uvicorn schema:app --host 127.0.0.1 --port 8000 Now if you go to, you should see the following: And you can start writing graphQL queries! The best part about graphiQL, the autocomplete. This makes it very simple to write the queries and check if everything is okay. Let's create a record. And it's as simple as that. Within the curly braces, we can decide what we want returned. If we only want the record id, we can just pass id. But! You cannot query anything more than what we set earlier, specifically, the record id, the email and campaign name. One thing to note is that, even if you've saved the column names in snake case, graphQL overwrites it and sets them in camel case, but only in the queries, everywhere else they maintain their snake case properties. Now let's query that record to see how much has been contributed. Nothing yet :( So let's write something to enable someone to contribute. class CampaignContribute(graphene.Mutation): id = graphene.Int() campaign_name = graphene.String() amount_contributed = graphene.Int() class Arguments: campaign_id = graphene.String() contribution = graphene.Int() def mutate(self, info, campaign_id, contribution): session = Session() campaign = session.query(Campaign).filter(Campaign.id == campaign_id).first() campaign.amount_contributed += contribution session.add(campaign) return CampaignContribute( id=campaign.id, campaign_name=campaign.campaign_name, amount_contributed=campaign.amount_contributed, ) The difference between this and the previous mutation is that this one gets a record that already exists and updates it, in this case, the amount contributed. This record is queried using the record id. Now someone is feeling generous and wants to contribute, so let's avail this and allow them to contribute. class Mutation(graphene.ObjectType): create_campaign = CreateCampaign.Field() campaign_contribute = CampaignContribute.Field() And now we contribute. Someone wants me to go to university, yay! Conclusion This was a small taste of how graphQL works with FastAPI and sqlalchemy. You see how simple it can be, accessing a database and interacting with it. Now go out there and was the taste of REST out of your mouth with this new and fun experience. This project can be found on my here
https://developers.decoded.africa/graphql-the-python-experience/
CC-MAIN-2020-50
refinedweb
1,799
58.89
#include <ilviews/base/view.h> View class. Library: xviews or winviews or mviews (mutually exclusive) This is an abstract class without any constructor. Thus, instances can only be created from subtypes of this class. Objects of the IlvAbstractView class and its derived subclasses give rise to actual windows or views that are displayed on your screen. A view is a visual place holder – a rectangular object on your screen – to display elements of an application. A window on the screen is an associated set of one or several views. Every view is distinguished by its location (x, y coordinates), size (height and width), and visibility (that is, an existing view can be visible or not visible). Accessors provide a scriptable and uniform way to inspect and modify an object by using its base class methods IlvValueInterface::queryValue(), IlvValueInterface::queryValues(), IlvValueInterface::changeValue(), IlvValueInterface::changeValues(). This class inherits the accessors of its superclass IlvSystemPort and adds the following ones: Retrieves the relative dimensions of the view. Allows the view to receive multi-touch events. IlTrueif the view receives multi-touch events, IlFalseotherwise. Ensures that a point remains visible. This member function makes sure the given point p is visible to the user. This is meaningful only if isScrolled() returns IlTrue, that is, if this object is scrolled in a system scrolling view. The parent scrolling view takes care of its subwindow displacement to guarantee the visibility of p. Ensures that a rectangle remains visible. This member function makes sure the given rectangle rect is visible to the user. This is meaningful only if isScrolled() returns IlTrue, that is, if this object is scrolled in a system scrolling view. The parent scrolling view takes care of its subwindow displacement to guarantee the visibility of rect. If rect represents a bigger rectangle than the scrolling window is able to display, then the subwindow is centered in the scrolling region on the center of rect. That is, the center of rect is moved to the center of the scrolling window, but the boundaries of rect are not visible. Erases an area of this view. Erases the indicated part of this view, using the background color of the background bitmap. Entirely erases the view. Erases the entire view, using the background color of background bitmap. Retrieves the dimension of the view frame. Retrieves the background color of this view. Gets the background bitmap of this view. Indicates if IlvButtonDragged compression is enabled or not. IlTrueif the compression is enabled and IlFalseif it is disabled. getCompressPointerMoved(), setCompressPointerMoved(), setCompressButtonDragged(). Indicates if IlvPointerMoved compression is enabled or not. IlTrueif the compression is enabled and IlFalseif it is disabled. setCompressPointerMoved(), getCompressButtonDragged(), setCompressButtonDragged(). Retrieves the cursor currently set in this view. lock()and unLock()on this resource (see IlvResource::lock()and IlvResource::unLock()). Returns the opacity level of the view. IlvIntensityvalue. 0means that the view is completely transparent and IlvFullIntensitymeans that it is completely opaque. Gets the parent of this view. IlvAbstractViewof the view, or 0if this view doesn't have an IlvAbstractViewparent. 0if this view was built using an IlvSystemView. Gets the system-dependent identifier of the top shell of this view. 0in every case but a Motif top window. Returns the class name of this object. Reimplemented from IlvStylable. Returns the display for this object. Implements IlvStylable. Returns the stylist for this object. 0if there is none. Implements IlvStylable. Gets the system-dependent identifier of this view. An IlvAbstractView object encapsulates a real interface object of your display system (sometimes referred to as a widget). The member function getSystemView() returns this object. The IlvSystemView type is the basic type of your display system widgets, and is thus platform-dependent. Returns the color used as a key for transparency. Retrieves the absolute dimensions of the view. Queries if this view has the display grab. IlTrueif this view has the display grab. Hides the view. Removes the window from the visible windows on your screen. Reimplemented in IlvView, and IlvContainer. Returns whether the view is layered or not. IlTrueif the window is layered, IlFalseif not. Queries if the view is set to receive multi-touch events. Queries for scrolling capabilities. IlTrueif this window is the subwindow of a system scrolling window (see IlvScrollView). Scrolling windows own a single subwindow, from which they display only a rectangular region. This allows you to manipulate bigger windows than your screen can display, and navigate within these by means of a smaller rectangular area. Retrieves the sensitivity of this view to input events. IlFalseis returned, no keyboard or pointing device events are received by this object. The window is said to be sensitive if it can receive these events and isSensitivereturns IlTrue. Pushes this view below the others. Puts this view object below any other view on the screen. This is meaningful only if this view is a top window. Reshapes the view. Moves and changes the size of the window in the parent window. Reimplemented in IlvElasticView, and IlvView. Retrieves the position of the view. Brings this view on top. Puts this view object on top of any other view on the screen. This is meaningful only if this view is a top window. Removes a user-defined window procedure. setWindowProc(). Resizes the view. Reimplemented in IlvElasticView, and IlvView. Sets the background color of this view. The color is used when erasing is required. Thus, a call to setBackground() is effective only when regions are erased. Reimplemented in IlvContainer. Sets the background bitmap of this view. The bitmap provided (if it is not 0) is locked. The previous bitmap, if there was one, is unlocked. If the bitmap provided is 0, the background color is used to erase the view. To remove properly a background bitmap that was previously set, you must do the following: This code must be used also for containers (replaces manager by container in code). Reimplemented in IlvContainer. Enables or disables IlvButtonDragged event compression. The IlvButtonDragged compression is similar to IlvPointerMoved compression. The difference is only in the event type: IlvButtonDragged events occur when the pointer moves while one or more mouse buttons are kept pressed. By default, IlvButtonDragged event compression is enabled. Applications that want to monitor every single IlvButtonDragged event should disable this compression. getCompressPointerMoved(), setCompressPointerMoved(), getCompressButtonDragged(). Enables or disables IlvPointerMoved event compression. The IlvPointerMoved event compression is a mechanism to filter events in order to ignore pointer motion events that are redundant. This allows applications to be more reactive to fast successions of pointer motion events. The event compression works iteratively. When the event loop handles an IlvPointerMoved event in a window with compression enabled, it skips all similar events (same type and same window) that IMMEDIATELY follow in the event queue, only keeping the last one. By default, pointer motion event compression is disabled. Applications that want to monitor every single pointer motion event should disable this compression. getCompressPointerMoved(), getCompressButtonDragged(), setCompressButtonDragged(). Sets the cursor for this view. Sets the given cursor pattern for your pointing device whenever it enters this window. This member function calls lock() on the cursor provided as an argument, then unLock() on the old cursor to release a reference to it (see IlvResource::lock() and IlvResource::unLock()). Sets the input focus to this view. Gives this window entire control of keyboard events, that is it "gets the focus." When you call this function, all keyboard events are sent to this object. The only way to let another window receive keyboard events is by giving the focus to another window. setFocus()does not lock your system permanently. Sets whether the window is layered or not. Layered windows only work in Windows 2000 or higher systems. Layered windows are rendered in an offscreen bitmap by the system, and then blended to the screen using various attributes such as partial translucency. This attribute allows improved performance for shaped windows. Sets the opacity value of the view. This method has only an effect on Windows 2000 and higher systems. The view must be a top window. The method setLayered(IlTrue) is called to set the layered attribute. Sets the sensitivity of this view to input events. Sets the transparency color key value. This method has only an effect on Windows 2000 and higher systems. The view must be a top window. All pixels painted in the window using this color will be transparent. The method setLayered(IlTrue) is called to set the layered attribute. Sets a Windows window procedure. This member function is available only for Windows ports of Rogue Wave Views. It sets a window procedure for processing events that should not be handled by Rogue Wave Views but by Windows. IlvWindowProc, removeWindowProc(). Shows the view. Displays the window on the screen. A window can be created in such a way that you cannot see it upon creation, which allows you to draw on that window before it is displayed. Reimplemented in IlvView, and IlvContainer. Retrieves the visible part of the view. Rogue Wave is a registered trademark of Rogue Wave Software, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
http://docs.roguewave.com/visualization/views/6.2/RefMan/refcppfoundation/classIlvAbstractView.html
CC-MAIN-2018-22
refinedweb
1,515
60.61
Available Tools This chapter provides an overview of the InterSystems IRIS® tools that you can use to work with XML schemas and documents. It contains the following sections: Using the XML Schema Structures Page Using the XML Document Viewer Page Importing XML Schemas Programmatically Using the XML Schema Structures Page The Interoperability > Interoperate > XML > XML Schema Structures page enables you to import and view XML schema specifications. For general information on using this page, see “Using the Schema Structures Page” in Using Virtual Documents in Productions. Before importing a schema file, rename it so that its name is informative and unique within this namespace. The filename is used as the schema category name in the Management Portal and elsewhere. If the filename ends with the file extension .xsd, the file extension is omitted from the schema category name. Otherwise the file extension is included in the name. You can use these schemas only to support processing of XML virtual documents as described in this book. InterSystems IRIS does not use them for any other purpose. After importing a schema file, do not remove the file from its current location in the file system. The XML parser uses the schema file rather than the schema stored in the InterSystems IRIS database. Using the XML Document Viewer Page The Interoperability > Interoperate > XML > XML Document Viewer page enables you to display XML documents, parsing them in different ways, so that you can determine which DocType to use. You can also test transformations. The documents can be external files or documents from the production message archives. To display this page, click Interoperability, click Interoperate, click XML. Then click XML Document Viewer and click Go. For general information on using this page, see “Using the Document Viewer Page” in Using Virtual Documents in Productions. Importing XML Schemas Programmatically You can also load schemas programmatically by using the EnsLib.EDI.XML.SchemaXSD Opens in a new window class directly. This class provides the Import() class method. The first argument to this method is the name of the file to import, including its full directory path. For example: set status= ##class(EnsLib.EDI.XML.SchemaXSD).Import("c:\iiris\myapp.xsd") The EnsLib.EDI.XML.SchemaXSD Opens in a new window class also provides the ImportFiles() method. For this method, you can specify the first argument in either of the following ways: As the name of a directory to import files from. InterSystems IRIS attempts to import all files in this directory, regardless of the file extensions. For example: set status=##class(EnsLib.EDI.XML.SchemaXSD).ImportFiles("c:\iiris\")Copy code to clipboard As a list of filenames, separated by semicolons. You must include the full directory path for the first of these, and you can use wildcards in the filenames. For example: set status=##class(EnsLib.EDI.XML.SchemaXSD).ImportFiles("c:\iiris\*.xsd;*.XSD")Copy code to clipboard For more information, see the class reference for EnsLib.EDI.XML.SchemaXSD Opens in a new window. After importing a schema file, do not remove the file from its current location in the file system. The XML parser uses the schema file rather than the schema stored in the InterSystems IRIS database. XML Classes For reference, this section lists the classes that InterSystems IRIS provides to enable you to work with XML documents. You can also create and use subclasses of these classes. The business host classes include configurable targets. The following diagram shows some of them: For information on other configurable targets, see “Reference for Settings.”
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=EXML_TOOLS
CC-MAIN-2021-25
refinedweb
588
57.16
First of all, I'd like to say that this post is based on that one: But I made some changes to (IMO) simplify it... So, let's start: First, we can add in our file assets/js/app.js this code: // Import specific page views import './views/init'; And now we'll create this init.js file in assets/js/views/init.js and add this code: import loadView from './loader'; function handleDOMContentLoaded() { const viewName = document.body now we'll create this loader.js file mentioned on the import, so create the file assets/js/views/loader.js and add this code: import MainView from './main'; export default function loadView(viewPath) { if (!viewPath) return new MainView(); const ViewClass = require('./' + viewPath); return new ViewClass.default(); } And now this other file called main.js in assets/js/views/main.js with this code: export default class MainView { // It will be executed when the document loads... mount() { } // It will be executed when the document unloads... unmount() { } } Nice! :) Now in our controller, we can return a new value to the view, informing the directory of our js file, something like this: conn |> assign(:js_view, "posts/index") |> render("index.html") And now in our layout file, that may be in templates/layout/app.html.eex we can add this code in our <body> tag: <body data- And it's ready! :) So, we'll only try to fetch the JS file is we set the js_view on the controller. Now, to use it, you can create a file in assets/js/views/posts/index.js and add code similar to this: import MainView from '../main'; export default class View extends MainView { mount() { // Add your JS code here... console.log('UHULL! It works!') super.mount(); } unmount() { super.unmount(); } } ✨ Discussion (3) Does this mean that for each request, a JS file is generated? Or is it pre-generated with Webpack? I don't seem to get it completely. You will have an app.min.js file that was generated by Webpack and contains all your JS code. But the idea of this guide is to have some kind of rule in our JS code that is similar to this: So we'll not execute code that is not necessary for some pages... There's also a great library that I use sometimes and do almost the same thing: github.com/lucasmazza/page.js Oh okay ! I thought the idea was to create a JS file for each page. Makes sense now ! Thanks Ricardo !
https://dev.to/ricardoruwer/load-specific-js-files-in-elixir-and-phoenix-o57
CC-MAIN-2021-39
refinedweb
416
69.48
Plotly has the option to zoom in on the chart by dragging the box. Each time the react component is re-rendered, Persist user changes is initialized. In Plotly options, there is option about it. “uirevision in Plotly.react in JavaScript” I’ve already applied it, but there’s no change. Here is my code const GraphContainer = styled(Plot)` width: 100%; height: 300px; `; ..... return ( <> <GraphContainer data={[ { x: graphData.power.x, y: graphData.power.y, name: 'Graph', line: { color: theme.colors.chart.single[1], }, }, ]} layout={{ xaxis: { type: 'date', range: [ '2021-10-21 00:00:00', '2021-10-21 23:59:00', ], }, yaxis: { tickformat: ',.0f', ticksuffix: 'kW', }, uirevision: 'true', }} config={{ responsive: true, displayModeBar: false }} /> </> ); Another thing what I did: In Link, there is explanation “user interaction will mutate layout and set autorange to false, so we need to reset it to true”. According to documents, I’ve setting the layout object to state, uirevision: ‘true’, yaxis and xaxis’s autorange set to true, but not work. Someone help me please. Thank you.
https://community.plotly.com/t/reacts-plotly-js-uirevision-does-not-work/58292
CC-MAIN-2021-49
refinedweb
170
60.01
Help How to check a checkbox - Struts please help me with the syntax?? Hi friend, my checkbox...How to check a checkbox Hello Community, How can i check a checkbox defined with tags. with plain html, the tag checks the box. with the ComponentTagSupport - Struts working. but my requirement is with the help of my custom tag i have to display...Struts2 ComponentTagSupport I am working on struts2 custom tags... body help me. Hi Friend, Please visit the following link: http Checkbox Tag <html:checkbox>: :checkbox > tag : Here you will learn to use the Struts Html <html:checkbox...Checkbox Tag <html:checkbox>: html: checkbox Tag - is used to create a Checkbox Input2 and Hibernate Struts2 and Hibernate Sir/ madam, Can we use iterator tag in struts for fetch the database value and shown on form. if yes then how CheckBox in Flex CheckBox item. <mx:CheckBox> tag is use to access this control. Like any...Flex CheckBox Control: The CheckBox control of Flex is a very common GUI, which is supported by almost every computer language. CheckBox control is used - Struts help, thans! information: struts.xml HelloWorld.jsp... HelloWorld.jsp Remove the attribute "namespace" from the Tag Use attribute "namespace" in Tag For read more information to visit this link struts2 - Struts struts2 hello, am trying to create a struts 2 application that allows you to upload and download files from your server, it has been challenging for me, can some one help Hi Friend, Please visit2 Actions is usually generated by a Struts Tag. Struts 2 Redirect Action In this section, you will get familiar with struts 2 Redirect action... Struts2 Actions Struts2 Actions Training Tag Submit Tag Reset Tag Day--5 A Simple Struts2 Login Application... Struts2 Training Apache Struts is an open-source framework that is used to develop Java applications. Struts... for getting all records in tag: URL: Reg: Tree view in Struts using ajax - Struts in all struts tld file.. please help me if you know thanks in advance? ... Struts Tag : 1)Struts download site: Tree view in Struts using ajax HI all, Can you figure
http://www.roseindia.net/tutorialhelp/comment/24235
CC-MAIN-2015-40
refinedweb
351
72.16
16 February 2011 16:37 [Source: ICIS news] TORONTO (ICIS)--Air Products will not seek to appeal an adverse US court ruling that effectively ended its year-long $5.9bn (€4.4bn) battle to take over rival Airgas, the US-based international industrial gases major said on Wednesday. Air Products would “move on” to other opportunities, although it did not see any deals on the scale of Airgas, CEO John McGlade and chief financial officer Paul Huck said in an analysts' call. “We are done [with Airgas],” Huck said, when asked if his company might make a new bid for Airgas at a later time. Air Products withdrew its $70/share offer for Airgas on Tuesday, after the Delaware Chancery Court upheld Airgas’s “stockholder rights plan”. Commentators said the court’s decision was in line with rulings that upheld “poison pills” defences and favoured a target’s incumbent management in US takeover fights. “We made our best and final offer of $70/share, and the Airgas board still will not let the Airgas shareholders decide for themselves whether they want to accept that offer,” said McGlade. “Despite [Airgas statements], we are convinced they are unwilling to sell the company at any price,” he said. “We believe the Airgas board has done a great disservice to the Airgas shareholders,” McGlade added. Air Products launched the bid, initially for $60/share, in February 2010. Airgas continued to value the company at $78/share. Meanwhile, Airgas announced a $300m share repurchase programme on Wednesday. Air Products’ shares were up 3.86% to $93.70 at 10:45 local time in ?xml:namespace> ($1 = €0.74)
http://www.icis.com/Articles/2011/02/16/9436013/air-products-will-not-appeal-airgas-court-ruling-will-move-on.html
CC-MAIN-2015-18
refinedweb
273
63.8
docs.intersystems.com MSM to Caché Conversion Guide Caché System Management [Back] Caché System References > MSM to Caché Conversion Guide > Caché System Management Search : Configuring Caché To configure Caché, you use the Management Portal. The Caché System Administration Guide includes detailed information on how to use the manager, and also highlights the minimum and maximum values for each parameter. Alternatively, you can access Caché’s online help by pressing the <F1> key with a particular field within the utilities selected. Tips and tricks on basic system configuration: Check that Maximum # of User Processes equals the total process count of your Caché license, unless you want fewer users to gain access. Keep your partition size at a sensible level (the default is 1MB), unless your application requires larger values. If you set this value too large, you will use memory and swap space inefficiently. Caché partitions are fixed size and do not expand and contract as do MSM partitions. You cannot dynamically change the partition size because there is no %PARTSIZ equivalent utility in Caché. In many cases, increasing the number of Global and Routine Buffers is the best and quickest way to improve performance. See the appendix MSM and Caché Utilities Catalog for a list of MSM utilities and their Caché equivalents. Configuring Devices In most cases, devices must first be set up at the operating system level. Once configured at the OS level, you can immediately begin using these devices from within Caché. If you need to set up mnemonic names or numeric aliases for these devices, use the device configuration utility in Management Portal. These Caché-level device configurations are stored in the cache.cpf file. At each Caché startup, the system will read the cache.cpf file and recreate the system level globals in the CACHESYS database. Tips on configuring devices: Caché has a set of reserved, built-in device numbers which are generally different from MSM. See the Caché I/O Device Guide for more information. You only need to enter devices into Caché’s device tables if you want to access them via a mnemonic name, such as SUN, or a numeric alias, such as 100 (common on MSM systems). Mnemonic names are used by the character-based utility called ^%IS, which is used by utilities such as ^INTEGRIT and ^%G. Numeric aliases are used directly by Caché’s OPEN, USE, and CLOSE commands. The use of aliases is the best way to get MSM-like device handling. If you do not require the use of mnemonic names or numeric aliases, you can still access your devices through either the ^%IS utility or through OPEN, USE, and CLOSE commands. For example, the command OPEN "/dev/rmt0":"R" is perfectly valid, provided that /dev/rmt0 is a valid device on your system. Pipes are a very effective way of accessing printers and other devices on your machine. See the Caché I/O Device Guide for details on setting up and using pipes. If you use Caché’s SPOOL device, it might be a good idea to store the ^SPOOL global in an isolated location so that it does not take up space in your production environment. You can then reference this global through a namespace configuration. See the Caché System Administration Guide for information on configuring devices in Caché. Automating Caché Backups Caché backups and restorations are designed to run on live systems. These backups can either be system wide, on a per-database basis, or on globals and routines individually. Automating Caché backups from the OS level can be done with a few considerations. There are four recommended strategies: An OS scheduler can call into Caché’s backup API, performing Caché’s concurrent backup. Any of Caché’s backup strategies are available through this API, including full, cumulative, and incremental backups. This strategy is recommended for live automated backups. Caché can be brought down via OS scripting, and then an OS level backup can be run. Caché’s databases can be frozen while an OS level backup is being performed. An OS level backup can perform a full backup of a live Caché database. For this strategy, a valid cumulative backup must also be performed immediately after to ensure physical integrity. The Caché backup API can be called here to automate the cumulative backup. The steps for this procedure are as follows: As a pre-backup command, clear incremental bitmaps: Set x=$$CLRINC^DBACK(1) Run the OS backup on the live system As a post-backup command, perform cumulative backup: Set x=$$BACKUP^DBACK(Arg1,Arg2,...Arg10) This option requires that your OS-level backup allow files to change while the backup is being performed. Choose your OS backup software with care. Restoration of system backups is performed via the character-based utility called ^BACKUP. See the Caché System Administration Guide for more information on the basic backup types, and how to perform them. Caché Journaling Journaling in Caché is very similar to what you are used to in MSM. When used in conjunction with backups, it is the best mechanism for bringing a system as up-to-date as possible after a system failure. Journaling is also used to keep track of your application’s transactions. For information on journaling, see the High Availability Guide . The Before Image Journaling (BIJ) feature is equivalent to the Caché feature called Write Image Journaling (WIJ). This is automatically enabled on Caché systems. It consists of a single file (unlike MSM that has a separate BIJ file for each bullet proofed volume group) named cache.wij , located in the \cachesys\mgr directory. Its size is not fixed and grows as necessary to accommodate modified blocks. After Image Journaling is also automatically enabled. Unlike MSM, Caché journal files are created as needed and do not have to be defined in advance. Each time Caché is restarted a new journal file is begun. The age at which a journal file is automatically deleted is determined by settings that can be modified in the Management Portal. Journaling Tips: A process can enable or disable journaling for itself via the ENABLE and DISABLE line tags of ^%SYS.NOJRN, respectively. Do not use the Journal All Globals option unless you really need to. Choosing this option will journal every global in your database, which can lead to an extraordinarily large journal file, reduced system performance and increased network traffic if Shadow System Journaling is employed. Temporary globals that can be deleted upon system restart should never be journaled. It is a good idea to switch your journal files after each backup. This process can be automated. Under a shadow system journaling configuration, a global READ will access only the local version of the global, unless an extended global reference is used. Shadow System Journaling While Caché’s Shadow System Journaling is very similar conceptually to MSM’s Cross-System Journaling, you will find that Caché’s Shadow System Journaling is much more feature-rich. For example, in Caché you have two modes of data transfer available to you: Fast (previously block-mode) Transmission The shadow connects to the database server via TCP and captures the live journal flat file. Acquired transactions are optionally applied to the Shadow machine. Choosing not to apply the acquired transaction is a good way to keep redundant journal files. Since many transactions are captured at once via a binary journal block, block-mode tends to be quicker than record-mode. Applied transactions are optionally logged in the shadow machine’s local journal file. Compatible (previously record-mode) Transmission The shadow connects to the database server via TCP and captures transactions via packaged strings. Acquired transactions can be programmatically scanned before applying the transactions. You have access to the following information when scanning: Address of current record Transaction type, such as SET or KILL Global reference, if any New value to which global is set Since transactions are captured via packaged strings, record-mode tends to be slower than block-mode. Applied transactions are optionally logged in the shadow machine’s local journal file. Record mode shadowing can be employed when differing endian systems are to be linked (e.g. Intel and Sun), unlike block mode shadowing, which is limited to transmissions of the same endian type. The transport and delivery mechanism differs between MSM and Caché. MSM utilizes a push mechanism via DDP -- individual sets and kills are applied as regular cross system updates to the shadow server. Caché instead pulls the data from the primary server to the shadow server via TCP/IP. The Caché use of native TCP results in benefits such as much improved shadowing performance and easier setup of WAN support. It also allows enhanced capabilities such as shadowing across the Internet and the ability to define multiple shadow servers for each primary server. Shadowing and Switching from MSM to Caché Given that MSM Cross System Journaling is implemented using simple DDP cross-system sets and kills (i.e. using extended global references), and Caché supports DDP, a MSM server can in fact shadow to a Caché server. This can be a big help in preparing for a switch over with minimal down time of a production system. For example, consider the following steps when replacing a MSM Primary/Shadow pair with its Caché equivalent: Take the MSM shadow off line, back it up, enable journaling (or switch journal spaces if already enabled). Restore the backup onto the Caché server. Convert the MSM UCIs to Caché databases (see previous chapters). Configure DDP between Caché and MSM so that MSM can see the Caché databases. Configure Cross System Journaling on the MSM shadow to point to the correct databases on the Caché primary. Enable Cross System Journaling on the MSM shadow. Updates will then begin to arrive on the Caché primary and will continue to do so. Of course a Caché shadow can be created (using the original Caché databases created in #3 above), and connected to the Caché primary. Once the systems are stable and a switchover date and time has been set, disable all access to the MSM primary server, allow all updates to filter down to the Caché primary, shut down the MSM servers (to be safe), and then give access to the users to the new Caché primary instead of the old MSM one. The last step maybe as simple as changing an IP address of a server or in the client configuration. Using this technique can result in a total downtime literally measured in minutes. [Back] [Top of Page]   Content for this page loaded from GMSM_management.xml on 2019-06-14 08:31:38 View this book as PDF | Download all PDFs Send us comments on this page © 1997-2019 InterSystems Corporation, Cambridge, MA
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GMSM_management
CC-MAIN-2019-26
refinedweb
1,795
53.51
I am developing for Android in Eclipse Helios. I’m using the latest ADT plugin 8.0.1. Previously, I could see method javadoc description, when moving my mouse over a method name and waiting for about a second. Now it stopped working for some reason. […] I want to install the Intel x86 Atom System Image because my emulator speed is too slow. Each time I tried to install it from the Android sdk manager I failed getting this message:- Fetching URL: Validate XML: Fetching URL: Validate XML: Done loading packages. Preparing to install archives Downloading Intel […] When i right click in the debug perspective and select Terminate, Eclipse disconnects, but the process continues to run on the device. How can i get the program to actually terminate so no more code is executed on the device? Am i looking at a bug? I dont recall if this worked in Galileo. I […] I have a problem with the driver for a LG L90. The ADT (for Eclipse) doesn’t see the smartphone. The problem is that the phone isn’t able to install the drivers. Does anyone have any idea how I can do? The drivers, that I tried to install, are those of the official website. I installed fresh ADT: Then I installed: When I got into the Eclipse readme directory there is: Eclipse Project Release Notes Release 4.3.0 Last revised May 29th, 2013 I created fresh Android application then right clicked on it->Google->Generate Google App Engine Backend and this is what I got: Description Resource Path Location Type The […] I would like to copy a quite big directory from the assets folder of my app to the data folder on the first run of the app. How do I do that? I already tried some examples, but nothing worked, so I don’t have anything. My target is Android 4.2. Thanks, Yannik I am testing the code here :. Under features there is this sweet sounding line : “Android is fully supported”. But being completely new to maven I can make neither head nor tail of the instructions. How do I build a basic Android test project with this code? Simply adding the source code from […] I start with Cocos2D-X for android following. I run the demo in xcode and android with no problem, until I go to the ‘ Defining a Combined Java/C++ Project in Eclipse’ part. After I do all in this, I get the error Symbol ‘cocos2d’ could not be resolved for using namespace cocos2d; in jni/hellocpp/main.cpp […] Hello I am writing an android application, but as I do run this application, the following application generates and application doesn’t appear on the windows, please help .. !!!! I will appreciate if I get right solution ,,,,,,
http://babe.ilandroid.com/android/eclipse/page/9
CC-MAIN-2018-22
refinedweb
465
72.76
Writing Custom Web Services for SharePoint Products and Technologies Rah. Contents Introduction About the Web Service Sample Writing a Custom Web Service Creating a Sample Document Upload Web Service Troubleshooting Guide Conclusion Introduction. - If your default Web site (port 80) hosts Windows SharePoint Services, create a virtual server that uses a different port. The new virtual server acts as the development Web server, and the virtual server that hosts Windows SharePoint Services acts as the deployment Web server. - Create a Web Service Project on the development virtual server. - Generate and modify a static discovery (.disco) file and a .wsdl file, save these files as .aspx pages, and then register the Microsoft.SharePoint namespace with the page directive. - Modify the .disco and .wsdl files to support service virtualization. - When you finish developing the Web service, deploy its files to the _vti_bin virtual directory and the _vti_bin\bin virtual directory, which are located on the physical path for the Windows SharePoint Services site.). About the Web Service Sample Download ODC_WritingCustomWebServicesSampleSPPT.EXE and extract its contents to the following directory on a front-end server running Windows SharePoint Services: Local_drive:\CreatingaCustomWebServiceSample Double-click the build.bat file to compile the project and install the Web service. The following files are copied to the Local_drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\isapi\directory: - SPFiles.asmx - spfilesdisco.aspx - spfileswsdl.aspx The following files are copied to the Local_drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\isapi\BIN\directory: - WSCheckOut.dll - WSCheckOut.pdb Writing a Custom Web Service Click Start, point to Administrative Tools, and then click Internet Information Services (IIS) Manager. Expand the branch for the server computer to which you want to add a virtual server. Under the server computer branch, right-click the Web Sites folder, point to New, and then click Web site. In the Name box, type a name for the new Web site. In the Description box, type the description of your virtual server, and then click Next. In the Enter the IP address to use for this Web site box, select the IP address that you want to use, or use the default value (All Unassigned). In the TCP port this Web site should use box, type the port number to assign to the virtual server, and then click Next. Note Verify that this port number is not currently in use. You do not need to assign a host header because Windows SharePoint Service handles hosting. In the Path box, browse or type the path to the location on your hard disk where you want to store your projects. If you do not want to permit anonymous access to your virtual server, clear the Allow anonymous access to this Web site check box, and then click Next. On the Web Site Access Permissions panel, select the permissions that you want to use, and then click Next. Note For best results in most cases, accept the default permissions. By default, the Read permission and the Run Scripts (such as ASP) permission are selected. Windows SharePoint Services automatically adds the Execute (such as ISAPI applications or CGI) permission to the appropriate folders. Click Finish. To create a Web Service Project in Visual Studio .NET On the File menu, point to New, and then click Project. In the Project Types box, select Visual Basic Projects or Visual C Projects, depending on which language you prefer. In the Templates box, select ASP.NET Web Service. In the Location box, type the following path: Click OK.. In Solution Explorer, right-click Service1.asmx, and then click View Code. Remove the comments in the following lines: '<WebMethod()> Public Function HelloWorld() As String ' HelloWorld = "Hello World" ' End Function Compile the project. At the Visual Studio .NET command prompt, type the following line, and then press ENTER to create a Service1.disco file and a Service1.wsdl file in the current folder: Disco Open the Service1.disco file and locate the following line: <?xml version="1.0" encoding="utf-8"?> Replace the preceding line with the following lines: <%@"; %> Save the file as Service1disco.aspx. Repeat steps 2 through 4 to modify the Service1.wsdl file and save it as Service1wsdl.aspx.. Open the Service1disco.aspx file, and then locate the following tag: <contractRef ref="" docRef= "" xmlns="" /> Make the following changes in the <contractRef>tag: <contractRef ref=<% SPEncode.WriteHtmlEncodeWithQuote(Response, SPWeb.OriginalBaseUrl(Request) + "?wsdl", '"'); %> docRef=<% SPEncode.WriteHtmlEncodeWithQuote(Response, SPWeb.OriginalBaseUrl(Request), '"'); %> Locate the following tag: <soap address="" xmlns: Change the <soap address>tag as follows, and then save the changes: <soap address=<% SPEncode.WriteHtmlEncodeWithQuote(Response, SPWeb.OriginalBaseUrl (Request), '"'); %> xmlns: Open the Service1wsdl.aspx file, and then locate the following line: <soap:address Make the following changes to the soap:addressline, and then save the changes: <soap:address location=<% SPEncode.WriteHtmlEncodeWithQuote(Response, SPWeb.OriginalBaseUrl(Request), '"'); %> /> To copy the Web service files to the _vti_bin virtual directory Copy the Service1wsdl.aspx file, the Service1disco.aspx file, and the Service1.asmx file to the _vti_bin virtual directory. This is the directory where all default Web services are stored. Copy the corresponding assembly (.dll) file to the _vti_bin In Notepad, open the spdisco.aspx file of the Local_Drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\ISAPIfolder. Add the following lines to the end of the file within the discovery element and save the file: . Using the Web Service You are ready to use your custom Web service by creating a Windows application in Visual Studio .NET. To use the custom Web Service On another computer, open Visual Studio .NET. On the File menu, point to New, and then click Project. In the Project Types box, select Visual Basic Projects or Visual C Projects, depending on which language you prefer. In the Templates box, select Windows Application. Specify a name and path for the project, and then click OK. On the default form, add a command button to invoke the new Web service. Add a Web reference to the Web service: Right-click the project, and then click Add Web Reference. In the wizard, type the following URL, and then click Open to download the service contract for the Web service: Name this reference WSSServer. Add the following using directive to the project: using System.Net; In the Click event for the new button, add the following lines of code: WSSServer.Service1 _svc = new WSSServer.Service1(); //to use the following line use System.Net in the using section _svc.Credentials = CredentialCache.DefaultCredentials; string _Retval = _svc.HelloWorld(); MessageBox.Show(_Retval); Compile the project and run it, and then click the button to test the Web service. Creating a Sample Document Upload Web Use the method described in the previous section to create a Web service project. Name the project UploadSvc and name the Web service class and constructor UploadFile. In Solution Explorer, right-click Service1.asmx and rename the file UploadFile.asmx. Add a reference to the assembly for Windows SharePoint Services (Microsoft.Sharepoint.dll). By default, this assembly is listed with the other assemblies in the global assembly cache. The assembly for Windows SharePoint Services is also located in the following directory: C:\Program Files\Common Files\Microsoft Shared\web server extensions\60\ISAPI Add the following Web method to the UploadFile.asmx.cs file as a member of the UploadFile class: ; } } Add the following using directives to the project: using System.IO; using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls; Compile the Web service project. Create and modify the .disco and .wsdl files and modify the spdisco.aspx file as described in previous sections, but replace Service1 with UploadFile. Save the files as UploadFiledisco.aspx and UploadFilewsdl.aspx respectively. Copy the .asmx file and the .aspx versions of the .disco and .wsdl files to the _vti_binvirtual directory, and copy the corresponding assembly to the _vti_bin/binvirtual directory. To use the Upload service Open Visual Studio .NET, and then create a standard Windows application. Add a Web reference to the Web service that you created in the preceding procedure. Name the Web reference WSSServer. Add a button and two text boxes to the default form in the Windows application, one text box in which to enter the path of the file to upload and another text box in which to specify the destination document library (for example,). Add the following lines of code to the Click event for the button.); Add the following using directives to the project: using System.Net; using System.IO; Compile the project and run it. Troubleshooting Guide Obtaining the SharePoint Site Context. Testing your Custom Web Service. Adding a Web Reference to your Custom Web Service from Visual Studio .NET. Conclusion.
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint2003/dd583131(v=office.11)
CC-MAIN-2018-34
refinedweb
1,442
59.3
XHTML attributes with qualified names not recognized RESOLVED INVALID Status () People (Reporter: matt-mozilla, Unassigned) Tracking Firefox Tracking Flags (Not tracked) User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_4_11; en) AppleWebKit/525.18 (KHTML, like Gecko) Version/3.1.2 Safari/525.22 Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9) Gecko/2008052906 Firefox/3.0 If an XHTML file defines a namespace prefix which refers to the XHTML namespace and then uses that prefix to specify XHTML attributes using qualified names, those attributes are not recognized correctly. An example of the bug may be seen at: The file is an XHTML 1.1 document, served with Content-Type: application/xhtml+xml. It defines xmlns:h="" and uses the "h" prefix to construct the qualified attribute names "h:id" and "h:href". Neither attribute is correctly handled by Firefox: the CSS style that should apply to div#yes elements isn't applied nor is the link href interpreted correctly. This problem also occurs with Firefox 3.0.3 Linux and Minefield 3.1b2pre on OS X (i.e. Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9.1b2pre) Gecko/20081112 Minefield/3.1b2pre). Reproducible: Always Steps to Reproduce: 1. Load 2. Note that CSS style which would indicate id attribute being recognized correctly isn't applied, nor is the href attribute on the link. Actual Results: The :after content of the div with id="yes" is set to "No". The link does not point to "#". Expected Results: The :after content of the div with id="yes" should be set to "Yes". The link should point to "#". Attributes are in the "null" namespace, not the xhtml namespace. I'm pretty sure this bug is invalid. Unprefixed attributes are in the "null" namespace by default. However, I haven't been able to find anything in any of the XHTML specifications that states that they MUST be in the "null" namespace. Indeed, Opera 9.5 interprets that test page as I would expect, recognizing that the attributes in the XHTML namespace should be interpreted as XHTML attributes. Additionally, Section A.2.2 of the XHTML Modularization 1.1 Recommendation leads me to believe that optionally prefixing attributes is valid. It reads, "... while local attributes depending on the schema implementation may be explicitly prefixed." (See: ) If I am missing some stricture of the XHTML Recommendations that you could point me to, I'm keen to be corrected. I believe you have to infer it from and means that the xhtml document can have a namespaced attribute without it being a parse error providing the namespace is defined. It does not mean that it such an attribute is treated as an alias of an attribute of the same name in the null namespace. See bug 442534 comment 2 Status: UNCONFIRMED → RESOLVED Last Resolved: 10 years ago Resolution: --- → INVALID
https://bugzilla.mozilla.org/show_bug.cgi?id=464594
CC-MAIN-2018-34
refinedweb
492
68.16
Introduction to Hibernate Session There are many object-oriented programming languages having their own syntaxes and libraries while on the other hand, the entire data which we store in the backend is based on the relational model having entirely different protocols and syntaxes to be followed. Hibernate is one of the middleware platforms to bridge this gap. This middleware application I called Object Relational Mapping (ORM). We have various ORM tools such as Hibernate, IBatis, Toplink and many more. In this article, we will focus on Hibernate and its sessions. What is Hibernate? It is the ORM tool used to link and map the objects in the application layer to the database for JAVA programming language. It is built to handle the impedance mismatch between a typical programming language and the relational database. It is free of cost software with a GNU license, can be easily downloaded from the internet. Hibernate is an extension of JAVA persistence API. It supports the Hibernate Query language (HQL). Hibernate’s major role is to link the JAVA objects and classes to the database classes via XML structure or JAVA annotations. Similarly, the datatypes of JAVA should be matched with the database’s datatypes so that there is miscommunication between two different systems. Hibernate can be used to extract the data using queries. It can generate the SQL calls and thereby mitigates manual errors and developer’s work. There are different inbuilt functions in hibernate for ease of use like: load(), update (), get (), merge() : If we are sure that object exist then we use this function to load the hibernate object otherwise we use get() function. Update and merge functions are used to update the database records based on the current sessions already exist or we are totally fresh sessions for the transaction. There are many more functions like this to support hibernate. What is the Hibernate Session? It is a runtime interface between an application and Hibernates which is created on-demand. In other words, it provides the connectivity between your application and database. It offers various functions such as create, delete, get, update to operate on the database by using session methods that exist in four states namely: Transient, Persistent and Detached, Removed. Hibernate has created to serve this purpose. It smoothly connects the database to java language irrespective of any database. It comes with flexible features and thus promoting flexibility of handling data over different platforms. Methods of Hibernate Session - Save(): Save() method generates the primary key and inserts the record in the database. It is similar to the persist() method in JPA but it behaves differently in a detached instance by creating the duplicate record upon database commit. - Update():Update() is used to update the existing database record. It returns an exception if the record is not found or called in a transient instance. - saveOrUpdate(): It saves or updates the database based on the entity passed. It does not return an exception in the transient state but it makes the state to persistent during a database operation. - merge(): Values from a detached entity are updated to the database when the merge() is used by changing the detached entity to the persistent state. - delete(): Delete method works in persistent mode to remove the entity from the database. An exception is returned if no record is found in the database. How to Create a Hibernate Session? In order to create a hibernate session, we have to load hibernate dependencies in the library of the tool which you are using along with database connector. Once, these libraries are loaded we can establish the connection by creating a session using the session factory. Let’s assume we have a table with two columns: Employee Id and Employee Name which should be updated. Code Snippet: import.org.hibernate.session // This way we hall import the hibernate class in the main program. public class testclasshibernate //declaration of class. { public static void main( string[] args ) //The program;s main execution shall start from here. { testprogram = new program(); // Here the class is objectified and then this object is used as a reference to send the values like employee name and employee ID which needs to be imported into the database table. program.setEmpId(101); program.setEmpName(“User1”); Configuration con = new Config().configure().addAnnotedClass testclasshibernate; // Create session factory using configuration SessionFactory sf1 = con.buildsessionfactory(); // Open session method gives the object of session. Session s1 = sf.OpenSession(); // Opening Transaction Transaction t1 = session.beginTransaction(); session.save(testprogram); tx.commit(); } } Advantages of the Hibernate Session - Hibernate session complies with the ACID(Atomicity, Consistency, Isolation, and Durability) properties of the database. - Its object mapping is consistent and thus reduces a lot of potential bugs and loopholes from the code. - It is database-independent so even if there is any database like mySQL or oracle this software can be used. - There is no need to know the SQL only basic knowledge on it should help you in understanding how it works. - Easy to create associations and a lot of guidance present over the net. Java being used widely with an association to a database over the net can c=make the most use of this software if used wisely. - Minimal code changes when there are any changes to tables since everything is handled via class and objects. Most of the code and functionalities are generic thus making it more worthy for use in applications which has a lot of dependency over transactional data. - Hibernate supports multilevel caching thereby improves the coding efficiency. Conclusion There has been a historical discrepancy of database data and the data handled via any programming language outside the database. To get this solved a new solution was designed called “ORM”. The data which is stored in tabular form in a database now can be retrieved from the database and can be handled in the form of objects in the programming language and hence eliminating the use of SQL queries. Recommended Articles This is a guide to Hibernate Session. Here we discuss what is hibernate and hibernate session? along with methods and advantages. You may also look at the following articles to learn more –
https://www.educba.com/hibernate-session/?source=leftnav
CC-MAIN-2020-50
refinedweb
1,023
55.64
MS-DOS MS-DOS is the Microsoft Disk Operating System, the most common operating system on PCs made in 1984. This article deals mainly with support for Hack and NetHack, in versions past and present, on MS-DOS. For a more general view, see the Wikipedia article on MS-DOS. Contents - 1 Operating System Support for the MS-DOS Version of NetHack - 2 Game History on MS-DOS - 3 Today - 4 References Operating System Support for the MS-DOS Version of NetHack Few modern PCs run MS-DOS, and indeed Microsoft discontinued support for the product long ago. Nonetheless, most modern PCs can run the MS-DOS version of NetHack. On Microsoft Windows Versions prior to Windows Vista This section describes the behavior of NetHack on Windows NT, Windows 2000, and Windows XP. Other versions of Windows (in particular Windows 95, Windows 98, and Windows Me) may behave differently, particularly in tiled mode. Microsoft Windows users can run the MS-DOS version of NetHack, as long as they are running a version for 32-bit Pentium hardware. Users of PowerPC or 64-bit versions of Windows will have to use an emulator. If you have not enabled VGA graphics mode, the game will run in a terminal window in text mode. You can switch to full screen mode and back by pressing Alt-Enter; this is a feature of Windows and applies to all programs running in terminal windows, not just NetHack. If you have uncommented the line "#OPTIONS=video:autodetect" in your NetHack.cnf file, the game will run full screen, either in tiled mode or drawing the ASCII and IBMgraphics characters on the graphical screen. Attempting to switch to a terminal window will cause the game to suspend; this is a limitation of Windows and NetHack cannot overcome it. (It won't harm your game in any way; just switch back to it and keep playing.) Windows Vista Windows Vista does not support terminal windows in full screen mode. This applies to both MS-DOS programs and text-mode Win32 programs. This limitation prevents MS-DOS programs from using graphics of any kind. The MS-DOS version of NetHack will still run, but will be limited to text mode. On OS/2 OS/2 users can run the MS-DOS version of NetHack, unless they have the (very rare) PowerPC version. NetHack on OS/2 works much as it does on Windows, except that the key to switch to full screen mode is Alt-Home. On Linux Users of x86 versions of Linux can run the MS-DOS version of NetHack by using DosBox. See the website for your distribution for instructions to obtain and install DosBox, or build from source. In DosBox, NetHack will appear in a window whether in tiled or text mode. Use of DOSEMU is not recommended. DOSEMU crashes if NetHack is run within it. On Mac Those users of 68K, PowerPC, or Intel Macs will have to use an emulator (see below). - This page is a stub. Should you wish to do so, you can contribute by expanding this page. In Emulation Non-x86 platforms, and AMD64 platforms running 64-bit operating systems, do not support MS-DOS programs directly. They can run the MS-DOS version of NetHack by using an emulator such as QEMU with a copy of FreeDOS installed inside, or with DOSBox. Of course it may not be worth the trouble. OS X users can use a Mac-specific version of DOSBox called Boxer. AMD64 platforms running 32-bit operating systems behave the same as Pentium hardware, and can run MS-DOS programs if the operating system supports them. AMD64-compatible operating systems could, in principle, support MS-DOS. Developers of these operating systems have thus far concluded that it is not worth the effort; it requires some complicated mode-switching code in the kernel, which in turn would have to be debugged and checked for security problems. Game History on MS-DOS PCs running MS-DOS had significant limitations compared to contemporary systems such as early Macs, Amigas, and Atari STs; NetHack would in time have to deal with these limitations. Hack on MS-DOS The original releases of Hack by Andries Brouwer supported only BSD Unix, but several third-party ports were created for other systems. Among these were the PC Hack series by Don Kneller. PC Hack 1.01 and 1.01e were based on Hack 1.0.1. Later releases included PC Hack 1.03, 3.0, 3.51 and 3.6, all based on Hack 1.0.3 and eventually implementing an early form of IBMgraphics. The PC Hacks were distributed on BBSes and by shareware dealers, because few PC users at the time had access to the Internet. NetHack 1.3d through 2.3e NetHack 1.3d included support for MS-DOS in the mainline code for the first time. It included a Makefile for Microsoft C 3.0 and even came with a "make" program to interpret this Makefile. NetHack 1.4f added support for Borland's Turbo C product. As home access to the Internet was still uncommon, these PC NetHacks were also distributed on BBSes and by shareware dealers. NetHack 3.0.0 through 3.0.5 MS-DOS provides only 640 kilobytes of memory space for all programs, drivers, and the MS-DOS kernel itself. Hack and NetHack through NetHack 2.3e were small enough to fit in this space without any special measures; but NetHack 3.0.0 was a much larger program and would overflow this space if built with all features enabled. NetHack 3.0.0 through NetHack 3.0.10 have an impressive list of compile-time options, any of which can be turned off to reduce the size of the final program at the expense of producing a game that lacked some of the advanced features. Here is the list from the NetHack 3.0.10 config.h: /* game features */ #define POLYSELF /* Polymorph self code by Ken Arromdee */ #define THEOLOGY /* Smarter gods - The Unknown Hacker */ #define SOUNDS /* Add more life to the dungeon */ #define KICK /* Allow kicking things besides doors -Izchak Miller */ /* dungeon features */ #define THRONES /* Thrones and Courts by M. Stephenson */ #define FOUNTAINS /* Fountain code by SRT (+ GAN + EB) */ #define SINKS /* Kitchen sinks - Janet Walz */ #define ALTARS /* Sacrifice sites - Jean-Christophe Collet */ /* dungeon levels */ #define WALLIFIED_MAZE /* Fancy mazes - Jean-Christophe Collet */ #define REINCARNATION /* Rogue-like levels */ #define STRONGHOLD /* Challenging special levels - Jean-Christophe Collet*/ /* monsters & objects */ #define ORACLE /* Include another source of information */ #define MEDUSA /* Mirrors and the Medusa by Richard P. Hughey */ #define KOPS /* Keystone Kops by Scott R. Turner */ #define ARMY /* Soldiers, barracks by Steve Creps */ #define WORM /* Long worms */ #define GOLEMS /* Golems, by KAA */ #define INFERNO /* Demons & Demonlords */ #ifdef INFERNO #define SEDUCE /* Succubi/incubi additions, by KAA, suggested by IM */ #endif #define TOLKIEN /* More varieties of objects and monsters */ #define PROBING /* Wand of probing code by Gil Neiger */ #define WALKIES /* Leash code by M. Stephenson */ #define SHIRT /* Hawaiian shirt code by Steve Linhart */ #define MUSIC /* Musical instruments - Jean-Christophe Collet */ #define TUTTI_FRUTTI /* Fruits as in Rogue, but which work... -KAA */ #define SPELLS /* Spell casting by M. Stephenson */ #define NAMED_ITEMS /* Special named items handling */ /* difficulty */ #define ELBERETH /* Allow for disabling the E word - Mike 3point */ #define EXPLORE_MODE /* Allow non-scoring play with additional powers */ #define HARD /* Enhanced wizard code by M. Stephenson */ /* I/O */ #define REDO /* support for redoing last command - DGK */ #define COM_COMPL /* Command line completion by John S. Bien */ #ifndef AMIGA #define CLIPPING /* allow smaller screens -- ERS */ #endif From NetHack 3.0.0 through 3.0.5, cutting out features from the above list was the only way to get a NetHack that would run on an MS-DOS PC. NetHack 3.0.6 through 3.0.10 NetHack 3.0.6 added support for overlays. An overlay is a piece of executable code that is not always loaded into memory. It is loaded when it is needed, possibly displacing some other overlay. With overlay support, a full-featured 3.0-series NetHack could be played on MS-DOS for the first time. NetHack 3.0.7 allowed the source files to be divided into smaller pieces, each of which could be a separate overlay. This finer-grained overlay system improved the performance of the program. The support for this division is still present in NetHack 3.4.3, though in disuse; look for directives such as "#ifdef OVL0" and for such preprocessor symbols as STATIC_DCL. Overlays remained the preferred way to build an MS-DOS NetHack through NetHack 3.0.10. NetHack 3.1.0 through 3.3.1 By the time that NetHack 3.1.0 was released in 1993, PCs based on the 386 chip were in widespread use. These could operate in protected mode, allowing use of more than the 640K of memory accessible to MS-DOS. MS-DOS, however, cannot operate in protected mode. The DOS extender was introduced to solve this problem. A DOS extender switches the CPU to protected mode before running the program to which it is bound, and then switches back to real mode whenever it is necessary to enter MS-DOS for any reason. DJGPP is a port of the GNU C compiler and related tools to MS-DOS, bundled with a DOS extender. NetHack 3.1.0 was the first version offered with an official version built with the DJGPP tools. The earlier 286 chip can also run in protected mode, but not in a way that the DJGPP tools can support. Programs built with DJGPP require a 386 to run, and so at first the overlaid versions of NetHack continued to be supported; thus there were two MS-DOS NetHacks, and neither could use the other's bones and save files. In time, however, pre-386 PCs were retired from service, and NetHack continued to grow, eventually straining the overlay system. The overlaid version flickered in and out of supported status; the last NetHack to offer it officially was NetHack 3.3.1. NetHack 3.4.0 through NetHack 3.4.3 Beginning with NetHack 3.4.0, only the DJGPP version of NetHack has had any official support from the DevTeam. The makefiles and preprocessor support for the overlaid version are still present, but are no longer supported. A recent attempt to build an overlaid NetHack 3.4.3 showed this infrastructure to be slightly broken; it was furthermore necessary to cut out tile support to get the program to fit. [1] An overlaid NetHack 3.4.3 ends up being so large that the 640K limit can barely accommodate it, even with a minimal set of drivers loaded. Running it on an 8088-based PC is likely to be futile, and even a 286 will be hard-pressed to find enough room. A 386 can load drivers outside the 640K area, but a player with a 386 can run the DJGPP NetHack. NetHack 3.6.0 and 3.6.1 The addition of special statue glyphs broke the MS-DOS port in NetHack 3.6.0. The default tileset has more colors than the VGA code can handle. NetHack 3.6.1 adds support for VESA BIOS modes, and falls back to the generic statue glyph if the original VGA mode is in use. It also changes the tileset format to a BMP, the same as Windows uses. The tile size is still limited to 16x16 and the colors to 256. Some small changes are still needed to compile, but a distribution is available. Today MS-DOS is semi-officially supported. The code is in the distribution, but needs a few changes to compile; the DevTeam has not released an official binary. An unofficial binary distribution of 3.6.1 is available on Github. Most players with PCs can play the MS-DOS version of NetHack 3.4.3 or the unofficial build for 3.6.1, as 32-bit versions of Windows and OS/2 can all run MS-DOS programs. Linux users can play the MS-DOS version through DosBox. Those players with pre-386 PCs are out of luck; NetHack demands more than their machines can give. They can upgrade or use an earlier version of NetHack. References - ↑ Ray Chason, Support for real-mode MS-DOS: still worthwhile? rec.games.roguelike.nethack, February 8, 2005.
https://nethackwiki.com/wiki/MSDOS
CC-MAIN-2018-34
refinedweb
2,063
64.71
I made a little game and build it on release. In the Release folder, the executable works fine. But, if I copy it on the desktop or on another computer, I have an error pop-up saying that the program has stopped working with the details : Signature du problème :Nom d’événement de problème: CLR20r3Signature du problème 01: blockMenu.exeSignature du problème 02: 1.0.0.0Signature du problème 03: 5a21458aSignature du problème 04: mscorlibSignature du problème 05: 4.7.2114.0Signature du problème 06: 59a638b8Signature du problème 07: 165dSignature du problème 08: fcSignature du problème 09: System.IO.DirectoryNotFoundVersion du système: 6.1.7601.2.1.0.256.48Identificateur de paramètres régionaux: 1036Information supplémentaire n° 1: 0a9eInformation supplémentaire n° 2: 0a9e372d3b4ad19135b953a78882e789Information supplémentaire n° 3: 0a9eInformation supplémentaire n° 4: 0a9e372d3b4ad19135b953a78882e789 I search on the net and saw a lot of things like : But nothing works for my case. At the beginning, I tested a few thing and the build worked. The only thing I changed since that moment are : - add 2 packages Newtonsoft.Json.10.0.3 and MathNet.Numerics.3.20.0- move some of my .cs files into folder, but I paid attention to namespace because I can launch it from the IDE. I'm at the end of my research and my ideas. I'm kind of disappointed. If anything have an idea, I take it right away and I would be pleased. Thanks in advance for the answers. Are you copying all the other files in that folder to the desktop as well? Or just the exe itself? If you're copying just the EXE, that's why. It can't find MonoGame or any of your other dependencies, and if you're doing things with content pipeline, it can't find your Content folder either. So it'll crash with that error. I understand. That's why I did copy all the files and folders when I encountered the issue. Ok, I found the problem. I use a JSON file to get my data for the titles, menu, etc. But I wasn't able to use it through the ContentPipeline. So I just used it in the root file.When I built on release, the JSON file is concatenated in the exe file so my program can't find it.After a lot of research I found that MonoGame still can't use JSON file in the ContentPipeline. You have to make a custom importer. But it's too dawn complex for me. So, instead I'll be using an xml file which can be manage through the ContentPipeline. So the problem is partially solved You can very easily get around the fact MonoGame doesn't have a content importer for JSON/text files. Simply add the JSON file to your Visual Studio solution explorer inside the Content folder (where your Content.mgcb file is), and set its Build Action to Content and Copy to Output Directory to Copy if newer. When you build your game, that file will be copied to the build output in the Content folder of your game (where all the .xnb files are). Then, to read the file: using System.IO; ... var baseDirectory = AppDomain.CurrentDomain.BaseDirectory; //get the base directory of your game's executable. var fullPath = Path.Combine(baseDirectory, "Content", "Name of json file.json"); //If your game's EXE is in "C:\Game", this will return "C:\Game\Content\Name of json file.json". If you're on Linux/Mac and it's in "/home/user/game", this will return "/home/user/game/Content/Name of json file.json". string jsonData = File.ReadAllText(fullPath); //Now you have the JSON. Do what you'd like with it :) Bit less elegant than Content Pipeline but it should work even across platforms as long as the file actually exists.
http://community.monogame.net/t/executable-doesnt-work-outside-the-release-folder/10024/5
CC-MAIN-2018-34
refinedweb
638
67.65
Summary I finally got some time to play with Cheetah and Django templates. After maybe an hour with each, I like Django best. Hopefully people are still watching this space... Encouraged by Fredrik Lundh, I took a look at Django templating, separate from the rest of Django. This works fine, except there's a mysterious environment variable DJANGO_SETTINGS_MODULE that is somehow required. That may be fine for framework users, but it's a bit harsh if all you want to do is use the templating library. Setting it to empty didn't help. I ended up uing this hack: import os os.environ["DJANGO_SETTINGS_MODULE"] = "__main__" from django.core.template import Template With that out of the way, I managed to run this program: t = Template("<h1>Hello {{name}}</h1>") print t.render({"name": "Phillip"}) print t.render({"name": "Adrian"}) This is nice! The output of course is: <h1>Hello Phillip</h1> <h1>Hello Adrian</h1>. I tried the same exercise using Cheetah. Not too different: from Cheetah.Template import Template names = {"name": "Tavis"} t = Template("<h1>Hello $name</h1>", searchList=[names]) print t names["name"] = "Ian" print t with the following output: <h1>Hello Tavis</h1> <h1>Hello Ian</h1> (Ironically, Cheetah also failed under Python 2.5 -- Python 2.5 inadvertently (I believe) changes the syntax that's allowed before "from __future__ import ..." to disallow assignment to __author__ and __version__. That was easier to fix though -- commenting out the "from __future__ ..." line was sufficient.) (BEWARE! I haven't used either system to build a real website yet. I'm just looking at the API and templating language designs.) Django definitely feels more "modern" than Cheetah. The templating languages are fairly similar, with Django writing {{foo.bar}} where Cheetah writes $foo.bar or ${foo.bar} for variable interpolation (== substitution). The biggest difference is that Cheetah allows pretty much arbitrary Python call syntax, e.g. ${foo.bar('hello', $name, 42+42)}. Yes, you have to prefix variable references in the argument list with another $, and there are confusing rules about when the $ is optional. Django only allows names separated by dots, using the old Zope trick of trying x['y'], x.y and x.y() when the input is {{x.y}}. When y looks like a number, it'll even try a sequence index, e.g. {{x.4}} means x[4]. (Aside: something similar is also found in web.py, but there it bothers me, because it's invoked from normal-looking Python code. E.g. foo["x"] and foo.x are equivalent, when foo is a "Storage" object. But this leaves me wondering, what does foo.keys mean? The keys method or the value stored under "keys"? Zope 2 was similarly confusing with implicit acquisition.) A big difference: if Django doesn't find a name, it inserts nothing; but Cheetah raises an exception. ISTM that Django is more user-friendly here, even if its approach could be considered error-prone (typos are easily missed since they simply suppress a small bit of output). In my experience, missing variables are very common in substitution data, and Cheetah requires you to provide explicit default values in this case. Both templating languages also have a "statement" syntax to complement their "expression" interpolating syntax. In Django, this is written as {% keyword %} while in Cheetah you use #keyword. Here I initially found Cheetah a bit more readable, since it resembles C preprocessor syntax: #if $name <h1>Hello $name</h1> #else <h1.Hello there</h1> #end if This is written in Django as: {%if name%} <h1>Hello {{name}}</h1> {%else%} <h1>Hello There</h1> {%endif%} which is harder on the eyes and prints more unnecessary whitespace. But then I stumbled upon Cheeta's inline form, which is pretty unreadable due to the symmetric delimiters: # if $name # <h1>Hello $name</h1> # else # <h1>Hello There</h1> # end if # Both languages have tons of other statement-level constructs, to do loops, set variables, invoke other templates, define blocks that can be used by other templates, and more. I haven't explored this much yet, but they both seem to cover a similar terrain. I guess this is what template authors actually need; or perhaps it points to some common ancestor in PHP or JSP? If you need your variable interpolations to be HTML-escaped (replacing "<" with "<" and so on), Cheetah lets you specify a default filter in the template (#filter) or when the template is created in Python code. In Django you must add a pipeline to the interpolation syntax, like this: ${foo.bar|escape}. Both support various other filters as well, with Django really going wild. I'm somehow surprised that HTML-escaping isn't the default -- failing to HTML-escape data is the number one vulnerability leading to XSS attacks. And in theory you should almost never need to provide HTML for interpolation: you're supposed to invoke HTML fragments using #include or {%include%}. Cheetah at least provides a way to make HTML-escaping the default filter throughout a template. See the examples near the top. I like Django's version better: you pass the variable bindings in to the render() method. Cheetah lets you specify multiple dictionaries with variable bindings, which are searched one after another, but it bothers me that these all have to be passed to the Template() constructor instead of to the render method (which is called __str__() in Cheetah :-). Django's template compilation is much simpler and IMO more elegant than Cheetah: Django parses the template text into nodes of various types using a big regular expression, and each node has an appropriate render() method. Rendering the template in a given context simply concatenates the results of rendering each node in that context. I imagine this could easily be turned into a generator compatible with WSGI. Cheetah, OTOH, compiles each template to a Python class! This is much slower, and in my experience brittle -- on my first attempt I introduced a syntax error in the template that caused a Python syntax error in the resulting Python class, which was hard to debug. It also seems overkill, and I worry that it might cause security problems -- given that the compiler isn't too smart (see above) I could see a malicious template author "breaking out" of the templating languages and invoking unauthorized Python code. Now, I wouldn't let people I don't trust edit templates on my website anyway, but it appears to be a common pattern that certain people are allowed to edit templates but not code. Cheetah blurs the distinction a little too much for my comfort. I guess we're closer than I thought to the mix-and-match approach that I requested in my previous blog. Cheetah is just a templating engine, and several web frameworks use or recommend it, e.g. web.py and Subway. (Is Subway dead? The website is super-incomplete and it still downloads an old Cheetah version.) Django's templates are almost usable independent from the rest of Django; I expect the situation will improve once they release the magic-removal branch.. Django and Cheetah both define a 'Template' class. Python's standard library also has a Template class (in string.py), which serves a similar purpose (but with only a fraction of the functionality). All these have different APIs. But are the differences important? It seems pretty arbitrary whether to use Template("...").render(locals()) (Django), or str(Template("...", searchList=[locals()]))` (Cheetah) or ``Template("...").substitute(locals()) (string.py). Perhaps we could attempt some standardization here similar to WSGI? Have an opinion? Readers have already posted 86 comments about this weblog entry. Why not add yours? If you'd like to be notified whenever Guido van van Rossum adds a new entry to his weblog, subscribe to his RSS feed.
http://www.artima.com/weblogs/viewpost.jsp?thread=146606
CC-MAIN-2016-44
refinedweb
1,297
57.87
Notifications You’re not receiving notifications from this thread. Devise: Add a select to my signup form Hi all, me again! haha So i have got my app working as i want so far. All is good and i am picking up Rails really quickly thanks to GoRails. I am however a little stuck, I can't seem to add in a select for my sign up form. <%= f.select :is_brand, options_for_select(%w[true false]) %> This is adding in the select, however it does not seem to store the field when i submit. I am thinking its cause its not reference by devise as standard, and i have not got any devise registration controllers. I basically want to have a select, with a list that says... -- Select an option -- I am a Brand (value would be TRUE) -- I am a Buyer (value would be FALSE) Then the/false would be stored as a bool in my DB, i have the DB all set up its just getting things to store. Thanks in advance :D Alan Ok, so after reading up a load about Devise, i found they have made it easy to add in parameters on to the sign up forms. You will need to create the devise controllers so check out the devise docs for details on how to do this. Once created open up the registrations_controller.rb and scroll down the page to around line 46. Her you will see some code which is commented out. # If you have extra params to permit, append them to the sanitizer. # def configure_account_sign_up_params # devise_parameter_sanitizer.permit(:sign_up, keys: [:attribute]) # end Simply uncomment this, and add replace :attribute with the attribute you want stored to the DB! It's that easy! No need to write out your own sanitiser. Now all thats left is to add in the fields you want added, so for me i wanted to add in a simple select, so i used the code below. <%= f.select :is_brand, options_for_select([["I am a Brand", true], ["I am a Buyer", false]]) %> I really hope this helps someone else out. You know, it'd be kind of cool if someone added a generator to Devise that would allow you to add this code into ApplicationController automatically so you didn't have to look it up each time. Maybe someone should make a PR on Devise for this idea... hint hint 😉 haha @Chris ;) That would be my first PR. I would be keen to learn how to do that on such a large scale project, for example what would need to be included etc. I would just open up an issue on their Github, ask if they'd be interested in the idea and if so, what are the things they'd like to see for it? code and document obviously, but does it need tests? that sorta thing. They're super helpful and that's basically what I did when submitting a suggestion for a feature on Devise previously. Sounds cool, i will have a look see what others have written to get ideas :D Thanks Be aware, things have changed! It seems i had to now move this to my application_controller otherwise i kept getting Unpermitted parameters Yeah, I think normally the devise_paramter_sanitizer stuff has to go in ApplicationController. Not entirely sure why but I guess because it's common and that doesn't require people to override the RegistrationsController. What version of devise are you using to have had to move your permitted params into your ApplicationController? Using Devise 4.2.0 ( $ gem list | grep devise) I still have my sanitizers in controllers/registrations_controller.rb and no issue... def configure_permitted_parameters devise_parameter_sanitizer.permit(:sign_up) do |user| user.permit(:first_name, :last_name, :email, :password, :password_confirmation, :avatar, :time_zone) end devise_parameter_sanitizer.permit(:account_update) do |user| user.permit(:first_name, :last_name, :avatar, :email, :password, :password_confirmation, :current_password, :time_zone, contact_attributes: [:line_1, :line_2, :city, :state, :zip, :phone]) end end The only thing I don't like is having to declare the sanitizer for the :sign_up action and :account_update separately - it could probably be combined but I haven't dug into it yet (maybe not though??) Jacob, You should be fine putting it in RegistrationsController, my mistake. The main reason why they suggest ApplicationController being that not everybody overrides the RegistrationsController, so it's easier. It can definitely go in RegistrationsController too because that's really where it's being used. I don't know about combining it, other than setting the arguments to a variable and passing in just the variable into both. I am also using 4.2.0, I'm not sure why but i had to move it :/ i am also using rails 5.0.0.1 maybe this is why? I am having issues with devise personally. Yeah its fantastic but its a pain cause i want to set the URL's to something different and i don't like how it deals with things. Might need to roll my own version of Auth soon i think. Also I notice you have contact_attributes maybe you could shed some light on another post of mine? I need to save data into a different table. - Maybe even let me know if there is a better way, maybe using joins? Sure, I'll take a look. I learn as I go so if I've encountered it before then I can probably help! You may have an issue with Rails 5, I'm still on 4.2.3 for the project I pulled that from. I did encounter your error a lot if I didn't have the proper before_filters set... so for mine its: before_filter :configure_permitted_parameters, :only => [:create, :update] lol i mentioned them before in another post. I wanted to change things like user/edit to account but when you post the form, if there is an error it sends you to a different address. I have the below set up for registrations, but when you post on /account and there is an error it redirects to /account. I have debugged and seen that its the PUT that is redirecting it - this is set in the url: registration_path(resource_name) part of the form. # Registration get 'signup' => 'devise/registrations#new', :as => :new_user_registration post 'signup' => 'devise/registrations#create', :as => :user_registration get 'signup' => 'devise/registrations#cancel', :as => :cancel_user_registration get 'account' => 'devise/registrations#edit', :as => :edit_user_registration put 'signup' => 'devise/registrations#update' patch 'signup' => 'devise/registrations#update' delete 'signup' => 'devise/registrations#destroy' I'm not 100% sure what you're trying to accomplish, but you're looking at either needing to just rename the routes or you may need to override the update method. If it's just a route renaming issue, it should be something like this: devise_for :users, :controllers => { registrations: 'registrations', sessions: 'sessions' }, :path => '', :path_names => {:sign_in => 'login', :sign_up => 'register'} Of course update to whatever your names are... Lines 41 - 62 is the update method on the registrations_controller and is what you're going to have to override to redirect to a certain action given whatever conditions you have. I haven't personally done it for this scenario so this is about as far as I can help for this one.
https://gorails.com/forum/devise-add-a-select-to-my-signup-form
CC-MAIN-2022-21
refinedweb
1,189
63.09
ClickableRequires Open the required javascript files with a mouseclick in SublimeText3 Details Installs - Total 87 - Win 39 - OS X 38 - Linux 10 Readme - Source - raw.githubusercontent.com ClickableRequires for Sublime Text 3 Description Open the required javascript files with a mouseclick as you are doing it in another IDEs. The implementation of the file search is based on the specification of require function in Node.js. :sunglasses: Now ES6 import statements are supported as well. :sunglasses: Installation - clone the repository into Sublime Packages folder - install through Package Control: ClickableRequires Usage You can hover on any require('module-name') or import module from 'module' statements to open a pop-up with in-app link to the file. For core node modules the online documentation will be opened in the browser. If the file is from node_modules then also an npm link to the package will be displayed. Click settings You can setup the plugin to navigate on mouseclick: * open the Pakages by Command Palette -> Browse Packages * in /Packages/User/ folder create or edit the Default.sublime-mousemap file * add the following (here you can modify the button and the modifiers as you like but beware with binding collosions.): [ { "button": "button1", "modifiers": ["super"], "command": "open_require_under_cursor", "press_command": "drag_select" } ] Settings The default settings are the following: { "debug": false, // To turn on or off file searching debug logs "reveal_in_side_bar": true, // Will reveal the file in the sidebar "extensions": [".js", ".jsx"], // The file extensions the plugin searches in "scope": "support.module", // See more at "icon": "dot", // Possible values: dot, circle, bookmark and cross. Empty string for hidden icon. "underline": true, // If the module names should be underlined "show_popup_on_hover": true // If a popup with module link and path should appear on hovering the require statement } However you can override them in Preferences -> Package Settings -> ClickableRequires -> Settings - User. Webpack or other module handlers If you are using webpack resolve.modules or resolve.aliases then you should configure the routes to this modules in your .sublime-project file. Use relative paths to the project file! { "folders": [ { "path": "." } ], "settings": { "webpack_resolve_modules": ["src", "other_module_directory"], "webpack_resolve_extensions": [".js", ".jsx", ".json"] } }
https://packagecontrol.io/packages/ClickableRequires
CC-MAIN-2017-39
refinedweb
346
53.71
[Updated] Simplify the require/import paths in your project and avoid ../../../ circles of hell Mohammadjavad Raadi Updated on ・4 min read Do you hate seeing ../../../ everywhere in your code? Come along and I'll show you why you should use babel-plugin-module-resolver to work faster and write cleaner code. Update 1 (3/31/19): As Pavel Lokhmakov suggested, I've created a new GitHub repo here to achieve the functionality explained in this post without the need to eject the app. react-app-rewired and customize-cra are both libraries which let you tweak the create-react-app webpack/babel config(s) without using 'eject'. Simply install these packages as dev dependencies and create a new file called config-overrides.js in the project's root directory and put your custom config there. Then all you have to do is to update your npm scrips according to react-app-rewired docs. The Inspiration I never liked writing code like this: import NavBar from '../../components/NavBar'; To me it seemed very confusing, not clean and not maintainable. Imagine somewhere down the line, you needed to alter your project's directory structure. You would have to go through every file and update your code to reflect your changes. Talk about non-maintainabilty! But I loved the way I would import packages from the node_modules directory: // ES6 import syntax import React, { Fragment } from 'react'; // CommonJS require syntax const nodemailer = require('nodemailer'); So I was eager to find a way to import/require my custom modules/components just like this. babel-plugin-module-resolver to the rescue! TL;DR You can find the GitHub repos associated with this article: What does it do? I'll let the plugin author explain:. In case you don't know what babel is, it's a JavaScript compiler which is mainly used to convert ECMAScript 2015+ code into a backwards compatible version of JavaScript in current and older browsers or environments. If you're building an app with create-react-app or similar libraries, they're using babel behind the scene. Let's get started Here I will show you how you can use this plugin in an app created by create-react-app. Create a new app with the command below: $ create-react-app babel-plugin-module-resolver-test-app create-react-app encapsulates the project setup and all the configurations and gives you tools to create production ready apps. Since we need to change babel configuration we need to eject our app. Ejecting will move create-react-app’s configuration files and dev/build/test scripts into you app directory. Note: this is a one-way operation. Once you eject, you can’t go back! It's fine for our use case because we're building a test app. Go ahead and run the command below: $ npm run eject Confirm and continue. Note: at the time of writing this post, there is a bug with create-react-app explained here. The workaround is to remove the node_modules directory and reinstall the dependencies again. Install the dependencies: $ npm install Install babel-plugin-module-resolver plugin by executing the following command in your project directory: $ npm install --save-dev babel-plugin-module-resolver Open package.json file and look for babel config. This is how it looks like after eject: ... "babel": { "presets": [ "react-app" ] }, ... Now we need to tell babel to use our module resolver and define our root directory and aliases. Edit your babel config section to make it look like this: ... "babel": { "presets": [ "react-app" ], "plugins": [ ["module-resolver", { "root": ["./src"], "alias": { "dir1": "./src/Dir1", "dir2": "./src/Dir2" } }] ] }, ... Now create two directories in src directory called Dir1 and Dir2. Our defined aliases will point to these directories respectively. Create a component file called ComponentA.js in Dir1 directory and put the code below in it: import React from 'react'; import ComponentB from 'dir2/ComponentB'; const ComponentA = () => ( <p> Hello from <ComponentB /> </p> ); export default ComponentA; Now create ComponentB.js in Dir2 directory with the code below: import React from 'react'; const ComponentB = () => ( <a href="" className="App-link" target="_blank" rel="noopener noreferrer" > Bits n Bytes Dev Team </a> ); export default ComponentB; Now edit the App.js file in the src direcory: import React, { Component } from 'react'; import logo from './logo.svg'; import './App.css'; import ComponentA from 'dir1/ComponentA'; class App extends Component { render() { return ( <div className="App"> <header className="App-header"> <img src={logo} <ComponentA /> </header> </div> ); } } export default App; Notice that I didn't have to go up one directory or down another directory to import my components. We're now ready to run our app, run the command bellow in your terminal: $ npm start You should see your app in the browser without any problem. If you have any questions or problems, feel free to comment. Conclusion Custom module resolvers will save you time and the frustration of dealing with ../ splattered everywhere. They take a bit to setup and ensure full cooperation with existing tooling, but the result and visual satisfaction of never having to see ../../../../../.. is well worth the initial outlay on big projects.. Why the React community is missing the point about Web Components I'm a generalist web developer with my hands in a few libraries.I don't conside... For CRA enough to create in root .envwith NODE_PATH=srcand all directories inside ./srcwill be available by absolute path. Do not use eject, we have alternatives to modify most of internal configs. U can choose: I've updated the post to reflect your suggestion. Thanks Thanks Pavel, I'll have a look at them. For typescript, you need this in tsconfig.json: Great writeup! There is also webpack alias settings, which you can use to achieve a similar thing. That is what we are using, although since our jesttests don't run webpack, we have to add the alias to jest as well. (we've made our root project directory '@'). I wonder if baking it into babelrc and using same babelrc files would cover the setup in one place. Incidentally with the webpack.alias approach I haven't got autocomplete in VS Code working despite having followed the instructions to get it to work.. (though maybe not close enough) We got it to work adding aliasFields... might wanna give it a shot. I wrote about it here dev.to/costicaaa/vuejs-type-hint-i... That's great. Vuejs has the same problem. I'm in trouble with that. But now I think it can be resolved by babel-plugin-module-resolver. Thank you so much If you're using the Vue webpack config, such as with SFC, it provides an alias you can use to reference files by absolute path (e.g. from '@/components/Button'). I've carried this convention into my React projects as it's really easy to setup without adding another dependency. Only downside is that it only works for webpacked files. Thank you so much. I'm gonna try it Yeah, I'm sure it's pretty much the same config because vue uses babel to compile as well. If you have a .babelrc file in your project directory, you can put the config in there. Take a look at the github page of the plugin for more info. Useful plugin, thanks! It certainly has made my life easier. This can be a very useful plugin. I'm currently dabbling with mobile development, specifically React Native. Can I use it for that? I use VSCode as my IDE. Thanks! Yes, I was able to get it to work with react-native as well. You need to configure VSCode to work with this plugin and make the auto complete functionality work. Here's what you gotta do from the plugin's doc: github.com/tleunen/babel-plugin-mo... Thanks! Really good and straightforward explanation. Glad you liked it
https://dev.to/mjraadi/simplify-the-require-import-paths-in-your-project-and-avoid-circles-of-hell-51bj
CC-MAIN-2019-22
refinedweb
1,311
58.28
Java Expert Solutions Chapter 28 Accessing Remote Systems Securely by Mark Wutka - Getting a Secure Web Server - Preventing Impersonations - Accessing Remote Data - Passing Keys to Clients - Implementing a Single-Client Secure Server - Implementing a Multiclient Secure Server - Creating Other Secure Remote Access Programs The huge explosion of Internet access is both a blessing and a curse to businesses on the Net. Because Internet data passes over insecure networks, a company must be extremely careful when passing sensitive data over the Internet. In the past, it was difficult for companies to communicate with their employees out in the field, because networking technology was fairly primitive, and laptop computers were not very portable, or too powerful. This was especially a problem for sales force automation. A salesman may be on the road for weeks or months at a time, unable to communicate with the home base. Figure 28.1 illustrates the old design of a sales force automation network. Figure 28.1 : In the past, companies had to create their own sales force automation networks. Now, laptop computers are extremely powerful with a wide range of networking options, and Internet access is available all over the world. Figure 28.2 shows how the Internet has changed the environment for sales force automation. Figure 28.2 : The widespread availability of the Internet eliminates the need for custom networks. It seems like things should be easier for people on the road, and it is for people at some companies. For others, things haven't gotten any better. The irony here is that it is still just as difficult to access secure data over the Net. The widespread infiltration of the Internet, which makes it easier for employees to communicate back to their home system, also increases the likelihood that the data can be intercepted. A devious spy from another company could snoop on private data transmissions over the Internet. In addition, many companies have custom software that must be adapted to work over the Net. Whenever the software is updated, it is very difficult to distribute the new versions to the people in the field. The company must either ship new diskettes, ship out a new laptop, wait until the salesperson is back in the office, or download software over the network. The latter is a very difficult endeavor, most often meeting with failure. Java can help solve the distribution problem. With Java, you can either write the custom software as applets, which can be cached on the local hard drive (as shown in Figure 28.3), or create a software distribution applet that downloads new versions of the local applications, as shown in Figure 28.4. A digitally signed applet can download updates to the local software, as well as update any information stored on the local hard disk. Figure 28.3 : Because applets are downloaded at runtime, you can eliminate some installation headaches. Figure 28.4 : A custom software installation applet can also assist in software installation. You still have the problem of downloading important information without someone intercepting the information. The Secure Socket Layer (SSL) protocol enables you to access Web pages securely. SSL uses a combination of a digitally signed certificate and an encrypted network session. Getting a Secure Web Server If you want to use SSL to download your applets, you must get a Web server that supports the SSL protocol. Netscape Communications () has a whole line of Web servers, all of which support the SSL protocol. The secure Web server from Open Market Systems () also supports SSL. There is also a version of the Apache Web server that supports SSL. If you are within the United States, you can download the Apache-SSL Web server from Community Connexion, Inc. (). Apache-SSL is free for non-commercial use. If you want to use it for commercial purposes, you must buy a license from C2. Once you get a secure Web server, you must also get a digitally signed certificate in order to use SSL. There are several certificate authorities around, one of which is Verisign (). One of the nice things about the Apache-SSL server is that it will generate a local certificate that you can use for testing. You must still obtain a signed certificate if you want someone else to access your Web server securely. Preventing Impersonations The Secure Socket Layer, or another secure form of HTTP, is vital in downloading applets securely. When you download one of your company's applets, you must to be sure that it really came from your company. The digital signature mechanism warns you if you download an unsigned applet, which prevents all but the most devious impersonation attacks. If someone is able to digitally sign an applet with a signature you think is valid, you will not know anything is wrong. Of course, if this happens, it means that you have trusted someone you shouldn't have. Chapter 26, "Securing Applets with Digital Signatures," contains more information about digital signatures and their relationship to applets. If you use SSL to download your applet, it cannot be impersonated, because SSL ensures that you are talking to the correct Web server. Accessing Remote Data You can use SSL to send and receive encrypted data without using any encryption software yourself. The URL class enables you to open up URLs with an https protocol type, which uses SSL for communications. Because of the way the applet security manager works, you cannot intermix http and https URL accesses in a single applet. If your applet was loaded using the http protocol type, you can open URLs only with a protocol type of http. If your applet was downloaded via https, you can open URLs only with a protocol type of https. What the restriction means is that if you need to download information securely, you must also download the applet securely. It is one of the simple ways that Java protects you from yourself. If you were allowed to download applets insecurely and then download information securely, you would be vulnerable to an impersonation attack. Since the SSL support is built into the URL class (actually, into the browser itself), you can use the methods discussed in Chapter 6 "Communicating with a Web Server," to store and retrieve files using the URL class. You cannot use any of the socket mechanisms to do this, however. Passing Keys to Clients One of the difficulties in performing encrypted communications is agreeing on an encryption key. You have to have a secure way to even agree on a key; otherwise, someone could just eavesdrop on your key exchange and see what key you're using. Don't Reuse Symmetric Keys You might be tempted to generate a nice key for symmetric key encryption and let all your applets use that key. Unless you have a foolproof method of ensuring that only trustworthy people can get your applet (which is easier said than done), you run the risk of a malicious person downloading your applet and examining it to find the key you're using. It should be obvious that just using SSL doesn't prevent someone from stealing your key. SSL only prevents someone from impersonating your server, and from watching your applet being downloaded (since the download is encrypted). The amount of risk involved in reusing a symmetric key isn't worth the trouble. Using Public Key Encryption to Get a Private Key Public key encryption is generally very costly. You would not want to carry on an interactive session using public key encryption. Public keys are very handy for exchanging a private key, however. Suppose you had a client and a server connected together using an insecure TCP/IP socket, as shown in Figure 28.5. Figure 28.5 : A client connects to a server using an insecure socket. Next, the client generates a random private encryption key. The client then encrypts the private key using the server's public encryption key and passes the encrypted key to the server, as shown in Figure 28.6. Figure 28.6 : The client encrypts a private key using the server's public key. Now, the server can decrypt the private session key and the two can use the private key to exchange data. It is safe for you to embed the server's public key in the client applet, because even if someone looked at it, they wouldn't learn anything that isn't already public knowledge. Because each session key is randomly generated, no one else can figure out your session key just by downloading the applet. Anyone who did so would just generate a different key. Passing a Private Key as an Applet Parameter Because you are using secure sockets to download your applet, you can take advantage of the fact that the applet and its parent Web page are transmitted in encrypted form. Instead of the client generating the session key and using public key encryption to pass the session key to the server, the server can pass the key to the client applet using the <PARAM> tag. This mechanism has some peculiar drawbacks to it, stemming from the fact that you must use CGI (or its equivalent) to pass the key information back. If you are lucky enough to have a Java Web server that also does SSL, you don't have to worry about this. Unfortunately, since many, if not all, of the current Java Web servers do not support SSL, you have to come up with some unique ways of passing keys around. The problem here is that a CGI program is supposed to generate content in response to a GET or a POST and then exit. Figure 28.7 illustrates the typical life of a CGI program. Figure 28.7 : In response to an http GET or POST, a CGI program starts up, generates a response, and exits. Unfortunately, since the response isn't sent back to the client until the CGI program terminates, by the time the client receives the random private session key, the program that generated the key is gone. There are a number of ways you can solve this problem. One way is to have your CGI program start up another program that also knows the session key. Once the client knows the session key, it connects to the program started by the CGI program. Figure 28.8 illustrates this sequence. Figure 28.8 : A CGI program can spawn another program that communicates securely with the client. This solution may be fine for some situations, but it has a number of drawbacks: - If the spawned program takes a long time to start up, you may bog down your system if you are starting up too many copies of the program at one time. Plus, the client may not be patient enough to wait for the server program to start. The client has to be smart enough to retry if it can't get a connection the first time. - If the server program needs to use a limited resource, you don't want many simultaneous copies of the program running. For instance, if it accesses a database, you don't want 50 copies of the program all opening database connections. It's even worse if the server needs a resource that can be accessed by only one program at a time. - Because the client has to connect to the server, the server needs some kind of timer in case the client fails for some reason. You don't want hundreds of old copies just sitting around waiting for a client that will never call. This isn't a big deal, just something you have to take care of. - It may be expensive on your system to have many copies of the Java virtual machine running at the same time. - It is tricky for the server program to create a socket and pass that socket number back to the CGI program so it can tell the client where to connect. Another solution is a bit less taxing on the system resources, but is harder to implement. In this solution, the server is a program that is already running, in the same way that the Web server is always running. When a CGI program starts up, it requests a new session key from the server and then passes the key to the client. The server then listens for an incoming connection. Figure 28.9 illustrates this relationship. Figure 28.9 : The CGI program gets a key from the server, which was already running. This architecture has some advantages over the previous one: - With only one copy of the server running, limited resources are not consumed so quickly. - If the server has a long startup time, you take that hit only once, and probably not while there is a client waiting, because the server is probably started at system boot time. - You need only one copy of the Java VM to run the server. - This solution also has its drawbacks: - The server has to handle multiple simultaneous requests. This is usually harder to implement than a single-threaded, single-client server. - The CGI program has to communicate with the server somehow, either through RMI, CORBA, or a simple socket connection. This takes some startup time. - You still have the problem of setting up a timer to decide when a client has had a problem and won't be calling. Implementing a Single-Client Secure Server If you need something more than a simple http GET or POST interface and you need encryption, you'll probably have to settle on a socket connection. If you can find a CORBA or RMI implementation that supports encryption, that would be much better. For a single-client secure server, you need a CGI program that invokes your server program. Since most server programs have a number of similar features, it makes sense to create an abstract class that handles many of the common features. The SingleSecureServer class, shown in Listing 28.1, implements a number of useful methods for creating a single-client secure server. The server expects to be started by a CGI program, and passes the port number it is listening on and the secure session key back to the CGI program as output. Listing 28.1 Source Code for SingleSecureServer.java import java.io.*; import java.net.*; // This class implements a single-client secure server. It is implemented // as an abstract class, leaving your specific server to fill in // the handleNewClient and makeSessionKey methods. public abstract class SingleSecureServer extends Object implements TimerCallback, Runnable { protected int clientTimeoutPeriod = 300000; // 5 minute timeout protected byte[] sessionKey; protected int tickCount; protected PrintStream responseStream; protected ServerSocket serverSock; protected Thread thread; public SingleSecureServer(OutputStream responseStream) { try { this.responseStream = new PrintStream( responseStream); } catch (Exception e) { } } // Called to create the listen socket. Override this if you want a // specific port number public ServerSocket createSocket() throws IOException { return new ServerSocket(0); } // waitForClient waits for a client to connect to the server. It also // sets up a timer that goes off after a certain amount of time, this // allows us to quit if the client never connects. public void waitForClient() { // Start the timer Timer timer = new Timer(this, clientTimeoutPeriod); tickCount = 0; timer.start(); while (true) { try { // Accept a new client Socket sock = serverSock.accept(); serverSock.close(); // Turn off the timer now timer.stop(); // Do whatever has to be done for the new client handleNewClient(sock); return; } catch (Exception e) { } } } // This class interfaces with the CGI program in a kludgy way - it // writes information to the output stream. The CGI program then // gets an input stream to our output stream and reads the information. // This method writes out the port number public void sendPortNumber(int port) { responseStream.println(port); } // This method writes out the session key for the CGI program to read public void sendSessionKey() { responseStream.println(keyString(sessionKey)); } // This function is provided in the Integer class in JDK 1.0.2 // My poor Linux version is only 1.0.1, so I had to hack one up. public static String toHexString(int i) { char hexBytes[] = new char[2]; hexBytes[0] = "0123456789abcdef".charAt((i >> 4)&0xf); hexBytes[1] = "0123456789abcdef".charAt(i&0xf); return new String(hexBytes); } // This method converts a binary session key into a string of hex digits public static String keyString(byte[] key) { String returnVal = ""; for (int i=0; i < key.length; i++) { returnVal += toHexString(key[i]&0xff); } return returnVal; } // tick is called by the timer when it goes off. The timer is built to // fire immediately, and then wait for a specific interval before // going off again. We use the tick count to figure out if this is the // immediate time, or if the time has elapsed. public void tick() { // if tickCount is 1 after we increment it, this is just the first tick // so don't do anything if (++tickCount == 1) return; // otherwise, assume the client isn't connecting stop(); } // start does everything we need - it creates the socket, writes out // the port number, creates the session key, writes it out, and then // waits for an incoming client public void run() { serverSock = null; // Create the socket we're going to listen on try { serverSock = createSocket(); } catch (Exception e) { return; } // tell the CGI program what the port number is sendPortNumber(serverSock.getLocalPort()); // create the session key makeSessionKey(); // tell the CGI program what the session key is sendSessionKey(); // make sure the CGI program gets all the information responseStream.flush(); // wait for a client to connect waitForClient(); } public void start() { thread = new Thread(this); thread.start(); } public void stop() { try { serverSock.close(); } catch (Exception e) { } thread.stop(); thread = null; } // handleNewClient does something with the incoming socket, it's // up to you to decide what public abstract void handleNewClient(Socket sock); // makeSessionKey generates a session key. public abstract void makeSessionKey(); } All your CGI program needs to do is start the server program, read two lines, and generate an HTML page that tells the requesting browser where to get the applet. While you could write the CGI program in any language you choose, Java seems like the ideal choice for this book. Listing 28.2 shows a CGI program that starts up a secure telnet server. This server uses an excellent Java Telnet applet written by Bret Dahlgren. The source code to the applet can be found on the World Wide Web at. com/~thorn/telnet/. In order to support secure telnet sessions, the Telnet application had to be modified slightly. It now supports a sessionKey parameter which, if present, tells it to use encryption for the session. The encryption used is DES3, provided by the Acme cryptography library. Listing 28.2 Source Code for SecureLoginStartup.java import java.net.*; import java.io.*; // This is a CGI program that starts up a secure Telnet session // using a subclass of the SingleSecureServer class. It generates // an HTML response that refers to a Telnet applet and passes the // port number and session key to the telnet applet. // It reads the port number and session key from the SingleSecureServer // (actually the SecureLoginServer). public class SecureLoginStartup extends Object { // Send an error response to the web browser - something went wrong public static void sendErrorResponse(String error) { System.out.println("Content-type: text/html"); // Gen the HTML on-the-fly. We put it in a string so we can // compute the content length and be real polite String\n"; response += "<PARAM name=\"host\" value=\""; try { response += InetAddress.getLocalHost().getHostName(); } catch (Exception e) { response += "localhost"; } response += "\">\n"; response += "<PARAM name=\"port\" value=\""+port+"\">\n"; response += "<PARAM name=\"sessionKey\" value=\""+key+"\">\n"; response += "You need Java for secure logins.\n"; response += "</APPLET>\n"; response += "</BODY></HTML>"; System.out.println("Content-length: "+response.length()); System.out.println(); System.out.println(response); } public static void main(String[] args) { try { // Start up the secure server program. You'll probably have to change // this for your system. Process externProcess = Runtime.getRuntime().exec( "/usr/local/java/bin/java SecureLoginServer"); // create an input stream for reading the parameters back from the server DataInputStream in = new DataInputStream( externProcess.getInputStream()); // Read the port String portLine = in.readLine(); int port = Integer.parseInt(portLine); // Read the session key String sessionKey = in.readLine(); // Send the web page to the browser sendNormalResponse(port, sessionKey); } catch (Exception e) { sendErrorResponse(e.toString()); return; } } } Listing 28.3 shows the HTML information generated by this CGI program: Listing 28.3 Output from SecureLoginStartup Content-type: text/html Content-length: 378 <HTML><HEAD> <TITLE> Secure Login Session </TITLE> <BODY> <H1>Secure Login</H1> <APPLET codebase="/classes" code="Telnet.class" width=600 height=400> <PARAM name="fields" value="off"> <PARAM name="host" value="flamingo"> <PARAM name="port" value="1126"> <PARAM name="sessionKey" value="8b4243347b3a69b8aa12594153c15c8c"> You need Java for secure logins. </APPLET> </BODY></HTML> The CGI program is started by a small shell script that sets the Java CLASSPATH variable before running: #!/bin/sh export CLASSPATH=/usr/local/etc/httpd/htdocs/classes:/usr/local/java/lib/classes.zip /usr/local/java/bin/java SecureLoginStartup The startup sequence for the secure Telnet applet is as follows: - The Web browser opens up the URL for the CGI program (actually, the startup script for the CGI program). - The CGI program creates an instance of a SingleSecureServer and initializes the server. - The server opens up a ServerSocket to listen for incoming connections and creates a random session key. It returns both the port number and the session key to the CGI program via System.out. - The CGI program generates an HTML page containing an <APPLET> tag for the Telnet applet and all the important parameters, including the session key. - The Telnet applet starts up, connects to the server, and, using the session key for encryption, engages in an encrypted telnet session. The class that actually creates the telnet connection and passes the information back and forth to the encryption routines is shown in Listing 28.4. It is fairly short. Essentially, it creates the telnet connection, and then uses a simple bridging class to link two streams together. Listing 28.4 Source Code for SecureLoginClient.java import java.io.*; import java.net.*; import Acme.Crypto.*; // This class sets up an encrypted telnet session. It uses the StreamBridge // class to transfer data between the telnet streams and the encrypted // streams. public class SecureLoginClient extends Object implements BridgeCloseCallback { Socket socket; StreamBridge bridge1; StreamBridge bridge2; byte[] sessionKey; public SecureLoginClient(Socket socket, byte[] sessionKey) { this.socket = socket; this.sessionKey = sessionKey; start(); } public void start() { try { // Connect to the telnet port on the local host Socket telnetSock = new Socket( InetAddress.getLocalHost(), 23); // It is vital that you create the encryption streams in the reverse // order from the other end. In other words, if you create the encrypted // output stream first here, you must create the encrypted input stream // first at the other end. This is because the block cipher streams // require some initial data over the stream. If you create the input // streams first, both sides will be waiting for input. // Connect the output from the telnet stream to the encrypted output // stream bridge2 = new StreamBridge( telnetSock.getInputStream(), new EncryptedOutputStream( new Des3Cipher(sessionKey), socket.getOutputStream()), this, true); // Connect the output from the encrypted stream to the telnet stream bridge1 = new StreamBridge( new EncryptedInputStream( new Des3Cipher(sessionKey), socket.getInputStream()), telnetSock.getOutputStream(), this, false); bridge1.start(); bridge2.start(); } catch (Exception e) { try { socket.close(); } catch (Exception ignore) { } return; } } // bridgeClosed is called by the StreamBridge class whenever a // stream closes. We just close off the socket to make sure // everything will shut down properly public synchronized void bridgeClosed() { try { socket.close(); } catch (Exception e) { } } } The SecureLoginServer class, which is a subclass of SingleSecureServer, does little more than create a session key and create the SecureLoginClient to handle the client connection. Most subclasses of SingleSecureServer will probably be this simple. Listing 28.5 shows the SecureLoginServer class. Listing 28.5 Source Code for SecureLoginServer.java import java.net.*; import java.io.*; // This class is responsible for creating a random session // key and for creating the class to handle a new client. public class SecureLoginServer extends SingleSecureServer { public SecureLoginServer(OutputStream out) { super(out); } // Create a SecureLoginClient to handle the client connection public void handleNewClient(Socket sock) { SecureLoginClient client = new SecureLoginClient(sock, sessionKey); } // Generate a random session key public void makeSessionKey() { sessionKey = new byte[16]; Acme.Crypto.CryptoUtils.randomBlock(sessionKey); } // Start the server with the responses going to System.out, these // will be picked up by the CGI program. public static void main(String[] args) { SecureLoginServer server = new SecureLoginServer( System.out); server.start(); } } Finally, the bridging mechanism to link two streams together is very simple. The StreamBridge class, shown in Listing 28.6, sets up a thread and constantly reads data from one stream and writes it to another. It can operate in either block mode or single character mode. The single character mode is necessary for the encryption streams, because if you do a block mode read on the encryption stream, it won't return until it fills the entire block. You don't necessarily want it that way. Listing 28.6 Source Code for StreamBridge.java import java.io.*; // This is a generic class for connecting one stream to another. // It has a callback to notify you when a stream closes, and // will operate in either single-character or block-read mode public class StreamBridge extends Object implements Runnable { InputStream in; OutputStream out; BridgeCloseCallback callback; Thread bridgeThread; boolean blockRead; public StreamBridge(InputStream in, OutputStream out, BridgeCloseCallback callback, boolean blockRead) { this.in = in; this.out = out; this.callback = callback; this.blockRead = blockRead; } public void run() { int ch; try { // If we support block read, create a block and read as much as // we can into it each time if (blockRead) { byte[] block = new byte[1024]; int len = 0; // Keep reading blocks. We flush the output just in case. while ((len = in.read(block)) > 0) { out.write(block, 0, len); out.flush(); } } else { // If we aren't in block-read mode, read a character, write a character // and flush the stream after writing each char. while ((ch = in.read()) >= 0) { out.write((char)ch); out.flush(); } } } catch (Exception error) { } callback.bridgeClosed(); stop(); } public void start() { bridgeThread = new Thread(this); bridgeThread.start(); } public void stop() { bridgeThread.stop(); bridgeThread = null; } } You can use this same framework to implement other secure protocols. For example, you could use the POP3 and SMTP classes from Chapter 11, "Sending E-Mail from an Applet," and create a secure mail system. This framework is essentially the "Poor Man's SSL." It uses SSL to get the initial setup information back and forth, and then uses other encryption for the rest of the session. Implementing a Multiclient Secure Server If you want to implement a secure server that handles multiple simultaneous sessions, you have to do a little more work. As it turns out, however, if you are careful in the design of your single-client server, you can reuse large amounts of it. The difference between the single-client and multiclient approach is that the multiclient server is always running, whereas the single-client server is spawned by the CGI program. The interesting thing here is that the CGI program has to communicate with both servers in a similar way. It needs to know the port number and session key regardless of the type of server implementation. Similarities like this are usually an indication that there may be some code reuse brewing somewhere. For a multiclient server, your CGI program can use a socket connection to get the port number and session key. Since the SingleSecureServer is already set up to run as a thread, it is ridiculously simple to make a multiclient server that just spawns new instances of a SingleSecureServer every time a CGI program connects to it. It's ridiculously simple now, but it wasn't in the first design of the SingleSecureServer. Originally, the SingleSecureServer didn't run as a thread. Its run method was called something else, and it didn't implement Runnable. When it came time to create a multiclient server, I realized that it took only a few changes to adapt the single-client server so it could run either way. This is what I call "reuse by re-engineering." Sometimes, you go to design a new system and discover that there is another class that almost works for you, but not quite. With a little change, the class could do what you need and still perform its previous functions. You run the risk of introducing bugs back into the original system, but it surely beats having to keep two copies of almost identical code. As you make these small design changes, try to make a note of what you had to change. Not necessarily a detailed description, but a general one. You will start to see common themes-design strategies that you have taken in the past that inhibit code reuse. The next time you design a class, keep those previous problems in mind, and maybe your class will not need to be changed. This accumulated knowledge is what really makes a good object-oriented designer. To support multiple simultaneous clients using the SingleSecureServer framework, you need a server that listens for socket connections from the CGI program and then spawns new SingleSecureServer threads. Listing 28.7 shows the MultiLoginServer class that does this. Listing 28.7 Source Code for MultiLoginServer.java import java.net.*; import java.io.*; // This class is responsible for creating a random session // key and for creating the class to handle a new client. public class MultiLoginServer extends Object implements Runnable { protected Thread thread; protected ServerSocket serverSock; public MultiLoginServer(int listenPort) throws IOException { // Create the socket that CGI program will connect to serverSock = new ServerSocket(listenPort); } public void run() { while (true) { Socket sock = null; // Accept a new client connection try { sock = serverSock.accept(); } catch (Exception ignore) { continue; } // Spawn a server to handle the new connection try { SecureLoginServer server = new SecureLoginServer( sock.getOutputStream()); server.start(); // If there was an error, close down the socket } catch (Exception oops) { try { sock.close(); } catch (Exception ignore) { } } } } public void start() { Thread thread = new Thread(this); thread.start(); } public void stop() { thread.stop(); thread = null; } public static void main(String[] args) { int port = 1234; // Allow the port address to be set as a property String portStr = System.getProperty("port"); // Parse the port address try { port = Integer.parseInt(portStr); } catch (Exception ignore) { } // Start the server try { MultiLoginServer server = new MultiLoginServer(port); server.start(); } catch (Exception e) { e.printStackTrace(); System.exit(1); } } } Finally, the CGI program that connects to the multiclient server is similar to the single-client CGI program. The only difference between the two programs is in their main methods. This being the case, it makes sense to just create a subclass of the single-client CGI program and create a new main. Listing 28.8 shows the multiclient CGI program. Listing 28.8 Source Code for MultiLoginStartup.java import java.net.*; import java.io.*; // This is a CGI program that starts up a secure Telnet session // using a multi-client server that should already be running. // It connects to the server to get the port number and session // key which it passes back to the client. public class MultiLoginStartup extends SecureLoginStartup { public static void main(String[] args) { try { int port = 1234; // Allow the port to be set as a system property String portStr = System.getProperty("port"); // Parse the port value try { port = Integer.parseInt(portStr); } catch (Exception e) { } // Connect to the login server Socket sock = new Socket(InetAddress.getLocalHost(), port); // create an input stream for reading the parameters back from the server DataInputStream in = new DataInputStream( sock.getInputStream()); // Read the client port String portLine = in.readLine(); int clientPort = Integer.parseInt(portLine); // Read the session key String sessionKey = in.readLine(); // Send the web page to the browser sendNormalResponse(clientPort, sessionKey); } catch (Exception e) { sendErrorResponse(e.toString()); return; } } } Creating Other Secure Remote Access Programs You can use the SingleSecureServer class as a framework for implementing other secure remote access programs. Frankly, this solution is not the optimal one, because you have to work on the socket level. With RMI and CORBA available in Java, you shouldn't have to open up raw sockets for simple communications. They are still useful when you need to grab a large chunk of raw data, but for passing messages back and forth, sockets are a step backwards. Be on the lookout for an ORB product that supports secure communications. If there isn't one now, there should be soon. Security is a growing issue on the Internet.
http://www.webbasedprogramming.com/Java-Expert-Solutions/ch28.htm
CC-MAIN-2013-48
refinedweb
5,415
54.63
User Tag List Results 1 to 3 of 3 - Join Date - Jul 2004 - Location - NC - 194 - Mentioned - 0 Post(s) - Tagged - 0 Thread(s) Syntax error in class declaration extends MovieClip So, I'm trying to get used to OOP flash and I'm getting kicked even before I start. I have the beginning of the class which is implemented as such Code: class AuthorizeClass extends Movieclip { } #include "AuthorizeClass.as" and I get this error: AuthorizeClass.as: Line 1: Syntax error. class AuthorizeClass extends Movieclip What gives? - Join Date - Jan 2006 - 268 - Mentioned - 0 Post(s) - Tagged - 0 Thread(s) class AuthorizeClass < Movieclip is how I've always seen inheritance in ruby..If you give someone a program, you will frustrate them for a day; if you teach them how to program, you will frustrate them for a lifetime. - Join Date - May 2001 - Location - Kaysville, UT - 68 - Mentioned - 0 Post(s) - Tagged - 0 Thread(s) 1. This is the Ruby forum - your question is about ActionScript. 2. It has been a while since I used ActionScript, but since it is prototype based, here is how you'd do it: function AuthorizeClass(){ } AuthorizeClass.prototype = new MovieClip(); // Then add your methods like this: AuthorizeClass.prototype.newMethod = function(){ // Do stuff };Spencer Uresk Rails App Hosting Cheap Hosting - Directory of hosting plans under $10/month! My blog: Tech and Java Articles Bookmarks
http://www.sitepoint.com/forums/showthread.php?513529-Syntax-error-in-class-declaration-extends-MovieClip&p=3611603
CC-MAIN-2015-40
refinedweb
228
63.49
The greatest QGIS release ever! QGIS 3: Application and Project Options Feature: Bug fixes by Alessandro Pasotti Feature: Bug fixes by Alexander Bruy Feature: Bug fixes by Jürgen Fischer Feature: Bug fixes by Peter Petrik Feature: Bug fixes by Julien Cabieces Feature: Bug fixes by Loïc Bartoletti Feature: Bug fixes by Victor Olaya Feature: Bug fixes by Even Rouault Feature: Bug fixes by Martin Dobias Feature: Bug fixes by. from_json to_json Allows expressions like: array(1,2,3)[0] -> 1 array(1,2,3)[2] -> 3 array(1,2,3)[-1] -> 3 (Python style, negative indices count from end of array) array(1,2,3)[-3] -> 1 map(‘a’,1,’b’,2)[‘a’] -> 1 map(‘a’,1,’b’,2)[‘b’] -> feature was funded by North Road This feature was developed by Nyall Dawson (North Road). This feature was developed by Mathieu Peller) add dX, dY and residual on GCP Points add option to automatically save GCP Points in the raster-modified path Nyall Dawson (North Road) This feature was developed by Corentin Falcone (SIRS). This feature was funded by SMEC/SJ This feature was developed by Nyall Dawson (North Road).) was funded by North Road This feature was developed by Nyall Dawson (North Road)!)) This feature was developed by Nyall Dawson (North Road). For feature was funded by SMEC/SJ This feature was developed by Nyall Dawson (North Road) new algorithm allows users to extract binary fields to files. This feature was funded by SMEC/S funded by SMEC/SJ This feature was developed by Nyall Dawson (North Road) This feature was developed by Alex Bruy This algorithm calculates statistics for a raster layer’s values, categorized by zones defined in another raster layer. This feature was developed by Nyall Dawson (North Road) feature was developed by Nyall Dawson (North Road) feature was developed by Nyall Dawson (North Road). This feature was developed by Nyall Dawson (North Road). This feature was developed by Nyall Dawson (North Road)). This feature was funded by North Road This feature was developed by Nyall Dawson (North Road). This feature was funded by North Road This feature was developed by Nyall Dawson (North Road)). This feature was funded by SMEC/SJ This feature was developed by Nyall Dawson (North Road).)! This feature was developed by Nyall Dawson (North Road) This feature was funded by A.R.P.A. Piemonte This feature was developed by Alessandro Pasotti. This feature was funded by SMEC/SJ This feature was developed by Nyall Dawson (North Road) This feature was funded by A.R.P.A. Piemonte This feature was developed by developed by Mathieu Pellerin ‘application/json’ or ‘application feature was developed by Nyall Dawson (North Road)())()) This feature was funded by North Road This feature was developed by Nyall Dawson (North Road) Allows the following to define processing scripts without the need for implementing a custom class: from qgis.processing import alg. This feature was developed by Nyall Dawson (North Road) See This feature was funded by SMEC/SJ This feature was developed by This feature was developed by Nyall Dawson (North Road) developed by Loïc Bartoletti (Oslandia)) With this feature, you can use mesh layer in 3D scene, This feature was funded by Lutra Consulting This feature was developed by Peter Petrik (Lutra Consulting)
https://www.qgis.org/en/site/forusers/visualchangelog36/index.html
CC-MAIN-2021-43
refinedweb
547
55.58
Navigation An Anvil app is composed of Forms, and navigation is simply a matter of displaying the correct Form. Displaying a new page To display a new page, call open_form() (from the anvil module), and pass in the instance of the Form that you’d like to display as your new page. When you do this, the new Form takes over entirely and becomes the top-level Form. The old Form is no longer visible. If you pass any extra parameters into open_form(), they are passed to the new Form’s constructor: class Form1(Form1Template): # ... def btn1_click(self, **event_args): open_form('Form2', my_parameter="an_argument") As well as specifying a Form by name, you can also create an instance of the Form yourself and pass it in. This code snippet is equivalent to the previous one: from Form2 import Form2 class Form1(Form1Template): # ... def btn1_click(self, **event_args): frm = Form2(my_parameter="an_argument") open_form(frm) from Form2 import Form2. Replacing just the main content Since Forms can be contained inside other Forms, you don’t need to throw away all of your current Form in order to display a new Form. For example, you might have a top-level Form with a menu bar that needs to be visible all the time, and display all other app content as panels within that page. An app with a nav bar that replaces just the main content. You can use get_open_form() to get a reference to the currently-open Form object. You can then use that to change what is displayed in the top-level Form, even from deep within a nested structure of Forms and components. Just clear the contents from the main container on the Form, and instantiate a new Form inside that container: # import Form2 to make it available inside the top-level Form from Form2 import Form2 new_panel = Form2() # The top-level form has a component called # column_panel. Clear it and put a new Form2() panel there: get_open_form().content_panel.clear() get_open_form().content_panel.add_component(new_panel) Creating a nav bar A common way to handle navigation is to use a nav bar. You can do this by creating a set of Links in the top-level Form and using them to switch the main contents as described above. Here’s a worked example of how to do that, and some tips. Worked example In the Material Design theme, you can drop a Column Panel into the sidebar and drop Link components into that Column Panel. Each Link can have a ‘click’ event handler to manage navigation in your app by running the code from either example above when a Link is clicked. Creating an event handler for the ‘Page 1’ link. In the example below, Form1 is an app with a navbar, and Page1 and Page2 are ‘content’ Forms that you want to switch between using the navbar. Each click handler inserts the relevant ‘content’ Form into Form1 when the link is clicked: # import Page1 to make it available inside Form1 from Page1 import Page1 # import Page2 to make it available inside Form1 from Page2 import Page2 class Form1(Form1Template): # ... def link_1_click(self, **event_args): # Clear the content panel self.content_panel.clear() # Add Page1 to the content panel self.content_panel.add_component(Page1()) def link_2_click(self, **event_args): # Clear the content panel self.content_panel.clear() # Add Page2 to the content panel self.content_panel.add_component(Page2()) This will replace the main content of the page each time a Link is clicked. Making a Link look ‘selected’ You can change the look of the Link that was clicked to show the user which page they’re on. For example, in the Material Design theme you can set the Link’s role to selected: from Page1 import Page1 from Page2 import Page2 class Form1(Form1Template): # ... def reset_links(self, **event_args): self.link_1.role = '' self.link_2.role = '' def link_2_click(self, **event_args): # Set link_2 to look 'selected' self.reset_links() self.link_2.role = 'selected' # Add Page2 to the main panel self.content_panel.clear() self.content_panel.add_component(Page2()) Advanced tip: using one click handler for all Links You don’t have to write one click handler per Link. You can get which Link has been clicked from the event_args['sender'], so you can use that to work out which Form to instantiate. For example, you can set each Link’s tag attribute to the name of the Form you want to open (or an instance of that Form). The following code snippet gives an example of a click handler that you could bind to each nav link in your top-level Form: def nav_link_click(self, **event_args): """A generalised click handler that you can bind to any nav link.""" # Find out which Form this Link wants to open form_to_open = event_args['sender'].tag.form_to_open self.content_panel.clear() self.content_panel.add_component(form_to_open) Using the URL hash get_url_hash() gets the decoded hash (the part after the ‘#’ character) of the URL used to open this app. self.label_1.text = "Our URL hash is: %r" % get_url_hash() You can create a URL-based navigation system by opening a particular Form depending on the URL hash: if get_url_hash() == 'stats': from Stats import Stats self.content_panel.clear() self.content_panel.add_component(Stats()) elif get_url_hash() == 'analysis': from Analysis import Analysis self.content_panel.clear() self.content_panel.add_component(Analysis()) Query params If the first character of the hash is a question mark (e.g.), it will be interpreted as query-string-type parameters and returned as a dictionary (e.g. {'a': 'foo', 'b': 'bar'}). get_url_hash() is available in Form code only.
https://anvil.works/docs/client/navigation
CC-MAIN-2019-35
refinedweb
913
61.26
--- a/doc/editing.txt +++ b/plugins/Rot13/Cmd_rot13.java @@ -1,48 +1,59 @@ -ADVANCED TEXT EDITING WITH JEDIT (editing.txt, last modified 4 Oct 1998) +/* + * Cmd_rot13.java - Simple plugin + * Copyright (C) 1998 Slava Pest. Markers -2. Auto Indent -3. Go To Line -4. Search And Replace -5. Open Selection -6. Editing URLs +import com.sun.java.swing.JTextArea; +import java.util.Hashtable; -1. Markers ----------- -Most other editors can set `markers' in the text, and go to those markers. -jEdit's markers, however are persistent across editing sessions and can have -names of any length. Markers are set with the `Edit->Set Marker' command, -cleared with `Edit->Clear Marker' and jumped to with `Edit->Go To Marker'. +public class Cmd_rot13 implements Command +{ + public Object init(Hashtable args) + { + return null; + } -2. Auto Indent --------------- -This one's useful if you're using jEdit to edit program source code. When -you press return, any white space or comment characters from the start of -the previous line is copied to the start of the new one. Try it out, it's -very useful. + public Object exec(Hashtable args) + { + View view = (View)args.get(VIEW); + if(view != null) + { + JTextArea textArea = view.getTextArea(); + String selection = textArea.getSelectedText(); + if(selection != null) + textArea.replaceSelection(rot13(selection)); + else + view.getToolkit().beep(); + } + return null; + } -3. Go To Line -------------- -The `Edit->Go To Line' command moves the caret to the start of the specified -line. Useful for editing source code, for example. - -4. Search And Replace ---------------------- -Not finished yet. - -5. Open Selection ------------------ -The `File->Open Selection' command uses the current selection as the name of -a file to open. This is useful in help files, for example. - -6. Editing URLs ---------------- -While everyone else is talking about full Internet integration, jEdit is -doing it! The `File->Open URL' and `File->Save To URL' commands enable jEdit -to edit files on the Internet. The protocols supported depend on your Java -Virtual Machine. Sun's one supports http and ftp. - --- Slava Pestov -<slava_pestov@geocities.com> + private String rot13(String str) + { + char[] chars = str.toCharArray(); + for(int i = 0; i < chars.length; i++) + { + char c = chars[i]; + if(c >= 'a' && c <= 'z') + c = (char)('a' + ((c - 'a') + 13) % 26); + else if(c >= 'A' && c <= 'Z') + c = (char)('A' + ((c - 'A') + 13) % 26); + chars[i] = c; + } + return new String(chars); + } +}
http://sourceforge.net/p/jedit/jEdit.bak/ci/9c214013b9b832d56cb33edc63c3038f34e2183e/
CC-MAIN-2015-22
refinedweb
383
70.19
Next: ANS Forth locals, Previous: Locals, Up: Locals Locals can be defined with { local1 local2 ... -- comment } or { local1 local2 ... } E.g., : max { n1 n2 -- n3 } n1 n2 > if n1 else n2 endif ; The similarity of locals definitions with stack comments is intended. A locals definition often replaces the stack comment of a word. The order of the locals corresponds to the order in a stack comment and everything after the --., F: for a floating point value: : CX* { F: Ar F: Ai F: Br F: Bi -- Cr Ci } \ complex multiplication Ar Br f* Ai Bi f* f- Ar Bi f* Ai Br f* f+ ; Gforth currently supports cells ( W:, W^), doubles ( D:, D^), floats ( F:, F^) and characters ( C:, C^) in two flavours: a value-flavoured local (defined with W:, D: etc.) produces its value and can be changed with TO. A variable-flavoured local (defined with W^ etc.) produces its address (which becomes invalid when the variable's scope is left). E.g., the standard word emit can be defined in terms of type like this: : emit { C^ char* -- } char* 1 type ; A local without type specifier is a:
http://www.complang.tuwien.ac.at/forth/gforth/Docs-html/Gforth-locals.html
CC-MAIN-2018-13
refinedweb
189
69.72
This week we’ll create an email notification system using the Raspberry Pi. The idea is to check for new email, and flash an LED when we get one. Connecting to Gmail The circuit will be extremely straight-forward, so let’s focus on the more difficult part first – connecting to an email service. We need to create a secure connection to our email provider, so we can find out when new mail arrives. Do a quick search, and you’ll likely find scripts like this one where you just connect with your username, password and a few other pieces of info depending on who the provider is. But what you can do will be extremely limited, and the code will be fragile. Find the Official API The preferred option is to check whether your email provider has already provided an “official” way to connect to their system and retrieve data from it. These are commonly referred to as APIs, or application programming interfaces. When a company provides an API, that means they’ve put real time and effort into exposing certain areas of their system to you (although you’ll still have to do some work of your own, as we’ll see), and you can access those systems in relative confidence that how you’re connecting won’t just change or break. Most of the major email services provide an API, including MS Office, Yahoo Mail, and Gmail. (Note: At the beginning of the week, I thought I’d implement all three of those, to demonstrate how each works. But understanding an API enough to properly implement it takes time, and I just didn’t have enough of it. Gmail is the only service I use, so that’s the one I focused on.) Authenticating With Gmail, the API was my only option. Every time I tried a Python script I found, I’d get the following error: imaplib.error: [ALERT] Application-specific password required: (Failure) The problem is that I have 2-Step Verification enabled, and the script can’t get past that. That’s okay… 2FA is a good thing, and we don’t want to disable that. There’s another, more secure and stable, way to access a Gmail account. Google provides an API for connecting to most of their systems (including Gmail), along with tutorials to implement it in multiple languages (including Python). We can use the API to access messages or pretty much any other aspect of our email account. This process involves entering a few details on their side, and then they assign you some special numbers (a “client id” and a “client secret”). You download those numbers in a special file called “client_secret.json” and include it with your script, which in turn helps prove that the script is authorized to access your account. Follow their Python Quickstart. Note the other languages on the left too, if you’d rather try one of those. After you authenticate, their sample script prints a list of your email tags: Getting the Unread Mail Count Once authenticated, their API gives you access to all kinds of useful info about your account. We’ll just modify the sample script they provided. Most of their script is just about authenticating to Gmail, so don’t touch any of that. The following two lines got the list of labels above, and those are the ones we’ll change. results = service.users().labels().list(userId='me').execute() labels = results.get('labels', []) But what do we change them to? To find out which API calls we need to make, we’ll delve into the APIs Explorer for the Gmail API. It’s a nice tool, where you can browse through all the available API calls, and even try them out, all from within your browser. The one we need is gmail.users.messages.list, so we can get a list of messages (and then count them). But we’re not interested in *all *email, just a subset. How do we do that? Check out that screen capture above again. There’s a hidden label called “UNREAD” that’ll work nicely. The API allows you to place filters (such as userId='me') inside the call to list(), and one of them is a query string (denoted as q=... below). messages = service.users().messages().list(userId='me', q='is:inbox + is:unread').execute() unread_count = messages['resultSizeEstimate'] By using is:inbox + ``is:unread we can get emails with both of those labels. The json response we get back includes two keys – one is a list of messages, while the other is the number of messages. We can use the latter to get our message count. { "messages": [ { "id": "xxxxxxxxxxxxxxxx", "threadId": "xxxxxxxxxxxxxxxx" } ], "resultSizeEstimate": 1 } Here’s the main portion of the script. I split most of the Gmail stuff into a separate file when I intended to implement several different email providers. I ended up not having enough time, but it’s still nice to have the code somewhat organized. The code that does the actual authorization to Gmail is in a third file, which you can find along with the entire project on Github. Gmail.py: from apiclient import errors import threading import time import RPi.GPIO as GPIO import GmailAuthorization PIN = 35 CHECK_INTERVAL = 30 service = None unread_count = 0 def refresh(): global unread_count try: messages = service.users().messages().list(userId='me', q='is:inbox + is:unread').execute() unread_count = messages['resultSizeEstimate'] except errors.HttpError as error: print('An error occurred: {0}'.format(error)) def indicator(): while True: if unread_count > 0: GPIO.output(PIN, not GPIO.input(PIN)) else: GPIO.output(PIN, GPIO.LOW) time.sleep(0.5) def monitor(): while True: refresh() time.sleep(CHECK_INTERVAL) def start_indicator(): t = threading.Thread(target=indicator) t.daemon = True t.start() def start_monitor(): t = threading.Thread(target=monitor) t.daemon = True t.start() def load_service(): global service service = GmailAuthorization.get_service() def start(): load_service() start_indicator() start_monitor() NewEmailIndicator.py: import RPi.GPIO as GPIO import Gmail CHECK_NOW_PIN = 12 def initialize_gpio(): GPIO.setmode(GPIO.BOARD) GPIO.setup(Gmail.PIN, GPIO.OUT) GPIO.setup(CHECK_NOW_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) GPIO.add_event_detect(CHECK_NOW_PIN, GPIO.RISING, callback=check_mail_now, bouncetime=1000) def check_mail_now(_): Gmail.refresh() def main(): try: initialize_gpio() Gmail.start() raw_input("\nPress any key to exit.\n") finally: GPIO.cleanup() if __name__ == '__main__': main() Designing the Circuit Once we’ve got a script that’ll connect to an email account, retrieve the data we’re interested in, and turn a GPIO pin on or off based on what we find, the next step is to create the circuit. It’s a fairly simple one, limited to just an LED and a resistor, connecting board pin 35 to ground. I’ve also added a button (and second resistor) to the circuit, connected to board pin 12, that allows us to immediately check for new email without waiting for the interval. That way, we can change CHECK_INTERVAL to some less-frequent number like 60 (a minute), but then press the button if we don’t feel like waiting. That’s what the add_event_detect line is for in the above code. Here are some photos of the actual circuit. Questions? Comments? Hit me up in the comments section below, or on Twitter. More Reading Some extra reading if you’re interested. Just some stuff I came across while doing research this week. Gmail - Python Quickstart - Gmail API Client Library for Python - Gmail API Reference - Gmail API: Users.messages:list - Google APIs Explorer: Gmail - Google APIs Explorer (gmail.users.messages.list) - Python library for accessing resources protected by OAuth 2.0 - Python client library for Google’s discovery based APIs
https://grantwinney.com/how-to-flash-an-led-on-your-raspberry-pi-when-you-get-new-email/
CC-MAIN-2019-39
refinedweb
1,274
65.83
import datetime,os thisdate = datetime.date.today() foldername = "c:\photos\source\%04d\%04d%02d%02d" % (thisdate.year,thisdate.year,thisdate.month,thisdate.day) try: os.makedirs(foldername) except WindowsError, e: print e.strerror if e.winerror <> 183: input() os.system("explorer "+foldername) I'll comment it up and add some more logic later, but the gist of it is that it automates the creation and opening of a folder which I do when I transfer photos. It'll save about 30 seconds each time it's run. One small set on a long journey. --Mike-- PS: I used a variant of this code to colorize/HTML the code.
http://mikewarot.blogspot.com/2007_05_01_archive.html
CC-MAIN-2014-52
refinedweb
108
63.15
adjacent_find Searches for two adjacent elements that are either equal or satisfy a specified condition. Parameters - _First - A forward iterator addressing the position of the first element in the range to be searched. - A forward iterator addressing the position one past the final element in the range to be searched. - _Comp - The binary predicate giving the condition to be satisfied by the values of the adjacent elements in the range being searched. Return Value A forward iterator to the first element of the adjacent pair that are either equal to each other (in the first version) or that satisfy the condition given by the binary predicate (in the second version), provided that such a pair of elements is found. Otherwise, an iterator pointing to _Last is returned. Remarks The adjacent_find algorithm is a nonmutating sequence algorithm. The range to be searched must be valid; all pointers must be dereferenceable and the last position is reachable from the first by incrementation. The time complexity of the algorithm is linear in the number of elements contained in the range. The operator== used to determine the match between elements must impose an equivalence relation between its operands. Example // alg_adj_fnd.cpp // compile with: /EHsc #include <list> #include <algorithm> #include <iostream> // Returns whether second element is twice the first bool twice (int elem1, int elem2 ) { return elem1 * 2 == elem2; } int main( ) { using namespace std; list <int> L; list <int>::iterator Iter; list <int>::iterator result1, result2; L.push_back( 50 ); L.push_back( 40 ); L.push_back( 10 ); L.push_back( 20 ); L.push_back( 20 ); cout << "L = ( " ; for ( Iter = L.begin( ) ; Iter != L.end( ) ; Iter++ ) cout << *Iter << " "; cout << ")" << endl; result1 = adjacent_find( L.begin( ), L.end( ) ); if ( result1 == L.end( ) ) cout << "There are not two adjacent elements that are equal." << endl; else cout << "There are two adjacent elements that are equal." << "\n They have a value of " << *( result1 ) << "." << endl; result2 = adjacent_find( L.begin( ), L.end( ), twice ); if ( result2 == L.end( ) ) cout << "There are not two adjacent elements where the " << " second is twice the first." << endl; else cout << "There are two adjacent elements where " << "the second is twice the first." << "\n They have values of " << *(result2++); cout << " & " << *result2 << "." << endl; } Output See Also <algorithm> Members | Nonpredicate Version of adjacent_find Sample | Predicate Version of adjacent_find Sample
https://msdn.microsoft.com/en-us/library/k2622yw5(v=vs.71).aspx
CC-MAIN-2015-35
refinedweb
373
58.08
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4) #include <file_config.h> BOOL ReadSect ( U32 sect, /* Absolute sector address. */ U8* buf, /* Location where to write the data. */ U32 cnt); /* Number of sectors to read. */ The function ReadSect reads one or more sectors from the FAT Device to a buffer. The parameter sect specifies the starting sector from where data are read. The parameter buf is a pointer to the buffer that stores the data. The parameter cnt specifies the number of blocks to read. The function is part of the FAT Driver. The prototype is defined in the file File_Config.h. Developers must customize the function. fat.DeviceCtrl, fat.Init, fat.ReadInfo, fat.UnInit, fat.WriteSect /* USB-MSC Device Driver Control Block */ FAT_DRV usb0_drv = { Init, UnInit, ReadSector, WriteSector, ReadInfo, CheckMedia }; /* Read single/multiple sectors from Mass Storage Device. */ static BOOL ReadSector (U32 sect, U8 *buf, U32 cnt) { return (usbh_msc_read .
https://www.keil.com/support/man/docs/rlarm/rlarm_fat_readsect.htm
CC-MAIN-2020-34
refinedweb
152
52.05
With the release of iOS 13, both Android and iOS now support dark mode. Having support for dark mode in your app is not just a nice extra, it is a core requirement if you want your app to fit in with the OS. But at time of writing, there is no official way for React Native apps to support dark mode. In my search for a simple clean way to implement theming, I decided to write a small library for this: react-native-themed-stylesheets. It is dead simple to use, because it builds on existing structures such as StyleSheet and hooks. It also does not impose any structure on your theme, which means you can use it not only for light/dark mode, but also for spacing, fonts, other colours, or whatever you dream up. (TL;DR If you just want to see the complete code, scroll down to the end of this article) Defining your themes The first thing you want to do is define and register your themes. I this example I'm just going to use a light and dark theme. First, we define our two themes, which we then pass to the registerThemes function: // themes.ts import { registerThemes } from "react-native-themed-stylesheets" const light = { backgroundColor: "white", textColor: "black" } const dark = { backgroundColor: "black", textColor: "white" } const styleSheetFactory = registerThemes( { light, dark }, () => "light" ) export { styleSheetFactory } This will return a factory function that you can use to create themed stylesheets. The registerThemes function takes a second argument which is a callback that returns the name of the default theme. In this case we make it just return "light", which means that our app will default to a light theme. Creating stylesheets from your themes Now that we have our stylesheet factory, we can use it to create a themed stylesheet. This factory function behaves almost the same as StyleSheet.create, with the exception that your theme is passed as a parameter to the callback function. In the following snippet, we create two styles: container and text. For both styles, we refer to a variable that we defined in our theme: // my-component.tsx import { styleSheetFactory } from "./themes" const styles = styleSheetFactory(theme => ({ container: { backgroundColor: theme.backgroundColor, flex: 1 }, text: { color: theme.textColor } })) Applying stylesheets to your components Finally, we must apply our styles to our components. For this we use the useTheme hook. It takes the themed stylesheet we just created, and optionally a name of a theme to use. It will then compute the component styles with that theme applied: // my-component.tsx import { useTheme } from "react-native-themed-stylesheets" // const styles = styleSheetFactory(...) const MyComponent = () => { const [styles] = useTheme(styles, "dark") return ( <View style={styles.container}> <Text style={styles.text}>Hello there</Text> </View> ) } Switching theme based on the OS appearance In the example above, we manually told the useTheme hook to apply the "dark" theme. Instead of specifying this yourself, you usually want this to automatically mirror the OS theme. Fortunately this is very easy to do using the react-native-appearance package. In this snippet, we retrieve the OS theme using useColorScheme(), and then return the appropriate application theme. If for whatever reason the OS theme is not "light" or "dark", we default to using the light theme. So even if in the future a "pink" theme will be supported on the OS level, our app won't break but gracefully degrade. // themes.ts import { useColorScheme } from "react-native-appearance" import { registerThemes } from "react-native-themed-styles" const styleSheetFactory = registerThemes({ light, dark }, () => { const colorScheme = useColorScheme() return ["light", "dark"].includes(colorScheme) ? colorScheme : "light" }) That's it! I hope you liked this short introduction to theming in React Native. If you want to try out the package, you can find it at GitHub or NPM. wvteijlingen / react-native-themed-styles Dead simple theming for React Native stylesheets Complete code import { registerThemes, useTheme } from "react-native-themed-stylesheets" import { useColorScheme } from "react-native-appearance" // 1. Register your themes const styleSheetFactory = registerThemes({ light: { backgroundColor: "white", textColor: "black", image: require("./light.png") }, dark: { backgroundColor: "black", textColor: "white", image: require("./dark.png") } }, () => { const colorScheme = useColorScheme() return ["light", "dark"].includes(colorScheme) ? colorScheme : "light" }) // 2. Create a stylesheet const styles = styleSheetFactory(theme => ({ container: { backgroundColor: theme.backgroundColor, flex: 1 }, text: { color: theme.textColor } })) // 3. Apply the styles const MyComponent = () => { const [styles, theme, themeName] = useTheme(styles) return ( <View style={styles.container}> <Text style={styles.text}>{`You are viewing the ${themeName} theme`}</Text> <Image source={theme.image} /> </View> ) } Discussion (0)
https://dev.to/wvteijlingen/dead-simple-theming-and-dark-mode-in-react-native-2l09
CC-MAIN-2022-05
refinedweb
743
57.16
length()function works by traversing the entire list until the Next pointer is invalid. Unfortunately, it segfaults each time, just after tmpbecomes invalid. Node *First, while Node contains a member Node *Next. Both of these are set by default to null in the relevant constructors. Nextsomehow (and consistently) ends up being 0x1 when it reaches the end. Therefore, Next is invalid but not null. I don't know where this might occur, however, because the only assignments to Nextanywhere are either Next = nullptror Next = new Node isLast()function, whose body is literally return (bool)Nextseems to have cleared up the issue of Nextequaling 0x1
http://www.cplusplus.com/forum/beginner/118349/
CC-MAIN-2014-41
refinedweb
104
56.96
<p>This IRC Meeting is planned for 18:00 UTC on Wednesday, 4 April>Calendar & updates</h2> <p>From Atlassian there are a new product call teamCalendar and should be add to confluence to manage meetings (and to display calendar) <ac:emoticon ac:. Therefore, Confluence is outdated, there are a lot of really good improvements in the last version, should be update.</p> <ac:macro ac:<ac:plain-text-body><![CDATA[ ]]></ac:plain-text-body></ac:macro> <h2>OAuth 2.0 integration and where it should live</h2> <p>There has been some discussion in IRC on whether the OAuth 2.0 component should live in Zend\OAuth2, Zend\OAuth\OAuth2, ZendService\OAuth2 or some hybrid of the above. Where should OAuth 2.0 call it's home?</p> <ac:macro ac:<ac:plain-text-body><![CDATA[ ]]></ac:plain-text-body></ac:macro> <h2>Zend\Math RFC</h2> <p>Zend\Math namespace should contain the common used math-related classes in other namespaces. This improves reusability. At the moment of writing, Zend Framework uses a BigInteger class within the Zend\Crypt namespace. This should be refactored to the Zend\Math namespace in order to improve reusability. Also within the Zend\Locale namespace mathematical functions are often used such as a custom round function, normalize, etc. There even is a special class devoted to doing this (Zend\Locale\Math).<br /> The Zend\Math RFC is <a href="">here</a>.</p> <ac:macro ac:<ac:plain-text-body><![CDATA[ ]]></ac:plain-text-body></ac:macro> 3 Commentscomments.show.hide Apr 02, 2012 Jeffrey Ah <p>Poll is ok whether to update Confluence or not IMO.</p> Apr 04, 2012 Mike Willbanks <p>IE: Confluence - The largest issue here is that if we do upgrade confluence the proposal system that was based on confluence will no longer work as the plugin that handles it is no longer maintained. While I agree that some of the features are very nice and handy (we use it internally at the company I am at); I'm not sure it makes sense without finding a way to preserve the current proposals inside of it today.</p> Apr 04, 2012 Matthew Weier O'Phinney <p>Agenda items should really have had <em>some</em> discussion on the list or IRC before being brought up here. I'm voting against discussing teamCalendar as the first I heard of it was today when checking the agenda. Additionally: we're trying to reduce the number of tools we host ourselves, not add to them (as maintaining them is a time sink); such a tool would require anybody wanting the calendar to be registered (which is not the case now).</p>
http://framework.zend.com/wiki/display/ZFDEV2/2012-04-04+Meeting+Agenda?focusedCommentId=46793574
CC-MAIN-2014-52
refinedweb
449
55.95
Three Easy SEO wins for Developers ! Discovery. I assume you know what is SEO, and why it’s a useful subject to be familiar with if you are building things or even developing for someone else. If you don’t, here the one line summary; Search Engine Optimization (SEO) is what determines where your site shows up on search engines. Show up in the right spot (and the right time for the user) and traffic goes up. Ideally, that traffic leads to sales. The problem with SEO is that it’s a notoriously complex subject with lots of marketing jargon and no definitive one source of truth. A marketer’s dream as good as any where it’s easy to get lost in the abundance of information. You won’t learn SEO here, that’s not doable in a Medium article. Rather, this going to be on the technical part of SEO. Simple things you can do with code that would make Google loves you. I have an MBA in marketing and developed as a solo-developer, so I may have a thing or two to say about the two subjects. Bonus: If you are a Django developer, I put my implementation there for you to copy! Although the concepts are widely applicable to web development in general. Three Easy Things! 1- META tags everywhere! Meta tags are the data about your data! It tells browsers, search engines, and web services what your site is about. Like an address on an envelope. They are easy to put together and an easy win. Here is my implementation of meta tags using Django templates to make sure they are on every page. <!-- Facebook meta tags --><meta property="og:title" content="Joyful.gifts: Joy, Everywhere!" /><meta property="og:description" content="The world needs more joy, gifting made simple" /><meta property="og:image" content="{% static 'images/facebook.png' %}" /><!-- Twitter meta tags --><meta name="twitter:card" content="summary_large_image" /><meta name="twitter:site" content="@Jonathan_adly_" /><meta name="twitter:creator" content="@Jonathan_adly_" /><meta name="twitter:title" content="Joyful.gifts: Joy, Everywhere!" /><meta name="twitter:description" content="The world needs more joy, gifting made simple" /><meta name="twitter:image" content="{% static 'images/twitter.png' %}" /><!-- Search engine meta tags --><meta name="description"content="Joyful.gifts is an automated gifting platform that always finds the perfect gift, at the best price." /><!-- title customized in every template w/ SEO keywords --><title>{% block title %} Joyful gifts {% endblock %}</title> Meta tags allows for social media posts to be rich in content, rather than a plain old link. It also allows search bots to correlate your page to key words that are being googled more easily. Here is a more detailed reading of the subject. 2- Sitemap and robots.txt A sitemap is just that — a map to guide search crawlers to your website. What’s really important in there, and what’s is not as important. If you have a blog, a bunch of articles, or even recipes in a food application you can highlight them in a sitemap. Sitemaps can get pretty complex as your project grows, but to start they are easy to do. Similarly, robots.txt is an optional file that tells search crawlers where they are not allowed to go. For example, if you have a section in the web-application that is only for employees, there are no reasons to have a search engine index that part. Here is my implementation using Django; settings.py INSTALLED_APPS = [..."django.contrib.sitemaps",... ]urls.py (in config/core folder) from django.contrib.sitemaps import GenericSitemap from django.contrib.sitemaps.views import sitemap from django.views.generic.base import TemplateView#import the model you want to highlight here. Blogs, Recipes, ect. from yourapp.models import Modelinfo_dict = {"queryset": Model.objects.all(),}urlpatterns = [...path("robots.txt",TemplateView.as_view(template_name="robots.txt", content_type="text/plain"),),path("sitemap.xml",sitemap,{"sitemaps": {"blog": GenericSitemap(info_dict, priority=0.7)}},name="django.contrib.sitemaps.views.sitemap",),]#priority default is 0.5 - the higher, the more important A robots.txt file would live in your base directory. User-Agent is the name of the bots you want to control (* is a directive to all bots coming to your site). Disallow or allow is where your directives to the bots lives. Here is an example: User-Agent: * Disallow: /employees-section/ In that file, I am telling all bots to not index urls with /employees-section. For fun, check out Medium’s robots.txt file: 3- JSON-LD JSON-LD is a special JSON for linked data. Basically, instead of letting the search bots do its own thing magically — figuring out what the title of your blog post, who wrote it, and what’s it all about without your input, JSON-LD gives it a little guidance. Spoon Feeding it relevant information by putting that information in a special script tag. How it all works can get quite complex — But luckily, implementing is not as hard. There are lots of examples over the internet of well-implemented JSON-LD. Here is my favorite site as well as my own implementation below using Django to dynamically fill the templates. <script type="application/ld+json">{ "@context": "","@type": "Article","headline": "{{post.title}}","alternativeHeadline": "Joyful.gifts is the place for the perfect gift in every occasion","image": "{% static 'images/svg7.svg' %}","author": "Joyful gifts","genre": "best gifts","keywords": "best gifts","wordcount": " {{post.content|wordcount}}","publisher": {"@type": "Organization","name": "Joyful gifts","logo": "{% static 'images/svg7.svg' %}""@type": "ImageObject","url": ""},"url": "{{post.get_absolute_url}}","mainEntityOfPage": {"@type": "WebPage","@id": ""},"datePublished": "{{post.timestamp}}","dateCreated": "{{post.timestamp}}","dateModified": "{{post.updated}}","description": "{{post.title}}","articleBody": "{{post.content}}"}</script> Conclusion. Like development, improving SEO nevers ends. There is always one more improvement to do. I do hope that you can pick up a thing or two from here though and can start that journey. I am happy to answer any specific questions about my journey or joyful.gifts — Best way is at Twitter
https://jonathan-adly.medium.com/three-easy-seo-wins-for-developers-ffeddbc5320b
CC-MAIN-2021-25
refinedweb
988
59.19
#include <FillStyle.h> A BitmapFill. BitmapFills can refer to a parsed bitmap tag or be constructed from bitmap data. They are used for Bitmap characters. Presently all members are immutable after construction. It is of course possible to change the appearance of the fill by changing the CachedBitmap it refers to. TODO: check the following: It may be necessary to allow setting the smoothing policy; the use of this should certainly be extended to non-static BitmapFills. How to smooth the bitmap. Whether the fill is tiled or clipped. Clipped fills use the edge pixels to fill any area outside the bounds of the image. Construct a BitmapFill from arbitrary bitmap data. TODO: check the smoothing policy here! Construct a static BitmapFill using a SWF tag. Copy a BitmapFill. The copied BitmapFill refers to the same bitmap id in the same movie_definition as the original. Get the actual Bitmap data. Referenced by gnash::AddStyles::operator()(). Get the matrix of this BitmapFill. Referenced by gnash::AddStyles::operator()(). Set this fill to a lerp of two other BitmapFills. Get the smoothing policy of this BitmapFill. Referenced by gnash::AddStyles::operator()(). Get the Type of this BitmapFill. BitmapFills are either tiled or clipped. Referenced by gnash::AddStyles::operator()().
http://gnashdev.org/doc/html/classgnash_1_1BitmapFill.html
CC-MAIN-2015-32
refinedweb
205
71.71
java beginers vector().add(10), new hashmap().setkey()} display(obj); display(object[] obj beginers - JavaMail = denominator / no; } public RationalClass add(RationalClass number java - Struts of questiond they may ask.i have scjp and scwcd and good knowledge in struts,hibernate. Hi Friend, Please visit the following link: http...java good morning sir.i have completed my mca at 2009.now i want Please help me to write a code for add contacts to groups in a website? Please help me to write a code for add contacts to groups in a website? Iam developing a site. In that site I need to write code for Groups. Groups contain Friends, Family and Others. If we want to add one person Struts - Struts Struts hi, I am new in struts concept.so, please explain... . what are needed the jar file and to run in struts application ? please kindly explain and details send to me. Hi friend, Please add struts.jar - Struts Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good...:// Thanks HOW TO BECOME A GOOD PROGRAMMER HOW TO BECOME A GOOD PROGRAMMER I want to know how to become good programmer Hi Friend, Please go through the following link: CoreJava Tutorials Here you will get lot of examples with illustration where you can PHP date add function ;format("d-m-Y").' : 5 Days'; date_add($date, new DateInterval...;'.$date->format("d-m-Y").' : 5 Years'; date_add($date, new... PHP Add Date function - date_add or DateTime::add function The date_add C Program....PLEASE HELP C Program....PLEASE HELP For this assignment, you are to write..., and pointers. I am lost and need a little help starting please *int fillArray... will normally be nValues if the input is good or zero if the input is bad. * double Java beginers>  ...; It is always ethical and professional to add a note which explains... and things like access, slow modems, bandwidths etc. Thus the job of a good designer Struts - Struts ) } alert("Invalid E-mail Address! Please re-enter.") return (false); } function validateForm(formObj){ if(formObj.userid.value.length==0){ alert("Please...; } if(formObj.password.value.length==0){ alert("Please enter password!"); formObj.password.focus Struts tag - Struts Struts tag I am new to struts, I have created a demo struts application in netbean, Can any body please tell me what are the steps to add new tags to any jsp page turorials for struts - Struts turorials for struts hi till now i dont about STRUTS. so want beginers struts tutorials notes. pls do Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary Please Help To Solve My Problem Please Help To Solve My Problem PHP I Have 6 Textbox and 1 ok button. 6 Textbox to Add the 6 Team Name. Each Team Name In String(Like- A,B,C,D,E,F) When I Add This Six Team Name In Six Text Box Then I Will Click On Ok Button java please please help java please please help Dear Friends plz help me to complete this program import java.util.*; public class StringDemo { static... HashMap,HashTable....etc.I dont know how to add this string array here b Struts Tutorials site that takes pizza orders. If the customer has a good credit history... Struts Tutorials Struts Tutorials - Jakarta Struts Tutorial This complete reference of Jakarta Struts shows you how to develop Struts Free Web Hosting - Why Its not good Idea of adds on your site. They make good money, but you can't get anything out... to choose good paid hosting service for hosting your web site.  ...Free Web Hosting - Why Its not good Idea This article shows you why Free important for all begin struts Hi what is struts flow of 1.2 version struts? i have struts applicatin then from jsp page how struts application flows Thanks Kalins Naik Please visit the following link: Struts Submitting Web site to search engine ; Before submitting your web site to search please go through... Registering Your Web Site To Search Engines... Web Sites Once your web site is running, the next job for you Struts Struts Hi i am new to struts. I don't know how to create struts please in eclipse help me Struts Articles and add some lines to the struts-config.xml file to get this going... portlets. If you are new to either Struts or portlet technology, then please take... to create dynamic web interfaces and would like to know how to add it to a Struts Books . The book starts with an explanation of why Struts is a "good thing" and shows... (not as good but bigger) struts book and the two complement quite well. Go... the Jakarta Struts Framework Companion Web site provides electronic Have you tried PHP resource site; PHPKode.com? Have you tried PHP resource site; PHPKode.com? is a good free open source PHP resource site which helps me a lot during my PHP learning. Have you tried it before struts struts hi i would like to have a ready example of struts using"action class,DAO,and services" so please help me Struts 2 Struts 2 I am just new to struts 2 and need to do the task. I have... will have two buttons like add and edit and a data grid will have the radio button and user needs to select the rows to edit or delete. Please help me the code Struts Alternative focus solely on creating a site that looks good, and programmers can focus... Struts Alternative Struts is very robust and widely used framework, but there exists the alternative to the struts framework Hibernate Tools Update Site Hibernate Tools Update Site Hibernate Tools Update Site In this section we... Site. The anytime you can user Hibernate Tools Update Manager from your eclipse KLEEN ; KLEEN Installation To install KLEEN directly from this web site, please point your Eclipse Update Manager to this site at... Add Update Site... and fill out the form. Then checkmark the new entry Error - Struts . If you can please send me a small Struts application developed using eclips. My...Error Hi, I downloaded the roseindia first struts example and configured in eclips. It is working fine. But when I add the new action and I how to i convert this case to loop...please help..... "); System.out.println("| d. Add new elements with a specified value into the list...how to i convert this case to loop...please help..... */ import..."); System.out.println("| b. Add an element at specified index how to i convert this case to loop...please help..... "); System.out.println("| d. Add new elements with a specified value into the list...how to i convert this case to loop...please help..... import..."); System.out.println("| b. Add an element at specified index - Struts validate(objForm){ if(objForm.username.value.length==0){ alert("Please enter...; } if(objForm.password.value.length==0){ alert("Please enter Password!"); objForm.password.focus(); return false; } if(objForm.usertype.selectedIndex == 0 ){ alert("Please Downloading Struts & Hibernate Downloading Struts & Hibernate In this we will download Struts & Hibernate.... Download Struts The latest release of Struts can be downloaded from http=" Struts validation not work properly - Struts Struts validation not work properly hi... i have a problem with my struts validation framework. i using struts 1.0... i have 2 page which... title here [Add record] 2)add.jsp struts struts <p>hi here is my code in struts i want to validate my...=0; //String d=daf.get("dob"); //DateFormat sdf=new SimpleDateFormat("dd-mm-yyyy"); //Date date=(Date)sdf.parse(d java - Struts ="add"; } } please give me correct answer . Hi friend...){ if(objForm.username.value.length==0){ alert("Please enter UserID...){ alert("Please enter Password!"); objForm.password.focus(); return false Sending large data to Action Class error. Struts code - Struts Sending large data to Action Class error. Struts code I have a jsp... files and folders sizes to be sent over internet)2. Search files (a good..., SrideviMannem) in your D drive. 3. Maintain all your data, documents, projects How to add How to add Hi, I want to make a website which will recommend users books according to the genre selected by them. I am using PHP, JavaScript... recommendations". Please tell me what code should I use. Or is there any workaround struts - Struts Struts dispatchaction vs lookupdispatchaction What is struts dispatchaction and lookupdispatchaction? And they are used to combined what? Hi,Please check easy to follow example at How to add struts2.1.8 plugin in eclipse - Struts How to add struts2.1.8 plugin in eclipse Hello, Can anybody help me how to add struts2.1.8 plugin in eclipse3.4 struts - Struts struts when the action servlet is invoked in struts? Hi Friend, Please visit the following link: Thanks Struts - Struts Struts Provide material over struts? Hi Friend, Please visit the following link: Thanks Add a fram Add a fram I want to put an image on frame on my GUI(Net Beans).please tell me the way to put this. only for frame.. thanks and regards, Rahul Dubey Here is an example that displays an image on jframe. import
http://roseindia.net/tutorialhelp/comment/14327
CC-MAIN-2014-15
refinedweb
1,552
75.91
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Is it possible to display custom data in the tree view? The email_template model has a field partner_to which is a comma separated char list of partners that will get emailed. I would like to display their names. <tree string="Scheduled Emails"> <field name="partner_to" eval="obj.partner_names()"> </tree> But in my tree view it just displays the value of the field. If I change the name to a custom name it errors saying the field doesn't exist. What's the correct way of doing this? You can create a function field that returns what you need and then include that field in the view. So, in your case, first create the method: def partner_names(self, cr, uid, ids, name, args, context=None) res = {} for _obj in self.browse(cr, uid, ids, context=None): res[_obj.id] = .... return res Note that the signature of the method must be as I have described and the return value must be a dictionary whose key is each id that is being processed and the value is the value to return. In the _columns, you add: 'partner_to_names': fields.function(partner_names, type='text', string='Partner Names') Then use the field (partner_to_names) in the view. Does anyone konw a way to create a generic tree and load it with custom data. Lets say we can make an empty grid and on load of a recors do a select and loat the grid with anything we want. Whith this I mean, doing this without creating a model for the data in the grid? Thx. About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/is-it-possible-to-display-custom-data-in-the-tree-view-62507
CC-MAIN-2018-17
refinedweb
313
73.68
Comment on Tutorial - Hibernate Vs. JDBC ( A comparison) By Emiley J. Comment Added by : Muhammad Ishaq Awan Comment Added at : 2014-10-29 07:00:44 Comment on Tutorial : Hibernate Vs. JDBC ( A comparison) By Emiley J. good but not v.good. it seems the author is big fan of hibernate, no doubt hibernate is great tool but it have cons as well as pros, you only discus pros of hibernate over jdbc. you should also cover some pros by the jdbc also like speed or custom handling. great work View Tutorial By: Dinesh557 at 2008-09-25 00:27:59 3. You can not overload the private method in Test cl View Tutorial By: Gaurav at 2013-07-28 15:53:20 4. can you please explain this..!! t = View Tutorial By: kalyan at 2011-06-30 00:01:10 5. for ex public class test() { View Tutorial By: Anundhara at 2012-08-11 06:54:57 6. Sir! I want the source code for receiv View Tutorial By: Anupam Shukla at 2010-04-30 00:09:25 7. Hallo every one. I am working on my project in j2m View Tutorial By: radhe at 2010-12-14 08:08:25 8. i Want the Flow Chart for the Calendar Demo Progra View Tutorial By: Narayanan V at 2010-01-06 18:57:25 9. After edit tomcat-user file and restart sevice ..i View Tutorial By: Rajesh Perumal at 2013-01-08 06:29:21 10. When trying to install Java i was View Tutorial By: Sinmo at 2009-08-30 00:31:33
https://www.java-samples.com/showcomment.php?commentid=39676
CC-MAIN-2021-43
refinedweb
266
76.42
how to handle exception handling in struts reatime projects? can u plz send me one example how to deal with exception? can u plz send me how to develop our own exception handling struts struts how make my dummy project live..plz send me step's. kya ya... you. hi, to add jar files - 1. right click on your project. 2. go to properties. 3. go to java build path. 4. then click on libraries 5 Tutorials - Jakarta Struts Tutorial Struts project. Download the source code of Struts Hibernate... Plugin In this section we will write Hibernate Struts Plugin Java code... Struts Framework with the help of examples and projects. Struts 2 Training!:http... code will help you learn Struts 2.Thanks How to build a Struts Project - Struts How to build a Struts Project Please Help me. i will be building a small Struts Projects, please give some Suggestion & tips Struts code - Struts Struts code Hi Friend, Is backward redirection possible in struts.If so plz explain - Struts Struts Hello I like to make a registration form in struts inwhich... be displayed on single dynamic page according to selected by student. pls send compelete code. thanks Hi friend, Please give details with full Hello I have 2 java pages and 2 jsp pages in struts registration.jsp for client to register user registeraction.java to forward... with source code to solve the problem. For read more information on Struts visit sir plz send the project on quiz system code sir plz send the project on quiz system code sir plz send the client server based project in core java database in my sql struts struts shopping cart project in struts with oracle database connection shopping cart project in struts with oracle database connection Have a look at the following link: Struts Shopping Cart using MySQL Dear Sir , I am very new in Struts and want... to understand but little bit confusion,Plz could u provide the zip for address... and specify the type. 5 6 Zip Code needs to between Struts - Struts Struts Hello ! I have a servlet page and want to make login page in struts 1.1 What changes should I make for this?also write struts-config.xml and jsp code. Code is shown below Struts Articles Applications In most Java projects, a large percentage of the code..., easy code, which you will be able to use in your own projects. Struts 1.2.4... Struts Articles Building on Struts for Java saritha project - Struts (/ProjectOnline) is not available. Apache Tomcat/5.5.28 i have done servlet-mapping in web.xml then also it is not coming plzzzzz give me particular answer or example plz Struts - Struts in struts? please it,s urgent........... session tracking? you mean session management? we can maintain using class HttpSession. the code follows... for later use in in any other jsp or servlet(action class) until session exist Struts Books Struts Jakarta Struts is an Open Source Java framework for developing... for applying Struts to J2EE projects and generally accepted best practices as well... for Struts applications, and scenarios where extending Struts is helpful (source code What is Struts - Struts Architecturec small and big software projects. Struts is an open source framework used... pattern. It uses and extends the Java Servlet API to encourage developers to ..., then the request is handled by the Struts Action Servlet. When the ActionServlet receives how to generate bar code for continuous numbers in a loop using struts ... or atleast for a given number java - Struts friend, Check your code having error : struts-config.xml In Action Mapping In login jsp action and path not same plz correct...: Submit struts STRUTS STRUTS 1) Difference between Action form and DynaActionForm? 2) How the Client request was mapped to the Action file? Write the code and explain web applications quickly and easily. Struts combines Java Servlets, Java Server... build web applications quickly and easily. Struts combines Java Servlets, Java... build web applications quickly and easily. Struts combines Java Servlets, Java java - Struts problem...see the code i have created formbean class,formaction class ,struts-config.xml,web.xml,login form ,success and failure page also... code... own code ....ok ..plse do it... LoginForm.java package java; import java - Struts java Hi roseindia, Recently i got one problem..i want convert string to datetime . Now in my project i need this one ... plz favour me...:// Thanks why servlet as controller - Struts to write in java.As Servlet contains the pure java code and coding is needed... servlet as a controller in struts. why not jsp . Thanks Prakash Hi Friend, JSP is again a servlet which is having HTML syntax with java support.All struts internationalisation - Struts code to solve the problem : For more information on struts visit...struts internationalisation hi friends i am doing struts iinternationalistaion in the site Error - Struts . If you can please send me a small Struts application developed using eclips. My...Error Hi, I downloaded the roseindia first struts example... create the url for that action then "Struts Problem Report Struts has detected java - Struts java Failed to find resource /Bookmark/loginpage.jsp -how can i solve the problem. Hi friend, Plz give full details and full source code to solve the problem. Thanks struts the checkbox.i want code in struts java - Struts java Hi, Can any one send the code for insert,select and update using struts1.2 using front controller design Pattern. many Thnaks raghvendra java - Struts java where can define it ,The method getParameter(String) is undefined for the type HttpServletRequest Hi friend, Plz give details and full source code to solve the problem. For read more information Struts Tutorials Struts Struts is comprised of a controller servlet, beans and other Java classes... into a Struts enabled project. 5. Struts Action Class Wizard - Generates Java... is provided with the example code. Many advance topics like Tiles, Struts Validation, Can any send the code for insert,select,update in database using Frontcontroller in Struts.Plse help me. Many Thanks Raghavendra B Hi Friend, Please visit the following link: http Struts Architecture - Struts (MVC) design pattern. It uses and extends the Java Servlet API to encourage...Struts Architecture Hi Friends, Can u give clear struts architecture with flow. Hi friend, Struts is an open source struts - Struts struts when the action servlet is invoked in struts? Hi Friend, Please visit the following link: Thanks Project Planning - Struts Struts Project Planning Hi all, I am creating a struts application. Please suggest me following queries i have. how do i decide how many... palling of project like UI planning, classes and interface creation, POJOs Java - Struts Java this error occur when we run our struts wev b applcation Servlet action is currently unavailable Why Struts in web Application - Struts Why Struts in web Application Hi Friends, why struts introduced in to web application. Plz dont send any links . Need main reason for implementing struts. Thanks Prakash struts titles - Struts Use Struts Titles in Web Applications hi i want to use titles in my web project please help me . in my project i have four titles such as banner, menu , content and footer . please provide any example code. struts - Struts struts i want to learn more examples on struts like menu creation... and client side validation in struts are not running which are present on rose india. plz do hurry to solve my problem Reply - Struts Reply Thanks For Nice responce Technologies::--JSP please write the code and send me....when click "add new button" then get the null value...its urgent... Hi can u explain in details about your project Struts Tag Lib - Struts reserve word you cannot use the tag prefixes jsp, jspx, java, javax, servlet...Struts Tag Lib Hi i am a beginner to struts. i dont have.... JSP Syntax Examples in Struts : Description The taglib Java - Struts Java Please explain me how to develope logoff using struts and give me the code Struts project in RAD Struts project in RAD How to create a struts project in RAD struts code - Struts struts code In STRUTS FRAMEWORK we have a login form with fields USERNAME: In this admin can login and also narmal uses can log...:// Thanks java file upload in struts - Struts java file upload in struts i need code for upload and download file using struts in flex.plese help me Hi Friend, Please visit the following links: http first example - Struts Struts first example Hi! I have field price. I want to check require and input data type is int. can you give me an example code of validation.xml file to resolve it? thanks you so much! Hi friend, Plz specify ACTION - AGGREGATING ACTIONS IN STRUTS are a Struts developer then you might have experienced the pain of writing huge number of Action classes for your project. The latest version of struts provides classes...STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS Java - Struts Java hello friends, i am using struts, in that i am using tiles framework. here i wrote the following code in tiles-def.xml in struts-config file i wrote the following action tag struts - JSP-Servlet page from servlet. I set and get the values on java page through beans. I... or administrator. So I need values? Hi friend, Code to pass values from Servlet to JSP : "MyServlet.java" import java.io.*; import javax.servlet. code - Struts struts code how to call lookup dispatchAction class method using on change of JavaScript Struts Code - Struts Struts Code How to add token , and encript token and how decript token in struts. Hi friend, Using the Token methods The methods we... more information,Tutorials and Examples on struts visit to : http java - Struts pattern. It uses and extends the Java Servlet API to encourage developers... of the Application. Hi friend, Struts is an open source saritha project - Struts in the web.xml? Hi Friend, If OnlineRecruit is your servlet then check whether you have done the servlet-mapping in the web-xml or not. Then restart Struts Quick Start will show you how you can quick start the development of you struts based project... applications in Java technology. Struts is one of the most used framework.... The struts servlet reads the configuration parameters from struts-config.xml saritha project - Struts the servlet mapping in the xml file. Check it. Thanks send the mail with attachment problem - Struts send the mail with attachment problem Hi friends, i am using the below code now .Here filename has given directly so i don't want that way. i need... message.setContent(multipart); // Send the message java - Struts java i want to know how to use struts in myEclipse using an example i want to know about the database connection using my eclipse ? pls send me the reply Hi friend, Read for more information. http ; Hi friend, You change the same "path" and "action" in code : In Action Mapping In login jsp For read more information on struts visit to : Thanks Struts Alternative to an existing servlet application. If you're planning a new Java application... defined in Java code. When Velocity is used for web development, Web... solely on writing top-notch code. Velocity separates Java code from the web pages Struts Hibernate Integration In this section we will write Hibernate Struts Plugin Java code... and you can download and start working on it for your project or to learn Struts... Struts Hibernate   Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://roseindia.net/tutorialhelp/comment/17738
CC-MAIN-2013-20
refinedweb
1,950
75.61
Abstract data types, include files, non-inline member functions, assertions. It is now time to design some serious classes that actually do something useful. Let's start with something simple and easy to understand--an abstract data type--stack of integers. A stack is defined by its interface. Major things that you can do to a stack is to push a number into it and pop a number from it. What makes a stack a stack, is its LIFO (Last In-First Out) behavior. You can push values into it and, when you pop them, they come up in reverse order.. const int maxStack = 16; class IStack { public: IStack () :_top (0) {} void Push (int i); int Pop (); private: int _arr [maxStack]; int _top; }; The constructor of IStack initializes the top-of-the-stack index to zero--the start of the array. Yes, in C++ array elements are numbered from zero to n-1, where n is the size of the array. In our case the size is 16. It is defined to be so in the statement const int maxStack = 16; The modifier const tells the compiler that the value of maxStack will never change. The compiler is therefore free to substitute every use of the symbol maxStack with the actual value of 16. And that's exactly what it does. The line int _arr [maxStack];declares _arr to be an array of maxStack integers. The size of an array has to be a constant. You don't have to specify the size when the array is explicitly initialized; as we have seen in the case of strings of characters. Notice that member functions Push and Pop are declared but not defined within the definition of the class IStack. The difference between function declaration and function definition is that the latter includes the code--the implementation--(and is not followed by a semicolon). The former does not: It only specifies the types of parameters and the type of the return value. So where is the implementation of Push and Pop? In the separate implementation file stack.cpp. First thing in that file is the inclusion of the header stack.h, which we have just seen. Next we include the new header file cassert. This file contains the definition of the very important function assert. I will not go into the details of its implementation, suffice it to say that this magical function can be turned off completely by defining the symbol NDEBUG. However, as long as we don't define NDEBUG, the assertion checks its argument for logical truth, that is, for a non-zero value. In other words, it asserts the truth of its argument. We define NDEBUG only after the final program is thoroughly tested, and in one big swoop we get rid of all the assertions, thus improving the program's speed2. What happens when the argument of the assertion is not true (or is equal to zero)? In that unfortunate case the assertion will print a message specifying the name of the source file, the line number and the condition that failed. Assertions are a debugging tool. When an assertion fails it signifies a programmer's error--a bug in the program. We usually don't anticipate bugs, they appear in unexpected places all by themselves. However there are some focal points in our code where they can be caught. These are the places where we make assumptions. It is okay to make certain assumptions, they lead to simpler and faster code. We should however make sure, at least during development and testing, that nobody violates these assumptions. Let's have a look at the implementations of Push and Pop: #include "stack.h" #include <cassert> #include <iostream> using std::cout; using std::endl; //compile with NDEBUG=1 to get rid of assertions void IStack::Push (int i) { assert (_top < maxStack); _arr [_top] = i; ++_top; } int IStack::Pop () { assert (_top > 0); --_top; return _arr [_top]; } The first thing worth noticing is that, when the definition of a member function is taken out of the context of the class definition, its name has to be qualified with the class name. There's more to it than meets the eye, though. The methods we've been defining so far were all inline. Member functions whose definition is embedded inside the definition of the class are automatically made inline by the compiler. What does it mean? It means that the compiler, instead of generating a function call, will try to generate the actual code of the function right on the spot where it was invoked. For instance, since the method GetValue of the object Input was inline, it's invocation in cout << input.GetValue ();is, from the point of view of generated code, completely equivalent to cout << input._num; On the other hand, if the definition of the member function is taken out of the class definition, like in the case of Pop, it automatically becomes non-inline. Should one use inline or non-inline methods? It depends on the complexity of the method. If it contains a single statement, it is usually cheaper execution-wise and code-size-wise to make it inline. If it's more complex, and is invoked from many different places, it makes more sense to make it non-inline. In any case, inline functions are absolutely vital to programming in C++. Without inlining, it would be virtually impossible to convince anybody to use such methods as GetValue or SetValue instead of simply making _num public (even with inlining it is still difficult to convince some programmers). Imposing no penalty for data hiding is a tremendous strength of C++. Typing a few additional lines of code here and there, without even having to think much about it, is a price every experienced programmer is happy to pay for the convenience and power of data hiding. The main function does a few pushes and pops to demonstrate the workings of the stack. int main () { IStack stack; stack.Push (1); stack.Push (2); cout << "Popped " << stack.Pop() << endl; cout << "Popped " << stack.Pop() << endl; } 2 Believe it or not: Failure to turn off assertions and to turn on optimizations is often the reason for false claims of C++ slowness. (Others usually involve misuse of language features.)
http://www.relisoft.com/book/lang/scopes/11abstr.html
crawl-002
refinedweb
1,048
63.8
Hi all, will create teh branch for the rcouch gift but quickly after I want to create a new branch to handle the build using rebar. In the process I need to make some choices. 1. Where to put thirdpary dependencies in Erlang and our own apps. We have different choices: 1.a. Put everything in src/ 2.a Put thirdparty in deps/ a our apps in apps/ this is a common design in erlang world 3.a put thirdpary libs in thirdparty/ and apps in apps/ or lib or src. the C way I would be for the second. At the end all the binaries will be on the same namespace, this is more a way t distinct our code from the rest and let other know about it. I would also strongly suggest to have a patches/ folder somewhere whre we put the diffs we have on external dependencies that we applied in our own code base. 2. How to handle shared dependencies ? By shared dependencies i mean any external C code used: - spideronkey fr couchjs - icu for the collation driver - eventually nodejs - all docs deps We are currently using the autotools to do that. We have different way to do that: - In rcouch all is built statically and added to the final release. Like most apps in Erlang around. riak does that for ex - bigcouch is using scons in python to build couchjs (icu?) There is also another interesting way used by the Cloudi project () . Which while allowing Cloudi to be embed in other Erlang apps, it is still using the autotools. Except that the autotools are generating the files needed by rebar to build all the code and the release and a Makefile to trigger everything. I quite like this solution since it solving 4 features I want in the new couchdb release and use a solution available on all platforms and arch. These 4 features are: - be able to build a full static release - be able to use shared dependencies (reduce the space usage) - easy to detect other external progras needed for the build - build a couchdb that can be embedded in another Erlang app if needed What do you think? I plan to dedicate sone times over the we and next week on that. Hopefully we will be able to settle on a solution today. - benoit
http://mail-archives.apache.org/mod_mbox/couchdb-dev/201311.mbox/%3CCAJNb-9rpH_RA+QGOKeB4j4z59J-uxg7yMpD=njr2CUghQj+WZw@mail.gmail.com%3E
CC-MAIN-2014-42
refinedweb
392
70.43
Count of N Digit Numbers having each Digit as the Mean of its Adjacent Digits Problem Statement You have an integer N. Your task is to find the number of N-digit numbers, which are good. A number is called good if each digit in its decimal representation is the mean of its two adjacent digits. For example: - 9 is a good number: there is only one digit in 9. So it satisfies all the constraints. - 10 is a good number: there is no digit having two neighbours. - 102 is not a good number: 0 is not the mean of its two neighbours, 1 and 2. Constraints 1 <= N <= 1000 Sample Test Cases Input 1: 1 Output 1: 10 Explanation: All the numbers from 1 to 9 are good because there is only one digit, so they satisfy all the constraints. Input 2: 2 Output 2: 90 Input 3: 3 Output 3: 90 Approach Brute Force Solution The most straightforward approach is to iterate over all the N-digits numbers and check which satisfies all the constraints and keep their count. Also, to reduce the number of operations, we can recursively generate all the N-digits numbers and terminate the recursive calls whenever we encounter a digit that does not satisfy the constraints. The worst-case time complexity of both approaches is O(N * 10^N). Hence both approaches are inefficient. Efficient Approach We can solve the problem efficiently by using Dynamic Programming. Let's try to find out the subproblems (states of DP). We know that all the two-digit and one-digit numbers are good. Now consider a three-digit number D. Let d1,d2, and d3 be the digits in the decimal representation of D. If D is good number then, d2 = (d1 + d3)/2 Or, d2 = (d1 + d3 - 1)/2 (mean involves integer division) d3 = 2 * d2 - d1- 1 or d3 = 2 * d2 - d1 + 1 Now, if we want to add one more digit, d4 to D, to make it a four-digit good number then, d4 = 2 * d3 - d2 - 1 or d4 = 2 * d3 - d2 + 1 In general, if we want to add the Kth digit to a sequence then, d[k] = 2 * d[k - 1] - d[k - 2] Or, d[k] = 2 * d[k - 1] - d[k - 2] + 1 So, if we want to add a digit at a particular index ind, then we need the digit at the previous two indices. Let P1 be the previous digit, and P2 be the second previous selected digit. So the states of the DP are (ind, P1, P2). It gives the answer from index ind till the end when the previously selected digits are P1 and P2. It can be seen that there are overlapping subproblems. So we can use a three-dimensional matrix dp[ind][P1][P2] to memorize each state. Algorithm - Define a recursive function solve(ind, P1, P2), which computes the answer for the state (ind, P1, P2). - If ind == N + 1 we can return 1 as a N-digit good number is formed. - If the current index ind is 1, then we can place any digit from [1, 9]. But if N = 1, we can add 0 also. - If the current index ind is 2, we can place any digit from [0, 9]. - If the current index ind is greater than 2, then we can place (2 * P1 - P2) if 0 <= (2 * P1 - P2) <= 9. - Similarly, we can also place (2 * P1 - P2 + 1) if 0 <= (2 * P1 - P2 + 1) <= 9. Code #include <bits/stdc++.h> using namespace std; int dp[101][11][11]; //three dimentional matrix to memorize each state int N; //Given input //recursive function that compute the answer for states (ind, P1, P2) int solve(int ind, int P1, int P2){ if(ind == N + 1) //Base Case return 1; if(dp[ind][P1][P2] != -1) //if the state has already been computed return dp[ind][P1][P2]; dp[ind][P1][P2] = 0; if(ind == 1){ int st = 1, en = 9; //we can place any digit from [1, 9] if(N == 1) //but if N == 1, 0 can also be placed st = 0; for(st; st <= en; ++st){ dp[ind][P1][P2] += solve(ind + 1, st, P1); } } else if(ind == 2){ //place any digit from [0, 9] for(int i = 0; i <= 9; i++){ dp[ind][P1][P2] += solve(ind + 1, i, P1); } } else{ //try to place (2 * P1 - P2) if(2 * P1 - P2 <= 9 and 2 * P1 - P2 >= 0) dp[ind][P1][P2] += solve(ind + 1, 2 * P1 - P2, P1); //try to place (2 * P1 - P2 + 1) if(2 * P1 - P2 + 1 <= 9 and 2 * P1 - P2 + 1 >= 0) dp[ind][P1][P2] += solve(ind + 1, 2 * P1 - P2 + 1, P1); } //return ans return dp[ind][P1][P2]; } int main(){ //Given Input cin >> N; //intializing dp table with -1 memset(dp, -1, sizeof(dp)); //function call cout << solve(1, 0, 0) << "\n"; return 0; } Time Complexity The overall time complexity is O(N * 10 * 10). Space Complexity We need a 3D array of size (N * 10 * 10) to memorize each state. So the overall space complexity is O(N * 10 * 10).. - What is memoization? Memoization is a technique to optimize the recursive solution. It ensure that each state does not execute more than once. Key Takeaways In this article, we tried to solve a problem using dynamic programming. Having a good hold on Dynamic Programming is very important for cracking coding interviews at MAANG. Check out the coding ninjas master class on dynamic programming for getting!
https://www.codingninjas.com/codestudio/library/count-of-n-digit-numbers-having-each-digit-as-the-mean-of-its-adjacent-digits
CC-MAIN-2022-27
refinedweb
927
67.28
The first article in this three-part series looked at the revolution that is occurring in Python testing thanks to standard testing frameworks like zope.testing, py.test, and nose. These support more simple test idioms, and can replace the ad-hoc code that projects have traditionally had to write and maintain for running their tests. The second article examined how these automated solutions search through a Python package to identify the modules that may contain tests. This article takes the next step and asks what the frameworks do when they then introspect a test module to discover what tests live inside of it. It also looks at details like how common test setup and teardown is supported, or not supported, by the three frameworks. Test discovery in the Zope framework Once a list of interesting modules has been determined, how are actual tests inside of them discovered? Turning first to the zope.testing framework, you discover something interesting about the Zope community. Rather than build big tools that solve several problems each, they tend to build smaller and more limited tools that are capable of being connected together. The zope.testing module, as a case in point, actually provides no mechanism itself for detecting tests at all! Instead, zope.testing leaves it to each programmer to find the tests in each module that are worth running and put them together in a list. It looks in each test module for only a single thing: a test_suite() function, which it calls, expecting to be returned an instance of the standard unittest.TestSuite class that is stuffed full of the tests that the module defines. Some programmers using zope.testing just create and maintain this list of tests manually, in the test_suite() function. Others write custom code that takes some shortcuts for discovering what tests have been defined and are available. But the most interesting choice is to use another Zope package, z3c.testsetup, which has the same kind of capacity for automatically discovering individual tests in a package as do the other modern Python test frameworks. Again, this is a good illustration of how Zope programmers tend to write building blocks out of which frameworks can be built rather than large monolithic solutions. The z3c.testsetup package contains neither a command-line interface with which tests can be selected, nor any output module with which test results could be displayed; it relies entirely upon zope.testing for these capabilities. In fact, z3c.testsetup users generally do not even use zope.testing for its ability to discover test modules. Instead, they short-circuit the zope.testing algorithm by leaving unaltered its default behavior of looking only for modules named test.py, and then providing only one module with that name in their entire source tree. In the simplest case, their test.py looks something like this: import z3c.testsetup test_suite = z3c.testsetup.register_all_tests(my_package) This takes the task of test discovery away from zope.testing, and instead relies upon the more powerful mechanisms provided for discovery by z3c.testsetup itself. There are several configuration options that can be provided to the register_all_tests() function. See the z3c.testsetup documentation for details, but only its basic behavior needs to be outlined here. Unlike all of the other frameworks this article discusses, z3c.testsetup does not, by default, care about the name of each Python module in a package, but about its content. It will examine all of the modules, and all of the .txt or .rst files, in a package and select the ones that specify a :Test-Layer: somewhere in their text. It then builds the suite of tests by combining all of the TestCase classes inside the modules and all of the doctest stanzas from inside the text files. Using :Test-Layer: strings to mark files with tests is an interesting mechanism. It does have the disadvantage that, when browsing a package's files, a new programmer has to open every one of them, or at least grep for the :Test-Layer: string, in order to find where the tests are located. (Not to mention that z3c.testsetup obviously has to do the same thing; does this make it slower than frameworks that operate only on the filename?) Also note, finally, that the Zope test frameworks only support tests that are either UnitTest instances or doctests. As discussed in the first article in this series, the more modern Python testing frameworks also support plain Python functions as valid tests. This requires a different test detection algorithm, as you will see as you now turn your attention to these frameworks. Test discovery in py.test and nose The py.test and nose frameworks, as was discussed in the previous article, use similar but slightly different sets of rules to search through a Python package for the modules that they believe will contain tests. But both wind up in the same situation: with a list of modules that they must then inspect to find the functions and classes that the developer wants run as tests. As you saw in the last article, py.test tends to select a single standard to which all projects using it are expected to conform, while nose allows far more extensive customization at the expense of predictable behavior. It is the same in this case: the rules by which tests are detected inside of a test module are fixed, invariant, and predictable for py.test, while they are flexible and customizable for nose. If a project uses nose for its testing, you will have to first visit the project's setup.cfg file before you know whether nose will be following its usual rules for detecting tests or whether it will be following different ones specific to this individual project. Here are the procedures that py.test uses: - When py.testlooks inside of a Python test module, it collects every function whose name starts with test_and every class whose name starts with Test. It collects classes regardless of whether the class inherits from unittest.TestCaseor not. - Test functions simply get run, but test classes have to be searched for methods. Any methods whose names start with test_are run as tests once the class has been instantiated. - The py.testframework shows a curious behavior if provided with a test class that happens to inherit from the standard Python unittest.TestCaseclass: even if the class has several attractive test_methods, py.testwill die with an exception if it does not also contain a runTest()method. But if does a method does exist, then py.testignores it; it has to exist for the class to be accepted, but will not then be run because it does not begin with test_. To fix this behavior, activate the framework's unittestplug-in, either in your project's conttest.pyfile, or by using the -pcommand line option: $ py.test -p unittest This causes py.testto make three changes to its behavior. First, instead of only detecting test classes whose names start with Test, it will also detect any other classes that inherit from unittest.TestCase. Second, py.testwill no longer report an exception for TestCasesubclasses that do not provide a runTest()method. And, third and finally, any setUp()and tearDown()methods on TestCasesubclasses will be correctly run, in the standard fashion, before and after the tests that the class contains. The behavior of nose, while being more customizable, somehow winds up being simpler here: - When noselooks inside of a Python test module, it collects functions and classes that match the same regular expression that it uses for choosing test modules. (Which, by default, looks for names that include the word Testor test, but a different regular expression can be provided on the command line or a configuration file.) - When noselooks inside of a test class, it runs methods matching that same regular expression. - Without being asked, nosewill always detect subclasses of unittest.TestCaseand use them as tests. It will, however, use its own regular expression to determine which of their methods are tests, rather than using the standard unittestpattern of ^test. Generative tests As you saw in the first article, both py.test and nose have made tests in Python vastly easier to write by supporting tests that are written as simple functions, like: # test_new.py - simple tests functions def testTrue(self): assert True == 1 def testFalse(self): assert False == 0 Test functions, and more traditional test classes, are fine when all you want to do is check on a component's behavior in some single, specific circumstance. But what about when you want to do a long series of tests that are almost identical except for some of the parameters? In order to make such cases easy to implement without having to cut-and-paste a dozen copies of your test function, and then changing the names to be unique, both py.test and nose support generative tests. The idea here is that you supply a test function that is actually an iterator, and that uses its yield statement(s) to return a series of functions together with the arguments with which you want them called. For example, to run a single test against each of your favorite Web browsers, you might write something like this: # test_browser.py def check(browser, page): t = TestBrowser(browser) t.load_page(page) t.check_status(200) def test_browsers(): for b in 'ie6', 'ie7', 'firefox', 'safari': for p in 'index.html', 'about.html': yield check, b, p For generative tests, py.test offers one additional convenience. So that you can more easily tell the test runs apart, and thus understand the test report if one or more of them fail, the first item in each the tuple that you yield can be a name that will be printed as part of the name of the test: # Alternate yield statement, for py.test ... yield 'Page %s browser %s' % (b,p), check, b, p Generative tests should provide a much more attractive solution to parametrized tests than many of the quite awkward techniques that were current in many projects using homemade tests, or restricting themselves to what unittest was capable of. Setup and teardown A huge issue in designing and writing a test suite is how to handle common setup and teardown code. Many real-world tests do not resemble the very simple functions that this article has been using as examples here; they have to do things like open Web page in Firefox, click on a button labelled “Continue”, and then examine the result. Before the actual test even begins (by which I mean bringing up the page and clicking on the button), the test has to first complete several expensive steps. Now, consider one hundred functional tests that all perform a test like this. They will each need to call a common setup routine just to get Firefox running before they can commence their own particular test. Combine this with the fact that there is probably teardown code that is necessary to undo what the setup did, and you wind up with over two hundred extra function calls in your test suite. Each of its functions will look like this: # How test functions look if they each do setup and teardown def test_index_click_continue(): do_big_setup() # <- the same in every test t = TestBrowser(browser) t.load_page('index.html') t.click('#continue') t.check_status(200) do_big_teardown() # <- the same in every test To eliminate repetitious code like this, many testing frameworks provide a way to indicate once what setup and teardown code needs to run for each of entire groups of tests. All three frameworks this article is looking at, zope.testing, py.test, and nose, support the standard setUp() and tearDown() routines of any unittest.TestCase classes that the programmer writes. Beyond this, though, the frameworks differ remarkably in the facilities they provide for common setup code. Though zope.testing provides no extra support of its own for setup and teardown, the z3c.testsetup extension that was discussed above does something interesting with doctests. You will recall that it finds tests by looking for files with a :Test-Layer: specified somewhere in their test. The layer in a doctest can actually specify one of two different values. Marking a doctest as belonging to the unit layer means that it will be run without any special setup. But marking it as belonging to the functional layer means that it will run only after a framework setup function has been invoked. Typically, :Test-Layer: functional tests are designed to be run when the Zope Web framework has been fully configured, so that they can create a test browser instance, send a request, and see that the Web framework returns as the response. By being willing to perform this setup on the doctest's behalf, z3c.testsetup can save large amounts of boilerplate code from having to be copied into each functional doctest. One last convenience, which also reduces boilerplate code, is that z3c.testsetup can be given a list of variables to pre-load into the namespace of each unit doctest, and another to be pre-loaded for functional doctests. This eliminates the need to cut-and-paste a common tangle of import statements to the top of every doctest file. Moving on to py.test, it by default provides no support for setup and teardown. It does not even run the setUp() and tearDown() methods of standard unittest.TestCase classes unless you have turned on its unittest plug-in. It is nose that really shines when it comes to supporting common test code. When discovering tests, nose keeps up with the context in which it found them. Just like it considers every test method inside of a unittest.TestCase subclass to be “inside” that class and therefore governed by its setUp() and tearDown() methods, it also considers tests to live “inside” of their module, the enclosing package, and any package outside of that. For nose, therefore, a test lives inside of not one but a series of concentric containers, any of which can contain setup code that gets run before the test and teardown code that gets run afterward. Read the nose documentation for more information about package-wide and module-wide setup and teardown functions; among other details, you will learn that you have a bewildering array of choices for what your setup and teardown functions can be called. (Once again, nose seems to have difficulty encouraging different projects to write tests the same way so that they can easily read each other's code.) But they are a very powerful way to make your groupings of functions into packages and modules not merely structural (they all got put here) but also semantic (the tests in here all run in the same environment). There is one case in which nose does not care about the name of setup and teardown functions: when you specify them explicitly for a particular function using the @with_setup decorator. Again, if this interests you, consult the nose documentation; here, I will only take the space to note that, since functions are first-class objects in Python, you can assign a name to a particular decorator and use it over and over again: # Naming a with_setup decorator firefox_test = with_setup(firefox_setup, firefox_teardown) @firefox_test def test_index_click(): ... @firefox_test def test_index_menu(): ... One final distinction: while the setup and teardown functions specified in a @with_setup decorator or provided as methods in a unittest.TestCase subclass get run once for each function or test that they wrap, the setup and teardown code that you give nose up at the module or package level gets run only once for the entire set of tests. Do not, therefore, expect such tests to be properly isolated from each other: they will share a single copy of any resources that you create in the module's or package's setup routine. Conclusion Congratulations! You now understand how the different testing frameworks will support us (or fail to support us) by detecting our tests and arranging for them to run. The last article in this series will look at the payoff for all of the work that a framework puts into collecting our tests: the powerful test-selection options, reporting tools, and debugging support with which we make test results useful to us. And, in conclusion, we will consider how to choose from among the three frameworks the one best suited to your needs..
http://www.ibm.com/developerworks/aix/library/au-pythontesting3/index.html
CC-MAIN-2014-42
refinedweb
2,734
61.06
A Python compiler for the Neo Virtual Machine on the Ontology Blockchain Project description Overview The neo-boa compiler is a tool for compiling Python files to the .avm format for usage in the Neo Virtual Machine which is used to execute contracts on the Neo Blockchain. The compiler supports a subset of the Python language ( in the same way that a boa constrictor is a subset of the Python snake species) What does it currently do Compiles a subset of the Python language to the .avm format for use in the Neo Virtual Machine Works for Python 3.6+ Adds debugging map for debugging in neo-python or other NEO debuggers What will it do Compile a larger subset of the Python language Get Help or give help Open a new issue if you encounter a problem. Or ping @localhuman on the NEO official community chatroom. Pull requests welcome. New features, writing tests and documentation are all needed. Installation Make sure you are using a Python 3.6 or greater virtual environment Pip pip install neo-boa Docker This project contains a Dockerfile to batch compile Python smart contracts. Clone the repository and navigate into the docker sub directory of the project. Run the following command to build the container: docker build -t neo-boa . The neo-boa Docker container takes a directory on the host containing python smart contracts as an input and a directory to compile the .avm files to as an output. It can be executed like this: docker run -it -v /absolute/path/input_dir:/python-contracts -v /absolute/path/output_dir:/compiled-contracts neo-boa The -v (volume) command maps the directories on the host to the directories within the container. Manual Clone the repository and navigate into the project directory. Make a Python 3 virtual environment and activate it via: python3 -m venv venv source venv/bin/activate or, to install Python 3.6 specifically: virtualenv -p /usr/local/bin/python3.6 venv source venv/bin/activate Then, install the requirements: pip install -r requirements.txt Usage The compiler may be used like in the following example: from boa.compiler import Compiler Compiler.load_and_save('path/to/your/file.py') Docs You can read the docs here. Tests All tests are located in boa_test/test. Tests can be run with the following command python -m unittest discover boa_test License - Main author is localhuman. Donations Accepted at ATEMNPSjRVvsXmaJW4ZYJBSVuJ6uR2mjQU Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/ont-boa/0.4.9/
CC-MAIN-2022-40
refinedweb
430
56.05
Section D.6 Exercise Sheet 6 1. Dynamic Programming - Basics. For which kinds of problems is dynamic programming applicable? What core assumption has to be made in order for dynamic programming to be helpful? How does the running time of the program change when dynamic programming is used correctly? How does the memory consumption of the program change when dynamic programming is used correctly? 202205201400 2. Dynamic Programming - Sequences. Konrad Klug wants to calculate the numbers of the integer sequences below. Unfortunately, he cannot write down the sequences in explicit form without recursion. Therefore, he wants to compute the numbers using a C program. He knows that calculating function values multiple times for the same argument is a bad idea, and that he can use dynamic programming to reuse previous calculations. Regretably, he has no time to write his program because of his math classes, so you have to write the program for him. \(a_n = 3a_{n-3}+a_{n-1}+7\) where \(a_0 = 1, a_1=2, a_2=4\) and the 100th sequence element shall be calculated. \(b_n = b_{n-1} \cdot b_{n-2}\) where \(b_0 = 1, b_1=2\) and the 8th sequence element shall be calculated. 202205201400 int sequence_an() { int* m = malloc(sizeof(int) * 100); m[0] = 1; m[1] = 2; m[2] = 4; for (int i = 3; i < 100 ; i++) { m[i] = 3 * m[i-3] + m[i-1] + 7; } int res = m[99]; free(m); return res; } int sequence_bn(){ int* m = malloc(sizeof(int) * 8); m[0] = 1; m[1] = 2; for (int i = 2 ; i < 8 ; i++) { m[i] = m[i-1] * m[i-2]; } int res = m[7]; free(m); return res; } 3. C0 AST. Draw the syntax tree for each of the following C0 programs: Program 1: { r = 1; while (r <= n) { r = r * n; n = n - 1; } } Program 2: if (b < a) { a = 3; abort(); } else { b = 3; } Program 3: { b = 3 * (2 + 1); c = a != b; while (c) c = 1; } 202205201400 4. Abstract Syntax. Define the abstract syntax of formulas in propositional logic. These consist of the binary operators \(\lor, \land, \Rightarrow, \Leftrightarrow\) and the unary operator \(\neg\text{.}\) 202205201400 5. Missing Declarations. What happens when you execute the following program on the empty state? x = 1; 202205201400 6. C0 Syntax. Find the syntax errors in the following C0 statements. if (x < y) r = x else r = y; while a < b {r = r + 1; a = a + 1;} if (a = 4) b = 42; else b = x; if (x > y) abort; else r = y; {x = 5; y = 4;; z = 0;} if (z < 0) abort() else x = x + y; r = 42; x = 3; y = x + r; 202205201400 Semicolon behind the first assignment missing. if (x < y) r = x; else r = y; While condition needs to be parenthesized. while (a < b) {r = r + 1; a = a + 1;} Equality comparison with =, instead of ==. if (a == 4) b = 42; else b = x; Parentheses for abortmissing. if (x > y) abort(); else r = y; Too many semicolons. {x = 5; y = 4; z = 0;} Missing semicolon behind abort. if (z < 0) abort(); else x = x + y; Missing Block. {r = 42; x = 3; y = x + r;} 7. Expression Evaluation. Consider the C0 expression evaluation of an expression \(e\text{.}\) What is the difference between the following situations: \(\denotR e\sigma\) is undefined and \(\denotR e\sigma=\indetval\text{.}\) What happens during execution of the statement x = e; when \(\denotR e\sigma=\indetval\) holds? 202205201400 If an address is not in the preimage of the memory mapping, then no container for this address exists. The value \(\indetval\) is the undefined value, which is in the container before a value has been assigned to it. This leads to the uninitialized value \(\indetval\) being read, on which expression evaluation is not defined. 8. Shorthand Notation. Consider the following state in shorthand notation as defined in the lecture script: Write down \(\rho\) and \(\mu\text{.}\) Use \(\adra, \adrb \dots\) do depict addresses. 202205201400 9. Modulo Calculation. Execute the following program according to C0 semantics on the state\begin{equation*} \rho := \{a \mapsto \adra ,b \mapsto \adrb\}, \mu := \{\adra \mapsto 42, \adrb \mapsto 17\} \end{equation*} { while (a >= b) a = a - b; if (a == 8) abort(); else b = 0; } Which kind of program stoppage occurs? How does it look when we instead choose \(\mu := \{\adra \mapsto 40, \adrb \mapsto 17\}\text{?}\) How does the program stop when we choose \(\mu := \{\adra \mapsto 42, \adrb \mapsto 0\}\text{?}\) Remark: A short explanation suffices. 202205201400 To improve readability, we use the shorthand Sfor the if statement and Wfor the while statement. The program aborts. For \(\mu := \{\adra \mapsto 40, \adrb \mapsto 17\}\) the program terminates in state \(\{ \adra \mapsto 6, \adrb \mapsto 0 \}\text{.}\) Except for the value of \(\adra\) the semantics is identical to part a). For \(\mu := \{\adra \mapsto 42, \adrb \mapsto 0\}\) the program diverges (does not stop) since the condition of the while loop is persistently true. 10. Do-While-Loops in C0. Extend the abstract syntax and the statement semantics of C0 to include the do-while-loop (see control flow section in the C chapter of the script). 202205201400 Extension of the statement syntax: Extension of the operational semantics: 11. For-Loops in C0. Extend the abstract syntax and the statement semantics of C0 to include a restricted for-loop which shall have the (abstract) syntax: 202205201400 Extension of the statement syntax: Extension of the operational semantics: Bonus 12. Dynamic Programming - Board Games. Consider a two-dimensional board, similar to a chess board, with height \(h\) and width \(w\text{.}\) Every position on that board has an integer cost. You start at the top row, in a column chosen by you. In every turn, you must move down one row. While doing so, you can also move one column to the left or to the right (or stay at the same column). The goal of the game is to arrive at the bottom row of the board as cheaply as possible, i.e. so that the sum of the costs of all visited positions is minimal. Represent the board as an integer array of length \(w \cdot h\) and implement an algorithm with the help of dynamic programming which computes a path of minimal cost, as well as the optimal starting column. Test your program with the following values: int board[] = {12, 25, 63, 8, 59, 1, 15, 42, 25, 3, 36, 18}; int h = 3; int w = 4; 202205201400 #include <limits.h> #include <stdio.h> #include <stdlib.h> typedef struct { int parent_col; int total_cost; } Pair; Pair *shortest_path(const int *board, int h, int w); /** * entry point * * @return exit */ int main() { // initialize values int board[] = {12, 25, 63, 8, 59, 1, 15, 42, 25, 3, 36, 18}; int h = 3; int w = 4; // compute weight table Pair *table = shortest_path(board, h, w); // allocate memory to store result Pair *path = (Pair *)malloc(h * sizeof(Pair)); // calculate minimum of last row int min_dis = INT_MAX; int min_col = 0; for (int i = 0; i < w; i++) { int a = table[(h - 1) * w + i].total_cost; if (a < min_dis) { min_dis = a; min_col = table[(h - 1) * w + i].parent_col; path[h - 1].parent_col = i; path[h - 1].total_cost = h - 1; } } // add parents with minimal costs to path for (int i = h - 2; i >= 0; i--) { path[i].parent_col = min_col; path[i].total_cost = i; min_col = table[i * w + min_col].parent_col; } // print result to stdout printf("shortest path has length %d\n", min_dis); for (int i = 0; i < h; i++) { printf("%d. (%d, %d)\n", i + 1, path[i].parent_col, path[i].total_cost); } // free and exit free(path); free(table); return 0; } /** * calculate shortest path using dynamic programming * * @param board * @param h height * @param w width * @return weight table */ Pair *shortest_path(const int *board, int h, int w) { // allocate memory for table Pair *table = (Pair *)malloc(h * w * sizeof(Pair)); // set values of parent_col row for (int i = 0; i < w; i++) { table[i].total_cost = board[i]; table[i].parent_col = i; } // iterate over remaining rows for (int i = 1; i < h; i++) { for (int j = 0; j < w; j++) { int pos = w * i + j; table[pos].total_cost = board[pos]; // left parent int a = j - 1 >= 0 ? table[(i - 1) * w + j - 1].total_cost : -1; // middle parent int b = table[(i - 1) * w + j].total_cost; // right parent int c = j + 1 < w ? table[(i - 1) * w + j + 1].total_cost : -1; // set table entry to minimum of path cost and corresponding column if (j - 1 >= 0 && a < b) { //if we can go to the left and it is cheaper than the middle if (j + 1 >= w || a < c) { //if we can not go to the right or it is more expensive than going left table[pos].parent_col = j - 1; table[pos].total_cost += a; } else { //we can co to the right and it is cheaper than going left table[pos].parent_col = j + 1; table[pos].total_cost += c; } } else { if (j + 1 >= w || b < c) { table[pos].parent_col = j; table[pos].total_cost += b; } else { table[pos].parent_col = j + 1; table[pos].total_cost += c; } } } } return table; } 13. Maximal Continuous Sub-Array Sum. Let \(A\) be an array filled with integers. Consider the problem of finding a continuous sub-array with maximal sum. For example, consider the array The continuous sub-array with maximal sum of \(A\) is the sub-array \([3,4,-2,6]\) with a sum of \(3+4+(-2)+6 = 11\text{.}\) First, we want to compute the maximal sum for a restricted case. Define a recursive mathematical function \(\operatorname{mT}(i)\) specified as follows: For all \(0 \leq i < n\text{,}\) \(\operatorname{mT}(i)\) considers all sub-arrays\begin{equation*} A' = [A[j], A[j+1], \dots, A[i]] \end{equation*}which start at any index \(j \in \{ 0, \dots, i\}\) and end at index \(i\) (both inclusive), and gives the maximal sum, i.e. the sum of the sub-array with the largest sum. For example, for \(i=3\) and the array \([-5,3,4,-2,6,-5,-8,4,3]\) we have \(\operatorname{mT}(i) = 3+4+(-2) = 5\text{,}\) sine the sub-array starting at \(j=1\) has the largest sum. Apply the function on the array \([-5,3,4,-2,6,-5,-8,4,3]\) for every end index \(i\) of the sub-array. Draw a corresponding table with your calculations for every end index \(i\text{.}\) What do you notice? Write a C program which uses the function from part a) to calculate the sum of the continuous sub-array with maximal sum. Extend the program such that the maximal sub-array is calculated and printed. 202205201400 - \begin{equation*} \operatorname{mT}(i) = \begin{cases} A[i] & i = 0\\ A[i] + \max \{\operatorname{mT}(i - 1),0\} & i > 0 \end{cases} \end{equation*} \(A = [-5,3,4,-2,6,-5,-8,4,3], n = 9\) One can directly read the maximal continuous sub-array sum from the table. It is the maximum of all the values in the rightmost column. Additionally, one can get the corresponding sub-array by choosing the index \(j\) corresponding to the maximum value as the last element index of the sub-array. For the starting index, one descends from the last index and chooses the smallest index \(i\) for which the value in the rightmost value is positive. In the example, these indices are \(j := 4\) and \(i := 1\text{.}\) The corresponding sub-array is \([3,4,-2,6]\) with sum \(11\text{,}\) which is the desired result. #include <stdio.h> int max_sum(int *array, int length) { if (length < 1) return 0; int* sums = malloc(sizeof(int) * length); sums[0] = array[0]; for (int i = 1; i < length; i++) { sums[i] = (sums[i-1] > 0) ? sums[i-1] + array[i] : array[i]; } int max = sums[0]; for (int i = 1; i < length; i++) { if (sums[i] > max) max = sums[i]; } free(sums); return max; } // or without additional array int max_sum_short(int *array, int length) { if (length < 1) return 0; int last_sum = array[0]; int max = last_sum; for (int i = 1; i < length; i++) { last_sum = (last_sum > 0) ? last_sum + array[i] : array[i]; if (last_sum > max) max = last_sum; } return max; } int main(int argc, char **argv) { int array[] = {-5,3,4,-2,6,-5,-8,4,3}; int res = max_sum(array, sizeof(array)/sizeof(int)); printf("The maximal continuous sub-array sum is %d\n", res); return 0; } #include <stdio.h> typedef struct { int max; int start; // inclusive int end; // exclusive } subarray_t; void max_sum_short(int *array, int length, subarray_t *result) { if (length < 1) { result->max = 0; result->start = 0; result->end = 0; return; } int last_sum = array[0]; int max = last_sum; int start = 0; int best_start = 0; int end = 0; for (int i = 1; i < length; i++) { if (last_sum <= 0) { start = i; last_sum = array[i]; } else last_sum += array[i]; if (last_sum > max) { max = last_sum; best_start = start; end = i+1; } } result->max = max; result->start = best_start; result->end = end; } int main(int argc, char **argv) { int array[] = {-5,3,4,-2,6,-5,-8,4,3}; subarray_t result; max_sum_short(array, sizeof(array)/sizeof(int), &result); if (result.end - result.start >= 1) { printf("The continuous sub-array with maximal sum is: [%d", array[result.start]); for (int i = result.start+1; i < result.end; i++) printf(", %d", array[i]); printf("]\n"); printf("The sum of the sub-array is %d\n", result.max); } else printf("[]\nThe passed array is empty!"); return 0; }
https://prog2.de/book/sheet06.html
CC-MAIN-2022-33
refinedweb
2,233
63.19
.\" Copyright (c) 1985, 1991, 1993 .\" The Regents of the University of California. All rights reserved. .\" Copyright (c) 2002 - 2005 Tony Finch <dot@dotat.at>. All rights reserved. .\" .\" This code is derived from software contributed to Berkeley by .\" Dave Yost. It was rewritten to support ANSI C by Tony Finifdef.1 8.2 (Berkeley) 4/1/94 .\" $dotat: things/unifdef.1,v 1.51 2005/03/08 12:39:01 fanf2 Exp $ .\" $FreeBSD: src/usr.bin/unifdef/unifdef.1,v 1.25 2008/05/02 16:23:47 hrs Exp $ .\" .Dd September 24, 2002 .Dt UNIFDEF 1 .Os .Sh NAME .Nm unifdef , unifdefall .Nd remove preprocessor conditionals from code .Sh SYNOPSIS .Nm .Op Fl cdeklnst .Op Fl I Ns Ar path .Op Fl D Ns Ar sym Ns Op = Ns Ar val .Op Fl U Ns Ar sym .Op Fl iD Ns Ar sym Ns Op = Ns Ar val .Op Fl iU Ns Ar sym .Ar ... .Op Ar file .Nm unifdefall .Op Fl I Ns Ar path .Ar ... .Ar file .Sh DESCRIPTION The .Nm utility selectively processes conditional .Xr cpp 1 directives. It removes from a file both the directives and any additional text that they specify should be removed, while otherwise leaving the file alone. .Pp The .Nm utility acts on .Ic #if , #ifdef , #ifndef , #elif , #else , and .Ic #endif lines, and it understands only the commonly-used subset of the expression syntax for .Ic #if and .Ic #elif lines. It handles integer values of symbols defined on the command line, the .Fn defined operator applied to symbols defined or undefined on the command line, the operators .Ic \&! , < , > , <= , >= , == , != , && , || , and parenthesized expressions. Anything that it does not understand is passed through unharmed. It only processes .Ic #ifdef and .Ic #ifndef directives if the symbol is specified on the command line, otherwise they are also passed through unchanged. By default, it ignores .Ic #if and .Ic #elif lines with constant expressions, or they may be processed by specifying the .Fl k flag on the command line. .Pp The .Nm utility also understands just enough about C to know when one of the directives is inactive because it is inside a comment, or affected by a backslash-continued line. It spots unusually-formatted preprocessor directives and knows when the layout is too odd to handle. .Pp A script called .Nm unifdefall can be used to remove all conditional .Xr cpp 1 directives from a file. It uses .Nm Fl s and .Nm cpp Fl dM to get lists of all the controlling symbols and their definitions (or lack thereof), then invokes .Nm with appropriate arguments to process the file. .Pp Available options: .Pp .Bl -tag -width indent -compact .It Fl D Ns Ar sym Ns Op = Ns Ar val Specify that a symbol is defined, and optionally specify what value to give it for the purpose of handling .Ic #if and .Ic #elif directives. .Pp .It Fl U Ns Ar sym Specify that a symbol is undefined. If the same symbol appears in more than one argument, the last occurrence dominates. .Pp .It Fl c If the .Fl c flag is specified, then the operation of .Nm is complemented, i.e., the lines that would have been removed or blanked are retained and vice versa. .Pp .It Fl d Turn on printing of degugging messages. .Pp .It Fl e Because .Nm processes its input one line at a time, it cannot remove preprocessor directives that span more than one line. The most common example of this is a directive with a multi-line comment hanging off its right hand end. By default, if .Nm has to process such a directive, it will complain that the line is too obfuscated. The .Fl e option changes the behaviour so that, where possible, such lines are left unprocessed instead of reporting an error. .Pp .It Fl k Process .Ic #if and .Ic #elif lines with constant expressions. By default, sections controlled by such lines are passed through unchanged because they typically start .Dq Li "#if 0" and are used as a kind of comment to sketch out future or past development. It would be rude to strip them out, just as it would be for normal comments. .Pp .It Fl l Replace removed lines with blank lines instead of deleting them. .Pp .It Fl n Add .Li #line directives to the output following any deleted lines, so that errors produced when compiling the output file correspond to line numbers in the input file. .Pp .It Fl s Instead of processing the input file as usual, this option causes .Nm to produce a list of symbols that appear in expressions that .Nm understands. It is useful in conjunction with the .Fl dM option of .Xr cpp 1 for creating .Nm command lines. .Pp .It Fl t Disables parsing for C comments and line continuations, which is useful for plain text. .Pp .It Fl iD Ns Ar sym Ns Op = Ns Ar val .It Fl iU Ns Ar sym Ignore .Ic #ifdef Ns s . If your C code uses .Ic #ifdef Ns s to delimit non-C lines, such as comments or code which is under construction, then you must tell .Nm which symbols are used for that purpose so that it will not try to parse comments and line continuations inside those .Ic #ifdef Ns s . One specifies ignored symbols with .Fl iD Ns Ar sym Ns Oo = Ns Ar val Oc and .Fl iU Ns Ar sym similar to .Fl D Ns Ar sym Ns Op = Ns Ar val and .Fl U Ns Ar sym above. .Pp .It Fl I Ns Ar path Specifies to .Nm unifdefall an additional place to look for .Ic #include files. This option is ignored by .Nm for compatibility with .Xr cpp 1 and to simplify the implementation of .Nm unifdefall . .El .Pp The .Nm utility copies its output to .Em stdout and will take its input from .Em stdin if no .Ar file argument is given. .Pp The .Nm utility works nicely with the .Fl D Ns Ar sym option of .Xr diff 1 . .Sh EXIT STATUS The .Nm utility exits 0 if the output is an exact copy of the input, 1 if not, and 2 if in trouble. .Sh DIAGNOSTICS .Bl -item .It Too many levels of nesting. .It Inappropriate .Ic #elif , .Ic #else or .Ic #endif . .It Obfuscated preprocessor control line. .It Premature .Tn EOF (with the line number of the most recent unterminated .Ic #if ) . .It .Tn EOF in comment. .El .Sh SEE ALSO .Xr cpp 1 , .Xr diff 1 .Sh HISTORY The .Nm command appeared in .Bx 2.9 . .Tn ANSI\~C support was added in .Fx 4.7 . .Sh AUTHORS This implementation was originally written by .An Dave Yost Aq Dave@Yost.com . .An Tony Finch Aq dot@dotat.at rewrote it to support .Tn ANSI\~C . .Sh BUGS Expression evaluation is very limited. .Pp Preprocessor control lines split across more than one physical line (because of comments or backslash-newline) cannot be handled in every situation. .Pp Trigraphs are not recognized. .Pp There is no support for symbols with different definitions at different points in the source file. .Pp The text-mode and ignore functionality does not correspond to modern .Xr cpp 1 behaviour.
http://opensource.apple.com/source/developer_cmds/developer_cmds-53.1/unifdef/unifdef.1
CC-MAIN-2015-32
refinedweb
1,211
79.36
If .Net 1 & 1.1 installed which is used? If I have an app written for .net 1.0 and I install it an .net 1.0, THEN the user updates .net to 1.1, will my app using the 1.0 or newer 1.1 assemblies? I'm THINKING that my app would use the older .net 1.0 (because that's what it was compiled against) if it's present. If 1.0 is NOT present but 1.1 IS, then it would use 1.1 Framework. INVALIDATES LINKER ARGUMENT? If I'm right, doesn't this invalidate Microsoft's .net linker Security argument? (The one that says : if you distribute an app written against .net runtime version A and MS finds a security problem in .net VErsion A the user should be able to update the framework to Version B and fix the security leak. So, now when your app runs, it's not exposing the PC to this old security leak). But... If I'm right, the PC maintains BOTH the old A and new B frameworks side by side and my app will still use the older A framework. So the situation would be the SAME as if I'd LINKED my app to .net version A. Right? Mr. Analogy {ISV owner} Tuesday, February 1, 2005 In general, an app built against the .NET Framework 1.0 will work with that version if it runs on a PC with both 1.0 and 1.1 installed. However, it does depend on what you have in the app's configuration file as you can use the configuration file to alter the default behaviour. See this link for an explanation of how it all works: Mike Green Tuesday, February 1, 2005 No. If Microsoft patches the .NET Framework v1.0.3705, it is still v1.0.3705. It just has SPs applied. Thus if your app demands 1.0.3705 (1.0) the patched framework will be used, but it's still 1.0. You know the whole side-by-side thing has me wondering - what happens when we have 10 versions of the framework and they're all running side-by-side? It's one of those ideas that seems great at the outset, but it doesn't have a longterm future. Dennis Forbes Tuesday, February 1, 2005 Thanks Dennis. I didn't realize that .net has patches. That makes it a bit more likely that we will NOT have a a zillion .net versions. (I.e., if they needed a new .net version every time they needed a bug fix in such a heap of code, then we'd have a lot of versions). Yes, the issue of side-by-side .net execution seems to defeat the whole "shared space" idea where all the apps are in the same "memory space" (I forget the terminology) . Hopefully, all of this means that microsoft will be motivated to have infrequent new .net versions. Then maybe we can actually get one version of .net installed on enough computers so that we don't have to distribute it for everyone. Mr. Analogy {ISV owner} Tuesday, February 1, 2005 Had you considering trying it out? using System; public class VersionTest { public static void Main() { Console.WriteLine(Environment.Version.ToString()); } } Took approximately 14 seconds to write. Took another 30 seconds to compile it twice, with 1.0 and 1.1, and run it twice. There's no surprises when you test things yourself. Brad Wilson Tuesday, February 1, 2005 Recent Topics Fog Creek Home
https://discuss.fogcreek.com/dotnetquestions/5374.html
CC-MAIN-2018-30
refinedweb
593
78.45
. emr-containers ] Displays detailed information about a specified virtual cluster. Virtual cluster is a managed entity on Amazon EMR on EKS. You can create, describe, list and delete virtual clusters. They do not consume any additional resource in your system. A single virtual cluster maps to a single Kubernetes namespace. Given this relationship, you can model virtual clusters the same way you model Kubernetes namespaces to meet your requirements. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. describe-virtual-cluster --id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --id (string) The ID of the virtual cluster that. virtualCluster -> (structure) This output displays information about the specified virtual cluster. id -> (string)The ID of the virtual cluster. name -> (string)The name of the virtual cluster. arn -> (string)The ARN of the virtual cluster. state -> (string)The state of the virtual cluster. containerProvider -> (structure) The container provider of the virtual cluster. type -> (string)The type of the container provider. EKS is the only supported type as of now. id -> (string)The ID of the container cluster. info -> (structure) The information about the container cluster. eksInfo -> (structure) The information about the EKS cluster. namespace -> (string)The namespaces of the EKS cluster. createdAt -> (timestamp)The date and time when the virtual cluster is created. tags -> (map) The assigned tags of the virtual cluster. key -> (string) value -> (string)
https://docs.aws.amazon.com/cli/latest/reference/emr-containers/describe-virtual-cluster.html
CC-MAIN-2022-27
refinedweb
230
53.58
QT 5.0 fullscreen mode behavior changed in Windows 7? In a dual-screen configuration switching a window into fullscreen mode did not change the screen in all versions up to 4.8.4. In Qt 5, however, this is no longer true. Instead the window alway uses the main screen (screen 0). Is this intended behavior or just a bug? I really need the old behaviour and would appreciate if someone knows a way to influence the target screen for fullscreen mode. you can change the window to a different screen, but i have no idea ( geometry doesn't work ) how to use both screens. Here an example to use the last screen: @ QList<QScreen *> screenList = app.screens(); viewer.setScreen(screenList.last()); viewer.showFullScreen();@ Thanks for the response. Unfortunately it did not work. I assumed from your example that setScreen() is a member of QWidget, but this is not the case. What type is the variable "viewer" you use in your snippet? Has you ever sucessfully used this method in Qt 5? Viewer is the QML-Viewer created by the QTCreator 2.6.1 when you begin a new QML2 Application Thanks again! The method setScreen() is a member of QWindows but not of QWidget. However, there is a member QWidget::windowHandle() that returns the pointer to the actual QWindow (although marked as "preliminary" in the documentation). Therefore, using ex->windowHandle()->setScreen(screenList.last()) where ex is a pointer to a QWidget, does the trick. Hi, does any one found a real solution for a QWidget based application? This isin't working at all on my system. QWidget::windowHandle() returns a constant. When casting and passing the second Screen to setScreen, everything just disappears. The above mentioned method works at least on Windows7, that is, on my dual-monitor system @QWidget *ex = <assign a pointer to an existing QWidget object>; QList<QScreen *> screenList = app.screens(); ex->windowHandle()->setScreen(screenList.last()); ex->showFullScreen();@ opens a window on the second screen. Please note that QWidget::windowHandle() returns a pointer to QWindow, not a constant. levelxxl you shouldn't call the windowHandle method from the QWidget namespace. You should call it on your applications widget :)
https://forum.qt.io/topic/22386/qt-5-0-fullscreen-mode-behavior-changed-in-windows-7
CC-MAIN-2018-05
refinedweb
363
67.86
See also: IRC log <nikos> RRSAgent make minutes <nikos> yeh =) <heycam> nikos, do you have local IRC logs? <nikos> I'll check. <nikos> I do. What would you like me to do with them? <heycam> do you want to just mail them as is as the minutes for the day? too much trouble to get them processed as HTML minutes, I think. <nikos> Ok, will do <heycam> thanks <nikos> heycam, they should be bcc'd to public-svg right? <nikos> I was messing around with scribe.perl seeing if I could generate something nice out of them but will have to give up for now <heycam> yep <heycam> ok, no problem <heycam> trackbot, close ACTION-3417 <trackbot> Closed ACTION-3417 Relax referencing requirements (issue-2295). <heycam> Cyril, when you get a chance could you add IDL for the SVGDiscardElement interface to struct.html? the build scripts give me a warning each time I touch that chapter. <heycam> trackbot, close ACTION-3130 <trackbot> Closed ACTION-3130 Edit the spec. for.. <trackbot> Date: 06 February 2013 <heycam> Meeting: SVG WG F2F Sydney 2013 Day 4 <birtles> scribenick: birtles cabanier: in CSS you have the border-image property ... where you can slice up an image ... and tile the sides ... you can stretch them and after a point they start duplicating themselves <cabanier> link: cabanier: we have the same features in Illustrator ... and you can define a side and a corner and it does the stacking for you ... instead of just a png ... if you do a google search you can see many places where people use this ... on the wiki there are several examples krit: you could use svg cabanier: but that's kind of hard because you have to define where the border is dino: do the two images orient along the edge cabanier: yes ... when there's a curve the artwork needs to bend along the curve ... so you might need to add some limitations ... e.g. if there was a gradient then you'd need to morph the gradient birtles: so is this for just SVG cabanier: this is the SVG counterpart to what's in CSS AlexD: this would have to be an adaptive dashing style thing ... so you have an exact integer number of repeats ... like we talked about for dashing where you try to make the dash array end on exact integers cabanier: you'd use this brush like a stroke ... so it also is effected by the stroke-width ... e.g. a stroke width of 2 would make it scale by two ... there's an e.g. with dashes ... so each dash would be one of the images heycam: so, theses dashes here, some of them are squashed when they're small ... but in the non-dashing examples they maintain their aspect ratio cabanier: I think in dashing examples they either squash or duplicate ... when you apply the corners... ... I think for now we only want this on polygons ... because how do you work out the corner of a path? ... even for rects you know where the corners are heycam: but you have examples of using this on a path cabanier: yes, I just drew these in Illustrator ... in the star it uses the corner on the outside but not in the inside ... I think it is a special case for stars <shepazu> (this corners question applies to the "rounded corners" proposal from Rigi-Kaltbad, too) dino: are you sure? cabanier: oh, I think you're right krit: it looks like the engine looks for the smallest angle ... and orients to that angle cabanier: I think the syntax for this would be fairly easy ... for rounded corners it doesn't use the corner piece ... that's just how Illustrator implemented it heycam: is it warping the corner pieces? cabanier: yes ed: so if you had a sharp corner in the last example with the squiggly line ... would it get the corner piece? cabanier: no dino: so you'd only use it for basic shapes? cabanier: right dino: even if you drew the star as a path you wouldn't get the corner piece? cabanier: yes ... otherwise you have to define what is a corner dino: seems like there would be a lot of work in describing how you walk a rectangular object... cabanier: it would be a lot of implementation work but not so much spec work ... I don't think the warping would be defined in the spec dino: I think how the control points are warped should be defined in the spec ... if you want it to be interoperable heycam: I think you could have a high-level description regarding how points along the bezier are mapped cabanier: I'm not saying we don't have to do it.. but do we do that elsewhere in the spec? ed: yes, we do <ed> ed: the method attribute ... when you set that to stretch ... it says something about how its done but it's not very precise dmitry: I checked and Illustrator does apply the corner pieces to custom paths with sharp corners cabanier: so it does, you're right ed: if you make a star with a path you probably want it krit: there's more calculation to determine if the corner between two curves is a sharp corner or not ed: where is the cut-off point krit: maybe it does it on all sharp corners? ... these are fairly detailed discussions... <heycam> heycam: the new wording about computing the shape of a stroke has the kinds of descriptions you would want for the warping here ... it's a high-level description of taking points on a path and turning them into shapes or different points <dmitry> Screenshot of the corner processing in Illustrator: cabanier: so this would effect getStrokeBBox too right? heycam: if some of the tiling pattern didn't fill out the whole tiling space or overflow it, would that effect the stroke bbox? cabanier: there was some discussion of this in the mailing list ... at least it's easier than other brushes like the bristle brush ... I think this is pretty easy and would be nice to have in CSS too ... in CSS you would just use it like border-image ... and if you applied it to a CSS box you wouldn't have the deformed beziers ... do you think this is useful? krit: I think this should not go into SVG2 birtles: I agree krit: so should we continue at all, and if so how should we continue? dmitry: Illustrator lets you define two different corner pieces (inside and outside corner) heycam and cabanier: agree it should not be in SVG2 <dmitry> Screenshot of two types of corners: krit: so do we want the feature? heycam: I think we want the feature krit: do we want to have a module for this or in SVG.next? heycam: I think it could be a separate spec ed: sure <heycam> dino, RESOLUTION: We will continue developing border brushes in a separate specification <scribe> ACTION: Rik to create a module to define SVG border brushes [recorded in] <trackbot> Created ACTION-3440 - Create a module to define SVG border brushes [on Rik Cabanier - due 2013-02-13]. cabanier: so this is not going to be for SVG2 heycam: but we had the requirement to represent InkML traces in SVG ... we said we'd enable InkML to be rendered ... so we *could* consider it for SVG2 cabanier: defining it is pretty easy ... not sure how hard it is to do in the graphic libraries heycam: can't be harder than these border brushes cabanier: at any rate, defining it should be very easy ... you just say that along these paths we have these points and at each point you say, e.g. the stroke should be 200% ... and then you draw a catmull-rom curve along the points AlexD: why would you do that, when it involves tension heycam: I'm thinking we could just extend stroke-width itself ... e.g. stroke-width="10px 15px" cabanier: I think you want to use percentages ... e.g. to represent pressure on the pen ... or if you pick up a bigger pen you want to just say 200% heycam: (e.g. on the board) ... stroke-width="10px" stroke-width-variation="0px 100%, 100px 100%, 150px 50%" birtles: (use case from previous discussion) I want this feature to be able to do finger-drawing on a tablet where the stroke width varies with touch pressure cabanier: I think I prefer stroke-width-varation="0% 100%, 66% 100%, 10% 50%" krit: do you want to be able to specify different widths for each side (left/right) cabanier: I think that might get complicated because it might not always be obvious which side is which heycam: I think it is ok dmitry: I think if you really want that you can just change the path heycam: if the widths could be a repeating pattern you could do spaghetti lines ... I don't think it's much more work cabanier: yes, I agree heycam: I think you could automatically tile it, especially if your offsets are absolute lengths ... I would actually be ok with different left-right sides ... since I think the implementation difficulty would be the same ... but what if the two intersected? cabanier: they can't since the percentage is always positive birtles: is it worth adding the features in stages? ... I think the primary use cases would be data from tablets and calligraphy ... where you probably don't need repeating patterns or asymmetric variations ed: I think it would be nice to have a spread-method like approach ... where it just repeats ... I think you might want to use variable-stroke width for custom line cap ... where the end tapers off dmitry: you could use markers for that ... but it doesn't always work cabanier: like if you have a gradient on the stroke ... the marker would have a separate gradient dino: who do we expect to use this? ... hand authors? tools to export? ... I think this kind of feature is complex to hand author cabanier: I think it's not so hard krit: of course, you can already export this from Illsutrator cabanier: but strokes become a series of paths AlexD: cartographers have been asking for this dmitry: and if the export from Illustrator loses the original path then you can't modify it in script birtles: and I think there are many uses cases where you create/modify paths from script Cyril: does this affect markers heycam: yes ... because markers can be scaled in size depending on the stroke width ... but that's probably what you want ed: I think there are cases where you don't cabanier: I think you want the original stroke-width Cyril: in d3 examples, where you have flows of data ... and you have arrows where you want the arrow to grow or shrink ed: so how should we proceed? heycam: if someone is keen to do the work, someone could specify the minimum set of features ... symmetric stroke width variation with no repeating krit: can stroke-width-variation be a shorthand heycam: not sure about the naming cabanier: it could be Cyril: this would be a presentation attribute? heycam: as I've written it, yes cabanier: can you define a URI somewhere heycam: so you can re-use the definition? ... inheritance works so that might be enough ... at first I'd like to avoid element syntax and just have the property ... if we need a pre-defined thing we can add it later ... it's currently assigned to Doug in the requirements commitments (item 20) cabanier: I can talk to Doug about it <shepazu> (I'm happy to defer to someone else for this, cabanier) <scribe> ACTION: Rik to specify variable width stroking in SVG2 [recorded in] <trackbot> Created ACTION-3441 - Specify variable width stroking in SVG2 [on Rik Cabanier - due 2013-02-13]. <shepazu> (also happy to discuss it and give feedback) <ed> -- 15min break -- -- break, 15min -- <heycam> ScribeNick: heycam krit: I'm not sure on the status of media fragments on the <image> element … especially for xlink:href … I don't think it's specified in SVG … should it be combined together with media fragments? allow #xywh there as well? seems to be useful, but it's not specified currently. … how would that affect our SVG Stacks hack? ed: I don't think it would affect it that much … unless you pick that particular ID krit: there are more fragments that SVG supports that aren't supported in Media Fragments … choosing the viewport e.g. … should we talk to the Media Fragments WG people? <silvia> silvia: what are the other things you're missing? <silvia> spatial dimensions: ed: you can pass transforms, a viewBox Cyril: I don't understand how this depends on xlink:href="" spec krit: Media Fragments conflicts with SVG in some cases Cyril: I don't see the problem with XLink href … media fragments are defined by the MIME type … if you use it in xlink:href="" or src="", it shouldn't matter krit: we need to reference Media Fragments ed: I don't think they're in very much conflict with each other … we should reference Media Fragments silvia: are you using Media Fragments with SVG resources yet? ed: no silvia: SVG resources define how their ID fragments get interpreted, if you don't adopted the Media Fragment spec for that resource type then there's no conflict Cyril: but it makes sense to support Media Fragments, xywh, or timing as well silvia: you just need to extend the fragment specification for SVG Cyril: so t and xywh should be reserved? heycam: just reserve "t=" and "xywh=" Cyril: in Media Fragments you have four dimensions. how does ID work? silvia: nobody has implemented that … the way we envision it is that some containers define names for sections, it's like imagine having a WebVTT chapter file in a webm file, and you have names for sections of the video file … and you can use the chapter name to address the chapter Cyril: what's the syntax? silvia: id=string Cyril: so it's not "#<string>" it's "#id=<string>" silvia: yes krit: for SVG would it be a difference to support #<element> compared to #id=<element>? Cyril: the question is how to combine them … once you start using a freeform name, you can't use an "&" and follow it with another dimension silvia: we wanted to be able to support multiple dimensions; choose this one video segment (temporal), and then choose a spatial area … that's why we defined the parsing for these media fragments … if you don't do that from the start it's difficult to post-fit it … your selectors are functions? … you could do something like "#mediafrag(xywh=…)" to be compatible Cyril: you'd have to treat SVG differently then … sometimes you don't know what the type of the resource is krit: do all media fragments require an equals sign? silvia: yes krit: then it's probably fine … we shouldn't need to have #mediafrag Cyril: we could disallow ampersand in SVG IDs, then it combines well here krit: media fragments in HTML, if you have <img src="blah#xywh=…"> you would have the same problem yes? silvia: on HTML pages it's difficult, since HTML has specified the way a fragment is interpreted … often you have a custom web site, e.g. in YouTube you can have similar time offsets with fragments … but that's not what we've standardised for … we've done this only for media files … HTML is a different media type krit: depends on the MIME type of the document you reference? silvia: yes … that's how the URL specification has defined fragments to work krit: so we need to specify how SVG's fragments are interpreted ... the timing fragments are useful in SVG too yes/ ed: we should define how that works birtles: we've said for Web Animations we want this to work … would be good if this worked for HTML containing documents with animations as well as SVG Cyril, dino: Cyril is asking how you distinguish #t=… from normal ID references in HTML Cyril: if HTML doesn't use the same solution as us, disallowing "&" and "=", it wouldn't be good krit: same problem with SVG in HTML, the MIME type is text/html, and the fragments would work differently silvia: the HTML MIME type does not support media fragments Cyril: yet? silvia: I think it's unlikely ed: but if we have Web Animations it might be useful silvia: we discussed xywh for HTML, it might be an interesting feature, but the HTML WG should discuss that Cyril: so should we aim for alignining with other media-like resources, or what HTML supports? silvia: I'd go with both strategies … I would want to be as much compatible with that spec as possible, and when SVG goes forward have its MIME type define how media fragments work, it's a bigger argument for HTML to support it … I don't think this will break SVG documents, but with HTML it could well break pages … people might have used #t= to mean something different on HTML pages … I would orient myself towards that problem, but rather being compatible with other media files krit: so Media Fragments defines how xywh are parsed, why do we need to define that in SVG? Cyril: we just need to say image/svg+xml follows media fragments silvia: right … the Media Fragments working group was bound by the URL specification … fragments are defined interpreted based on the mime type … we didn't want to have to deal with all of the mime types around, just video krit: SVG is still based on IRI, which should allow more characters silvia: it's not a problem; we wrote the spec to be based on UTF-8 … the only place where it really mattered was the chapter names, named references heycam: I think the URL Standard is meant to supersede IRIs Cyril: I think I'm fine with the group saying we adopt media fragments, and we restrict our IDs not to include xywh=, ampersands, etc. … and then later can coordinate with HTML WG to see if it's possible for them to support this too silvia: one part of the Media Fragments spec has been included in HTML; it defines how time offsets in videos are interpreted Cyril: if you put t=15 does the document timeline start at 0? birtles: it does a seek … then we need to add automatic pausing RESOLUTION: SVG 2 will use Media Fragments. <scribe> ACTION: Cyril to add Media Fragments support to SVG 2. [recorded in] <trackbot> Created ACTION-3442 - Add Media Fragments support to SVG 2. [on Cyril Concolato - due 2013-02-14]. Cyril: is #svgView(viewBox()) the same thing as #xywh? ed: not quite <scribe> ACTION: Brian to define how #t= is interpeted in Web Animations. [recorded in] <trackbot> Created ACTION-3443 - Define how #t= is interpeted in Web Animations. [on Brian Birtles - due 2013-02-14]. … percentages will be different krit: for #xywh these reference the original viewport krit: we have a new function that gets the transform between two elements … I think this will be harder with CSS transforms … since they have 3d transforms … and it's not really possible to get a transformation matrix over this flattening … the question is how do we want to solve this … and second, it returns an SVGMatrix which is 2D … it's not applicable for this … it should return something that can represent 3D matrices … I spoke with Dean about flattening … we think that it should either give you an exception, a null matrix back... … somethign that indicates it's not possible to get the transformation … something to be resolved is the return type … we would need to specify a new matrix … CSS Transforms had a spec for it … which needed to be removed, since it uses CSS OM … Dean proposed Matrix4x4 … we didn't know at this time where it should live -- maybe in ECMAScript? … on the window object … what does it look like? <krit> krit: there were some discusisons about whether it should work with Euler coordinates dino: there was some discussion that day in the SVG meeting about this … which I never followed up on … I think it was about whether to use radians or degrees … we have methods on here for both modifying the matrix and returning a new one … the other big discussion was someone suggested it should really be an ECMAScript type … someone else discussing whether it shouldn't have all those exposed as attributes, it should be a typed array with a wrapper of some sort krit: they wanted to use it with WebGL as well … which needs to be fast dino: that's why I have copyIntoFloat32Array … but you might not want to call this function each time heycam: I don't think there's a concept of live typed arrays is there? dino: I think the suggestion was to have the typed array backing the matrix … there would be an attribute to access that matrix backing data directly … in WebGL it would just be a matter of getting that out, instead of constantly creating new typed arrays … we didn't really have strong drive to get this happening quickly … but I think we do now, now that the spec is closer to completion heycam: makes you think well then why not about Points, etc. dino: I think that's why it wouldn't make sense to send it to ECMA … if we just have Matrix dino: there is still more work to do with improving SVG DOM interfaces … should we add something now to expose 4x4 matrices … or have conversion functions separate from the SVG DOM heycam: could we not replace SVGMatrix with Matrix4x4? dino: that was the idea krit: we got everything from SVGMatrix on Matrix4x4, then we make SVGMatrix inherit from this one … there are some things that need to be in SVGMatrix krit: there are SVGExceptions being thrown heycam: that's gone in SVG2 dmitry: nobody uses skews cabanier: why does this issue keep coming up then? krit: Mozilla found some pages that did use skew dino: people use it for horrible fake 3d, isometric dmitry: we should encourage them to not do this by not having it in the specification cabanier: I know people use skew a lot in animations for fake 3d krit: we could strongly ask people not to use it... dmitry: it should be easier to make beautiful things, not just easier to make ugly things too krit: we could have them be deprecated but browsers still implement it dino: I don't think we can remove skews from transform syntax any more … do we need to expose them in this interface? krit: yes dino: sounds like we do if we want to replace SVGMatrix with Matrix4x4 ed: so do we need to fix this in SVG2? krit: do we want to go with this new interface? I think yes … if it should live in ECMA we can worry about that later … whether it should go on window I asked the CSS group for approval heycam: I think just leave it not [NoInterfaceObject] krit: which spec should it live in? a new one? cabanier: Canvas references SVGMatrix, would be nice to reference this instead krit: what happens if you have a 3D transform on canvas? heycam: who should write this Matrix4x4 spec? krit: have to ask first, but I will probably do it dino: what about the name Matrix4x4? heycam: eh... dino: GL calls them matrix4, vec3, etc. krit: i prefer Matrix to Matrix4x4 heycam: should Matrix replace SVGMatrix, or SVGMatrix inherit? ed: I don't think it's common for people to rely on "[object SVGMatrix]" being the actual object type krit: ok, just replace SVGMatrix with the new Matrix then ed: as long as we have the same method names I think it should be fine RESOLUTION: SVG 2 will reference the new Matrix specification and replace SVGMatrix with Matrix, once that spec is ready. <scribe> ACTION: krit to write up a spec for Matrix [recorded in] <trackbot> Error finding 'krit'. You can review and register nicknames at <>. <scribe> ACTION: dirk to write up a spec for Matrix [recorded in] <trackbot> Created ACTION-3444 - Write up a spec for Matrix [on Dirk Schulze - due 2013-02-14]. ACTION-3444: Also update SVG 2 to reference the spec when it's ready. <trackbot> Notes added to ACTION-3444 Write up a spec for Matrix. <krit> krit: custom filters are used with CSS shaders … so that you can apply some distortion/modification to the graphics … in the old specification, the syntax was "custom(url('vertexshader') mix(url('fragmentshader') multiply src-over), 4 4 attached, param1 value1, param2 value2)" … it's very long, hard to read, and only supports GLSL … MS asked us to make it more generic to support other shader languages … one idea is "custom()" references an at-rule … @filter f1 { … } … but this does not support animations dino: you could have @filter f2 { … } and animate between f1 and f2 krit: we wanted to try to keep it simple … the @filter rule tries to emulate @font-face … so it has a 'src' attribute, and you can provide a type … src: url(…) format("x-shader/x-vertex") heycam: these are just hints not to download, like @font-face? krit: no it's different from font-face … you need all of these resources … there's also "geometry: grid(4,4);" … and "margin: …;" like a filter primitive margin … and "parameters: …;" for the parameters to the shader programs … so what happens with different shader formats. … the @filter defines a generic primitive, so this is not limited to CSS Shaders … if you don't support GLSL, we'd suggest a new media query … @media (filter: glsl) { … } … so other browser could define new properties on the at rule ed: what is the point of the format() yet? krit: in case there is a new shader type under GLSL dino: WebGL defines a restricted version of GLSL … it's not strictly GLSL krit: maybe this keyword could be "WebGL" then … we can think about that later ed: so you have src with format()s in case you support more shader types later? krit: also you could reference an SVG filter here <filter id="f2"> <feOffset dx="var(x1)" dy="var(x2)"/> </filter> … something like SVG Parameters or CSS Variables … the custom() function would then reference a filter at-rule that references the SVG filter @filter g2 { src: url(#f2) format('svg'); parameters: x1 30, x2 30; } filter: custom(g2, x1 20, x2 20); ed: what is the var() syntax there? is that defined somewhere? krit: we're not going to put that in the first version of the spec … since it's not clear how Parameters / CSS Variables is working in SVG yet … but this is how custom SVG filters can be animated in the future ed: it might be useful to be able to pass in the document time into the filter krit: we can think about that for v2 heycam: I find it a bit strange that format() in src works differently from in @font-face krit: what if we rename "src" to "filter-src"? heycam: maybe… it might be the combination of "src" and "format()" that looks to me like formats are hints to avoid downloading … why require format() at all given you can look at the actual served Content-Type? krit: servers might not set that correctly dino: there's not even a standardised extension for these files … what if you reference from the local file system heycam: what is the advantage of 'src' having the same format across different shading language @filter rules? dino: the source language format is the thing most likely to change … geometric, margin, parameters make sense with other shader languages too krit: so are people happy with @filter rule? heycam: I like it more than stuffing everything in to the property … what about the src descriptor, people want a different name? Cyril: you plan to have different mime types for vertex vs fragment shaders? krit: it's the case already … that's defined by WebGL Cyril: is the mime type registered? dino: x-shader/* is not registered, but someone would have written something down somewhere heycam: for me, you don't need to rename src for now… RESOLUTION: Filter Effects changes to use @filter. -- lunch break one hour -- <cabanier> scribenick: cabanier heycam: filter media query feels different from other media queries …since those are properties of the device ed: maybe @supports? heycam: so you could write '@support filter(glsl)' … the syntax is extended but not implement krit: yes, that seems better ... I'm OK with that heycam: at some point there will be @supports for @-rules krit: inside the rule? heycam: you could then have: … atrule(filter, src:… format('x-shader') … the normal properties just check if they parse correctly … so maybe it's not quite right, but I'd be happy … Maybe email www-style <scribe> ACTION: Dirk to email www-style about at-supports filter function [recorded in] <trackbot> Created ACTION-3445 - Email www-style about at-supports filter function [on Dirk Schulze - due 2013-02-14]. krit: is @filter-rule fine? ed: are there other possibilities … is everything of the previous syntax possible krit: yes cabanier: even animations: heycam: that's for the @media part not the rule krit: do we need a new RESOLUTION? heycam: can you do the same as before? krit: yes RESOLUTION: accept proposed descriptors for at-filter rule heycam: the question is if we support css lenght and how we reflect that in svg … Dirk, did you add it already? krit: yes. as unknown because we don't want to extend SVG DOM at this time heycam: so, there are a few new unit types such as rem, vw, vh, ch, etc ….rem is default font size shanestephens_: it's to do layout based on the font heycam: for instance to do margins shanestephens_: MDN has a good description heycam: we want to support all of those … my question was that we want to add accessors for all of those krit: should make it so that it becomes more extensible heycam: you could use named properties … that makes it open ended. But you'd have to update the spec ed: that makes it more clear that the spec is going to be extended … having a group of names makes that more clear … the spec refer to the group of supported unit types in CSS heycam: named properties would require slightly different implementation … but I like them to be visible in IDL ed: as long as the spec is clear that they can be updated, it's fine by me heycam: I think we want an accessor so you can set it by string … .x.? to get the string value … to write or read the string … .value ? shanestephens_: I like that … value would be a strange name for a unit RESOLUTION: add a 'value' attribute to read or write the CSS serialization of a unit length heycam: there are values like 'calc' and 'attr' krit: x = attr('x') heycam: that would work krit: that depends on the css syntax so it would not fail parsing heycam: no. that would not be a problem. … you can have a property to looks at attr(style) and that would fail to parse … do we want calc and attr and var to work in SVG? krit: yes heycam: and have them work on x and y to work on CSS shanestephens_: if something is a calc? heycam: you can still look at 'px' … should there be an accessor to get the calc value? shanestephens_: we have a lot of experience with polyfill and we spend a lot of javascript mimicing value parsing … web animations has a calc in javascript parser heycam: the other half is how this is reflect in SVG length … and follow Dirk's example to reflect them as unknown values ed: yes, that is the only reasonable value <scribe> ACTION: heycam add a string accessor on SVG animated length and to make 'calc', 'attr' and 'var' work [recorded in] <trackbot> Created ACTION-3446 - Add a string accessor on SVG animated length and to make 'calc', 'attr' and 'var' work [on Cameron McCormack - due 2013-02-14]. heycam: I changed the spec that if there is a list, you just set the first value … if there is no value, we add one ed: sounds reasonable heycam: I didn't add anything to SVGAnimatedAngle … since noone is really using that one … everyone uses the length one cabanier: maybe better to be consistent heycam: OK <scribe> ACTION: heycam to update SVGAnimatedAngle as well [recorded in] <trackbot> Created ACTION-3447 - Update SVGAnimatedAngle as well [on Cameron McCormack - due 2013-02-14]. <birtles> birtles: there are 3 documents … the link is the core spec and the intent is to have 2 more document to map SVG and CSS features … Is Tab's work available yet? shanestephens_: it's not quite ready, but I'll provide a link <shanestephens_> This is a copy of the CSS integration document, but it does rely on features that are not yet firmed up in the core specification: birtles: we have f2f next week shanestephens_: there's a few reasons that the spec is so big … we need to provide IDLs and Brian added a lot of diagrams … and complete descriptions of processes … Also we have a fairly complete polyfill that is only using 1700 lines of code krit: does the polyfill do synchronisation? shanestephens_: yes, but it doesn't integrate with CSS and SVG … but it works with other libraries <shanestephens_> here is the polyfill: birtles: there's a skeleton for the SVG integration … but don't even look at that … I want to talk about scheduling <shanestephens_> (actually it's now about 2200 lines) … next week we have a f2f to fix remaining issues … and then request FPWD … there are 3 contentious features shanestephens_: except for video, the main thrust of the doc is correct birtles: the FX taskforce will be asked to review so we can publish it … and we hope to do that at the end of next week shanestephens_: the review will be 2 ways. People will either like it or there will be a lot of contentious issues cabanier: probably have to ask each group for resolution birtles: yes dino: It would like to know what changed since I provided feedback … a declarative form is more important and it's not in the document … I notice that the template is taken so that's good shanestephens_: for declarative, we would like to CSS animation, transition and SVG all work the same under the hood dino: yes shanestephens_: so that in future version we can push more declarative markup in the spec dino: OK. Then I have no problem with the model and it does a good job of describing what an animation engine does in a browser … my concern is with the really big javascript API … CSS transitions became popular because they're powerful without being complex … people are hesitant for massive APIs Cyril: authoring tools are missing shanestephens_: it's a lot smaller than SMIL … the shim that we built is really quite small … another things is that it provides a declarative view … It's really very similar to CSS … I understand that that doesn't address your concerns … the polyfill is mostly for testing s/poylfill/API/ krit: in theory SVG and CSS animations should have the same model under the hood … the most important part is that this provides an animation model that is currently lacking in CSS birtles: there's an appendix dino: that's still really big birtles: we can talk about things that shouldn't be exposed dino: A lot of this stuff is needed … an author wants to provide timing to a document … he wants things to happen at certain times … for example, read long books … where we want something to happen at a certain time … just that alone, is requested a lot more than low level access to an animation … scrubbing animation, querying animations, etc … but we get almost no request for something like this shanestephens_: In Google we could use this a lot dino: but most people want simple features shanestephens_: chaining animations is low level? dino: it would be great to have that. … We're on the fence about that one … I'm not saying not to provide this … having such a massive API as step 1 seems too much shanestephens_: Are you suggesting not to provide a JS API? dino: no, put more emphasis on the integration specs … step 1, describe the timing mode and the next step is to provide the integration spec shanestephens_: we think that the most important thing is CSS and SVG animations use the same model … so we agree with you dino: I would like to solve of an author that want to make an animated page … and this API is not a solution for that birtles: yes, this is not for authors dino: that is why I want those specs at the same time shanestephens_: should we have the spec ready at FPWD time? dino: yes, but not have such an extensive API krit: CSS animations and transitions need a model now AlexD: maybe we need to split the APIs into a separate document dino: ??? birtles: there is a way to split things up in 2 parts dino: yes, the timing object would allow you to write your animations birtles: that concept is already there shanestephens_: that sounds exciting … do you think the CSS WG would go for that dino: if you have a class '::timeactive' … and have a CSS animation in there … that would be very powerful birtles: we can do a timesheet … and I would love to do that shanestephens_: so we should split the IDL off and into 2 pieces Cyril: yes, the size is an issue … maybe splitting the spec in separate documents birtles: I don't know how that would help Cyril: setting time in a document is useful by itself with no animations … media elements could be hooked to that birtles: those use cases are already met … you can have a media element Cyril: you can't have frame accuracy today … so maybe one step is solve this problem birtles: it's hard to prioritize … I'm happy to split the API up … with regards to the size, we can work on that … but our problem is that we want to harmonize CSS and SVG … and we're cutting out a bunch from SMIL already … so, it will always be big Cyril: how can you integrate inconsistent models? … if you map the models, will it break anything? birtles: no shanestephens_: CSS is underspecified birtles: and inconsistently implemented <birtles> (that is, some details of SVG are inconsistently implemented) shanestephens_: would it be OK to delay ::timeactive? … or should we do that now dino: I would like to have that now and can write something up … it would be very useful to a lot of people without exposing a large API … I would like to reimplement CSS animations in WebKit with your spec … take canvas for instance that took 8 years and only 2 classes <Cyril> ack silvia: introducing a big new feature using CSS, SVG and even HTML, how much overlap is there with HTML? … also when you're introducing this big thing, it's not enough and too much. Since people come with their own angles … for an HTML person it's not enough but the spec is too big … splitting it into more document will help … also providing examples and summaries is very helpful since it's too hard to digest birtles: there are no changes to HTML … just additions to document and element interfaces silvia: how about animateColor. That is in the spec birtles: that's surprising shanestephens_: it's tricky … people say it's too much but want more features … we should stick to the brief that we want to unify CSS and SVG animation model silvia: the minute you introduce the API you can animate an HMTL page Cyril: I want to make sure that we can integrate with media elements silvia: you can touch the HTML spec if it's needed … there's an interface that will be needed dino: <par> and <seq> would be nice to have in the document … and it's simpler than javascript … I would like to style animation shanestephens_: what is the next step? … we're working on this full time … and a lot of engineering time to make this happen dino: where do we want to be in order to make a first draft … I think the integration document is the most useful birtles: how can you have that without a model? ed: yes, I would like the integration specs first silvia: I would like to see the markup to see what you're trying to do shanestephens_: this doesn't really apply here … providing examples in markup will not do krit1: it seems like we're going in circles Cyril: everyone agrees that there should be a unified model … at a minimum we should have the model and integration with CSS krit1: are timesheets important? dino: I would like to … I think those are more important birtles: I think we can already do that Cyril: I would like to integrate with media elements <ed> -- 15min break -- <heycam> ScribeNick: heycam shanestephens_: I had a suggestion that we could keep the parts of the IDL that expose the behaviour of animations generated by CSS and SVG, and remove parts of the IDL that let you create content through js … so we can test the CSS and SVG are in the same model and are the same thing … and then cyril can go forward with animations in HTML … and dean can go forward with timesheets … and we can look at completing the js api … I think Brian is interested in adding functionality to SVG birtles: I'll work on SVG integration shanestephens_: v1 of the document can be the model, CSS integration, SVG integration, and just enough IDL to confirm that all of the timing parameters of the model are working correctly Cyril: a browser will be compliant to the standard if it exposes the right objects with the right values at the right time shanestephens_: that's pretty much all we care about. functionally that the two specs are aligned. birtles: I don't really like exposing a read only model like that. I think we should split the API into a separate spec. shanestephens_: it wouldn't have to be read only, but you'd need to leave out things like play() ... the only problem I have with splitting out the API is that it leaves nothing testable … and if it's not testable, it can't be a spec birtles: maybe that's OK … and we publish the API later and test that dino: a NOTE can go onto the REC track birtles: I think we can still work on the API spec, it's not sidelined … I think we're going to implement it anyway, just pref it off … just having a read only API doesn't meet those use cases shanestephens_: why do we want to get it to FPWD? birtles: I think we want to get rid of the animations stuff from SVG and point to this thing … that's where this whole discussion is going in terms of FPWD … is this going to be a problem for SVG? … SVG is moving along, and this one is slowing down … we've got to work out how to solve that shanestephens_: if we make it a NOTE would that work? birtles: we still have the SVG integration document and that will be referenced by SVG 2 shanestephens_: the CSS integration document will only allow you to test that CSS transitions birtles: what's the whole point of having a unified model, to think out loud? silvia: from what I'm hearing, you can't have a unified model without the JS API? shanestephens_: you can't test that it exists without a handle on it … another way forward would be as part of the spec, specify some interoperability primitives between SVG and CSS … so have some SVG animations using CSS key frames for example, and vv … but that's getting in to new features heycam: you could test how CSS and SVG animations interact shanestephens_: so a model document that says how CSS and SVG animations exist in the model and how they interact … then you can test the results of that birtles: this is not testing much of the animation model shanestephens_: can we just expose TimedItem? birtles: it's almost more meaningful to allow CSS animations to have an absolute start time … in terms of unifying the two … then at least you know they're working off the same clock shanestephens_: that doesn't make them interoperable at all though ... exposing TimedItem as a r/w object... birtles: we could think about that next week … if you do that I think you might draw in the rest pretty quickly Cyril: you might want to see if people agree with the model by implementing something heycam: I think it would be fine to go along the REC track, perhaps with conformance classes on other specifications using the model spec … you don't need a test suite to pass CR … though you could just wait until you have feedback from implementors that they are happy with re-jigging their animation implementations in terms of the model shanestephens_: you could point to the API spec and suggest that as a way for them to test it internally silvia: people won't be excited about a model spec shanestephens_: I think we're pushing the model faster so that it can be normatively referenced … the API can still stay as an ED next to it and publicise it … brian, dean and I are the people likely to implement in 3/5 browsers, so it's not like the people who need to see this aren't seeing it Cyril: will MS start implementing this now that there's a unified model? dino: wonder if a polyfill running in IE is enough to count as an implementation [65] Have unknown elements treated as <g> for the purpose of rendering AlexD: useful for globalCoordinateSystem heycam: I've never been entirely comfortable with changing the behaviour here ed: I don't like it at all shanestephens_: Web Components is like a subset of unknown elements … is that going to end up in SVG as well? heycam: they rely on unknown elements being rendered? shanestephens_: we'd need to look at exactly what they rely on … it's only a subset of unknown elements <x-blah> <dmitry> ** heycam: it would be nice to have explicit wording about exactly what is required of unknown elements ed: I don't know if it makes sense to separate elements found outside of <text> from text content elements … that's what you typically get from unknown/fallback Cyril: in previous minutes we said it could be a fallback for connectors … a new implementation will implement connectors, and a previous one would ignore it krit: do we have something in mind that we want to add new graphical elements in the future? … if we wanted introduce a <brush> element as a new resource, this wouldn't work as expected … for browsers who implement this unknown element, they would render the <brush> contents … but those who do implement it would not render it ed: I think it makes more sense to ignore / not draw unknown content krit: I think it will harm more than solve problems RESOLUTION: We will drop the unknown-elements-are-rendered requirement from SVG 2. <scribe> ACTION: Cameron to clarify the behaviour of unknown elements in SVG 2. [recorded in] <trackbot> Created ACTION-3448 - Clarify the behaviour of unknown elements in SVG 2. [on Cameron McCormack - due 2013-02-14]. [66] Remove the requirement to have @width and @height on foreignObject <shepazu> (I think this was a mischaracterization of the "unknown elements" proposal) heycam: this was to make foreignObject sized by shrink wrapping based on its contents dino: the width will be based on the viewport in HTML … doesn't make sense for it to be wider than that krit: once width and height are properties, we have the auto value … but we should keep the 0 width/height defaults heycam: it's not something I feel strongly about ed: I'm fine with not doing anything for this one RESOLUTION: We will drop the "foreignObject can be automatically sized" requirement for SVG 2. [67] Improve the fallback mechanism using switch Cyril: is this similar to allowReorder? ed: this is how you treat unknown elements … if you want to switch on something that is a new element, then you won't check the conditional processing attributes … that's the issue … let's say we introduce a new <foo> element in SVG, in old user agents you'd still like to see if the new feature string is there, and do something special … <foo requiredFeatures="blah"> heycam: you could just wrap it in a <g> ed: that's the workaround … maybe in some cases you don't want to have some wrapper element krit: if unknown elements are ignored, then you cannot reference it ed: render it heycam: I say just look at those conditional processing attributes if the element is in the SVG namespace ed: currently a <switch> would always pick an unknown element, since it is considered not to have any conditional processing attributes, and therefore passes the tests Cyril: you should never pick the element you don't know, what's the sense in that? ed: either you check the attributes you already know on SVG elements, or you just ignore them Cyril: so just remove it for the purpose of switch processing ed: the only way you can get what you want is to wrap it in a known SVG element <scribe> ACTION: Erik to do the "Improve the fallback mechanism using switch" requirement. [recorded in] <trackbot> Created ACTION-3449 - Do the "Improve the fallback mechanism using switch" requirement. [on Erik Dahlström - due 2013-02-14]. RESOLUTION: Keep the "Improve the fallback mechanism using switch" requirement in SVG 2. [68] Provide a way to control audio level and playback heycam: sounds like we should get this behaviour from the HTMLAudioElement interface … so I don't think SVG needs anything specific ed: the previous discussions didn't include <audio>, but I think they should be ACTION-3432: Should also add <audio>. <trackbot> Notes added to ACTION-3432 Edit SVG 2 to add the iframe, canvas, video elements. RESOLUTION: The "Provide a way to control audio level and playback" SVG 2 requirement does not need any action, as we will get this functionality from HTMLAudioElement. [69] Provide positioning information in MouseEvents AlexD: returning a user space position heycam: I thought I had a proposal on SVGPoint to get the UIEvent's position in a given element's coordinate space krit: I'd like to not encourage SVGPoint but rather a more general Point that's being discussed in CSS heycam: the problem is UIEvent is not really defined by us ... I'll take the requirement RESOLUTION: We will keep the "Provide positioning information in MouseEvents" requirement in SVG 2. <scribe> ACTION: Cameron to do the "Provide positioning information in MouseEvents" SVG 2 requirement. [recorded in] <trackbot> Created ACTION-3450 - Do the "Provide positioning information in MouseEvents" SVG 2 requirement. [on Cameron McCormack - due 2013-02-14]. [70] Support CSS3 Color syntax ed: this one already done [71] Support CSS3 image-fit ed: got renamed to object-fit ... I'll keep that, if I have the time to do it <scribe> ACTION: Erik to do the "Support CSS3 image-fit" SVG 2 requirement. [recorded in] <trackbot> Created ACTION-3451 - Do the "Support CSS3 image-fit" SVG 2 requirement. [on Erik Dahlström - due 2013-02-14]. RESOLUTION: We will keep the "Support CSS3 image-fit" SVG 2 requirement. [72] Make it easier to write a zoom/pan widget, possibly by adding convenience method to get scale/transfer heycam: so we discussed in zurich possibly extending CSS overflow and tying zoom/pan to that … Tab was interested in this … but it does not exist as a proposal yet … and needs more thought … I say defer unless there is a concrete proposal ed: yep RESOLUTION: We will defer the "Make it easier to write a zoom/pan widget" SVG 2 requirement unless a concrete proposal is forthcoming. [73] Align with CSS Value and Units heycam: I'd like to take that one <scribe> ACTION: Cameron to align SVG 2 with css3-values. [recorded in] <trackbot> Created ACTION-3452 - Align SVG 2 with css3-values. [on Cameron McCormack - due 2013-02-14]. RESOLUTION: We will keep the "Align with CSS Value and Units" SVG 2 requirement. [74] Deprecate baseline-shift and use vertical-align heycam: contingent on me rewriting the whole Text chapter … I will keep it RESOLUTION: We will keep the "Deprecate baseline-shift and use vertical-align" SVG 2 requirement. [75] Allow video elements to have captions, tracks, etc krit: I don't know why I put my name there heycam: should be part of Takagi-san's action, given he is adding <video> krit: so <track> and <source> would both be SVG elements as well ACTION-3432: Should also add <track> and <source>. <trackbot> Notes added to ACTION-3432 Edit SVG 2 to add the iframe, canvas, video elements. RESOLUTION: We will keep the "Allow video elements to have captions, tracks, etc" SVG 2 requirement. [76] Allow clip to reference any element heycam: Chris' name is on that currently krit: the problem is you have a <g> with a <rect>, why should that not clip while a plain <rect> would clip? … I don't think this would be a huge problem for anyone to implement heycam: if you have an existing shape, you don't want to duplicate it to put it in a <clipPath> krit: I'm not against it but I am not interested in doing the spec work cabanier: I will take it birtles: we already have a resolution to allow a <g> in a <clipPath> <scribe> ACTION: Rik to allow the clip-path property to reference non-<clipPath> elements in SVG 2, and to allow <g> in a <clipPath>. [recorded in] <trackbot> Created ACTION-3453 - Allow the clip-path property to reference non-<clipPath> elements in SVG 2, and to allow <g> in a <clipPath>. [on Rik Cabanier - due 2013-02-14]. <birtles> RESOLUTION: We will keep the "Allow clip to reference any element" SVG 2 requirement. [77] Promote some attributes to properties krit: so this is just allowing auto for lengths etc.? heycam: this is the whole property promotion change … unless somebody puts their hand up, defer krit: leave it in the list and see if we have time for it later ... I just know I won't have the time in the next few months to look at this, maybe after it heycam: how about I put your name next to it krit: ok <scribe> ACTION: Dirk to do the "Promote some attributes to properties" SVG 2 requirement. [recorded in] <trackbot> Created ACTION-3454 - Do the "Promote some attributes to properties" SVG 2 requirement. [on Dirk Schulze - due 2013-02-14]. RESOLUTION: We will keep the "Promote some attributes to properties" SVG 2 requirement for now, and hope Dirk gets time to do it. [78] Have an advance font metrics interface heycam: seems deferable to me dino: what does this mean? isn't there a measureText API in canvas? ed: that's usable for SVG as well ... if we don't have to do anything that's good too cabanier: I'm not sure if people are happy with that API dino: if you can defer something to another group, and they're actually going to do it... ed: what kind of things does it give you? dino: the descender lengths, ... cabanier: font bounding boxes, widths RESOLUTION: We will defer the "Have an advance font metrics interface" to the canvas spec. <scribe> ACTION: Rik to investgate making the canvas font metrics interface without needing a <canvas> element. [recorded in] <trackbot> Created ACTION-3455 - Investgate making the canvas font metrics interface without needing a <canvas> element. [on Rik Cabanier - due 2013-02-14]. ACTION-3455: Maybe by making TextMetrics take a constructor with a text string, element context for style. <trackbot> Notes added to ACTION-3455 Investgate making the canvas font metrics interface without needing a <canvas> element.. -- finish -- This is scribe.perl Revision: 1.137 of Date: 2012/09/20 20:19:01 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/custom paths/custom paths with sharp corners/ Succeeded: s/interprted/interpreted/ Succeeded: s/since/since it/ Succeeded: s/matri/matrix/ Succeeded: s/intead/instead/ Succeeded: s/f2/a filter at-rule that references the SVG filter/ Succeeded: s/../.../ Succeeded: s/no/not/ Succeeded: s/resolution/RESOLUTION/ Succeeded: s/expose the CSS OM/extend SVG DOM at this time/ Succeeded: s/resolution/RESOLUTION/ Succeeded: s/lenght/length/ Succeeded: s/resolution/RESOLUTION/ Succeeded: s/a/are/ Succeeded: s/ony/only/ Succeeded: s/authoring tools like it/authoring tools are missing/ FAILED: s/poylfill/API/ Succeeded: s/can we do that later/should we do that now/ Succeeded: s/the model could/a NOTE can/ Succeeded: s/use/us/ Succeeded: s/interesting/interested/ Found ScribeNick: birtles Found ScribeNick: heycam Found ScribeNick: cabanier Found ScribeNick: heycam Inferring Scribes: birtles, heycam, cabanier Scribes: birtles, heycam, cabanier ScribeNicks: birtles, heycam, cabanier WARNING: No "Present: ... " found! Possibly Present: ACTION-3432 ACTION-3444 ACTION-3455 AlexD Cyril birtles birtles_ cabanier dino dmitry ed filter glenn glenn_ heycam https joined konno konno_ konno__ krit krit1 link nikos nikos1 scribenick shanestephens_ shepazu silvia stakagi svg thorton trackbot You can indicate people for the Present list like this: <dbooth> Present: dbooth jonathan mary <dbooth> Present+ amy Found Date: 06 Feb 2013 Guessing minutes URL: People with action items: brian cameron cyril dirk erik heycam krit rik[End of scribe.perl diagnostic output]
http://www.w3.org/2013/02/06-svg-minutes.html
CC-MAIN-2015-48
refinedweb
9,780
55.71
Provided by: alliance_5.1.1-3_amd64 NAME genpat, A procedural pattern file generator SYNOPSIS genpat [-v] [-k] [file] DESCRIPTION Genpat is a set of C fonctions that allows a procedural description of input pattern file for the logic simulator ASIMUT. The Unix genpat command accepts a C file as input and produces a pattern description file as output. The extension ".c" is not to be given. The file generated by genpat is in pat format, so IT IS STRONGLY RECOMMENDED TO SEE pat(5) BEFORE THIS MANUAL. OPTIONS -v verbose mode -k keeps the executable along with the compilation Makefile after completion GENPAT FILE FORMAT From a user point of view, genpat is a pattern description language using all standard C facilities (include, define, variables, loop, ...). Fonctions provided by genpat are to be used in a given order. Using them in a different order won't crash the system, but will result in execution errors. Here follows the description of the input file. A pat format file can be divided in two parts : declaration and description part. The declaration part is the list of inputs, outputs, internal signals and registers. Inputs are to be forced to a certain value and all the others are to be observed during simulation. The description part is a set of patterns, where each pattern defines the value of inputs and outputs. The pattern number represents actually the absolute time for the simulator. Similarly, a genpat file can be divided in two parts : declaration and description part. Functions related to the declaration must be called before any function related to the description part. declaration part The first thing you should do in this part is to give the output file's name (see DEF_GENPAT(3)). Then, this part allows you to declare the inputs, the outputs, and internal observing points (see DECLAR(3)). It is also possible to create virtual arraies (see ARRAY(3)). description part After all signals are declared, you can begin to define input values which are to be applied to the inputs of the circuit or output values which are to be compare with the values produced during the simulation. (see AFFECT(3)). Genpat describes the stimulus by event : only signal transitions are described. This part also allows you to give instructions to the simulation tool to save the state of the circuit at the end of the simulation. (see SAVE(3)). Last thing you should do in this part is to generate the output file (see SAV_GENPAT(3)). FUNCTIONS DEF_GENPAT() defines the output file's name. SAV_GENPAT() make the output file be generated DECLAR() declares inputs, outputs, and the internal observing points. ARRAY() allows signals of the same type to be groupped in an "virtual array" in order to ease their manipulation INIT() changes the values of registers between two patterns. AFFECT() assigns a value to a signal, at a given pattern number. This value is kept on the signal until a new value is assigned to the signal. SAVE() informs the simulation tool to save the state of the circuit at the end of simulation LABEL() gives a label to the current pattern GETCPAT() return the number of the current pattern EXAMPLES #include <stdio.h> #include "genpat.h" char *inttostr(entier) int entier; { char *str; str = (char *) mbkalloc (32 * sizeof (char)); sprintf (str, "%d",entier); return(str); } /*------------------------------*/ /* end of the description */ /*------------------------------*/ main () { int i; int j; int cur_vect = 0; DEF_GENPAT("example"); /* interface */ DECLAR ("a", ":2", "X", IN, "3 downto 0", "" ); DECLAR ("b", ":2", "X", IN, "3 downto 0", "" ); DECLAR ("s", ":2", "X", OUT, "3 downto 0", "" ); DECLAR ("vdd", ":2", "B", IN, "", "" ); DECLAR ("vss", ":2", "B", IN, "", "" ); LABEL ("adder"); AFFECT ("0", "vdd", "0b1"); AFFECT ("0", "vss", "0b0"); for (i=0; i<16; i++) { for (j=0; j<16; j++) { AFFECT (inttostr(cur_vect), "a", inttostr(i) ); AFFECT (inttostr(cur_vect), "b", inttostr(j) ); cur_vect++; } } SAV_GENPAT (); } ENVIRONMENT VARIABLES Genpat reads the environment variable VH_PATSFX to give the result file an extension. SEE ALSO AFFECT(3), ARRAY(3), DECLAR(3), DEF_GENPAT(3), GETCPAT(3), INIT(3), LABEL(3), SAVE(3), SAV_GENPAT(3), pat(5), asimut(1)
http://manpages.ubuntu.com/manpages/cosmic/man1/alliance-genpat.1.html
CC-MAIN-2019-30
refinedweb
682
51.07
Prime numbers are those integers greater than one that are divisible only by themselves and one; an integer greater than one that is not prime is composite. Prime numbers have fascinated mathematicians since the days of the ancient Greek mathematicians, and remain an object of study today. The sequence of prime numbers begins 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, … and continues to infinity, which Euclid, the famous teacher of geometry, proved about twenty-three centuries ago: Assume for the moment that the number of primes is finite, and make a list of them: p1, p2, …, pk. Now compute the number n = p1 · p2 · … · pk + 1. Certainly n is not evenly divisible by any of the primes p1, p2, …, pk, because division by any them leaves a remainder of 1. Thus either n is prime, or n is composite but has two or more prime factors not on the list p1, p2, …, pk of prime numbers. In either case the assumption that the number of primes is finite is contradicted, thus proving the infinitude of primes. — Euclid, Elements, Book IX, Proposition 20, circa 300 B.C. In this essay we will examine three problems related to prime numbers: enumerating the prime numbers, determining if a given number is prime or composite, and factoring a composite number into its prime factors. We describe in detail five relevant functions: one that makes a list of the prime numbers less than a given number using the Sieve of Eratosthenes; two that determine whether a given number is prime or composite, one using trial division and the other using an algorithm developed by Gary Miller and Michael Rabin; and two that find the unique factorization of a given composite number, one using trial division and the other using John Pollard’s rho algorithm. We first describe the algorithms in the body of the essay, then describe actual implementations in five languages — C, Haskell, Java, Python and Scheme — in a series of appendices. Our goals are modest. Our purpose is pedagogical, so we are primarily interested in the clarity of the code. We describe algorithms that are well known and implement them carefully. And we hope that careful reading will lead you to be a better programmer in addition to learning something about prime numbers. Even so, our functions are genuinely useful for a variety of purposes beyond simple study. 1 The Sieve of Eratosthenes The method that is in common use today to make a list of the prime numbers less than a given input n was invented about two hundred years before Christ by Eratosthenes of Cyrene, who was an astronomer, geographer and mathematician, as well as the third chief librarian of Ptolemy’s Great Library at Alexandria; he calculated the distance from Earth to Sun, the tilt of the Earth’s axis, and the circumference of the Earth, and invented the leap day and a system of latitude and longitude. His method begins by making a list of all the numbers from 2 to the desired maximum prime number n. Then the method enters an iterative phase. At each step, the smallest uncrossed number that hasn’t yet been considered is identified, and all multiples of that number are crossed out; this is repeated until no uncrossed numbers remain unconsidered. All the remaining uncrossed numbers are prime. Thus, the first step crosses out all multiples of 2: 4, 6, 8, 10 and so on. At the second step, the smallest uncrossed number is 3, and multiples of 3 are crossed out: 6, 9, 12, 15 and so on; note that some numbers, such as 6, might be crossed out multiple times. At this point 4 has been crossed out, so the next smallest uncrossed number is 5, and its multiples 10, 15, 20, 25 and so on are also crossed out. The process continues until all uncrossed numbers have been considered. Thus, each prime is used to “sift” its multiples out of the original list, so that only primes are left in the sieve. Or you may prefer the ditty: Strike the twos and strike the threes, The Sieve of Eratosthenes! When the multiples sublime, The numbers that remain are prime. Although this is the basic algorithm, there are three optimizations that are routinely applied. First, since 2 is the only even prime, it is best to handle 2 separately and sieve only on odd numbers, reducing the size of the sieve by half. Second, instead of starting the crossing-out at the smallest multiple of the current sieving prime, it is possible to start at the square of the multiple, since all smaller numbers will have already been crossed out; we saw that in the sample when 6 was already crossed out as a multiple of 2 when we were crossing out multiples of 3. Third, as a consequence of the second optimization, sieving can stop as soon as the square of the sieving prime is greater than n, since there is nothing else to do. Here is a formal statement of the sieve of Eratosthenes: Algorithm 1. Sieve of Eratosthenes: Generate the primes not exceeding n > 1: 1. [Initialization.] Set m ← ⌊(n−1)/2⌋. Create a bitarray B[0..m−1] with each item set to TRUE. Set i ← 0. Set p ← 3. Output the prime 2. 2. [Sieving complete?] If n < p2, go to Step 5. 3. [Found prime?] If B[i] = FALSE, set i ← i + 1, set p ← p + 2, and go to Step 2. Otherwise, output the prime p and set j ← 2i2 + 6i + 3 (or j ← (p2 − 3) / 2). 4. [Sift on p.] If j < m, set B[j] ← FALSE, set j ← j + 2i + 3 (or j ← j + p) and go to Step 4. Otherwise, set i ← i + 1, set p ← p + 2, and go to Step 2. 5. [Terminate?] If i = m, stop. 6. [Report remaining primes.] If B[i] = TRUE, output the prime p. Then, regardless of the value of B[i], set i ← i + 1, set p ← p + 2, and go to Step 5. The calculation of j in Step 3 is interesting. Since the bitarray contains odd numbers starting from 3, an index i of the bitarray corresponds to the number p = 2i+3; for instance, the fifth item in the bitarray, at index 4, is 11. Sifting starts from the square of the current prime, so to sift the prime 11 at index 4 we start from 112 = 121 at index 59, calculated as (i−3)/2. Thus, to compute the starting index j, we calculate ((2i+3)2−3)/2, which with a little bit of algebra simplifies to the formula in Step 3. You may prefer the alternate calculation (p2−3) / 2 which exploits the identity 2i+3 = p. As an example, we show the calculation of the primes less than a hundred. The first iteration notes that 3 is the smallest uncrossed number, and crosses out every third number starting from 3 · 3: 9, 15, 21, 27, 33, 39, 45, 51, 57, 63, 69, 75, 81, 87, 93, 99. 3 5 7 911 13 1517 19 2123 25 2729 31 3335 37 3941 43 4547 49 5153 55 5759 61 6365 67 6971 73 7577 79 8183 85 8789 91 9395 97 99 Now 5 is the next smallest uncrossed number, so we cross out every fifth number starting from 5 · 5: 25, 35, 45, 55, 65, 75, 85, 95. Note that 45 and 75 were previously crossed out, so only six additional numbers are now crossed. 3 5 7 911 13 1517 19 2123 25 2729 31 33 3537 3941 43 4547 49 5153 55 5759 61 63 6567 6971 73 7577 79 8183 85 8789 91 93 9597 99 Now 7 is the next smallest uncrossed number, so we cross out every seventh number starting from 7 · 7: 49, 63, 77, 91. Note that 63 was previously crossed out, so only three additional numbers are crossed. 3 5 7 911 13 1517 19 2123 25 2729 31 33 3537 3941 43 4547 49 5153 55 5759 61 63 6567 6971 73 75 7779 8183 85 8789 91 93 9597 99 Now the next smallest uncrossed number is 11, but 11 · 11 = 121 is greater than 100, so sieving is complete. The complete list of primes, including 2 followed by the remaining uncrossed numbers, is: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, and 97. The algorithm outputs primes in Step 1 (the only even prime 2), Step 3 (the sieving primes), and Step 6 (sweeping up the primes that survived the sieve). The word “output” can mean anything. It is common to collect the primes in a list, but depending on your needs, you could count them, sum them, or use them in many other ways. The Sieve of Eratosthenes runs in time O(n log log n), and because the only operations in the inner loop (Step 4) are a single comparison, addition and crossing-out, it is very fast in practice. There are other ways to make lists of prime numbers. If memory is constrained, or if you want only the primes on a limited range from m to n, you may be interested in the segmented Sieve of Eratosthenes, which finds the primes in blocks; the sieving primes are those less than the square root of n, and the minimum multiple of each sieving prime in each segment is reset at the beginning of the next segment. A. O. L. Atkin, an IBM researcher, invented a sieve that is faster than the Sieve of Eratosthenes, as it crosses out multiples of the squares of the sieving primes after some precomputations. Paul Pritchard, an Australian mathematician, has developed several methods of sieving using wheels that have time complexity O(n / log log n), which is asymptotically faster than the Sieve of Eratosthenes, though in practice Pritchard’s sieves are somewhat slower since the bookkeeping in each step of the inner loop is more involved. Sometimes instead of the primes less than n you need the first x primes. The simplest method is to estimate the value of n using the Prime Number Theorem, first conjectured by Carl Friedrich Gauss in 1792 or 1793 (at the age of fifteen) and proved independently by Jacques Hadamard and Charles Jean de la Vallée-Poussin in 1896, which states that there are approximately x / loge x primes less than x; estimate the value of n that gives the desired value of x, add a buffer of 20% or thereabouts, compute the primes less than n, and throw away the excess. By the way, many people implement a function to list the prime numbers to a limit that they call the Sieve of Eratosthenes, but really isn’t; their functions use trial division instead. If you use a modulo operator, or division in any form, your algorithm is not the Sieve of Eratosthenes, and will run much slower than the algorithm described above. On a recent-vintage personal computer, a properly-implemented Sieve of Eratosthenes should be able to list the primes less than a million in less than a second. 2 Primality Testing by Trial Division We turn next to the problem of classifying a number n as prime or composite. The oldest method, and for nearly two thousand years the only method, was trial division. If n has no remainder when divided by 2, it is composite. Otherwise, if n has no remainder when divided by 3, it is composite. Otherwise, if n has no remainder when divided by 4, it is composite. Otherwise, …. And so on. Iteration stops, and the number is declared prime, when the trial divisor is greater than the square root of n. We can see that this is so because, if n = p · q, then one of p or q must be less than the square root of n while the other is greater than the square root of n (unless p and q are equal, and n is a perfect square). A simple optimization notes that if the number is even it is composite, and if the number is odd any factor must be odd, so it is necessary to divide only by the odd numbers greater than 2, not by the even numbers. Algorithm 2. Primality Testing by Trial Division: Determine if an integer n > 1 is prime or composite: 1. [Finished if even] If n is even, return COMPOSITE and stop. 2. [Initialize for odd] Set d ← 3. 3. [Terminate if prime] If d 2 > n, return PRIME and stop. 4. [Trial division] If n mod d = 0, return COMPOSITE and stop. 5. [Iterate] Set d ← d + 2. Go to Step 3. As an example, consider the number 100524249167. It is not even. Dividing by 3 gives a remainder of 2. Dividing by 5 gives a remainder of 2. Dividing by 7 gives a remainder of 7. Dividing by 9 gives a remainder of 5. Dividing by 11 gives a remainder of 1. Dividing by 13 gives a remainder of 4. Dividing by 15 gives a remainder of 2. Dividing by 17 gives a remainder of 8. Dividing by 19 gives a remainder of 3. Dividing by 21 gives a remainder of 20. Dividing by 23 gives a remainder of 23 and, in Step 4, demonstrates that 100524249167 = 23 · 127 · 239 · 311 · 463 is composite. There are other optimizations possible. If you have a list of prime numbers at hand, you can use only the primes as trial divisors and skip the composites, which makes things go quite a bit faster; this works because, if you’ve already tested its component primes, it is not possible for a composite to be a divisor. If you can’t afford the space to store a list of primes, you can use one of Pritchard’s wheels, just as with the sieve. Trial division iterates until it reaches either the smallest prime factor of a number or its square root. Most composites are identified fairly quickly; it’s the primes that take longer, as all the trial divisors fail one by one. In general, the time complexity of trial division is O(√n), and if n is large it will take a very long time. Thus, trial division is best limited to cases where n is small, less than a billion or thereabouts. If n is large, it is common to use probabilistic methods to distinguish primes from composites, as the next section will show. 3 Factorization by Trial Division The problem of breaking down a composite integer into its prime factors has fascinated mathematicians from the ancient Greeks to modern times. The ancient Greek mathematicians proved the Fundamental Theorem of Arithmetic that the factorization of a number is unique, ignoring the order of the factors. Modern mathematicians use the difficulty of the factorization process to provide cryptographic security for the internet. Factoring integers is a hard and honorable problem. Euclid gave the proof of the Fundamental Theorem of Arithmetic in his book Elements. We assume without proof that if p divides ab then either p divides a or p divides b; this assertion is known as Euclid’s Lemma and was proved in Elements, Book VII, Proposition 7. The proof is in two parts: first a proof that all positive integers greater than 1 can be written as the product of primes, and second a proof that the factorization is unique. Suppose that n is the smallest positive integer greater than 1 that cannot be writen as the product of primes. Now n cannot be prime because such a number is the product of a single prime, itself. Thus the composite is n = a · b, where both a and b are positive integers less than n. Since n is the smallest number that cannot be written as the product of primes, both a and b must be able to be written as the product of primes. But then n = a · b can be written as the product of primes simply by combining the factorizations of a and b. But that contradicts our supposition. Therefore, all positive integers greater than 1 can be written as the product of primes. Furthermore, that factorization is unique, ignoring the order in which the primes are written. Now suppose that s is the smallest positive integer greater than 1 that can be written as two different products of prime numbers, so that s = p1 · p2 · … · pm = q1 · q2 · … · qn. By Euclid’s lemma either p1 divides q1 or p1 divides q2 · … · qn. Therefore p1 = qk for some k. But removing p1 and qk from the initial equivalence leaves a smaller integer that can be factored in two ways, contradicting the initial supposition. Thus there can be no such s, and all integers greater than 1 have a unique factorization. — Euclid, Elements, Book IX, Proposition 14, circa 300 B.C. There are many algorithms for factoring integers, of which the simplest is trial division. Divide the number being factored by 2, then 3, then 4, and so on. When a factor divides evenly, record it, divide the original number by the factor, then continue the process until the remaining cofactor is prime. A simple optimization notes that once the factors of 2 have been removed from a number, it is odd, and all its factors will be odd, so it is only necessary to perform trial division by the odd numbers starting from 3, thus halving the work required to be done. A second optimization stops the trial division when the factor being tried exceeds the square root of the current cofactor, indicating that the current cofactor is prime and is thus the final factor of the original number. Here is a formal statement of the trial division factorization algorithm: Algorithm 3. Integer Factorization by Trial Division: Find the prime factors of a composite integer n > 1: 1. [Remove factors of 2] If n is even, output the factor 2, set n ← n ÷ 2, and go to Step 1. 2. [Initialize for odd factors] If n = 1, stop. Otherwise, set f ← 3. 3. [Terminate if prime] If n < f · f, output the factor n and stop. 4. [Trial division] Calculate the quotient q and remainder r when dividing n by f, so that n = q f + r with 0 ≤ r < f. 5. [Loop on odd integers] If r > 0, set f ← f + 2 and go to Step 3. Otherwise, output the factor f, set n ← q, and go to Step 3. As an example, we find the factors of n = 13195. Since 13195 is odd, Step 1 does nothing and we go to Step 2, where f = 3. Since 3 · 3 < 13195, we calculate q = 13195 ÷ 3 = 4398 and r = 1 in Step 4, so in Step 5 we set f = 3 + 2 = 5 and go to Step 3. Since 5 · 5 < 13195, we calculate q = 13195 ÷ 5 = 2639 and r = 0 in Step 4, so in Step 5 we output the factor 5, set n = 2639, and go to Step 3. Since 5 · 5 < 2639, we calculate q = 2639 ÷ 5 = 527 and r = 4 in Step 4, so in Step 5 we set f = 5 + 2 = 7 and go to Step 3. Since 7 · 7 < 2639, we calculate q = 2639 ÷ 7 = 377 and r = 0 in Step 4, so in Step 5 we output the factor 7, set n = 377, and go to Step 3. Since 7 · 7 < 377, we calculate q = 377 ÷ 7 = 53 and r = 6 in Step 4, so in Step 5 we set f = 7 + 2 = 9 and go to Step 3. Since 9 · 9 < 377, we calculate q = 377 ÷ 9 = 41 and r = 8 in Step 4, so in Step 5 we set f = 9 + 2 = 11 and go to Step 3. Since 11 · 11 < 377, we calculate q = 377 ÷ 11 = 34 and r = 3 in Step 4, so in Step 5 we set f = 11 + 2 = 13 and go to Step 3. Since 13 · 13 < 377, we calculate q = 377 ÷ 13 = 29 and r = 0 in Step 4, so in Step 5 we output the factor 13, set n = 29, and go to Step 3. Since 29 < 13 · 13, we output the factor 29 in Step 3 and stop. The complete factorization is 13195 = 5 · 7 · 13 · 29. Step 1 removes factors of 2 from n. If you like, you can extend that to other primes: remove factors of 3, then factors of 5, then factors of 7, and so on, never wasting the time to trial-divide by a composite. Of course, that requires you to pre-compute and store the primes up to the square root of n, which may be inconvenient. If you prefer, there is a method known as wheel factorization, akin to Pritchard’s sieving wheels, that achieves most of the benefits of trial division by primes but requires only a small, constant amount of extra space to store the wheel. The time complexity of trial division is O(√n), as all trial divisors up to the square root of the number being factored must potentially be tried. In practice, you will generally want to choose a bound on the maximum divisor you are willing to test, depending on the speed of your hardware and on your patience; generally speaking, the bound should be fairly small, perhaps somewhere between a thousand and a million. Then, if you have reached the bound without completing the factorization, you can turn to a different, more potent, method of integer factorization. 4 Miller-Rabin Pseudoprimality Checking As we saw above, trial division can be used to determine if a number n is prime or composite, but is very slow. Often it is sufficient to show that n is probably prime, which is a very much faster calculation; the French number theorist Henri Cohen calls numbers that pass a probable-prime test industrial-grade primes. The most common method of testing an industrial-grade prime is based on a theorem that dates to 1640. Pierre de Fermat was a French lawyer at the Parlement in Toulouse, a jurist, and an amateur mathematician who worked in number theory, probability, analytic geometry, differential calculus, and optics. Fermat’s Little Theorem states that if p is a prime number, then for any integer a it is true that ap ≡ a (mod p), which is frequently stated in the equivalent form ap−1 ≡ 1 (mod p) by dividing both sides of the congruence by a, assuming a ≠ 0. As was his habit, Fermat gave no proof for his theorem, which was proved by Gottfried Wilhelm von Leibniz forty years later and first published by Leonhard Euler in 1736. This formula gives us a way to distinguish primes from composites: if we can find an a for which Fermat’s Little Theorem fails, then p must be composite. But there is a problem. There are some numbers, known as Carmichael numbers, that are composite but pass Fermat’s test for all a; the smallest Carmichael number is 561 = 3 · 11 · 17, and the sequence begins 561, 1105, 1729, 2465, 2821, 6601, 8911, 10585, 15841, 29341, …. In 1976, Gary Lee Miller, a computer science professor at Carnegie Mellon University, developed an alternate test for which there are no strong liars, that is, numbers for which there is no a that distinguishes prime from composite. A strong pseudoprime to base a is an odd composite number n = d · 2s + 1 with d odd for which either ad ≡ 1 (mod n) or ad·2r ≡ −1 (mod n) for some r = 0, 1, …, s−1. This works because n = 2m + 1 is odd, so we can rewrite Fermat’s Little Theorem as a2m − 1 ≡ (am − 1)(am + 1) ≡ 0 (mod n). If n is prime, it must divide one of the factors, but can’t divide both because it would then divide their difference (am + 1) − (am − 1) = 2. Miller’s observation leads to the following algorithm: Algorithm 4.A. Strong-Pseudoprime Test: Determine if a on the range 1 < a < n is a witness to the compositeness of an odd integer n > 2. 1. [Initialize] Set d ← n − 1. Set s ← 0. 2. [Reduce while even] If d is even, set d ← d / 2, set s ← s + 1, and go to Step 2. 3. [Easy return?] Set t ← ad (mod n). If t = 1 or t = n − 1, output PROBABLY PRIME and stop. 4. [Terminate?] Set s ← s − 1. If s = 0, output COMPOSITE and stop. 5. [Square and test] Set t = t2 (mod n). If t = n − 1, output PROBABLY PRIME and stop. Otherwise, go to Step 4. The algorithm is stated differently than the math given above, though the result is the same. In the math, we calculated ad·2r, which is initially ad when r = 0, then a2d, then a4d, and so on; in other words, ad is squared at each step. Step 5 thus reduces the modular operation from exponentiation to multiplication; the strength reduction makes the code simpler and faster. As an example, consider the prime number 73 = 23 · 9 + 1; at the end of Step 2, d = 9 and s = 3. If the witness is 2, then 29 ≡ 1 (mod 73) and 73 is declared PROBABLY PRIME in Step 3. If the witness is 3, then 39 ≡ 46 (mod 73) and the test of Step 3 is indeterminate, but 32·9 ≡ 72 ≡ −1 (mod 73) in the second iteration of Step 5, and 73 is declared PROBABLY PRIME. On the other hand, the composite number 75 = 21 · 37 + 1 is declared COMPOSITE with witness 2 because 237 ≡ 47 (mod 75) in Step 3. One more example is the composite number 2047 = 23 · 89;, which is declared PROBABLY PRIME by the witness 2 but COMPOSITE by the witness 3; 2047 is the smallest number for which 2 is a strong liar to its compositeness. Miller proved that n must be prime if no a from 2 to 70 (loge n)2 is a witness to the compositeness of n; Eric Bach, a professor at the University of Wisconsin in Madison, later reduced the constant from 70 to 2. Unfortunately, the proof assumes the Riemann Hypothesis and can’t be relied upon because the Riemann Hypothesis remains unproven. However, Michael O. Rabin, an Israeli computer scientist and recipient of the Turing Award, used Miller’s strong pseudo-prime test to build a probabilistic primality test. Rabin proved that for any odd composite n, at least 3/4 of the bases a are witnesses to the compositeness of n; although that’s the proven lower bound, in practice the proportion is much higher than 3/4. Thus, the Miller-Rabin method performs k strong pseudo-prime tests, each with a different a, and if all the tests pass the method concludes that n is prime with probability at least 4−k, and in practice much higher; a common value of k is 25, which gives a maximum probability of 1 error in 1017. Algorithm 4.B. Miller-Rabin Pseudo-Primality Test: Determine if an odd integer n > 2 is probably prime by performing k strong pseudo-prime tests: 1. [Terminate?] If k = 0, output PROBABLY PRIME and stop. 2. [Strong pseudo-prime test] Choose a random number a such that 1 < a < n. Perform a strong pseudo-prime test using Algorithm 4.A. to determine if a is a witness to the compositeness of n. 3. [Pseudo-prime?] If the strong pseudo-prime test indicates a is a witness to the compositeness of n, output COMPOSITE and stop. Otherwise, set k ← k − 1 and go to Step 1. Although the algorithm given above specifies random numbers for the bases of the strong pseudo-prime test, it is common to fix the bases in advance, based on the value of n. If n is a 32-bit integer, it is sufficient to test on the three bases 2, 7, and 61; all the odd numbers less than 232 have been tested and no errors in the determination of primality exist. If n is less than a trillion, it is sufficient to test to the bases 2, 13, 23, and 1662803. Gerhard Jaesche used the first seven primes as bases and determined that the first false positive is 341550071728321. And Zhenxiang Zhang plausibly conjectures that there are no errors less than 1036 when using the first twenty primes as bases. As an example, we determine the primality of 2149−1. Algorithm 4.A returns PROBABLY PRIME for witness 2 but COMPOSITE for witness 3, so 2149−1 = 86656268566282183151 · 8235109336690846723986161 is composite. That determination would be impossible for trial division, at least in any reasonable time frame, as the Prime Number Theorem suggests there are approximately 1.9 · 1018 primes to be tested. Step 5 of Algorithm 4.A. requires modular exponentiation. Some languages provide modular exponentiation as a built-in function, but others don’t. If your language doesn’t, you will have to write your own function. You should not write your function by first performing the exponentiation and then performing the modulo operation, as the intermediate result of the exponentiation can be very large. Instead, use the square-and-multiply algorithm. Algorithm 4.C. Modular Exponentiation: Compute be (mod m) with b, e and m all positive integers by the square-and-multiply algorithm: 1. [Initialize] Set r ← 1. 2. [Terminate when finished] If e = 0, return r and stop. 3. [Multiply if odd] If e is odd, set r ← r · b (mod n). 4. [Square and iterate] Set e ← ⌊e / 2⌋. Set b ← b2 (mod n). Go to Step 2. Consider the calculation 43713 (mod 1741) = 819. Initially b = 437, e = 13, r = 1 and the test in Step 2 fails. Since e is odd, r = 1 · 437 = 437 in Step 3, then e = 13 / 2 = 6 and b = 4372 (mod 1741) = 1200 in Step 4 and the test in Step 2 fails. Since e is even, r = 437 is unchanged in Step 3, then e = 6 / 2 = 3 and b = 12002 (mod 1741) = 193 in Step 4 and the test in Step 2 fails. Since e is odd, r = 437 · 193 (mod 1741) = 773 in Step 3, then e = 3 / 2 = 1 and b = 1932 (mod 1741) = 688 in Step 4 and the test in Step 2 fails. Since e is odd, r = 773 · 688 (mod 1741) = 819 in Step 3, then e = 1 / 2 = 0 and b = 6882 (mod 1741) = 1533 in Step 4. At this point the test in Step 2 succeeds and the result r = 819 is returned. By the way, the intermediate calculation results in the very large number 43713 = 21196232792890476235164446315006597, so you can see why Algorithm 4.C is preferable. The time complexity of the Miller-Rabin primality checker is O(1), which is vastly better than trial division. The time for the strong pseudo-prime test depends on the number of factors of 2 found in n − 1, which is independent of n. Likewise, the number k of strong pseudo-prime tests is independent of n, so it contributes to the implied constant, not to the overall order. If n is large, the arithmetic takes time O(log log n), but we ignore that in our analysis. There are other methods for quickly checking the primality of a number, including the Baillie-Wagstaff method that combines a strong pseudoprime test base 2 with a Lucas pseudoprime test and the method of Mathematica that adds a strong pseudoprime test base 3 to the Baillie-Wagstaff method; both methods are faster than the Miller-Rabin method, and also give fewer false positives. If a slight chance of error is too much for you, and you need to prove the primality of a number, you can use the trial division of Algorithm 2, there is a method of Pocklington that uses the factorization of n − 1, a method using Jacobi sums (the APR-CL method), a method using elliptic curves due to Atkin and Morain, and the new AKS method, which operates in proven polynomial time but is not yet practical. 5 Factorization by Pollard’s Rho Method In 1975, British mathematician John Pollard invented a method of integer factorization that finds factors in time O(n1/4), which is the square root of the O(√n) time complexity of trial division. The method is simple to program and takes only a small amount of auxiliary space. Before we explain Pollard’s algorithm, we discuss two elements of mathematics on which it relies, the Chinese Remainder Theorem and the birthday paradox. The original version of the Chinese Remainder Theorem was stated in the third-century by the Chinese mathematician Sun Zi in his book Sun Zi suanjing (literally, “The Mathematical Classic of Sun Zi”), and was proved by the Indian mathematician Aryabhata in the sixth century: Let r and s be positive integers which are relatively prime and let a and b be any two integers. Then there exists an integer n such that n ≡ a (mod r) and n ≡ b (mod s). Furthermore, n is unique modulo the product r · s. Sun Zi gave an example: When a number is divided by 3, the remainder is 2. When the same number is divided by 5 the remainder is 3. And when the same number is divided by 7, the remainder is 2. The smallest number that satisfies all three criteria is 23, which you can verify easily. And since the least common multiple of 3, 5, and 7 is 105, any number of the form 23 + 105k, from the arithmetic progression 23, 128, 233, 338, …, is also a solution. Sun Zi used this method to count the men in his emperor’s armies; arrange them in columns of 11, then 12, then 13, take the remainder at each step, and calculate the number of soldiers. In the theory of probability, the “birthday paradox” calculates the likelihood that in a group of p people two of them will have the same birthday. Obviously, in a group of 367 people the probability is 100%, since there are only 366 possible birthdays. What is surprising is that there is a 99% probability of a matching pair in a group as small as 57 people and a 50% probability of a matching pair in a group as small as 23 people. If, instead of birthdays, we consider integers modulo n, there is a 50% probability that two integers are congruent modulo n in a group of 1.177 √n integers. Pollard’s rho algorithm uses the quadratic congruential random-number generator x2 + c (mod n) with c ∉ {0, −2} to generate a series of random integers xk. By the Chinese Remainder Theorem, if n = p · q, then x (mod n) corresponds uniquely to the pair of integers x (mod p) and x (mod q). Furthermore, the xk sequence also follows the Chinese Remainder Theorem, so that xk+1 = [xk (mod p)]2 + c (mod p) and xk+1 = [xk (mod q)]2 + c (mod q) so that the sequence of xk falls into a much shorter cycle of length √p by the birthday paradox. Thus p is identified when xk and xk+1 are congruent modulo p, which can be determined when gcd(|xk − xk+1|, n) = p is between 1 and n. Depending on the values of p, q and c, it is possible that the random-number generator may reach a cycle before a factor is found. Thus, Pollard used Robert Floyd’s tortoise-and-hare cycle-detection method. The sequence of xs starts with two values the same, call them t and h. Then each time t is incremented, h is incremented twice; the hare runs twice as fast as the tortoise. If the hare reaches the tortoise, that is, t = h (mod n), before a factor is found, then a cycle has been reached and further work is pointless. At that point, either the factorization attempt can be abandoned or a new random-number generator can be tried by using a different c. Pollard called his method “Monte Carlo factorization” because of the use of random numbers. The algorithm is now called the rho algorithm because the sequence of x values has an initial tail followed by a cycle, giving it the shape of the Greek letter rho ρ. Fortunately the algorithm is much simpler than the explanation. Algorithm 5.A. Pollard’s Rho Method: Find a factor of an odd composite integer n > 1: 1. [Initialization] Set t ← 2, h ← 2, and c ← 1. Define the function f(x) = x2 + c (mod n). 2. [Iteration] Set t ← f(t), h ← f(f(h)), and d ← gcd(t−h, n). If d = 1, go to Step 2. 3. [Termination] If d < n, output d and stop. Otherwise, either stop with failure or continue by setting t ← 2, h ← 2, and c ← c + 1, redefining the function f(x) using the new value of c and going to Step 2. As an example, we consider the factorization of 8051. Initially, t=2, h=2, and c=1. After one iteration of Step 2, t=22+1=5, h=(22+1)2+1=26, and d=gcd(5−26,8051)=1. After the second iteration of Step 2, t=52+1=26, h=(262+1)2+1=7474, and d=gcd(26−7474,8051)=1. After the third iteration of Step 2, t=262+1=677, h=(74742+1)2+1=871 (mod 8051), and d=gcd(677-871,8051)=97, which is a factor of 8051; the complete factorization is 8051=83·97. Be sure before you begin that n is composite; if n is prime, then d will always be 1 (because if n is prime it is always coprime to every other number) and the algorithm will loop forever. As with trial division, it is probably wise to set some bound on the maximum number of steps you are willing to take in the iteration of Step 2, because large factors can take a long time to find using this algorithm. You should also be careful not to let c be 0 or -2, because in those cases the random numbers aren’t very random. Note that the factor found in Step 3 may not be prime, in which case you can apply the algorithm again to the reduced factor, using a different c. And of course, once you have one factor, you can continue by factoring the remaining cofactor. Algorithm 5.B gives the complete factorization of a number. Algorithm 5.B. Integer Factorization by Pollard’s Rho Method: Find all the prime factors of a composite integer n greater than 1: 1. [Remove factors of 2] If n is even, output the factor 2, set n ← n ÷ 2, and go to Step 1. 2. [Terminate if prime] If n is prime by the method of Algorithm 2.B, output the factor n and stop. 3. [Find a factor] Use Algorithm 5.A to find a factor of n and call it f. Output the factor f, set n ← n ÷ f, and go to Step 2. There are two ways in which Pollard’s algorithm can be improved. First, it should bother you that each number in the random sequence is computed twice; it bothered the Australian mathematician Richard Brent, who devised a cycle-finding algorithm based on powers of 2 that computes each number in the random sequence only once, and it is Brent’s variant that is most often used today. A second improvement notes that for any a, b, and n, gcd(ab,n) > 1 if and only if at least one of gcd(a,n) > 1 or gcd(b,n) > 1, and accumulates the products of the elements of the t and h sequences for several steps (for large n, 100 steps is common) before computing the gcd, thus saving much time; if the gcd is n, then it is possible either that a cycle has been found or that two factors were found since the last gcd, in which case it is necessary to return to values saved from the previous gcd calculation and iterate one step at a time. The time complexity of Pollard’s rho algorithm depends on the unknown factor d. By the birthday paradox, in the average case it will take 1.177 √d steps to find the factor, or O(√d). Thus, if n is the product of two primes, it will take O(n 1/4) to perform the factorization, assuming the two primes are roughly the same size. In other words, a million iterations of trial division will find factors up to a million, while a million iterations of Pollard’s rho method will find factors up to a trillion; that’s why you want to switch from trial division to Pollard’s rho method at a fairly low bound. 6 Going Further Although there is more to programming with prime numbers, we will stop here, since our small library has fulfilled our modest goals. The five appendices give implementations in C, Haskell, Java, Python, and Scheme, and the savvy reader will study all of them, because while they all implement exactly the same algorithms, each does so in a different way, and the differences are enlightening, about both the algorithms and the languages. The C appendix describes the tasteful use of the GMP multi-precision number library. The Haskell and Scheme appendices describe some of the syntax and semantics of those languages, on the assumption that they are unfamiliar to many readers. The Java appendix is the most faithful of all the appendices to the exact structure of the algorithms, including error-checking on the inputs as described in the preambles of each of the algorithms. The Python and Scheme appendices are the most “real-world” implementations, as they include error-checking on the inputs, bounds-checking to stop calculations that take too long, and even a non-mathematical but highly useful extension of the domain of the factoring functions. Although our goals were modest, we have accomplished much. It’s hard to improve on the Sieve of Eratosthenes, and the Miller-Rabin primality checker will handle inputs of virtually unlimited size. The rho algorithm will find most factors up to a dozen digits or more, regardless of the size of the number being factored. If Pollard’s rho algorithm won’t crack your composite, there are more powerful algorithms available, though they are beyond our modest aspirations. The elliptic curve method will find factors up to about thirty or forty digits (even fifty or sixty digits if you are patient). The quadratic sieve will split semi-primes up to about 90 digits on a single personal computer or 120 digits on a modest network of personal computers, and the number field sieve will split semi-primes up to about 200 digits on that same network. At the time this of this writing, the current record factorization is 231 decimal digits (768 bits), which took a team of experts about 2000 PC-years, and about eight months of calendar time, on a “network” of computers around the world connected via email. If your goal isn’t self-study and you really want to factor a large number, and the rho technique fails, you have several options. A good first step is Dario Alpern’s factorization applet at. Paul-Zimmermann’s gmp-ecm program at uses a combination of trial division, Pollard’s rho algorithm, another algorithm of Pollard known as p−1, and Hendrik Lenstra’s elliptic curve method to find factors. Jason Papadopoulos’ msieve program at uses both the quadratic sieve of Carl Pomerance and the number field sieve of John Pollard. There is much more to prime numbers and integer factorization than we have discussed here; for instance, there are methods other than trial division for proving the primality of large numbers (several hundred digits) and methods other than enumeration with a sieve for counting the primes less than a given input number. At the end of each algorithm above was a discussion of alternatives; the interested reader will find that web searches for the topics mentioned will be fruitful and interesting. A superb reference for programmers is the book Prime Numbers: A Computational Perspective by Richard Crandall and Carl B. Pomerance (be sure to look for the second edition, which includes discussion of the new AKS primality prover); beware that although the approach is computational, there is still heavy mathematical content in the book. You may also be interested in the Programming Praxis web site at, which has many exercises on the theme of prime numbers. Appendix: C C is a small language with limited data types; integers are limited to what the underlying hardware provides, there are no lists, and there are no bitarrays, so we have to depend on libraries to provide those things for us. Since we are interested in prime numbers and not lists or bitarrays, we will write the smallest libraries that are necessary to get a working program. We begin with bitarrays, which are represented as arrays of characters in which we set and clear individual bits using macros: #define ISBITSET(x, i) (( x[i>>3] & (1<<(i&7)) ) != 0) #define SETBIT(x, i) x[i>>3] |= (1<<(i&7)); #define CLEARBIT(x, i) x[i>>3] &= (1<<(i&7)) ^ 0xFF; To declare a bitarray b of length 8n, say char b[n], and to initialize each bit to 0, say memset(b, 0, sizeof(b)); change the 0 to 255 to set each bit initially to 1. Here is our minimal list library: typedef struct list { void *data; struct list *next; } List; List *insert(void *data, List *next) { List *new; new = malloc(sizeof(List)); new->data = data; new->next = next; return new; } List *insert_in_order(void *x, List *xs) { if (xs == NULL || mpz_cmp(x, xs->data) < 0) { return insert(x, xs); } else { List *head = xs; while (xs->next != NULL && mpz_cmp(x, xs->next->data) > 0) { xs = xs->next; } xs->next = insert(x, xs->next); return head; } } List *reverse(List *list) { List *new = NULL; List *next; while (list != NULL) { next = list->next; list->next = new; new = list; list = next; } return new; } int length(List *xs) { int len = 0; while (xs != NULL) { len += 1; xs = xs->next; } return len; } Lists are represented as structs of two members; the empty list is represented as NULL. The trickiest function is reverse, which operates in-place to make each list item point to its predecessor. The list functions leave all notion of memory management to the caller. With that out of the way, we are ready to begin work on the prime number functions. Our version of the Sieve of Eratosthenes uses b for the bitarray, i indexes into the bitarray, and p = 2i + 3 is the number represented at location i of the bitarray. The first while loop identifies the sieving primes and performs the sieving in an inner while, and the third while loop sweeps up the remaining primes that survive the sieve. List *primes(long n) { int m = (n-1) / 2; char b[m/8+1]; int i = 0; int p = 3; List *ps = NULL; int j; ps = insert((void *) 2, ps); memset(b, 255, sizeof(b)); while (p*p < n) { if (ISBITSET(b,i)) { ps = insert((void *) p, ps); j = (p*p - 3) / 2; while (j < m) { CLEARBIT(b, j); j += p; } } i += 1; p += 2; } while (i < m) { if (ISBITSET(b,i)) { ps = insert((void *) p, ps); } i += 1; p += 2; } return reverse(ps); } We look next at the two algorithms that use trial division; we’ll look at them together because they are so similar. We used long integers for the Sieve of Eratosthenes because they are almost certainly big enough, but we will use long long unsigned integers for the two trial division functions because that extends the range of the inputs that we can consider. The function that tests primality using trial division uses an if to identify even numbers, then a while ranges over the odd numbers d from 3 to the square root of n. int td_prime(long long unsigned n) { if (n % 2 == 0) { return n == 2; } long long unsigned d = 3; while (d*d <= n) { if (n % d == 0) { return 0; } d += 2; } return 1; } The td_factors function is very similar to the td_prime function. The initial if becomes a while, because we no longer want to quit as soon as we find a single factor, and the body of the second while also changes so that it collects all the factors instead of quitting as soon as it finds a single factor; the factors are stacked in increasing order as they are discovered, hence the reversal. The list of factors, which will contain only the original input n if it is prime, is returned as the value of the function. List *td_factors(long long unsigned n) { List *fs = NULL; while (n % 2 == 0) { fs = insert((void *) 2, fs); n /= 2; } if (n == 1) { return reverse(fs); } long long unsigned f = 3; while (f*f <= n) { if (n % f == 0) { fs = insert((void *) f, fs); n /= f; } else { f += 2; } } fs = insert((void *) n, fs); return reverse(fs); } We used long integers for the Sieve of Eratosthenes and long long unsigned integers for the two trial-division algorithms. Those native integer types are sufficient for those applications; usually only small n are required for the Sieve of Eratosthenes, and trial division is just too slow for large n. But many applications require much larger numbers, so we need a big-integer library, and we choose the GMP library from GNU, which is well-known for its useful interface and fast, bug-free implementation. You can obtain GMP from gmplib.org; to use it in your program, include the line #include <gmp.h> at the top of your program, and link with the option -lgmp. We look next at Gary Miller’s strong pseudo-prime test. The first while computes d and s, then the if checks for an early return, the second while computes and tests the powers of a, and the default return is composite. int is_spsp(mpz_t n, mpz_t a) { mpz_t d, n1, t; mpz_inits(d, n1, t, NULL); mpz_sub_ui(n1, n, 1); mpz_set(d, n1); int s = 0; while (mpz_even_p(d)) { mpz_divexact_ui(d, d, 2); s += 1; } mpz_powm(t, a, d, n); if (mpz_cmp_ui(t, 1) == 0 || mpz_cmp(t, n1) == 0) { mpz_clears(d, n1, t, NULL); return 1; } while (--s > 0) { mpz_mul(t, t, t); mpz_mod(t, t, n); if (mpz_cmp(t, n1) == 0) { mpz_clears(d, n1, t, NULL); return 1; } } mpz_clears(d, n1, t, NULL); return 0; } Let’s take a moment for a quick lesson in GMP. The datatype of big integers is given by mpz_t, where the mp is for multi-precision, z is for integer (from the German word Zahlen, for number), and _t is to indicate a type variable. All mpz_t variables must be initialized and cleared; in exchange for this effort, GMP takes care of all memory management automatically. The basic operations are given as mpz_add, mpz_mul, mpz_powm and the like, and they all return void, with the result given in the first argument (by analogy to an assignment, which puts the result on the left); the various division operators have their own naming conventions. Most of the operators have _ui variants in which the second operand (third argument) is a long unsigned integer instead of an mpz_t integer. Comparisons take two values and return a negative integer if the first is less than the second, a positive integer is the first is greater than the second, and 0 if the two are equal. To determine whether a given integer is prime or composite we use 25 random integers, saving the GMP random state in a static variable that persists from one call of the function to the next. The algorithm expects an integer greater than 2, so that is our first test. Then we check that n is odd, and additionally that it is not divisible by 3, 5 or 7; those tests aren’t strictly part of the algorithm, but they eliminate about three-quarters of all positive integers, and if they determine the compositeness of n, they are much cheaper than the full Miller-Rabin test. Finally, if we don’t yet have an answer, we proceed with the full algorithm with k counting down to 0. int is_prime(mpz_t n) { static gmp_randstate_t gmpRandState; static int is_seeded = 0; if (! is_seeded) { gmp_randinit_default(gmpRandState); gmp_randseed_ui(gmpRandState, time(NULL)); is_seeded = 1; } mpz_t a, n3, t; mpz_inits(a, n3, t, NULL); mpz_sub_ui(n3, n, 3); int i; int k = 25; long unsigned ps[] = { 2, 3, 5, 7 }; if (mpz_cmp_ui(n, 2) < 0) { mpz_clears(a, n3, t, NULL); return 0; } for (i = 0; i < sizeof(ps) / sizeof(long unsigned); i++) { mpz_mod_ui(t, n, ps[i]); if (mpz_cmp_ui(t, 0) == 0) { mpz_clears(a, n3, t, NULL); return mpz_cmp_ui(n, ps[i]) == 0; } } while (k > 0) { mpz_urandomm(a, gmpRandState, n3); mpz_add_ui(a, a, 2); if (! is_spsp(n, a)) { mpz_clears(a, n3, t, NULL); return 0; } k -= 1; } mpz_clears(a, n3, t, NULL); return 1; } The default GMP random number generator is the Mersenne Twister, which has good randomness properties and a very long period; we initialize the internal state of the random number generator with the current time (number of seconds since the epoch). The function mpz_urandomm returns in its first argument a uniformly-distributed pseudo-random non-negative integer less than its third argument, using and resetting the internal state of the random number generator in its second argument. GMP provides our is_prime function under the name mpz_probab_prime_p, but we give our own implementation anyway, so you can see how it is done. There are two functions that implement integer factorization by pollard rho: rho_factor finds a single factor, and rho_factors performs the complete factorization. The rho_factor function assumes that n is odd and composite; t is the tortoise, h is the hare, d is the greatest common divisor, and r is a temporary working variable holding the difference between t and h. The function keeps cycling until it finds a prime factor, calling itself recursively with the next greater c if it reaches a cycle or finds a composite factor. void rho_factor(mpz_t f, mpz_t n, long long unsigned c) { mpz_t t, h, d, r; mpz_init_set_ui(t, 2); mpz_init_set_ui(h, 2); mpz_init_set_ui(d, 1); mpz_init_set_ui(r, 0); while (mpz_cmp_si(d, 1) == 0) { mpz_mul(t, t, t); mpz_add_ui(t, t, c); mpz_mod(t, t, n); mpz_mul(h, h, h); mpz_add_ui(h, h, c); mpz_mod(h, h, n); mpz_mul(h, h, h); mpz_add_ui(h, h, c); mpz_mod(h, h, n); mpz_sub(r, t, h); mpz_gcd(d, r, n); } if (mpz_cmp(d, n) == 0) /* cycle */ { rho_factor(f, n, c+1); } else if (mpz_probab_prime_p(d, 25)) /* success */ { mpz_set(f, d); } else /* found composite factor */ { rho_factor(f, d, c+1); } mpz_clears(t, h, d, r, NULL); } The rho-factors function extracts factors of 2 in the first while, then calls rho-factor repeatedly in the second while until the remaining cofactor is prime. Rho-factors returns the list of factors in its first argument, like all the GMP functions. void rho_factors(List **fs, mpz_t n) { while (mpz_even_p(n)) { mpz_t *f = malloc(sizeof(*f)); mpz_init_set_ui(*f, 2); *fs = insert(*f, *fs); mpz_divexact_ui(n, n, 2); } if (mpz_cmp_ui(n, 1) == 0) return; while (! (mpz_probab_prime_p(n, 25))) { mpz_t *f = malloc(sizeof(*f)); mpz_init_set_ui(*f, 0); rho_factor(*f, n, 1); *fs = insert_in_order(*f, *fs); mpz_divexact(n, n, *f); } *fs = insert_in_order(n, *fs); } We demonstrate the functions shown above with this main function: int main(int argc, char *argv[]) { mpz_t n; mpz_init(n); List *ps = NULL; List *fs = NULL; ps = primes(100); /* 2 3 5 7 11 13 17 19 23 29 31 37 41 */ while (ps != NULL) /* 43 47 53 59 61 67 71 73 79 83 89 97 */ { printf("%ld%s", (long) ps->data, (ps->next == NULL) ? "\n" : " "); ps = ps->next; } printf("%d\n", length(primes(1000000))); /* 78498 */ printf("%d\n", td_prime(600851475143LL)); /* composite */ fs = td_factors(600851475143LL); /* 71 839 1471 6857 */ while (fs != NULL) { printf("%llu%s", (unsigned long long int) fs->data, (fs->next == NULL) ? "\n" : " "); fs = fs->next; } mpz_t a; mpz_init(a); mpz_set_str(n, "2047", 10); mpz_set_str(a, "2", 10); printf("%d\n", is_spsp(n, a)); /* pseudo-prime */ mpz_set_str(n, "600851475143", 10); /* composite */ printf("%d\n", is_prime(n)); mpz_set_str(n, "2305843009213693951", 10); /* prime */ printf("%d\n", is_prime(n)); mpz_set_str(n, "600851475143", 10); rho_factors(&fs, n); /* 71 839 1471 6857 */ while (fs != NULL) { printf("%s%s", mpz_get_str(NULL, 10, fs->data), (fs->next == NULL) ? "\n" : " "); fs = fs->next; } } To compile the program, say gcc prime.c -lgmp -o prime , and to run the program say ./prime. If you get any warnings about the cast to void * you can safely ignore them, as it is always permissible to cast to void. Here is the output from the program: 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 78498 0 71 839 1471 6857 1 0 1 71 839 1471 6857 You can see the program assembled at, but you won’t be able to run it because the environment doesn’t provide the GMP library. An abbreviated version of the program that provides only the Sieve of Eratosthenes and the two trial-division algorithms that use native integers is available at. Appendix: Haskell Haskell is the classic purely functional language, far different from C. We begin our look at Haskell by examining a function that is frequently cited as the Sieve of Eratosthenes. Many texts include a definition something like this: primes = sieve [2..] sieve (p:xs) p : sieve [x | x <- xs, x `mod` p > 0] But this is not the Sieve of Eratosthenes because it is based on division (the mod operator) rather than addition. It will quickly become slow as the primes grow larger; try, for instance, to extract a list of the primes less than a hundred thousand. Unfortunately, a proper implementation of the Sieve of Eratosthenes is a little bit ugly, since Haskell and arrays don’t easily mix. Most Haskell programs begin by importing various functions from Haskell’s Standard Libraries, and ours is no exception; all imports must appear before any executable code. ST is the Haskell state-transformer monad, which provides mutable data structures, including Control.Monad.ST and Data.Array.ST. Control.Monad provides imperative-style for and when, and Data.Array.Unboxed provides arrays that store data directly, rather than with a pointer, as long as the data is a suitable type; we will be using arrays of booleans, which are suitable. Finally, Data.List provides a sort function for use by Pollard’s rho algorithm. import Control.Monad (forM_, when) import Control.Monad.ST import Data.Array.ST import Data.Array.Unboxed import Data.List (sort) The Sieve of Eratosthenes is implemented using exactly the same algorithm as all the other languages, though it looks somewhat foreign to imperative-trained eyes. Functions in Haskell optionally begin with a declaration of the type of the function, and we will include one in each of our functions. Thus, sieve :: Int -> UArray Int Bool declares an object named sieve that has type (the double colon) that is a function (the arrow) that takes a value of type Int and returns a value of type UArray Int Bool. Int is some fixed-size integer based on the native machine type. UArray is an unboxed array; Int is the type of its indices and Bool is the type of its values. Note that typenames always begin with a capital letter, as opposed to simple variables or function names that begin with a lower-case letter. The body of the sieve function is fairly atypical of Haskell code, due to its use of arrays. The first line, runSTUArray $ do (that parses as run, ST for the state-transformer monad, UArray for the unboxed array), sets up the array processing; the array is initialized with indices from 0 to m−1, with all m values True, and assigned to variable bits. An expression like forM_ [0..x] $ \i do would be rendered in C as for (i=0; i<=x; i++); the expression [0 .. x] expands to 0, 1, … x, and is evaluated lazily, as if by a list generator, so the whole list is never reified all at once. Functions readArray and writeArray fetch and store elements of an array. Variable isPrime is assigned either True or False, depending on the value of the element of the bits array with value i. In the inner loop, iteration starts at 2*i*i+6*i+3, and each iteration steps by 2*i+3, which is the difference between the first and second elements of the list, continuing until j is greater than m−1. sieve :: Int -> UArray Int Bool sieve n = runSTUArray $ do let m = (n-1) `div` 2 r = floor . sqrt $ fromIntegral n bits <- newArray (0, m-1) True forM_ [0 .. r `div` 2 - 1] $ \i -> do isPrime <- readArray bits i when isPrime $ do forM_ [2*i*i+6*i+3, 2*i*i+8*i+6 .. (m-1)] $ \j -> do writeArray bits j False return bits The primes function is simple; assocs collects the elements of the bits in order, paired with their index, and those that are True are included in the out-going list of primes. The type signature indicates that the function takes an Int and returns a list of Int values, as indicated by the square brackets. The overall structure of the function is a list which has 2 as its head joined by a colon : to a list comprehension between square brackets [ … ]. The list comprehension has two parts. The expression 2*i+3 before the vertical bar | defines the elements of the output list. The generator after the bar assigns to the tuple all of the elements returned by the assocs function and keeps only those where the second element of the tuple is True, binding the first element of the tuple to the variable i that is used in the result expression. primes :: Int -> [Int] primes n = 2 : [2*i+3 | (i, True) <- assocs $ sieve n] It is simple to test primality by trial division because Haskell offers a simple way of generating the list of 2 followed by odd numbers. The colon operator, pronounced “cons,” is the list constructor. The expression [3,5..] is a list constructor (anything surrounded by square brackets is a list) with 3 as its first element, 5 as its second element, and so on in an arithmetic progression that increases by 5 − 3 = 2 at each step. The .. operator at the end of the list expression signifies that the list goes on forever; if there is a value after the .. operator, that is the ending value included in list. tdPrime :: Int -> Bool tdPrime n = prime (2:[3,5..]) where prime (d:ds) | n < d * d = True | n `mod` d == 0 = False | otherwise = prime ds Factorization by trial division is expressed more simply in Haskell than in the other languages because Haskell provides an easy way to build the list of trial divisors: the expression 2:[3,5..] is an infinite list generator that returns 2 followed by the odd integers. The guard expressions of the local facts function makes tdFactors very easy to read. Note that we used a where clause, but could equally have used a let … in; in this case the choice is a matter of personal preference, though there are other situations where one or the other is required. tdFactors :: Int -> [Int] tdFactors n = facts n (2:[3,5..]) where facts n (f:fs) | n < f * f = [n] | n `mod` f == 0 = f : facts (n `div` f) (f:fs) | otherwise = facts n fs As in C, we have gone as far as we can using native integers, and we’ll switch at this point to big integers; note that Haskell has no long integers, so it’s more restrictive than C. The switch is simpler for Haskell than for C, since big integers are provided directly in the language, in the Integer datatype. For the Miller-Rabin primality test, we first need to write the function to perform modular exponentiation, since Haskell doesn’t provide one in any of its standard libraries: powmod :: Integer -> Integer -> Integer -> Integer powmod b e m = let times p q = (p*q) `mod` m pow b e x | e == 0 = x | even e = pow (times b b) (e `div` 2) x | otherwise = pow (times b b) (e `div` 2) (times b x) in pow b e 1 This function is rather more typical of Haskell than the sieve function that used arrays. The signature indicates that the function takes three Integer values and returns an Integer value. The function is written as if it is three functions because all functions in Haskell are curried, so powmod is actually a function that takes an integer b and returns a function that takes an integer e that returns a function that takes an integer m and returns an integer; thus, it is only a colloquialism, and frankly wrong, to say that powmod is a function that takes three integers and returns an integer. The let … in … defines local values. Local function times performs modular multiplication mod m. Local function pow has three definition, each with a guard (the predicate between the vertical bar | and the equal sign =); the expression corresponding to the first matching guard predicate is calculated and returned as the value of the function. Here, the first guard expression checks for termination and the other two expressions call the pow function recursively. The strong pseudo-prime test is implemented by three local functions in the isSpsp function: getDandS extracts the powers of 2 from n−1, spsp takes the tuple returned by getDandS as input and performs the easy-return test, and doSpsp computes and tests the powers of a. Note that mod and div are curried prefix functions; the back-quotes turn them into binary infix functions. Note also that if is an expression in Haskell, as opposed to a statement in imperative languages, which means that it returns a value instead of controlling program flow; thus there may be no else-less if, and both consequents of the if must have the same type. isSpsp :: Integer -> Integer -> Bool isSpsp n a = let getDandS d s = if even d then getDandS (d `div` 2) (s+1) else (d, s) spsp (d, s) = let t = powmod a d n in if t == 1 then True else doSpsp t s doSpsp t s | s == 0 = False | t == (n-1) = True | otherwise = doSpsp ((t*t) `mod` n) (s-1) in spsp $ getDandS (n-1) 0 Haskell makes it difficult to work with random numbers (it’s possible, though inconvenient, in the same way that arrays were inconvenient in the Sieve of Eratosthenes) because they require a state to be maintained from one call to the next, so we use the primes less than a hundred as the bases for the Miller-Rabin primality test. isPrime :: Integer -> Bool isPrime n = let ps = [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97] in n `elem` ps || all (isSpsp n) ps The rhoFactor function finds a single factor by the rho method. Function f is the random-number generator. Function fact implements the tortoise-and-hare loop recursively; the computations in the where clause are done prior to the guard expressions in the body of the function. The rhoFactor function is called recursively if the loop falls into a cycle (the d == n clause) or finds a composite factor (the otherwise clause). rhoFactor :: Integer -> Integer -> Integer rhoFactor n c = let f x = (x*x+c) `mod` n fact t h | d == 1 = fact t' h' | d == n = rhoFactor n (c+1) | isPrime d = d | otherwise = rhoFactor d (c+1) where t' = f t h' = f (f h) d = gcd (t' - h') n in fact 2 2 Function rhoFactors calls rhoFactor repeatedly until it completes the factorization of n. The first two clauses extract factors of 2, the third clause tests primality of a remaining cofactor, and the fourth clause adjusts n and the list of factors after calling rhoFactor. Note that we solved the problem of factoring a perfect power of 2 differently than in the C version of the program, stopping as soon as n is reduced to 2. rhoFactors :: Integer -> [Integer] rhoFactors n = let facts n | n == 2 = [2] | even n = 2 : facts (n `div` 2) | isPrime n = [n] | otherwise = let f = rhoFactor n 1 in f : facts (n `div` f) in sort $ facts n A main program that exercises the functions defined above is shown below. To compile the program with the GHC compiler, assuming it is stored in file primes.hs, say ghc -o prime prime.hs, and to run the program say ./prime. main = do print $ primes 100 print $ length $ primes 1000000 print $ tdPrime 716151937 print $ tdFactors 8051 print $ powmod 437 13 1741 print $ isSpsp 2047 2 print $ isPrime 600851475143 print $ isPrime 2305843009213693951 print $ rhoFactors 600851475143 The output is the same as the C version of the program, except for the change in the input to tdFactors. You can run the program at. Appendix: Java Java is an object-oriented language, widely used, with a very large collection of libraries. Like Haskell, Java provides big integers, linked lists and bit arrays natively, so we can quickly jump in to the coding. The functions are shown below, but we leave it to you to package them into classes as you wish; in the sample code we put all the functions into class Main. We will be more careful here than in the two prior versions to ensure that we validate all the input arguments. We begin with the Sieve of Eratosthenes, which we limit to ints, but if you prefer a larger data type you are free to change it. public static LinkedList primes(int n) { if (n < 2) { throw new IllegalArgumentException("must be greater than one"); } int m = (n-1) / 2; BitSet b = new BitSet(m); b.set(0, b.size(), true); int i = 0; int p = 3; LinkedList ps = new LinkedList(); ps.add(2); while (p * p < n) { if (b.get(i)) { ps.add(p); int j = 2*i*i + 6*i + 3; while (j < m) { b.clear(j); j = j + 2*i + 3; } } i += 1; p += 2; } while (i < m) { if (b.get(i)) { ps.add(p); } i += 1; p += 2; } return ps; } We used the built-in exception IllegalArgumentException instead of creating our own exception; it’s easier, and just as clear. We also used the built-in data types BitSet and LinkedList; indeed, it is one of the benefits of programming in Java that the standard libraries provide so much useful code. Another of the libraries that Java provides is the BigInteger library, and we switch from normal integers to BigInteger for the rest of our functions; int is sufficient for the Sieve of Eratosthenes, because sieving with a large n produces too much output to be useful, but for the other functions BigInteger is definitely useful. The tdPrime function validates its input in the first if, checks for even numbers in the second if statement, and checks for odd divisors in the body of the while. public static Boolean tdPrime(BigInteger n) { BigInteger two = BigInteger.valueOf(2); if (n.compareTo(two) < 0) { throw new IllegalArgumentException("must be greater than one"); } if (n.mod(two).equals(BigInteger.ZERO)) { return n.equals(two); } BigInteger d = BigInteger.valueOf(3); while (d.multiply(d).compareTo(n) <= 0) { if (n.mod(d).equals(BigInteger.ZERO)) { return false; } d = d.add(two); } return true; } The tdFactors function domain-checks the input, removes factors of two, and, if the remaining cofactor is not 1, begins a loop over the odd numbers starting from 3, trying each odd number in turn until it finds factors and the remaining cofactor is greater than the square of the current factor. As with the GMP functions in C, the messiness of doing arithmetic by calling functions hides the simplicity of the algorithm. public static LinkedList tdFactors(BigInteger n) { BigInteger two = BigInteger.valueOf(2); LinkedList fs = new LinkedList(); if (n.compareTo(two) < 0) { throw new IllegalArgumentException("must be greater than one"); } while (n.mod(two).equals(BigInteger.ZERO)) { fs.add(two); n = n.divide(two); } if (n.compareTo(BigInteger.ONE) > 0) { BigInteger f = BigInteger.valueOf(3); while (f.multiply(f).compareTo(n) <= 0) { if (n.mod(f).equals(BigInteger.ZERO)) { fs.add(f); n = n.divide(f); } else { f = f.add(two); } } fs.add(n); } return fs; } It is annoying that the add method is overloaded, with the same method name referring to the addition of two BigIntegers when used as f.add(two) and to the insertion of an item in a LinkedList when used as fs.add(f). Such usage may not be confusing to the compiler, because it keeps track of the types of all variables, but it can be confusing to the programmer who writes and reads the code and has to make sense of it. The isSpsp function computes d and s in the first while loop, checks for an early termination in the if, then counts down s in the second while loop. Note that the early termination test is different in the Java version than the Haskell version; the Haskell version separates the early termination test from the n−1 tests, but the Java version combines the early termination test with the first loop of the n−1 tests. Both versions of the function get the right answer; the choice is based on the convenience of the programmer. private static Boolean isSpsp(BigInteger n, BigInteger a) { BigInteger two = BigInteger.valueOf(2); BigInteger n1 = n.subtract(BigInteger.ONE); BigInteger d = n1; int s = 0; while (d.mod(two).equals(BigInteger.ZERO)) { d = d.divide(two); s += 1; } BigInteger t = a.modPow(d, n); if (t.equals(BigInteger.ONE) || t.equals(n1)) { return true; } while (--s > 0) { t = t.multiply(t).mod(n); if (t.equals(n1)) { return true; } } return false; } After using a predefined list of bases in the Haskell version of the function, we’re back to using random bases in the isPrime function. The two if tests check the input domain and exit quickly if the input is even, then the while loop performs 25 strong pseudo-prime tests. public static Boolean isPrime(BigInteger n) { Random r = new Random(); BigInteger two = BigInteger.valueOf(2); BigInteger n3 = n.subtract(BigInteger.valueOf(3)); BigInteger a; int k = 25; if (n.compareTo(two) < 0) { return false; } if (n.mod(two).equals(BigInteger.ZERO)) { return n.equals(two); } while (k > 0) { a = new BigInteger(n.bitLength(), r).add(two); while (a.compareTo(n) >= 0) { a = new BigInteger(n.bitLength(), r).add(two); } if (! isSpsp(n, a)) { return false; } k -= 1; } return true; } Note that Java’s BigInteger library includes a function isProbablePrime that performs this computation in exactly the same way. The rhoFactor function races the tortoise and hare until the gcd is greater than 1, then the if– else chain either returns a prime factor or retries the factorization with a different random function. private static BigInteger rhoFactor(BigInteger n, BigInteger c) { BigInteger t = BigInteger.valueOf(2); BigInteger h = BigInteger.valueOf(2); BigInteger d = BigInteger.ONE; while (d.equals(BigInteger.ONE)) { t = t.multiply(t).add(c).mod(n); h = h.multiply(h).add(c).mod(n); h = h.multiply(h).add(c).mod(n); d = t.subtract(h).gcd(n); } if (d.equals(n)) /* cycle */ { return rhoFactor(n, c.add(BigInteger.ONE)); } else if (isPrime(d)) /* success */ { return d; } else /* found composite factor */ { return rhoFactor(d, c.add(BigInteger.ONE)); } } The rhoFactors function first validates its input, then extracts factors of 2 in the first while, and, unless the input is a power of 2, calls rhoFactor repeatedly in the second while until the remaining cofactor is prime, sorting the list of factors before returning it. The built-in isProbablePrime function is called rather than the one we defined above. public static LinkedList rhoFactors(BigInteger n) { BigInteger f; BigInteger two = BigInteger.valueOf(2); LinkedList fs = new LinkedList(); if (n.compareTo(two) < 0) { return fs; } while (n.mod(two).equals(BigInteger.ZERO)) { n = n.divide(two); fs.add(two); } if (n.equals(BigInteger.ONE)) { return fs; } while (! n.isProbablePrime(25)) { f = rhoFactor(n, BigInteger.ONE); n = n.divide(f); fs.add(f); } fs.add(n); Collections.sort(fs); return fs; } To show examples of the use of these functions, we have to create a complete program with all of its imports and a class declaration. The program shown below is decidedly simple-minded, sufficient only to show a few examples; you will surely want to do arrange the class differently in your own programs. For sake of brevity, the function bodies are elided below. import java.util.LinkedList; import java.util.BitSet; import java.math.BigInteger; import java.lang.Exception; import java.lang.Boolean; class Main { public static LinkedList primes(int n) { ... } public static Boolean tdPrime(BigInteger n) { ... } public static LinkedList tdFactors(BigInteger n) { ... } private static Boolean isSpsp(BigInteger n, BigInteger a) { ... } public static Boolean isPrime(BigInteger n) { ... } private static Boolean rhoFactor(BigInteger n, BigInteger c) { ... } public static LinkedList rhoFactors(BigInteger n) { ... } public static void main (String[] args) { System.out.println(primes(100)); System.out.println(primes(1000000).size()); System.out.println(tdPrime(new BigInteger("600851475143"))); System.out.println(tdFactors(new BigInteger("600851475143"))); System.out.println(isPrime(new BigInteger("600851475143"))); System.out.println(isPrime(new BigInteger("2305843009213693951"))); System.out.println(rhoFactors(new BigInteger("600851475143"))); } } Output from the program is the same as all the other implementations. You can run the program at. Appendix: Python Python is a commonly-used scripting language with a reputation of being easy to read and write and a mixed imperative/object-oriented flavor. We’ll take the opportunity with Python to extend the domain of integer factorization beyond the integers greater than 1 that mathematicians generally consider. Specifically, we’ll consider −1, 0 and 1 to be prime, so they factor as themselves, and we’ll factor negative numbers by adding −1 to the factors of the corresponding positive number. This isn’t entirely correct, but it isn’t entirely incorrect, either, and is actually useful in some cases. And we’re in good company; Wolfram|Alpha calculates factors the same way we do. We begin, as we did with Haskell, with a one-liner that purports to be the Sieve of Eratosthenes: print [x for x in range(2,100) if not [y for y in range(2, int(x**0.5)+1) if x%y == 0]] That prints the primes less than a hundred. But the expression if x%y == 0 at the end gives the game away: it’s really trial division (the % operator for modulo), so it’s not a sieve. Don’t be fooled by cute one-liners! We begin with the Sieve of Eratosthenes. After validating the input, the first while loop collects the sieving primes and performs the sieving, and the second while loop collects the remaining primes that survived the sieve. Note that append adds an element after the target, unlike the function we wrote in C that inserts an element before the target. def primes(n): if type(n) != int and type(n) != long: raise TypeError('must be integer') if n < 2: raise ValueError('must be greater than one') m = (n-1) // 2 b = [True] * m i, p, ps = 0, 3, [2] while p*p < n: if b[i]: ps.append(p) j = 2*i*i + 6*i + 3 while j < m: b[j] = False j = j + 2*i + 3 i += 1; p += 2 while i < m: if b[i]: ps.append(p) i += 1; p += 2 return ps The next function, td_prime, has three possible return values: PRIME, COMPOSITE, and UNKNOWN if the limit is exceeded. We use True and False for PRIME and COMPOSITE, and raise an OverflowError exception if the limit is exceeded, requiring some effort for the user to trap the error and respond accordingly. After validating the input, the if checks even numbers and the while loop checks odd numbers. Python’s optional-argument syntax is simple and convenient; the argument limit is optional, and is given a default value if not specified. def td_prime(n, limit=1000000): if type(n) != int and type(n) != long: raise TypeError('must be integer') if n % 2 == 0: return n == 2 d = 3 while d * d <= n: if limit < d: raise OverflowError('limit exceeded') if n % d == 0: return False d += 2 return True The td_factors function has the same problem with the limit argument as td_prime, and we solve it the same way, using an OverflowError exception. The if becomes a while on the factors of 2, and the function collects factors instead of returning, but otherwise td_factors is very similar to td_prime. def td_factors(n, limit=1000000): if type(n) != int and type(n) != long: raise TypeError('must be integer') fs = [] while n % 2 == 0: fs += [2] n /= 2 if n == 1: return fs f = 3 while f * f <= n: if limit < f: raise OverflowError('limit exceeded') if n % f == 0: fs += [f] n /= f else: f += 2 return fs + [n] The Miller-Rabin primality checker makes use of Python’s ability to nest functions to hide the strong pseudo-prime checker inside the is_prime function. It also uses the built-in modular exponentiation function; with two arguments, pow(b ,e ) computes be, but with three arguments, pow(b ,e ,m ) computes be (mod m). Python offers random numbers, but we prefer to test a fixed set of bases. def is_prime(n): if type(n) != int and type(n) != long: raise TypeError('must be integer') if n < 2: return False ps = [2,3,5,7,11,13,17,19,23,29,31,37,41, 43,47,53,59,61,67,71,73,79,83,89,97] def is_spsp(n, a): d, s = n-1, 0 while d%2 == 0: d /= 2; s += 1 t = pow(a,d,n) if t == 1: return True while s > 0: if t == n-1: return True t = (t*t) % n s -= 1 return False if n in ps: return True for p in ps: if not is_spsp(n,p): return False return True Like is_prime, the rho_factors function reduces namespace pollution by hiding local functions; notice the lambda, which is an alternate way of creating a local function. Again, as with Java, the + function is overloaded for both addition and list construction. The code is similar to Java, even if it looks quite different. def rho_factors(n, limit=1000000): if type(n) != int and type(n) != long: raise TypeError('must be integer') def gcd(a,b): while b: a, b = b, a%b return abs(a) def rho_factor(n, c, limit): f = lambda(x): (x*x+c) % n t, h, d = 2, 2, 1 while d == 1: if limit == 0: raise OverflowError('limit exceeded') t = f(t); h = f(f(h)); d = gcd(t-h, n) if d == n: return rho_factor(n, c+1, limit) if is_prime(d): return d return rho_factor(d, c+1, limit) if -1 <= n <= 1: return [n] if n < -1: return [-1] + rho_factors(-n, limit) fs = [] while n % 2 == 0: n = n // 2; fs = fs + [2] if n == 1: return fs while not is_prime(n): f = rho_factor(n, 1, limit) n = n / f fs = fs + [f] return sorted(fs + [n]) We could import the gcd function from the fractions library, but instead we implement it ourselves because it gives us the chance to discuss this famous algorithm. Donald E. Knuth, in Volume 2, Section 4.5.2 of his book The Art of Computer Programming, calls this the “granddaddy” of all algorithms because it is the oldest nontrivial algorithm that has survived to the present day. The algorithm is commonly called the Euclidean algorithm because it was described in Book 7, Propositions 1 and 2 of Euclid’s Elements, but scholars believe the algorithm dates to about two hundred years before Euclid, sometime around 500 B.C. Knuth gives the entire history of the algorithm, and an extensive analysis of its time complexity, which is well worth your time. Euclid’s version of the algorithm worked by repeatedly subtracting the smaller amount from the larger until they are the same; the modern version of the algorithm replaces subtraction with division (the modulo operator). Here are some sample calls to the functions defined above; the answers are the same as all the other implementations. print primes(100) print len(primes(1000000)) print td_prime(600851475143) print td_factors(600851475143) print is_prime(600851475143) print is_prime(2305843009213693951) print rho_factors(600851475143) You can run the program at. Appendix: Scheme Scheme is primarily an academic language, useful for expressing algorithms in imperative, functional, and message-passing styles, with a fully-parenthesized prefix syntax derived from Lisp. Scheme provides big integers natively, and also lists, but has no bit arrays, so our implementation of the Sieve of Eratosthenes uses a vector of booleans, which uses eight bits per element instead of one but works perfectly well. (define (primes n) (if (or (not (integer? n)) (< n 2)) (error 'primes "must be integer greater than one") ))))))) Let’s take a moment for a quick lesson in Scheme. An expression like (let ((var1 value1 ) ...) body ) establishes a local binding for each of several var/value pairs that is active in the body of the let; a let* expression is the same, except that the bindings are executed left-to-right, and each binding is available to those that follow. There are two looping constructs. The named- let variant of let, given by (let name ((var1 value1 ) ...) body ), is like let, but additionally binds name to a function with arguments vark whose code is the body of the let, which executes loops when it is called recursively; by convention, the variable name is often called loop, but it is sometimes convenient to use other names. The other looping construct is do, which is similar to the for of C. The form of the do loop is (do ((var1 value1 next1 ) ...) (done? ret-value ) body ...). Each var/value/next triplet in the first do clause specifies a variable name, a value for the variable when the do is initialized, and an expression evaluated at each step of the do; there may be multiple var/value/next triplets, in which case each is executed simultaneously, rather like a comma operator in a C for statement. The done? predicate terminates the do loop when it becomes true; this is the opposite of a C for loop, which terminates when the condition becomes false. The return value is optional; if it is not given, the return value of the do loop is unspecified. The statements in the body of the do loop are optional, and are evaluated only for their side effects. Scheme also provides two conditional constructs. The first is (if cond then else ), which first evaluates the condition then evaluates one of the two succeeding clauses; like Haskell, an if is an expression, not a control-flow statement. Cond is similar to a nested set of if statements; each clause consists of a condition and body, the conditions are read in order until one is true, when the corresponding body is evaluated. In the example above, len is the length of the bitarray, called m in the description of the algorithm, and bits is the bitarray itself, a vector of booleans. The cond has three clauses. The first clause is actually the termination clause of Step 5 and Step 6 that is executed last; the body-less do sweeps up the primes after sieving is complete, and the return value is the list of primes, which must be reversed because each newly-found prime is pushed to the front, not the back, of an accumulating list of primes. The second clause sifts each prime, as in Step 4; this do has a body, which clears the jth element of the bitarray, and the return value is an expression that calls the named- let recursively to advance to the next sieving prime. The else clause recurs when p is not prime. Our td-prime? function follows the Scheme convention that predicates (functions that return a boolean) have names that end in a question mark. An error is signaled if limit is less than the smallest prime factor of n; this isn’t as convenient as raising an exception in Python, because standard Scheme has no way to trap the error, but most implementations provide some kind of error trapping. (define (td-prime? n . args) (if (or (not (integer? n)) (< n 2)) (error 'td-prime? "must be integer greater than one") (let ((limit (if (pair? args) (car args) 1000000))) (if (even? n) (= n 2) (let loop ((d 3)) (cond ((< limit d) (error 'td-prime? "limit exceeded")) ((< n (* d d)) #t) ((zero? (modulo n d)) #f) (else (loop (+ d 2))))))))) Td-factors is similar to td-prime?, except that the first if becomes a loop, on twos, and both loops collect the factors that they find instead of stopping on the first factor. Here is a case where the names twos and odds in the named- let provide documentation of the nature of the loop, making the function clearer to the reader. (define (td-factors n . args) (if (or (not (integer? n)) (< n 2)) (error 'td-factors "must be integer greater than one") (let ((limit (if (pair? args) (car args) 1000000))) (let twos ((n n) (fs '())) (if (even? n) (twos (/ n 2) (cons 2 fs)) (let odds ((n n) (d 3) (fs fs)) (cond ((< limit d) (error 'td-factors "limit exceeded")) ((< n (* d d)) (reverse (cons n fs))) ((zero? (modulo n d)) (odds (/ n d) d (cons d fs))) (else (odds n (+ d 2) fs))))))))) We’ve been using lists but haven’t mentioned how they work. A list is either null or is a pair with an item in its car and another list in its cdr; the terms car and cdr are pre-historic. An item x is inserted at the front of a list xs by (cons x xs); the word cons is short for construct. Predicates (null? xs) and (pair? xs) distinguish empty lists from non-empty ones. The null list is represented as '(), and the order of items in a list is reversed by (reverse xs). The function prime? that implements the Miller-Rabin primality checker illustrates two more features of Scheme. We have been introducing functions with the notation (define (name args … ) body ), but the alternate notation is (define name (lambda (args … ) body )). We use it here because we want variable seed to persist from one invocation of prime? to the next. Since the let is inside the define but outside the lambda, the variable bound by the let retains its value from one call of the function to the next, just like static variables in some programming languages. Thus, prime? is a closure, not just a function, because it encloses the seed variable. And while we’re talking about define, even though it doesn’t apply here, we have been using the dot-notation for some of our argument lists; a construct like (define (f args ... . rest ) body ) provides a variable-arity argument list, with all arguments after the dot collected into a list rest. The other Scheme feature is internal- define, which is used to provide local functions that don’t pollute the global namespace. We define three local functions, rand that returns random numbers, expm that performs modular exponentiation (the name is a variant of expt that Scheme provides for the normal powering function) and spsp? that checks if a is a witness to the compositeness of n. And we’re not done; the internal definition expm has its own internal definition times for modular multiplication. An internal- define must appear immediately after another define, a lambda, or a let. (define prime? (let ((seed 3141592654)) (lambda (n) (define (rand) (set! seed (modulo (+ (* 69069 seed) 1234567) 4294967296)) (+ (quotient (* seed (- n 2)) 4294967296) 2)) (define (expm b e m) (define (times x y) (modulo (* x y) m)) (let loop ((b b) (e e) (r 1)) (if (zero? e) r (loop (times b b) (quotient e 2) (if (odd? e) (times b r) r))))) (define (spsp? n a) (do ((d (- n 1) (/ d 2)) (s 0 (+ s 1))) ((odd? d) (let ((t (expm a d n))) (if (or (= t 1) (= t (- n 1))) #t (do ((s (- s 1) (- s 1)) (t (expm t 2 n) (expm t 2 n))) ((or (zero? s) (= t (- n 1))) (positive? s)))))))) (if (not (integer? n)) (error 'prime? "must be integer") (if (< n 2) #f (do ((a (rand) (rand)) (k 25 (- k 1))) ((or (zero? k) (not (spsp? n a))) (zero? k)))))))) In the prime? function we define our own random number generator, since standard Scheme doesn’t provide one; the static variable seed maintains the current state of the random number generator, which is of a type known as a linear-congruential generator. The multiplier 69069 is due to Knuth. Note that the seed is reset with each call to rand; the witness a is set to the range 1 to n, exclusive. There are three do loops in the function. The first, the outer do in spsp?, binds two variables, d and s, and iterates until d is odd, performing Step 2 of Algorithm 4.A. The inner do in spsp? binds the two variables s and t and implements the loop of Step 4 and Step 5 of Algorithm 4.A, performing the modular squaring and terminating when s is zero or t is n−1. The result of the do-loop is computed by the predicate (positive? s), which is #t if t ≡ n−1 (mod n) and #f when s reaches 0 without finding t ≡ n−1 (mod n). The do in the main body of the function uses the same idiom of having two terminating conditions and using the finishing predicate to differentiate the two. Our final function implements integer factorization by Pollard’s rho algorithm. The two internal definitions are cons<, which inserts an item into a list in ascending order instead of at the front, and rho, which implements the rho algorithm. The cons< function turns a vice into a virtue; since standard Scheme lacks a sort function, we insert the factors in order as we find them, instead of writing a sort function to sort them at the end, which gives the virtue of simpler code and is probably faster, given the short length of most lists of factors. The body of the function does error checking, extracts factors of 2, and assembles the complete factorization. (define (rho-factors n . args) (define (cons< x xs) (cond ((null? xs) (list x)) ((< x (car xs)) (cons x xs)) (else (cons (car xs) (cons< x (cdr xs)))))) (define (rho n limit) (let loop ((t 2) (h 2) (d 1) (c 1) (limit limit)) (define (f x) (modulo (+ (* x x) c) n)) (cond ((zero? limit) (error 'rho-factors "limit exceeded")) ((= d 1) (let ((t (f t)) (h (f (f h)))) (loop t h (gcd (- t h) n) c (- limit 1)))) ((= d n) (loop 2 2 1 (+ c 1) (- limit 1))) ((prime? d) d) (else (rho d (- limit 1)))))) (if (not (integer? n)) (error 'rho-factors "must be integer") (let ((limit (if (pair? args) (car args) 1000))) (cond ((<= -1 n 1) (list n)) ((negative? n) (cons -1 (rho-factors (- n) limit))) ((even? n) (if (= n 2) (list 2) (cons 2 (rho-factors (/ n 2) limit)))) (else (let loop ((n n) (fs '())) (if (prime? n) (cons< n fs) (let ((f (rho n limit))) (loop (/ n f) (cons< f fs)))))))))) Here are some examples: > (primes 100) (2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97) > (length (primes 1000000)) 78498 > (td-prime? 600851475143) #f > (td-factors 600851475143) (71 839 1471 6857) > (prime? 600851475143) #f > (prime? 2305843009213693951) #t > (rho-factors 600851475143) (71 839 1471 6857) You can run the program at. As your reward for reading this far, we discuss a fast implementation of Pollard’s rho algorithm. We make two changes, one algorithmic and one code-tuning. The first change is algorithmic: we replace Floyd’s turtle-and-hare cycle-finding algorithm, which Pollard used in his original version of the rho algorithm, with Brent’s powers-of-two cycle-finding algorithm. Each time the step-counter. For instance, consider the sequence 1, 2, 3, 4, 5, 6, 3, 4, 5, 6, …. Initially j = 1, xj = 1, q = 2j = 2 and the saved x = 1. Then j = 2, xj = 2, q is reset to 4 and the saved x is reset to 2. Then j = 3, then j = 4, and q is reset to 8 and the saved x is reset to 4. The iteration continues until j = 8 and xj = 4, which equals the saved x, thus identifying the cycle. The second change is code-tuning: we replace the gcd at each step with a gcd that is calculated only periodically, performing instead a modular multiplication at each step, which is much faster than a gcd calculation. This is done by taking the product of all the |xi+1 − xi| modulo n for several steps, then taking a gcd of the product at the end of the several steps. If the gcd is 1, then all the intermediate gcd calculations were also 1. If the gcd is prime, it is a factor of n. If the gcd is composite (including the case where the gcd is equal to n) it is necessary to retreat to the saved value of x from the prior gcd calculation and proceed step-by-step through the gcd calculations. The number of steps between successive gcd calculations varies with the size of n (bigger n means less frequent gcd calculations) and the number of trial divisions performed before starting the rho algorithm (more trial divisions means less frequent gcd calculations); values between 10 and 250 may be appropriate depending on the circumstances. 25)) 25) (brent n (+ c 1) (- limit j)) (let ((d (gcd (- z x) n))) (if (= d 1) (loop2 (+ k 1) (f z)) (if (= d n) (brent n (+ c 1) (- limit j)) d)))))))))))))) Our function keeps three values of the random-number. Function f delivers the next value in the random-number sequence and function g accumulates the current product for the short-circuit gcd calculation. Loop1 is the main body of the function, and loop2 reruns the short-circuit gcd calculation when necessary; both call the function recursively if they find a cycle. We arbitrarily choose 25 as the number of steps between successive gcd calculations. Copyright © 2012. This essay can be found in html format at and in pdf format at. The author can be reached at programmingpraxis@gmail.com. […] Programming with Prime Numbers | Programming Praxis – Prime numbers are those integers greater than one that are divisible only by themselves and one; an integer greater than one that is not prime is composite. […]
https://programmingpraxis.com/programming-with-prime-numbers/
CC-MAIN-2017-30
refinedweb
16,663
66.98
User Tag List Results 1 to 1 of 1 - Join Date - Jan 2001 - Location - Berkshire UK - 71 - Mentioned - 0 Post(s) - Tagged - 0 Thread(s) SSI help Hi I've been testing the method explained in Corbb O'connor's article about creating a database driven website using SSI I followed the article but when i try it the message [an error occurred while processing this directive] aswell as the custom error message I made. I can see no reason why it should fail other than if the coding was wrong. This is the code i used <!--#set var="which" value="$QUERY_STRING_UNESCAPED" --> <!--#if expr="$which !=""--> <!--#include virtual="/content/$which" --> <!--#else --> <!--#include virtual="/content/error.txt" --> <!--#endif --> Is that correct does anyone have any suggestions. Thanks in advance Bookmarks
http://www.sitepoint.com/forums/showthread.php?29985-SSI-help&p=211519
CC-MAIN-2016-22
refinedweb
128
60.45
I need to create a class to use with another program that will take regular numbers and convert them to roman numerals. So far I have this: public class Roman { public Roman(String r){ char c = r.charAt(0); int value; if(c=='I') value=1; else if(c=='V') value=5; else if(c=='X') value=10; else if(c=='L') value=50; else if(c=='C') value=100; else if(c=='D') value=500; else if(c=='M') value=1000; } public void printRoman(){ System.out.println(); } public void printInt(){ System.out.println(); } } I know i need a loop for the if else if part, but I'm not too sure on what that loop is. Also, i need code after the printRoman part, but I don't know that either. If someone could point me in the right direction, that'd be great.
http://www.javaprogrammingforums.com/loops-control-statements/5800-help-creating-class.html
CC-MAIN-2015-11
refinedweb
145
71.85
A reader/writer for geometric primitives exposed by OSGeo MapGuide Server. Before further reading, please see FDO Reader/Writer and Binary Predicates introductory notes. Although MapGuide relays on it's own (FDO-based) spatial filters, there are situations when they are simply not enough - most often FDO providers don't expose necessary binary predicates by design. On the other hand, you can use rich JTS API to perform targeted geospatial analysis and bounce results back to MapGuide in some form (i.e. redlining, feature or spatial filter, etc). MapGuide Reader reads MapGuide Server geometries and creates geometric representation of the features based on JTS model. Curve-based geometries are currently not supported. MapGuide Writer reads features based on JTS model and creates their MapGuide Server representation. A Topology.IO.MapGuide.dll library file available for download here. Library exposes MgReader and MgWriter classes residing within Topology.IO.MapGuide namespace. References MapGuide Server's MapGuideDotNetApi.dll library, which in turn references other unmanaged libraries. All necessary support libraries are available for download here. If you already have any running version of MapGuide Server (either OSS or Enterprise), you can simply reference it's libraries found in ..\WebServerExtensions\www\mapviewernet\bin folder, rather than downloading full set of binaries using link above. Also references TF.NET core library available for download here. Currently can neither read nor write geometries involving curves.
http://code.google.com/p/tf-net/wiki/MapGuideReaderWriter
crawl-002
refinedweb
229
51.14
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. How to check _sql_constraints using function: Odoo 9 Hi guys, I want to check _sql_constraints of a model using function Example , I have a function @api.one def check_constraint(self) # CODE TO CHECK return True # If ok return False # If not ok The below constraint must raise an error if the function returns False _sql_constraints = [ .................] I know It is possible by overriding the create method. But I forgot the correct way Thanks in advance Hi Shameem, you have two ways. one way: std_id = fields.Char(string="Student ID", required=True) _sql_constraints = [('std_id_uniq', 'unique(std_id)', 'This ID already exists !')] Another way using api: mark1 = fields.Integer(string="Total Mark") @api.constrains('mark1') def _check_mark1(self): if self.mark1 == 0: raise ValidationError("Please enter the marks !") Also you can check any logical operations inside the constrains function. Hope this may help you Thanks Nilmar, It is working while adding @api.one Same answer of Axel , you explained more You have an upvote Good. Keep going :) About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/how-to-check-sql-constraints-using-function-odoo-9-110380
CC-MAIN-2017-17
refinedweb
218
59.8
This is an important topic, so I felt I had to take some extra time to reply. You'll see my replies to Jure's initial message below, but first let me make clear where I stand: 1. *Dashboard:* There is only one, and it's global. That's the way it was always intended too. As you drill down you'll find Product views, Version views, Milestone views etc, but I believe we should only call one the Dashboard for clarity. 2. *Custom Query and Reports: *Should always have a global scope to start with. That helps users with their mental model of how these things work. 3. *Wiki:* Gary has made strong arguments for keeping the global wiki in the past. That's fine with me. I do believe though that it should be much easier to associate Wiki articles with objects (such as Products, Version and the like), going as far as reserving a namespace in the wiki for each object (product/version/milestone), and linking to it automatically from the object's view. Now to address some specific points: *Andrej*: > What I would expect from global dashboard as a user is quite different to product dashboard: I agree. The scope of what is displayed naturally decreases as the user navigates to increasingly more specific objects. *Gary*: > I can see this working for users with access to multiple products. If a user > only has access to a single product, wouldn't we want them to go directly to > their product dashboard view? In my view: No. To set expectations, make it easier to compare what they see with colleagues and expand their set of objects (ie create a second product) we should show them the same (global) Dashboard. Users can of course always create browser bookmarks directly to the (sole) Product view if they prefer. *Jure:* > * 'My Products' - list of products user is member of, including quick links to tickets&wiki for that specific product > [...] > * 'All Products' - list of all products* * All Products are all products that I am allowed to view as that particular user, making it also 'My Products'. I don't believe we should differentiate between them. The only differentiation then should be based on: 1. Criteria the user selects themselves (ie personal focus) that I don't believe we need to determine in Bloodhound itself, we should just let users have the ability to follow/favourite/pin products. 2. Involvement in the product, especially 'current' involvement: Taking on tickets and completing them vs no tickets assigned to the user gives us a good indication which one may be more important to the user at any given time. This should be exposed in the UI. I will update the Dashboard html mockup this afternoon to make a suggestion and will report back then. Cheers, Joe On 6 December 2012 09:36, Jure Zitnik <jure@digiverse.si>: > * default page/entry point for the user > * layout could be very similar to the current dashboard with some widgets > missing (Versions, Milestones, Components for example) > * Search is global, through all products > * Wiki and Ticket quick links are not available > * > > -- Joe Dreimann UX Designer | WANdisco <> * * *Transform your software development department. Register for a free SVN HealthCheck <> *
http://mail-archives.apache.org/mod_mbox/incubator-bloodhound-dev/201212.mbox/%3CCAKoya92y6BGwArv-dXq8on-k6oN+QoVfDTBW__ak6+bk5PecVA@mail.gmail.com%3E
CC-MAIN-2016-07
refinedweb
537
61.16
This package provide support for running Dart on Google App Engine Managed VMs. Getting started Visit dartlang.org/cloud for more information on the requirements for getting started. When you are up and running a simple hello world application looks like this: import 'dart:io'; import 'package:appengine/appengine.dart'; void requestHandler(HttpRequest request) { request.response ..write('Hello, world!'); ..close(); } void main() { runAppEngine(requestHandler).then((_) { // Server running. }); } Add the application configuration in a app.yaml file and run it locally using by running: gcloud preview app run app.yaml When you are ready to deploy your application, make sure you have authenticated with gcloud and defined your current project. Then run: gcloud preview app deploy app.yaml Send Feedback We'd love to hear from you! If you encounter a bug, have suggestions for our APIs or is missing a feature, file it an issue on the GitHub issue tracker. Note The Dart support for App Engine is currently in beta.
https://www.dartdocs.org/documentation/appengine/0.2.3/index.html
CC-MAIN-2017-17
refinedweb
162
60.92
What it is: In python, lambda allows one to define simple functions in-line. It is syntactically an expression, not a statement (a function definition using def on the other hand is a statement). Understanding what this means is critical and I will return to it shortly. You will also often hear it said that with lambda, one can write anonymous functions. But before we get to that... The name: Why is lambda, called lambda? Well, apparently the idea was adopted from languages such as Lisp and Scheme. The ideas associated with lambda are generally equated with the paradigm of programming languages known as functional programming languages. With regards to the name, even Guido van Rossum lamented: For more on the decisions to add functional language features, read here:For more on the decisions to add functional language features, read here:Guido van Rossum wrote:I was never all that happy with the use of the "lambda" terminology, but for lack of a better and obvious alternative, it was adopted for Python. Origins of Pythons “Functional” Features How it works: The syntax of lambda actually isn't that complicated. It takes this structure: - Code: Select all lambda arguments : return_value Let's look at a simple (and largely useless) example: - Code: Select all square = lambda x: x**2 - Code: Select all def square(x): return x**2 It creates a function which takes a single argument, and returns that number squared. It is called like any other function: - Code: Select all >>> square(5) 25 Additional arguments can be used by separating them with commas, as in: - Code: Select all >>> pythag = lambda x,y : (x**2+y**2)**(0.5) >>> pythag(5,5) 7.0710678118654755 You can even--if you are so inclined--make use of *args and **kwargs: - Code: Select all >>> sum_of_squares = lambda *args: sum(arg**2 for arg in args) >>> sum_of_squares(4,6,2) 56 But why do we need them: Hopefully, at this point, you are asking yourself this question. Usually the answer to this is, we don't. In fact if you are using lambda there should be a very good reason. Writing short little functions as just demonstrated, is not a good enough reason. The two primary reasons for using lambda are: - You are writing a function that returns a function. - You are writing a short callback function. Let's look at number one; functions that return functions. Let's modify our previous square function so that we can use any power we want: - Code: Select all def power_of(power): return lambda x: x**power - Code: Select all >>> square = power_of(2) >>> cube = power_of(3) >>> fourth = power_of(4) >>> square(8) 64 >>> cube(8) 512 >>> fourth(8) 4096 I would also like to note at this point something that a lot of people seem to miss when they start out. We didn't need to use lambda for this function at all. The previous power_of function is identical to this: - Code: Select all def power_of(power): def pow_it(x): return x**power return pow_it 2. Now let's look at the second reason I listed for using lambda; you are writing a callback function. Basically a callback function is a function that you pass as an argument to another function. The most commonly seen example of this is with regards to sort/sorted. Take the following list of tuples: - Code: Select all data = [(0,4), (5,3), (5,2), (6,1), (7,9), (0,12), (2,1)] - Code: Select all >>> print(sorted(data)) [(0, 4), (0, 12), (2, 1), (5, 2), (5, 3), (6, 1), (7, 9)] We could manually write a callback function and use it: - Code: Select all def order_by_second(item): return item[1] - Code: Select all >>> print(sorted(data, key=order_by_second)) [(6, 1), (2, 1), (5, 2), (5, 3), (0, 4), (7, 9), (0, 12)] The accepted way to pass simple callbacks, is by using lambda: - Code: Select all >>> print(sorted(data, key=lambda item:item[1])) [(6, 1), (2, 1), (5, 2), (5, 3), (0, 4), (7, 9), (0, 12)] This is when understanding the distinction between statement and expression becomes important. As def is a statement, we could never actually define a function while passing it as an argument at the same time. Statements can not appear in such locations. The fact that lambda is an expression gives us the freedom we need to anonymously create and pass simple functions such as this. Along with being able to pass these functions immediately as arguments, you also have the freedom to do other things like place them in dictionaries. For example, this code in which three simple functions are defined and placed in a dictionary: - Code: Select all def rook(x,y): return x+y def queen(x,y): return max(x,y) def knight(x,y): return max((x//2+x%2),(y//2+y%2)) HEURISTICS = {"rook" : rook, "queen" : queen, "knight" : knight} Would become as short and simple as this: - Code: Select all HEURISTICS = {"rook" : lambda x,y : x+y, "queen" : lambda x,y : max(x,y), "knight" : lambda x,y : max((x//2+x%2),(y//2+y%2))} While lambda can be a very neat tool, knowing when it is appropriate to use a lambda is just as important as knowing when not to. You would do well to be aware that Guido van Rossum strongly considered removing lambda from the language completely when Python 3 was first being released. He certainly has justified reasons for this. -Mek
http://python-forum.org/viewtopic.php?p=12684
CC-MAIN-2015-14
refinedweb
917
63.83
Example: Check if a string is a valid shuffle of two other strings import java.util.Arrays; class Test { // length of result string should be equal to sum of two strings static boolean checkLength(String first, String second, String result) { if (first.length() + second.length() != result.length()) { return false; } else { return true; } } // this method converts the string to char array // sorts the char array // convert the char array to string and return it static String sortString(String str) { char[] charArray = str.toCharArray(); Arrays.sort(charArray); // convert char array back to string str = String.valueOf(charArray); return str; } // this method compares each character of the result with // individual characters of the first and second string static boolean shuffleCheck(String first, String second, String result) { // sort each string to make comparison easier first = sortString(first); second = sortString(second); result = sortString(result); // // with1X2", "Y21XX"}; // call the method to check if result string is // shuffle of the string first and second for (String result : results) { if (checkLength(first, second, result) == true && shuffleCheck(first, second, result) == true) { System.out.println(result + " is a valid shuffle of " + first + " and " + second); } else { System.out.println(result + " is not a valid shuffle of " + first + " and " + second); } } } } Output 1XY2 is a valid shuffle of XY and 12 Y1X2 is a valid shuffle of XY and 12 Y21XX is not a valid shuffle of XY and 12 In the above example, we have a string array named results. It contains three strings: 1XY2, Y1X2, and Y21XX. We are checking if these three strings are valid shuffle of strings first(XY) and second(12). Here, we have used 3 methods: 1. checkLength() - The number of characters in a shuffled string should be equal to the sum of the character in two strings. So, this method checks if the length of the shuffled string is same as the sum of the length of the first and second strings. If the length is not equal, there is no need to call the shuffleCheck() method. Hence, we have used the if statement as // inside main method if (checkLength(first, second, result) == true && shuffleCheck(first, second, result) == true) 2. sortString() - This method converts the string to char Array and then uses the Arrays.sort() method to sort the array. Finally, returns the sorted string. Since we are comparing the shuffled string with the other two strings, sorting all three strings will make the comparison more efficient. 3. shuffleCheck() - This method compares the individual characters of the shuffled string with the characters of first and second strings
https://www.programiz.com/java-programming/examples/check-valid-shuffle-of-strings
CC-MAIN-2022-40
refinedweb
421
67.49
+ Post New Thread Hi, As said by JDThermo here: , I found that Combobox can be added in Menu but... com.extjs.gxt.desktop.client.StartMenu#addToolSeperator() should be addToolSeparator(). "Separator" is spelled with an "a", not an "e". Please... MultiField is declared as MultiField<D> extends Field<D> but the getAll method on MultiField is returning List<Field<?>>. It should be... In the javadoc for the loader class: Please fix that... Regards, Michel. Using Safari 4 and Chrome double clicking on a row inside a grid gets the whole grid selected. Reproducible on demo site... Just thought I'd give you a heads up. I was looking to try the example of the slider, as well as see some sample code. It looks as if there is a... Hi Please find a test case below. The test app runs fine using IE6; however on FF and Chrome we get a blank gap on top of the page. We have... I have a FormPanel hosted in a Dialog. This FormPanel is the only child of the Dialog. The FormPanel contains 3 collapsible FieldSets. The First... GXT 2.1.1 GWT 2.0 OS: XP Browser: Firefox 3.5.5, IE7 Problem: The combobox open Button is placed to the right side of the window.... Hi, We have some issues with a grid with a BufferView as described below. Please see the initial view in Grid at top.jpg. Everything looks... Well private LayoutContainer mainpanel = new LayoutContainer(); mainpanel.setScrollMode(Scroll.AUTO); mainpanel.setLayout(new RowLayout());... Hi folks, There is a small typo in the gxt-all.css file, line 2438. ..x-grid3-invalid-cell { background: repeat-x bottom; }Note there... Hi. I am trying to use the setDirty method to mark a cell dirty in the grid, but it is not working. It should be marked dirty after clicking the... I don't know if this is a issue or the programmer is responsible to do this but, worth to try! In my Grid, I set loadingMask(true), so, before the... dear dev team! the ToolBarLayout contains a bug. if a panel contains a toolbar and the panel is disconnected from its parent and later connected... Events ------ /** * DOM ONMOUSEWHEEL event type. */ public static final EventType OnPaste = new EventType(Event.ONPASTE); PagingModelMemoryProxy inherits MemoryProxy<PagingLoadResult<? extends ModelData>> while MemoryProxy doesn't have the dependency on ModelData in its... Hello, To expand the "cb" combobox I need to clic three times on the trigger button. Am I doing something wrong ? Thanks public class... I believe I've encountered a problem when updating a ComboBox's store. I've got two ComboBoxes. Both ComboBoxes share the same store. In... If you call reset() on a fileuploadfield, the browse button becomes disabled. WTF Darrell, do you test your software? Sencha is used by over two million developers. Join the community, wherever you’d like that community to be or Join Us
http://www.sencha.com/forum/forumdisplay.php?46-Ext-GWT-Bugs-(2.x)/page22&order=desc
CC-MAIN-2014-42
refinedweb
485
70.09
Split Comma Separated String In C++ Welcome!. In this tutorial, we will learn how to split a comma separated string in the C++ programming language. We will split the comma-separated string by using two methods:- - By using strtok() function - By writing our logic Further, to execute the code given below, we are using an online compiler. Please use an online compiler only to execute the given codes as it is updated to the latest standard of C++. So, let’s start! First method: Using strtok() function strtok() splits a string according to a given delimiter. It is called in a loop to retrieve all tokens. But it returns NULL if there are no tokens left. Code:- #include <bits/stdc++.h> using namespace std; void splitSen(char str[]) { char *tok = strtok(str,","); while (tok!=NULL) { printf("%s\n",tok); tok=strtok(NULL,","); } } int main() { char str[100]="1,2,3,4,5"; splitSen(str); return 0; } We have created a function splitSen with one argument(char array) of type void. Subsequently, we have defined a new pointer variable tok. So now we will iterate this tok from start to end and print each tok. Therefore, in our main function, we have passed str as the lone argument. Second method: Using our logic #include <bits/stdc++.h> using namespace std; void splitSen(string str) { string w = ""; for (auto rem : str) { if (rem==',') { cout<<w<<endl; w=""; } else { w=w+rem; } } cout<<w<<endl; } int main() { char str[100]="1,2,3,4,5"; splitSen(str); return 0; } So in this method, we have again defined one function named splitSen with one argument str of type string and function of the type void. Moreover, we have also defined one variable w which is equal to null. Now, we iterate from the beginning of the string to the end using rem. During the iterations, if rem is equal to the delimiter(i.e. “,”) then we print w else we add w with w and rem respectively. At last, we print w until the end of string after the scope of for loop ends. Sample Input/Output:- Output:- 1 2 3 4 5 We have come to the end of this post. I hope this post may have helped you. Thank You And Have Fun Coding! Before you go, please check these amazing posts also.
https://www.codespeedy.com/split-comma-separated-string-in-cpp/
CC-MAIN-2020-45
refinedweb
391
74.08
I have a ver, very, strange issue with PCLs and namespaces. I was hoping for some PCL guru who could give me a hand in a Skype session. The problem is that I cannot use a specific class in an iOS project. However I can use it in a unit test project. Visual Studio complains about the namespace not being available, however the namespace is detected correctly and added to the "usings" statements. If I copy the class in question from the PCL to the iOS solution, it works. Exactly same code works in unit test project. Hard to explain, easy to show. I'm krumelur1976 on Skype. I had similar issue. After rechecking namespaces of the class, I found the differences and it was solves Found the problem meanwhile. It is a bug. I added the two projects to a blank Xamarin Studio solutions and everything builds. So back to Visual Studio. If I select "Rebuild solution", it complains. If I rebuild the PCL manually from its context menu and then rebuild the iOS version, it works.
https://forums.xamarin.com/discussion/11024/any-pcl-specialist-online-and-available-for-a-quick-chat-over-skype
CC-MAIN-2019-43
refinedweb
178
77.13