GETSERVENT(3N)                                                GETSERVENT(3N)

NAME
     getservent, getservbyport, getservbyname, setservent, endservent - get
     service entry

SYNOPSIS
     #include <netdb.h>

     struct servent *getservent()

     struct servent *getservbyname(name, proto)
     char *name, *proto;

     struct servent *getservbyport(port, proto)
     int port;
     char *proto;

     setservent(stayopen)
     int stayopen;

     endservent()

DESCRIPTION
     getservent(), getservbyname(), and getservbyport() each return a pointer
     to an object with the following structure containing the broken-out
     fields of a line in the network services data base, /etc/services:

          struct servent {
               char  *s_name;     /* official name of service */
               char **s_aliases;  /* alias list */
               int    s_port;     /* port service resides at */
               char  *s_proto;    /* protocol to use */
          };

     The members of this structure are:

     s_name     The official name of the service.

     s_aliases  A zero-terminated list of alternate names for the service.

     s_port     The port number at which the service resides.  Port numbers
                are returned in network short byte order.

     s_proto    The name of the protocol to use when contacting the service.

     getservent() reads the next line of the file, opening the file if
     necessary.  setservent() opens and rewinds the file.  If the stayopen
     flag is non-zero, the services data base will not be closed after each
     call to getservent() (either directly, or indirectly through one of the
     other "getserv" calls).  endservent() closes the file.

     getservbyname() and getservbyport() sequentially search from the
     beginning of the file until a matching service name or port number is
     found, or until end-of-file is encountered.  If a protocol name is also
     supplied (non-NULL), searches must also match the protocol.

FILES
     /etc/services

SEE ALSO
     getprotoent(3N), services(5), ypserv(8)

DIAGNOSTICS
     A NULL pointer is returned on end-of-file or error.

BUGS
     All information is contained in a static area so it must be copied if it
     is to be saved.  Expecting port numbers to fit in a 32 bit quantity is
     probably naive.

14 December 1987                                              GETSERVENT(3N)
http://modman.unixdev.net/?sektion=3&page=setservent&manpath=SunOS-4.1.3
SharePoint, ADO.Net Data Services and Silverlight 4 data binding example

I presented recently at an internal Microsoft conference here in Seattle & showed a simple OData/ADO.Net data services & Silverlight 4 data binding example. It is very rudimentary, but shows how simple it is to get up and running binding Silverlight controls to data coming from SharePoint. For this you will need:

- A SharePoint development environment with VS 2010 set up. (see here)
- Silverlight Web Parts Visual Studio 2010 project template. This is an extension project template from Paul Stubbs that makes it super easy to create a custom Silverlight web part.
- Silverlight 4 Tools for Visual Studio 2010

One time setup:

- Create a new Custom list called “Projects” in the root of your SharePoint site.
- Add a new text column to the list called “ProjectName”
- Add some new data to the list and fill in the column values.

The resulting list should look something like this now:

Ok now for the good stuff…

Creating the projects:

- Create a new “Silverlight Web Part” from the SharePoint > 2010 project templates in Visual Studio. This template is part of the Silverlight Web Parts Visual Studio 2010 project template VS Extension. Make sure you have that installed if you don’t see it.
- In the new project wizard that pops up, make sure the path to your SharePoint site is specified, e.g. the url to your dev site on your machine.
- Make sure “Deploy as a sandboxed solution” is ticked & click Next
- Enter “MySilverlightWebPart” as the Silverlight Project Name. Select “Shared Documents” as the library to deploy the Silverlight control to.
- Enter “My Silverlight Web Part” as the title for your new web part & click Finish
- Right click on the MySilverlightWebPart project and choose Properties. We need to flip the Target Silverlight Version to Silverlight 4
- When prompted click Yes to reload the project.
Hooking up the OData/ADO.Net Data Services data source:

- Right click on the “MySilverlightWebPart” project and click “Add Service Reference”
- Enter the address of the listdata.svc service, e.g. http://<servername>/_vti_bin/ListData.svc (you will need to use your server name of course)
- Click GO & you should see “HomeDataContext” appear in the list of Services. Expand it to see all your lists. You should see “Projects” in the list.
- Change the Namespace to “Contoso” and click OK

Your project should now look like this:

Building the UI:

In this example all we are going to do is drop a grid onto our Silverlight surface and bind it to the Projects data.

- Double click “MainPage.xaml” to open it. You should see the design surface.
- From the Data Sources panel drag out “Projects” onto the design surface and size it appropriately
- If you only want to show the ID and Project Name columns, delete all the others from the XAML window.
- Save the file. CTRL+S

Binding to the data:

Now we need to issue the query to the server to retrieve the data and bind those results to the grid.

Right click “MainPage.xaml” and click View Code. Add the following namespace definitions:

using MySilverlightWebPart.Contoso;
using System.Windows.Data;
using System.Data.Services.Client;

Uncomment the Loaded event handler wire-up in the MainPage() function. It should look like this:

Declare some local variables. context is the data context for our ADO.Net Data Service, projects will contain the collection of projects we are going to bind to the grid & projectsViewSource is the view source collection that controls what data will be shown in the grid.

Add the functions below to the code. This will query the datasource for the projects & then bind them to the grid when the main page loaded event is fired.
/// <summary>Page loaded event handler</summary>
private void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    // this news up our data context with the URI to the ListData.svc
    context = new HomeDataContext(new Uri("http://<servername>/_vti_bin/ListData.svc", UriKind.Absolute));

    // create the collection that we are going to store the projects in
    projects = new DataServiceCollection<ProjectsItem>();

    // get the view source reference from the grid
    projectsViewSource = (CollectionViewSource)this.Resources["projectsViewSource"];

    // Define a query that returns the projects.
    var query = from p in context.Projects
                select p;

    // wire up the load completed event
    projects.LoadCompleted += new EventHandler<LoadCompletedEventArgs>(projects_LoadCompleted);

    // Asynchronously load the result of the query.
    projects.LoadAsync(query);
}

private void projects_LoadCompleted(object sender, LoadCompletedEventArgs e)
{
    // if there wasn't an error
    if (e.Error == null)
    {
        // load the view source with the list of projects that were returned.
        projectsViewSource.Source = projects;
    }
    else
    {
        MessageBox.Show(string.Format("An error has occurred: {0}", e.Error.Message));
    }
}

Building and Deploying

Now it's time to try it all out!

- Hit F5 - Visual Studio will build the projects, package up the solution in a WSP package, deploy it into the Solution Gallery (the spot where sandbox solutions are deployed) & will start up IE and attach to the right processes for debugging. IE should load with the homepage of the site.
- Click Page > Edit in the Ribbon
- Click Insert > Web Part in the ribbon
- Click into the Custom folder where you will find your web part “My Silverlight Web Part” & select it
- Click Add to insert it onto the page
- You will see your web part load and moments later the data should show up in the grid. Looking something like this:

Congratulations. You have just built a Silverlight 4 web part that is data bound using ADO.Net Data Services to SharePoint list data!
If you want to have a poke around with the ListData.svc Service then you can try hitting some URLs directly from IE like these:

- http://<servername>/_vti_bin/ListData.svc
- http://<servername>/_vti_bin/ListData.svc/Projects
- http://<servername>/_vti_bin/ListData.svc/Projects(1)
- http://<servername>/_vti_bin/ListData.svc/Projects?$filter=id eq 2
- http://<servername>/_vti_bin/ListData.svc/Projects?$filter=Title eq 'Project 2'

Here is a good starting point in the documentation for how this REST interface works:

Thanks, -Chris.
https://docs.microsoft.com/en-us/archive/blogs/cjohnson/sharepoint-ado-net-data-services-and-silverlight-4-data-binding-example
Hi, everybody. In the #ruby-core design meeting, during the discussion about MVM, there was some mention of the sandbox API. I thought it would be worthwhile to write up an RCR. I mean: although there has been some talk about the sandbox extension for Ruby 1.8 on this list, there hasn't been any talk about the API itself. Considering that $SAFE has fallen out of use and there is a renewed interest in managing many namespaces/environments on a single VM, I figured hey.

ABSTRACT

Ruby of yore has only had one interpreter environment. The sandbox API gives that central environment a means of creating other in-process environments for executing code. Be it restricted sandboxes for running unsafe code or fully-featured sandboxes to offer a clean namespace.

PROS & CONS

The benefits of this particular API:

* Rather simple (yeah?)
* Basic (albeit unstable) extensions exist for Ruby 1.8 and JRuby.
* Patterned after other successful sandboxes (such as Firefox's XPCNativeWrapper[1] and Io's Sandbox[2])
* Generic enough to work in other Ruby impls.

The drawbacks are:

* Not fully proven on Ruby 1.8.
* My extension does rely on Thread.kill! to stop a Sandbox, which is taboo. (Same problem timeout.rb has.)
* Haven't worked out how tainting could play out.
* Could be closer coupled with threading to offer concurrent interps in separate threads.

THE API

All classes and methods are enclosed in the Sandbox module. The primary classes are Sandbox::Full and Sandbox::Safe. Sandbox::Safe is descended from Sandbox::Full. Methods for these two classes are:

* self.new(opts = {})
  Returns a newly created sandbox. Available options: :init, :ref
* eval(str, opts = {}) => obj
  Evaluates +str+ as Ruby code inside the sandbox and returns the result. Available options: :timeout
* load(io, opts = {}) => nil
  At heart, just an alias for: eval(IO.read(io), opts)
* ref(klass) => nil
  Adds a boxed reference to +klass+ in the sandbox.
  (Ex.: @box.ref(YAML) would create a YAML class in the sandbox which is derived from Sandbox::BoxedClass, a proxy to the YAML class on the outside.)
* require(str)
  Requires a file into the Sandbox, using the $LOAD_PATH and file permissions of the current sandbox.

The Sandbox module itself has a few methods:

* Sandbox.safe(opts = {})
  An alias for Sandbox::Safe.new(opts)
* Sandbox.new(opts = {})
  An alias for Sandbox::Full.new(opts)
* Sandbox.current
  Returns an object representing the current Sandbox.
* Sandbox.screen(obj) => true or Sandbox::ScreenException
  Traverses an object and its related symbols to be sure it is entirely composed of objects from the current sandbox. Purely for testing.

As for the `opts` hash in the above methods, here's a brief description of those:

* init: The portions of Ruby core to initialize.
  :load - $:, $-I, $LOAD_PATH, $\, $LOADED_FEATURES, load, require, autoload, autoload?
  :io - IOError, EOFError, IO, FileTest, File, Dir, File::Constants, test, File::Stat
  :env - syscall, open, printf, print, putc, puts, gets, readline, getc, select, readlines, p, display, STDIN, STDOUT, STDERR
  :real - abort, at_exit, caller, exit, trace_var, untrace_var, set_trace_func, warn, ThreadError, Thread, Continuation, ThreadGroup, trap, exec, fork, exit!, system, `, sleep, Process, Process::Status, Process::Sys, GC, ObjectSpace, hash, __id__, object_id
  :all - the whole enchilada
  (Sandbox::Full assumes :init => :all and Sandbox::Safe assumes :init => nil.)
* ref: Classes to create boxed references for. (Ex.: :ref => [RedCloth, BlueCloth])
* timeout: Maximum seconds, a time limit for the sandbox.

BOXED CLASSES

Inside each Sandbox, a BoxedClass constant is defined. This class has two methods: method_missing and const_missing. So, let's say you're running a web app in the sandbox. And you want it to speak to Mongrel in the main interp. Imagine a MongrelConnector class that acts as medium between the two.
-- master.rb --

    require 'mongrel'

    class MongrelConnector
      def self.each
        str = yield ""
        # send str to mongrel
      end
    end

    box = Sandbox.safe
    box.load 'rails.rb'
    box.ref MongrelConnector
    box.eval 'start'

-- web.rb --

    def start
      MongrelConnector.each do |cgi|
        cgi << "hallo!"
      end
    end

Inside the sandbox (where web.rb is running,) the MongrelConnector class is a BoxedClass. When `each` is called, method_missing switches sandboxes and runs the method on the class outside the box. When method_missing gets an answer back, it switches back inside the sandbox and returns an answer.

For primitive data, such as numbers and strings and floats which have no instance variables, the data is marshalled. For other objects, a Sandbox::Ref is received. Both inside and outside the sandbox, a Sandbox::Ref points to data not inside the current sandbox. This ref also has a method_missing, which works just like BoxedClass' method_missing.

It is not allowed to pass a Sandbox::Ref for an object whose class is not referred to in the receiving sandbox. So, if, for some reason, a method call tries to return an IO object to a sandbox and no IO class is defined (and properly ref'd,) a Sandbox::TransferException is thrown.

THE PRELUDE

Beyond the API, it is also required that the Sandbox run versions of common methods which are not exploitable. For example, the freaky freaky sandbox has a lib/sandbox/prelude.rb which includes a pure Ruby version of the `**` method since very high squares can lock the interpreter up in C.

AND DONE

That's it for now. I'm not an extreme zealot of this API, so I'd be glad to alter it or scrap it. But it has evolved through trial and error, based on xp points awarded during Try Ruby and the sandboxed wiki[3]. Thankyou for your generous attentions.

[1] [2] [3]

on 25.04.2008 00:13

on 25.04.2008 00:30

On Fri, 25 Apr 2008 07:12:39 +0900, _why <why@ruby-lang.org> wrote:
> * My extension does rely on Thread.kill! to stop a Sandbox,
> which is taboo. (Same problem timeout.rb has.)
I don't think this is a big problem in practice -- even though Thread.kill! is "more evil" than even Thread.kill, we're destroying any evidence of malfeasance when we tear down the Sandbox VM. Anyway, using Thread.kill! is an implementation detail which shouldn't reflect badly on the proposed API itself. -mental

on 28.04.2008 03:06

Hi,

On Fri, 25 Apr 2008 07:12:39 +0900 _why <why@ruby-lang.org> wrote:
> * eval(str, opts = {}) => obj

I think eval(string) is <del>evil or</del> too ugly and takes more time, especially in 1.9. It should take a block instead:

* eval(opt = {}, &block) => obj

like: box.eval {start}

on 28.04.2008 03:34

I was under the impression that (part of) the purpose of the sandbox was to run untrusted Ruby code within the context of a larger Ruby application. I'd imagine that a large portion of the time, this code enters the application as a string - for example, Try Ruby presumably accepts strings from the web interface and passes them through this method to get the result. I think you're right that if you know what code you're going to be evaluating when you write the call to eval, passing it as a block would be preferable. I think the best way to balance this would be to allow both, like instance_eval. I imagine if eval only took a block, we'd see a lot of code like box.eval { eval(str) }

on 30.04.2008 07:34

And Nathan makes some good points too. But thankyou for caring about this little idea! _why

on 30.04.2008 21:28

I think it would be necessary to extract the block's parse tree and use that to construct a new block within the context of the sandbox. That would of course require that implementors keep the block parse tree around... The main downside with a string eval in this case is that the string must be parsed each time. Accordingly, avoiding that parsing overhead would be the main benefit of using a block. Beyond that, I'm not sure I see much advantage.
Proxied method calls on wrapped objects are realistically going to be the main communication method with sandboxes, so that case needs to be optimized more than eval does. -mental

on 30.04.2008 22:01

At the risk of promoting the idea of giving someone enough rope to hang him/herself, in Ruby 1.8 there's Kernel#binding_n from ruby-debug.
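The BoxedClass mechanism described in the RCR — a proxy that forwards calls via method_missing — can be sketched in plain Ruby. Boxed and @target below are illustrative names for the pattern, not _why's actual extension (which does its forwarding across interpreter environments in C):

```ruby
# A toy stand-in for BoxedClass: a proxy that forwards method calls
# across a boundary via method_missing. In the real sandbox this is
# where the interpreter would switch environments and marshal primitive
# results; here we simply forward to a wrapped object.
class Boxed
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &block)
    # forward the call and hand the answer back to the caller
    @target.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name) || super
  end
end

box = Boxed.new([3, 1, 2])
puts box.sort.inspect   # => [1, 2, 3] -- forwarded to the wrapped Array
```

The real extension adds the interesting part on top of this skeleton: marshalling primitives across the boundary and wrapping everything else in a Sandbox::Ref.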
http://www.ruby-forum.com/topic/150864
Hi, after some study and search I found the interface ServiceLifecycle. Is this the thing I must use to solve the problem? I played a little bit with this interface; I tried to make it produce some console output to make sure it works the way I think. But there is nothing to see... I inserted some sample code for the service-interface implementation. Everything right? Why do I see nothing? Must I make some additional configuration in the wsdd, or something like this? Maybe someone has a better, working example... Thanks for your help... Greetings, Steffen

package TestInter_pkg;

import javax.xml.rpc.ServiceException;

public class LCTSSoapBindingImpl implements TestInter_pkg.TestInter,
        javax.xml.rpc.server.ServiceLifecycle {

    public int makeItSo(int in0) throws java.rmi.RemoteException {
        return in0 * 2;
    }

    public void init(Object context) throws ServiceException {
        System.out.println("INIT ...");
        System.out.println(context.getClass().getName());
    }

    public void destroy() {
        System.out.println("DESTROY ...");
    }
}

On Sat, 13 Dec 2003 12:11:04 +0100 "Steffen Metzner" <st.metzner@gmx.de> wrote:
> Hello List,
>
> I am writing a session-based webservice with Apache Axis
> and I use a thread to do some work.
> But there is a little problem with that technique. It's very easy
> to end up the thread from the client side. Or to kick it off by finalizing
> the main object, but nobody guarantees when the main object
> and therefore the thread is killed.
> So, I must watch the connection state of the user. How may I
> do this?
>
> thanks for your help;
> Steffen M.
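The lifecycle contract itself is easy to model outside Axis. The sketch below uses toy types — NOT the real javax.xml.rpc API — to show the shape of what Steffen is testing: init() and destroy() are invoked by the hosting container around the service object's life, so any println output they produce lands in the servlet container's console or log, not in anything the SOAP client receives.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-ins for the container-managed lifecycle -- illustrative only.
interface Lifecycle {
    void init(Object context);
    void destroy();
}

class EchoService implements Lifecycle {
    final List<String> log = new ArrayList<>();

    public void init(Object context) {
        log.add("INIT");        // the container calls this, not the client
    }

    public void destroy() {
        log.add("DESTROY");     // called when the container tears the service down
    }

    int makeItSo(int in0) {
        return in0 * 2;         // the only part a remote client ever sees
    }
}

public class LifecycleDemo {
    public static void main(String[] args) {
        EchoService svc = new EchoService();
        svc.init("dummy-context");             // container's job
        System.out.println(svc.makeItSo(21));  // prints 42
        svc.destroy();                         // container's job
        System.out.println(svc.log);           // prints [INIT, DESTROY]
    }
}
```

Whether Axis actually invokes init()/destroy() also depends on the deployment configuration (e.g. the service scope in the wsdd), which may be why nothing appeared.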
http://mail-archives.apache.org/mod_mbox/axis-java-user/200312.mbox/%3C20031215153010.260c700c.zirfas@web.de%3E
May 12, 2018 07:09 PM|wrappingduke|LINK I'm attempting to make ajax calls from an android device to a web service, but receiving the following error:

Apache Cordova: Failed to load resource: the server responded with a status of 404 (Not Found)

I've been able to access the service from the simulator with no problem. Here's a sample of the code:

var sUrl = "" + $("#txtDeviceId").val();
var jqxhr = $.getJSON(sUrl, function () {
    // do nothing
    alert('this is a test');
}).done(function (data) {
    var jsonData = JSON.stringify(data);
    var objData = $.parseJSON(jsonData);
}).fail(function (result) {
    CustomerServiceFailed(result); // When Service call fails
});
return jqxhr;

Although I'm trying to access the service via localhost, the service runs successfully on the device using Chrome when ADB reverse socket is implemented. That is, the url is entered into the web browser and data is returned. Here's a sample of the ADB command:

adb.exe reverse tcp:4000 tcp:4000

Lastly, I also moved the service to IIS but received the same 404 on the device. The service on IIS can be reached from the simulator, as well. Any help is appreciated.

May 13, 2018 01:01 AM|mgebhard|LINK I've never built a Cordova app before so I gave it a shot and it just worked. I had a couple of hiccups but they were just coding errors. I'm using VS 2017 Version 15.7.1. I had to install Cordova using the Visual Studio installer.

I created a solution with 3 default template projects; Cordova Empty, Web API, and MVC. I used Web API in place of WCF as Web API is easier and has a richer REST environment. But WCF REST should work if it's configured correctly. First I configured IIS to host the Web API project from the project properties in Visual Studio. Next I enabled CORS in the Web API project. I verified the Web API app using the MVC app and Chrome same as the tutorial.

@{ ViewBag.
);
}).fail(function (jqXHR, textStatus, errorThrown) {
    $('#value1').text(jqXHR.responseText || textStatus);
});
}
</script>
}

From this point, I followed this tutorial but replaced the weather service with my Web API service. The UI and JavaScript is based on the MVC test app above.

My index.html. Note: I updated the <meta http-equiv="Content-Security-Policy"... tag with the service endpoint.

<!DOCTYPE html>
<html>
<head>
    <!-- Customize the content security policy in the meta tag below as needed. Add 'unsafe-inline' to default-src to enable inline JavaScript. For details, see -->
    <meta http-
    <meta http-
    <div>
        <input id="button" type="button" value="Try it" />
        <span id='value1'>(Result)</span>
    </div>
    </div>
    <script type="text/javascript" src="cordova.js"></script>
    <script type="text/javascript" src="scripts/platformOverrides.js"></script>
    <script type="text/javascript" src="scripts/index.js"></script>
    <script src="scripts/jquery-3.3.1.js"></script>
    <script src="scripts/jquery.mobile-1.4.5.js"></script>
    <script src="scripts/webapiclient.js"></script>
</body>
</html>

Modified default index.js

// For an introduction to the Blank template, see the following documentation:
//
// To debug code on page load in cordova-simulate or on Android devices/emulators: launch your app, set breakpoints,
// and then run "window.location.reload()" in the JavaScript Console.
(function () {
    "use strict";

    document.addEventListener( 'deviceready', onDeviceReady.bind( this ), false );

    function onDeviceReady() {
        // Handle the Cordova pause and resume events
        document.addEventListener( 'pause', onPause.bind( this ), false );
        document.addEventListener('resume', onResume.bind(this), false);
        $('#button').click(invokeWebAPI);
    }

    function onPause() {
        // TODO: This application has been suspended. Save application state here.
    }

    function onResume() {
        // TODO: This application has been reactivated. Restore application state here.
    }
})();

The webapiclient.js file is custom and contains the jQuery AJAX code.

function invokeWebAPI() {
    var serviceUrl = '';
    $.ajax({
        type: 'GET',
        url: serviceUrl
    }).done(function (data) {
        $('#value1').text(data);
    }).fail(function (jqXHR, textStatus, errorThrown) {
        $('#value1').text(jqXHR.responseText || textStatus);
    });
}

I connected my android device using -> Then ran the app on the device, clicked the button and "VALUE1, VALUE2" rendered to the UI.

namespace WebApiService.Controllers
{
    [EnableCors(origins: "*", headers: "*", methods: "*")]
    public class ValuesController : ApiController
    {
        // GET api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }
    }
}

May 13, 2018 11:15 PM|bruce (sqlwork.com)|LINK because the simulator is running on your box "localhost" will work (reference the website). but when you run on a real device, localhost is that device. you should use the ip address of the box, not localhost, to access it remotely. the device must be on your local lan, and you will probably need to open the firewall for your custom port number.

May 14, 2018 06:51 PM|wrappingduke|LINK Hi Bruce, Thanks for the reply. It's appreciated. I think I need to change to the ip address as well but I was hesitant to do so for two (2) reasons:

May 14, 2018 07:12 PM|PatriceSc|LINK Hi, My first move would be to check the server side IIS log. Do you see 404 errors? For now I'm trying to understand if the http query reaches the expected web site. My understanding is that you are trying to reach a web service from a real device (you are on the same network?) which runs a Cordova app. If you need further help a dedicated Cordova forum could be better.

5 replies Last post May 16, 2018 09:17 PM by wrappingduke
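bruce's point can be made concrete with a small helper. serviceUrl and the host values below are placeholders for your own setup, not part of the original code:

```javascript
// On a real device, "localhost" is the device itself, so the AJAX URL
// must point at the development box's LAN address instead.
function serviceUrl(host, port, path) {
  // normalise the path so callers can pass "api/values" or "/api/values"
  return "http://" + host + ":" + port + "/" + path.replace(/^\//, "");
}

// Works in the simulator (same machine as the service):
console.log(serviceUrl("localhost", 4000, "api/values"));
// → http://localhost:4000/api/values

// On the device, swap in the dev box's LAN IP (and open the firewall
// for the custom port):
console.log(serviceUrl("192.168.1.20", 4000, "/api/values"));
// → http://192.168.1.20:4000/api/values
```

The `adb reverse tcp:4000 tcp:4000` trick in the first post works around this by tunnelling the device's localhost:4000 back to the dev machine, which is why Chrome on the device could reach the service while the packaged app could not.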
https://forums.asp.net/t/2140733.aspx?Access+web+service+from+device
Created on 2006-03-20 12:37 by aimacintyre, last changed 2006-06-29 02:29 by sf-robot.

Platform default thread stack sizes vary considerably. Some are very generous (Win32: usually 1MB; Linux: 1MB, sometimes 8MB). Others are not (FreeBSD: 64k). Some platforms have restricted virtual address space (OS/2: 512M less overhead) which makes hard coding a generous default thread stack size problematic. Some platforms commit thread stack address space, even though the memory backing it may not be committed (Windows, OS/2 at least). Some applications have a thirst for stack space in threads (Zope). Some programmers want to be able to use lots of threads, even in the face of sound advice about the lack of wisdom in this approach.

The current approach to stack space management in threads in Python uses a hard coded strategy, relying on the platform having a useful default or relying on the system administrator or distribution builder over-riding the default at compile time.

This patch is intended to allow developers some control over managing this resource from within Python code by way of a function in the thread module. As written, it is not intended to provide unlimited flexibility; that would probably require exposing the underlying mechanism as an option on the creation of each thread. An alternative approach to providing the functionality would be to use an environment variable to provide the information to the thread module. This has its pros and cons, in terms of flexibility and ease of use, and could be complementary to the approach implemented.

The patch has been tested on OS/2 and FreeBSD 4.8. I have no means of testing the code on Win32 or Linux, though Linux is a pthread environment as is FreeBSD. Code base is SVN head from a few hours ago. A doc update is included. While I would like to see this functionality in Python 2.5, it is not a critical issue. Critique of the approach and implementation welcome.
Something not addressed is the issue of tests, primarily because I haven't been able to think of a viable testing strategy - I'm all ears to suggestions for this.

Logged In: YES user_id=55188

I'm all for this! The FreeBSD port has maintained a local patch to bump THREAD_STACK_SIZE. The patch will lighten FreeBSD users' burden around thread stack size. BTW, the naming, "thread.stack_size" seems to miss a verb while all the other functions on the thread module have it. How about set_stack_size() or set_stacksize()? Or, how about in the sys module?

Logged In: YES user_id=250749

Thanks for the comments. As implemented, the function is both a getter and (optionally) a setter which makes attempting to use a "get"/"set" prefix awkward. I chose this approach to make it a little simpler to support temporary changes. I did consider using a module attribute/variable, but it is slightly more unwieldy for this case:

    old_size = thread.stack_size(new_size)
    ...
    thread.stack_size(old_size)

vs

    old_size = thread.stack_size
    thread.stack_size = new_size
    ...
    thread.stack_size = old_size

or (using get/set accessors)

    old_size = thread.get_stacksize()
    thread.set_stacksize(new_size)
    ...
    thread.set_stacksize(old_size)

I think an argument can be made for passing on the "get"/"set" naming consistency based on the guidelines in PEP 8. While I have a preference for what I've implemented, I'm more interested in getting the functionality in than debating its decor. If there's a strong view about these issues, I'm prepared to revise the patch accordingly.

I don't believe that the functionality belongs anywhere else than the thread module, except possibly shadowing it in the threading module, as it is highly specific to thread support. The sys module seems more appropriate for general knobs, and only for specific knobs when there is no other choice IMO. Doing it outside the thread module also complicates the implementation, which I was trying to keep as simple as I could.
Logged In: YES user_id=21627

Usage of pthread_attr_setstacksize should be conditional on the definition of _POSIX_THREAD_ATTR_STACKSIZE, according to POSIX. Errors from pthread_attr_setstacksize should be reported (POSIX lists EINVAL as a possible error). I think PTHREAD_STACK_MIN should be considered. The documentation should list availability of the feature, currently Win32, OS/2, and POSIX threads (with the TSS option, to be precise). If some platforms have specific additional requirements on the possible values (e.g. must be a multiple of the page size), these should be documented, as well. Apart from that, the patch looks fine.

Logged In: YES user_id=250749

1) wrt _POSIX_THREAD_ATTR_STACKSIZE, I'll look at that (though I note its absence from the existing code...)

2) PTHREAD_STACK_MIN on FreeBSD is 1k, which seemed grossly inadequate for Python (my impression is that 20-32k is a fairly safe minimum for Python). In principle I don't have a problem with relying on PTHREAD_STACK_MIN, except for trying to play it safe. Any further thoughts on this?

I'm also putting together an environment variable only version of the patch, with a view to getting that in first, and reworking this patch to work on top of that. Thanks for the comments.

Logged In: YES user_id=21627

re 1) Currently, the usage of the stacksize attribute depends on the definition of a THREAD_STACK_SIZE macro. I don't know where that comes from, but I guess whoever defines it knows what he is doing, so that the stacksize attribute is defined on such a system.

re 2) I can accept that Python enforces a minimum above PTHREAD_STACK_MIN; it shouldn't be possible to set the stack size below PTHREAD_STACK_MIN, since that *will* fail when a thread is created.

-1 for an environment variable version. What problem would that solve?
If this patch gets implemented, applications can define their own environment variables if they think it helps, and users/admins can put something in sitecustomize.py if they think there should be an environment variable controlling the stack size for all Python applications on the system.

Logged In: YES user_id=250749

I have updated the patch along the lines Martin suggested. I have omitted OS/2 from the list of supported platforms in the doc patch as I haven't added OS/2 anywhere else in the docs. My thinking has been that OS/2 is a 2nd tier platform, and I have kept an extensive port README file in the build directory (PC/os2emx) documenting port specific behaviour. The idea with the environment variable version was that it would be less "intrusive" a change from the user POV.

Logged In: YES user_id=31435

The patch applies cleanly on WinXP, "and works" (I checked this by setting various stack sizes, spawning a thread doing nothing but a raw_input(), and looking at the VM size under Task Manager while the thread was paused waiting for input -- the VM size went up each time roughly by the stack-size increase; finally set stack_size to 0 again, and all the "extra" VM went away).

Note that Python C style for defining functions puts the function name in the first column. For example,

    static int
    _pythread_nt_set_stacksize(size_t size)

instead of

    static int _pythread_nt_set_stacksize(size_t size)

The patch isn't consistent about this, and perhaps it's erroneously ;-) aping bad style in surrounding function definitions.

This should really be exposed via threading.py. `thread` is increasingly "just an implementation detail" of `threading`, and it actually felt weird to me to write a test program that had to import `thread`.

Logged In: YES user_id=250749

Thanks Tim. My default action is to try and match the prevailing style, but cut'n'paste propagated the flaw.
thread_pthread.h was clean AFAICS, so I'll do a style normalisation (as a separate checkin) on thread_nt.h and thread_os2.h when commit time comes. As an "implementation detail", I hadn't considered that exposing it via threading was appropriate. I can see 2 approaches:

- a simple shadow of the function as a module level function; or
- a classmethod of the Thread class.

Any hints on which would be the more preferable or natural approach?

Logged In: YES user_id=31435

Right, this one: "a simple shadow of the function as a module level function". If it affects all threads (which it does), then a module function is a natural place for it. If I saw a method on the Thread class, the most natural (to me ;-)) assumption is that a_thread.stack_size(N) would set the stack size for the specific thread `a_thread`, but not affect other threads. Part of what makes that "the most natural" assumption is that Thread has no class or static methods today. As a module-level function, no such confusion is sanely possible. Sticking "stack_size" in threading.__all__, and adding

    from thread import stack_size

to threading.py is all I'm looking for here. Well, plus docs and a test case ;-)

Logged In: YES user_id=250749

Ok, v3 includes the additions to the threading module, tests in both test_thread and test_threading, and docs in both the thread and threading modules (duplicated as I don't know how to do the LaTeX linking). If there are no other issues needing to be addressed, I propose to check these changes in sometime on the weekend of June 3-4 or thereabouts to get in a bit before the beta release.

Logged In: YES user_id=250749

Checked in to trunk after further revision in r46919, with minor test tweaks in r46925 & r46929.

Logged In: YES user_id=1312539

This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker).
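The getter/setter idiom discussed in the thread survives in modern Python as threading.stack_size(). A minimal sketch (the 256 KiB value is arbitrary; platforms impose their own minimums and granularity):

```python
import threading

# stack_size() is both getter and setter, as argued in the thread: with
# no argument it returns the current value, with an argument it sets the
# size used for threads created *afterwards*.
old = threading.stack_size()        # 0 means "platform default"
threading.stack_size(256 * 1024)    # ask for a 256 KiB stack

result = []
t = threading.Thread(target=lambda: result.append(sum(range(100))))
t.start()
t.join()

threading.stack_size(old)           # restore, per the old_size idiom above
print(result[0])                    # → 4950
```

Note the setter returns the previous value, which is what makes the temporary-change pattern (save, set, restore) a one-liner on each side.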
http://bugs.python.org/issue1454481
crawl-002
refinedweb
1,617
62.88
Welcome to the first workshop session of the AI-Radiology masterclass. Today, we will review the basics of image processing for computational anatomy as we learn: How to register shapes using gradient descent. (Bonus:) The maths behind the JPEG standard (photos). This web-based Jupyter notebook is meant to be interactive and user-friendly. Feel free to type your own code in the cells, and to run it using "Ctrl+Enter" or the toolbar's "Run" button. N.B.: The Python syntax used here should be simple enough for you to follow this workshop session without hassle. Comments, written at the end of each line after the # symbol, describe in simple words the intention behind the code. If you don't understand something, "google is your friend"... And I'm here to answer your questions! %matplotlib inline import center_images # Center our images import matplotlib.pyplot as plt # Graphical display import numpy as np # Numerical computations from imageio import imread # Load .png and .jpg images Thanks to the Python keyword import, we can now use : plt= plot curves and images, np= numeric computations with python, imread= image reader, as new keywords in the cells below. For instance, the proper syntax to read images from the current folder reads: I = imread("data/smiley.png", as_gray=True) # Import a png image as a grayscale array As far as Python is concerned, I now denotes a variable that can be displayed, modified and saved. For instance, the print(...) function allows us to display variables as text: print(I) # Print the variable 'I' in the space below: [[255. 0. 0. 0. 0. 0. 255.] [ 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 255. 0. 255. 0. 0.] [ 0. 0. 0. 192. 0. 0. 0.] [ 0. 128. 0. 0. 0. 128. 0.] [ 0. 0. 128. 128. 128. 0. 0.] [255. 0. 0. 0. 0. 0. 255.]] As evidenced here, Python understands our image as an array of numbers. Far from seeing anything, our computer represents this raw data as a 7-by-7 tabular of positive numbers ranging from 0 to 255. 
To get a meaningful display, we have to use higher-level routines provided by the matplotlib.pyplot package: def display(im): # Define a new Python routine """ Displays an image using the methods of the 'matplotlib' library. """ plt.figure(figsize=(8,8)) # Create a square blackboard plt.imshow(im, cmap="gray", vmin=0, vmax=255) # Display 'im' using gray colors, # from 0 (black) to 255 (white) display(I) # Use our custom function 'display' to visualize the (7,7) array 'I' Just as in Excel, we can read and modify the value of any cell of an array. The syntax is straightforward: the alias array[row, column] can be used to read and write data. Beware: in Python (as in most programming languages), we start counting from 0 instead of 1. print( I[0,0] ) # Value in the 1st line, 1st column (indices start from 0) 255.0 print( I[4,1] ) # 5th line, 2nd column 128.0 print( I[-1,-2] ) # last line, penultimate column 0.0 I[2,1] = 200 # Change the value in the 3rd line, 2nd column... display(I) # and display! We can also access lines, columns and custom ranges using the range syntax start:end:stepsize instead of explicit numbers. These three integers are set by default to start = 0, end = max, stepsize = 1 and can be omitted when needed. Most importantly, the start is always included, and the end is always excluded. print("Full range:") print( I[:,:] ) print("") print("Column 3 (= the 4th one, as we start counting from 0):") print( I[:,3]) print("") print("Last row:") print( I[-1,:]) Full range: [[255. 0. 0. 0. 0. 0. 255.] [ 0. 0. 0. 0. 0. 0. 0.] [ 0. 200. 255. 0. 255. 0. 0.] [ 0. 0. 0. 192. 0. 0. 0.] [ 0. 128. 0. 0. 0. 128. 0.] [ 0. 0. 128. 128. 128. 0. 0.] [255. 0. 0. 0. 0. 0. 255.]] Column 3 (= the 4th one, as we start counting from 0): [ 0. 0. 0. 192. 0. 128. 0.] Last row: [255. 0. 0. 0. 0. 0. 255.] 
print("Rows 2 (included) to 5 (excluded):") print( I[2:5,:]) print("") print("Rows 0, 2, 4, 6:") print( I[::2,:]) print("") print("Sub-sampling according to a 2x2 pattern:") print( I[::2,::2]) Rows 2 (included) to 5 (excluded): [[ 0. 200. 255. 0. 255. 0. 0.] [ 0. 0. 0. 192. 0. 0. 0.] [ 0. 128. 0. 0. 0. 128. 0.]] Rows 0, 2, 4, 6: [[255. 0. 0. 0. 0. 0. 255.] [ 0. 200. 255. 0. 255. 0. 0.] [ 0. 128. 0. 0. 0. 128. 0.] [255. 0. 0. 0. 0. 0. 255.]] Sub-sampling according to a 2x2 pattern: [[255. 0. 0. 255.] [ 0. 255. 255. 0.] [ 0. 0. 0. 0.] [255. 0. 0. 255.]] Exercise: Draw a "T" using the methods presented above. T = np.zeros((7,7)) # 7-by-7 array filled with zeros # Your turn ! # ================= T[2,3] = 200 # etc. # ================= display(T) Solution : T = np.zeros((7,7)) # 7-by-7 array filled with zeros # Your turn ! # ================= T[1,1:6] = 200 # 2nd line, 2nd to 6th columns T[2:6,3] = 240 # 3rd to 6th lines, 4th column # ================= display(T) To remember: At a low level, images are encoded as numerical arrays. To handle and display them, engineers use pre-packaged routines that can be accessed through the import keyword.
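As a recap of the indexing rules used throughout this notebook (start included, end excluded, optional step), here is a small self-check on a plain NumPy array — `a` below is a throwaway example, not part of the smiley image:

```python
import numpy as np

a = np.arange(49).reshape(7, 7)  # 7-by-7 array holding the values 0..48

# Rows 2 (included) to 5 (excluded): three rows survive
print(a[2:5, :].shape)    # (3, 7)

# Every other row and column: the 2x2 sub-sampling pattern
print(a[::2, ::2].shape)  # (4, 4)

# Negative indices count from the end:
# last row, penultimate column
print(a[-1, -2])          # 47
```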
http://jeanfeydy.com/Teaching/MasterClass_Radiologie/Part%201%20-%20Working%20with%20images.html
CC-MAIN-2021-17
refinedweb
905
69.82
Ubuntu :: Looking For Alternative To Gwibber? Jun 23, 2010

Does anyone know of an alternative to Gwibber that can connect with Facebook and do what Gwibber claims it can do? View 6 Replies

I used to use Gwibber as a social networks aggregator/client, but starting since the last update (current version 3.1.0) it crashes miserably on both Fedora 14 and Fedora 15. The Gwibber team looks reluctant to provide any feedback on whether this shall be fixed in the foreseeable future. Installing Gwibber from the last stable sources gives the same crash.

After a fresh installation of Ubuntu 10.10 I uninstalled Empathy (I'm using Skype). The Empathy menu in the evolution indicator was gone after that. I also uninstalled Gwibber. But the Gwibber status indicator remains within the about-me panel... Uninstalling the gwibber package also removes the me-menu, which I actually want to keep. Especially the shutdown/restart... part of it. See attachment. I know there is a shutdown panel-app... but it's butt-ugly and it doesn't allow you to log on/off. In another thread I read that the Gwibber status shouldn't be there if no account for it is configured... which it is not... I do use Ubuntu One though, which is in the same menu... I saw that this issue was addressed at some bug report before, but not fixed yet... Does anybody know a good workaround to get rid of the Gwibber status in the panel while keeping the about-me and the shutdown button?

Gwibber in 9.10 won't run. From the command line it gives code... View 5 Replies View Related

I installed Ubuntu 10.04 today... but when I clicked on "broadcast accounts" nothing happened at all. I tried to run Gwibber manually but again nothing happened at all... the "chat accounts" are working fine... the problem is only with Gwibber... I searched different forums and found that some users are experiencing this problem and others are not. View 4 Replies View Related
I cannot add Facebook to Gwibber. View 9 Replies View Related

I recently updated my Lenovo laptop to Lucid Lynx, and everything works great; however, Gwibber will not open. When run from the terminal I get the following errors:

** (gwibber:4031): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'
** (gwibber:4031): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'
** (gwibber:4031): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'
** Message: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files
Traceback (most recent call last):
  File "/usr/bin/gwibber", line 50, in <module>
    from gwibber import client
.....

Lucid is a damn good release and, in my opinion, one of the best to date. If I have to rate Lucid on its OOB (Out Of Box) factor, I would rate it 9.7/10. I have an AMD based laptop with an ATI graphics chipset and this used to be a major headache (ATI's fault of course) previously, and I was gearing up for another hour or so of installing the video drivers from ATI and getting it to work. But, surprisingly, it just worked out of the box with visual effects enabled. Every component on my machine worked as it ought to. I can't say enough about the interface overhaul. It's like an F16 got upgraded with some slick alien technology. Ubuntu 10.04 is surely a GOLD standard for all Linux distros out there. Coming to this minor issue: I am unable to add the Facebook account in the Me Menu via Gwibber. It takes the login credentials and prompts me to allow or deny publishing to Facebook. Clicking on allow doesn't do anything; it just sits there.

I've recently upgraded to Ubuntu 10.04 and I've got to say that for me (been using Ubuntu since 9.04) this has to be the best release yet; however, one of the main added features doesn't successfully work for me?
Unless I'm doing it wrong? So I'll list what I do, just in case the problem is me. Firstly, I access Gwibber (either from the Me Menu (I think that's what it's called) and click broadcast account, or access it through the applications menu), then select Facebook from the drop-down menu. I then click authorise and take all the steps needed to authorise Gwibber (logging in, accepting etc.) and it says Facebook Authorised. I then click Add and it just highlights the box and doesn't actually do anything? So I would like to know if I am doing something wrong while trying to authorise it and add the account, or if it just isn't working for me at the moment?

I tried installing Gwibber on Ubuntu 9.10, but when I start the program, it just goes grey and requires a force-quit. Running it in the terminal produced the following errors:

Code: /usr/lib/python2.6/dist-packages/gwibber/microblog/support/facelib.py:47: DeprecationWarning: the md5 module is deprecated; use hashlib instead [code]....

Twitter is working with my Gwibber but Facebook isn't working. It just shows blank on the stream. View 5 Replies View Related

I have just started using Gwibber and it is only showing the main messages and a few replies to messages, but not all. Is there a way to show all messages, including all replies? I am using Facebook with it.

I have tried removing it from Ubuntu Software Center and Synaptic Package Manager, but when I reinstall Gwibber, it loads directly from the software list and does not download from the internet. I want to completely remove Gwibber, so that if I reinstall Gwibber, my system downloads it from the internet and does not install from the previous installation's database. View 7 Replies View Related

I have a problem with Gwibber. It has emerged randomly for no apparent reason, as I have not changed any settings. All of a sudden, when updating my status on Facebook and I click send, Gwibber appears to work for a while, and then just nothing happens. My status does not get updated. It works fine with Twitter.
It's only Facebook. I have checked my application settings in Facebook and Gwibber has full permission to post. I tried setting up the account again in Gwibber, but this just opened a can of worms, because I had to reinstall Gwibber and delete all of the config files in order to reinstate my Facebook account (see URL...) due to a bug in Gwibber, which was a pain.

For some odd reason, Gwibber won't open at all. The process is listed as "Sleeping", but every time I try to open the app, nothing happens. I tried re-installing it, and even deleting it via Synaptic and installing again, but to no avail. What do I do?

The text entry box that you can normally type a message into when you click on your name in the top right of the screen is missing. It's there on my laptop, and I set up a couple of broadcast accounts and then noticed it was missing on my desktop. I've uninstalled and reinstalled Gwibber and it's still not there. View 2 Replies View Related

What's up with Gwibber not authorizing Twitter? I think this has to do with Twitter's OAuth mechanism. I know there was a big fuss about it. I was sure that 10.04 received an update to fix this. View 4 Replies View Related

My Twitter account does not show up in Gwibber. I added the account but nothing shows up. I'm running Ubuntu 10.04 with all the updates. View 2 Replies View Related

I haven't ever had this working as I've never tried before today. Gwibber won't post to my ping.fm account. I'm certain the correct username and web-key have been entered (though Gwibber doesn't tell you one way or the other). It doesn't matter if I try to post via the Gwibber interface or the indicator-applet-session textbox... neither works. No error messages.
- I know Gwibber is working because it will post to Twitter, Facebook, Identica, etc.
- I know my ping.fm account is working because it will post directly from the website and/or browser extension.

I have been using Gwibber and Empathy for a while.
Does anyone know alternatives to the apps above which support Windows? Note: Windows Live Messenger isn't such an option, because it doesn't support any other IM networks. And I don't know of any Windows apps which connect to social networks like Facebook and Twitter.

I'm running 11.04 with GNOME 3. Gwibber is updating fine with Twitter, but Facebook claims the account is authorized and gives me nothing! I don't see any updates and can't send anything out either. View 3 Replies View Related

I just added the PPA for Gwibber stable and I installed Gwibber 3.0.0.1, because the default Gwibber on Ubuntu 10.04.2 LTS doesn't let me add a Twitter account. Now, after launching the application, View 6 Replies View Related

Gwibber won't start in Ubuntu Natty. It had previously been running fine but yesterday it failed to start. Starting it from a Terminal gives code... View 2 Replies View Related

Has anyone had success with integration of Gwibber and Flickr, and if you did, how did you do it? The actual text: I tried adding my Flickr account to Gwibber in Ubuntu 10.04 Lucid Lynx and in the end I am not sure if it works or not. There is a documented bug related to Gwibber and Flickr. [URL] Now I did check my screen name (which is naggobot) and I used that and also every other id I could imagine, including the direct http link [URL] and the Yahoo email address that is tied to the Flickr account. In the end I did manage to get some input to the Gwibber window from the Flickr account, but that input was from the photo stream of a contact of mine. So my question is: has anyone had success getting their photostream from Flickr to Gwibber? If you succeeded in this, how did you do it, i.e. what did you fill in to the user name field? Please be super accurate.

I am trying to set up my Facebook account with Gwibber.
I click on add account and authorise my Facebook account with Gwibber successfully, but then when I close this window and go to the main Gwibber window, there is nothing showing, so I go back to the accounts page, and there are no accounts listed there. View 7 Replies View Related

I cannot run Gwibber by clicking it in the main menu; although I have clicked Gwibber, the application doesn't run. I have been reinstalling Linux using Remastersys. View 1 Reply View Related

I thought I would give Gwibber a try, so I set up my Twitter account with it. Seemed to work just fine. However, when I deleted the Twitter account from Gwibber, the messages I previously downloaded are still in the main window. I've checked for ".gwibber" in the home folder, but there's not one. So, I did a search on the machine for all Gwibber folders, and none had any message history in it. View 2 Replies View Related
https://linux.bigresource.com/Ubuntu-Looking-for-Alternative-to-Gwibber--8r97DITFK.html
CC-MAIN-2021-17
refinedweb
1,953
74.39
Bring your app to life using Angular animations

Animations are a great way to improve the UX in an app. Just look at Google's Material Design! With Angular Animations, you can easily add some nice effects to your app. This article will show you how to get started. Before we can talk about what Angular Animations are all about, it's important to talk about why animations are important in the first place. Take a look at Material Design by Google. One of the core principles of the design language is that motion provides meaning. This is the idea that motion can help draw the user's focus and provide feedback. This is nothing new; the idea has been around in the realm of traditional animation for decades. Motion is the mechanism used to convey intentions to a viewer through the actions of the character on screen. These movements allow us to understand what the animated object was going to do. Put simply, it creates the context for a particular action. This concept can be applied to our apps, as we can use motion to create context and convert user interaction into awesome user experiences. Angular Animations is built on top of the Web Animations API and runs natively on browsers that support it. If needed, it has the CSS keyframes fallback. So, how do we get started creating these awesome user experiences? First, we need to import the Angular animations module. Essentially, Angular Animations is a DSL. By importing the BrowserAnimationsModule from @angular/platform-browser/animations, we can get started adding some cool animations in our app. After importing it, we configure it in our NgModule as shown below.

import { BrowserAnimationsModule } from '@angular/platform-browser/animations';

@NgModule({
  declarations: [
    AppComponent,
  ],
  imports: [
    BrowserModule,
    BrowserAnimationsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Animations are linked using the `trigger` property.
This trigger starts with a `@` and is placed on the component for animating. When a property is changed for a trigger or an element is taken on or off the stage, the underlying animation is then triggered. In the example below, it will trigger the animation every time the length of the images changes.

<section [@fade]="images.length" class="gallery-block compact-gallery">
  <div class="container">
    ...
  </div>
</section>

The animation will start for the element with the 'fade' trigger when there is a value change of some kind. For a case with enter and leave, `ngIf` can add or remove the element from the DOM, which will trigger the animation as well. Now, we need to define the animation itself. This happens in the metadata of the component; we will add the animations property in the decorator. Animations are configured as a series of transitions, so within our animations array, we have organized our different states for the animation (i.e. fadeIn and fadeOut). We can define the styles which should be presented in those states inside of them. After the states, we define the transitions which describe how the styles change when we move between the defined states. A `void` transition is one where the element is either added or removed from the DOM. `*` represents any state; this is useful if the elements are changing directions multiple times.

@Component({
  ...
  animations: [
    trigger('fade', [
      transition('void => *', [
        style({ opacity: '0' }),
        animate('300ms ease-in', style({ opacity: 1 }))
      ]),
      transition('* => void',
        animate('300ms ease-out', style({ opacity: 0 })))
    ])
  ]
})
export class GalleryComponent {
  ...
}

How do we define the style changes? We use the `style` and `animate` functions. These are essentially the functions which do all the work. If we use just the style function and pass in the object with CSS properties and values, it will apply the style immediately. If we pass in animate and style (or keyframes), then it will perform that over a period of time.
For example, to set a starting state with a fadeIn animation, your initial style will set `opacity: 0`. Then, we apply the animate function with the appropriate time when we are ready to transition, and the style is set to `opacity: 1`. The code above shows an example where we are transitioning between the components moving into the screen and out of the screen again.

<section [@fade]="images.length" (@fade.start)="triggerTangent($event)">
  <div class="container">
    ...
  </div>
</section>

There will be times when we will want to get feedback on the different states the animation is in. For this, we have animation callbacks available. Using the syntax defined above, we can define these life cycle hooks, like when the animation started and when it ended, so we can then perform any additional actions if required. Each callback emits an animation event, which has all the information about the animation, like its name, time, fromState, toState, and even the phaseName which it is currently in. This completes a high-level architecture of the Angular Animations module. However, the animation modules go well beyond this and offer some more complex features as well. These include `query`, `group`, and `stagger`.

@Component({
  ...
  animations: [
    trigger('fade', [
      transition('void => *', [
        query(':enter', [
          style({ opacity: '0' }),
          stagger(30, [
            animate('300ms ease-in', style({ opacity: 1 }))
          ])
        ])
      ]),
      transition('* => void',
        animate('300ms ease-out', style({ opacity: 0 })))
    ])
  ]
})
export class GalleryComponent {
  ...
}

`query` is used to target specific elements within the parent component and apply animations to them. The best part is that Angular handles setup, tear down, and clean up for these elements as they are being coordinated across the page. For example, I have a list of images in a gallery. If I want to target each individual element of the gallery, I can use the query to animate all of them.
Or, as in the example above, I can target all the (child) elements that are in view. `stagger` is something you use with query. As the word suggests, it staggers multiple elements as their style changes; in other words, it spaces out the animation of each element to create a strong impact. This is often bundled with `group`, which allows you to group multiple animations together; `group` lets users create complex and beautiful animations in a few lines of code. The example above is animating a list into view; the ':enter' query targets the elements which have just been created on the DOM and are coming into the canvas. There are even more complex tools like `animateChild` and `AnimationBuilder`, which only really come out when you need to do some serious custom work. The basic idea behind the Angular Animations module is that it should be your de facto standard for animating your app, whether it's a simple fade-in animation or a complex orchestration of multiple elements on the whole page.

Mashhood Rastgar will deliver one talk at the International JavaScript Conference.

asap really? an animation article without a demo to show what it looks like?
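The `(@fade.start)` binding shown earlier calls a component method named `triggerTangent`; the article never shows its body, so here is a hypothetical sketch of such a handler, using a minimal local stand-in for Angular's `AnimationEvent` type so the snippet stands alone:

```typescript
// Local stand-in for Angular's AnimationEvent (the real type lives in
// @angular/animations); only the fields mentioned in the article.
interface AnimationEventLike {
  triggerName: string;
  fromState: string;
  toState: string;
  phaseName: string;
  totalTime: number;
}

class GalleryComponent {
  lastPhase = "";

  // Hypothetical handler for the (@fade.start) binding shown above.
  triggerTangent(event: AnimationEventLike): string {
    this.lastPhase = event.phaseName;
    return `${event.triggerName}: ${event.fromState} => ${event.toState}` +
      ` (${event.phaseName}, ${event.totalTime}ms)`;
  }
}
```

Binding `(@fade.done)` to the same handler would report `phaseName` as `'done'` once the transition finishes.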
https://jaxenter.com/bringing-app-life-using-angular-animations-149168.html
CC-MAIN-2021-31
refinedweb
1,129
54.93
Computer Science Archive: Questions from September 19, 2009

Anonymous asked: Assume that the user population of Anagon is 12 and the population of Bregen is 18. Using the costs as given, derive a design with the goal of optimizing costs, given that a blocking rate of 2% is acceptable for calls between the two sites.

Each PSTN line cost - $25/month
Local - $0.05/minute
Long distance call - $0.40/minute
PBX - $2000 purchase cost
Leased line - $275/month
Total working days - 21.66 days/month

1) Each employee calls employees at the other office 4 times a day and each call lasts about 5 minutes on average
2) Each employee calls employees in the same office 10 times a day and each call lasts about 3 minutes on average

a. What is the cost of the PSTN (straightforward) solution?
b. Perform the Erlang calculation
c. What is the total number of states in the state transition diagram
d. How many lines are needed in order to have a blocking of less than 2%?
e. Final Design

1 answer

Anonymous asked: The basic rules of dice games (craps) are as follows. If the player bets "for" him/herself:
* If the first roll of the dice results in a 7 or 11, then the player immediately wins the amount of the bet;
* If the first roll of the dice results in a 2, 3 or 12, then the player immediately loses the amount of the bet;
* If the first roll is a number other than 2, 3, 7, 11 or 12, the number that is rolled is called the player's "point." In addition, the player may increase ("press") the amount of his/her bet at this point. For purposes of this game, the player should be allowed to double the amount of the bet.
* At this point, the player's goal is to roll the same "point" again *before* rolling a 7 (called "sevening out").

If the player bets "against" him/herself, the result is basically a mirror image of the above.
Here are the rules:
* If the first roll of the dice results in a 2, 3 or 12, then the player immediately wins the amount of the bet;
* If the first roll of the dice results in a 7 or 11, then the player immediately *loses* the amount of the bet;
* If the first roll is a number other than 2, 3, 7, 11 or 12, then the player who bets against him/herself is betting that s/he will roll a 7 *before* rolling the "point" a second time. If the 7 is rolled, the player wins; if the point is rolled first, the player loses. As when betting for him/herself, the player should be allowed to double the amount of the bet if he/she wishes.

When the player has either "sevened out" or "made the point," the game is completed, and a new game begins. If the player won, s/he must decide whether to pick up his/her chips and quit or to play another game.

The program should give the player an initial betting billfold of $100.00. The minimum bet is $5.00. For purposes of this game, there is no maximum bet, other than the amount of money available. Design and implement several functions in order to complete this game. You need to decide on appropriate variables in which to store the player's bank roll (in order to keep track of how much money the player has) and how much s/he won or lost in the most recent game. This bank roll should be kept up to date on the player's current status, which means that wins should be added and losses should be subtracted from the bankroll as the user plays the game. After each game, the program must report the result of the game, the amount of money won or lost, and the current value of the bank roll. After each game, the program should allow the player to continue playing until s/he chooses to quit, or until the player runs out of money. This central program control may be done within main().

"Rolling" the dice: A separate function will be used to "roll" the dice. This function will contain two die variables.
Initialize each variable separately using a random number generator. The possible values are one through six, corresponding to the six sides of a regular die. This function will return the sum of the two variables. Also, the function will print the value of each die after it is rolled (but not the total). Use the rand random number generator to generate random numbers for use in the program.

"Playing" the game: A second function will be used to play a single game of craps until the player either wins or loses a bet, based upon the rules given above. This function should be passed the current $ amount of the player's bank roll and should return the amount of money won or lost for that particular game. Within the function, the player is asked whether s/he would like to place a bet. If so, the player must choose whether to bet "for" or "against" him/herself (see game rules above). The player then "rolls the dice" (simulated by a call to the dice-rolling function). This should be done interactively (via a key press from the player), rather than simply having the program continuously roll the dice until the game is over. After each roll, this function should report the total value of the dice (after receiving this value from the dice-rolling function). If, after the first roll, the game is not over, the player should be asked whether s/he would like to double the amount of the bet. When a roll causes the end of a game, the player is notified whether s/he won or lost money in that game overall, and the amount won or lost. In all other cases, the player is notified that s/he needs to roll again. Additional functions may be necessary and/or appropriate, depending upon the final overall design of the program.

Optional Improvement: In a real dice game, when you complete a game, you don't necessarily pick up your chips if you intend to keep playing. Instead, sometimes you "let it ride" (leave your current bet+winnings on the table as your bet for the next game).
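The first-roll rules described above reduce to a small classification. A sketch in Python (the assignment itself asks for C-style functions using rand; this is just the core decision logic, with `bet_for` distinguishing a "for" bet from an "against" bet):

```python
import random

def roll_dice():
    """Roll two dice, print each die, and return their sum (2..12)."""
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    print("dice:", d1, d2)
    return d1 + d2

def first_roll_outcome(total, bet_for=True):
    """Classify the first roll as 'win', 'lose', or 'point' per the rules.

    Betting 'against' simply mirrors the win/lose cases.
    """
    if total in (7, 11):
        return "win" if bet_for else "lose"
    if total in (2, 3, 12):
        return "lose" if bet_for else "win"
    return "point"  # game continues until the point or a 7 is rolled again
```

The game loop, bankroll tracking, and the "press the bet" option would be layered on top of these two functions.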
Tips: Use a separate pass-by-reference variable.5 answers - };• Show less int[] w2 = { 10, 9, 8, 5, 3, 0, -3, -5, -8, -9, -10, -9, -8, -5, -3, 0, 3, 5, 8, 9 }; 1 answer - Anonymous askedWill someone please helpme out with this? i just started in java i tried to write thisprogram but no... Show more Will someone please helpme out with this? i just started in java i tried to write thisprogram but now im more confused than ever. Can someone please show me how this should look? scores=(90,80,40,75,90,100) and i have to Write aprogram that reads the scores from the file and calculates their total andaverage. Then, create a file called 'output.txt, and write the total and average to that file.use Scanner andPrintWriter to do this. : Implement a Triangle class that extends the GeometricObject class(which is at the bottom) Triangle contains: - Three double data fields names side1, side2, side3 withdefault values 1.0 - A no-arg constructor that creates a default triangle - A constructor that creates a rectangle with the specified side1,side2, and side3 - Accessor methods for all three data fields - A method named getArea() that returns the area of thistriangle - A method named getPerimeter() that returns the perimeter of thistriangle - A method named toString() that returns a string description ofthe triangle. This should return "Triangle: side1 = " + side1 + "side2 = " + side2 + " side3 = " + side3; implement the class,and write a test program that creates a Triangle with sides 1, 1.5,1, sets color yellow and filled true, then displays the area,perimeter, color, and whether filled or not. 
public class GeometricObject {
    private String color = "white";
    private boolean filled;
    private java.util.Date dateCreated;

    /** Construct a default geometric object */
    public GeometricObject() {
        dateCreated = new java.util.Date();
    }

    /** Return color */
    public String getColor() {
        return color;
    }

    /** Set a new color */
    public void setColor(String color) {
        this.color = color;
    }

    /** Return filled */
    public boolean isFilled() {
        return filled;
    }

    /** Set a new filled */
    public void setFilled(boolean filled) {
        this.filled = filled;
    }

    /** Get dateCreated */
    public java.util.Date getDateCreated() {
        return dateCreated;
    }

    /** Return a string representation of this object */
    public String toString() {
        return "created on " + dateCreated + "\ncolor: " + color + " and filled: " + filled;
    }
}

2 answers

Anonymous asked: Data analysis and interpretation is one of the integral parts of research. Discuss the various steps involved in data analysis and interpretation.

0 answers

Anonymous asked: Write a program to get a string as input from the user till the Enter key is pressed or the size of the array reaches 15. Display the string if it has length greater than 5.

0 answers
(a-1) For each instruction, determine in which clock cyclethe instruction is safely completed without hazards (i.e., when does its write back stage occur). Fillin the “WB Cycle” column. Show timing in the table below for maximum credit. Inst. No Label Instruction Write Back Cycle (1) ADD R2, R3,R1 cycle 5 (2) SUB R4, R1, R2 (3) SW O(R1), R2 (4) BEQZ R4, foo (5)skip: AND R3, RO,RO XXXXXXX (6)foo: LW R5, 100(RO) (7) ADD R7, R1, R5 (8) AND R2, R7, R1 Cycles Inst 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 1819 20 (1) F D E M W (2) (3) (4) (5) (6) (7) (8) 3 (a-2) What is the throughput (in instructions per cycle) ofthe MIPS pipeline while executing these instructions? (show work) • Show less1 answer - Anonymous askedAssume that a pseudocode program contains the followi... Show moreplease help with this,i am so completely lost:Assume that a pseudocode program contains the followingmodule:Module display(Integer arg1, Real arg2, String arg3)Display "Here are the Values:"Display arg1, " ", arg2, " ", arg3End ModuleAssume that the same program has a main module with thefollowing variable declarations:Declare Integer ageDeclare Real incomeDeclare String nameWrite a statement that calls the display module andpasses these variables to it• Show less0 answers - Anonymous askedUsing the JKF... Show moreHieveryone,Pls help me withthis question, lifesaver rating will definitely beawarded.Using the JKFF, design a ripple counter capable of implementing thecounting sequence asbelow:F(x)={0,1,2,3,4,6,7} and repeatsitself Explain the workings of the circuit withspecial emphasis on the additionalcircuit required to reset thecounter.• Show lessThank you & regards0 answers - accountingman askedWhat is the diference between EIA-232F and USB? Define each in terms of the components of an interfa... More »2 answers - zigtozag askedA Line of Demarcation is the point where the ownership ofequipment changes from the service provider... 
Show moreA Line of Demarcation is the point where the ownership ofequipment changes from the service provider to the customer.TrueFalse• Show less0 answers - zigtozag askedThe goal of a VPN tunnel is to make two ends of a network connection appear as if they are on the sa... More »0 answers - Anonymous askedWrite a program that finds the equation of a line giventhe coordinates of two points, which are ente... Show more Write a program that finds the equation of a line giventhe coordinates of two points, which are entered by the user. Given the coordinates of thefirst point (x1, y1) and second point(x2, y2) entered by the user, we will findthe slope of the line m using the formula: m= You may assume that the pointsentered will be such that will not be equal to zero. Then we willfind the equation of the line using the formula: Display the formula for theline in slope-intercept form Your program must workfor any two points entered by the user. All input and outputshould be done using the standard input/output devices. Do not use dialogboxes.• Show less3 answers - Anonymous asked0 answers - Anonymous asked0 answers - Anonymous asked0 answers - Anonymous asked1 answer - Anonymous asked Problem 2: Can the code of Problem 1 beimplemented as a MIPS pseudoinstruction: Problem 2: Can the code of Problem 1 beimplemented as a MIPS pseudoinstruction: mult $s3, $s1,$s2 # $s3 = $s1 * $s2 Explain howor why?• Show less0 answers - Anonymous askedA procedure uses twelve registers,$8 through $16, and $23 through $25 in its code. This p... Show more Problem 3: A procedure uses twelve registers,$8 through $16, and $23 through $25 in its code. This proceduredoes not make any other procedure calls.A. Write instructions that compiler should include at the beginningand end of the assembly code of the procedure to save and restoreregisters. B. Is there a better way to compile this procedure so thatsaving and restoring of registers will becomeunnecessary? 
• Show less0 answers - Anonymous asked0 answers - Anonymous askedProblem 1: Using minimum number of registers writeassembly code to multiply integers x and y for a M... Show moreProblem 1: Using minimum number of registers writeassembly code to multiply integers x and y for a MIPS computer thathas no multiply instruction. Assume that the product can berepresented as a 32-bit integer and x and y can have any positiveor negative values including 0 • Show less0 answers - Cretan askedWhat I am stuck on is writing this program to where if the personputs in a value outside the range,... Show moreWhat I am stuck on is writing this program to where if the personputs in a value outside the range, it will say "Invalid. Please TryAgain." This program works but this is the only thing I can notfigure out. My work is at the bottom and here is the question. Heres the question: _________________________________________________ Role Permit Type | Sticker Hang tag | student |$100 $150 | staff | $200 $300 | faculty | $300 $450 | Output Specification Output the price of the permit for the user with a statement of thefollowing format: You need to pay $XXX for your parking permit. where XXX represents the price of the user’s parking permitin dollars. Sample Run What role are you? (1=student, 2=staff, 3=faculty) 2 What type of tag do you want? (1=sticker, 2= hang tag) 1 You need to pay $200 for your parking permit. ______________________________________________________________________________ (MY WORK) #include <stdio.h> int main(void) { int role, permit_type, total; printf ("what role are you? ( 1 = student, 2 =staff, 3 = faculty) \n"); scanf ("%d", &role); printf ("What type of tag do you want? 
(1 =sticker, 2 = hang tag) \n"); scanf ("%d", &permit_type); if (role = = 1 && permit_type = = 1) total = 100; else if (role = = 1 && permit_type = =2) total = 150; else if (role = = 2 && permit_type = =1) total = 200; else if (role = = 2 && permit_type = =2) total = 300; else if (role = = 3 && permit_type = =1) total = 300; else if (role = = 3 && permit_type = =2) total = 450; printf ("You need to pay $%d for parkingpermit.", total); getch (); return 0; } • Show less1 answer - sam11235 askedDesign the output and draw a flowchart or write pseudocode for a program that calculates the service... More »0 answers - Anonymous askedtime using Euler's equation to solve... Show moreI'm taking MatLab for the first time and I'm having a very hard time using Euler's equation to solve a problem. I'm supposed tosolve dv/dt = -k*A The problem states that a spherical droplet of liquid evaporates ata rate that is proportional to its surface area dv/dt =-k*A and... V= [(4*p)*(r^3) and... (4)*(pi)*(r^2) where V=volume mm^3, t=time(min), k=the evaporation rate(mm/min),A=surface area(mm^2), and r=radius(mm) Assume a droplet initually has a radius of 3mm and k=0.1mm/min please write a matlab program to compute the droplet volume fromt=0 to 10min using Euler's method. please use the following timestep sizes and also calculate the corresponding percent relativeerrors at t=10min. olution from (a) using time step size = 2min. no ned for percent error in thiscalculation yet (b) using time step size = 1 min. also calculae the percentrelative error and that would = (solution from from b -solution from a ) / solution from b @ t = 10 min. (c) using time step size = 0.5 min. also calculate the percentrelative error and that would = (solution from from c -solution from b ) / solution from c @ t = 10 min. (d) using time step size = 0.25 min. also calculate the percentrelative error and that would = (solution from from d -solution from c ) / solution from d @ t = 10 min. 
(e) using time step size = 0.125 min. also calculate the percentrelative error and that would = (solution from from e -solution from d ) / solution from e @ t = 10 min. for each case please plot the volume vs. time curve and compare allthe percent relative errors at t=10min fom b,c,d, and e. plot apercent relative error vs. time step size curve. My professor said it was correct in solving r^3 = [3/4*pi]*v from V= [(4*p)*(r^3) and then algebraically manipulating it to say r=[(3/4*pi)*v]^1/3 that r can then be plugged into the A which gives me A=4*pi [(3/4*pi) *r)^1/3]^2 I ended up getting A = (4.835975862)*r^2/3 the problem says r is initially 3 meters i did eulers equation on dv/dt = -k*A and plugged in for theA and plugged in for the k I came up with the euler equation below v(ti+1) = v(ti) + (-0.483597586)(3^2/3)(t(i+1)-t(i) thats as far as i seem to know how to get. I'm supposed to doa time step of 0 to 10 and then approximate the error as i do different time steps of things like (0:0.5:10) and (0:1:10) All i seem to know so far from reading on the internet andyoutube >videos is that I will be using a function and a for loop butit's just very confusing for me since this is the first time I haveever used matlab or any computer programming language before :( Ifanybody has some advice or a website to look at that would be greathelp. • Show less0 answers -Create a sequence diagram for making a phone call using a cellphone to another cell phone. If you ar... Show more Create a sequence diagram for making a phone call using a cellphone to another cell phone. If you are not sure cell phonesoperate this would be a good time to learn. () - You do not have to be technically specific – keep yourdiagram very general. Create an activity diagram also showing how to make a cell phonecall from one cell phone to another cell phone. 1 answer - How is the sequence diagram different from the activitydiagram? - Why do you think you would use one diagram instead of theother? 
- When would you need to use both? - Romulas askedfirst holds the nodes in order they... Show moreOkay, I've been trying to copy a singly circular linked nodelist. first holds the nodes in order theyare read in. Each node contains a Point object and a next Node object (to point to the next node in thelist). So, I created a Node called copy and Iinvoked the following: copy = first; However, I've discovered that both copy and first pointto the same reference of the node list. How can I create a separate reference for copy that wouldn't effect the reference forfirst? • Show less2 answers - Anonymous askeddoubleprice,... Show more public class PQT { public static void main(String[] args) { //declare variables doubleprice, total; int quantity; String priceString = "49.99"; String quantityString = "15"; //convert variables double price = (double)priceString; int quantity =(int)quantityString; //calculate total double total = price * quantity ; //display results System.out.println("The price is" + price); System.out.println("The quantity is" +quantity); System.out.println("The total is" +total);}This is a program that I created and I have to run it using javabut when I run it, I get a message that says " Reached end of filewhile parsing String quantityString = "15"; " But I cant findthe error whould anybody please help me out... • Show less }1 answer - Anonymous asked1. When using Google to search forinformation, you can narrow your search results by using more_____1 answer - aznswti85 asked2 answers - SamSan askedpubl... Show morex. /** * LinkedList class implements a doubly-linked list. */ public class MyLinkedList<AnyType> implements Iterable<AnyType> { /** * Construct an empty LinkedList. */ public MyLinkedList( ) { clear( ); } /** * Change the size of this collection to zero. 
*/ public void clear( ) { beginMarker = new Node<AnyType>( null, null, null ); endMarker = new Node<AnyType>( null, beginMarker, null ); beginMarker.next = endMarker; theSize = 0; } /** * Returns the number of items in this collection. * @return the number of items in this collection. */ public int size( ) { return theSize; } public boolean isEmpty( ) { return size( ) == 0; } /** * Adds an item to this collection, at the end. * @param x any object. * @return true. */ public boolean add( AnyType x ) { add( size( ), x ); return true; } /** * Adds an item to this collection, at specified position. * Items at or after that position are slid one position higher. * @param x any object. * @param idx position to add at. * @throws IndexOutOfBoundsException if idx is not between 0 and size(), inclusive. */ public void add( int idx, AnyType x ) { addBefore( getNode( idx, 0, size( ) ), x ); } /** * Adds an item to this collection, at specified position p. * Items at or after that position are slid one position higher. * @param p Node to add before. * @param x any object. * @throws IndexOutOfBoundsException if idx is not between 0 and size(), inclusive. */ private void addBefore( Node<AnyType> p, AnyType x ) { Node<AnyType> newNode = new Node<AnyType>( x, p.prev, p ); newNode.prev.next = newNode; p.prev = newNode; theSize++; } /** * Returns the item at position idx. * @param idx the index to search in. * @throws IndexOutOfBoundsException if index is out of range. */ public AnyType get( int idx ) { return getNode( idx ).data; } /** * Changes the item at position idx. * @param idx the index to change. * @param newVal the new value. * @return the old value. * @throws IndexOutOfBoundsException if index is out of range. */ public AnyType set( int idx, AnyType newVal ) { Node<AnyType> p = getNode( idx ); AnyType oldVal = p.data; p.data = newVal; return oldVal; } /** * Gets the Node at position idx, which must range from 0 to size( ) - 1. * @param idx index to search at. 
* @return internal node corrsponding to idx. * @throws IndexOutOfBoundsException if idx is not between 0 and size( ) - 1, inclusive. */ private Node<AnyType> getNode( int idx ) { return getNode( idx, 0, size( ) - 1 ); } /** * Gets the Node at position idx, which must range from lower to upper. * @param idx index to search at. * @param lower lowest valid index. * @param upper highest valid index. * @return internal node corrsponding to idx. * @throws IndexOutOfBoundsException if idx is not between lower and upper, inclusive. */ private Node<AnyType> getNode( int idx, int lower, int upper ) { Node<AnyType> p; if( idx < lower || idx > upper ) throw new IndexOutOfBoundsException( "getNode index: " + idx + "; size: " + size( ) ); if( idx < size( ) / 2 ) { p = beginMarker.next; for( int i = 0; i < idx; i++ ) p = p.next; &nbõ}bx.øi5sp; else { p = endMarker; for( int i = size( ); i > idx; i-- ) p = p.prev; } return p; } /** * Removes an item from this collection. * @param idx the index of the object. * @return the item was removed from the collection. */ public AnyType remove( int idx ) { return remove( getNode( idx ) ); } /** * Removes the object contained in Node p. * @param p the Node containing the object. * @return the item was removed from the collection. */ private AnyType remove( Node<AnyType> p ) { p.next.prev = p.prev; p.prev.next = p.next; theSize--; return p.data; } /** * Returns a String representation of this collection. */ public String toString( ) { StringBuilder sb = new StringBuilder( "[ " ); for( AnyType x : this ) sb.append( x + " " ); sb.append( "]" ); return new String( sb ); } /** * Obtains an Iterator object used to traverse the collection. * @return an iterator positioned prior to the first element. */ public java.util.Iterator<AnyType> iterator( ) { return new LinkedListIterator( ); } /** * This is the implementation of the LinkedListIterator. * It maintains a notion of a current position and of * course the implicit reference to the MyLinkedList. 
*/ private class LinkedListIterator implements java.util.Iterator<AnyType> { private Node<AnyType> current = beginMarker.next; private boolean okToRemove = false; public boolean hasNext( ) { return current != endMarker; } public AnyType next( ) { if( !hasNext( ) ) throw new java.util.NoSuchElementException( ); AnyType nextItem = current.data; current = current.next; okToRemove = true; return nextItem; } public void remove( ) { if( !okToRemove ) throw new IllegalStateException( ); MyLinkedList.this.remove( current.prev ); okToRemove = false; } } /** * This is the doubly-linked list node. */ private static class Node<AnyType> { public Node( AnyType d, Node<AnyType> p, Node<AnyType> n ) { data = d; prev = p; next = n; } public AnyType data; public Node<AnyType> prev; public Node<AnyType> next; } private int theSize; private Node<AnyType> beginMarker; private Node<AnyType> endMarker; } class TestLinkedList { public static void main( String [ ] args ) { MyLinkedList<Integer> lst = new MyLinkedList<Integer>( ); for( int i = 0; i < 10; i++ ) lst.add( i ); for( int i = 20; i < 30; i++ ) lst.add( 0, i ); lst.remove( 0 ); lst.remove( lst.size( ) - 1 ); System.out.println( lst ); java.util.Iterator<Integer> itr = lst.iterator( ); while( itr.hasNext( ) ) { itr.next( ); itr.remove( ); System.out.println( lst ); } } }the question is:5. Modify the MyLinkedList class by adding the following methods:• Show less a. contains Receives an AnyType as its only parameter. Returns true if the current list contains the passed item. b. swap Receives two index positions as the only parameters, and swaps the two nodes at those index positions by only adjusting their links. c. toStack Returns a Stack<AnyType> that contains the items in the current list, so that the last item ends up at the top of the stack. Has no parameters. d. reverse Returns a new MyLinkedList<AnyType> that has the elements in reverse order. Has no parameters. e. 
erase Receives an index position and number of nodes as the only parameters. Removes nodes beginning at the index position for the number of nodes specified. f. splice Receives another List<AnyType> and an index position as the only parameters. Creates copies of each node oì epHmx.t and inserts them before the index position. g. condense Removes dup1 answer - Anonymous asked1 answer - Anonymous askeddesign a DFA M recognizing the languag... Show morelet Σ={0,1}let L={w|w does not conatain teh substring 10101}design a DFA M recognizing the language L. Include a brief explanation justifying yoru design.need help on this ass soon as possible• Show less1 answer - Anonymous askedDesign a DFA M recognizing th... Show moreLet Σ={0,1}Let L={w|w contains at least three 0's and at most two1's}Design a DFA M recognizing the language L. Include a briefexplanation justifying your answer• Show less1 answer - studentStudent asked1.- Show how the double word 1234567816 is storedin memory starting at address A00116. Is the double2 answers - Anonymous askedWrite a complete C++ program to process league statistics for the Western Collegiate Ho... Show morex. Write a complete C++ program to process league statistics for the Western Collegiate Hockey Association. • maximum of 12 teams per season. use an array of structure variables to store the collection of individual team statistics. • read the team statistics from a text file whose file name is entered at the command line. if no file name is specified then prompt the user for the file name. • Each line in the file contains statistics for one team (no blank lines).NAME WINS LOSSES TIES Data File Example: Bulldogs 2 11 5 // 0 or more blanks in front of the name fieldGophers 7 3 1 // 1 or more blanks in front of the other fields Solution Algorithm: 1. Display a message on the screen that describes to the user what the program does Part of the required program documentation (introduction function). 
Read one team from the data file – read one entire line as a single string value. Build the individual statistic values for the current team one character at a time: name, wins, losses, and ties – see note 2 below.. Store each statistic value in the appropriate structure variable field within the team array using appropriate data types – the only string value stored is the name. Calculate the individual team statistics: Points: 2 points for a win and 1 point for a tie – see note 3 below. Points/Game: ratio of points to games played – see note 3 below.Sort the teams in alphabetical order A..Z, a..z. Calculate the league statistics: - Sums: wins, losses, ties - Points: 2 points for a win and 1 point for a tie – see note 3 below. - Points/Game: ratio of points to games played – see note 3 below. 6. Display the individual team statistics and league statistics using the format shown below. WCHA Name Wins Losses Ties Points Points/Game Badgers 0 0 0 0 0.0 Beavers 11 2 2 24 1.6 Bulldogs 2 11 5 9 0.5 Gophers 7 3 1 15 1.4 Mavericks 8 12 6 22 0.8 Seawolves 15 15 4 34 1.0 Totals 43 43 18 104 1.0I»Hmx.Î6Hmx.õð.ð.ðx.øi5 • Show less1 answer - studentStudent asked1.- A) Express each of the signed decimal integersthat follow as either a byte- or a word hexadecima1 answer - studentStudent asked2.-... Show moreQuestions:1.- Which of the 8088's internal registers are used for memorysegmentation? and why?2.- What is the maximum amount of memory that can be active ata given time in the 8088 microprocessor?3.- How much of the 8088's active memory is available asgeneral-purpose data storage memory?• Show less1 answer - Anonymous asked1 answer - Anonymous askedFor this program, create a simple program that asks the user toguess a number within a range from 0 ... More »2 answers - Anonymous asked3.7 Give the initial state, goal test, successor function, andcost function for each of the followin1 answer - Anonymous • Show less1 answer
# 1. Hello, Metal!

Written by Caroline Begbie & Marius Horga

You've been formally introduced to Metal and discovered its history and why you should use it. Now you're going to try it out for yourself in a Swift playground. To get started, you'll render this sphere on the screen:

It may not look exciting, but this is a great starting point because it lets you touch on almost every part of the rendering process. But before you get started, it's important to understand the terms rendering and frames.

## What is Rendering?

In 3D computer graphics, you take a bunch of points, join them together and create an image on the screen. This image is known as a render. Rendering an image from points involves calculating light and shade for each pixel on the screen. Light bounces around a scene, so you have to decide how complicated your lighting is and how long each image takes to render. A single image in a Pixar movie might take days to render, but games require real-time rendering, where you see the image immediately.

There are many ways to render a 3D image, but most start with a model built in a modeling app such as Blender or Maya. Take, for example, this train model that was built in Blender:

This model, like all other models, is made up of vertices. A vertex refers to a point in three-dimensional space where two or more lines, curves or edges of a geometrical shape meet, such as the corners of a cube. The number of vertices in a model may vary from a handful, as in a cube, to thousands or even millions in more complex models.

A 3D renderer will read in these vertices using model loader code, which parses the list of vertices. The renderer then passes the vertices to the GPU, where shader functions process the vertices to create the final image or texture to be sent back to the CPU and displayed on the screen.
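To make the idea of "a model is made up of vertices" concrete, here's a tiny plain-Swift sketch. The `Vertex` type and the triangle data are hypothetical illustrations, not part of Metal's API; in a real app, a framework like Model I/O usually defines the vertex layout for you.

```swift
// A hypothetical vertex type: just a position for now.
// Real vertices often also carry normals and texture coordinates.
struct Vertex {
  var position: SIMD3<Float>
}

// The simplest possible "model": one triangle, three vertices.
let triangle: [Vertex] = [
  Vertex(position: [-1, -1, 0]),
  Vertex(position: [ 1, -1, 0]),
  Vertex(position: [ 0,  1, 0])
]
print(triangle.count) // 3
```

A renderer hands an array like this to the GPU as a vertex buffer, which is exactly what you'll do with a sphere mesh later in this chapter.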
The following render uses the 3D train model and some different shading techniques to make it appear as if the train were made of shiny copper:

The entire process, from importing a model's vertices to generating the final image on your screen, is commonly known as the rendering pipeline. The rendering pipeline is a list of commands sent to the GPU, along with resources (vertices, materials and lights) that make up the final image. The pipeline includes programmable and non-programmable functions. The programmable parts of the pipeline, known as vertex functions and fragment functions, are where you can manually influence the final look of your rendered models. You'll learn more about each later in the book.

## What is a Frame?

A game wouldn't be much fun if all it did was render a single still image. Moving a character around the screen in a fluid manner requires the GPU to render a still image roughly sixty times a second. Each still image is known as a frame, and the speed at which the images appear is known as the frame rate. When your favorite game appears to stutter, it's usually because of a decrease in the frame rate, especially if there's an excessive amount of background processing eating away at the GPU.

When designing a game, it's important to balance the result you want with what the hardware can deliver. While it might be cool to add real-time shadows, water reflections and millions of blades of animated grass — all of which you'll learn how to do in this book — finding the right balance between what is possible and what the GPU can process in 1/60th of a second can be tough.

## Your First Metal App

In your first Metal app, the shape you'll render will look more like a flat circle than a 3D sphere. That's because your first model will not include any perspective or shading. However, its vertex mesh contains the full three-dimensional information.
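Before moving on, it's worth putting a number on the 1/60th-of-a-second budget mentioned in the frame-rate discussion above. This is a quick plain-Swift sketch, not part of the playground you're about to build:

```swift
// Per-frame time budget, in milliseconds, for a given frame rate.
func frameBudget(fps: Double) -> Double {
  1000.0 / fps
}

// At 60 fps, everything (game logic, physics and rendering)
// must fit in roughly 16.7 ms; at 120 fps, only about 8.3 ms.
print(frameBudget(fps: 60))
print(frameBudget(fps: 120))
```

That handful of milliseconds is the constraint behind every performance decision in this book.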
The process of Metal rendering is much the same no matter the size and complexity of your app, and you'll become very familiar with the following sequence of drawing your models on the screen:

You may initially feel a little overwhelmed by the number of steps Metal requires, but don't worry. You'll always perform these steps in the same sequence, and they'll gradually become second nature. This chapter won't go into detail on every step, but as you progress through the book, you'll get more information as you need it. For now, concentrate on getting your first Metal app running.

## Getting Started

➤ Start Xcode, and create a new playground by selecting File ▸ New ▸ Playground… from the main menu. When prompted for a template, choose macOS Blank.

➤ Name the playground Chapter1, and click Create.

➤ Next, delete everything in the playground.

## The Metal View

Now that you have a playground, you'll create a view to render into.

➤ Import the two main frameworks that you'll be using by adding this:

```swift
import PlaygroundSupport
import MetalKit
```

PlaygroundSupport lets you see live views in the assistant editor, and MetalKit is a framework that makes using Metal easier. MetalKit has a customized view named MTKView and many convenience methods for loading textures, working with Metal buffers and interfacing with another useful framework: Model I/O, which you'll learn about later.

➤ Now, add this:

```swift
guard let device = MTLCreateSystemDefaultDevice() else {
  fatalError("GPU is not supported")
}
```

This code checks for a suitable GPU by creating a device.

> Note: Are you getting an error? If you accidentally created an iOS playground instead of a macOS playground, you'll get a fatal error because the iOS simulator is not supported.

➤ To set up the view, add this:

```swift
let frame = CGRect(x: 0, y: 0, width: 600, height: 600)
let view = MTKView(frame: frame, device: device)
view.clearColor = MTLClearColor(red: 1, green: 1, blue: 0.8, alpha: 1)
```

This code configures an MTKView for the Metal renderer.
MTKView is a subclass of NSView on macOS and of UIView on iOS. MTLClearColor represents an RGBA value — in this case, cream. The color value is stored in clearColor and is used to set the color of the view.

## The Model

Model I/O is a framework that integrates with Metal and SceneKit. Its main purpose is to load 3D models that were created in apps like Blender or Maya, and to set up data buffers for easier rendering.

Instead of loading a 3D model, you're going to load a Model I/O basic 3D shape, also called a primitive. A primitive is typically considered a cube, a sphere, a cylinder or a torus.

➤ Add this code to the end of the playground:

```swift
// 1
let allocator = MTKMeshBufferAllocator(device: device)
// 2
let mdlMesh = MDLMesh(
  sphereWithExtent: [0.75, 0.75, 0.75],
  segments: [100, 100],
  inwardNormals: false,
  geometryType: .triangles,
  allocator: allocator)
// 3
let mesh = try MTKMesh(mesh: mdlMesh, device: device)
```

Going through the code:

1. The allocator manages the memory for the mesh data.
2. Model I/O creates a sphere with the specified size and returns an MDLMesh with all the vertex information in data buffers.
3. For Metal to be able to use the mesh, you convert it from a Model I/O mesh to a MetalKit mesh.

## Queues, Buffers and Encoders

Each frame consists of commands that you send to the GPU. You wrap up these commands in a render command encoder. Command buffers organize these command encoders, and a command queue organizes the command buffers.

➤ Add this code to create a command queue:

```swift
guard let commandQueue = device.makeCommandQueue() else {
  fatalError("Could not create a command queue")
}
```

You should set up the device and the command queue at the start of your app, and generally, you should use the same device and command queue throughout. On each frame, you'll create a command buffer and at least one render command encoder.
These are lightweight objects that point to other objects, such as shader functions and pipeline states, that you set up only once at the start of the app.

## Shader Functions

Shader functions are small programs that run on the GPU. You write these programs in the Metal Shading Language, which is a subset of C++. Normally, you'd create a separate file with a .metal extension specifically for shader functions, but for now, create a multi-line string containing the shader function code, and add it to your playground:

```swift
let shader = """
#include <metal_stdlib>
using namespace metal;

struct VertexIn {
  float4 position [[attribute(0)]];
};

vertex float4 vertex_main(const VertexIn vertex_in [[stage_in]]) {
  return vertex_in.position;
}

fragment float4 fragment_main() {
  return float4(1, 0, 0, 1);
}
"""
```

There are two shader functions in here: a vertex function named vertex_main and a fragment function named fragment_main. The vertex function is where you usually manipulate vertex positions, and the fragment function is where you specify the pixel color.

To set up a Metal library containing these two functions, add the following:

```swift
let library = try device.makeLibrary(source: shader, options: nil)
let vertexFunction = library.makeFunction(name: "vertex_main")
let fragmentFunction = library.makeFunction(name: "fragment_main")
```

The compiler will check that these functions exist and make them available to a pipeline descriptor.

## The Pipeline State

In Metal, you set up a pipeline state for the GPU. By setting up this state, you're telling the GPU that nothing will change until the state changes. With the GPU in a fixed state, it can run more efficiently. The pipeline state contains all sorts of information that the GPU needs, such as which pixel format it should use and whether it should render with depth. The pipeline state also holds the vertex and fragment functions that you just created. However, you don't create a pipeline state directly; rather, you create it through a descriptor.
This descriptor holds everything the pipeline needs to know, and you only change the necessary properties for your particular rendering situation.

➤ Add this code:

```swift
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
```

Here, you've specified the pixel format to be 32 bits with a color pixel order of blue/green/red/alpha. You also set the two shader functions.

You'll describe to the GPU how the vertices are laid out in memory using a vertex descriptor. Model I/O automatically creates a vertex descriptor when it loads the sphere mesh, so you can just use that one.

➤ Add this code:

```swift
pipelineDescriptor.vertexDescriptor =
  MTKMetalVertexDescriptorFromModelIO(mesh.vertexDescriptor)
```

You've now set up the pipeline descriptor with the necessary information. MTLRenderPipelineDescriptor has many other properties, but for now, you'll use the defaults.

➤ Now, add this code:

```swift
let pipelineState =
  try device.makeRenderPipelineState(descriptor: pipelineDescriptor)
```

This code creates the pipeline state from the descriptor. Creating a pipeline state takes valuable processing time, so all of the above should be a one-time setup. In a real app, you might create several pipeline states to call different shading functions or use different vertex layouts.

## Rendering

From now on, the code should be performed every frame. MTKView has a delegate method that runs every frame, but since you're doing a simple render that fills out a static view, you don't need to keep refreshing the screen every frame.

When performing graphics rendering, the GPU's ultimate job is to output a single texture from a 3D scene. This texture is similar to the digital image created by a physical camera. The texture will be displayed on the device's screen each frame.
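For reference, here's a minimal sketch of the per-frame delegate pattern mentioned above. MTKViewDelegate and its two required methods are real MetalKit API; the Renderer class name and the comments describing where work goes are illustrative, and this book sets the pattern up properly in later chapters:

```swift
import MetalKit

// Sketch: in a full app, per-frame work lives in draw(in:),
// which MTKView calls once for every frame it renders.
class Renderer: NSObject, MTKViewDelegate {
  func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    // respond here when the view's size changes
  }

  func draw(in view: MTKView) {
    // per frame: make a command buffer, encode render commands,
    // then commit the buffer to the GPU
  }
}
```

In this playground, you skip the delegate and issue the commands once, directly.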
Render Passes If you’re trying to achieve a realistic render, you’ll want to take into account shadows, lighting and reflections. Each of these takes a lot of calculation and is generally done in separate render passes. For example, a shadow render pass will render the entire scene of 3D models, but only retain grayscale shadow information. A second render pass would render the models in full color. You can then combine the shadow and color textures to produce the final output texture that will go to the screen. For the first part of this book, you’ll use a single render pass. Later, you’ll learn about multipass rendering. Conveniently, MTKView provides a render pass descriptor that will hold a texture called the drawable. ➤ Add this code to the end of the playground: // 1 guard let commandBuffer = commandQueue.makeCommandBuffer(), // 2 let renderPassDescriptor = view.currentRenderPassDescriptor, // 3 let renderEncoder = commandBuffer.makeRenderCommandEncoder( descriptor: renderPassDescriptor) else { fatalError() } Here’s what’s happening: - You create a command buffer. This stores all the commands that you’ll ask the GPU to run. - You obtain a reference to the view’s render pass descriptor. The descriptor holds data for the render destinations, known as attachments. Each attachment needs information, such as a texture to store to, and whether to keep the texture throughout the render pass. The render pass descriptor is used to create the render command encoder. - From the command buffer, you get a render command encoder using the render pass descriptor. The render command encoder holds all the information necessary to send to the GPU so that it can draw the vertices. If the system fails to create a Metal object, such as the command buffer or render encoder, that’s a fatal error. The view’s currentRenderPassDescriptor may not be available in a particular frame, and usually you’ll just return from the rendering delegate method. 
Because you’re asking for it only once in this playground, you get a fatal error. ➤ Add the following code: renderEncoder.setRenderPipelineState(pipelineState) This code gives the render encoder the pipeline state that you set up earlier. The sphere mesh that you loaded earlier holds a buffer containing a simple list of vertices. ➤ Give this buffer to the render encoder by adding the following code: renderEncoder.setVertexBuffer( mesh.vertexBuffers[0].buffer, offset: 0, index: 0) The offset is the position in the buffer where the vertex information starts. The index is how the GPU vertex shader function locates this buffer. Submeshes The mesh is made up of submeshes. When artists create 3D models, they design them with different material groups. These translate to submeshes. For example, if you were rendering a car object, you might have a shiny car body and rubber tires. One material is shiny paint and another is rubber. On import, Model I/O creates two different submeshes that index to the correct vertices for that group. One vertex can be rendered multiple times by different submeshes. This sphere only has one submesh, so you’ll use only one. ➤ Add this code: guard let submesh = mesh.submeshes.first else { fatalError() } Now for the exciting part: drawing! You draw in Metal with a draw call. ➤ Add this code: renderEncoder.drawIndexedPrimitives( type: .triangle, indexCount: submesh.indexCount, indexType: submesh.indexType, indexBuffer: submesh.indexBuffer.buffer, indexBufferOffset: 0) Here, you’re instructing the GPU to render a vertex buffer consisting of triangles with the vertices placed in the correct order by the submesh index information. This code does not do the actual render — that doesn’t happen until the GPU has received all the command buffer’s commands. 
➤ To complete sending commands to the render command encoder and finalize the frame, add this code:

// 1
renderEncoder.endEncoding()
// 2
guard let drawable = view.currentDrawable else {
  fatalError()
}
// 3
commandBuffer.present(drawable)
commandBuffer.commit()

Going through the code:

1. You tell the render encoder that there are no more draw calls and end the render pass.

2. You get the drawable from the MTKView. The MTKView is backed by a Core Animation CAMetalLayer and the layer owns a drawable texture which Metal can read and write to.

3. Ask the command buffer to present the MTKView's drawable and commit to the GPU.

➤ Finally, add this code to the end of the playground:

PlaygroundPage.current.liveView = view

With that line of code, you'll be able to see the Metal view in the Assistant editor.

➤ Run the playground, and in the playground's live view, you'll see a red sphere on a cream background.

Note: Sometimes playgrounds don't compile or run when they should. If you're sure you've written the code correctly, then restart Xcode and reload the playground. Wait for a second or two before running.

Congratulations! You've written your first Metal app, and you've also used many of the Metal API commands that you'll use in every Metal app you write.

Challenge

Where you created the initial sphere mesh, experiment with setting the sphere to different sizes. For example, change the size from:

[0.75, 0.75, 0.75]

To:

[0.2, 0.75, 0.2]

Change the color of the sphere. In the shader function string, you'll see:

return float4(1, 0, 0, 1);

This code returns red=1, green=0, blue=0, alpha=1, which results in the red color. Try changing the numbers (from zero to 1) for a different color. Try this green, for example:

return float4(0, 0.4, 0.21, 1);

In the next chapter, you'll examine 3D models up close in Blender. Then continuing in your Swift Playground, you'll import and render a train model.

Key Points

- Rendering means to create an image from three-dimensional points.
- A frame is an image that the GPU renders sixty times a second (optimally).
- A device is a software abstraction for the hardware GPU.
- A 3D model consists of a vertex mesh with shading materials grouped in submeshes.
- Create a command queue at the start of your app. This action organizes the command buffer and command encoders that you'll create every frame.
- Shader functions are programs that run on the GPU. You position vertices and color the pixels in these programs.
- The render pipeline state fixes the GPU into a particular state. It can set which shader functions the GPU should run and how vertex layouts are formatted.

Learning computer graphics is difficult. The Metal API is modern, and it takes a lot of pain out of the learning, but you need to know a lot of information up-front. Even if you feel overwhelmed at the moment, continue with the next chapters. Repetition will help with your understanding.
https://www.raywenderlich.com/books/metal-by-tutorials/v3.0/chapters/1-hello-metal
#include <CGAL/Heat_method_3/Surface_mesh_geodesic_distances_3.h>

Class Surface_mesh_geodesic_distances_3

Computes estimated geodesic distances for a set of source vertices, where sources can be added and removed. The class performs a preprocessing step that depends only on the mesh, so that the distance computation takes less time after changes to the set of sources.

- add_sources() adds the range of vertices to the source set.
- estimate_geodesic_distance() gets the estimated distance from the current source set to a vertex vd. The distance is a double even when used with an exact kernel.
- estimate_geodesic_distances() fills the distance property map with the estimated geodesic distance of each vertex to the closest source vertex. The distances are of type double even when used with an exact kernel.
https://doc.cgal.org/4.14/Heat_method_3/classCGAL_1_1Heat__method__3_1_1Surface__mesh__geodesic__distances__3.html
i wrote a programme that find the longest common string. but its not good for all cases.. its good if: first string is {bbbaa} second {aabbb}==>bbb and its not good if : first is {aabbb} second {bbbaa}==>aa its good for {AzB34CUDWQ} {DABC7ffD1}==>ABCD its good for {hibdrorhab} {abddrorhib}==>bdrorhb what sould i fix? #include <iostream.h> # include <stdio.h> # include <string.h> #define MAX 100 void comp(char st1[], char st2[], char temp[], char biggest[], int x, int y, int i, int n, int j); void help(char temp[],char biggest[]); /*programme that find the common longest string among 2 strings looking at first for the longest at str1 and after at str2 each time i check the longest common string from the next char x=counter of st1, y=counter of st2, i=counter of temp string, n=counter where the last place that find identity, j=care that after each string that i find will add 1 to the counter till its =strlen(st1) */ void main() { int i=0; char st1[MAX] = {"aabbb"}, st2[MAX] ={"bbbaa"}; char temp[MAX] = {0},big1[MAX] = {0}, big2[MAX] = {0}; /* cout<<"Please enter the first string:"; cin>>st1; cout<<"Please enter the second string:"; cin>>st2; */ comp(st1, st2, temp, big1, 0, 0, 0, 0, 1);//calling to recursive num1 comp(st2, st1, temp, big2, 0, 0, 0, 0, 1);//calling to recursive num2 cout<<"The longest common string is:"; //if ((strlen(big1)) > (strlen(big2)))//printing order to the longest string for(i=0;i<=strlen(big1);i++) cout<<big1[i]; cout<<endl; //else for(i=0;i<=strlen(big2);i++) cout<<big2[i]; cout<<endl; } void comp(char st1[], char st2[], char temp[], char biggest[], int x, int y, int i, int n, int j)//checking the strings { if (st1[x]!= st2[y])//if not equal { if (y!=strlen(st2))//checking if it is not the last char { comp (st1,st2,temp,biggest,x,y+1,i,n,j);//moving the pointer of return; //the second str } if (x!=strlen(st1))//checking if it is not the last char { comp (st1,st2,temp,biggest,x+1,n,i,n,j);//moving the pointer return; //of the second str } } 
else //if equal { temp[i]=st1[x];//put the common string in temp if ((st1[x+1]!=NULL) && (st2[y+1]!= NULL))//checking if it is not the last char { comp (st1,st2,temp,biggest,x+1,y+1,i+1,y+1,j);//moving the pointer return; // of both str } } help(temp,biggest); if (st1[j]!=NULL)//checking if it is not the last char comp (st1,st2,temp,biggest,j,0,0,0,j+1);//doing a new search from the next place } void help(char temp[],char biggest[])//function that check which { //string is bigger and initial the second int i=0; if ((strlen(temp))>(strlen(biggest)))//checking if big and then copy strcpy(biggest, temp); while(i <= MAX)//initial temp after copy { temp[i]=NULL; i++; } } This post has been edited by drordh: 18 January 2009 - 09:05 AM
http://www.dreamincode.net/forums/topic/81506-finding-longest-common-string-by-recursion/
Mario Ivankovits wrote:
> Your solution is just as good. But you have to ensure you really handle
> it like the "host" within the other filesystems. The point is VFS has to
> create a new filesystem instance for every "set", else all "sets" are
> tied together in one filesystem and maybe never get garbage-collected as
> someone might use a RamFS in an long time work.

Ah, I see. I just abstracted out the file-y bits into a RamFile class:

public class RamFile {
    private final Map<String, Object> attributes = new HashMap<String, Object>();

    // TODO: what can be marked final?
    private FileType type = FileType.IMAGINARY;
    private byte[] buffer;
    private Set<String> children;
    private boolean hidden;
    private boolean readable;
    private boolean writeable;
    private long lastModifiedTime;

    // And appropriate getters/setters.
}

I could abstract further, if need be, and turn the getters/setters into an interface. The simple implementation would be like the one above. More interesting ones might wrap java.io.File or work with a C-API via JNI (for the fellow interested in native code).

The "set" idea is right now just expressed in FileName. I haven't coded a filesystem tree to represent the directory-subdirectory-file relationships yet.

Where do I send code?

Cheers,
--binkley
http://mail-archives.apache.org/mod_mbox/commons-dev/200501.mbox/%3C41FBEA4E.2040006@alumni.rice.edu%3E
Introduction

In this tutorial, we'll go over the process of translating a virtual address to a physical address the way a processor does it. To begin, let's present a short overview of how segmentation and paging are done on operating systems. First, the virtual or logical address must be translated to a linear address. The picture taken from [1] presents this further:

On the picture above we can see a 16-bit segment register and a virtual address that's 32 bits long. The TI bit in the segment register specifies whether we must look for the corresponding descriptor in the GDT or the LDT. The GDT is a global descriptor table that is available to the system and all programs; its base address is held in the GDTR register. Each program also has its own local descriptor table, the LDT, whose base address is held in the LDTR register. The upper 13 bits of the segment register are used as an index into the GDT/LDT to select the appropriate segment descriptor. The segment descriptor, among a lot of different fields, also contains a base address that is added to the 32-bit virtual address that we're trying to access. This effectively constructs the linear address, which is not a physical address. In case we're not using paging, the linear address equals the physical address, but otherwise this is not true. When paging is used, the linear address is taken as input and a physical address is calculated. This can be seen on the picture below, which is again taken from [1]:

On the picture above, we can see that the linear address contains three fields. In case PAE is also enabled, the linear address is separated into four fields, but let's not talk about that right now. In most cases PAE is disabled anyway, so we can safely assume that the linear address is separated into three fields as shown above.
The upper 10 bits, the directory index, are used as an index into the page directory, which contains PDEs (page directory entries), while the middle 10 bits are used as an index into the page table, which contains PTEs (page table entries). A PDE and a PTE are used together with the lowest 12 bits to construct the physical address from the linear address. Also notice that the control register CR3 contains the base address of the page directory. In this article, we'll use the program provided below to try to discover how the virtual address is translated into the physical address. It's one thing to know the whole process in theory, but it's a whole new level to do something practical to confirm the theory. The program below is written in C++ and compiled in Visual Studio: the program basically creates two variables, one on the stack and the other on the heap, and assigns the numbers 10 and 20 to them. Then it prints the values to the screen and calls getchar() to stop the program, so it doesn't end before we've been given the chance to observe what was written on the screen. There's also an additional assembly instruction "int 3" that causes the program to invoke a software breakpoint, which is useful when we're trying to stop a program at some predetermined location so we can inspect the code easily. An alternative would be loading the program into a debugger, manually looking for an instruction that interests us, and setting the breakpoint in the debugger. We can see that the program is quite simple, which makes it perfect for what we're trying to achieve in this article. The source code of the program can be seen below.
#include "stdafx.h"
#include <stdio.h>
#include <windows.h>

int _tmain(int argc, _TCHAR* argv[])
{
    __asm {
        int 3
    }

    int x;
    int *y = new int();

    x = 10;
    *y = 20;

    printf("Number x: %d.\n", x);
    printf("Number y: %d.\n", *y);

    getchar();
    return 0;
}

In this article we'll basically be following the scheme presented on the picture below taken from [3]:

We can see that the CPU uses logical or virtual addresses that are translated to linear addresses with a segmentation unit, which are later translated into actual physical addresses.

Checking if PAE Enabled

Now we can download the Intel reference manual from [2], where we can read about the internal workings of the Intel processor. Let's first examine some registers to check whether PAE and paging are enabled. If PAE is enabled, the PAE flag in the CR4 register will be enabled. Let's use the r instruction to print the value of the CR4 register, which is represented in hexadecimal format. If we would like to display the number in a binary form, so we can observe specific flags more easily, use the .formats command and pass the number as an argument. Both commands can be seen on the picture below:

Let's take a look at the format of the CR4 register's flags, which is taken from [2]:

Notice that the PAE flag is the 5th bit from the right to left? The 5th bit in the actual value of the CR4 register is 1, which means that PAE is enabled. Since PAE is enabled, we must also check whether the 4th PSE (Page Size Extensions) bit is enabled. It is enabled, which means that pages are 2MB in size and not 4KB only. The PAE is usually used if we want to address more than 4GB of physical memory in x86 machines, but the operating system itself must support it in order to be used. Keep in mind that PAE is usually used with x86 machines, because they use 32-bit addresses which have a limitation of 4GB.
However, as the addresses don't have a 32-bit limitation on x64 systems, there's no need for PAE because the system can already address enough physical memory even if PAE is disabled. Also keep in mind that on 32-bit systems, PAE is not used just to allow the system to address more physical memory, but is also used to provide DEP (Data Execution Prevention). If we would like to check whether PAE is supported on our Windows system, we can take a look in the C:\WINDOWS\system32 directory and look for ntkrnlpa.exe (supports PAE) and ntoskrnl.exe (doesn't support PAE). On the picture below, we can see that both files are present, so PAE can be enabled or disabled. To actually determine whether PAE has been enabled or disabled, we can open the regedit.exe program and take a look into the HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management\PhysicalAddressExtension entry. We can see that on the picture below, the value of that entry is 0, which means that PAE is disabled.

Checking if Segmentation is Enabled

We've already talked about segmentation and so far, we should already know that the system has a data structure that's called the Global Descriptor Table (GDT) that holds descriptors. The upper bits of the segment register value are used as an index into the GDT to get to the right descriptor, but the base address of the global descriptor table is stored in the GDTR register. We can use the "r gdtr" command to show the value of the GDTR register, but let's try it another way. First use the "rm ?" command to dump the register masks that control how registers are displayed by the r command. All the register masks can be seen on the picture below:

Let's use most of the register masks to dump quite a lot of registers, which can be shown on the picture below where we've used the masks 0x8, 0x20, 0x80 and 0x100 to dump various register values:

Notice that the GDTR register holds the value 0x8003f000, which is the value we're interested in.
Besides the GDTR register, it's also a good idea to keep the GDTL (Global Descriptor Table Length) register in mind, which specifies the length of the GDT table. The value of the GDTL register is 0x3ff bytes, which is a hexadecimal representation of the decimal number 1023; this provides the information about the GDT table length in bytes. We can use the d command to dump the whole contents of the GDT table, which contains descriptors where each descriptor is 8 bytes long. Let's present the first few descriptors from the GDT table:

We used the d command to dump the memory contents from the 0x8003f000 memory, but we also specified the number of bytes to dump. The number of bytes must have a letter 'L' followed by the hexadecimal representation of the number of bytes to dump. The first three descriptors from the picture above are presented below:

- 0000000000000000
- 0000ffff00cf9b00
- 0000ffff00cf9300

We can see that handling the descriptors in such a way is very hard, because we need to manually extract the value of specific fields from the memory dump. The good thing is that Windbg provides the command dg that can be used specifically for dumping descriptors. The dg command takes two parameters where the first selects the first segment descriptor and the second selects the last segment descriptor in the table. Keep in mind that the dg command automatically knows where the GDT table is located so we don't have to specify the address of the table manually. To dump the same descriptors as we've presented in the previous image, we could use the "dg 0 f0" command as shown on the picture below:

Notice that Windbg was automatically able to parse the descriptors and present their parameters in a column view as seen above.
The output from the dg command uses the following columns:

- Sel: the selector
- Base: the base address of the linear address space segment
- Limit: the length of the linear address space segment
- Type: the type of the segment
- Pl: the privilege level of the segment: ring 0 (kernel mode) or ring 3 (user mode)

Do you notice on the picture above that some of the segments use the same base address and they span the entire linear address space, thus using the Limit of ffffffff? Some of the segments are also in kernel and some in user mode, but nevertheless they occupy the same region of memory. This is a clear indication that segmentation is not used, because otherwise the segment descriptors wouldn't point to the same base address. Actually, segmentation is used, because it's not optional on x86 machines, but the operating system can minimize its effect so much that we don't even know we're using it anymore. Thus, the operating system relies solely on paging to translate the virtual addresses to physical addresses and provide appropriate protection mechanisms. On the picture above, we can see a beginning null descriptor at 0x0000, which is the default and must be present in every GDT table. Then we have four segments that start at a base address 0x00000000 and span the entire linear address space up until the 0xFFFFFFFF address. Two of the segments are code segments where one is located in user mode and the other is located in kernel mode; the same is true for the two data segments. Since all the segments share the entire linear address space, the effect of segmentation has been minimized so much that we can't talk about segmentation any more. In this case segmentation has been used only to lay the ground for the paging scheme, which, though optional, is apparently used exhaustively by the Windows operating system.
The Windows system doesn’t really use segmentation since the virtual addresses are the same as linear addresses, but it must nevertheless use it, because segmentation is not optional like paging.. Ive read a number of books on paging and segmentation in widows, but your very elaborate articles have finally made me understand the whole process. What I’ve appreciated most are the hands on exercises that have helped solidify the concepts very nicely. Keep it up. You are the man.
http://resources.infosecinstitute.com/translating-virtual-to-physical-address-on-windows-segmentation/
Little more than a month ago I released Ink - a library for building command-line interfaces using React components. It was very well received by the community, way beyond my expectations. Incredible projects like npm, Jest, Gatsby, Parcel and Tap have either started using Ink or at least have it on their radar. Like I said last time, I'm grateful and happy either way! But in my opinion, there was one piece missing. That missing piece would reduce, if not eliminate, a barrier to start using Ink. That missing piece would help developers using Ink not just with building a user interface for your CLI, but a CLI itself. I want to introduce that missing piece today and it's called Pastel. Pastel is a framework for effortlessly building CLIs with Ink and React. It's inspired by a very old project of mine called Ronin and ZEIT's Next. Similar to Next, Pastel's API is a filesystem. Just create any file inside commands folder and it will become a separate command in your CLI. Need nested commands? No problem, just create a sub-folder and put the commands there. Here's how your Pastel project will look: my-beautiful-cli/ - package.json - commands/ - index.js - login.js - deploy.js In this case, Pastel will scan commands folder and generate 3 different commands: my-beautiful-cli(executed by commands/index.js) my-beautiful-cli login(executed by commands/login.js) my-beautiful-cli deploy(executed by commands/deploy.js) Now let's dive into the magical (and my favorite) part - writing commands. Here's an example command that greets you with a name passed via --name option. 
import React from 'react';
import PropTypes from 'prop-types';
import {Text} from 'ink';

/// This command says hello
const Hello = ({name}) => <Text>Hello, {name}!</Text>;

Hello.propTypes = {
	/// Name of the person to say hi to
	name: PropTypes.string.isRequired
};

export default Hello;

Pastel will automatically do all of this for you:

- Set up command-line options based on propTypes
- Generate descriptions for the command and its options from the comments marked with ///
- Validate required options thanks to the isRequired flag

Now let's check out the help message of this command and then run it:

$ hello --help
hello

This command says hello

Options:

  --help     Show help
  --version  Show version
  --name     Name of the person to say hi to

$ hello --name=Jane
Hello, Jane!

When you don't need to care about setting up options, help messages, validation and multiple commands, you can get back to doing what you truly enjoy - coding and creating your CLI. Pastel is a zero-config and zero-API tool that gives you all of this for free. So why not try it out today?

Check out Pastel at and let me know what you think! You can find me on Twitter.

I want to give a shout-out to wonderful projects like Parcel, yargs, Babel and a bunch of Sindre's modules 😀. Without them, Pastel simply wouldn't happen.

Thank you ❤️
https://vadimdemedes.com/posts/creating-clis-with-ink-react-and-a-bit-of-magic?utm_campaign=Craft%2BLink%2BList&utm_medium=web&utm_source=Craft_Link_List_88
AWS Amplify makes adding various categories of features much simpler. Need authentication? There is a module for that. What about storage? Yup, there is one for that as well. Amplify is meant to make stitching together AWS services a seamless process. A simple command line call can provide all of the services you need in your AWS account to handle authentication. The Amplify Framework makes creating scalable mobile and web applications in AWS a simplified process. In this post, I am going to walk through how I used AWS Amplify to add authentication to Parler and how I customized the user interface components to fit my needs.

Getting Started

Amplify is an AWS provided framework. To get started we must install and configure the CLI for Amplify.

$ npm install -g @aws-amplify/cli

If you don't have the AWS CLI installed and configured, you are going to need to configure the Amplify CLI. If you already have the AWS CLI configured, you don't need to configure the Amplify one as well.

# only run this configure if you don't have the AWS CLI
$ amplify configure

Once the Amplify CLI is installed we can begin adding modules to our mobile or web application. For my project, I am using Gatsby to build out the web application. This is a modern static site generator that can be used to quickly create static websites, blogs, portfolios, and even web applications. Since Gatsby is built on top of React, we can use all of the same ideas from React in Gatsby. Let's initialize and configure our initial Amplify setup for a React web application.

Initializing Amplify

Now that we have the CLI installed globally, we can initialize Amplify inside of our React app with one command line call.

# run this from the root directory of your application
$ amplify init

This command will initialize our AWS configuration and create a configuration file at the root of our application. This command will not provision any services in our AWS account, but it lays the groundwork for us to do so.
Adding authentication to our application Now that we have initialized the framework in our application we can start adding modules. For this blog post, we are going to add the authentication module to our application. We can do this with another call on our command line. $ amplify add auth This command will walk us through a series of questions. Each question is configuring the authentication for our application. If you are unsure what configuration you need, go ahead and select Yes, use the default configuration for the first question. You can always come back and reconfigure these settings by running the command amplify update auth. We now have the authentication module configured for our application. But, we still need to deploy this configuration to our AWS account. Lucky for us, this is handled by the Amplify CLI as well. $ amplify push This will create and deploy the necessary changes to our AWS account to support our authentication module. With the default settings, this will provision AWS Cognito to handle authentication into our application. When the deployment is complete we will have a new file in our source directory, aws-exports.js. This file represents the infrastructure inside of our AWS account to support our Amplify project. Using Amplify with React The Amplify framework has been added, we configured authentication, and we provisioned the necessary AWS services to support our application. Now it's time we set up our React/Gatsby application to leverage the framework. For the purpose of this blog post, we are going to assume we have an App component that is the main entry point for our application. We are also going to assume that you can't access the application without being authenticated first. Here is what our initial App component is going to look like. It is served at the /app route via a Gatsby configuration. Right now it is wide open to the world, no authentication is needed. 
import React from "react"; class App extends React.Component { constructor(props, context) { super(props, context); } render() { return ( <div> <h1>Internal App</h1> </div> ); } } export default App; With me so far? Great. Now we want to put our application behind the authentication module we added via Amplify. To do that we install two more libraries in our project. $ npm install aws-amplify aws-amplify-react Now that we have added these two libraries we can quickly add authentication to our application. First, we need to configure Amplify inside our App component. Then we can use a higher order component (HOC), withAuthenticator, specifically created for React applications. This component adds all of the logic to put our App component behind authentication. It also includes all of the UI pieces we need to log users in, sign up new users, and handle flows like confirming an account and resetting a password. Let's take a look at what these changes look like in our App component. import React from "react"; import Amplify from "aws-amplify"; import { withAuthenticator } from "aws-amplify-react"; import config from "../../aws-exports"; Amplify.configure(config); class App extends React.Component { constructor(props, context) { super(props, context); } render() { return ( <div> <h1>Internal App</h1> </div> ); } } export default withAuthenticator(App, true); Just like that we now have authentication added to our React application that is built with Gatsby. If we run gatsby develop from our command line and check out our changes locally we should be able to see the default login prompt provided by Amplify. Pretty slick right? With a few command line operations, we have authentication incorporated into our application. All of the AWS services needed to support our app are provisioned and continuously maintained by the Amplify Framework. This is all fantastic, but for Parler, I also wanted the ability to customize the UI pieces that Amplify provides. 
These pre-configured UI components are great for getting started but I wanted to add my own style to them using Tailwind CSS. So now let's explore how to customize the authentication UI of Amplify by overriding the default components with our own, like a CustomSignIn component.

Customizing the Amplify authentication UI

To customize the look and feel of the Amplify authentication module we need to define our own components for the UI pieces we want to change. For example, the login UI is handled by a component inside of Amplify called SignIn, you can see the full source code of that module here. What we are going to do next is define our own component, CustomSignIn, that is going to extend the SignIn component from Amplify. This allows us to use all of the logic already built into the parent component but define our own UI. Let's take a look at what CustomSignIn looks like.

import React from "react";
import { SignIn } from "aws-amplify-react";

export class CustomSignIn extends SignIn {
  constructor(props) {
    super(props);
    this._validAuthStates = ["signIn", "signedOut", "signedUp"];
  }

  showComponent(theme) {
    return (
      <div className="mx-auto w-full max-w-xs">
        <form className="bg-white shadow-md rounded px-8 pt-6 pb-8 mb-4">
          <div className="mb-4">
            <label
              className="block text-grey-darker text-sm font-bold mb-2"
              htmlFor="username"
            >
              Username
            </label>
            <input
              className="shadow appearance-none border rounded w-full py-2 px-3 text-grey-darker leading-tight focus:outline-none focus:shadow-outline"
              id="username"
              key="username"
              name="username"
              onChange={this.handleInputChange}
            />
          </div>
          <div className="mb-6">
            <label
              className="block text-grey-darker text-sm font-bold mb-2"
              htmlFor="password"
            >
              Password
            </label>
            <input
              className="shadow appearance-none border rounded w-full py-2 px-3 text-grey-darker mb-3 leading-tight focus:outline-none focus:shadow-outline"
              id="password"
              key="password"
              name="password"
              type="password"
              onChange={this.handleInputChange}
            />
            <p className="text-grey-dark text-xs">
              Forgot your password?{"
"}
              <a
                className="text-indigo cursor-pointer hover:text-indigo-darker"
                onClick={() => super.changeState("forgotPassword")}
              >
                Reset Password
              </a>
            </p>
          </div>
          <div className="flex items-center justify-between">
            <button
              className="bg-blue hover:bg-blue-dark text-white font-bold py-2 px-4 rounded focus:outline-none focus:shadow-outline"
              type="button"
              onClick={() => super.signIn()}
            >
              Login
            </button>
            <p className="text-grey-dark text-xs">
              No Account?{" "}
              <a
                className="text-indigo cursor-pointer hover:text-indigo-darker"
                onClick={() => super.changeState("signUp")}
              >
                Create account
              </a>
            </p>
          </div>
        </form>
      </div>
    );
  }
}

With CustomSignIn we are extending the SignIn component from aws-amplify-react. This is so that we can override the showComponent method but still use the parent class functions like changeState and signIn. Notice that we are not overriding the render method but showComponent instead. This is because the parent SignIn component defines the UI inside of that function. Therefore, to show our UI we need to override it in our component.

Inside of our constructor we see the following statement.

this._validAuthStates = ["signIn", "signedOut", "signedUp"];

Amplify uses authState to track which authentication state is currently active. The custom components we define can state which auth states are valid for this component. Since we are on the login/sign in view, we only want to render our custom UI if authState equals signIn, signedOut, or signedUp. That is all of the magic sauce happening to show our UI over the default Amplify provided UI. We override the showComponent function, check the authState, and show our UI if the state is the one that we are looking for. Pretty slick right?

Diving into the custom UI a bit we see the "Create Account" button makes a call to super.changeState("signUp") when it's clicked. This is a function defined in the parent component we are extending. It updates the authState to signUp so that the SignUp component is rendered.
We could, of course, customize this component as well following the same process we used to create CustomSignIn. The only other change we need to make now is back out in our App component. Instead of using the withAuthenticator HOC provided by Amplify we are going to use the Authenticator component directly. To make things clearer we are going to define a new component, AppWithAuth, that wraps our App component and makes use of the Authenticator component directly.

import React from "react";
import { SignIn } from "aws-amplify-react";
import config from "../../aws-exports";
import { CustomSignIn } from "../Login";
import App from "../App";
import { Authenticator } from "aws-amplify-react/dist/Auth";

class AppWithAuth extends React.Component {
  constructor(props, context) {
    super(props, context);
  }

  render() {
    return (
      <div>
        <Authenticator hide={[SignIn]} amplifyConfig={config}>
          <CustomSignIn />
          <App />
        </Authenticator>
      </div>
    );
  }
}

export default AppWithAuth;

Now our App component will receive the authState, just like our other components, inside of its render method. If we check the state inside of that method we can show our App component only when we are signed in. Let's take a look at our new App component code.

import React from "react";

class App extends React.Component {
  constructor(props, context) {
    super(props, context);
  }

  render() {
    if (this.props.authState == "signedIn") {
      return (
        <div>
          <h1>Internal App</h1>
        </div>
      );
    } else {
      return null;
    }
  }
}

export default App;

Now our App component is very minimal. In fact, the only notion we have of Amplify here is checking our authState which determines whether or not we should render this component. Just like that, we have added authentication to our application using the Amplify Framework. We have also customized the components of Amplify to give our own look, feel, and logic if we need it.

Conclusion

The Amplify Framework is an awesome new tool in our AWS toolbox.
We demonstrated here that we can add authentication to any web or mobile application with just a few CLI commands. We can then deploy the AWS services that back modules like authentication with a simple push call. But sometimes we want to add our own style to these types of frameworks. Not a problem. We showed that we can extend the base components inside of Amplify to create our user interfaces as well as hide the ones we don't care about.

Amplify continues to evolve and consists of many more modules like hosting, api, auth, and even storage. All key modules and AWS services that are important to most web applications. In addition, they also just announced Amplify Console which contains a global CDN to host your applications as well as a CI/CD Pipeline.

If you have any questions about this post or Amplify, feel free to drop me a comment below.

Discussion (18)

Hey everyone, due to a recent change to the Amplify signIn function, you now have to pass the event into your super.signIn function, around line 61 in customSignIn.js:

onClick={() => super.signIn()} ---> onClick={(event) => super.signIn(event)}

or you will get this error: Uncaught (in promise) TypeError: Cannot read property 'preventDefault' of undefined.

Here are the changes they made: github.com/aws-amplify/amplify-js/...
Here is the article I used to figure out what was wrong: stackoverflow.com/questions/475077...

Hi Kyle, Was there some reason to not use the withAuthenticator()? As far as I understood from reading the docs, you can pass customized components to withAuthenticator too. Here is a link to the part of docs I'm talking about: aws-amplify.github.io/docs/js/auth... If I understood the docs correctly, you could have just passed the CustomSignIn component to the withAuthenticator.

Great question! I didn't use withAuthenticator because I always wanted to use hide=[] to hide a lot of the default Amplify screens that I didn't need.
Kyle wrote: Isn't that what the TRUE/FALSE tag in withAuthenticator is used for? Set it to TRUE (hide=[NONE]) and all AWS components are shown. Set it to FALSE (hide=[ALL]) like this and add your custom component(s) into an array: And only your custom component is shown (and all the other custom components in the list). Of course you can also add standard AWS components to the list there.

No, the true/false for the HOC is for showing greetings. You could probably get this setup to work with the HOC but I preferred to go this route in the event I have further customization I need to make.

Ahh, thanks. Couldn't find that in the docs, so I jumped to conclusions, since it seemed to work that way (I just wanted the Greeting away and thought it did it to other components too...) Ok, have to look into that myself a bit more. Great article and helped me a lot :)

Nice article Kyle! Easy to follow and explains the steps but not overly verbose, nice pacing. I saw an AWS team demo this at Neilson a couple months back during a MeetUp. Makes me want to get into mobile dev. :)

Thank you for the kind comments David! It is a very slick framework and it is continuously being improved, definitely worth checking out.

Inheritance? In my React?! xD

React is certainly not my strongest area of expertise. Certainly, composition would be better here, but getting this customized to fit my needs was a jaunt in itself. Maybe I can circle back to this in the future.

Yes, the clean way would probably be to use the auth function calls directly, but I guess this is also more work.

It would be cool to add an article with examples for react-native. The library aws-amplify-react-native is slightly different and I'm having difficulty with changeState for forgotPassword and signUp.

Good article, but I do not quite understand how App and AppWithAuth are interrelated. What component do we include in index.js? And how do we generally connect AppWithAuth?
At the highest level, you would include AppWithAuth as the entry point for your app. This allows you to wrap your entire application with an authentication layer as demoed here.

I'm new to amplify and react as well. Is there an article that guides you in building a custom login page using amplify and modern react with functional components and hooks?

Any idea of how to circumvent: TS2339: Property 'authState' does not exist on type 'Readonly<{}> & Readonly<{ children?: ReactNode; }>'. in App, at: if (this.props.authState === 'signedIn')

I think you're using AWS Polly to convert. So, bad idea, actually, nobody likes to pay 5 bucks for every article and get robotised voice at the output.
https://dev.to/kylegalbraith/how-to-easily-customize-the-aws-amplify-authentication-ui-42pl
Object-oriented programming helps us by encapsulating data and processing into a tidy class definition. This encapsulation assures us that our data is processed correctly. It also helps us understand what a program does by allowing us to ignore the details of an object's implementation. When we combine multiple objects into a collaboration, we exploit the power of encapsulation. We'll look at a simple example of creating a composite object, which has a number of detailed objects inside it.

Defining Collaboration. Defining a collaboration means that we are creating a class which depends on one or more other classes. Here's a new class, Dice, which uses instances of our Die class. We can now work with a Dice collection, and not worry about the details of the individual Die objects.

Example 21.4. dice.py - part 1

#!/usr/bin/env python
"""Define a Die, and Dice and simulate a dozen rolls."""

class Die( object ):
    ...  # the Die class from die.py, not repeated here

class Dice( object ):
    """Simulate a pair of dice."""
    def __init__( self ):
        "Create the two Die objects."
        self.myDice = ( Die(), Die() )
    def roll( self ):
        "Roll both dice."
        for d in self.myDice:
            d.roll()
    def getTotal( self ):
        "Return the total of the two dice."
        t= 0
        for d in self.myDice:
            t += d.getValue()
        return t
    def getTuple( self ):
        "Return a tuple of the dice values."
        return tuple( [d.getValue() for d in self.myDice] )
    def hardways( self ):
        "Return True if this is a hardways roll."
        return self.myDice[0].getValue() == self.myDice[1].getValue()

This is the definition of a single Die, from the die.py example. We didn't repeat it here to save some space in the example.

This class, Dice, defines a pair of Die instances. The __init__ method creates an instance variable, myDice, which has a tuple of two instances of the Die class.

The roll method changes the overall state of a given Dice object by changing the two individual Die objects it contains. This manipulator uses a for loop to assign each of the internal Die objects to d. In the loop it calls the roll method of the Die object, d. This technique is called delegation: a Dice object delegates the work to two individual Die objects. We don't know, or care, how each Die computes its next value.
roll d The getTotal method computes a sum of all of the Die objects. It uses a for loop to assign each of the internal Die objects to d. It then uses the getValue method of d. This is the official interface method; by using it, we can remain blissfully unaware of how Die saves it's state. getTotal getValue The getTuple method returns the values of each Die object. It uses a list comprehension to create a list of the value instance variables of each Die object. The built-in function tuple converts the list into an immutable tuple. getTuple list value tuple The harways method examines the value of each Die objec to see if they are the same. If they are, the total was made "the hard way." harways The getTotal and getTuple methods return basic attribute information about the state of the object. These kinds of methods are often called getters because their names start with “get”. Collaborating Objects. The following function exercises an instance this class to roll a Dice object a dozen times and print the results. def test2(): x= Dice() for i in range(12): x.roll() print x.getTotal(), x.getTuple() This function creates an instance of Dice, called x. It then enters a loop to perform a suite of statements 12 times. The suite of statements first manipulates the Dice object using its roll method. Then it accesses the Dice object using getTotal and getTuple method. x Here's another function which uses a Dice object. This function rolls the dice 1000 times, and counts the number of hardways rolls as compared with the number of other rolls. The fraction of rolls which are hardways is ideally 1/6, 16.6%. def test3(): x= Dice() hard= 0 soft= 0 for i in range(1000): x.roll() if x.hardways(): hard += 1 else: soft += 1 print hard/1000., soft/1000. Independence. One point of object collaboration is to allow us to modify one class definition without breaking the entire program. 
As long as we make changes to Die that don't change the interface that Dice uses, we can alter the implementation of Die all we want. Similarly, we can change the implementation of Dice; as long as the basic set of methods is still present, we are free to provide any alternative implementation we choose. We can, for example, rework the definition of Die confident that we won't disturb Dice or the functions that use Dice (test2 and test3). Let's change the way it represents the value rolled on the die.

Here's an alternate implementation of Die. In this case, the private instance variable, value, will have a value in the range 0<=value<=5. When getValue adds 1, the value is in the usual range for a single die, 1 ≤ n ≤ 6.

class Die(object):
    """Simulate a 6-sided die."""
    def __init__( self ):
        self.roll()
    def roll( self ):
        self.value= random.randint(0,5)
        return self.value
    def getValue( self ):
        return 1+self.value

Since this version of Die has the same interface as other versions of Die in this chapter, it is isomorphic to them. There could be performance differences, depending on the performance of the randint and randrange functions. Since randint has a slightly simpler definition, it may process more quickly.

Similarly, we can replace Die with the following alternative. Depending on the performance of choice, this may be faster or slower than other versions of Die.

class Die(object):
    """Simulate a 6-sided die."""
    def __init__( self ):
        self.domain= range(1,7)
    def roll( self ):
        self.value= random.choice(self.domain)
        return self.value
    def getValue( self ):
        return self.value
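The performance question can be settled empirically; a rough timeit micro-benchmark of the three random calls (the absolute numbers will vary by machine and Python version, so only the relative ordering is interesting):

```python
import random
import timeit

# One callable per strategy, each producing a die value in 1..6.
calls = {
    "randint": lambda: random.randint(1, 6),
    "randrange": lambda: random.randrange(6) + 1,
    "choice": lambda: random.choice([1, 2, 3, 4, 5, 6]),
}

for name, fn in calls.items():
    # Time 10,000 rolls with each strategy.
    elapsed = timeit.timeit(fn, number=10000)
    print("%-9s %.4fs" % (name, elapsed))
```

Whichever comes out fastest, all three remain isomorphic: the interface, and therefore Dice, is unchanged.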
http://www.linuxtopia.org/online_books/programming_books/python_programming/python_ch21s06.html
Uses the SNTP protocol (as specified in RFC 2030) to contact the server specified on the command line and report the time as returned by that server. This is about the simplest (and dumbest) client I could manage.

Discussion

I would like to see a more complete implementation of SNTP or even full-blown NTP in Python. Anyone interested? I might even try it myself.

from socket import *

Nice job in a few lines of code. I'd like to see the namespace-polluting import line replaced, especially in published Python code.

Done as requested. I can't login to change this code having lost my account details but the code's so short I can just post it here.
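Only the first line of the recipe's code survives above. A reconstruction in the recipe's spirit (RFC 2030: send a 48-byte client request, read the Transmit Timestamp at offset 40, subtract the 1900-to-1970 epoch offset), using explicit imports as the reviewer asked, might look like:

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_DELTA = 2208988800


def parse_transmit_timestamp(packet):
    """Pull the Transmit Timestamp (bytes 40-43) out of a 48-byte SNTP
    reply and convert it to Unix time (whole seconds only)."""
    if len(packet) < 48:
        raise ValueError("SNTP response too short")
    ntp_seconds = struct.unpack("!I", packet[40:44])[0]
    return ntp_seconds - NTP_DELTA


def sntp_time(server, port=123, timeout=5.0):
    """Query an SNTP server (RFC 2030) and return its time as a Unix timestamp."""
    request = b"\x1b" + 47 * b"\x00"   # LI=0, VN=3, Mode=3 (client)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(request, (server, port))
        packet, _ = s.recvfrom(512)
    finally:
        s.close()
    return parse_transmit_timestamp(packet)
```

Calling time.ctime(sntp_time("pool.ntp.org")) would then print that server's idea of the current time.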
http://code.activestate.com/recipes/117211/
Take a Tour: New Features in Create React App v3

Huzzah! create-react-app v3.0.0 was just announced by the React Team! In this article, we'll cover the most important features and go over some juicy code snippets. Instead of attempting to provide a comprehensive list of the changes in v3.0.0, I've grouped by tools and libraries (TypeScript, Jest, etc) so that you can cherry-pick what you wanna read.

Highlights

browserslist

Perhaps one of the biggest features is the ability to use browserslist tools to target specific browsers. As Babel transforms your code, it will look at your browserslist settings in package.json and make use of the appropriate polyfills & transforms. These are the default settings:

"browserslist": {
  "production": [
    ">0.2%",
    "not dead",
    "not op_mini all"
  ],
  "development": [
    "last 1 chrome version",
    "last 1 firefox version",
    "last 1 safari version"
  ]
}

In production, your app will contain all of the polyfills/transforms for browsers that have at least 0.2% global usage, however, it'll ignore Opera Mini (1.6% global usage). browserslist uses the global usage data from caniuse.com.

For example, if you wanted to target Edge 16 you could still use array destructuring:

// Shiny, new ECMAScript features!
const array = [1, 2, 3];
const [first, second] = array;

// ...Babel transforms for Edge 16
const array = [1, 2, 3];
const first = array[0];
const second = array[1];

PostCSS Normalize

PostCSS Normalize is made by the same folks that are building browserslist. PostCSS Normalize is similar to browserslist, but instead of transforming your JavaScript code, it transforms your CSS stylesheets. If you already have the browserslist declarations in package.json, it already knows what browsers you want to target! All you need to do is include @import-normalize at the top of one of your CSS files.
For example, if you’re targeting Internet Explorer 9+ it’ll include these styles: @import-normalize; /* Add the correct "display" values in IE 9 */ audio, video { display: inline-block; } /* Remove border for img inside <a> tags IE 10 */ img { border-style: none; } However, if you only want to support IE 10+ @import-normalize; /* Remove border for img inside <a> tags IE 10 */ img { border-style: none; } With PostCSS Normalize, even though you’re doing all your development with Chrome you can rest assured that it’ll look exactly the same on Firefox/Safari/Opera/etc. I feel like we all have that story when we’re showing off our “sweet website” to a friend who’s using a weird browser and your website looks like chop suey. Say goodbye to those days with PostCSS Normalize! Linting for Hooks With React v16.8, the new Hooks API finally landed! Now Create React App v3 comes preinstalled with a linting config to help you write “best practices” Hooks. There are two linter rules for Hooks: - Call Hooks from React function components - Call Hooks from custom Hooks That’s it! They pertain to where you use Hooks to prevent situations where you might use a Hook inside a for-loop and create all sorts of havoc with useState and useEffect. If you’d like a Quick Start-style guide on Hooks, check out Introduction to React Hooks 🤓! Jest 24 create-react-app now ships with the latest version of Jest (v24) released late-January 2019. If you’ve been using Jest, you will definitely want to checkout their announcement that provides a great overview of all the new features! TypeScript Those of you that are using TypeScript, this new version of Create React App will detect and lint .ts files. This seems like a huge gesture of support for TypeScript, especially considering that Flow has less comprehensive linting rules. 
These are the default linting rules that come with Create React App v3:

'@typescript-eslint/no-angle-bracket-type-assertion': 'warn',
'@typescript-eslint/no-array-constructor': 'warn',
'@typescript-eslint/no-namespace': 'error',
'@typescript-eslint/no-unused-vars': [
  'warn',
  {
    args: 'none',
    ignoreRestSiblings: true,
  },
]

Visual Studio Code

Lastly, if you use Visual Studio Code there's finally support for baseUrl in your jsconfig.json and tsconfig.json file. This means you can use absolute imports:

import DashboardContainer from '../../../containers/DashboardContainer' // 👈 this...
import DashboardContainer from 'containers/DashboardContainer' // 👈 becomes this!

This allows you to change the "lookup" priority from node_modules to your src folder. Normally, it'd look for a containers package inside your node_modules folder.

📝 Thanks for reading! For the official release notes go here
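For reference, a minimal jsconfig.json enabling those absolute imports could look like this (tsconfig.json takes the same compilerOptions; the baseUrl value of src is an assumption matching the example above):

```json
{
  "compilerOptions": {
    "baseUrl": "src"
  },
  "include": ["src"]
}
```

With baseUrl set to src, an import of 'containers/DashboardContainer' resolves to src/containers/DashboardContainer before any node_modules lookup.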
https://www.digitalocean.com/community/tutorials/react-take-a-tour-create-react-app-version-3
See attached patch file for Atom 0.8-alpha patch for review, from technophilia@radgeek.com

I think I may have come up with a decent candidate for a solution to the question of how to represent elements that allow for multiple uses (e.g. categories, Atom link elements, etc.) and also of how to represent significant attributes, without breaking existing software that uses Magpie.

When you have multiple instances of an element, the client can access them using ids with a counter attached (so the first category on an RSS item is in `$item['category']`, the second in `$item['category#2']`, the third in `$item['category#3']`, and so on; the total number of categories for the item can be found in `$item['category#']`). This has the advantage of allowing for multiple categories, enclosures, etc. while providing a sensible default (the first one) to clients that were written with the expectation of receiving a single element only.

Similarly, attributes are now available to clients that want to peruse them using a bit of syntax lifted from XPath: if you want to know the length attribute of the first enclosure element for an item, you can find it at `$item['enclosure@length']` (and if you want to find the length attribute of the second enclosure, just combine the syntaxes to look at `$item['enclosure#2@length']`). If you need a list of all the attributes on an element, they can be found, separated by commas, at `$item['element@']`.

In terms of concrete features, the main highlights of my revision are:

1. Supports most of Atom 1.0 and normalizes between 0.3 and 1.0 elements.
2. Supports multiple categories, using Atom 1.0 syntax, RSS 2.0 syntax, or dc:subject.
3. Supports RSS 2.0 and Atom 1.0 enclosures (making either representation available to you through normalize()).
4. Supports the use of a namespaced XHTML body or div to include full content for items; some RSS 2.0 feeds (e.g. Sam Ruby's) don't provide full content any other way.
You can get the content from `$item['xhtml']['body']` or `$item['xhtml']['div']`. (I don't attempt to normalize this with the other content constructs; I don't know whether these are supposed to be semantically equivalent to those other constructs or not.)

5. Supports inheritance of feed author(s), from either <atom:source> or <atom:feed>, to <atom:entry> elements that don't have author(s) listed.
6. Fixes some potential landmines in the handling of namespaces and namespaced XHTML along the way.
7. parse_w3cdtf now accepts, and tries to make something of, W3C coarse-grained dates that have the time omitted, or the day-of-month and the time omitted, or the month and day-of-month and time omitted. (According to the W3C date-time format spec, these are valid dates; since we need the fine-grained information to generate a Unix timestamp, parse_w3cdtf uses values based on the present moment, so 2004 is parsed as this moment one year ago, 2004-05 as this time on the 11th of May one year ago, 2005-09-25 as this time on the 25th of September, etc.)
8. Added in a bugfix for the implementation of array_change_key_case() in CVS (an assignment was used when a comparison was meant).

Nobody/Anonymous 2005-10-13
Patch file from CVS HEAD to 0.8-alpha
http://sourceforge.net/p/magpierss/patches/7/
One easy way of determining how long it takes to return a page from Seam, and/or JSF is to use a phase listener. The phase listener below logs the end of each phase of the JSF lifecycle and measures the time from the start of the RESTORE_VIEW phase (the first phase in the lifecycle and the start of the request) to the end of the RENDER_RESPONSE phase, which is the last one. Logging the end of each stage of the cycle lets you see what else is going on during each phase of the cycle.

public class LogPhaseListener implements PhaseListener {

    public long startTime;

    private static final LogProvider log = Logging
            .getLogProvider(LogPhaseListener.class);

    public void afterPhase(PhaseEvent event) {
        if (event.getPhaseId() == PhaseId.RENDER_RESPONSE) {
            long endTime = System.nanoTime();
            long diffMs = (long) ((endTime - startTime) * 0.000001);
            if (log.isDebugEnabled()) {
                log.debug("Execution Time = " + diffMs + "ms");
            }
        }
        if (log.isDebugEnabled()) {
            log.debug("Executed Phase " + event.getPhaseId());
        }
    }

    public void beforePhase(PhaseEvent event) {
        if (event.getPhaseId() == PhaseId.RESTORE_VIEW) {
            startTime = System.nanoTime();
        }
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}

To use this, simply add the class to your project and insert a new phase listener in the faces-config.xml file.

<lifecycle>
    <phase-listener>package.name.LogPhaseListener</phase-listener>
</lifecycle>

While it may not be totally accurate, it at least gives you an idea of the scale of the duration of a request (i.e. 68ms versus 394ms). I've used this fairly effectively in a few projects to cut out some bottlenecks as well as comparing and contrasting different JSF frameworks.
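The timing arithmetic in the listener is easy to sanity-check in isolation. Here is a plain-Java sketch of the same start/convert bookkeeping, free of any JSF types (the class and method names are mine, not part of the listener above):

```java
// Stand-alone version of the listener's timing arithmetic: capture
// System.nanoTime() at the start, then convert the difference to a
// whole number of milliseconds (nanoseconds * 0.000001, truncated).
class RequestTimer {
    private long startTime;

    public void start() {
        startTime = System.nanoTime();
    }

    public long elapsedMs() {
        return (long) ((System.nanoTime() - startTime) * 0.000001);
    }

    public static void main(String[] args) throws InterruptedException {
        RequestTimer t = new RequestTimer();
        t.start();
        Thread.sleep(50); // stand-in for the JSF lifecycle doing its work
        System.out.println("Execution Time = " + t.elapsedMs() + "ms");
    }
}
```

As the post notes, nanoTime deltas are only approximate wall-clock measures, but they are plenty for spotting order-of-magnitude differences between requests.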
http://www.andygibson.net/blog/tag/java/page/6/
To use this module you will need to copy the System.Web.Silverlight dll (you can get this from the source code version) to the DotNetNuke "bin" folder. You also need to add the MIME Type to your web server.

DotNetNuke Module (and source code): DNNSilverlightHelloworld_02.00.00_Install.zip
Silverlight Application (source code): DNNSilverlightHelloworld.zip

You have to begin somewhere. The goal of this tutorial is to walk you through creating a simple Silverlight module in DotNetNuke. While it is possible to create Silverlight modules using Visual Web Developer Express 2008, this tutorial will use Visual Studio 2008 because it offers a level of integration with Expression Blend (a tool that is vital for easily creating XAML) that makes development very easy.

You will first need to set up a DotNetNuke development environment. You will want to use this environment to develop your DotNetNuke Silverlight modules. You can later create a module package for installation in another DotNetNuke production web site. Follow the appropriate link to set up your DotNetNuke development environment:

Note: If you are using IIS you also need to add the MIME Type to your web server.

First you will open Visual Studio and use it to open your DotNetNuke website. Next, you will create a Silverlight project. You will then open the .XAML file of the Silverlight project in Expression Blend. You will use Expression Blend and Visual Studio to create the Silverlight application. You will then create a DotNetNuke module that will display the Silverlight application in the DotNetNuke web site.

Use Visual Studio 2008 to open the DotNetNuke site (File then Open Web Site...):

If the site has not already been upgraded to ASP.NET 3.5 you may get a message like this: Click Yes. This will add needed changes to the web.config. The DotNetNuke site is now open.

From the toolbar, select File then Add then New Project... The Silverlight project will open up.

Right-click on Page.xaml and select Open in Expression Blend...
The project will open in Expression Blend. Click the Project tab in the upper right-hand corner of the screen to see the file listing.

Double-click on the Button control. The button will appear on the page. Click the Properties tab to switch to the properties for the button. Change the Name to btnClickHere, the Height (under Layout) to 50, and the Content (under the Miscellaneous section) to Click Here. The button should look like the image above.

Double-click on the TextBox control to add it to the page. Click on the Direct Selection tool and click on the TextBox (that was just added to the page) to select it. In its Properties, enter HelloWorld for its Name. Also, enter Hello World! for its Text. Drag the HelloWorld TextBox so that it is outside and to the left of the main canvas.

From the menu bar select Window then Active Workspace then Animation Workspace. Ensure that the Interaction tab is selected and in the Objects and Timeline menu, click the " " button to create a new storyboard. Name the Storyboard Storyboard1 and click OK.

Ensure the HelloWorld box is selected. Click the Record Keyframe button. A key frame will appear for the HelloWorld control. Drag the yellow time line bar to 0:01.000 and move the HelloWorld box to the center of the main canvas. Click the Record Keyframe button. In the Properties for the HelloWorld box, select Scale in the Transform section. Enter 4 for the X and the Y. The HelloWorld box will appear larger. Drag the yellow time line bar to 0:02.000. Click the Record Keyframe button.

Drag the yellow time line bar to 0:00.000. Drag the HelloWorld box so that it is outside and to the left of the main canvas. Click the ">" button to preview the animation.

From the toolbar select File then Save All. Close Expression Blend. Return to Visual Studio. You will see a box asking if you want to reload the project. Click Yes to All. You will also see a yellow bar asking you to update the designer. Click on the yellow bar to update the design surface.
In the XAML for Page.xaml, find x:Name="btnClickHere". Click immediately after it and press the space bar. An IntelliSense menu will appear. Double-click on Click. New Event Handler will appear. Press the Tab button. btnClickHere_Click will appear. Right-click on btnClickHere_Click and select Navigate to Event Handler. The screen will automatically switch to the Page.xaml.cs page and create a btnClickHere_Click method. Enter "this.Storyboard1.Begin();" in the method.

From the toolbar, select Build then Build DNNSilverlightHelloworld. The project should build without errors. Inside the DotNetNuke site the DNNSilverlightHelloworld.xap file will appear in the ClientBin directory. This file contains the Silverlight application that was just created.

You now need to create a DotNetNuke module to launch the Silverlight application. In Visual Studio, in the DotNetNuke website, right-click on the DesktopModules folder and select New Folder. Name the folder DNNSilverlightHelloworld. Right-click on the DNNSilverlightHelloworld.xap file in the ClientBin directory and select Copy. Paste a copy of the file in the DNNSilverlightHelloworld folder.

Right-click on the DNNSilverlightHelloworld folder and select Add New Item. From the Add New Item box, select the Web User Control template, enter View.ascx for the Name, select Visual C# for the Language, and check the box next to Place code in separate file.

When the View.ascx page opens, switch to source view and replace all the code with the following code:

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="View.ascx.cs" Inherits="DotNetNuke.Modules.DNNSilverlightHelloworld.View" %>
<%@ Register Assembly="System.Web.Silverlight" Namespace="System.Web.UI.SilverlightControls" TagPrefix="asp" %>
<div align="left"><asp:Silverlight ... /></div>

Click the plus icon next to the View.ascx file in the Solution Explorer (under the DNNSilverlightHelloworld directory). Double-click on the View.ascx.cs file to open it.
Replace all the code with the following code and save the file:

using DotNetNuke;

namespace DotNetNuke.Modules.DNNSilverlightHelloworld
{
    public partial class View : DotNetNuke.Entities.Modules.PortalModuleBase
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}

While logged into your DotNetNuke site as "host", in the web browser, select "Host" from the menu bar, then select "Module Definitions". Click the black arrow that is pointing down to make the fly-out menu appear. On that menu select "Create Module Definition".

In the Edit Module Definitions menu, click CREATE. Enter "DNNSilverlightHelloworld" for NEW DEFINITION, then click "Add Definition". Next, click "Add Control". In the Edit Module Control menu, click UPDATE.

Click on the first page of the website and from the MODULE drop-down select "DNNSilverlightHelloworld", then click ADD. The module will now appear.
Definition at line 105 of file gnrc/netif.h.

#include <net/gnrc/netif.h>

Gets an option from the network interface. Use gnrc_netif_get_from_netdev() to just get options from gnrc_netif_t::dev.

Returns the number of bytes in data, or an error if max_len is less than the required space, or if opt is not supported.

Definition at line 174 of file gnrc/netif.h.

Initializes network interface beyond the default settings.

Precondition: netif != NULL

This is called after the default settings were set, right before the interface's thread starts receiving messages. It is not necessary to lock the interface's mutex gnrc_netif_t::mutex, since the thread will already lock it. Leave NULL if you do not need any special initialization.

Definition at line 118 of file gnrc/netif.h.

Message handler for network interface.

This message handler is used when the network interface needs to handle message types beyond the ones defined in netapi. Leave NULL if this is not the case.

Definition at line 203 of file gnrc/netif.h.

Receives a packet from the network interface.

Precondition: netif != NULL

Definition at line 158 of file gnrc/netif.h.

Definition at line 140 of file gnrc/netif.h.

Definition at line 191 of file gnrc/netif.h.
Getting Started - Your First Java Program

You should have already installed the Java Development Kit (JDK) and written a Hello-world program. Otherwise, read "How to Install JDK".

Let us revisit the Hello-world program that prints a message "Hello, world!" to the display console.

Step 1: Write the Source Code: Enter the following source code, which defines a class called "Hello", using a programming text editor (such as TextPad or NotePad++ for Windows; jEdit or gedit for Mac OS X; gedit for Ubuntu). Filename and classname are case-sensitive.

Step 2: Compile the Source Code: Compile the source code "Hello.java" into Java bytecode "Hello.class" using the JDK compiler "javac". Start a CMD shell (Windows) or Terminal (UNIX/Linux/Mac OS X) and issue these commands:

// Change directory (cd) to the directory containing the source code "Hello.java"
javac Hello.java

Step 3: Run the Program: Run the program using the Java Runtime "java", by issuing this command:

java Hello

A (single-line) comment begins with // and lasts until the end of the current line (as in Lines 4, 5, and 6).

public class Hello { ...... }

The basic unit of a Java program is a class. A class called "Hello" is defined via the keyword "class" in Lines 4-8. The braces {......} enclose the body of the class. In Java, the name of the source file must be the same as the name of the class, with a mandatory file extension of ".java". Hence, this file MUST be saved as "Hello.java", case-sensitive.

public static void main(String[] args) { ...... }

Lines 5-7 define the so-called main() method, which is the entry point for program execution. Again, the braces {......} enclose the body of the method, which contains programming statements.

System.out.println("Hello, world!");

In Line 6, the programming statement System.out.println("Hello, world!") is used to print the string "Hello, world!" to the display console. A string is surrounded by a pair of double quotes and contains text.
The text will be printed as it is, without the double quotes. A programming statement ends with a semi-colon (;).

Java Terminology and Syntax

Comment: A multi-line comment begins with /* and ends with */, and may span multiple lines. An end-of-line (single-line) comment begins with // and lasts till the end of the current line. Comments are NOT executable statements and are ignored by the compiler. But they provide useful explanation and documentation. I strongly suggest that you write comments liberally to explain your thought and logic.

Statement: A programming statement performs a single piece of programming action. It is terminated by a semi-colon (;), just like an English sentence is ended with a period, as in Line 6.

Block: A block is a group of programming statements enclosed by a pair of braces {}. This group of statements is treated as one single unit. There are two blocks in the above program: one contains the body of the class Hello; the other contains the body of the main() method. There is no need to put a semi-colon after the closing brace.

Whitespaces: Blank, tab, and newline are collectively called whitespace. Extra whitespaces are ignored, i.e., only one whitespace is needed to separate the tokens. Nonetheless, extra whitespaces improve the readability, and I strongly suggest you use extra spaces and newlines liberally.

Case Sensitivity: Java is case sensitive - a ROSE is NOT a Rose, and is NOT a rose. The filename is also case-sensitive. Provide comments in your program!

Output via System.out.println() and System.out.print()

You can use System.out.println() (print-line) or System.out.print() to print a message to the display console:

System.out.println(aString) (print-line) prints the given aString, and advances the cursor to the beginning of the next line.
System.out.print(aString) prints aString but places the cursor after the printed string.

Try the following program and explain the output produced. The expected outputs are:

Hello, world!
Hello, world!Hello, world!Hello, world!
Exercises - Print each of the following patterns. Use one System.out.println(...) (print-line) statement for each line of outputs.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * (a) (b) (c)

Let's Write a Program to Add a Few Numbers

Let us write a program to add five integers and display their sum, as follows:

The expected output is:

The sum is 165

How It Works?

int number1 = 11;
int number2 = 22;
int number3 = 33;
int number4 = 44;
int number5 = 55;

These five statements declare five int (integer) variables called number1, number2, number3, number4, and number5; and assign values of 11, 22, 33, 44 and 55 to the variables, respectively, via the so-called assignment operator '='. You could also declare many variables in one statement separated by commas, e.g.,

int number1 = 11, number2 = 22, number3 = 33, number4 = 44, number5 = 55;

int sum; declares an int (integer) variable called sum, without assigning an initial value.

sum = number1 + number2 + number3 + number4 + number5; computes the sum of number1 to number5 and assigns the result to the variable sum. The symbol '+' denotes arithmetic addition, just like Mathematics.

System.out.print("The sum is ");
System.out.println(sum);

Line 13 prints a descriptive string. A String is surrounded by double quotes, and will be printed as it is.

What is a Program?

A program is a sequence of instructions (called programming statements), executing one after another in a predictable manner. Sequential flow is the most common and straight-forward, where programming statements are executed in the order that they are written - from top to bottom in a sequential manner, as illustrated in the following flow chart.

Example

The following program prints the area and circumference of a circle, given its radius. Take note that the programming statements are executed sequentially - one after another in the order that they were written.
The expected outputs are:

The radius is 1.2
The area is 4.523893416
The circumference is 7.5398223600000005

How It Works?

double radius, area, circumference; declares three double variables radius, area and circumference. A double variable can hold a real number (or floating-point number, with an optional fractional part).

final double PI = 3.14159265; declares a double variable called PI and assigns a value. PI is declared final, i.e., its value cannot be changed.

radius = 1.2; assigns a value (real number) to the double variable radius.

area = radius * radius * PI;
circumference = 2.0 * radius * PI;

compute the area and circumference, based on the value of radius and PI.

System.out.print("The radius is ");
System.out.println(radius);
System.out.print("The area is ");
System.out.println(area);
System.out.print("The circumference is ");
System.out.println(circumference);

print the results with proper descriptions. Take note that the programming statements inside the main() method are executed one after another, in the order that they are written.

What is a Variable?

Computer programs manipulate (or process) data. A variable is a storage location (like a house, a pigeon hole, a letter box) that stores a piece of data for processing. It is called variable because you can change the value stored inside. More precisely, a variable is a named storage location that stores a value of a particular data type. In other words, a variable has a name, a type, and stores a value of that type.

- A variable has a name (aka identifier), e.g., radius, area, age, height. The name is needed to uniquely identify each variable, so as to assign a value to the variable (e.g., radius = 1.2), as well as to retrieve the value stored (e.g., radius * radius * 3.14159265).
- A variable has a type. Examples of type are:
  int: meant for integers (or whole numbers or fixed-point numbers), such as 123 and -456;
  double: meant for floating-point numbers or real numbers, such as 3.1416, -55.66, having an optional decimal point and fractional part.
String: meant for texts such as "Hello", "Good Morning!". Strings shall be enclosed with a pair of double quotes.

- A variable can store a value of the declared type. It is important to take note that a variable in most programming languages is associated with a type, and can only store a value of that particular type. For example, an int variable can store an integer value such as 123, but NOT a real number; a double variable can store a real number (which includes integer as a special form of real number); a String variable stores texts.

Declaring and Using Variables

To use a variable, you need to first declare its name and type, in one of the following syntaxes:

type varName;                  // Declare a variable of a type
type varName1, varName2, ...;  // Declare multiple variables of the same type
type varName = initialValue;   // Declare a variable of a type, and assign an initial value
type varName1 = initialValue1, varName2 = initialValue2, ...;  // Declare variables with initial values

For examples,

int sum;               // Declare a variable named "sum" of the type "int" for storing an integer.
                       // Terminate the statement with a semi-colon.
double average;        // Declare a variable named "average" of the type "double" for storing a real number.
int number1, number2;  // Declare 2 "int" variables named "number1" and "number2", separated by a comma.
int height = 20;       // Declare an "int" variable, and assign an initial value.
String msg = "Hello";  // Declare a "String" variable, and assign an initial value.

Take note that:

- Each variable declaration statement begins with a type name, and works for only that type. That is, you cannot mix 2 types in one variable declaration statement.
- Each statement is terminated with a semi-colon (;).
- In multiple-variable declaration, the names are separated by commas (,).
- The symbol '=', known as the assignment operator, can be used to assign an initial value to a variable.

More examples: declaring 2 int variables in one statement, separated by a comma.
double radius = 1.5;  // Declare a variable named "radius", and initialize to 1.5.
String msg;           // Declare a variable named msg of the type "String"
msg = "Hello";        // Assign a double-quoted text string to the String variable.
int number;           // ERROR: A variable named "number" has already been declared.
sum = 55.66;          // ERROR: The variable "sum" is an int. It cannot be assigned a double.
sum = "Hello";        // ERROR: The variable "sum" is an int. It cannot be assigned a string.

Take note that:

- Each variable can only be declared once. (You cannot have two houses with the same address.)
- In Java, you can declare a variable anywhere inside your program, as long as it is declared before it is being used.
- Once a variable is declared, you can assign and re-assign a value to that variable, via the assignment operator '='.
- Once the type of a variable is declared, it can only store a value of that particular type. For example, an int variable can hold only integers such as 123, and NOT floating-point numbers such as -2.17 or text strings such as "Hello".
- Once declared, the type of a variable CANNOT be changed.

x = x + 1?

Assignment in programming (denoted as '=') is different from equality in Mathematics (also denoted as '='). E.g., "x = x + 1" is invalid in Mathematics. However, in programming, it means compute the value of x plus 1, and assign the result back to variable x. "x + y = 1" is valid in Mathematics, but is invalid in programming. In programming, the RHS (Right-Hand Side) of '=' has to be evaluated to a value, while the LHS (Left-Hand Side) shall be a variable. That is, evaluate the RHS first, then assign the result to the LHS. Some languages use := as the assignment operator to avoid confusion with equality.
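The assignment-versus-equality point is language-independent. Here is the same idea checked in Python (my own example, not part of the original Java tutorial):

```python
# "x = x + 1" is a valid assignment: evaluate the right-hand side first,
# then store the result back into the variable on the left.
x = 5
x = x + 1
assert x == 6

# Re-assignment is allowed; the variable simply holds the newest value.
x = x + 1
assert x == 7
```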
Basic Arithmetic Operations

The basic arithmetic operations are addition, subtraction, multiplication, division and remainder. These are binary operators that take two operands (e.g., x + y); while negation (e.g., -x), increment and decrement (e.g., ++x, --x) are unary operators that take only one operand.

Example

The following program illustrates these arithmetic operations:

The expected outputs are:

The sum, difference, product, quotient and remainder of 98 and 5 are 103, 93, 490, 19, and 3
number1 after increment is 99
number2 after decrement is 4
The new quotient of 99 and 4 is 24

How It Works?

int number1 = 98;
int number2 = 5;
int sum, difference, product, quotient, remainder;

declare all the variables number1, number2, sum, difference, product, quotient and remainder needed in this program. All variables are of the type int (integer).

System.out.println("number1 after increment is " + number1);
System.out.println("number2 after decrement is " + number2);

print the new values stored after the increment/decrement operations. Take note that instead of using many print() statements as in Lines 18-31, we could simply place all the items (text strings and variables) into one println(), with the items separated by '+'. In this case, '+' does not perform addition. Instead, it concatenates or joins all the items together.

Exercise: combine Lines 18-31 into fewer println() statements.

The expected output is:

The sum from 1 to 1000 is 500500

How It Works?

int lowerbound = 1;
int upperbound = 1000;

declare two int variables to hold the upperbound and lowerbound, respectively.

int sum = 0; declares an int variable to hold the sum. This variable will be used to accumulate over the steps in the repetitive loop, and is thus initialized to 0.

int number = lowerbound;
while (number <= upperbound) {
    sum = sum + number;
    ++number;
}

This is the so-called while-loop.
A while-loop takes the following syntax:

initialization-statement;
while (test) {
    loop-body;
}
next-statement;

As illustrated in the flow chart, the initialization statement is first executed. The test is then checked: if the test is true, the loop-body is executed and the test is checked again; if the test is false, the loop exits and program execution continues with the next statement.

Conditional (or Decision)

What if you want to sum all the odd numbers and also all the even numbers between 1 and 1000? There are many ways to do this. You could declare two variables: sumOdd and sumEven. You can then use a conditional statement to check whether the number is odd or even, and accumulate the number into the respective sums. The program is as follows:

The expected outputs are:

The sum of odd numbers from 1 to 1000 is 250000
The sum of even numbers from 1 to 1000 is 250500
The difference between the two sums is -500

How It Works?

int lowerbound = 1, upperbound = 1000; declares and initializes the upperbound and lowerbound.

int sumOdd = 0;
int sumEven = 0;

declare two int variables named sumOdd and sumEven and initialize them to 0, for accumulating the odd and even numbers, respectively.

if (number % 2 == 0) {
    sumEven += number;
} else {
    sumOdd += number;
}

This is a conditional statement. The conditional statement can take one of these forms: if-then or if-then-else.

We use the remainder (or modulus) operator (%) to compute the remainder of number divided by 2. We then compare the remainder with 0 to test for an even number. Furthermore, sumEven += number is a shorthand for sumEven = sumEven + number.

More on Floating-Point Numbers

A floating-point number can be expressed in scientific notation (e.g., 1.2e3 or 5.5E-5), where e or E denotes the exponent of base 10.

Example

The expected outputs are:

37.5 degree C is 99.5 degree F.
100.0 degree F is 37.77777777777778 degree C.
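The odd/even summation program described above can be sketched in Python as a quick cross-check of the stated outputs (my example, not the tutorial's Java listing):

```python
# Accumulate odd and even numbers between 1 and 1000 separately,
# mirroring the Java while-loop with an if-then-else inside it.
lowerbound, upperbound = 1, 1000
sum_odd = 0
sum_even = 0
number = lowerbound
while number <= upperbound:
    if number % 2 == 0:      # remainder 0 means even
        sum_even += number
    else:
        sum_odd += number
    number += 1              # Python's counterpart of ++number

assert sum_odd == 250000
assert sum_even == 250500
assert sum_odd - sum_even == -500
```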
Mixing int and double, and Type Casting

Although you can use a double to keep an integer value (e.g., double count = 5.0, as floating-point numbers include integers as a special case), you should use an int for integers, as an int is far more efficient than a double in terms of running time, accuracy and storage requirement.

Arithmetic operations on two ints produce an int, while arithmetic operations on two doubles produce a double. Hence, 1/2 → 0 (take note that this is truncated to 0, not 0.5).

For examples,

int i = 3;
double d;
d = i;             // 3 → 3.0, d = 3.0
d = 88;            // 88 → 88.0, d = 88.0
double nought = 0; // 0 → 0.0; there is a subtle difference between int 0 and double 0.0

However, you CANNOT assign a double value directly to an int variable. This is because the fractional part could be lost, and the Java compiler signals an error in case you were not aware. For example,

double d = 5.5;
int i;
i = d;    // error: possible loss of precision
i = 6.6;  // error: possible loss of precision

Type Casting Operators

To assign a double value to an int variable, you need to explicitly invoke a type-casting operation to truncate the fractional part, as follows:

(new-type)expression;

For example,

double d = 5.5;
int i;
i = (int) d;       // Type-cast the value of double d, which returns an int value,
                   // and assign the resultant int value to int i.
                   // The value stored in d is not affected.
i = (int) 3.1416;  // i = 3

Similarly, (double) 5 takes the int value of 5 and returns 5.0 (of type double).

Example

Try the following program and explain the outputs produced:

The expected outputs are:

The sum from 1 to 1000 is 500500
Average 1 is 500.0 <== incorrect
Average 2 is 500.5
Average 3 is 500.0 <== incorrect
Average 4 is 500.5
Average 5 is 500.0 <== incorrect

The first average is incorrect, as int/int produces an int (of 500), which is converted to double (of 500.0) to be stored in average (of double). For the second average, the value of sum (of int) is first converted to double. Subsequently, double/int produces a double. Take note that 1/2 gives 0, but 1.0/2 gives 0.5.
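The int-versus-double division rules have a close Python analogue: // truncates like Java's int/int, while / always produces a float. A quick numeric check of the averages discussed above (my own sketch, not from the tutorial):

```python
total = sum(range(1, 1001))   # 1 + 2 + ... + 1000
assert total == 500500

assert total // 1000 == 500   # truncating division, like Java's int/int
assert total / 1000 == 500.5  # true division, like Java's double arithmetic

assert 1 // 2 == 0            # matches Java's 1/2 with int operands
assert 1.0 / 2 == 0.5         # matches Java's 1.0/2

assert int(5.5) == 5          # like the (int) type-cast: fraction truncated
```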
Try computing the sum for n = 1000, 5000, 10000, 50000, 100000.

Hints:

public class HarmonicSeriesSum {   // Saved as "HarmonicSeriesSum.java"
    public static void main(String[] args) {
        int numTerms = 1000;
        double sum = 0.0;          // For accumulating the sum in double
        int denominator = 1;
        while (denominator <= numTerms) {
            // Beware that int/int gives int
            ......
            ++denominator;         // next
        }
        // Print the sum
        ......
    }
}

- Modify the above program (called GeometricSeriesSum) to compute the sum of this series: 1 + 1/2 + 1/4 + 1/8 + .... (for 1000 terms). Hints: use the post-processing statement denominator *= 2, which is a shorthand for denominator = denominator * 2.

Java References & Resources
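A quick numeric check of the two series exercises, sketched in Python rather than the tutorial's Java (the sanity bounds below are my own):

```python
num_terms = 1000

# Harmonic series: 1 + 1/2 + 1/3 + ... + 1/1000
harmonic = 0.0
denominator = 1
while denominator <= num_terms:
    harmonic += 1.0 / denominator
    denominator += 1
assert 7.48 < harmonic < 7.49      # H(1000) is roughly 7.4855

# Geometric series: 1 + 1/2 + 1/4 + ... (1000 terms), approaching 2
geometric = 0.0
denominator = 1
for _ in range(num_terms):
    geometric += 1.0 / denominator
    denominator *= 2               # shorthand for denominator = denominator * 2
assert abs(geometric - 2.0) < 1e-9
```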
Working with Bluetooth APIs in Windows Phone 8

Introduction

Microsoft introduced support for Bluetooth in Windows Phone 8. Bluetooth is a technology that enables devices to communicate with each other wirelessly within a proximity of about 10 meters or less. The Bluetooth APIs in Windows Phone 8 support app-to-app communication as well as app-to-device communication.

Bluetooth Basics

As described above, the Windows Phone 8 Bluetooth APIs support two scenarios.

(1) App-to-app scenario – In this scenario, Bluetooth APIs are used by a Windows Phone 8 application to discover other applications whose service is desired to be used by the Windows Phone 8 application. After a connection is made, communication happens through a stream socket. Applications will need to declare the proximity capability: ID_CAP_PROXIMITY.

(2) App-to-device scenario – In this scenario, Bluetooth APIs are used by a Windows Phone 8 application to discover devices whose service is desired to be used by the Windows Phone 8 application. After a connection is made, communication happens through a stream socket. Applications will need to declare the proximity capability, ID_CAP_PROXIMITY, as well as the networking capability, ID_CAP_NETWORKING.

Bluetooth Support in Windows Phone 8

Bluetooth 3.1 is supported in Windows Phone 8. There are various Bluetooth user profiles supported in Windows Phone 8.

(1) Audio/Video Remote Control Profile (AVRCP 1.4)
(2) Advanced Audio Distribution Profile (A2DP 1.2)
(3) Hands Free Profile (HFP 1.5)
(4) Phone Book Access Profile (PBAP 1.1)
(5) Object Push Profile (OPP 1.1)

Note that there is no emulator support for Bluetooth. You will need a real physical device to work with Bluetooth.

Hands-On

Create a new Visual Studio 2012 project called WPBluetoothDemo.

Create a new Visual Studio 2012 project

When prompted, select Windows Phone OS 8.0 as the target Windows Phone OS version.
Select the Windows Phone Platform

Next, we declare the capabilities ID_CAP_PROXIMITY and ID_CAP_NETWORKING, which are needed for Bluetooth applications. Open WMAppManifest.xml by double-clicking it in the Solution Explorer.

Open WMAppManifest.xml

Select Capabilities

Include the following namespaces in the code behind for the MainPage.xaml:

using Windows.Networking.Proximity;
using Windows.Networking.Sockets;

Next, add two button controls: one to start discovery of peer devices and the second for connecting to the peer. Also, add a textbox to display the name of the first peer device that is discovered.

Add two button controls

Write the following code for the Click event of the "Discover Peers" button.

private async void buttonDiscoverDevices_Click(object sender, RoutedEventArgs e)
{
    PeerFinder.AlternateIdentities["Bluetooth:Paired"] = "";
    var peerList = await PeerFinder.FindAllPeersAsync();
    if (peerList.Count > 0)
    {
        textBoxPeer.Text = peerList[0].DisplayName;
    }
    else
        MessageBox.Show("No active peers");
}

The above code specifies that only paired Bluetooth devices are to be discovered. This means that a Bluetooth device will need to be paired with your test phone device before it can be discovered.

Next, when the user clicks the "Connect" button, we want to make a StreamSocket connection to the selected peer device. In our case, we will assume that we will connect to the first available peer device. Write the following code for the Click event of the "Connect" button.

private async void buttonConnect_Click(object sender, RoutedEventArgs e)
{
    var peerList = await PeerFinder.FindAllPeersAsync();
    if (peerList.Count > 0)
        textBoxPeer.Text = peerList[0].DisplayName;
    StreamSocket socket = new StreamSocket();
    await socket.ConnectAsync(peerList[0].HostName, "0");
}

Finally, we need to prepare our application to respond to a connection request. For that, we will wire up the PeerFinder.ConnectionRequested event to an event handler. We will set up this wiring in the Page Load event.
The code for that is below:

private void PhoneApplicationPage_Loaded_1(object sender, RoutedEventArgs e)
{
    PeerFinder.ConnectionRequested += PeerFinder_ConnectionRequested;
}

void PeerFinder_ConnectionRequested(object sender, ConnectionRequestedEventArgs args)
{
    Connect(args.PeerInformation);
}

async void Connect(PeerInformation peerToConnect)
{
    StreamSocket socket = await PeerFinder.ConnectAsync(peerToConnect);
}

Our application is now complete. Deploy the application to two devices and run them. Pair the devices to each other before you click the "Discover Peers" button on one of the devices. Once the peer is discovered, you can click Connect to connect to the other device over Bluetooth. In case you have trouble compiling the code, a sample listing of this project is available here.

Summary

In this article, we learned about Bluetooth support in Windows Phone 8 and how to build a simple Windows Phone 8 Bluetooth application.
Created on 2014-07-16 00:13 by ppperry, last changed 2014-10-13 07:21 by serhiy.storchaka. This issue is now closed. In IDLE: >>> code = compile("dummy_code", "<test>", "exec") >>> pickle.dumps(code) "cidlelib.rpc\nunpickle_code\np0\n(S\\x00\\x00\\x00N(\\x01\\x00\\x00\\x00t\\n\\x00\\x00\\x00dummy_code(\\x00\\x00\\x00\\x00(\\x00\\x00\\x00\\x00(\\x00\\x00\\x00\\x00s\\x06\\x00\\x00\\x00<test>t\\x08\\x00\\x00\\x00<module>\\x01\\x00\\x00\\x00s\\x00\\x00\\x00\\x00'\np1\ntp2\nRp3\n." Outside of IDLE: >>> code = compile("dummy_code", "<test>", "exec") >>> pickle.dumps(code) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python27\lib\pickle.py", line 1374, in dumps Pickler(file, protocol).dump(obj) File "C:\Python27\lib\pickle.py", line 224, in dump self.save(obj) File "C:\Python27\lib\pickle.py", line 306, in save rv = reduce(self.proto) File "C:\Python27\lib\copy_reg.py", line 70, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle code objects Also, the error probably should be a pickle.PicklingError, not a TypeError. Code is really a code object, so the compile does not seem to be the problem. In 2.7, on Win7, I get exactly the same output after removing the possibly spurious space in the string posted. (ppperry), what system (OS) are you using. (In the future, please report both system and Python version). 3.4 gives a mild variation b'\x80\x03cidlelib.rpc\nunpickle_code\nq\x00CZ\xe3\x00\x00\x00\xN)\x01Z\ndummy_code\xa9\x00r\x01\x00\x00\x00r\x01\x00\x00\x00z\x06<test>\xda\x08<module>\x01\x00\x00\x00s\x00\x00\x00\x00q\x01\x85q\x02Rq\x03.' I ran both test_pickle and test_pickletools from both the command line >python -m test -v test_pickletools and from an Idle editor with F5, with no errors and apparently identical output. 
import pickle
code = compile("dummy_code", "this is a file name", "exec")
print(type(code))
print('pickle: ', pickle.dumps(code))

produces

<class 'code'>
pickle: b'\x80\x03cidlelib.rpc ... \x13this is a file name\...'

which shows that the bytes come from pickle, not from a garbled traceback. Since the bytes look like an actual pickle, I tried unpickling and it works:

import pickle
code = compile("'a string'", "", "eval")
pick = pickle.dumps(code)
code2 = pickle.loads(pick)
print(eval(code), eval(code2))

a string a string

import pickle
code1 = compile("print('a string')", "", "exec")
pick = pickle.dumps(code1)
code2 = pickle.loads(pick)
exec(code1)
exec(code2)

a string
a string

I do not see any Idle bug here, so this might be closed unless you want to make this a pickle enhancement issue to work on code objects in the standard interpreter.

The 3.4 interpreter traceback gives more info on why the pickle fails there.

>>> code0 = compile("'abc'", '', 'eval')
>>> pick = pickle.dumps(code0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
_pickle.PicklingError: Can't pickle <class 'code'>: attribute lookup code on builtins failed

In Idle, __builtins__.code fails, just as in the interpreter, so pickle is doing something different, and more successful, in the slightly altered Idle user-process execution environment.

It works in IDLE because it registers a custom pickling for code objects, in idlelib.rpc:

copyreg.pickle(types.CodeType, pickle_code, unpickle_code)

where pickle_code / unpickle_code call marshal.dumps/loads. Although, I admit that this is weird. If idlelib.rpc is using this for transferring data between RPC instances, that's okay, but leaking the behaviour into IDLE's interactive interpreter is not that okay, because it leads to different results and expectancies between IDLE and Python's interactive interpreter.

I agree with Claudiu. IDLE should pickle with a private dispatch_table. Maybe something like the attached patch.
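The private-dispatch idea can be sketched as follows (hypothetical names; a minimal illustration of the technique, not the actual patch attached to this issue):

```python
import io
import marshal
import pickle
import types

def pickle_code(co):
    # Reduce a code object to (reconstructor, (marshalled bytes,)).
    return unpickle_code, (marshal.dumps(co),)

def unpickle_code(ms):
    return marshal.loads(ms)

class CodePickler(pickle.Pickler):
    # A per-class dispatch table: only picklers created from this class
    # know how to handle code objects; the global pickle.dumps() does not.
    dispatch_table = {types.CodeType: pickle_code}

def dumps(obj):
    f = io.BytesIO()
    CodePickler(f, protocol=2).dump(obj)
    return f.getvalue()

code = compile("6 * 7", "<demo>", "eval")
assert eval(pickle.loads(dumps(code))) == 42   # round-trips via marshal

try:                                           # plain pickle still refuses
    pickle.dumps(code)
except (pickle.PicklingError, TypeError):      # TypeError on old Pythons
    pass
else:
    raise AssertionError("code objects should not be picklable globally")
```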
It doesn't have tests, though; I didn't find any tests for idlelib.rpc anyway.

Instead of copying dispatch_table, use ChainMap.

Thanks, Serhiy. There is no unittest module for rpc yet. Should the pickle test include a test that pickling a code object fails, and with the proper exception? Is ppperry correct about PicklingError? ChainMap is new in 3.3, so 2.7 would need the copy version of the patch.

ppperry: Component Windows (or Macintosh) means Windows (or Mac) specific. The rpc code is general to all systems.

TypeError is raised only in Python 2; in Python 3 it's PicklingError.

Terry, can I do something to move this issue forward?

Here are the issues for me.

1. Except for your interest, this is a lower priority for me than most all of the other 125 Idle issues.

2. I am not familiar with pickling and wonder about the following in the patch. It deletes a statement, copyreg.pickle(types.CodeType, pickle_code, unpickle_code), that registers two functions. But the new code only registers pickle_code. Will message = pickle.loads(packet) then work? Maybe, since pickle_code has this: return unpickle_code, (ms,). Is this normal? Is registering unpickle_code not needed? Would it make any sense to wrap code objects in a CodePickler class with __getstate__ and __setstate__ methods?

3. The interprocess communication implemented in rpc.py is central to Idle's execution of user code. So I would want to be confident that a patch does not break it. What I would like is a test script (executed by hand is OK) that works now, fails with the deletion of the copyreg statement, and works again with the rest of the patch.

4. I have no idea if or from where a code object is sent. One way to search would be to grep idlelib for rpc calls. Another would be to add a print statement to pickle_code, start Idle from the console with 'python -m idlelib', and run through all the menu options that involve the user process while watching the console for messages.
So you could try one of the search methods to find a test. Or you can wait for someone who currently understands pickle well enough to review and apply the patches without a test.

2. It is normal. The third argument of copyreg.pickle() is not used now.

The patch LGTM.

Thank you for your feedback, Terry.

1. IDLE is behaving differently than the built-in interpreter. It should be higher priority, because it leads beginners into believing that code objects are picklable.

2. No, wrapping code objects in a CodePickler with __getstate__ et al. will not work. What is important in my implementation of CodePickler is that the dispatch_table is private, so it will not leak the private reduction function, as happens right now.

3. That doesn't make sense. What works now is pickling of code objects; removing the copyreg call, without my patch, will break the rpc as well. To test this, comment out the copyreg.pickle call in idlelib.rpc and just define a function in IDLE; you'll get a PicklingError. I added a small test file, which can be used to test what happens now and what happens after my patch.

4. Better, here's a traceback which can explain from where code objects are sent. The traceback is from inside pickle_code, using code.InteractiveInterpreter.
  File "C:\Python34\lib\idlelib\PyShell.py", line 1602, in main
    root.mainloop()
  File "C:\Python34\lib\tkinter\__init__.py", line 1069, in mainloop
    self.tk.mainloop(n)
  File "C:\Python34\lib\tkinter\__init__.py", line 1487, in __call__
    return self.func(*args)
  File "C:\Python34\lib\idlelib\MultiCall.py", line 179, in handler
    r = l[i](event)
  File "C:\Python34\lib\idlelib\PyShell.py", line 1188, in enter_callback
    self.runit()
  File "C:\Python34\lib\idlelib\PyShell.py", line 1229, in runit
    more = self.interp.runsource(line)
  File "C:\Python34\lib\idlelib\PyShell.py", line 671, in runsource
    return InteractiveInterpreter.runsource(self, source, filename)
  File "C:\Python34\lib\code.py", line 74, in runsource
    self.runcode(code)
  File "C:\Python34\lib\idlelib\PyShell.py", line 762, in runcode
    (code,), {})
  File "C:\Python34\lib\idlelib\rpc.py", line 256, in asyncqueue
    self.putmessage((seq, request))
  File "C:\Python34\lib\idlelib\rpc.py", line 348, in putmessage
    s = pickle.dumps(message)

Wanting to make sure that a patch does not break Idle makes perfect sense to me. As it turns out, running *any* code from the editor (even empty) works for these patches. Idle compiles the code to a code object *in the Idle process* and, if there are no compile errors, ships the code object to the user process via rpc to be executed in the user process. I confirmed this with a print inside pickle_code. With this clear, I have downloaded both patches to test on 2.7 and 3.4.

New changeset 90c62e1f3658 by Terry Jan Reedy in branch '3.4':
Issue #21986: Idle now matches interpreter in not pickling user code objects.

New changeset cb94764bf8be by Terry Jan Reedy in branch 'default':
Merge with 3.4: #21986, don't pickle user code objects.

Code objects get wrapped in 3 tuple layers before being 'pickled', so a private dispatch is the easiest solution. Since copyreg.dispatch_table has only two items to copy, I pushed a version of the first patch.
Since private dispatch tables are new in 3.3 and not in 2.7, I withdraw the idea of patching 2.7. It would be good to add a test of rpc.dumps().

The current code copies copyreg.dispatch_table at the moment of rpc import. But it can be changed later, and those changes will not affect the private copy. For example, the re module adds pickleability of compiled regex patterns. I think the use of ChainMap is the safer solution.

I agree that a test for dumps would be a good idea. Ditto for hundreds of other untested functions; I don't see this one as a priority. PyShell imports non-idlelib modules, including re, before idlelib modules such as rpc, so the re addition is there, though I strongly doubt that compiled regexes are ever sent to the user process. Since about 10 different messages of different types are sent as part of startup, the fact that Idle does start is strong evidence that dumps and much of the rest of rpc is ok. I would start a test of rpc by documenting the protocol and listing the message types so at least one of each could be tested.

> PyShell imports non-idlelib modules, including re, before idlelib modules,
> such as rpc. So the re addition is there, though I strongly doubt that
> compiled regexes are ever sent to the user process.

But we can't guarantee that this will always be so. Other stdlib modules can register picklers, and IDLE can import them in the future. This code is not protected against future changes in other places of the code.
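Pulling the thread's pieces together, the shape of the fix — a Pickler subclass whose private dispatch_table layers a code-object reducer over copyreg's global table via ChainMap, with marshal doing the actual serialization — can be sketched like this (a simplified sketch, not the exact idlelib.rpc source):

```python
import copyreg
import io
import marshal
import pickle
import types
from collections import ChainMap

def pickle_code(co):
    # The reducer names the function that rebuilds the code object,
    # so only pickle_code itself needs to be in the dispatch table.
    return unpickle_code, (marshal.dumps(co),)

def unpickle_code(ms):
    return marshal.loads(ms)

class CodePickler(pickle.Pickler):
    # Private dispatch table: only this Pickler subclass sees the
    # code-object reducer, so nothing leaks into copyreg.dispatch_table,
    # while later additions to the global table are still picked up
    # through the ChainMap fallback.
    dispatch_table = ChainMap({types.CodeType: pickle_code},
                              copyreg.dispatch_table)

def dumps(obj):
    f = io.BytesIO()
    CodePickler(f, pickle.HIGHEST_PROTOCOL).dump(obj)
    return f.getvalue()

co = compile("6 * 7", "<rpc>", "eval")
print(eval(pickle.loads(dumps(co))))  # round-trips through marshal

# Plain pickle is unaffected: pickle.dumps(co) still refuses code objects.
```

Only rpc's own dumps() gains the ability to ship code objects; user code calling pickle.dumps() in the same process keeps the default interpreter behavior.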
https://bugs.python.org/issue21986
Selenium Documentation
Release 1.0

Selenium Project

August 23,

Contents:

1. Note to the Reader – Docs Being Revised for Selenium 2.0!
2. Introduction
3. Selenium-IDE
4. Selenium 2.0 and WebDriver
5. WebDriver: Advanced Usage
6. Selenium 1 (Selenium RC)
7. Test Design Considerations
8. Selenium-Grid
9. User-Extensions
10. .NET Client Driver Configuration
11. Importing Sel2.0 Project into IntelliJ Using Maven
12. Selenium 1.0 Java Client Driver Configuration
13. Python Client Driver Configuration
14. Locating Techniques
15. Migrating From Selenium RC to Selenium WebDriver
CHAPTER ONE

NOTE TO THE READER – DOCS BEING REVISED FOR SELENIUM 2.0!

Hello, and welcome! The Documentation Team would like to welcome you, and to thank you for being interested in Selenium. We are very excited to promote Selenium and, hopefully, to expand its user community. In short, we really want to "get the word out" about Selenium. Why? We absolutely believe this is the best tool for web-application testing. We feel its extensibility and flexibility, along with its tight integration with the browser, is unmatched by available proprietary tools. We believe you will be similarly excited once you understand how Selenium approaches test automation. It's quite different from other automation tools.

Whether you are brand-new to Selenium, or have been using it for a while, we believe this documentation will truly help to spread the knowledge around. We have aimed our writing so that those completely new to test automation can use this document as a stepping stone; at the same time, we have included a number of advanced test design topics that should be interesting to the experienced software engineer. Experienced users and "newbies" alike will benefit from our Selenium User's Guide. In both cases we have written the "SelDocs" to help test engineers of all abilities to quickly become productive writing your own Selenium tests.

We are currently updating this document for the Selenium 2.0 release; as just mentioned, we are once again working hard on the new revision. This means we are currently writing and editing new material, and revising old material. Rather than withholding information until it's finally complete, we are frequently checking-in new writing and revisions as we go. Still, we do check our facts first and are confident the info we've submitted is accurate and useful. We have worked very, very hard on this document. While reading, you may experience typos or other minor errors. If so, please be patient with us. And if you find an error, particularly in one of our code examples, please let us know. You can send an email directly to the Selenium Developers forum ("selenium-developers" <selenium-developers@googlegroups.com>) with "Docs Error" in the subject line.

Thanks very much for reading.

– the Selenium Documentation Team
CHAPTER TWO

INTRODUCTION

2.1 Test Automation for Web Applications

Many, perhaps most, software applications today are written as web-based applications to be run in an Internet browser. The effectiveness of testing these applications varies widely among companies and organizations. In an era of highly interactive and responsive software processes where many organizations are using some form of Agile methodology, test automation is frequently becoming a requirement for software projects. Test automation is often the answer. Test automation means using a software tool to run repeatable tests against the application to be tested.

There are many advantages to test automation. There are a number of commercial and open source tools available for assisting with the development of test automation, and Selenium is possibly the most widely-used open source solution. This user's guide will assist both new and experienced Selenium users in learning effective techniques in building test automation for web applications. The guide introduces Selenium, teaches its features, and presents commonly used best practices accumulated from the Selenium community. Many examples are provided. Also, technical information on the internal structure of Selenium and recommended uses of Selenium are provided.

Test automation has specific advantages for improving the long-term efficiency of a software team's testing processes. Test automation supports:

• Frequent regression testing
• Rapid feedback to developers
• Virtually unlimited iterations of test case execution
• Support for Agile and extreme development methodologies
• Disciplined documentation of test cases
• Customized defect reporting
• Finding defects missed by manual testing

2.2 To Automate or Not to Automate?

Is automation always advantageous? When should one decide to automate test cases?
Most of the advantages of test automation are related to the repeatability of the tests and the speed at which the tests can be executed. Still, it is not always advantageous to automate test cases, and there are times when manual testing may be more appropriate. For instance, if the application's user interface will change considerably in the near future, then any automation might need to be rewritten anyway. Also, sometimes there simply is not enough time to build test automation; for the short term, manual testing may be more effective. If an application has a very tight deadline, there is currently no test automation available, and it's imperative that the testing get done within that time frame, then manual testing is the best solution.

2.3 Introducing Selenium

Selenium is a set of different software tools, each with a different approach to supporting test automation. Most Selenium QA Engineers focus on the one or two tools that most meet the needs of their project; however, learning all the tools will give you many different options for approaching different test automation problems. The entire suite of tools results in a rich set of testing functions specifically geared to the needs of testing of web applications of all types. These operations are highly flexible, allowing many options for locating UI elements and comparing expected test results against actual application behavior. One of Selenium's key features is the support for executing one's tests on multiple browser platforms.

2.4 Brief History of The Selenium Project

Selenium first came to life in 2004 when Jason Huggins was testing an internal application at ThoughtWorks. Being a smart guy, he realized there were better uses of his time than manually stepping through the same tests with every change he made. He developed a Javascript library that could drive interactions with the page, allowing him to automatically rerun tests against multiple browsers. That library eventually became Selenium Core, which underlies all the functionality of Selenium Remote Control (RC) and Selenium IDE. Selenium RC was ground-breaking because no other product allowed you to control a browser from a language of your choosing.

While Selenium was a tremendous tool, it wasn't without its drawbacks. Because of its Javascript-based automation engine and the security limitations browsers apply to Javascript, different things became impossible to do. To make things "worse", webapps became more and more powerful over time, using all sorts of special features new browsers provide and making these restrictions more and more painful.

In 2006 a plucky engineer at Google named Simon Stewart started work on a project he called WebDriver. Google had long been a heavy user of Selenium, but testers had to work around the limitations of the product. Simon wanted a testing tool that spoke directly to the browser using the 'native' method for the browser and operating system, thus avoiding the restrictions of a sandboxed Javascript environment. The WebDriver project began with the aim to solve the Selenium pain-points.

Jump to 2008. The Beijing Olympics mark China's arrival as a global power, massive mortgage default in the United States triggers the worst international recession since the Great Depression, and The Dark Knight is viewed by every human (twice), still reeling from the untimely loss of Heath Ledger. But the most important story of that year was the merging of Selenium and WebDriver. Selenium had massive community and commercial support, but WebDriver was clearly the tool of the future. The joining of the two tools provided a common set of features for all users and brought some of the brightest minds in test automation under one roof. Perhaps the best explanation for why WebDriver and Selenium are merging was detailed by Simon Stewart, the creator of WebDriver, in a joint email to the WebDriver and Selenium community on August 6, 2009:

"Why are the projects merging? Partly because webdriver addresses some shortcomings in selenium (by being able to bypass the JS sandbox, and we've got a gorgeous API), partly because selenium addresses some shortcomings in webdriver (such as supporting a broader range of browsers) and partly because the main selenium contributors and I felt that it was the best way to offer users the best possible framework."

2.5 Selenium's Tool Suite

Selenium is composed of multiple software tools. Each has a specific role.

2.5.1 Selenium 2 (aka. Selenium WebDriver)

Selenium 2 is the future direction of the project and the newest addition to the Selenium toolkit. This brand new automation tool provides all sorts of awesome features, including a more cohesive and object oriented API as well as an answer to the limitations of the old implementation. It supports the WebDriver API and underlying technology, along with the Selenium 1 technology underneath the WebDriver API for maximum flexibility in porting your tests. In addition, Selenium 2 still runs Selenium 1's Selenium RC interface for backwards compatibility.

2.5.2 Selenium 1 (aka. Selenium RC or Remote Control)

As you can read in Brief History of The Selenium Project, Selenium RC was the main Selenium project for a long time, before the WebDriver/Selenium merge brought up Selenium 2, the newest and more powerful tool. Selenium 1 is still actively supported (mostly in maintenance mode) and provides some features that may not be available in Selenium 2 for a while, including support for several languages (Java, Javascript, PHP, Python, Ruby, Perl and C#) and support for almost every browser out there.

2.5.3 Selenium IDE

Selenium IDE (Integrated Development Environment) is a prototyping tool for building test scripts. It is a Firefox plugin and provides an easy-to-use interface for developing automated tests. Selenium IDE has a recording feature, which records user actions as they are performed and then exports them as a reusable script in one of many programming languages that can be later executed.

Note: Even though Selenium IDE has a "Save" feature that allows users to keep the tests in a table-based format for later import and execution, it is not designed to run your test passes nor is it designed to build all the automated tests you will need. Specifically, Selenium IDE doesn't provide iteration or conditional statements for test scripts, and at the time of writing there is no plan to add such a thing. The reasons are partly technical and partly based on the Selenium developers encouraging best practices in test automation, which always requires some amount of programming. Selenium IDE is simply intended as a rapid prototyping tool. The Selenium developers recommend, for serious, robust test automation, either Selenium 2 or Selenium 1 to be used with one of the many supported programming languages.
2.5.4 Selenium-Grid

Selenium-Grid allows the Selenium RC solution to scale for large test suites and for test suites that must be run in multiple environments. Selenium Grid allows you to run your tests in parallel; that is, different tests can be run at the same time on different remote machines. This has two advantages. First, if you have a large test suite, or a slow-running test suite, you can boost its performance substantially by using Selenium Grid to divide your test suite to run different tests at the same time using those different machines. Also, if you must run your test suite on multiple environments you can have different remote machines supporting and running your tests in them at the same time. In each case Selenium Grid greatly improves the time it takes to run your suite by making use of parallel processing.

2.6 Choosing Your Selenium Tool

Many people get started with Selenium IDE. If you are not already experienced with a programming or scripting language you can use Selenium IDE to get familiar with Selenium commands. Using the IDE you can create simple tests quickly, sometimes within seconds. We don't, however, recommend you do all your test automation using Selenium IDE. To effectively use Selenium you will need to build and run your tests using either Selenium 2 or Selenium 1 in conjunction with one of the supported programming languages. Which one you choose depends on you. At the time of writing the Selenium developers are planning on the Selenium-WebDriver API being the future direction for Selenium; Selenium 1 is provided for backwards compatibility. Still, both have strengths and weaknesses which are discussed in the corresponding chapters of this document, and we recommend those who are completely new to Selenium to read through these sections. Also, for those who are adopting Selenium for the first time, and therefore building a new test suite from scratch, you will probably want to go with Selenium 2 since this is the portion of Selenium that will continue to be supported in the future.

2.7 Supported Browsers

IMPORTANT: This list was for Sel 1.0. It requires updating for Sel 2.0 – we will do that very soon.

[Table: for each supported browser – Firefox 3.x among others – the level of Selenium IDE and Selenium 1 (RC) support ("Start browser, run tests", or "Partial support possible**" for other browsers), and the operating systems supported (Windows, Linux and/or Mac, as applicable).]

* Tests developed on Firefox via Selenium IDE can be executed on any other supported browser via a simple Selenium RC command line.
** Selenium RC server can start any executable, but depending on browser security settings there may be technical limitations that would limit certain features.

2.8 Flexibility and Extensibility

You'll find that Selenium is highly flexible. There are many ways you can add functionality to both Selenium test scripts and Selenium's framework to customize your test automation. This is perhaps Selenium's greatest strength when compared with other automation tools. These customizations are described in various places throughout this document. In addition, since Selenium is Open Source, the sourcecode can always be downloaded and modified.

2.9 What's in this Book?

This user's guide targets both new users and those who have already used Selenium but are seeking additional knowledge. We introduce Selenium to new users and we do not assume prior Selenium experience. We do assume, however, that the user has at least a basic understanding of test automation. For the more experienced user, this guide can act as a reference; we recommend browsing the chapters and subheadings. We've provided information on the Selenium architecture, examples of common usage, and a chapter on test design techniques.

The remaining chapters of the reference present:

Selenium IDE – Introduces Selenium IDE and describes how to use it to build test scripts using the Selenium Integrated Development Environment. If you are not experienced in programming, but still hoping to learn test automation, this is where you should start, and you'll find you can create quite a few automated tests with Selenium IDE. Also, if you are experienced in programming, this chapter may still interest you in that you can use Selenium IDE to do rapid prototyping of your tests. This section also demonstrates how your test script can be "exported" to a programming language for adding more advanced capabilities not supported by Selenium IDE.

Selenium 1 – Explains how to develop an automated test program using the Selenium RC API. Many examples are presented in both programming languages and scripting languages. Also, the installation and setup of Selenium RC is covered here. The various modes, or configurations, that Selenium RC supports are described, along with their trade-offs and limitations. An architecture diagram is provided to help illustrate these points. Solutions to common problems frequently difficult for new Sel-R users are described here, for instance, handling Security Certificates, https requests, pop-ups, and the opening of new windows.

Selenium 2 – Explains how to develop an automated test program using Selenium 2.

Test Design Considerations – This chapter presents programming techniques for use with Selenium-WebDriver and Selenium RC. We also demonstrate techniques commonly asked about in the user forum, such as how to design setup and teardown functions, how to implement data-driven tests (tests where one varies the data between test passes) and other methods of programming common test automation tasks.

Selenium-Grid – This chapter is not yet developed.

User extensions – Describes ways that Selenium can be modified, extended and customized.

2.10 The Documentation Team – Authors Past and Present

In alphabetical order, the following people have made significant contributions to the authoring of this user's guide, or to our publishing infrastructure, or both.

• Dave Hunt
• Mary Ann May-Pumphrey
• Noah Sussman
• Paul Grandjean
• Peter Newhook
• Santiago Suarez Ordonez
• Tarun Kumar

2.10.1 Acknowledgements

A huge special thanks goes to Patrick Lightbody, creator of Selenium RC. As an administrator of the SeleniumHQ website, his support was invaluable when writing the original user's guide, and Patrick helped us understand our audience. He also set us up with everything we needed on the seleniumhq.org website for publishing the documents. Also thanks goes to Andras Hatvani for his advice on publishing solutions, and to Amit Kumar for participating in our discussions and for assisting with reviewing the document.

And of course, we must recognize the Selenium Developers. They have truly designed an amazing tool. Without the vision of the original designers, and the continued efforts of the current developers, we would not have such a great tool to pass on to you.

CHAPTER THREE

SELENIUM-IDE

3.1 Introduction

The Selenium-IDE (Integrated Development Environment) is the tool you use to develop your Selenium test cases. It's an easy-to-use Firefox plug-in and is generally the most efficient way to develop test cases. This is not only a time-saver, but also an excellent way of learning Selenium script syntax. This chapter is all about the Selenium IDE and how to use it effectively.

3.2 Installing the IDE

Using Firefox, first download the IDE from the SeleniumHQ downloads page. Firefox will protect you from installing addons from unfamiliar locations, so you will need to click 'Allow' to proceed with the installation. When downloading from Firefox, you'll be presented with an installation window; select Install Now. The Firefox Add-ons window pops up, first showing a progress bar, and then, when the download is complete, the installed add-on. Restart Firefox. After Firefox reboots you will find the Selenium-IDE listed under the Firefox Tools menu.

3.3 Opening the IDE

To run the Selenium-IDE, simply select it from the Firefox Tools menu. It opens with an empty script-editing window and a menu for loading, saving, or creating new test cases.

3.4 IDE Features

3.4.1 Menu Bar

The File menu allows you to create, open, and save test case and test suite files. The Edit menu allows copy, paste, delete, undo, and select all operations for editing the commands in your test case. The Options menu allows the changing of settings: you can set the timeout value for certain commands, add user-defined user extensions to the base set of Selenium commands, and specify the format (language) used when saving your test cases. The Help menu is the standard Firefox Help menu; only one item on this menu – UI-Element Documentation – pertains to Selenium-IDE.

3.4.2 Toolbar

The toolbar contains buttons for controlling the execution of your test cases, including a step feature for debugging your test cases.

Speed Control: controls how fast your test case runs.
Run All: runs the entire test suite when a test suite with multiple test cases is loaded.
Run: runs the currently selected test. When only a single test is loaded this button and the Run All button have the same effect.
Pause/Resume: allows stopping and re-starting of a running test case.
Step: allows you to "step" through a test case by running it one command at a time. Use for debugging test cases.
TestRunner Mode: allows you to run the test case in a browser loaded with the Selenium-Core TestRunner. The TestRunner is not commonly used now and is likely to be deprecated. This button is for evaluating test cases for backwards compatibility with the TestRunner. Most users will probably not need this button.
Apply Rollup Rules: this advanced feature allows repetitive sequences of Selenium commands to be grouped into a single action. Detailed documentation on rollup rules can be found in the UI-Element Documentation on the Help menu.
Record: the right-most button, the one with the red dot, records the user's browser actions.

3.4.3 Test Case Pane

Your script is displayed in the test case pane. It has two tabs: one for displaying the commands and their parameters in a readable "table" format, and the other tab – Source – which displays the test case in the native format in which the file will be stored. By default, this is HTML, although it can be changed to a programming language such as Java or C#, or a scripting language like Python. See the Options menu for details. The Source view also allows one to edit the test case in its raw form, including copy, cut and paste operations.

The Command, Target, and Value entry fields display the currently selected command along with its parameters. These are entry fields where you can modify the currently selected command. The first parameter specified for a command in the Reference tab of the bottom pane always goes in the Target field; if a second parameter is specified by the Reference tab, it always goes in the Value field. If you start typing in the Command field, a drop-down list will be populated based on the first characters you type; you can then select your desired command from the drop-down.

3.4.4 Log/Reference/UI-Element/Rollup Pane

The bottom pane is used for four different functions – Log, Reference, UI-Element, and Rollup – depending on which tab is selected.

Log: when you run your test case, error messages and information messages showing the progress are displayed in this pane automatically, even if you do not first select the Log tab. These messages are often useful for test case debugging. Notice the Clear button for clearing the Log; also notice the Info button is a drop-down allowing selection of different levels of information to log.

Reference: the Reference tab is the default selection whenever you are entering or modifying Selenese commands and parameters in Table mode. In Table mode, the Reference pane will display documentation on the current command. When entering or modifying commands, whether from Table or Source mode, it is critically important to ensure that the parameters specified in the Target and Value fields match those specified in the parameter list in the Reference pane. The number of parameters provided must match the number specified, the order of parameters provided must match the order specified, and the type of parameters provided must match the type specified. If there is a mismatch in any of these three areas, the command will not run correctly. While the Reference tab is invaluable as a quick reference, it is still often necessary to consult the Selenium Reference document.

3.5 Building Test Cases

There are three primary methods for developing test cases.

3.5.1 Recording

Many first-time users begin by recording a test case from their interactions with a website. During recording, Selenium-IDE will automatically insert commands into your test case based on your actions, for example selecting options from a drop-down listbox (the select command) or clicking checkboxes or radio buttons. Be aware that without an appropriate wait after a page navigation, your test case will continue running commands before the page has loaded all its UI elements.

3.5.2 Adding Verifications and Asserts With the Context Menu

Your test cases will also need to check the properties of a web page. This requires assert and verify commands. We won't describe the specifics of these commands here; that is in the chapter on Selenium Commands – "Selenese". Here we'll simply describe how to add them to your test case. Selenium-IDE will attempt to predict what command, along with the parameters, you will need for a selected UI element on the current web page. A paragraph or a heading will work fine.
Speed Control: controls how fast your test case runs.

Run All: Runs the entire test suite when a test suite with multiple test cases is loaded.

Run: Runs the currently selected test. When only a single test is loaded this button and the Run All button have the same effect.

Pause/Resume: Allows stopping and re-starting of a running test case.

Step: Allows you to "step" through a test case by running it one command at a time. Use for debugging test cases.

TestRunner Mode: Allows you to run the test case in a browser loaded with the Selenium-Core TestRunner. The TestRunner is not commonly used now and is likely to be deprecated. This button is for evaluating test cases for backwards compatibility with the TestRunner. Most users will probably not need this button.

Apply Rollup Rules: This advanced feature allows repetitive sequences of Selenium commands to be grouped into a single action. Detailed documentation on rollup rules can be found in the UI-Element Documentation on the Help menu.

Record: Records the user's browser actions.

3.4.3 Test Case Pane

Your script is displayed in the test case pane. It has two tabs, one for displaying the command and their parameters in a readable "table" format. The other tab–Source–displays the test case in the native format in which the file will be stored. By default, this is HTML although it can be changed to a programming language such as Java or C#, or a scripting language like Python. See the Options menu for details. The Source view also allows one to edit the test case in its raw form, including copy, cut and paste operations.

The Command, Target, and Value entry fields display the currently selected command along with its parameters. These are entry fields where you can modify the currently selected command. The first parameter specified for a command in the Reference tab of the bottom pane always goes in the Target field. If a second parameter is specified by the Reference tab, it always goes in the Value field.
If you start typing in the Command field, a drop-down list will be populated based on the first characters you type; you can then select your desired command from the drop-down.

3.4.4 Log/Reference/UI-Element/Rollup Pane

The bottom pane is used for four different functions–Log, Reference, UI-Element, and Rollup–depending on which tab is selected.

Log

When you run your test case, error messages and information messages showing the progress are displayed in this pane automatically, even if you do not first select the Log tab. These messages are often useful for test case debugging. Notice the Clear button for clearing the Log. Also notice the Info button is a drop-down allowing selection of different levels of information to log.

Reference

The Reference tab is the default selection whenever you are entering or modifying Selenese commands and parameters in Table mode. In Table mode, the Reference pane will display documentation on the current command. While the Reference tab is invaluable as a quick reference, it is still often necessary to consult the Selenium Reference document.

When entering or modifying commands, whether from Table or Source mode, it is critically important to ensure that the parameters specified in the Target and Value fields match those specified in the parameter list in the Reference pane. The number of parameters provided must match the number specified, the order of parameters provided must match the order specified, and the type of parameters provided must match the type specified. If there is a mismatch in any of these three areas, the command will not run correctly.
UI-Element and Rollup

Detailed information on these two panes (which cover advanced features) can be found in the UI-Element Documentation on the Help menu of Selenium-IDE.

3.5 Building Test Cases

There are three primary methods for developing test cases. Frequently, a test developer will require all three techniques.

3.5.1 Recording

Many first-time users begin by recording a test case from their interactions with a website. When Selenium-IDE is first opened, the record button is ON by default. If you do not want Selenium-IDE to begin recording automatically you can turn this off by going under Options > Options... and deselecting "Start recording immediately on open."

During recording, Selenium-IDE will automatically insert commands into your test case based on your actions. Typically, this will include:

• clicking a link - click or clickAndWait commands
• entering values - type command
• selecting options from a drop-down listbox - select command
• clicking checkboxes or radio buttons - click command

Here are some "gotchas" to be aware of:

• The type command may require clicking on some other area of the web page for it to record.
• Following a link usually records a click command. You will often need to change this to clickAndWait to ensure your test case pauses until the new page is completely loaded. Otherwise, your test case will continue running commands before the page has loaded all its UI elements. This will cause unexpected test case failures.

3.5.2 Adding Verifications and Asserts With the Context Menu

Your test cases will also need to check the properties of a web-page. This requires assert and verify commands. We won't describe the specifics of these commands here; that is in the chapter on Selenium Commands – "Selenese". Here we'll simply describe how to add them to your test case.

With Selenium-IDE recording, go to the browser displaying your test application and right click anywhere on the page. You will see a context menu showing verify and/or assert commands.

The first time you use Selenium, there may only be one Selenium command listed. As you use the IDE however, you will find additional commands will quickly be added to this menu. Selenium-IDE will attempt to predict what command, along with the parameters, you will need for a selected UI element on the current web-page.

Let's see how this works. Open a web-page of your choosing and select a block of text on the page. A paragraph or a heading will work fine. Now, right-click the selected text. The context menu should give you a verifyTextPresent command and the suggested parameter should be the text itself.

Also, notice the Show All Available Commands menu option. This shows many, many more commands, along with suggested parameters, for testing your currently selected UI element.

Try a few more UI elements. Try right-clicking an image, or a user control like a button or a checkbox. You may need to use Show All Available Commands to see options other than verifyTextPresent. Once you select these other options, the more commonly used ones will show up on the primary context menu. For example, selecting verifyElementPresent for an image should later cause that command to be available on the primary context menu the next time you select an image and right-click.

Again, these commands will be explained in detail in the chapter on Selenium commands. For now though, feel free to use the IDE to record and select commands into a test case and then run it. You can learn a lot about the Selenium commands simply by experimenting with the IDE.

3.5.3 Editing

Insert Command

Table View

Select the point in your test case where you want to insert the command. To do this, in the Test Case Pane, left-click on the line where you want to insert a new command. Right-click and select Insert Command; the IDE will add a blank line just ahead of the line you selected. Now use the command editing text fields to enter your new command and its parameters.

Source View

Select the point in your test case where you want to insert the command. To do this, in the Test Case Pane, left-click between the commands where you want to insert a new command, and enter the HTML tags needed to create a 3-column row containing the Command, first parameter (if one is required by the Command), and second parameter (again, if one is required). Be sure to save your test before switching back to Table view.

Insert Comment

Comments may be added to make your test case more readable. These comments are ignored when the test case is run.

Table View

Select the line in your test case where you want to insert the comment. Right-click and select Insert Comment. Now use the Command field to enter the comment. Your comment will appear in purple font.

Source View

Select the point in your test case where you want to insert the comment. Add an HTML-style comment, i.e., <!-- your comment here -->.
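Putting the Source View instructions together, here is a sketch of what a single command row plus a comment look like in the raw HTML format (the command and its parameter values here are hypothetical, chosen only to illustrate the three-cell layout):

```html
<!-- log in with a valid user -->
<tr>
    <td>type</td>
    <td>id=login</td>
    <td>testuser</td>
</tr>
```

Each row carries exactly three cells; when a command takes fewer than two parameters, the unused cells are left empty but still present.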
Comments may also be used to add vertical white space (one or more blank lines) in your tests; to do this, just create empty comments. An empty command will cause an error during execution, but an empty comment won't.

Edit a Command or Comment

Table View

Simply select the line to be changed and edit it using the Command, Target, and Value fields.

Source View

Since Source view provides the equivalent of a WYSIWYG editor, simply modify which line you wish–command, parameter, or comment.

3.5.4 Opening and Saving a Test Case

Like most programs, there are Save and Open commands under the File menu. However, Selenium distinguishes between test cases and test suites. To save your Selenium-IDE tests for later use you can either save the individual test cases, or save the test suite. If the test cases of your test suite have not been saved, you'll be prompted to save them before saving the test suite. When you open an existing test case or suite, Selenium-IDE displays its Selenium commands in the Test Case Pane.

3.6 Running Test Cases

The IDE allows many options for running your test case. You can run a test case all at once, stop and start it, run it one line at a time, run a single command you are currently developing, and you can do a batch run of an entire test suite. Execution of test cases is very flexible in the IDE.

Run a Test Case

Click the Run button to run the currently displayed test case.

Run a Test Suite

Click the Run All button to run all the test cases in the currently loaded test suite.

Stop and Start

The Pause button can be used to stop the test case while it is running. The icon of this button then changes to indicate the Resume button. To continue click Resume.

Stop in the Middle

You can set a breakpoint in the test case to cause it to stop on a particular command. This is useful for debugging your test case. To set a breakpoint, select a command, right-click, and from the context menu select Toggle Breakpoint.

Start from the Middle

You can tell the IDE to begin running from a specific command in the middle of the test case. This also is used for debugging. To set a startpoint, select a command, right-click, and from the context menu select Set/Clear Start Point.

Run Any Single Command

Double-click any single command to run it by itself. This is useful when writing a single command, when you are not sure if it is correct. It lets you immediately test a command you are constructing. You can double-click it to see if it runs correctly. This is also available from the context menu.

3.7 Using Base URL to Run Test Cases in Different Domains

The Base URL field at the top of the Selenium-IDE window is very useful for allowing test cases to be run across different domains. Suppose that a site named http://news.portal.com had an in-house beta site named http://beta.news.portal.com.
Any test cases for these sites that begin with an open statement should specify a relative URL as the argument to open, rather than an absolute URL (one starting with a protocol such as http: or https:). Selenium-IDE will then create an absolute URL by appending the open command's argument onto the end of the value of Base URL. For example, with a Base URL of http://news.portal.com, the test case below would be run against http://news.portal.com/about.html:

open | /about.html |

This same test case with a modified Base URL setting of http://beta.news.portal.com would be run against http://beta.news.portal.com/about.html.

3.8 Selenium Commands – "Selenese"

Selenium commands, often called selenese, are the set of commands that run your tests. A sequence of these commands is a test script. Here we explain those commands in detail, and we present the many choices you have in testing your web application when using Selenium.

Selenium provides a rich set of commands for fully testing your web-app in virtually any way you can imagine. The command set is often called selenese. These commands essentially create a testing language.

In selenese, one can test the existence of UI elements based on their HTML tags, test for specific content, test for broken links, input fields, selection list options, submitting forms, and table data among other things. In addition Selenium commands support testing of window size, mouse position, alerts, Ajax functionality, pop up windows, event handling, and many other web-application features.

Selenium commands come in three "flavors": Actions, Accessors and Assertions.

• Actions are commands that generally manipulate the state of the application. They do things like "click this link" and "select that option". If an Action fails, or has an error, the execution of the current test is stopped.
Many Actions can be called with the "AndWait" suffix, e.g. "clickAndWait". This suffix tells Selenium that the action will cause the browser to make a call to the server, and that Selenium should wait for a new page to load.

• Accessors examine the state of the application and store the results in variables, e.g. "storeTitle". They are also used to automatically generate Assertions.

• Assertions are like Accessors, but they verify that the state of the application conforms to what is expected. Examples include "make sure the page title is X" and "verify that this checkbox is checked".

3.9 Script Syntax

Selenium commands are simple; they consist of the command and two parameters. For example:

verifyText | //div//a[2] | Login

The parameters are not always required; it depends on the command. In some cases both are required, in others one parameter is required, and in still others the command may take no parameters at all. Here are a couple more examples:

goBackAndWait | |
verifyTextPresent | Welcome to My Home Page |
type | id=phone | (555) 666-7066
type | id=address1 | ${myVariableAddress}

The command reference describes the parameter requirements for each command.

Parameters vary, however they are typically:

• a locator for identifying a UI element within a page.
• a text pattern for verifying or asserting expected page content.
• a text pattern or a selenium variable for entering text in an input field or for selecting an option from an option list.

Selenium scripts that will be run from Selenium-IDE will be stored in an HTML text file format. This consists of an HTML table with three columns. The first column identifies the Selenium command, the second is a target, and the final column contains a value. The second and third columns may not require values depending on the chosen Selenium command, but they should be present. Each table row represents a new Selenium command. Locators, text patterns, selenium variables,
and the commands themselves are described in considerable detail in the section on Selenium Commands.

All Selenium Assertions can be used in 3 modes: "assert", "verify", and "waitFor". For example, you can "assertText", "verifyText" and "waitForText". When an "assert" fails, the test is aborted. When a "verify" fails, the test will continue execution, logging the failure. This allows a single "assert" to ensure that the application is on the correct page.

"waitFor" commands wait for some condition to become true (which can be useful for testing Ajax applications). They will succeed immediately if the condition is already true. However, they will fail and halt the test if the condition does not become true within the current timeout setting (see the setTimeout action below).

Here is an example of a test that opens a page, asserts the page title and then verifies some content on the page:

open | /download/ |
assertTitle | Downloads |
verifyText | //h2 | Downloads

3.10 Test Suites

A test suite is a collection of tests. Often one will run all the tests in a test suite as one continuous batch-job.

When using Selenium-IDE, test suites also can be defined using a simple HTML file. The syntax again is simple. An HTML table defines a list of tests where each row defines the filesystem path to each test. An example tells it all.

<html>
<head>
<title>Test Suite Function Tests - Priority 1</title>
</head>
<body>
<table>
  <tr><td><b>Suite Of Tests</b></td></tr>
  <tr><td><a href="./Login.html">Login</a></td></tr>
  <tr><td><a href="./SearchValues.html">Test Searching for Values</a></td></tr>
  <tr><td><a href="./SaveValues.html">Test Save</a></td></tr>
</table>
</body>
</html>

A file similar to this would allow running the tests all at once, one after another, from the Selenium-IDE.

Test suites can also be maintained when using Selenium-RC. This is done via programming and can be done a number of ways. Commonly Junit is used to maintain a test suite if one is using Selenium-RC with Java. Additionally, if C# is the chosen language, Nunit could be employed. If using an interpreted language like Python with Selenium-RC then some simple programming would be involved in setting up a test suite. Since the whole reason for using Sel-RC is to make use of programming logic for your testing this usually isn't a problem.

3.11 Commonly Used Selenium Commands

To conclude our introduction of Selenium, we'll show you a few typical Selenium commands. These are probably the most commonly used commands for building tests. With a basic knowledge of selenese and Selenium-IDE you can quickly produce and run testcases.
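Because the suite file shown in the Test Suites section is ordinary HTML, it can be inspected with standard tooling. As a rough illustration (Python standard library only; the parser class is hypothetical and not part of Selenium), the test-case paths can be extracted like this:

```python
# Sketch: extract test-case paths from a Selenium-IDE style
# test-suite file, which is simply an HTML table of links.
from html.parser import HTMLParser

class SuiteParser(HTMLParser):
    """Collect the href attribute of every link in the suite table."""
    def __init__(self):
        super().__init__()
        self.tests = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.tests.append(value)

suite_html = """
<html><body><table>
<tr><td><b>Suite Of Tests</b></td></tr>
<tr><td><a href="./Login.html">Login</a></td></tr>
<tr><td><a href="./SearchValues.html">Test Searching for Values</a></td></tr>
<tr><td><a href="./SaveValues.html">Test Save</a></td></tr>
</table></body></html>
"""

parser = SuiteParser()
parser.feed(suite_html)
print(parser.tests)  # the three relative test-case paths, in order
```

A runner built on Selenium-RC could walk such a list and execute each test case in turn, which is essentially what the IDE's Run All button does.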
open
opens a page using a URL.

click/clickAndWait
performs a click operation, and optionally waits for a new page to load.

verifyTitle/assertTitle
verifies an expected page title.

verifyTextPresent
verifies expected text is somewhere on the page.

verifyElementPresent
verifies an expected UI element, as defined by its HTML tag, is present on the page.

verifyText
verifies expected text and its corresponding HTML tag are present on the page.

verifyTable
verifies a table's expected contents.

waitForPageToLoad
pauses execution until an expected new page loads. Called automatically when clickAndWait is used.

waitForElementPresent
pauses execution until an expected UI element, as defined by its HTML tag, is present on the page.

3.12 Verifying Page Elements

Verifying UI elements on a web page is probably the most common feature of your automated tests. Selenese allows multiple ways of checking for UI elements. It is important that you understand these different methods because these methods define what you are actually testing. For example, will you test that...

1. an element is present somewhere on the page?
2. specific text is somewhere on the page?
3. specific text is at a specific location on the page?

For example, if you are testing a text heading, the text and its position at the top of the page are probably relevant for your test. If, however, you are testing for the existence of an image on the home page, and the web designers frequently change the specific image file along with its position on the page, then you only want to test that an image (as opposed to the specific image file) exists somewhere on the page.

3.13 Assertion or Verification?

Choosing between "assert" and "verify" comes down to convenience and management of failures. There's very little point checking that the first paragraph on the page is the correct one if your test has already failed when checking that the browser is displaying the expected page. If you're not on the correct page,
you'll probably want to abort your test case so that you can investigate the cause and fix the issue(s) promptly. On the other hand, you may want to check many attributes of a page without aborting the test case on the first failure, as this will allow you to review all failures on the page and take the appropriate action. Effectively an "assert" will fail the test and abort the current test case, whereas a "verify" will fail the test and continue to run the test case.

The best use of this feature is to logically group your test commands, and start each group with an "assert" followed by one or more "verify" test commands. An example follows:

open | /download/ |
assertTitle | Downloads |
verifyText | //h2 | Downloads
assertTable | 1.2.1 | Selenium IDE
verifyTable | 1.2.2 | June 3, 2008
verifyTable | 1.2.3 | 1.0 beta 2

The above example first opens a page and then "asserts" that the correct page is loaded by comparing the title with the expected value. Only if this passes will the following command run and "verify" that the text is present in the expected location. The test case then "asserts" the first column in the second row of the first table contains the expected value, and only if this passed will the remaining cells in that row be "verified".

3.13.1 verifyTextPresent

The command verifyTextPresent is used to verify specific text exists somewhere on the page. It takes a single argument–the text pattern to be verified. For example:

verifyTextPresent | Marketing Analysis |

This would cause Selenium to search for, and verify, that the text string "Marketing Analysis" appears somewhere on the page currently being tested. Use verifyTextPresent when you are interested in only the text itself being present on the page. Do not use this when you also need to test where the text occurs on the page.

3.13.2 verifyElementPresent

Use this command when you must test for the presence of a specific UI element, rather than its content. This verification does not check the text, only the HTML tag. One common use is to check for the presence of an image.

verifyElementPresent | //div/p/img |

This command verifies that an image, specified by the existence of an <img> HTML tag, is present on the page, and that it follows a <div> tag and a <p> tag.
The first (and only) parameter is a locator for telling the Selenese command how to find the element. Locators are explained in the next section.

verifyElementPresent can be used to check the existence of any HTML tag within the page. You can check the existence of links, paragraphs, divisions <div>, etc.

3.13.3 verifyText

Use verifyText when both the text and its UI element must be tested. verifyText must use a locator. If you choose an XPath or DOM locator, you can verify that specific text appears at a specific location on the page relative to other UI components on the page.

verifyText | //table/tr/td/div/p | This is my text and it occurs right after the div inside the table.

3.14 Locating Elements

For many Selenium commands, a target is required. This target identifies an element in the content of the web application, and consists of the location strategy followed by the location in the format locatorType=location. The locator type can be omitted in many cases. The various locator types are explained below with examples for each.

3.14.1 Locating by Identifier

This is probably the most common method of locating elements and is the catch-all default when no recognized locator type is used. With this strategy, the first element with the id attribute value matching the location will be used. If no element has a matching id attribute, then the first element with a name attribute matching the location will be used.

Given a page whose source numbers a login form on line 3, a password field on line 5, and a continue button on line 6, the following locators would work (the line each one matches is shown in parentheses):

• identifier=loginForm (3)
• identifier=password (5)
• identifier=continue (6)
• continue (6)

Since the identifier type of locator is the default, the identifier= in the first three examples above is not necessary.
Locating by Id

This type of locator is more limited than the identifier locator type, but also more explicit. Use this when you know an element's id attribute.

Locating by Name

The name locator type will locate the first element with a matching name attribute. If multiple elements have the same value for a name attribute, then you can use filters to further refine your location strategy. The default filter type is value (matching the value attribute).

Note: Unlike some types of XPath and DOM locators, the three types of locators above allow Selenium to test a UI element independent of its location on the page. So if the page structure and organization is altered, the test will still pass. You may or may not want to also test whether the page structure changes. In the case where web designers frequently alter the page, but its functionality must be regression tested, testing via id and name attributes, or really via any HTML property, becomes very important.

Locating by XPath

XPath is the language used for locating nodes in an XML document. As HTML can be an implementation of XML (XHTML), Selenium users can leverage this powerful language to target elements in their web applications. XPath extends beyond (as well as supporting) the simple methods of locating by id or name attributes, and opens up all sorts of new possibilities.
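Selenium evaluates these locators inside the browser, but the idea behind id=, name=, and //xpath lookups can be sketched offline with Python's xml.etree.ElementTree, which understands a small XPath subset. The page snippet and attribute values below are hypothetical:

```python
# Sketch of how id=, name=, and path-style locators conceptually resolve.
# ElementTree is NOT what Selenium uses; it merely illustrates the lookups.
import xml.etree.ElementTree as ET

page = ET.fromstring("""
<body>
  <form name="loginForm">
    <input id="username" name="login" type="text" />
    <input id="continue" name="continue" type="submit" value="Continue" />
  </form>
  <div><p><img src="photo.png" /></p></div>
</body>
""")

# id=continue  -> first element whose id attribute matches
by_id = page.find(".//*[@id='continue']")

# name=continue -> first element whose name attribute matches
by_name = page.find(".//*[@name='continue']")

# //div/p/img -> an XPath-style path, like the verifyElementPresent example
by_path = page.find(".//div/p/img")

print(by_id.get("value"), by_name is by_id, by_path.get("src"))
```

Note how the id and name strategies here happen to resolve to the same element, while the path locator pins the image to its position in the document structure.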
• Locators starting with "//" will use the XPath locator strategy. See Locating by XPath.
• Locators starting with "document" will use the DOM locator strategy. See Locating by DOM.

3.15 Matching Text Patterns

Like locators, patterns are a type of parameter frequently required by Selenese commands. Examples of commands which require patterns are verifyTextPresent, verifyTitle, verifyAlert, assertConfirmation, verifyText, and verifyPrompt. And as has been mentioned above, link locators can utilize a pattern. Patterns allow you to describe, via the use of special characters, what text is expected rather than having to specify that text exactly.

There are three types of patterns: globbing, regular expressions, and exact.

Globbing Patterns

Most people are familiar with globbing as it is utilized in filename expansion at a DOS or Unix/Linux command line such as ls *.c. In this case, globbing is used to display all the files ending with a .c extension that exist in the current directory.

Globbing is fairly limited. Only two special characters are supported in the Selenium implementation:

* which translates to "match anything," i.e., nothing, a single character, or many characters.

[ ] (character class) which translates to "match any single character found inside the square brackets." A dash (hyphen) can be used as a shorthand to specify a range of characters (which are contiguous in the ASCII character set). A few examples will make the functionality of a character class clear:

[aeiou] matches any lowercase vowel
[0-9] matches any digit
[a-zA-Z0-9] matches any alphanumeric character

In most other contexts, globbing includes a third special character, the ?. However, Selenium globbing patterns only support the asterisk and character class.

To specify a globbing pattern parameter for a Selenese command, you can prefix the pattern with a glob: label. However, because globbing patterns are the default, you can also omit the label and specify just the pattern itself.

Below is an example of two commands that use globbing patterns. The actual link text on the page being tested was "Film/Television Department"; by using a pattern rather than the exact text, the click command will work even if the link text is changed to "Film & Television Department" or "Film and Television Department". The glob pattern's asterisk will match "anything or nothing" between the word "Film" and the word "Television".

click | link=glob:Film*Television Department |
verifyTitle | glob:*Film*Television* |

The actual title of the page reached by clicking on the link was "De Anza Film And Television Department - Menu". By using a pattern rather than the exact text, the verifyTitle will pass as long as the two words "Film" and "Television" appear (in that order) anywhere in the page's title. For example, if the page's owner should shorten the title to just "Film & Television Department," the test would still pass. Using a pattern for both a link and a simple test that the link worked (such as the verifyTitle above does) can greatly reduce the maintenance for such test cases.
Regular Expression Patterns

Regular expression patterns are the most powerful of the three types of patterns that Selenese supports. Regular expressions are also supported by most high-level programming languages, many text editors, and a host of tools, including the Linux/Unix command-line utilities grep, sed, and awk. In Selenese, regular expression patterns allow a user to perform many tasks that would be very difficult otherwise. For example, suppose your test needed to ensure that a particular table cell contained nothing but a number. regexp:[0-9]+ is a simple pattern that will match a decimal number of any length.

Whereas Selenese globbing patterns support only the * and [ ] (character class) features, Selenese regular expression patterns offer the same wide array of special characters that exist in JavaScript. Below are a subset of those special characters:

PATTERN | MATCH
. | any single character
[ ] | character class: any single character that appears inside the brackets
* | quantifier: 0 or more of the preceding pattern
+ | quantifier: 1 or more of the preceding pattern
? | quantifier: 0 or 1 of the preceding pattern
{1,5} | quantifier: 1 through 5 of the preceding pattern

A few examples will help clarify how regular expression patterns can be used with Selenese commands. The first one uses what is probably the most commonly used regular expression pattern–.* ("dot star"). This two-character sequence can be translated as "0 or more occurrences of any character" or more simply, "anything or nothing." It is the equivalent of the one-character globbing pattern * (a single asterisk).

click | link=regexp:Film.*Television Department |
verifyTitle | regexp:.*Film.*Television.* |

The example above is functionally equivalent to the earlier example that used globbing patterns for this same test. The only differences are the prefix (regexp: instead of glob:) and the "anything or nothing" pattern (.* instead of just *).

The more complex example below tests that the Yahoo! Weather page for Anchorage, Alaska contains info on the sunrise time:

open | http://weather.yahoo.com/forecast/USAK0012.html |
verifyTextPresent | regexp:Sunrise: *[0-9]{1,2}:[0-9]{2} [ap]m |

Let's examine the regular expression above one part at a time:

Sunrise: * | The string Sunrise: followed by 0 or more spaces
[0-9]{1,2} | 1 or 2 digits (for the hour)
: | The colon
[0-9]{2} | 2 digits (for the minutes)
[ap]m | "a" or "p" followed by "m" (am or pm)
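Since Selenese regexp: patterns use JavaScript-style syntax, the sunrise pattern above carries over almost verbatim to Python's re module, which makes it easy to experiment with before putting it in a test (the sample strings below are made up):

```python
# Quick offline check of the sunrise pattern from the text.
import re

pattern = r"Sunrise: *[0-9]{1,2}:[0-9]{2} [ap]m"

assert re.search(pattern, "Sunrise: 6:02 am")       # single space, 1-digit hour
assert re.search(pattern, "Sunrise:   10:47 pm")    # several spaces, 2-digit hour
assert not re.search(pattern, "Sunrise: 6:2 am")    # minutes must be 2 digits

# And the simple decimal-number pattern mentioned above:
assert re.fullmatch(r"[0-9]+", "2008")
```

Trying patterns this way, against strings you expect to pass and fail, is a cheap way to debug a regexp: parameter before running the whole test case in the IDE.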
Exact Patterns

The exact type of Selenium pattern is of marginal usefulness. It uses no special characters at all. So, if you needed to look for an actual asterisk character (which is special for both globbing and regular expression patterns), the exact pattern would be one way to do that. For example, if you wanted to select an item labeled "Real *" from a dropdown, the following code might work or it might not:

select | //select | glob:Real *

The asterisk in the glob:Real * pattern will match anything or nothing. So, if there was an earlier select option labeled "Real Numbers," it would be the option selected rather than the "Real *" option. In order to ensure that the "Real *" item would be selected, the exact: prefix could be used.

In summary, globbing patterns and regular expression patterns are sufficient for the vast majority of us.

3.16 The "AndWait" Commands

The difference between a command and its AndWait alternative is that the regular command (e.g. click) will do the action and continue with the following command as fast as it can, while the AndWait alternative (e.g. clickAndWait) tells Selenium to wait for the page to load after the action has been done.

The AndWait alternative is always used when the action causes the browser to navigate to another page or reload the present one. Be aware, if you use an AndWait command for an action that does not trigger a navigation/refresh, your test will fail. This happens because Selenium will reach the AndWait's timeout without seeing any navigation or refresh being made, causing Selenium to raise a timeout exception.

3.17 The waitFor Commands in AJAX applications

In AJAX driven web applications, data is retrieved from the server without refreshing the page. Using andWait commands will not work as the page is not actually refreshed. Pausing the test execution for a certain period of time is also not a good approach, as the web element might appear later or earlier than the stipulated period depending on the system's responsiveness, load or other uncontrolled factors of the moment, leading to test failures. The best approach would be to wait for the needed element in a dynamic period and then continue the execution as soon as the element is found.

This is done using waitFor commands, such as waitForElementPresent or waitForVisible, which wait dynamically, checking for the desired condition every second and continuing to the next command in the script as soon as the condition is met.

3.18 Sequence of Evaluation and Flow Control

When a script runs, it simply runs in sequence, one command after another.
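The dynamic waiting described in Section 3.17 boils down to a poll-until-true loop: check a condition at a fixed interval and give up after a timeout, rather than sleeping for a fixed period. A minimal, Selenium-free sketch of that pattern (all names here are hypothetical; check_condition stands in for "is the element present yet?"):

```python
# Sketch of the waitFor-style polling idea.
import time

def wait_for(condition, timeout=30.0, interval=0.5):
    """Poll condition() until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)

# Simulate an AJAX element that "appears" on the third poll.
polls = {"count": 0}
def check_condition():
    polls["count"] += 1
    return polls["count"] >= 3

assert wait_for(check_condition, timeout=5.0, interval=0.01)
```

The key property is that the test continues as soon as the condition is met, instead of always paying the worst-case pause, while still failing cleanly if the condition never becomes true.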
3.18 Sequence of Evaluation and Flow Control

When a script runs, it simply runs in sequence, one command after another. Selenese, by itself, does not support condition statements (if-else, etc.) or iteration (for, while, etc.). Many useful tests can be conducted without flow control. However, for a functional test of dynamic content, possibly involving multiple pages, programming logic is often needed.

When flow control is needed, there are three options:

1. Run the script using Selenium-RC and a client library such as Java or PHP to utilize the programming language's flow control features.
2. Run a small JavaScript snippet from within the script using the storeEval command.
3. Install the goto_sel_ide.js extension.

Most testers will export the test script into a programming language file that uses the Selenium-RC API (see the Selenium-IDE chapter). However, some organizations prefer to run their scripts from Selenium-IDE whenever possible (for instance, when they have many junior-level people running tests for them, or when programming skills are lacking). If this is your case, consider a JavaScript snippet or the goto_sel_ide.js extension. Embedding JavaScript within Selenese is covered in the next section.

3.19 Store Commands and Selenium Variables

You can use Selenium variables to store constants at the beginning of a script. Also, when combined with a data-driven test design (discussed in a later section), Selenium variables can be used to store values passed to your test program from the command-line, from another program, or from a file.

The plain store command is the most basic of the many store commands and can be used to simply store a constant value in a selenium variable. It takes two parameters, the text value to be stored and a selenium variable. Use the standard variable naming conventions of only alphanumeric characters when choosing a name for your variable.

Command    store
Target     paul@mysite.org
Value      userName

Later in your script, you'll want to use the stored value of your variable. To access the value of a variable, enclose the variable in curly brackets ({}) and precede it with a dollar sign like this:

Command    verifyText
Target     //div/p
Value      ${userName}

A common use of variables is for storing input for an input field:

Command    type
Target     id=login
Value      ${userName}

Selenium variables can be used in either the first or second parameter and are interpreted by Selenium prior to any other operations performed by the command. A Selenium variable may also be used within a locator expression.
3.19.1 storeElementPresent

This corresponds to verifyElementPresent. It simply stores a boolean value–"true" or "false"–depending on whether the UI element is found.

3.19.2 storeText

StoreText corresponds to verifyText. It uses a locator to identify specific page text. The text, if found, is stored in the variable. StoreText can be used to extract text from the page being tested.

3.19.3 storeEval

This command takes a script as its first parameter. Embedding JavaScript within Selenese is covered in the next section. StoreEval allows the test to store the result of running the script in a variable.

3.20 JavaScript and Selenese Parameters

JavaScript can be used with two types of Selenese parameters: script and non-script (usually expressions). In most cases, you'll want to access and/or manipulate a test case variable inside the JavaScript snippet used as a Selenese parameter. All variables created in your test case are stored in a JavaScript associative array. An associative array has string indexes rather than sequential numeric indexes. The associative array containing your test case's variables is named storedVars. Whenever you wish to access or manipulate a variable within a JavaScript snippet, you must refer to it as storedVars['yourVariableName'].

3.20.1 JavaScript Usage with Script Parameters

Several Selenese commands specify a script parameter, including assertEval, verifyEval, storeEval, and waitForEval. These parameters require no special syntax. A Selenium-IDE user would simply place a snippet of JavaScript code into the appropriate field, normally the Target field (because a script parameter is normally the first or only parameter).
The example below illustrates how a JavaScript snippet can include calls to methods, in this case the JavaScript String object's toUpperCase method and toLowerCase method:

Command      Target                                  Value
store        Edith Wharton                           name
storeEval    storedVars['name'].toUpperCase()        uc
storeEval    storedVars['name'].toLowerCase()        lc

JavaScript Usage with Non-Script Parameters

JavaScript can also be used to help generate values for parameters, even when the parameter is not specified to be of type script. However, in this case, special syntax is required–the JavaScript snippet must be enclosed inside curly braces and preceded by the label javascript, as in javascript {*yourCodeHere*}. Below is an example in which the type command's second parameter value is generated via JavaScript code using this special syntax:

Command    Target               Value
store      league of nations    searchString
type       q                    javascript{storedVars['searchString'].toUpperCase()}

3.21 echo - The Selenese Print Command

The echo command prints text to your test's output. This is useful for providing informational progress notes in your test which display on the console as your test is running. These notes also can be used to provide context within your test result reports. Finally, echo statements can be used to print the contents of Selenium variables.

Command    Target
echo       Testing page footer now.
echo       Username is ${userName}

3.22 Alerts, Popups, and Multiple Windows

Selenium can cover JavaScript pop-ups, as well as moving focus to newly opened popup windows. But before we begin covering alerts, confirmations, and prompts in individual detail, it is helpful to understand the commonality between them. During playback, JavaScript pop-ups will not appear. This is because the function calls are actually being overridden at runtime by Selenium's own JavaScript. However, just because you cannot see the pop-up doesn't mean you don't have to deal with it. To handle a pop-up, you must call its assertFoo(pattern) function. If you fail to assert the presence of a pop-up, your next command will be blocked and you will get an error similar to the following:

[error] Error: There was an unexpected Confirmation! [Choose an option.]

The examples in this section use a sample page whose script defines the pop-up behavior:

<script type="text/javascript">
function output(resultText){
  document.getElementById('output').childNodes[0].nodeValue=resultText;
}

function show_confirm(){
  var confirmation=confirm("Choose an option.");
  if (confirmation==true){
    output("Confirmed.");
  }
  else{
    output("Rejected!");
  }
}

function show_alert(){
  alert("I'm blocking!");
  output("Alert is gone.");
}

function show_prompt(){
  var response = prompt("What's the best web QA tool?","Selenium");
  output(response);
}

function open_window(windowName){
  window.open("newWindow.html",windowName);
}
</script>

and whose body provides the elements that trigger them (the ids below, btnConfirm, btnAlert, and btnPrompt, are the ones referenced by the tests that follow):

<input type="button" id="btnConfirm" onclick="show_confirm()" value="Show confirm box" />
<input type="button" id="btnAlert" onclick="show_alert()" value="Show alert" />
<input type="button" id="btnPrompt" onclick="show_prompt()" value="Show prompt" />
<a href="newWindow.html" id="lnkNewWindow" target="_blank">New Window Link</a>
<span id="output">&nbsp;</span>

3.22.1 Alerts

Let's start with alerts because they are the simplest pop-up to handle. To begin, open the HTML sample above in a browser and click on the "Show alert" button. You'll notice that after you close the alert the text "Alert is gone." is displayed on the page. Now run through the same steps with Selenium-IDE recording, and verify the text is added after you close the alert. Your test will look something like this:

Command              Target
open                 /
click                btnAlert
assertAlert          I'm blocking!
verifyTextPresent    Alert is gone.

You may be thinking "That's odd, I never tried to assert that alert." But this is Selenium-IDE handling and closing the alert for you. You must include an assertion of the alert to acknowledge its presence.
If you remove that step and replay the test you will get the following error:

[error] Error: There was an unexpected Alert! [I'm blocking!]

If you just want to assert that an alert is present but either don't know or don't care what text it contains, you can use assertAlertPresent. This will return true or false, with false halting the test.

Confirmations

Confirmations behave in much the same way as alerts, with assertConfirmation and assertConfirmationPresent offering the same characteristics as their alert counterparts. However, by default Selenium will select OK when a confirmation pops up. Try recording a click on the "Show confirm box" button in the sample page, but click on the "Cancel" button in the popup, then assert the output text. Your test may look something like this:

Command                           Target
open                              /
click                             btnConfirm
chooseCancelOnNextConfirmation
assertConfirmation                Choose an option.
verifyTextPresent                 Rejected

You may notice that you cannot replay this test, because Selenium complains that there is an unhandled confirmation.
This is because the order of events Selenium-IDE records causes the click and chooseCancelOnNextConfirmation to be put in the wrong order (it makes sense if you think about it: Selenium can't know that you're cancelling before you open a confirmation). Simply switch these two commands and your test will run fine. The chooseCancelOnNextConfirmation function tells Selenium that all following confirmations should return false. It can be reset by calling chooseOkOnNextConfirmation.

3.23 Debugging

Debugging means finding and fixing errors in your test case. This is a normal part of test case development. We won't teach debugging here, as most new users of Selenium will already have some basic experience with debugging. If this is new to you, we recommend you ask one of the developers in your organization.

3.23.1 Breakpoints and Startpoints

The Sel-IDE supports the setting of breakpoints and the ability to start and stop the running of a test case, from any point within the test case. That is, one can run up to a specific command in the middle of the test case and inspect how the test case behaves at that point. To do this, set a breakpoint on the command just before the one to be examined. To set a breakpoint, select a command, right-click, and from the context menu select Toggle Breakpoint. Then click the Run button to run your test case from the beginning up to the breakpoint.

It is also sometimes useful to run a test case from somewhere in the middle to the end of the test case, or up to a breakpoint that follows the starting point. For example, suppose your test case first logs into the website and then performs a series of tests, and you are trying to debug one of those tests. You only need to login once, but you need to keep rerunning your tests as you are developing them. You can login once, then run your test case from a startpoint placed after the login portion of your test case. That will prevent you from having to manually log out each time you rerun your test case. To set a startpoint, select a command, right-click, and from the context menu select Set/Clear Start Point. Then click the Run button to execute the test case beginning at that startpoint.
3.23.2 Stepping Through a Testcase

To execute a test case one command at a time ("step through" it), follow these steps:

1. Start the test case running with the Run button from the toolbar.
2. Immediately pause the executing test case with the Pause button.
3. Repeatedly select the Step button.

3.23.3 Find Button

The Find button is used to see which UI element on the currently displayed webpage (in the browser) is used in the currently selected Selenium command. This is useful when building a locator for a command's first parameter (see the section on locators in the Selenium Commands chapter). It can be used with any command that identifies a UI element on a webpage, i.e. click, clickAndWait, type, and certain assert and verify commands, among others.

From Table view, select any command that has a locator parameter. Click the Find button. Now look on the webpage: there should be a bright green rectangle enclosing the element specified by the locator parameter.

3.23.4 Page Source for Debugging

Often, when debugging a test case, you simply must look at the page source (the HTML for the webpage you're trying to test) to determine a problem. Firefox makes this easy. Simply right-click the webpage and select View->Page Source. The HTML opens in a separate window. Use its Search feature (Edit=>Find) to search for a keyword to find the HTML for the UI element you're trying to test.

Alternatively, select just that portion of the webpage for which you want to see the source. Then right-click the webpage and select View Selection Source. In this case, the separate HTML window will contain just a small amount of source, with highlighting on the portion representing your selection.

3.23.5 Locator Assistance

Whenever Selenium-IDE records a locator-type argument, it stores additional information which allows the user to view other possible locator-type arguments that could be used instead. This feature can be very useful for learning more about locators, and is often needed to help one build a different type of locator than the type that was recorded. Below is a snapshot showing the contents of this drop-down for one command. Note that the first column of the drop-down provides alternative locators.
3.24 Writing a Test Suite

A test suite is a collection of test cases which is displayed in the leftmost pane in the IDE. The test suite pane will be automatically opened when an existing test suite is opened or when the user selects the New Test Case item from the File menu. In the latter case, the new test case will appear immediately below the previous test case.

The example below is of a test suite containing four test cases:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Sample Selenium Test Suite</title>
</head>
<body>
<table cellpadding="1" cellspacing="1" border="1">
  <thead>
    <tr><td>Test Cases for De Anza A-Z Directory Links</td></tr>
  </thead>
  <tbody>
    <tr><td><a href="./a.html">A Links</a></td></tr>
    <tr><td><a href="./b.html">B Links</a></td></tr>
    <tr><td><a href="./c.html">C Links</a></td></tr>
    <tr><td><a href="./d.html">D Links</a></td></tr>
  </tbody>
</table>
</body>
</html>

Note: Test case files should not have to be co-located with the test suite file that invokes them, and on Mac OS and Linux systems that is indeed the case. However, at the time of this writing, a bug prevents Windows users from being able to place the test cases elsewhere than with the test suite that invokes them.

3.25 User Extensions

IMPORTANT: THIS SECTION IS OUT OF DATE–WE WILL BE REVISING THIS SOON.

User extensions are JavaScript files that allow one to create his or her own customizations and features to add additional functionality. Often this is in the form of customized commands, although this extensibility is not limited to additional commands. There are a number of useful extensions created by users.

Perhaps the most popular of all Selenium-IDE extensions is one which provides flow control in the form of while loops and primitive conditionals. This extension is goto_sel_ide.js. For an example of how to use the functionality provided by this extension, look at the page created by its author.

To install this extension, put the pathname to its location on your computer in the Selenium Core extensions field of Selenium-IDE's Options=>Options=>General tab. After selecting the OK button, you must close and reopen Selenium-IDE in order for the extensions file to be read. Any change you make to an extension will also require you to close and reopen Selenium-IDE.
Information on writing your own extensions can be found near the bottom of the Selenium Reference document.

3.26 Format

Format, under the Options menu, allows you to select a language for saving and displaying the test case. The default is HTML.

If you will be using Selenium-RC to run your test cases, this feature is used to translate your test case into a programming language. Select the language, e.g. Java or PHP, you will be using with Selenium-RC for developing your test programs. Then simply save the test case using File=>Save. Your test case will be translated into a series of functions in the language you choose. Essentially, program code supporting your test is generated for you by Selenium-IDE.

Also, note that if the generated code does not suit your needs, you can alter it by editing a configuration file which defines the generation process. Each supported language has configuration settings which are editable. This is under the Options=>Options=>Format tab.

Note: At the time of this writing, this feature is not yet supported by the Selenium developers. However, the author has altered the C# format in a limited manner and it has worked well.

3.27 Executing Selenium-IDE Tests on Different Browsers

While Selenium-IDE can only run tests against Firefox, tests developed with Selenium-IDE can be run against other browsers, using a simple command-line interface that invokes the Selenium-RC server. This topic is covered in the Run Selenese tests section of the Selenium-RC chapter. The -htmlSuite command-line option is the particular feature of interest.
3.28 Troubleshooting

Below is a list of image/explanation pairs which describe frequent sources of problems with Selenium-IDE:

Table view is not available with this format.

This message can occasionally be displayed in the Table tab when Selenium-IDE is launched. The workaround is to close and reopen Selenium-IDE. See issue 1008 for more information. If you are able to reproduce this reliably, then please provide details so that we can work on a fix.

error loading test case: no command found

You've used File=>Open to try to open a test suite file. Use File=>Open Test Suite instead. An enhancement request has been raised to improve this error message. See issue 1010.
This type of error may indicate a timing problem, i.e., the element specified by a locator in your command wasn't fully loaded when the command was executed. Try putting a pause 5000 before the command to determine whether the problem is indeed related to timing. If so, investigate using an appropriate waitFor* or *AndWait command before the failing command.

Whenever your attempt to use variable substitution fails, as is the case for the open command above, it indicates that you haven't actually created the variable whose value you're trying to access. This is sometimes due to putting the variable in the Value field when it should be in the Target field, or vice versa. In the example above, the two parameters for the store command have been erroneously placed in the reverse order of what is required. For any Selenese command, the first required parameter must go in the Target field, and the second required parameter (if one exists) must go in the Value field.

error loading test case: [Exception... "Component returned failure code: 0x80520012 (NS_ERROR_FILE_NOT_FOUND) [nsIFileInputStream.init]" nsresult: "0x80520012 (NS_ERROR_FILE_NOT_FOUND)" location: "JS frame :: chrome://selenium-ide/content/fileutils.js :: anonymous :: line 48" data: no]

One of the test cases in your test suite cannot be found. Make sure that the test case is indeed located where the test suite indicates it is located. Also, make sure that your actual test case files have the .html extension both in their filenames and in the test suite file where they are referenced. See issue 1011.

Selenium-IDE is very space-sensitive! An extra space before or after a command will cause it to be unrecognizable.

Your extension file's contents have not been read by Selenium-IDE. Be sure you have specified the proper pathname to the extensions file via Options=>Options=>General in the Selenium Core extensions field. Also, Selenium-IDE must be restarted after any change to either an extensions file or to the contents of the Selenium Core extensions field. This defect has been raised. See issue 1013.

This type of error message makes it appear that Selenium-IDE has generated a failure where there is none, which is confusing. In the example above, note that the parameter for verifyTitle has two spaces between the words "Selenium" and "web". The page's actual title has only one space between these words. Thus, Selenium-IDE is correct to generate an error, but is misleading in the nature of the error. The problem is that the log file error messages collapse a series of two or more spaces into a single space, so Selenium-IDE is correct that the actual value does not match the value specified in such test cases. This defect has been raised. See issue 1012.
CHAPTER FOUR

SELENIUM 2.0 AND WEBDRIVER

NOTE: We're currently working on documenting these sections. We believe the information here is accurate, however be aware we are also still working on this chapter. Additional information will be provided as we go which should make this chapter more solid.

4.1 Selenium 2.0 Features

Selenium 2.0 has many new exciting features and improvements over Selenium 1. The primary new feature is the integration of the WebDriver API. This addresses a number of limitations along with providing an alternative, and simpler, programming interface. The goal is to develop an object-oriented API that provides additional support for a larger number of browsers along with improved support for modern advanced web-app testing problems.

These new features are introduced in the release announcement in the Official Selenium Blog. NOTE: We will add a description of the Selenium 2.0 new features–for now we refer readers to the release announcement.

4.2 The Selenium Server – When to Use It

You may, or may not, need the Selenium Server, depending on how you intend to use Selenium. If you will be strictly using the WebDriver API you do not need the Selenium Server. Selenium-WebDriver makes direct calls to the browser using each browser's native support for automation. Since WebDriver uses completely different technology to interact with the browsers, the Selenium Server is not needed. In short, if you're using Selenium-WebDriver, you don't need the Selenium Server.

The Selenium Server provides Selenium-RC functionality, which is primarily used for Selenium 1.0 backwards compatibility. Selenium-RC, however, requires the Selenium Server to inject javascript into the browser and to then translate messages from your test program's language-specific Selenium client library into commands that invoke the javascript commands which in turn automate the AUT from within the browser.

Also, if you are using Selenium-backed WebDriver (the WebDriver API but with back-end Selenium technology) you will need the Selenium Server. Another reason for using the Selenium Server is if you are using Selenium-Grid for distributed execution of your tests. These topics are described in more detail later in this chapter.

4.3 Setting Up a Selenium-WebDriver Project

To install Selenium means to set up a project in a development environment so you can write a program using Selenium.
How you do this depends on your programming language and your development environment.

4.3.1 Java

The easiest way to set up a Selenium 2.0 Java project is to use Maven. Maven will download the java bindings (the Selenium 2.0 java client library) and all its dependencies, and will create the project for you, using a maven pom.xml (project configuration) file. Once you've done this, you can import the maven project into your preferred IDE, IntelliJ IDEA or Eclipse.

First, create a folder to contain your Selenium project files. Then, to use Maven, you need a pom.xml file. This can be created with a text editor. We won't teach the details of pom.xml files or of using Maven since there are already excellent references on this. Your pom.xml file will look something like this. Create this file in the folder you created for your project.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>MySel20Proj</groupId>
    <artifactId>MySel20Proj</artifactId>
    <version>1.0</version>
    <dependencies>
        <dependency>
            <groupId>org.seleniumhq.selenium</groupId>
            <artifactId>selenium-java</artifactId>
            <version>2.0.0</version>
        </dependency>
    </dependencies>
</project>

The key component adding Selenium and its dependencies are the lines:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>2.0.0</version>
</dependency>

Be sure you specify the most current version. At the time of writing, the version listed above was the most current, however there were frequent releases immediately after the release of Selenium 2.0. Check the SeleniumHQ website for the current release and edit the above dependency accordingly.

Now, from a command-line, CD into the project directory and run maven as follows:

mvn clean install

This will download Selenium and all its dependencies and will add them to the project. Finally, import the project into your preferred development environment. For those not familiar with this, we've provided an appendix which shows this.
Importing a maven project into IntelliJ IDEA.

4.3.2 C#

Selenium 2.0 is distributed as a set of unsigned dlls. To include Selenium in your project, simply download the latest selenium-dotnet zip file from http://code.google.com/p/selenium/downloads/list. If you are using Windows Vista or above, you should unblock the zip file before unzipping it: right click on the zip file, click "Properties", click "Unblock" and click "OK". Unzip the contents of the zip file, and add a reference to each of the unzipped dlls to your project in Visual Studio (or your IDE of choice). Note that we do not have an official NuGet package at this time.

4.3.3 Python

If you are using Python for test automation then you probably are already familiar with developing in Python. To add Selenium to your Python environment, run the following command from a command-line:

pip install selenium

Teaching Python development itself is beyond the scope of this document, however there are many resources on Python and likely developers in your organization can help you get up to speed.

4.3.4 Ruby

If you are using Ruby for test automation then you probably are already familiar with developing in Ruby. To add Selenium to your Ruby environment, run the following command from a command-line:

gem install selenium-webdriver

Teaching Ruby development itself is beyond the scope of this document, however there are many resources on Ruby and likely developers in your organization can help you get up to speed.

4.3.5 Perl

Perl is not supported in Selenium 2.0 at this time. If you have questions, or would like to assist in providing this support, please post a note to the Selenium developers.

4.3.6 PHP

PHP is not supported in Selenium 2.0 at this time. If you have questions, or would like to assist in providing this support, please post a note to the Selenium developers.
4.4 Migrating from Selenium 1.0

For those who already have test suites written using Selenium 1.0, we have provided tips on how to migrate your existing code to Selenium 2.0. Simon Stewart, the lead developer for Selenium 2.0, has written an article on migrating from Selenium 1.0. We've included this as an appendix: Migrating From Selenium RC to Selenium WebDriver.

4.5 Getting Started With Selenium-WebDriver

WebDriver is a tool for automating testing web applications, and in particular to verify that they work as expected. It aims to provide a friendly API that's easy to explore and understand, easier to use than the Selenium-RC (1.0) API, which will help make your tests easier to read and maintain. It's not tied to any particular test framework, so it can be used equally well in a unit test or from a plain old "main" method. This section introduces WebDriver's API and helps get you started becoming familiar with it. Start by setting up a WebDriver project if you haven't already. This was described in the previous section, Setting Up a Selenium-WebDriver Project.

Once your project is set up, you can see that WebDriver acts just as any normal library: it is entirely self-contained, and you usually don't need to remember to start any additional processes or run any installers before using it, as opposed to the proxy server with Selenium-RC.

You're now ready to write some code. An easy way to get started is this example, which searches for the term "Cheese" on Google and then outputs the result page's title to the console.

4.5.1 Java

package org.openqa.selenium.example;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Selenium2Example {
    public static void main(String[] args) {
        // Create a new instance of the Firefox driver
        // Notice that the remainder of the code relies on the interface,
        // not the implementation.
        WebDriver driver = new FirefoxDriver();

        // And now use this to visit Google
        driver.get("http://www.google.com");
        // Alternatively the same thing can be done like this
        // driver.navigate().to("http://www.google.com");

        // Find the text input element by its name
        WebElement element = driver.findElement(By.name("q"));

        // Enter something to search for
        element.sendKeys("Cheese!");

        // Now submit the form. WebDriver will find the form for us from the element
        element.submit();

        // Check the title of the page
        System.out.println("Page title is: " + driver.getTitle());

        // Google's search is rendered dynamically with JavaScript.
        // Wait for the page to load, timeout after 10 seconds
        (new WebDriverWait(driver, 10)).until(new ExpectedCondition<Boolean>() {
            public Boolean apply(WebDriver d) {
                return d.getTitle().toLowerCase().startsWith("cheese!");
            }
        });

        // Should see: "cheese! - Google Search"
        System.out.println("Page title is: " + driver.getTitle());

        // Close the browser
        driver.quit();
    }
}

4.5.2 C#

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class GoogleSuggest
{
    static void Main(string[] args)
    {
        IWebDriver driver = new FirefoxDriver();

        // Notice navigation is slightly different than the Java version
        // This is because 'get' is a keyword in C#
        driver.Navigate().GoToUrl("http://www.google.com/");

        IWebElement query = driver.FindElement(By.Name("q"));
        query.SendKeys("Cheese");

        // TODO add wait
        System.Console.WriteLine("Page title is: " + driver.Title);

        driver.Quit();
    }
}

4.5.3 Python

from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait # available since 2.0
import time

# Create a new instance of the Firefox driver
driver = webdriver.Firefox()

# go to the google home page
driver.get("http://www.google.com")

# find the element that's name attribute is q (the google search box)
inputElement = driver.find_element_by_name("q")

# type in the search
inputElement.send_keys("Cheese!")

# submit the form (although google automatically searches now without submitting)
inputElement.submit()

# the page is ajaxy so the title is originally this:
print driver.title

try:
    # we have to wait for the page to refresh, the last thing that seems
    # to be updated is the title
    WebDriverWait(driver, 10).until(lambda driver : driver.title.lower().startswith("cheese!"))

    # You should see "cheese! - Google Search"
    print driver.title

finally:
    driver.quit()

4.5.4 Ruby

require 'rubygems'
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get "http://google.com"

element = driver.find_element :name => "q"
element.send_keys "Cheese!"
element.submit

puts "Page title is #{driver.title}"

wait = Selenium::WebDriver::Wait.new(:timeout => 10)
wait.until { driver.title.downcase.start_with? "cheese!" }

puts "Page title is #{driver.title}"
driver.quit

In upcoming sections, you will learn more about how to use WebDriver for things such as navigating forward and backward in your browser's history, and how to test web sites that use frames and windows. We also provide more thorough discussions and examples.
let’s start with the HtmlUnit Driver: WebDriver driver = new HtmlUnitDriver(). To keep things simple.firefox. If you need to ensure such pages are fully loaded then you can use “waits”. WebDriver will wait until the page has fully loaded (that is.openqa. but there are several implementations. However. and will depend on their familiarity with the application under test. This varies from person to person. and your testing framework. Firstly.ie.openqa. Secondly. For sheer speed.selenium.selenium.7. Introducing WebDriver’s Drivers 51 .openqa. you may wish to choose a driver such as the Firefox Driver. especially when you’re showing a demo of your application (or running the tests) for an audience. Which you use depends on what you want to do. To support higher “perceived safety”. the HtmlUnit Driver is great. which refers to whether or not an observer believes the tests work as they should. which refers to whether or not the tests work as they should.com" ).ChromeDriver We’re currently upating this table You can find out more information about each of these by following the links in the table.chrome. As a developer you may be comfortable with this.openqa.google. there’s “actual safety”. which means that you can’t watch what’s happening.HtmlUnitDriver org. there’s “perceived safety”. It’s worth noting that if your page uses a lot of AJAX on load then WebDriver may not know when it has completely loaded. This has the added advantage that this driver actually renders content to a screen.FirefoxDriver org. 4.selenium. These include: Name of driver HtmlUnit Driver Firefox Driver Internet Explorer Driver Chrome Driver Opera Driver iPhone Driver Android Driver Available on which OS? All All Windows All Class to instantiate org.get( " Commands and Operation 4. and so can be used to detect information such as the position of an element on a page. the “onload” event has fired) before returning control to your test or script. and it falls into two parts. 
this idea is referred to as “safety”. Often. WebDriver.htmlunit. This can be measured and quantified.InternetExplorerDriver org.1 Fetching a Page The first thing you’re likely to want to do with WebDriver is navigate to a page.selenium. The normal way to do this is by calling “get”: driver. 0 and WebDriver . what you type will be appended to what’s already there. So. Selenium 2. for example) an exception will be thrown. What can you do with it? First of all.Selenium Documentation.name( "passwd" )).findElement(By. Don’t worry! WebDriver will attempt to do the Right Thing. which makes it possible to test keyboard shortcuts such as those used on GMail.sendKeys( " and some" . WebDriver offers a number of ways of finding elements. you may want to enter some text into a text field: element. For example. Instead. more specifically. we represent all types of elements using the same interface: Web Element.3 Locating UI Elements (WebElements) Note: This section still needs to be developed.xpath( "//input[@id=’passwd-id’]" )). element = driver. Release 1. If nothing can be found. What we’d really like to do is to interact with the pages. WebDriver has an “Object-based” API. given an element defined as: <input type= "text" name= "passwd" id= "passwd-id" /> you could find it using any of the following examples: WebElement element. we need to find one. It is possible to call sendKeys on any element. A side-effect of this is that typing something into a text field won’t automatically clear it. Keys. 52 Chapter 4.0 4.sendKeys( "some text" ). Locating elements in WebDriver is done using the “By” class. and if you call a method that makes no sense (“setSelected()” on a “meta” tag. You can also look for a link by its text.ARROW_DOWN). You can simulate pressing the arrow keys by using the “Keys” class: element.findElement(By.7. First of all.clear().id( "passwd-id" )). element = driver. 4. not all of them will make sense or be valid. 
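The “waits” mentioned above are, at heart, just a poll loop: keep re-checking a condition until it holds or a timeout expires. A minimal, Selenium-free sketch of the idea (the helper name wait_until is ours, not a WebDriver API):

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse. This mirrors what WebDriverWait-style helpers do
    internally; it is an illustration, not part of WebDriver."""
    deadline = time.time() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.time() >= deadline:
            raise TimeoutError("condition not met within %s seconds" % timeout)
        time.sleep(poll_interval)

# A condition that only becomes true on the third poll:
polls = []
result = wait_until(lambda: len(polls) >= 2 or polls.append(None),
                    timeout=5, poll_interval=0.01)

# And a condition that never holds, demonstrating the timeout path:
try:
    wait_until(lambda: False, timeout=0.05, poll_interval=0.01)
except TimeoutError as exc:
    print("timed out:", exc)
```

The real WebDriverWait shown later in this documentation does the same thing, with the condition expressed as an ExpectedCondition (or a lambda) against the driver.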
If there’s more than one element that matches the query.2 Interacting With the Page Just being able to go to places isn’t terribly useful. a NoSuchElementException will be thrown. This means that although you may see a lot of possible methods you could invoke when you hit your IDE’s auto-complete key combination. but be careful! The text must be an exact match! You should also be careful when using XPATH in WebDriver. element = driver. This class implements all location strategies used by WebDriver. You can easily clear the contents of a text field or textarea: element. the HTML elements within a page.7. then only the first will be returned. you’ve got an element. or.findElement(By. This can lead to some unexpected behaviour unless you are aware of the differences in the various xpath engines. the “input” tag does not require the “type” attribute because it defaults to “text”. so for the following piece of HTML: <input type= "text" name= "example" /> <INPUT type= "text" name= "other" /> The following number of matches will be found XPath expression //input //INPUT HtmlUnit Driver 1 (“example”) 0 Firefox Driver 2 2 Internet Explorer Driver 2 0 Sometimes HTML elements do not need attributes to be explicitly declared because they will default to known values.println(String. On those browsers that don’t have native XPath support.. select. List<WebElement> allOptions = select.deselectAll().selectByVisibleText( "Edam" ).xpath( "//select" )). } This will find the first “SELECT” element on the page.7. and cycle through each of it’s OPTIONs in turn. we have provided our own implementation.Selenium Documentation.findElement(By. WebDriver uses a browser’s native XPath capabilities wherever possible.7. which provides useful methods for interacting with these. As you will notice.findElements(By. For example.getValue())). for (WebElement option : allOptions) { System.format( "Value is: %s" . select. 
The rule of thumb when using xpath in WebDriver is that you should not expect to be able to match against these implicit attributes. Commands and Operation 53 . 4.4 User Input .setSelected(). option.findElement(By. and you can use “setSelected” to set something like an OPTION tag selected.0 Using XPATH Statements At a high level. Select select = new Select(driver. this isn’t the most efficient way of dealing with SELECT elements.out. Release 1.xpath( "//select" ))). but what about the other elements? You can “toggle” the state of checkboxes.Filling In Forms We’ve already seen how to enter text into a textarea or text field. 4.tagName( "option" )). option. WebDriver’s support classes include one called “Select”. printing out their values. Dealing with SELECT tags isn’t too bad: WebElement select = driver. and selecting each in turn. and then select the OPTION with the displayed text of “Edam”. If you call this on an element within a form. 54 Chapter 4. 4. If the element isn’t in a form.0 and WebDriver . But how do you know the window’s name? Take a look at the javascript or link that opened it: <a href= "somewhere.findElement(By.child" ). it’s possible to iterate over every open window like so: for (String handle : driver.window( "windowName" ).0. Selenium 2. WebDriver will walk up the DOM until it finds the enclosing form and then calls submit on that.5 Moving Between Windows and Frames Some web applications have any frames or multiple windows. All calls to driver will now be interpreted as being directed to the particular window. One way to do this would be to find the “submit” button and click it: driver.id( "submit" )). } You can also swing from frame to frame (or into iframes): driver. you probably want to submit it.switchTo(). // Assume the button has the ID "submit" :) Alternatively. then the NoSuchElementException will be thrown: element.switchTo(). Once you’ve finished filling out the form. 
This will deselect all OPTIONs from the first SELECT on the page.

Once you’ve finished filling out the form, you probably want to submit it. One way to do this would be to find the “submit” button and click it:

driver.findElement(By.id("submit")).click(); // Assume the button has the ID "submit" :)

Alternatively, WebDriver has the convenience method “submit” on every element. If you call this on an element within a form, WebDriver will walk up the DOM until it finds the enclosing form and then calls submit on that. If the element isn’t in a form, then the NoSuchElementException will be thrown:

element.submit();

4.7.5 Moving Between Windows and Frames

Some web applications have many frames or multiple windows. WebDriver supports moving between named windows using the “switchTo” method:

driver.switchTo().window("windowName");

All calls to driver will now be interpreted as being directed to the particular window. But how do you know the window’s name? Take a look at the javascript or link that opened it:

<a href="somewhere.html" target="windowName">Click here to open a new window</a>

Alternatively, you can pass a “window handle” to the “switchTo().window()” method. Knowing this, it’s possible to iterate over every open window like so:

for (String handle : driver.getWindowHandles()) {
    driver.switchTo().window(handle);
}

You can also swing from frame to frame (or into iframes):

driver.switchTo().frame("frameName");

It’s possible to access subframes by separating the path with a dot, and you can specify the frame by its index too. That is:

driver.switchTo().frame("frameName.0.child");

would go to the frame named “child” of the first subframe of the frame called “frameName”. All frames are evaluated as if from *top*.
To reiterate: “navigate(). After you’ve triggered an action that opens a popup. This one’s valid for the entire domain Cookie cookie = new Cookie( "key" .example. dismiss.7.alert().get(". you may be interested in understanding how to use cookies.format( "%s -> %s" .navigate(). It’s just possible that something unexpected may happen when you call these methods if you’re used to the behaviour of one browser over another.getCookies(). WebDriver has a number of smaller. confirms. First of all. driver. but it’s simply a synonym to: driver.0 4. task-focused interfaces.g } 4. Commands and Operation 55 . loadedCookie.switchTo().back().8 Cookies Before we leave these next steps. you can access the alert with the following: Alert alert = driver. One’s just a lot easier to type than the other! The “navigate” interface also exposes the ability to move backwards and forwards in your browser’s history: driver. Refer to the JavaDocs for more information. knowing that there are more and more sites that rely on JavaScript? We took the conservative approach. Selenium 2. With each release of both WebDriver and HtmlUnit. Pros • Fastest implementation of WebDriver • A pure Java solution and so it is platform independent. this is based on HtmlUnit. we reassess this decision: we hope to enable JavaScript by default on the HtmlUnit at some point. 4. HtmlUnit has an impressively complete implementation of the DOM and has good support for using JavaScript. but it is no different from any other browser: it has its own quirks and differences from both the W3C standard and the DOM implementations of the major browsers.0 and WebDriver . WebElement element = driver.1 HtmlUnit Driver This is currently the fastest and most lightweight implementation of WebDriver.8. Although the DOM is defined by the W3C each browser has its own quirks and differences in their implementation of the DOM and in how JavaScript interacts with it. (new Actions(driver)). 
and by default have disabled support when we use HtmlUnit. 56 Chapter 4. As the name suggests.perform(). WebElement target = driver.Selenium Documentation. When we say “JavaScript” we actually mean “JavaScript and the DOM”.findElement(By. do we enable HtmlUnit’s JavaScript capabilities and run the risk of teams running into problems that only manifest themselves there.9 Drag And Drop Here’s an example of using the Actions class to perform a drag and drop. target). we had to make a choice. or do we leave JavaScript disabled. • Supports JavaScript Cons • Emulates other browsers’ JavaScript behaviour (see below) JavaScript in the HtmlUnit Driver None of the popular browsers uses the JavaScript engine used by HtmlUnit (Rhino). If you test JavaScript using HtmlUnit the results may differ significantly from those browsers. despite its ability to mimic other browsers.dragAndDrop(element.name( "source" )).findElement(By. Release 1.name( "target" )).8 Driver Specifics and Tradeoffs 4. As of rc2 this only works on the Windows platform.0 4. With WebDriver.7. WebDriver driver = new FirefoxDriver(profile). FirefoxProfile profile = new FirefoxProfile(profileDir).useragent. There are two ways to obtain this profile. WebDriver driver = new FirefoxDriver(profile). FirefoxProfile profile = allProfiles. Assuming that the profile has been created using Firefox’s profile manager (firefox -ProfileManager): ProfileIni allProfiles = new ProfilesIni(). driver. Driver Specifics and Tradeoffs 57 .8.override" . WebDriver driver = new FirefoxDriver(profile). if the profile isn’t already registered with Firefox: File profileDir = new File( "path/to/top/level/of/profile" ).setJavascriptEnabled(true). To enable them: 4. 
Enabling JavaScript

If you can’t wait, enabling JavaScript support is very easy:

HtmlUnitDriver driver = new HtmlUnitDriver();
driver.setJavascriptEnabled(true);

This will cause the HtmlUnit Driver to emulate Internet Explorer’s JavaScript handling by default.

4.8.2 Firefox Driver

Pros
• Runs in a real browser and supports JavaScript
• Faster than the Internet Explorer Driver

Cons
• Slower than the HtmlUnit Driver

Changing the User Agent

This is easy with the Firefox Driver:

FirefoxProfile profile = new FirefoxProfile();
profile.addAdditionalPreference("general.useragent.override", "some UA string");
WebDriver driver = new FirefoxDriver(profile);

Modifying the Firefox Profile

Suppose that you wanted to modify the user agent string (as above), but you’ve got a tricked out Firefox profile that contains dozens of useful extensions. There are two ways to obtain this profile. Assuming that the profile has been created using Firefox’s profile manager (firefox -ProfileManager):

ProfilesIni allProfiles = new ProfilesIni();
FirefoxProfile profile = allProfiles.getProfile("WebDriver");
profile.setPreference("foo.bar", 23);
WebDriver driver = new FirefoxDriver(profile);

Alternatively, if the profile isn’t already registered with Firefox:

File profileDir = new File("path/to/top/level/of/profile");
FirefoxProfile profile = new FirefoxProfile(profileDir);
profile.addAdditionalPreferences(extraPrefs);
WebDriver driver = new FirefoxDriver(profile);

Enabling features that might not be wise to use in Firefox

As we develop features in the Firefox Driver, we expose the ability to use them. For example, until we feel native events are stable on Firefox for Linux, they are disabled by default. To enable them:
FirefoxProfile profile = new FirefoxProfile();
profile.setEnableNativeEvents(true);
WebDriver driver = new FirefoxDriver(profile);

Info

See the Firefox section in the wiki page for the most up to date info.

4.8.3 Internet Explorer Driver

This driver has been tested with Internet Explorer 6, 7 and 8 on XP. It has also been successfully tested on Vista.

Pros
• Runs in a real browser and supports JavaScript

Cons
• Obviously the Internet Explorer Driver will only work on Windows!
• Comparatively slow (though still pretty snappy :)

Info

See the Internet Explorer section of the wiki page for the most up to date info.

4.8.4 Chrome Driver

Chrome Driver is maintained / supported by the Chromium project itself. WebDriver is now inside of the Chrome browser itself. As of rc2 this only works on the Windows platform. Please see our wiki for the most up to date info. Please take special note of the Required Configuration section. Note that since Chrome uses its own V8 JavaScript engine rather than Safari’s Nitro engine, JavaScript execution may differ.

Pros
• Runs in a real browser and supports JavaScript
• Because Chrome is a Webkit-based browser, the Chrome Driver may allow you to verify that your site works in Safari.

Cons
• Slower than the HtmlUnit Driver
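The point of listing pros and cons per driver is that test code itself should not care which one is in use: everything is written against the common WebDriver interface, so swapping drivers is a configuration change rather than a test rewrite. A toy, Selenium-free illustration of that substitutability (the class names here are stand-ins, not real Selenium classes):

```python
class FakeHtmlUnitDriver:
    """Stand-in for a fast, in-memory, headless driver."""
    def get(self, url):
        return "loaded %s headlessly" % url

class FakeFirefoxDriver:
    """Stand-in for a driver that renders in a real browser."""
    def get(self, url):
        return "loaded %s in a real browser" % url

DRIVERS = {"htmlunit": FakeHtmlUnitDriver, "firefox": FakeFirefoxDriver}

def run_smoke_test(driver_name):
    # The test body only talks to the shared interface (here, .get),
    # so picking speed vs. "perceived safety" is a one-line choice.
    driver = DRIVERS[driver_name]()
    return driver.get("http://www.google.com")
```

With real WebDriver the same shape applies: construct HtmlUnitDriver, FirefoxDriver, etc., and hold the result in a variable typed as the WebDriver interface.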
If you used the commands at the beginning of this chapter for setting up a Selenium 2 project you should already have the Chrome Driver along with all the other drivers. this allows one to use both APIs. 4. selenium. side-by-side. // Perform actions with selenium selenium. "cheese" ). Firefox is used here as an example WebDriver driver = new FirefoxDriver(). WebDriver driverInstance = ((WebDriverBackedSelenium) selenium). // A "base url".8. This will refer to the // same WebDriver instance as the "driver" variable above.5 Opera Driver See the Opera Driver wiki article in the Selenium Wiki for information on using the Opera Driver. Also. selenium.6 iPhone Driver See the iPhone Driver wiki article in the Selenium Wiki for information on using the Mac iOS Driver. selenium.open( ". CommandExecutor executor = new SeleneseCommandExecutor( "http:localhost:4444/" . 60 Chapter 4. Also.9.2 Cons • Does not implement every method • More advanced Selenium usage (using “browserbot” or other built-in JavaScript methods from Selenium Core) may not work • Some methods may be slower due to underlying implementation differences 4. 4. This is presented.0 4.3 Backing WebDriver with Selenium WebDriver doesn’t support as many browsers as Selenium RC does. Release 1.Selenium Documentation. There are currently some major limitations with this approach. because we’re using Selenium Core for the heavy lifting of driving the browser. Both of these chapters present techniques for writing more maintainable tests by making your test code more modular.9.0 and WebDriver . you may want to look at the Test Design Considerations chapter.9. "http:// WebDriver driver = new RemoteWebDriver(executor. you are limited by the JavaScript sandbox.10 Selenium WebDriver Wiki You can find further resources for WebDriver in WebDriver’s wiki 4. Once getting familiar with the Selenium-WebDriver API you will then want to learn how to build test suites for maintainability. 
The approach most Selenium experts are now recommending is to design your test code using the Page Object Design Pattern along with possibly a Page Factor. extensibility. Also.setBrowserName( "safari" ). notably that findElements doesn’t work as expected. and reduced fragility when features of the AUT frequently change. capabilities). so in order to provide that support while still using the WebDriver API. Selenium-WebDriver provides support for this by supplying a PageFactory class. along with other advanced topics. for high-level description of this technique.11 Next Steps This chapter has simply been a high level walkthrough of WebDriver and some of its key capabilities. Selenium 2. you can make use of the SeleneseCommandExecutor It is done like this: Capabilities capabilities = new DesiredCapabilities() capabilities. in the next chapter 4. TimeSpan. There are some convenience methods provided that help you write code that will wait only as long as required.sleep().1 Explicit Waits An explicit waits is code you define to wait for a certain condition to occur before proceeding further in the code.FindElement(By.Until<IWebElement>((d) => { return d.support. Java WebDriver driver = new FirefoxDriver().webdriver.Id( "someDynamicElement" )). Python from selenium import webdriver from selenium. driver.ui import WebDriverWait # available since 2.Url = "" . }).CHAPTER FIVE WEBDRIVER: ADVANCED USAGE 5.findElement(By. WebDriverWait in combination with ExpectedCondition is one way this can be accomplished. C# IWebDriver driver = new FirefoxDriver(). The worst case of this is Thread.Firefox() 61 .get( "" ).4. WebDriverWait wait = new WebDriverWait(driver.until(new ExpectedCondition<WebElement>(){ @Override public WebElement apply(WebDriver d) { return d.FromSeconds(10)). which sets the condition to an exact time period to wait. WebElement myDynamicElement = (new WebDriverWait(driver. 10)) .1. 5.0 ff = webdriver. 
driver.1 Explicit and Implicit Waits Waiting is having the automated task execution elapse a certain amount of time before continuing with the next step. IWebElement myDynamicElement = wait. }}).id( "myDynamicElement" )). get " " wait = Selenium::WebDriver::Wait. WebDriverWait by default calls the ExpectedCondition every 500 milliseconds until it returns successfully.find_element_by_id( " myDynamicEle finally: ff. Release 1. 10).get( "" ).Timeouts().2 Implicit Waits An implicit wait is to tell WebDriver to poll the DOM for a certain amount of time when trying to find an element or elements if they are not immediately available.SECONDS).9 or if you installed without gem require ’selenium-webdriver’ driver = Selenium::WebDriver. driver. Java WebDriver driver = new FirefoxDriver().ImplicitlyWait(TimeSpan.id( "myDynamicElement" )).0 ff. TimeUnit.Manage().implicitlyWait(10. 5. Python from selenium import webdriver ff = webdriver.Selenium Documentation.quit() Ruby require ’rubygems’ # not required for ruby 1.until { driver. C# WebDriver driver = new FirefoxDriver(). WebElement myDynamicElement = driver.new(:timeout => 10) # seconds begin element = wait.timeouts(). This example is also functionally equivalent to the first Implicit Waits example.get( " " ) try: WebDriverWait(ff. driver.manage().until(lambda driver : driver. driver.Url = "" .FindElement(By.10 seconds. driver.Id( "someDynamicElement" )).Firefox() 62 Chapter 5. IWebElement myDynamicElement = driver.FromSeconds(10)). The default setting is 0. the implicit wait is set for the life of the WebDriver object instance.for :firefox driver.findElement(By. WebDriver: Advanced Usage . A successful return is for ExpectedCondition type is Boolean return true or not null return value for all other ExpectedCondition types.quit end This waits up to 10 seconds before throwing a TimeoutException or if it finds the element will return it in 0 .find_element(:id => " some-dynamic-element " ) } ensure driver. Once set.1. By. 
// Now submit the form. 5.selenium.2 RemoteWebDriver You’ll start by using the HtmlUnit Driver.find_element_by_id( " myDynamicElement " ) Ruby require ’rubygems’ # not required for ruby 1.WebElement.example. Because of this.timeouts.openqa.implicitly_wait(10) ff.selenium. public class HtmlUnitExample { public static void main(String[] args) { // Create a new instance of the html unit driver // Notice that the remainder of the code relies on the interface. package org. // not the implementation. // Find the text input element by its name WebElement element = driver.selenium.WebDriver.for :firefox driver.selenium.name( "q" )). // Enter something to search for element.openqa.implicit_wait = 10 # seconds driver.selenium.get " " begin element = driver. Release 1.findElement(By.quit end 5.0 ff.get( " " ) myDynamicElement = ff.find_element(:id => " some-dynamic-element " ) ensure driver.google. RemoteWebDriver 63 .openqa. org. WebDriver driver = new HtmlUnitDriver().2.sendKeys( "Cheese!" ). // Check the title of the page System.submit().manage.com" ).openqa.Selenium Documentation.9 or if you installed without gem require ’selenium-webdriver’ driver = Selenium::WebDriver. // And now use this to visit Google driver.HtmlUnitDriver. org. WebDriver will find the form for us from the element element.get( ". This is a pure Java driver that runs entirely in-memory.println( "Page title is: " + driver.out. org.htmlunit. import import import import org. you won’t see a new browser window open.getTitle()).openqa. Release 1. class Example { static void Main(string[] args) { //to use HtmlUnit from .WriteLine( "Page title is: " + driver.Quit().Selenium Documentation.Remote. element.5 Cookies Todo 64 Chapter 5. element. You should see a line with the title of the Google search results as output on the console.Selenium. driver. //the . System. Congratulations. using OpenQA.HtmlUnit().ReadLine(). WebDriver: Advanced Usage . 
//The rest of the code should look very similar to the Java library IWebElement element = driver. IWebDriver driver = new RemoteWebDriver(desiredCapabilities). you’ve managed to get started with WebDriver! 5.Net driver. Below is the same example in C#.0 } } HtmlUnit isn’t confined to just Java.Name( "q" )).3 AdvancedUserInteractions Todo 5.Console.Selenium.GoToUrl( "( "Cheese!" ).FindElement(By.4 HTML5 Todo 5. Selenium makes accessing HtmlUnit easy from any language.Console. System.Navigate(). } } Compile and run this.Title).Net we must access it through the RemoteWebDriver //Download and run the selenium-server-standalone-2.ca/" ).Net Webdriver relies on a slightly different API to navigate to //web pages because ’get’ is a keyword in .0b1. Note that you’ll need to run the remote WebDriver server to use HtmlUnit from C# using OpenQA.jar locally to run this ICapabilities desiredCapabilities = DesiredCapabilities. 6.0 5.7 Parallelizing Your Test Runs Todo 5.Selenium Documentation. Browser Startup Manipulation 65 .6 Browser Startup Manipulation Todo Topics to be included: • restoring cookies • changing firefox profile • running browsers with plugins 5. Release 1. WebDriver: Advanced Usage .Selenium Documentation. Release 1.0 66 Chapter 5. 67 .2 How Selenium RC Works First. including support for several languages (Java.2. 6. and acts as an HTTP proxy. Javascript. before the WebDriver/Selenium merge brought up Selenium 2. Perl and C#) and support for almost every browser out there.. the newest and more powerful tool. • Client libraries which provide the interface between each programming language and the Selenium RC Server..CHAPTER SIX SELENIUM 1 (SELENIUM RC) 6. HP. Selenium RC was the main Selenium project for a long time. Python. Selenium 1 is still actively supported (mostly in maintenance mode) and provides some features that may not be available in Selenium 2 for a while.1 Introduction As you can read in Brief History of The Selenium Project. PRuby. 
Here is a simplified architecture diagram:

[architecture diagram: client libraries → Selenium Server (acting as an HTTP proxy) → browser running Selenium Core]

The diagram shows the client libraries communicate with the Server, passing each Selenium command for execution. Then the Server passes the Selenium command to the browser using Selenium-Core JavaScript commands. The browser, using its JavaScript interpreter, executes the Selenium command. This runs the Selenese action or verification you specified in your test script.

6.2.2 Selenium Server

Selenium Server receives Selenium commands from your test program, interprets them, and reports back to your program the results of running those tests. The RC server bundles Selenium Core and automatically injects it into the browser. This occurs when your test program opens the browser (using a client library API function). Selenium-Core is a JavaScript program, actually a set of JavaScript functions which interprets and executes Selenese commands using the browser’s built-in JavaScript interpreter. The Server receives the Selenese commands from your test program using simple HTTP GET/POST requests. This means you can use any programming language that can send HTTP requests to automate Selenium tests on the browser.
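Because the wire format is just HTTP with query parameters, the request a client library sends for one Selenese command can be sketched in a few lines. The parameter names below (cmd, numbered arguments, sessionId) follow the RC server’s driver URL convention; treat this as an illustration of the protocol, not a complete client:

```python
from urllib.parse import urlencode

def selenese_request(server_host, server_port, command, *args, session_id=None):
    """Build the HTTP GET URL a client library would send to the
    Selenium RC server for a single Selenese command. The server
    answers with a plain-text body such as "OK" or "OK,<value>"."""
    params = [("cmd", command)]
    params += [(str(i + 1), arg) for i, arg in enumerate(args)]
    if session_id is not None:
        params.append(("sessionId", session_id))
    return "http://%s:%s/selenium-server/driver/?%s" % (
        server_host, server_port, urlencode(params))

# For example, the Selenese "open /" command for an existing session:
url = selenese_request("localhost", 4444, "open", "/", session_id="1234")
print(url)
# → http://localhost:4444/selenium-server/driver/?cmd=open&1=%2F&sessionId=1234
```

This is exactly why any language with an HTTP library can drive the RC server.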
See the Selenium-IDE chapter for specifics on exporting RC code from Selenium-IDE.3.2 Running Selenium Server Before starting any tests you must start the server.3 Client Libraries The client libraries provide the programming support that allows you to run Selenium commands from a program of your own design.Selenium Documentation. you simply need to: • Install the Selenium RC Server. The Selenium-IDE can translate (using its Export menu item) its Selenium commands into a client-driver’s API function calls. which run Selenium commands from your own program. java -jar selenium-server-standalone-<version-number>. there is a programming function that supports each Selenese command. a set of functions. Release 1. i. And. You could download them from downloads page Once you’ve chosen a language to work with.jar This can be simplified by creating a batch or shell executable file (.sh on Linux) containing the command above. Go to the directory where Selenium RC’s server is located and run the following from a command-line console.. Your program can receive the result and store it into a program variable and report it as a success or failure.1 Installing Selenium Server The Selenium RC server is simply a Java jar file (selenium-server-standalone-<version-number>.bat on Windows and . The client library also receives the result of that command and passes it back to your program. A Selenium client library provides a programming interface (API). or possibly take corrective action if it was an unexpected error. For the server to run you’ll need Java installed and the PATH environment variable correctly configured to run it from the console. There is a different client library for each supported language. if you already have a Selenese test script created in the SeleniumIDE. 6. So to create a test program. you simply write a program that runs a set of Selenium commands using a client library API.e. 
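The .bat / .sh launcher described above can be as small as one line. For example, on Linux or Mac (the jar name keeps the <version-number> placeholder from above — substitute the version you actually downloaded):

```shell
# Write a one-line launcher script and make it executable.
cat > selenium.sh <<'EOF'
#!/bin/sh
java -jar selenium-server-standalone-<version-number>.jar
EOF
chmod +x selenium.sh

# The Windows equivalent, selenium.bat, contains the same single line:
#   java -jar selenium-server-standalone-<version-number>.jar

cat selenium.sh
```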
Then make a shortcut to that executable on your desktop and simply double-click the icon to start the server. Selenium has set of libraries available in the programming language of your choice. Just downloading the zip file and extracting the server in the desired directory is sufficient. • Set up a programming project using a language specific client driver. 6.0 6.2.jar). Installation 69 . The client library takes a Selenese command and passes it to the Selenium Server for processing a specific action or test against the application under test (AUT).3. 6. You can check that you have Java correctly installed by running the following on a console: 6. you can generate the Selenium RC code.3 Installation Installation is rather a misnomer for Selenium. NET Client Driver • Download Selenium RC from the SeleniumHQ downloads page • Extract the folder • Download and install NUnit ( Note: You can use NUnit as your test engine. see the Appendix sections Configuring Selenium RC With Eclipse and Configuring Selenium RC With Intellij. • Add the selenium-java-<version-number>.4 Using the Python Client Driver • Download Selenium RC from the SeleniumHQ downloads page • Extract the file selenium.3. export a script to a Java file and include it in your Java project.py • Either write your Selenium test in Python or export a script from Selenium-IDE to a python file. or you can write your own simple main() program.3.) • Create a java project. The API is presented later in this chapter. • Run Selenium server from the console. If you’re not familiar yet with NUnit.) 70 Chapter 6. or write your Selenium test in Java using the selenium-java-client API.jar. however NUnit is very useful as a test engine.5 Using the .3 Using the Java Client Driver • Download Selenium RC java client driver from the SeleniumHQ downloads page. 6. or TestNg to run your test. • Execute your test from the Java IDE or from the command-line. you can also write a simple main() function to run your tests. 
• Add to your project classpath the file selenium-java-<version-number>. • Extract selenium-java-<version-number>. Release 1.Selenium Documentation.5 or later). For details on Java test project configuration. you’re ready to start using Selenium RC. Netweaver.jar files to your project as references.jar file • Open your desired Java IDE (Eclipse. IntelliJ.py • Run Selenium server from the console • Execute your test from a console or your Python IDE For details on Python client driver configuration. These concepts are explained later in this section. 6. You can either use JUnit. • From Selenium-IDE.3. etc. 6. NetBeans. Selenium 1 (Selenium RC) . • Add to your test’s path the file selenium. see the appendix Python Client Driver Configuration.0 java -version If you get a version number (which needs to be 1. Net language (C#.IntegrationTests. see the Selenium-Client documentation 6.Core.3. nunit. framework. install it from RubyForge • Run gem install selenium-client • At the top of your test script.Net IDE (Visual Studio.Selenium.google. 6. see the appendix . nunit.dll. we provide several different language-specific examples.Selenium Documentation.6 Using the Ruby Client Driver • If you do not already have RubyGems. • Write your own simple main() program or you can include NUnit in your project for running your test.core.dll. add require "selenium/client" • Write your test script using any Ruby test harness (eg Test::Unit. Imagine recording the following test with Seleniumopen / type q selenium rc IDE. These concepts are explained later in this chapter. clickAndWait btnG assertTextPresent Results * for selenium rc Note: This example would work with the Google search page Sample Test Script Let’s start with an example Selenese test script.dll.dll • Write your Selenium test in a .4. or export a script from Selenium-IDE to a C# file and copy this code into the class file you just created.com 6. 
• Execute your test in the same way you would run any other Ruby script.NET client driver configuration. • Run Selenium server from console • Run your test either from the IDE. SharpDevelop.0 • Open your desired . Mini::Test or RSpec). For details on Ruby client driver configuration. from the NUnit GUI or from the command line For specific details on .Selenium.Net). From Selenese to a Program 71 .dll and ThoughtWorks.Selenium. 6. ThoughtWorks. VB. MonoDevelop) • Create a class library (. Release 1. • Run Selenium RC server from the console.UnitTests.4 From Selenese to a Program The primary task for using Selenium RC is to convert your Selenese into a programming language. ThoughtWorks. In this section.dll) • Add references to the following DLLs: nmock.NET client driver configuration with Visual Studio.dll.4. verificationErrors = new StringBuilder(). [SetUp] public void SetupTest() { selenium = new DefaultSelenium( "localhost" . } } } 72 Chapter 6.Selenium Documentation. selenium. In C#: using using using using using using System. ". To see an example in a specific language. Release 1.4. System. selenium.ToString()).RegularExpressions. selenium. namespace SeleniumTests { [TestFixture] public class NewTest { private ISelenium selenium.Text. Selenium 1 (Selenium RC) . NUnit.Stop(). verificationErrors. selenium.AreEqual( "selenium rc . Selenium. 4444.Text. select one of these buttons.WaitForPageToLoad( "30000" ). System.AreEqual( "" . "*firefox" . } [Test] public void TheNewTest() { selenium. } [TearDown] public void TeardownTest() { try { selenium.Type( "q" .Framework.oriented programming language.Click( "btnG" ). "selenium rc" ). If you have at least basic knowledge of an object.Open( "/" ). System.Threading. } catch (Exception) { // Ignore errors if unable to close the browser } Assert. Assert. selenium.Google Search" .0 6.2 Selenese as Programming Code Here is the test script exported (via Selenium-IDE) to each of the supported programming languages. 
you will understand how Selenium runs Selenese commands by reading one of these examples. To see an example in a specific language, select one of these buttons.

In C#:

   using System;
   using System.Text;
   using System.Text.RegularExpressions;
   using System.Threading;
   using NUnit.Framework;
   using Selenium;

   namespace SeleniumTests
   {
       [TestFixture]
       public class NewTest
       {
           private ISelenium selenium;
           private StringBuilder verificationErrors;

           [SetUp]
           public void SetupTest()
           {
               selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                                              "http://www.google.com/");
               selenium.Start();
               verificationErrors = new StringBuilder();
           }

           [TearDown]
           public void TeardownTest()
           {
               try
               {
                   selenium.Stop();
               }
               catch (Exception)
               {
                   // Ignore errors if unable to close the browser
               }
               Assert.AreEqual("", verificationErrors.ToString());
           }

           [Test]
           public void TheNewTest()
           {
               selenium.Open("/");
               selenium.Type("q", "selenium rc");
               selenium.Click("btnG");
               selenium.WaitForPageToLoad("30000");
               Assert.AreEqual("selenium rc - Google Search", selenium.GetTitle());
           }
       }
   }

In Java:

   /** Add JUnit framework to your classpath if not already there
    * for this example to work */
   package com.example.tests;

   import com.thoughtworks.selenium.*;
   import java.util.regex.Pattern;

   public class NewTest extends SeleneseTestCase {
       public void setUp() throws Exception {
           setUp("http://www.google.com/", "*firefox");
       }

       public void testNew() throws Exception {
           selenium.open("/");
           selenium.type("q", "selenium rc");
           selenium.click("btnG");
           selenium.waitForPageToLoad("30000");
           assertTrue(selenium.isTextPresent("Results * for selenium rc"));
       }
   }

In Perl:

   use strict;
   use warnings;
   use Time::HiRes qw(sleep);
   use Test::WWW::Selenium;
   use Test::More "no_plan";
   use Test::Exception;

   my $sel = Test::WWW::Selenium->new( host => "localhost",
                                       port => 4444,
                                       browser => "*firefox",
                                       browser_url => "http://www.google.com/" );

   $sel->open_ok("/");
   $sel->type_ok("q", "selenium rc");
   $sel->click_ok("btnG");
   $sel->wait_for_page_to_load_ok("30000");
   $sel->is_text_present_ok("Results * for selenium rc");

In PHP:

   <?php
   require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

   class Example extends PHPUnit_Extensions_SeleniumTestCase
   {
       function setUp()
       {
           $this->setBrowser("*firefox");
           $this->setBrowserUrl("http://www.google.com/");
       }

       function testMyTestCase()
       {
           $this->open("/");
           $this->type("q", "selenium rc");
           $this->click("btnG");
           $this->waitForPageToLoad("30000");
           $this->assertTrue($this->isTextPresent("Results * for selenium rc"));
       }
   }
   ?>

In Python:

   from selenium import selenium
   import unittest, time, re

   class NewTest(unittest.TestCase):
       def setUp(self):
           self.verificationErrors = []
           self.selenium = selenium("localhost", 4444, "*firefox",
                                    "http://www.google.com/")
           self.selenium.start()

       def test_new(self):
           sel = self.selenium
           sel.open("/")
           sel.type("q", "selenium rc")
           sel.click("btnG")
           sel.wait_for_page_to_load("30000")
           self.failUnless(sel.is_text_present("Results * for selenium rc"))

       def tearDown(self):
           self.selenium.stop()
           self.assertEqual([], self.verificationErrors)

In Ruby:

   require "selenium"
   require "test/unit"

   class NewTest < Test::Unit::TestCase
     def setup
       @verification_errors = []
       if $selenium
         @selenium = $selenium
       else
         @selenium = Selenium::SeleniumDriver.new("localhost", 4444, "*firefox",
                                                  "http://www.google.com/", 10000)
         @selenium.start
       end
       @selenium.set_context("test_new")
     end

     def teardown
       @selenium.stop unless $selenium
       assert_equal [], @verification_errors
     end

     def test_new
       @selenium.open "/"
       @selenium.type "q", "selenium rc"
       @selenium.click "btnG"
       @selenium.wait_for_page_to_load "30000"
       assert @selenium.is_text_present("Results * for selenium rc")
     end
   end

In the next section we'll explain how to build a test program using the generated code.

6.5 Programming Your Test

Now we'll illustrate how to program your own tests using examples in each of the supported programming languages. There are essentially two tasks:

• Generate your script into a programming language from Selenium-IDE, optionally modifying the result.
• Write a very simple main program that executes the generated code.

Optionally,
you can adopt a test engine platform like JUnit or TestNG for Java, or NUnit for .NET, if you are using one of those languages.

Here, we show language-specific examples. The language-specific APIs tend to differ from one to another, so you'll find a separate explanation for each.

• Java
• C#
• Python
• Ruby
• Perl, PHP

6.5.1 Java

For Java, people use either JUnit or TestNG as the test engine. Some development environments like Eclipse have direct support for these via plug-ins, which makes it even easier. Teaching JUnit or TestNG is beyond the scope of this document; however, materials may be found online and there are publications available. If you are already a "java-shop", chances are your developers will already have some experience with one of these test frameworks.

You will probably want to rename the test class from "NewTest" to something of your own choosing. Also, you will need to change the browser-open parameters in the statement:

   selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");

The Selenium-IDE generated code will look like this. This example has comments added manually for additional clarity.

   package com.example.tests;
   // We specify the package of our tests

   import com.thoughtworks.selenium.*;
   // This is the driver's import. You'll use this for instantiating a
   // browser and making it do what you need.

   import java.util.regex.Pattern;
   // Selenium-IDE adds the Pattern module because it's sometimes used for
   // regex validations. You can remove the module if it's not used in your
   // script.

   public class NewTest extends SeleneseTestCase {
   // We create our Selenium test case

       public void setUp() throws Exception {
           setUp("http://www.google.com/", "*firefox");
           // We instantiate and start the browser
       }

       public void testNew() throws Exception {
           selenium.open("/");
           selenium.type("q", "selenium rc");
           selenium.click("btnG");
           selenium.waitForPageToLoad("30000");
           assertTrue(selenium.isTextPresent("Results * for selenium rc"));
           // These are the real test steps
       }
   }

6.5.2 C#

The .NET Client Driver works with Microsoft.NET. It can be used with any .NET testing framework like NUnit or the Visual Studio 2005 Team System. Selenium-IDE assumes you will use NUnit as your testing framework. You can see this in the generated code below: it includes the using statement for NUnit along with corresponding NUnit attributes identifying the role for each member function of the test class.

You will probably have to rename the test class from "NewTest" to something of your own choosing. Also, you will need to change the browser-open parameters in the statement:

   selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");

The generated code will look similar to this:

   using System;
   using System.Text;
   using System.Text.RegularExpressions;
   using System.Threading;
   using NUnit.Framework;
   using Selenium;

   namespace SeleniumTests
   {
       [TestFixture]
       public class NewTest
       {
           private ISelenium selenium;
           private StringBuilder verificationErrors;

           [SetUp]
           public void SetupTest()
           {
               selenium = new DefaultSelenium("localhost", 4444, "*iehta",
                                              "http://www.google.com/");
               selenium.Start();
               verificationErrors = new StringBuilder();
           }

           [TearDown]
           public void TeardownTest()
           {
               try
               {
                   selenium.Stop();
               }
               catch (Exception)
               {
                   // Ignore errors if unable to close the browser
               }
               Assert.AreEqual("", verificationErrors.ToString());
           }

           [Test]
           public void TheNewTest()
           {
               selenium.Open("/");
               selenium.Type("q", "selenium rc");
               selenium.Click("btnG");
               selenium.WaitForPageToLoad("30000");
               Assert.AreEqual("selenium rc - Google Search", selenium.GetTitle());
           }
       }
   }

You can allow NUnit to manage the execution of your tests. Or alternatively, you can write a simple main() program that instantiates the test object and runs each of the three methods, SetupTest(), TheNewTest(), and TeardownTest(), in turn.

6.5.3 Python

Pyunit is the test framework to use for Python. To learn Pyunit refer to its official documentation. The basic test structure is the same as in the Python example shown earlier.

6.5.4 Ruby

Selenium-IDE generates reasonable Ruby, but requires the old Selenium gem. This is a problem because the official Ruby driver for Selenium is the Selenium-Client gem, not the old Selenium gem. In fact, the Selenium gem is no longer even under active development. Therefore, it is advisable to update any Ruby scripts generated by the IDE as follows:

1. On line 1, change require "selenium" to require "selenium/client"
2. On line 11, change Selenium::SeleniumDriver.new to Selenium::Client::Driver.new

An updated, commented test looks like this:

   # Load the Selenium-Client gem.
   require "selenium/client"
   # Load Test::Unit, Ruby's default test framework.
   require "test/unit"

   class Untitled < Test::Unit::TestCase
     # The setup method is called before each test.
     def setup
       # This array is used to capture errors and display them at the
       # end of the test run.
       @verification_errors = []
       # Create a new instance of the Selenium-Client driver.
       @selenium = Selenium::Client::Driver.new \
         :host => "localhost",
         :port => 4444,
         :browser => "*chrome",
         :url => "http://www.google.com/",
         :timeout_in_second => 60
       # Start the browser session.
       @selenium.start
       # Print a message in the browser-side log and status bar
       # (optional).
       @selenium.set_context("test_untitled")
     end

     # The teardown method is called after each test.
     def teardown
       # Stop the browser session.
       @selenium.stop
       # Print the array of error messages, if any.
       assert_equal [], @verification_errors
     end

     # This is the main body of your test.
     def test_untitled
       # Open the root of the site we specified when we created the
       # new driver instance, above.
       @selenium.open "/"
       # Type 'selenium rc' into the field named 'q'.
       @selenium.type "q", "selenium rc"
       # Click the button named "btnG".
       @selenium.click "btnG"
       # Wait for the search results page to load.
       # Note that we don't need to set a timeout here, because that
       # was specified when we created the new driver instance, above.
       @selenium.wait_for_page_to_load

       begin
         # Test whether the search results contain the expected text.
         # Notice that the star (*) is a wildcard that matches any
         # number of characters.
         assert @selenium.is_text_present("Results * for selenium rc")
       rescue Test::Unit::AssertionFailedError
         # If the assertion fails, push it onto the array of errors.
         @verification_errors << $!
       end
     end
   end
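The setUp / test / tearDown pattern used by all of these generated tests can also be sketched as runnable Python. Since executing it for real would need a running Selenium Server and browser, this sketch substitutes a hypothetical FakeSelenium stand-in (our own illustration, not part of the Selenium API) that merely records the commands it is given:

```python
import unittest

class FakeSelenium:
    """Hypothetical stand-in for the selenium client: records commands."""
    def __init__(self, host, port, browser, url):
        self.commands = []
    def start(self):
        self.commands.append(("start",))
    def open(self, path):
        self.commands.append(("open", path))
    def type(self, locator, text):
        self.commands.append(("type", locator, text))
    def click(self, locator):
        self.commands.append(("click", locator))
    def wait_for_page_to_load(self, timeout):
        self.commands.append(("wait_for_page_to_load", timeout))
    def is_text_present(self, text):
        return True  # pretend the expected text was found
    def stop(self):
        self.commands.append(("stop",))

class NewTest(unittest.TestCase):
    def setUp(self):
        # With the real client this would be:
        # selenium("localhost", 4444, "*firefox", "http://www.google.com/")
        self.selenium = FakeSelenium("localhost", 4444, "*firefox",
                                     "http://www.google.com/")
        self.selenium.start()

    def test_new(self):
        sel = self.selenium
        sel.open("/")
        sel.type("q", "selenium rc")
        sel.click("btnG")
        sel.wait_for_page_to_load("30000")
        self.assertTrue(sel.is_text_present("Results * for selenium rc"))

    def tearDown(self):
        self.selenium.stop()

# Run the case programmatically, the same way a test engine would.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(NewTest).run(result)
print(result.testsRun, result.wasSuccessful())
```

Swapping FakeSelenium for the real selenium class (and starting the server first) turns the sketch into a live test.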
6.5.5 Perl, PHP

The members of the documentation team have not used Selenium RC with Perl or PHP. If you are using Selenium RC with either of these two languages, please contact the Documentation Team (see the chapter on contributing). We would love to include some examples from you and your experiences, to support Perl and PHP users.

6.6 Learning the API

The Selenium RC API uses naming conventions that, assuming you understand Selenese, make much of the interface self-explanatory. Here, however, we explain the most critical and possibly less obvious aspects.

6.6.1 Starting the Browser

In C#:

   selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.google.com/");
   selenium.Start();

In Java:

   setUp("http://www.google.com/", "*firefox");

In Perl:

   my $sel = Test::WWW::Selenium->new( host => "localhost",
                                       port => 4444,
                                       browser => "*firefox",
                                       browser_url => "http://www.google.com/" );

In PHP:

   $this->setBrowser("*firefox");
   $this->setBrowserUrl("http://www.google.com/");

In Python:

   self.selenium = selenium("localhost", 4444, "*firefox", "http://www.google.com/")
   self.selenium.start()

In Ruby:

   @selenium = Selenium::ClientDriver.new("localhost", 4444, "*firefox", "http://www.google.com/", 10000)
   @selenium.start

Each of these examples opens the browser and represents that browser by assigning a "browser instance" to a program variable. This program variable is then used to call methods from the browser. These methods execute the Selenium commands, i.e. like open or type or the verify commands.

The parameters required when creating the browser instance are:

host  Specifies the IP address of the computer where the server is located. Usually, this is the same machine as where the client is running, so in this case localhost is passed. In some clients this is an optional parameter.

port  Specifies the TCP/IP socket where the server is listening, waiting for the client to establish a connection. This also is optional in some client drivers.

browser  The browser in which you want to run the tests. This is a required parameter.

url  The base url of the application under test. This is required by all the client libs and is integral information for starting up the browser-proxy-AUT communication.

Note that some of the client libraries require the browser to be started explicitly by calling its start() method.

6.6.2 Running Commands

Once you have the browser initialized and assigned to a variable (generally named "selenium") you can make it run Selenese commands by calling the respective methods from the browser variable. For example, to call the type method of the selenium object:

   selenium.type("field-id", "string to type")

In the background the browser will actually perform a type operation, essentially identical to a user typing input into the browser, by using the locator and the string you specified during the method call.

6.7 Reporting Results

Selenium RC does not have its own mechanism for reporting results. Rather, it allows you to build your reporting customized to your needs using features of your chosen programming language. That's great, but what if you simply want something quick that's already done for you? Often an existing library or test framework can meet your needs faster than developing your own test reporting code.

6.7.1 Test Framework Reporting Tools

Test frameworks are available for many programming languages. These, along with their primary function of providing a flexible test engine for executing your tests, include library code for reporting results. For example, Java has two commonly used test frameworks, JUnit and TestNG. .NET also has its own, NUnit. We won't teach the frameworks themselves here; that's beyond the scope of this user guide. We will simply introduce the framework features that relate to Selenium along with some techniques you can apply. There are good books available on these test frameworks, however, along with information on the internet.
6.7.2 Test Report Libraries

Also available are third-party libraries created specifically for reporting test results in your chosen programming language. These often support a variety of formats such as HTML or PDF.

6.7.3 What's The Best Approach?

Most people new to the testing frameworks will begin with the framework's built-in reporting features. From there most will examine any available libraries, as that's less time consuming than developing your own. As you begin to use Selenium, no doubt you will start putting in your own "print statements" for reporting progress. That may gradually lead to you developing your own reporting, possibly in parallel to using a library or test framework. Regardless, after the initial, but short, learning curve you will naturally develop what works best for your own situation.

6.7.4 Test Reporting Examples

To illustrate, we'll direct you to some specific tools in some of the other languages supported by Selenium. The ones listed here are commonly used and have been used extensively (and therefore recommended) by the authors of this guide.

Test Reports in Java

• If Selenium test cases are developed using JUnit, then JUnit Report can be used to generate test reports. Refer to JUnit Report for specifics.
• If Selenium test cases are developed using TestNG, then no external task is required to generate test reports. The TestNG framework generates an HTML report which lists details of tests. See TestNG Report for more.
• ReportNG is an HTML reporting plug-in for the TestNG framework. It is intended as a replacement for the default TestNG HTML report. ReportNG provides a simple, colour-coded view of the test results. See ReportNG for more.
• Also, for a very nice summary report try using TestNG-xslt. See TestNG-xslt for more.

Logging the Selenese Commands

• Logging Selenium can be used to generate a report of all the Selenese commands in your test along with the success or failure of each. Logging Selenium extends the Java client driver to add this Selenese logging ability. Please refer to Logging Selenium.

Test Reports for Python

• When using the Python client driver, HTMLTestRunner can be used to generate a test report. See HTMLTestRunner.

Test Reports for Ruby

• If the RSpec framework is used for writing Selenium test cases in Ruby, then its HTML report can be used to generate a test report. Refer to RSpec Report for more.

Note: If you are interested in a language-independent log of what's going on, take a look at Selenium Server Logging.

6.8 Adding Some Spice to Your Tests

Now we'll get to the whole reason for using Selenium RC: adding programming logic to your tests. It's the same as for any program. Program flow is controlled using condition statements and iteration. In addition you can report progress information using I/O. In this section we'll show some examples of how programming language constructs can be combined with Selenium to solve common testing problems.

You will find, as you transition from simple tests of the existence of page elements to tests of dynamic functionality involving multiple web pages and varying data, that you will require programming logic for verifying expected results. Basically, the Selenium-IDE does not support iteration and standard condition statements. You can do some conditions by embedding JavaScript in Selenese parameters; however, iteration is impossible, and most conditions will be much easier in a programming language. In addition, you may need exception handling for error recovery. For these reasons and others, we have written this section to illustrate the use of common programming techniques to give you greater 'verification power' in your automated testing.
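Logging Selenium mentioned above is Java-only; in Python a comparable "log every Selenese command" effect can be sketched with the standard logging module and a thin wrapper that records each command before delegating. The wrapper and DummySelenium below are our own illustration, not part of the Selenium API:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("selenese")

class DummySelenium:
    """Stand-in for a real selenium client object."""
    def open(self, path): pass
    def type(self, locator, text): pass

class LoggingSelenium:
    """Wraps a selenium-like object and records every command issued."""
    def __init__(self, delegate):
        self._delegate = delegate
        self.history = []
    def __getattr__(self, name):
        # Forward unknown attributes to the wrapped client, logging the call.
        method = getattr(self._delegate, name)
        def wrapper(*args):
            log.info("%s%r", name, args)
            self.history.append((name,) + args)
            return method(*args)
        return wrapper

sel = LoggingSelenium(DummySelenium())
sel.open("/")
sel.type("q", "selenium rc")
print(sel.history)  # [('open', '/'), ('type', 'q', 'selenium rc')]
```

Wrapping the real client the same way yields a per-command log that can be turned into a pass/fail report.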
The examples in this section are written in C# and Java, although the code is simple and can be easily adapted to the other supported languages. If you have some basic knowledge of an object-oriented programming language, you shouldn't have difficulty understanding this section.

6.8.1 Iteration

Iteration is one of the most common things people need to do in their tests. For example, you may want to execute a search multiple times. Or, perhaps for verifying your test results, you need to process a "result set" returned from a database.

Using the same Google search example we used earlier, let's check the Selenium search results for several terms. Repeating the same Selenese steps for each term would work, but multiple copies of the same code is not good program practice because it's more work to maintain. By using a programming language, we can iterate over the search terms for a more flexible and maintainable solution.

In C#:

   // Collection of String values.
   String[] arr = {"ide", "rc", "grid"};

   // Execute loop for each String in array 'arr'.
   foreach (String s in arr) {
       sel.open("/");
       sel.type("q", "selenium " + s);
       sel.click("btnG");
       sel.waitForPageToLoad("30000");
       assertTrue("Expected text: " + s + " is missing on page.",
                  sel.isTextPresent("Results * for selenium " + s));
   }

6.8.2 Condition Statements

To illustrate using conditions in tests, we'll start with an example. A common problem encountered while running Selenium tests occurs when an expected element is not available on the page. For example, when running the following line:

   selenium.type("q", "selenium " + s);

if element 'q' is not on the page then an exception is thrown:

   com.thoughtworks.selenium.SeleniumException: ERROR: Element q not found

This can cause your test to abort. For some tests that's what you want. But often that is not desirable, as your test script has many other subsequent tests to perform. A better approach is to first validate whether the element is really present and then take alternatives when it is not. Let's look at this using Java:

   // If element is available on page then perform type operation.
   if (selenium.isElementPresent("q")) {
       selenium.type("q", "Selenium rc");
   } else {
       System.out.printf("Element: q is not available on page.");
   }

The advantage of this approach is to continue with test execution even if some UI elements are not available on the page.

6.8.3 Executing JavaScript from Your Test

JavaScript comes in very handy for exercising an application in ways not directly supported by Selenium. The getEval method of the Selenium API can be used to execute JavaScript from Selenium RC. Remember to use the window object in DOM expressions, as by default the Selenium window is referred to, not the test window.

Consider an application having check boxes with no static identifiers. In this case, one could evaluate JavaScript from Selenium RC to get the ids of all check boxes and then exercise them:

   public static String[] getAllCheckboxIds() {
       String script = "var inputId = new Array();";      // Create array in JavaScript.
       script += "var cnt = 0;";                          // Counter for check box ids.
       script += "var inputFields = new Array();";        // Create array in JavaScript.
       script += "inputFields = window.document.getElementsByTagName('input');"; // Collect input elements.
       script += "for(var i=0; i<inputFields.length; i++) {"; // Loop through the input fields.
       script += "if(inputFields[i].id != null "
               + "&& inputFields[i].id != 'undefined' "
               + "&& inputFields[i].getAttribute('type') == 'checkbox') {"; // If input field is a check box.
       script += "inputId[cnt] = inputFields[i].id;"      // Save check box id to the array.
               + "cnt++;";                                // Increment the counter.
       script += "}";                                     // End of if.
       script += "}";                                     // End of for.
       script += "inputId.toString();";                   // Convert the array into a string.
       String[] checkboxIds = selenium.getEval(script).split(","); // Split the string.
       return checkboxIds;
   }

To count the number of images on a page:

   selenium.getEval("window.document.images.length.toString();");
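The C# and Java fragments above adapt directly to Python. Here is a sketch that combines the iteration and the element-presence check, using a hypothetical StubSelenium stand-in (our own illustration) in place of a live selenium object so the control flow can be seen in isolation:

```python
class StubSelenium:
    """Hypothetical stand-in: pretends every results page has the text."""
    def open(self, path): pass
    def type(self, locator, text): pass
    def click(self, locator): pass
    def wait_for_page_to_load(self, timeout): pass
    def is_text_present(self, text): return True
    def is_element_present(self, locator): return locator == "q"

sel = StubSelenium()
checked = []

# Iteration: run the same search for several terms (cf. the C# foreach).
for term in ["ide", "rc", "grid"]:
    sel.open("/")
    # Condition: only type if the field is actually on the page, instead
    # of letting a SeleniumException abort the whole test run.
    if sel.is_element_present("q"):
        sel.type("q", "selenium " + term)
        sel.click("btnG")
        sel.wait_for_page_to_load("30000")
        assert sel.is_text_present("Results * for selenium " + term), \
            "Expected text for %s is missing on page." % term
        checked.append(term)
    else:
        print("Element: q is not available on page.")

print(checked)  # ['ide', 'rc', 'grid']
```

Replacing StubSelenium with a real client instance makes this a working test loop.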
6.9 Server Options

When the server is launched, command line options can be used to change the default server behaviour. Recall, the server is started by running the following:

   $ java -jar selenium-server-standalone-<version-number>.jar

To see the list of options, run the server with the -h option:

   $ java -jar selenium-server-standalone-<version-number>.jar -h

You'll see a list of all the options you can use with the server and a brief description of each. The provided descriptions will not always be enough, so we've provided explanations for some of the more important options.

6.9.1 Proxy Configuration

If your AUT is behind an HTTP proxy which requires authentication, then you should configure http.proxyHost, http.proxyPort, http.proxyUser and http.proxyPassword using the following command:

   $ java -jar selenium-server-standalone-<version-number>.jar -Dhttp.proxyHost=proxy.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=password

6.9.2 Multi-Window Mode

If you are using Selenium 1.0 you can probably skip this section, since multiwindow mode is the default behavior. However, prior to version 1.0, Selenium by default ran the application under test in a sub frame, as shown here. Some applications didn't run correctly in a sub frame, and needed to be loaded into the top frame of the window. The multi-window mode option allowed the AUT to run in a separate window rather than in the default frame, where it could then have the top frame it required.

For older versions of Selenium you must specify multiwindow mode explicitly with the following option:

   -multiwindow

As of Selenium RC 1.0, if you want to run your test within a single frame (i.e. using the standard for earlier Selenium versions) you can state this to the Selenium Server using the option:

   -singlewindow

6.9.3 Specifying the Firefox Profile

Firefox will not run two instances simultaneously unless you specify a separate profile for each instance. Selenium RC 1.0 and later runs in a separate profile automatically, so if you are using Selenium 1.0 you can probably skip this section. Otherwise, you will need to explicitly specify the profile.

First, to create a separate Firefox profile, follow this procedure. Open the Windows Start menu, select "Run", then type and enter one of the following:

   firefox.exe -profilemanager

   firefox.exe -P

Create the new profile using the dialog. Then when you run Selenium Server, tell it to use this new Firefox profile. Warning: put the new profile in its own folder — the Firefox profile manager deletes all files in a profile's folder when you delete that profile, regardless of whether they are profile files or not.

More information about Firefox profiles can be found in Mozilla's Knowledge Base.

6.9.4 Run Selenese Directly Within the Server Using -htmlSuite

You can run Selenese html files directly within the Selenium Server by passing the html file to the server's command line. For instance:

   java -jar selenium-server-standalone-<version-number>.jar -htmlSuite "*firefox" "http://www.google.com" "c:\absolute\path\to\my\HTMLSuite.html" "c:\absolute\path\to\my\results.html"

This will automatically launch your HTML suite, run all the tests and save a nice HTML report with the results. Note this requires you to pass in an HTML Selenese suite, not a single test. This command line is very long, so be careful when you type it.

Note: When using this option, the server will start the tests and wait for a specified number of seconds for the test to complete; if the test doesn't complete within that amount of time, the command will exit with a non-zero exit code and no results file will be generated.
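Because the -htmlSuite invocation takes several positional, quoted arguments, it is easy to mistype when assembled by hand. One low-tech safeguard is to build the command as an argument list in a small script; a sketch (the jar version number and the Windows paths are illustrative placeholders, following the example above):

```python
# Build the java invocation as a list, one element per argument, so a
# launcher such as subprocess.call(cmd) handles quoting of paths for us.
jar = "selenium-server-standalone-1.0.3.jar"  # illustrative file name
cmd = [
    "java", "-jar", jar,
    "-htmlSuite",
    "*firefox",                                # browser run mode
    "http://www.google.com",                   # base URL of the AUT
    r"c:\absolute\path\to\my\HTMLSuite.html",  # an HTML suite, not a single test
    r"c:\absolute\path\to\my\results.html",    # where the report is saved
]
print(" ".join(cmd))
```

In practice you would hand cmd to subprocess.call() (or your language's equivalent) and check the exit code, since the server exits non-zero when the suite does not finish in time.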
Also be aware that the -htmlSuite option is incompatible with -interactive. You cannot run both at the same time.

6.9.5 Selenium Server Logging

Server-Side Logs

When launching the Selenium Server, the -log option can be used to record valuable debugging information reported by the Selenium Server to a text file:

   java -jar selenium-server-standalone-<version-number>.jar -log selenium.log

This log file is more verbose than the standard console logs (it includes DEBUG level logging messages). The log file also includes the logger name and the ID number of the thread that logged the message. For example:

   20:44:25 DEBUG [12] org.openqa.selenium.server.SeleniumDriverResourceHandler - Browser 465828/:top frame1 posted START NEW

The message format is

   TIMESTAMP(HH:mm:ss) LEVEL [THREAD] LOGGER - MESSAGE

This message may be multiline.
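The server log format shown above is regular enough to parse mechanically — for instance, to filter a -log file by level or thread. A sketch for single-line entries (multiline messages would need extra handling):

```python
import re

# TIMESTAMP(HH:mm:ss) LEVEL [THREAD] LOGGER - MESSAGE
LOG_LINE = re.compile(
    r"^(?P<time>\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>\w+)\s+"
    r"\[(?P<thread>\d+)\]\s+"
    r"(?P<logger>\S+)\s+-\s+"
    r"(?P<message>.*)$"
)

line = ("20:44:25 DEBUG [12] "
        "org.openqa.selenium.server.SeleniumDriverResourceHandler - "
        "Browser 465828/:top frame1 posted START NEW")

m = LOG_LINE.match(line)
assert m is not None
print(m.group("level"), m.group("thread"), m.group("logger"))
# DEBUG 12 org.openqa.selenium.server.SeleniumDriverResourceHandler
```

Iterating such a parser over the whole file gives a structured view of a server run without any extra tooling.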
Selenium-Core (and its JavaScript commands that make all the magic happen) must be placed in the same origin as the Application Under Test (same URL).jar -browserSideLog -browserSideLog must be combined with the -log argument. If this were possible. Browser-Side Logs JavaScript on the browser side (Selenium Core) also logs important messages. To access browser-side logs. 6. java -jar selenium-server-standalone-<version-number>. if the browser loads JavaScript code when it loads www. This is called XSS (Cross-site Scripting).0 TIMESTAMP(HH:mm:ss) LEVEL [THREAD] LOGGER . Selenium-Core was limited by this problem since it was implemented in JavaScript. the Selenium Server acts as a client-configured 1 HTTP proxy 2 . In Proxy Injection Mode. Its use of the Selenium Server as a proxy avoids this problem. It then masks the AUT under a fictional URL (embedding Selenium-Core and the set of tests and delivering them as if they were coming from the same origin). It. Selenium RC is not.0 Historically. tells the browser that the browser is working on a single “spoofed” website that the Server provides. Being a proxy gives Selenium Server the capability of “lying” about the AUT’s real URL. however. Release 1. that sits between the browser and the Application Under Test. restricted by the Same Origin Policy. It acts as a “web server” that delivers the AUT to the browser. Here is an architectural diagram. Note: You can find additional information about this topic on Wikipedia pages about Same Origin Policy and XSS. 2 The browser is launched with a configuration profile that has set localhost:4444 as the HTTP proxy.2 Proxy Injection The first method Selenium used to avoid the The Same Origin Policy was Proxy Injection. 6.Selenium Documentation. this is why any HTTP request that the browser does will pass through Selenium server and the response will pass through it and not from the real server. 1 92 Chapter 6. Selenium 1 (Selenium RC) . 
(The proxy is a third person in the middle that passes the ball between the two parts; it acts as a "web server" that delivers the AUT to the browser.)

As a test suite starts in your favorite language, the following happens:

1. The client/driver establishes a connection with the selenium-RC server.
2. The Selenium RC server launches a browser (or reuses an old one) with a URL that injects Selenium-Core's JavaScript into the browser-loaded web page.
3. The client-driver passes a Selenese command to the server.
4. The Server interprets the command and then triggers the corresponding JavaScript execution to execute that command within the browser. Selenium-Core instructs the browser to act on that first instruction, typically opening a page of the AUT.
5. The browser receives the open request and asks for the website's content from the Selenium RC server (set as the HTTP proxy for the browser to use).
6. The Selenium RC server communicates with the Web server asking for the page, and once it receives it, it sends the page to the browser masking the origin to look like the page comes from the same server as Selenium-Core (this allows Selenium-Core to comply with the Same Origin Policy).
7. The browser receives the web page and renders it in the frame/window reserved for it.

6.11.3 Heightened Privileges Browsers

The workflow in this method is very similar to Proxy Injection, but the main difference is that the browsers are launched in a special mode called Heightened Privileges, which allows websites to do things that are not commonly permitted (such as XSS, or filling file upload inputs, and other things pretty useful for Selenium). By using these browser modes, Selenium Core is able to directly open the AUT and read/interact with its content without having to pass the whole AUT through the Selenium RC server.

Here is the architectural diagram.

As a test suite starts in your favorite language, the following happens:

1. The client/driver establishes a connection with the selenium-RC server.
2. The Selenium RC server launches a browser (or reuses an old one) with a URL that will load Selenium-Core in the web page.
3. Selenium-Core gets the first instruction from the client/driver (via another HTTP request made to the Selenium RC Server).
4. Selenium-Core acts on that first instruction, typically opening a page of the AUT.
5. The browser receives the open request and asks the Web Server for the page. Once the browser receives the web page, it renders it in the frame/window reserved for it.

6.12 Handling HTTPS and Security Popups

Many applications switch from using HTTP to HTTPS when they need to send encrypted information such as passwords or credit card information. This is common with many of today's web applications. Selenium RC supports this.

To ensure the HTTPS site is genuine, the browser will need a security certificate. Otherwise, when the browser accesses the AUT using HTTPS, it will assume that application is not 'trusted'. When this occurs the browser displays security popups, and these popups cannot be closed using Selenium RC.

When dealing with HTTPS in a Selenium RC test, you must use a run mode that supports this and handles the security certificate for you. You specify the run mode when your test program initializes Selenium.

In Selenium RC 1.0 beta 2 and later, use *firefox or *iexplore for the run mode. In earlier versions, including Selenium RC 1.0 beta 1, use *chrome or *iehta for the run mode. Using these run modes, you will not need to install any special security certificates; Selenium RC will handle it for you.

In version 1.0 the run modes *firefox or *iexplore are recommended. However, there are additional run modes of *iexploreproxy and *firefoxproxy. These are provided for backwards compatibility only, and should not be used unless required by legacy test programs. Their use will present limitations with security certificate handling and with the running of multiple windows if your application opens additional browser windows.

In earlier versions of Selenium RC, *chrome or *iehta were the run modes that supported HTTPS and the handling of security popups. These were considered 'experimental' modes although they became quite stable and many people used them. If you are using Selenium 1.0 you do not need, and should not use, these older run modes.

6.12.1 Security Certificates Explained

Normally, your browser will trust the application you are testing by installing a security certificate which you already own. When Selenium loads your browser it injects code to intercept messages between the browser and the server. The browser now thinks untrusted software is trying to look like your application. It responds by alerting you with popup messages.

To get around this, Selenium RC (again, when using a run mode that supports this) will install its own security certificate, temporarily, to your client machine in a place where the browser can access it. This tricks the browser into thinking it's accessing a site different from your AUT and effectively suppresses the popups.

Another method used with earlier versions of Selenium was to install the Cybervillains security certificate provided with your Selenium installation. Most users should no longer need to do this; however, if you are running Selenium RC in Proxy Injection mode, you may need to explicitly install this security certificate.
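The run-mode guidance above can be condensed into a tiny helper for suites that must support several Selenium RC versions. This is an illustrative sketch only; the httpsRunMode method is hypothetical and not part of any Selenium API, though the run-mode strings themselves (*firefox, *iexplore, *chrome, *iehta) are the ones described in this section:

```java
public class RunModes {
    // Picks the HTTPS-capable run mode recommended above: *firefox/*iexplore
    // for Selenium RC 1.0 beta 2 and later, and the older experimental
    // *chrome/*iehta modes for earlier versions.
    static String httpsRunMode(boolean beta2OrLater, boolean internetExplorer) {
        if (beta2OrLater) {
            return internetExplorer ? "*iexplore" : "*firefox";
        }
        return internetExplorer ? "*iehta" : "*chrome";
    }

    public static void main(String[] args) {
        System.out.println(httpsRunMode(true, false));
        System.out.println(httpsRunMode(false, true));
    }
}
```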
6.13 Supporting Additional Browsers and Browser Configurations

The Selenium API supports running against multiple browsers in addition to Internet Explorer and Mozilla Firefox. See the SeleniumHQ.org website for supported browsers. In addition, when a browser is not directly supported, you may still run your Selenium tests against a browser of your choosing by using the "*custom" run-mode (i.e. in place of *firefox or *iexplore) when your test application starts the browser. With this, you pass in the path to the browser's executable within the API call. This can also be done from the Server in interactive mode:

cmd=getNewBrowserSession&1=*custom c:\Program Files\Mozilla Firefox\MyBrowser.exe&2=http://www.google.com

6.13.1 Running Tests with Different Browser Configurations

Normally Selenium RC automatically configures the browser, but if you launch the browser using the "*custom" run mode, you can force Selenium RC to launch the browser as-is, without using an automatic configuration. For example, you can launch Firefox with a custom configuration like this:

cmd=getNewBrowserSession&1=*custom c:\Program Files\Mozilla Firefox\firefox.exe&2=http://www.google.com

Note that when launching the browser this way, you must manually configure the browser to use the Selenium Server as a proxy. Normally this just means opening your browser preferences and specifying "localhost:4444" as an HTTP proxy, but instructions for this can differ radically from browser to browser. Consult your browser's documentation for details.

Be aware that Mozilla browsers can vary in how they start and stop. One may need to set the MOZ_NO_REMOTE environment variable to make Mozilla browsers behave a little more predictably. Unix users should avoid launching the browser using a shell script; it is generally better to use the binary executable (e.g. firefox-bin) directly.

6.14 Troubleshooting Common Problems

When getting started with Selenium RC there are a few potential problems that are commonly encountered. We present them along with their solutions here.

6.14.1 Unable to Connect to Server

When your test program cannot connect to the Selenium Server, an exception will be thrown in your test program. It should display this message or a similar one:

"Unable to connect to remote server" (using .NET and XP Service Pack 2)

Inner Exception Message: No connection could be made because the target machine actively refused it.

If you see a message like this, be sure you started the Selenium Server. If so, then there is a problem with the connectivity between the Selenium Client Library and the Selenium Server.
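Before digging into Selenium configuration, it can help to confirm that anything at all is listening on the server's port. The probe below is an illustrative diagnostic written against the plain JDK socket API; it is not part of Selenium, and it assumes the default localhost:4444 address used throughout this chapter:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if a TCP connection to host:port succeeds within the timeout.
    static boolean canConnect(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            // Nothing listening, or the connection was refused or timed out.
            return false;
        }
    }

    public static void main(String[] args) {
        // The Selenium Server listens on port 4444 by default.
        System.out.println(canConnect("localhost", 4444, 2000)
                ? "Selenium Server port is reachable"
                : "Nothing is listening on localhost:4444");
    }
}
```

If the probe fails while the Selenium Server is supposedly running, the problem is networking (firewall, wrong host, wrong port) rather than your test code.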
When starting with Selenium RC, most people begin by running their test program (with a Selenium Client Library) and the Selenium Server on the same machine. To do this, use "localhost" as your connection parameter. We recommend beginning this way since it reduces the influence of potential networking problems while you're getting started. Assuming your operating system has typical networking and TCP/IP settings, you should have little difficulty. In truth, many people choose to run the tests this way.

If, however, you do want to run Selenium Server on a remote machine, the connectivity should be fine assuming you have valid TCP/IP connectivity between the two machines. If you have difficulty connecting, you can use common networking tools like ping, telnet, ifconfig(Unix)/ipconfig (Windows), etc. to ensure you have a valid network connection. If unfamiliar with these, your system administrator can assist you.

6.14.2 Unable to Load the Browser

Ok, not a friendly error message, sorry, but if the Selenium Server cannot load the browser you will likely see this error.

(500) Internal Server Error

This could be caused by:

• Firefox (prior to Selenium 1.0) cannot start because the browser is already open and you did not specify a separate profile. See the section on Firefox profiles under Server Options.
• The run mode you're using doesn't match any browser on your machine. Check the parameters you passed to Selenium when your program opens the browser.
• You specified the path to the browser explicitly (using "*custom", see above) but the path is incorrect. Check to be sure the path is correct. Also check the user group to be sure there are no known issues with your browser and the "*custom" parameters.

6.14.3 Selenium Cannot Find the AUT

If your test program starts the browser successfully, but the browser doesn't display the website you're testing, the most likely cause is that your test program is not using the correct URL. This can easily happen. When you use Selenium-IDE to export your script, it inserts a dummy URL. You must manually change the URL to the correct one for your application to be tested.

6.14.4 Firefox Refused Shutdown While Preparing a Profile

This most often occurs when you run your Selenium RC test program against Firefox, but you already have a Firefox browser session running and you didn't specify a separate profile when you started the Selenium Server. The error from the test program looks like this:

Error: java.lang.RuntimeException: Firefox refused shutdown while preparing a profile

Here's the complete error message from the server:
16:20:03.919 INFO - Preparing Firefox profile...
16:20:27.822 WARN - GET /selenium-server/driver/?cmd=getNewBrowserSession&1=*firefox&2=http%3a%2f%2fsage-webapp1.qa.idc.com HTTP/1.1
java.lang.RuntimeException: Firefox refused shutdown while preparing a profile
        at org.openqa.selenium.server.browserlaunchers.FirefoxCustomProfileLauncher.waitForFullProfileToBeCreated(FirefoxCustomProfileLauncher.java:277)
...
Caused by: org.openqa.selenium.server.browserlaunchers.FirefoxCustomProfileLauncher$FileLockRemainedException: Lock file still present! C:\DOCUME~1\jsvec\LOCALS~1\Temp\customProfileDir203138\parent.lock

To resolve this, see the section on Specifying a Separate Firefox Profile.

6.14.5 Versioning Problems

Make sure your version of Selenium supports the version of your browser. For example, Selenium RC 0.92 does not support Firefox 3. At times you may be lucky (I was). But don't forget to check which browser versions are supported by the version of Selenium you are using. When in doubt, use the latest release version of Selenium with the most widely used version of your browser.

6.14.6 Error message: "(Unsupported major.minor version 49.0)" while starting server

This error says you're not using a correct version of Java. The Selenium Server requires Java 1.5 or higher. To double-check your Java version, run this from the command line:

java -version

You should see a message showing the Java version:

java version "1.5.0_07"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_07-b03)
Java HotSpot(TM) Client VM (build 1.5.0_07-b03, mixed mode)

If you see a lower version number, you may need to update the JRE, or you may simply need to add it to your PATH environment variable.

6.14.7 404 error when running the getNewBrowserSession command

If you're getting a 404 error while attempting to open a page on "http://www.google.com/selenium-server/", then it must be because the Selenium Server was not correctly configured as a proxy. The "selenium-server" directory doesn't exist on google.com; it only appears to exist when the proxy is properly configured. Proxy configuration highly depends on how the browser is launched with *firefox, *iexplore, *opera, or *custom.

• *iexplore: If the browser is launched using *iexplore, you could be having a problem with Internet Explorer's proxy settings. The Selenium Server attempts to configure the global proxy settings in the Internet Options Control Panel. You must make sure that those are correctly configured when the Selenium Server launches the browser.
Try looking at your Internet Options control panel. Click on the "Connections" tab and click on "LAN Settings".

  – If you need to use a proxy to access the application you want to test, you'll need to start the Selenium Server with "-Dhttp.proxyHost"; see the Proxy Configuration section for more details.
  – You may also try configuring your proxy manually and then launching the browser with *custom, or with the *iehta browser launcher.

• *custom: When using *custom you must configure the proxy correctly (manually). Double-check that you've configured your proxy settings correctly. To check whether you've configured the proxy correctly, attempt to intentionally configure the browser incorrectly: try configuring the browser to use the wrong proxy server hostname, or the wrong port. If you had successfully configured the browser's proxy settings incorrectly, then the browser will be unable to connect to the Internet, which is one way to make sure that you are adjusting the relevant settings.

• For other browsers (*firefox, *opera) we automatically hard-code the proxy for you, and so there are no known issues with this functionality.

If you're encountering 404 errors and have followed this user guide carefully, post your results to the user group for some help from the user community.

6.14.8 Permission Denied Error

The most common reason for this error is that your session is attempting to violate the same-origin policy by crossing domain boundaries (e.g., accessing a page from http://domain1 and then accessing a page from http://domain2) or switching protocols (moving from http://domainX to https://domainX). This error can also occur when JavaScript attempts to find UI objects which are not yet available (before the page has completely loaded), or are no longer available (after the page has started to be unloaded). This is most typically encountered with AJAX pages which are working with sections of a page or subframes that load and/or reload independently of the larger page.

This error can be intermittent. Often it is impossible to reproduce the problem with a debugger because the trouble stems from race conditions which are not reproducible when the debugger's overhead is added to the system.

Permission issues are covered in some detail in the tutorial. Read the sections about The Same Origin Policy and Proxy Injection carefully.

6.14.9 Handling Browser Popup Windows

There are several kinds of "popups" that you can get during a Selenium test. You may not be able to close these popups by running Selenium commands if they are initiated by the browser and not your AUT. You may need to know how to manage these. Each type of popup needs to be addressed differently.

• HTTP basic authentication dialogs: These dialogs prompt for a username/password to login to the site. To login to a site that requires HTTP basic authentication, use a username and password in the URL, as described in RFC 1738, like this: open("http://myusername:myuserpassword@myexample.com/blah/blah/blah").

• SSL certificate warnings: Selenium RC automatically attempts to spoof SSL certificates when it is enabled as a proxy; see more on this in the section on HTTPS. If your browser is configured correctly, you should never see SSL certificate warnings, but you may need to configure your browser to trust our dangerous "CyberVillains" SSL certificate authority. Again, refer to the HTTPS section for how to do this.

• Modal JavaScript alert/confirmation/prompt dialogs: Selenium tries to conceal those dialogs from you (by replacing window.alert, window.confirm and window.prompt) so they won't stop the execution of your page. If you're seeing an alert pop-up, it's probably because it fired during the page load process, which is usually too early for us to protect the page. Selenese contains commands for asserting or verifying alert and confirmation popups. See the sections on these topics in Chapter 4.

6.14.10 On Linux, why isn't my Firefox browser session closing?

On Unix/Linux you must invoke "firefox-bin" directly, so make sure that executable is on the path. If executing Firefox through a shell script, when it comes time to kill the browser Selenium RC will kill the shell script, leaving the browser running. You can specify the path to firefox-bin directly, like this:

cmd=getNewBrowserSession&1=*firefox /usr/local/firefox/firefox-bin&2=http://www.google.com

6.14.11 Firefox *chrome doesn't work with custom profile

Check the Firefox profile folder -> prefs.js -> user_pref("browser.startup.page", 0); Comment this line like this: "//user_pref("browser.startup.page", 0);" and try again.

6.14.12 Is it ok to load a custom pop-up as the parent page is loading (i.e., before the parent page's javascript window.onload() function runs)?

No. Selenium relies on interceptors to determine window names as they are being loaded. These interceptors work best in catching new windows if the windows are loaded AFTER the onload() function. Selenium may not recognize windows loaded before the onload function.

6.14.13 Problems With Verify Commands

If you export your tests from Selenium-IDE, you may find yourself getting empty verify strings from your tests (depending on the programming language used).

Note: This section is not yet developed.

6.14.14 Safari and MultiWindow Mode

Note: This section is not yet developed.

6.14.15 Firefox on Linux

On Unix/Linux, versions of Selenium before 1.0 needed to invoke "firefox-bin" directly, so if you are using a previous version, make sure that the real executable is on the path. On most Linux distributions, the real firefox-bin is located on:
/usr/lib/firefox-x.x.x/

Where x.x.x is the version number you currently have. So, to add that path to the user's path, you will have to add the following to your .bashrc file:

export PATH="$PATH:/usr/lib/firefox-x.x.x/"

If necessary, you can specify the path to firefox-bin directly in your test, like this:

"*firefox /usr/lib/firefox-x.x.x/firefox-bin"

6.14.16 IE and Style Attributes

If you are running your tests on Internet Explorer, you may not be able to locate elements using their style attribute. For example:

//td[@style="background-color:yellow"]

This would work perfectly in Firefox, Opera or Safari, but not with IE. IE interprets the keys in @style as uppercase. So, even if the source code is in lowercase, you should use:

//td[@style="BACKGROUND-COLOR:yellow"]

This is a problem if your test is intended to work on multiple browsers, but you can easily code your test to detect the situation and try the alternative locator that only works in IE.

6.14.17 Where can I Ask Questions that Aren't Answered Here?

Try our user group.

CHAPTER SEVEN

TEST DESIGN CONSIDERATIONS

7.1 Introducing Test Design

We've provided in this chapter information that will be useful to both those new to test automation and the experienced QA professional. Here we describe the most common types of automated tests. We also describe 'design patterns' commonly used in test automation for improving the maintenance and extensibility of your automation suite. The more experienced reader will find these interesting if not already using these techniques.

7.2 Types of Tests

What parts of your application should you test? That depends on aspects of your project: user expectations, time allowed for the project, priorities set by the project manager, and so on. Once the project boundaries are defined, though, you, the tester, will certainly make many decisions on what to test.

We've created a few terms here for the purpose of categorizing the types of test you may perform on your web application. These terms are by no means standard, although the concepts we present here are typical for web-application testing.

7.2.1 Testing Static Content

The simplest type of test, a content test, is a simple test for the existence of a static, non-changing UI element. For example:

• Does each page of the website contain a footer area with links to the company contact page, privacy policy, and trademarks information?
• Does each page begin with heading text using the <h1> tag? And, does each page have the correct text within that header?

You may or may not need content tests. If your page content is not likely to be affected, then it may be more efficient to test page content manually. If, for example, your application involves files being moved to different locations, content tests may prove valuable.

7.2.2 Testing Links

A frequent source of errors for web sites is broken links or missing pages behind links. Testing involves clicking each link and verifying the expected page. If static links are infrequently changed then manual testing may be sufficient. However, if your web designers frequently alter links, or if files are occasionally relocated, link tests should be automated. This, though, certainly depends on the function of the web application.

7.2.3 Function Tests

These would be tests of a specific function within your application, requiring some type of user input, and returning some type of results. Often a function test will involve multiple pages with a form-based input page containing a collection of input fields, Submit and Cancel operations, and one or more response pages. User input can be via text-input fields, check boxes, drop-down lists, or any other browser-supported input.

Function tests are often the most complex tests you'll automate, but they are usually the most important. Typical tests can be for login, registration to the site, user account operations, account settings changes, complex data retrieval operations, among others. Function tests typically mirror the user-scenarios used to specify the features and design of your application.

7.2.4 Testing Dynamic Elements

Often a web page element has a unique identifier used to uniquely locate that element within the page. Usually these are implemented using the HTML tag's 'id' attribute or its 'name' attribute. These names can be a static, i.e. unchanging, string constant. They can also be dynamically generated values that vary with each instance of the page. For example, some web servers might name a displayed document 'doc3861' on one instance of a page, and 'doc6148' on a different instance of the page, depending on what 'document' the user was retrieving. This means your test script which is verifying that a document exists may not have a consistent identifier to use for locating that document. Often, dynamic elements with varying identifiers are on some type of result page based on a user action. Here's an example:

<input type="checkbox" value="true" id="addForm:_ID74:_ID75:0:_ID79:0:checkBox" />

This shows an HTML tag for a check box. Its ID (addForm:_ID74:_ID75:0:_ID79:0:checkBox) is a dynamically generated value. The next time the same page is opened it will likely be a different value.

7.2.5 Ajax Tests

Ajax is a technology which supports dynamically changing user interface elements, which can change dynamically without the browser having to reload the page, such as animation, RSS feeds, and real-time data updates, among others. The easy way to think of this is that in Ajax-driven applications, data can be retrieved from the application server and then displayed on the page without reloading the entire page. Only a portion of the page, or strictly the element itself, is reloaded. There is a countless number of ways Ajax can be used to update elements on a web page.

7.3 Validating Results

7.3.1 Assert vs. Verify

Choosing between an assert and a verify comes down to what you want to happen when a check fails: an assert aborts the test case at that point, while a verify records the failure and lets the test continue. A verify is often used for checking page details once you know the test is on the right page, for example that a specific element, say an image, is at a specific location. Getting a feel for these types of decisions will come with time and a little experience. They are easy concepts, and easy to change in your test.

7.4 Location Strategies

7.4.1 Choosing a Location Strategy

7.4.2 Locating Dynamic Elements

As described in Testing Dynamic Elements above, a web page element may have a static identifier or one that is dynamically generated. Consider first a button whose HTML defines a static Identifier, adminHomeForm.
So, for your test script to click this button you simply need to use the following selenium command. click adminHomeForm Or, in Selenium 1.0 106 Chapter 7. Test Design Considerations Selenium Documentation, Release 1.0 selenium.click( "adminHomeForm" ); Your application, however, may generate HTML dynamically where the identifier defines Identifier, this approach would not work. The next time this page is loaded the Identifier first(expectedText); } } This approach will work if there is only one check box whose ID has the text ‘expectedText’ appended to it. 7.4. Location Strategies 107 if it’s not available wait for a predefined period and then again recheck it.Selenium Documentation. Let’s consider a page which brings a link (link=ajaxLink) on click of a button on page (without refreshing the page) This could be handled by Selenium using a for loop. One way to prevent this is to wrap frequently used selenium calls with functions or class methods of your own design. Instead of duplicating this code you could write a wrapper method that performs both functions. Ajax is a common topic in the user forum and we recommend searching previous discussions to see what others have done. // Loop initialization. many tests will frequently click on a page element and wait for page to load multiple times within a test. The approach is to check for the element.4. } catch (Exception e) // Pause for 1 second. if (second >= 60) break.click(elementLocator). you will want to use utility functions to handle code that would otherwise be duplicated throughout your tests.0 (Selenium-RC) a bit more coding is involved. second++) { // If loop is reached 60 seconds then break the loop. but it isn’t difficult. 7. try { if (selenium.waitForPageToLoad(waitPeriod).0 7..0 WebDriver API. Release 1.3 Locating Ajax Elements As was presented in the Test Types subsection above. This is then executed with a loop with a predetermined time-out terminating the loop if the element isn’t found. 
// Search for element "link=ajaxLink" and if available then break loop. } This certainly isn’t the only solution. selenium. This is explained in detail in the WebDriver chapters. * * param elementLocator * param waitPeriod */ 108 Chapter 7.sleep(1000). Thread. In Selenim 2. The parameter is a By object which is how WebDriver implements locators. The best way to locate and verify an Ajax element is to use the Selenium 2.isElementPresent( "link=ajaxLink" )) break. selenium.0 you use the waitFor() method to wait for a page element to become available.5 Wrapping Selenium Calls As with any programming. /** * Clicks and Waits for page to load. for (int second = 0. Test Design Considerations . For example. To do this with Selenium 1. a page element implemented with Ajax is an element that can be dynamically refreshed without having to refresh the entire page. It was specifically designed to address testing of Ajax elements where Selenium 1 has some limitations. Clicks on element only if it is available on page. while posting a message to a log about the missing element.Clicks on element only if it is available on page. * * param elementLocator */ public void safeClick(String elementLocator) { if(selenium. then safe methods could be used.1 ‘Safe Operations’ for Element Presence Another common usage of wrapping selenium methods is to check for presence of an element on page before carrying out some operation. } } This example uses the Selenium 1 API but Selenium 2 also supports this.click(elementLocator).click(elementLocator).5. login button on home page of a portal) then this safe method technique should not be used. Using safe methods is up to the test developer’s discretion.0 public void clickAndWait(String elementLocator. } } In this second example ‘XXXX’ is simply a placeholder for one of the multiple location methods that can be called here. } else { // Using the TestNG API for logging Reporter.findElement(By. /** * Selenium-WebDriver -. 
if(webElement != null) { selenium.getUrl()). Hence. 7.XXXX(elementLocator)).click(elementLocator). This. Release 1. is not available on page +selenium. /** * Selenum-RC -. } else { // Using the TestNG API for logging Reporter.log( "Element: " +elementLocator+ ". is not available on page + getDriver().isElementPresent(elementLocator)) { selenium.e.waitForPageToLoad(waitPeriod). essentially. For instance.Selenium Documentation. } 7. String waitPeriod) { selenium. implements a ‘verify’ with a reporting mechanism as opposed to an abortive assert.log( "Element: " +elementLocator+ ". selenium. But if element must be available on page in order to be able to carry out further operations (i. if test execution is to be continued. Wrapping Selenium Calls 109 . * * param elementLocator */ public void safeClick(String elementLocator) { WebElement webElement = getDriver().getLocation()). This is sometimes called a ‘safe operation’. the following method could be used to implement a safe operation that depends on an expected element being present. even in the wake of missing elements on the page.5. username.createnewevent).events. in multiple test scripts.loginbutton).click( "adminHomeForm:_activityold" ). there is a central location for easily finding that object.6 UI Mapping A UI map is a mechanism that stores all the locators for a test suite in one place for easy modification when identifiers or paths to UI elements change in the AUT. Test Design Considerations . • Cryptic HTML Identifiers and names can be given more human-readable names improving the readability of test scripts.type( "loginForm:tbUsername" .waitForPageToLoad( "30000" ). A better script could be: public void testNew() throws Exception { selenium. selenium. Even regular users of the application might have difficulty understanding what thus script does. • Using a centralized location for UI objects instead of having them scattered throughout the script. selenium.com" ).waitForPageToLoad( "30000" ). 
selenium.test.type(admin.viewoldevents). } Now. selenium. } This script would be hard to follow for anyone not familiar with the AUT’s page source. public void testNew() throws Exception { selenium. "xxxxxxxx" ). To summarize.click( "loginForm:btnLogin" ). rather than having to search through test script code. it allows changing the Identifier in a single place. selenium. Basically. When a locator needs to be edited. selenium. a UI map is a repository of test script objects that correspond to UI elements of the application being tested. Also. selenium. selenium. Consider the following. selenium.click( "addEditEventForm:_IDcancel" ).test.Selenium Documentation. What makes a UI map helpful? Its primary purpose for making test script management much easier. selenium. "xxxxxxxx" ). or for that matter. a UI map has two significant advantages.open( ". selenium. selenium.cancel). selenium. The test script then uses the UI Map for locating the elements to be tested.click( "adminHomeForm:_activitynew" ). selenium.com" ). selenium.0 7.waitForPageToLoad( "30000" ). Release 1. using some comments and whitespace along with the UI Map identifiers makes a very readable script.click(admin. example (in java).events.open( "(admin.waitForPageToLoad( "30000" ).click(admin.events.click(admin. This makes script maintenance more efficient. rather than having to make the change in multiple places within a test script.waitForPageToLoad( "30000" ).waitForPageToLoad( "30000" ). selenium. difficult to understand. 110 Chapter 7. selenium.username. One could create a class or struct which only stores public String variables each storing a locator. Consider a property file prop.0 public void testNew() throws Exception { // Open app url. The Page Object Design Pattern provides the following advantages.viewoldevents).com" ).loginbutton).test. 7. but we have introduced a layer of abstraction between the test script and the UI elements.loginbutton = loginForm:btnLogin admin. 
Now, using some comments and whitespace along with the UI map identifiers makes a very readable script:

public void testNew() throws Exception {
    // Open app url.
    selenium.open("http://www.test.com");
    // Provide admin username.
    selenium.type(admin.username, "xxxxxxxx");
    // Click on Login button.
    selenium.click(admin.loginbutton);
    // Click on Create New Event button.
    selenium.click(admin.events.createnewevent);
    selenium.waitForPageToLoad("30000");
    // Click on Cancel button.
    selenium.click(admin.events.cancel);
    selenium.waitForPageToLoad("30000");
    // Click on View Old Events button.
    selenium.click(admin.events.viewoldevents);
    selenium.waitForPageToLoad("30000");
}

There are various ways a UI map can be implemented. One could create a class or struct which only stores public String variables, each storing a locator. Alternatively, a text file storing key/value pairs could be used. In Java, a properties file containing key/value pairs is probably the best method. Consider a properties file prop.properties which assigns as 'aliases' reader-friendly identifiers for the UI elements from the previous example:

admin.username = loginForm:tbUsername
admin.loginbutton = loginForm:btnLogin
admin.events.createnewevent = adminHomeForm:_activitynew
admin.events.cancel = addEditEventForm:_IDcancel
admin.events.viewoldevents = adminHomeForm:_activityold

The locators will still refer to HTML objects, but we have introduced a layer of abstraction between the test script and the UI elements. Values are read from the properties file and used in the test class to implement the UI map. For more on Java properties files refer to the following link.

7.7 Page Object Design Pattern

Page Object is a design pattern which has become popular in test automation for enhancing test maintenance and reducing code duplication. A page object is an object-oriented class that serves as an interface to a page of your AUT. The tests then use the methods of this page object class whenever they need to interact with that page of the UI. The benefit is that if the UI changes for the page, the tests themselves don't need to change; only the code within the page object needs to change. Subsequently all changes to support that new UI are located in one place.

The Page Object Design Pattern provides the following advantages:

1. There is clean separation between test code and page specific code such as locators (or their use if you're using a UI map) and layout.
2. There is a single repository for the services or operations offered by the page rather than having these services scattered throughout the tests.

In both cases this allows any modifications required due to UI changes to all be made in one place. Useful information on this technique can be found on numerous blogs as this 'test design pattern' is becoming widely used. Many have written on this design pattern and can provide useful tips beyond the scope of this user guide; we encourage the reader who wishes to know more to search the internet for blogs on this subject. To get you started, though, we'll illustrate page objects with a simple example.

First, consider an example, typical of test automation, that does not use a page object:

/***
 * Tests login feature
 */
public class Login {

    public void testLogin() {
        selenium.type("inputBox", "testUser");
        selenium.type("password", "my supersecret password");
        selenium.click("sign-in");
        selenium.waitForPageToLoad("PageWaitPeriod");
        Assert.assertTrue(selenium.isElementPresent("compose button"),
                "Login was unsuccessful");
    }
}

There are two problems with this approach.

1. There is no separation between the test method and the AUT's locators (IDs in this example); both are intertwined in a single method. If the AUT's UI changes its identifiers, layout, or how a login is input and processed, the test itself must change.
2. The id-locators would be spread across multiple tests, in all tests that had to use this login page.

Applying the page object techniques, this example could be rewritten like this in the following example of a page object for a Sign-in page.

/**
 * Page Object encapsulates the Sign-in page.
 */
public class SignInPage {

    private Selenium selenium;

    public SignInPage(Selenium selenium) {
        this.selenium = selenium;
        if(!selenium.getTitle().equals("Sign in page")) {
            throw new IllegalStateException("This is not sign in page, current page is: "
                    + selenium.getLocation());
        }
    }
    /**
     * Login as valid user
     *
     * @param userName
     * @param password
     * @return HomePage object
     */
    public HomePage loginValidUser(String userName, String password) {
        selenium.type("usernamefield", userName);
        selenium.type("passwordfield", password);
        selenium.click("sign-in");
        selenium.waitForPageToLoad("waitPeriod");
        return new HomePage(selenium);
    }
}

And the page object for a Home page could look like this:

/**
 * Page Object encapsulates the Home Page
 */
public class HomePage {

    private Selenium selenium;

    public HomePage(Selenium selenium) {
        this.selenium = selenium;
        if (!selenium.getTitle().equals("Home Page of logged in user")) {
            throw new IllegalStateException("This is not Home Page of logged in user, current page is: "
                    + selenium.getLocation());
        }
    }

    public HomePage manageProfile() {
        // Page encapsulation to manage profile functionality
        return new HomePage(selenium);
    }

    /* More methods offering the services represented by the Home Page of a
       logged-in user. These methods in turn might return more Page Objects;
       for example, clicking a Compose mail button could return a
       ComposeMail class object. */
}
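The skeleton of this pattern can be exercised without a browser by substituting a stub for the Selenium instance. This sketch is in Python with illustrative names throughout (nothing here is a real Selenium API); it shows the key idea that page-object methods which cause navigation return the page object for the next page.

```python
# A stub that records calls instead of driving a real browser.
class StubSelenium:
    def __init__(self):
        self.actions = []
    def type(self, locator, value):
        self.actions.append(("type", locator, value))
    def click(self, locator):
        self.actions.append(("click", locator))

class HomePage:
    def __init__(self, sel):
        self.sel = sel

class SignInPage:
    def __init__(self, sel):
        self.sel = sel
    def login_valid_user(self, username, password):
        self.sel.type("usernamefield", username)
        self.sel.type("passwordfield", password)
        self.sel.click("sign-in")
        return HomePage(self.sel)   # navigation yields the next page object

sel = StubSelenium()
home = SignInPage(sel).login_valid_user("testUser", "secret")
print(type(home).__name__)  # HomePage
```

Returning the next page object from navigation methods is what lets tests chain page interactions without ever touching a locator directly.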
Now the login test would use these two page objects as follows:

/***
 * Tests login feature
 */
public class TestLogin {

    public void testLogin() {
        SignInPage signInPage = new SignInPage(selenium);
        HomePage homePage = signInPage.loginValidUser("userName", "password");
        Assert.assertTrue(selenium.isElementPresent("compose button"),
                "Login was unsuccessful");
    }
}

There is a lot of flexibility in how the page objects may be designed, but there are a few basic rules for getting the desired maintainability of your test code. Page objects themselves should never make verifications or assertions. This is part of your test and should always be within the test's code, never in a page object. The page object will contain the representation of the page, and the services the page provides via methods, but no code related to what is being tested should be within the page object.

There is one, single, verification which can, and should, be within the page object, and that is to verify that the page, and possibly critical elements on the page, were loaded correctly. This verification should be done while instantiating the page object. In the examples above, both the SignInPage and HomePage constructors check that the expected page is available and ready for requests from the test.

A page object does not necessarily need to represent an entire page. The Page Object design pattern could be used to represent components on a page. If a page in the AUT has multiple components, it may improve maintainability to have a separate page object for each component.

There are other design patterns that also may be used in testing. Some use a Page Factory for instantiating their page objects. Discussing all of these is beyond the scope of this user guide. Here, we merely want to introduce the concepts to make the reader aware of some of the things that can be done. As was mentioned earlier, many have blogged on this topic and we encourage the reader to search for blogs on these subjects.

7.8 Data Driven Testing

Data Driven Testing refers to using the same test (or tests) multiple times with varying data. These data sets are often from external files, i.e. a spreadsheet, a .csv file, a text file, or perhaps loaded from a database. Data driven testing is a commonly used test automation technique used to validate an application against many varying inputs. When the test is designed for varying data, the input data can expand, essentially creating additional tests, without requiring changes to the test code.

In Python:

# Collection of String values
source = open("input_file.txt", "r")
values = source.readlines()
source.close()
# Execute For loop for each String in the values array
for search in values:
    sel.open("/")
    sel.type("q", search)
    sel.click("btnG")
    sel.waitForPageToLoad("30000")
    self.failUnless(sel.is_text_present("Results * for " + search))

The Python script above opens a text file. This file contains a different search string on each line. The code then saves this in an array of strings, and iterates over the array doing a search and assert on each string.

This is a very basic example, but the idea is to show that running a test with varying data can be done easily with a programming or scripting language. For more examples, refer to the Selenium RC wiki for examples of reading data from a spreadsheet or for using the data provider capabilities of TestNG. Additionally, this is a well-known topic among test automation professionals, including those who don't use Selenium, so searching the internet on "data-driven testing" should reveal many blogs on this topic.
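The same pattern is easier to maintain when the per-row test logic is factored out of the loop. The sketch below is runnable without a browser: `run_one` would normally drive Selenium, but here a stub simply records each data row. The CSV data and the callback are hypothetical, not part of any Selenium API.

```python
import csv
import io

# Hypothetical data source: one search term and expected-hit count per row.
DATA = "selenium,10\nwebdriver,5\n"

def run_data_driven(rows, run_one):
    """Invoke run_one once per data row, collecting its results."""
    results = []
    for term, expected in rows:
        results.append(run_one(term, int(expected)))
    return results

rows = csv.reader(io.StringIO(DATA))
# The stub callback just echoes its arguments; a real one would open the
# page, type the term, and assert on the result count.
calls = run_data_driven(rows, lambda term, n: (term, n))
print(calls)  # [('selenium', 10), ('webdriver', 5)]
```

Adding a new test case is then a matter of appending a row of data, which is the whole point of the technique.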
7.9 Database Validation

Another common type of testing is to compare data in the UI against the data actually stored in the AUT's database. Since you can also do database queries from a programming language, assuming you have database support functions, you can use them to retrieve data and then use the data to verify what's displayed by the AUT is correct.

Consider the example of a registered email address to be retrieved from a database and then later compared against the UI. An example of establishing a DB connection and retrieving data from the DB could look like this.

In Java:

// Load Microsoft SQL Server JDBC driver.
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");

// Prepare connection url.
String url = "jdbc:sqlserver://192.168.1.180:1433;DatabaseName=TEST_DB";

// Get connection to DB.
public static Connection con =
        DriverManager.getConnection(url, "username", "password");

// Create statement object which would be used in writing DDL and DML
// SQL statement.
public static Statement stmt = con.createStatement();

// Send SQL SELECT statements to the database via the Statement.executeQuery
// method which returns the requested information as rows of data in a
// ResultSet object.
ResultSet result = stmt.executeQuery
        ("select top 1 email_address from user_register_table");

// Fetch value of "email_address" from "result" object.
String emailaddress = result.getString("email_address");

// Use the emailAddress value to login to application.
selenium.type("userID", emailaddress);
selenium.type("password", secretPassword);
selenium.click("loginButton");
selenium.waitForPageToLoad(timeOut);
Assert.assertTrue(selenium.isTextPresent("Welcome back " + emailaddress),
        "Unable to login");

This is a simple Java example of data retrieval from a database.

CHAPTER EIGHT

SELENIUM-GRID

Please refer to the Selenium Grid website: http://selenium-grid.seleniumhq.org/how_it_works.html

This section is not yet developed. If there is a member of the community who is experienced in Selenium-Grid, and would like to contribute, please contact the Documentation Team. We would love to have you contribute.
CHAPTER NINE

USER-EXTENSIONS

NOTE: This section is close to completion, but it has not been reviewed and edited.

9.1 Introduction

It can be quite simple to extend Selenium, adding your own actions, assertions and locator-strategies. This is done with JavaScript by adding methods to the Selenium object prototype, and the PageBot object prototype. On startup, Selenium will automatically look through methods on these prototypes, using name patterns to recognize which ones are actions, assertions and locators. The following examples give an indication of how Selenium can be extended with JavaScript.

9.2 Actions

All methods on the Selenium prototype beginning with "do" are added as actions. For each action foo there is also an action fooAndWait registered. An action method can take up to two parameters, which will be passed the second and third column values in the test.

Example: Add a "typeRepeated" action to Selenium, which types the text twice into a text box.

Selenium.prototype.doTypeRepeated = function(locator, text) {
    // All locator-strategies are automatically handled by "findElement"
    var element = this.page().findElement(locator);
    // Create the text to type
    var valueToType = text + text;
    // Replace the element text with the new text
    this.page().replaceText(element, valueToType);
};

9.3 Accessors/Assertions

All getFoo and isFoo methods on the Selenium prototype are added as accessors (storeFoo). For each accessor there is an assertFoo, verifyFoo and waitForFoo registered. You can also define your own assertions literally as simple "assert" methods, which will also auto-generate "verify" and "waitFor" commands. An assert method can take up to 2 parameters, which will be passed the second and third column values in the test.

Example: Add a valueRepeated assertion, that makes sure that the element value consists of the supplied text repeated.
Selenium.prototype.assertValueRepeated = function(locator, text) {
    // All locator-strategies are automatically handled by "findElement"
    var element = this.page().findElement(locator);
    // Create the text to verify
    var expectedValue = text + text;
    // Get the actual element value
    var actualValue = element.value;
    // Make sure the actual value matches the expected
    Assert.matches(expectedValue, actualValue);
};

9.3.1 Automatic availability of storeFoo, assertFoo, assertNotFoo, verifyFoo, verifyNotFoo, waitForFoo and waitForNotFoo for every getFoo

All getFoo and isFoo methods on the Selenium prototype automatically result in the availability of storeFoo, assertFoo, assertNotFoo, verifyFoo, verifyNotFoo, waitForFoo, and waitForNotFoo commands.

Example: if you add a getTextLength() method, the following commands will automatically be available: storeTextLength, assertTextLength, assertNotTextLength, verifyTextLength, verifyNotTextLength, waitForTextLength, and waitForNotTextLength.

Selenium.prototype.getTextLength = function(locator, text) {
    return this.getText(locator).length;
};

Also note that the assertValueRepeated method described above could have been implemented using isValueRepeated, with the added benefit of also automatically getting assertNotValueRepeated, storeValueRepeated, waitForValueRepeated and waitForNotValueRepeated.

9.4 Locator Strategies

All locateElementByFoo methods on the PageBot prototype are added as locator-strategies. A locator strategy takes 2 parameters, the first being the locator string (minus the prefix), and the second being the document in which to search.

Example: Add a "valuerepeated=" locator, that finds the first element with a value attribute equal to the supplied value repeated.

PageBot.prototype.locateElementByValueRepeated = function(text, inDocument) {
    // Create the text to search for
    var expectedValue = text + text;
    // Loop through all elements, looking for ones that have
    // a value === our expected value
    var allElements = inDocument.getElementsByTagName("*");
    for (var i = 0; i < allElements.length; i++) {
        var testElement = allElements[i];
your user-extension should now be an options in the Commands dropdown.5 Using User-Extensions With Selenium-IDE User-extensions are very easy to use with the selenium IDE. While this name isn’t technically necessary. If you are using client code generated by the Selenium-IDE you will need to make a couple small edits. 6. In your empty test. you will need to create an HttpCommandProcessor object with class scope (outside the SetupTest method. In Selenium Core Extensions click on Browse and find the user-extensions.value === expectedValue) { return testElement. Your user-extension will not yet be loaded.0 if (testElement. This can be done in the test setup. Using User-Extensions With Selenium-IDE 121 . 9.ca/" ). "*iexplore" . 3. 1. instantiate that HttpCommandProcessor object DefaultSelenium object. Below.6 Using User-Extensions With Selenium RC If you Google “Selenium RC user-extension” ten times you will find ten different approaches to using this feature. Next. Release 1. 4444. 9. } } return null. 9. }. as you would the proc = new HttpCommandProcessor( "localhost" .value && testElement.6. Place your user extension in the same directory as your Selenium Server. you must close and restart Selenium-IDE.1 Example C# 1. 2. it’s good practice to keep things consistent. Click on OK. Click on Tools. just below private StringBuilder verificationErrors. 5. create a new command. js file. Remember that user extensions designed for Selenium-IDE will only take two arguments.DoCommand( "alertWrapper" . private HttpCommandProcessor proc. regardless of the capitalization in your user-extension. System. 4444. System.Framework. NUnit. but a longer array will map each index to the corresponding user-extension parameter. selenium = new DefaultSelenium(proc). 1. inputParams is the array of arguments you want to pass to the JavaScript user-extension. your test will fail if you begin this command with a capital. "*iexplore" selenium = new DefaultSelenium(proc). 4444. 
verificationErrors = new StringBuilder().0 1.Start(). string[] inputParams = { "Hello World" }. Selenium. Start the test server using the -userExtensions argument and pass in your user-extensinos. private StringBuilder verificationErrors. Because JavaScript is case sensitive.Text. In this case there is only one string in the array because there is only one parameter for our user extension. } 122 Chapter 9. //selenium = new DefaultSelenium("localhost". 1. namespace SeleniumTests { [TestFixture] public class NewTest { private ISelenium selenium. User-Extensions . Notice that the first letter of your function is lower case. Within your test code. proc. execute your user-extension by calling it with the DoCommand() method of HttpCommandProcessor.jar -userExtensions user-extensions.RegularExpressions. System. Instantiate the DefaultSelenium object using the HttpCommandProcessor object you created. Selenium automatically does this to keep common JavaScript naming conventions.Selenium Documentation.Threading. Release 1. [SetUp] public void SetupTest() { proc = new HttpCommandProcessor( "localhost" . inputParams). java -jar selenium-server. This method takes two arguments: a string to identify the userextension method you want to use and string array to pass arguments.js file.Text. "*iexplore".js using using using using using using System. selenium. AreEqual( "" . verificationErrors. } [Test] public void TheNewTest() { selenium. inputParams).Open( "/" ). Using User-Extensions With Selenium RC 123 .Stop(). } } } Appendixes: 9. Release 1.6. proc.0 [TearDown] public void TeardownTest() { try { selenium.ToString()).DoCommand( "alertWrapper" .}.Selenium Documentation. } catch (Exception) { // Ignore errors if unable to close the browser } Assert. string[] inputParams = { "Hello World" . User-Extensions .0 124 Chapter 9.Selenium Documentation. Release 1. NET client Driver can be used with Microsoft Visual Studio. • Launch Visual Studio and navigate to File > New > Project. 125 . 
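The naming convention this chapter relies on — every do* method becomes an action, plus an auto-generated *AndWait variant — can be sketched outside the browser. The following Python analogue is hypothetical (it is not part of Selenium); it only mirrors the prototype scan that Selenium performs at startup.

```python
# Sketch of name-pattern registration: each do<Name> method on a class
# is exposed as an action <name> plus an auto-generated <name>AndWait.
class Extensions:
    def doTypeRepeated(self, locator, text):
        # Mirrors the JavaScript typeRepeated example: type the text twice.
        return (locator, text + text)

def register_actions(obj):
    actions = {}
    for name in dir(obj):
        if name.startswith("do"):
            base = name[2:]
            cmd = base[0].lower() + base[1:]   # doTypeRepeated -> typeRepeated
            method = getattr(obj, name)
            actions[cmd] = method
            actions[cmd + "AndWait"] = method  # auto-generated variant
    return actions

actions = register_actions(Extensions())
print(sorted(actions))  # ['typeRepeated', 'typeRepeatedAndWait']
```

The same scan-and-rename idea is what lets a single user-extension function surface as several related commands in the IDE.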
• Select Visual C# > Class Library > Name your project > Click on OK button.CHAPTER TEN .NET CLIENT DRIVER CONFIGURATION . To Configure it with Visual Studio do as Following. Rename it as appropriate. . 126 Chapter 10. • Under right hand pane of Solution Explorer right click on References > Add References.cs) is created. Release 1.0 • A Class (.NET client driver configuration .Selenium Documentation. nunit. ThoughtWorks.dll and click on Ok button 127 .framework.nmock.Selenium. Release 1.dll. nunit.ThoughtWorks.dll.UnitTests. ThoughtWorks.dll.Selenium.core.IntegrationTests.Core. Selenium.dll.dll.0 • Select following dll files .Selenium Documentation. Release 1.0 With This Visual Studio is ready for Selenium Test Cases. .NET client driver configuration . 128 Chapter 10.Selenium Documentation. 0 chapter. 129 . showing how to create a Selenium 2.0 java project into IntelliJ. open IntelliJ and from the entry page. From the New Project dialog select Import Project from External Model. First. In this appendix we provide the steps. click Create New Project.0 PROJECT INTO INTELLIJ USING MAVEN We are currently working on this appendix.CHAPTER ELEVEN IMPORTING SEL2. You must have followed that process before you can perform these steps. The information provided here is accurate. including screen captures. This appendix then shows you how to import the maven-created Selenium 2. although it may not be finished. This process is described in the Selenium 2.0 java client-driver project in IntelliJ IDEA. These steps assume you have already used maven with a pom.xml file to set up the project. Now you will see a dialog allowing you to set project options including the project’s root directory. 130 Chapter 11. Importing Sel2.0 Project into IntelliJ Using Maven .0 From the list of project types. select maven.Selenium Documentation. Release 1. 0 Click the ‘.Selenium Documentation. Now the settings dialog will show the directory you just selected. 131 .’ button to set the root folder... 
Release 1. Select your maven project and continue.Selenium Documentation. Importing Sel2.0 This next dialog shows the name of your maven project as specified in the pom. Release 1. 132 Chapter 11.0 Project into IntelliJ Using Maven . Enter a name for your project.xml file. Now in IntelliJ you can see all these libraries. 133 . These next two screen captures shows the libraries you should now have in your project.Selenium Documentation. Release 1.0 Once your project has been imported it should look like this in IntelliJ. The maven project download many dependencies (libraries) when you originally ran ‘mvn install’. java file). you still need to create a module and at least one Java class (a .0 Before you can start writing Selenium code. First select the Project’s root in IntelliJ and right click.Selenium Documentation.0 Project into IntelliJ Using Maven . 134 Chapter 11. Importing Sel2. Release 1. Selenium Documentation.0 And select Create Module. Release 1. 135 . In the dialog select the radio button Create Module From Scratch. Selenium Documentation, Release 1.0 Select Java Module and enter a name for the new module. 136 Chapter 11. Importing Sel2.0 Project into IntelliJ Using Maven Selenium Documentation, Release 1.0 And next, you must create a folder for the source code. By convention this is almost always named ‘src’. 137 Selenium Documentation, Release 1.0 Now we’re on the last dialog. Typically you don’t need to select any ‘technollogies’ here. Unless you know for a fact you will be using Groovy or some other technology. 138 Chapter 11. Importing Sel2.0 Project into IntelliJ Using Maven 0 Now that the module is created. your project should show the following structure. Release 1. 139 .Selenium Documentation. you need to create a .java file with a corresponding java class.0 Project into IntelliJ Using Maven . Enter the class name. Release 1.java file should now be created. The . Importing Sel2. It should look like this in your project.0 Finally. 
140 Chapter 11.Selenium Documentation. congrats! And hope you enjoy coding your first Selenium automation! 141 .0 If your project now looks like the one displayed above. Release 1. you’re done.Selenium Documentation. Selenium Documentation. Release 1.0 Project into IntelliJ Using Maven . Importing Sel2.0 142 Chapter 11. • Select File > New > Other. (Europa Release). Python. Following lines describes configuration of Selenium-RC with Eclipse . Cobol. PHP and more. 143 . Perl.0 “selenium-java-<version-number>. in other languages as well as C/C++. It is written primarily in Java and is used to develop applications in this language and.1 Configuring Selenium-RC With Eclipse Eclipse is a multi-language software development platform comprising an IDE and a plug-in system to extend it.Version: 3.jar” to your project classpath.0. by means of the various plug-ins.3.CHAPTER TWELVE SELENIUM 1. It should not be too different for higher versions of Eclipse • Launch Eclipse. 0 • Java > Java Project > Next 144 Chapter 12. Selenium 1. Release 1.0 Java Client Driver Configuration .Selenium Documentation. Release 1.0 • Provide Name to your project.5 selected in this example) > click Next 12. Configuring Selenium-RC With Eclipse 145 .1.Selenium Documentation. Select JDK in ‘Use a project Specific JRE’ option (JDK 1. (This described in detail in later part of document. Selenium 1.Selenium Documentation.) 146 Chapter 12. Project specific libraries can be added here.0 • Keep ‘JAVA Settings’ intact in next window. Release 1.0 Java Client Driver Configuration . 0 • Click Finish > Click on Yes in Open Associated Perspective pop up window.Selenium Documentation. Configuring Selenium-RC With Eclipse 147 . Release 1. 12.1. Release 1.0 Java Client Driver Configuration . 148 Chapter 12.Selenium Documentation.0 This would create Project Google in Package Explorer/Navigator pane. Selenium 1. Selenium Documentation.0 • Right click on src folder and click on New > Folder 12. 
Configuring Selenium-RC With Eclipse 149 . Release 1.1. • This should get com package insider src folder. Selenium 1. 150 Chapter 12.Selenium Documentation.0 Java Client Driver Configuration . Release 1.0 Name this folder as com and click on Finish button. 0 • Following the same steps create core folder inside com 12. Configuring Selenium-RC With Eclipse 151 .1. Release 1.Selenium Documentation. Release 1.0 Java Client Driver Configuration . This is a place holder for test scripts. Selenium 1. 152 Chapter 12.0 SelTestCase class can be kept inside core package. Test scripts package can further be segregated depending upon the project requirements. Create one more package inside src folder named testscripts.Selenium Documentation. Please notice this is about the organization of project and it entirely depends on individual’s choice / organization’s standards. Release 1.1.e. This is a place holder for jar files to project (i. selenium server etc) 12.0 • Create a folder called lib inside project Google. Selenium client driver. Right click on Project name > New > Folder. Configuring Selenium-RC With Eclipse 153 .Selenium Documentation. Release 1.0 This would create lib folder in Project directory.Selenium Documentation. 154 Chapter 12. Selenium 1.0 Java Client Driver Configuration . Release 1.Selenium Documentation. Configuring Selenium-RC With Eclipse 155 .0 • Right click on lib folder > Build Path > Configure build Path 12.1. Select the jar files which are to be added and click on Open button. Release 1. 156 Chapter 12.Selenium Documentation. Selenium 1.0 • Under Library tab click on Add External Jars to navigate to directory where jar files are saved.0 Java Client Driver Configuration . 0 After having added jar files click on OK button.Selenium Documentation. Release 1. Configuring Selenium-RC With Eclipse 157 .1. 12. 0 Added libraries would appear in Package Explorer as following: 158 Chapter 12. Selenium 1.0 Java Client Driver Configuration .Selenium Documentation. 
Release 1. 0 12. Intellij provides a set of integrated refactoring tools that allow programmers to quickly redesign their code. • Open a New Project in IntelliJ IDEA. IntelliJ IDEA provides close integration with popular open source development tools such as CVS.2. 12. Release 1. Apache Ant and JUnit.0 It should not be very different for higher version of intelliJ.Selenium Documentation. Configuring Selenium-RC With Intellij 159 .2 Configuring Selenium-RC With Intellij IntelliJ IDEA is a commercial Java IDE by the company JetBrains. Subversion. Following lines describes configuration of Selenium-RC with IntelliJ 6. • Click Next and provide compiler output path. Release 1.Selenium Documentation. 160 Chapter 12.0 • Provide name and location to Project. Selenium 1.0 Java Client Driver Configuration . 0 • Click Next and select the JDK to be used.2.Selenium Documentation. Configuring Selenium-RC With Intellij 161 . 12. Release 1. • Click Next and select Single Module Project. 0 • Click Next and select Java module. 162 Chapter 12. • Click Next and select Source directory.Selenium Documentation. Selenium 1. • Click Next and provide Module name and Module content root. Release 1.0 Java Client Driver Configuration . 2. Release 1.0 • At last click Finish. 12. • Click on Project Structure in Settings pan. Configuring Selenium-RC With Intellij 163 . Adding Libraries to Project: • Click on Settings button in the Project Tool bar. This will launch the Project Pan.Selenium Documentation. Selenium Documentation. Release 1.0 Java Client Driver Configuration .0 • Select Module in Project Structure and browse to Dependencies tab. Selenium 1. 164 Chapter 12. 2. 12. (Multiple Jars can be selected b holding down the control key.jar.0 • Click on Add button followed by click on Module Library.).Selenium Documentation. • Browse to the Selenium directory and select selenium-java-client-driver.jar and seleniumserver. Release 1. Configuring Selenium-RC With Intellij 165 . 
Selenium 1.0 • Select both jar files in project pan and click on Apply button. 166 Chapter 12. Release 1.0 Java Client Driver Configuration .Selenium Documentation. 2.0 • Now click ok on Project Structure followed by click on Close on Project Settings pan. Configuring Selenium-RC With Intellij 167 . Release 1. 12. Added jars would appear in project Library as following.Selenium Documentation. 168 Chapter 12.0 Java Client Driver Configuration . Selenium 1.Selenium Documentation. Release 1.0 • Create the directory structure in src folder as following. • Herein core contains the SelTestCase class which is used to create Selenium object and fire up the browser. Configuring Selenium-RC With Intellij 169 .Selenium Documentation. Release 1. Hence extended structure would look as following.2. 12. testscripts package contains the test classes which extend the SelTestCase class.0 Note: This is not hard and fast convention and might very from project to project. Selenium 1.0 170 Chapter 12. Release 1.0 Java Client Driver Configuration .Selenium Documentation. • Installing Python Note: This will cover python installation on Windows and Mac only.CHAPTER THIRTEEN PYTHON CLIENT DRIVER CONFIGURATION • Download Selenium-RC from the SeleniumHQ downloads page • Extract the file selenium. (even write tests in a text processor and run them from command line!) without any extra work (at least on the Selenium side). After following this.py • Either write your Selenium test in Python or export a script from Selenium-IDE to a python file.x-win32-x86. Run the installer downloaded (ActivePython-x. Download Active python’s installer from ActiveState’s official site: • Run Selenium server from the console • Execute your test from a console or your Python IDE The following steps describe the basic installation procedure. as in most linux distributions python is already pre-installed by default. – Windows 1.com/Products/activepython/index. • Add to your test’s path the file selenium.x. 
the user can start using the desired IDE.mhtml 2.msi) 171 .x. pythonmac. Python Client Driver Configuration .0 • Mac The latest Mac OS X version (Leopard at this time) comes with Python pre-installed. Release 1.org/ (packages for Python 2.x).5. To install an extra Python.Selenium Documentation. get a universal binary at. 172 Chapter 13. Download the last version of Selenium Remote Control from the downloads page 2. you’re done! Now any python script that you create can import selenium and start interacting with the browsers. You will find the module in the extracted folder.Selenium Documentation.0 You will get a . • Installing the Selenium driver client for python 1. Extract the content of the downloaded zip file 3. Copy the module with the Selenium’s driver for Python (selenium.py) in the folder C:/Python25/Lib (this will allow you to import it directly in any script you write). Congratulations. Release 1. It contains a . it’s located inside seleniumpython-driver-client.dmg file that you can mount. 173 .pkg file that you can launch. Release 1.0 174 Chapter 13. Python Client Driver Configuration .Selenium Documentation. Incidentally. the element <span class="top heading bold"> can be located based on the ‘heading’ class without having to couple it with the ‘top’ and ‘bold’ classes using the following XPath: //span[contains(@class.1 text Not yet written .2.2 starts-with Many sites use dynamic values for element’s id attributes. however with CSS locators this is much simpler (and faster).CHAPTER FOURTEEN LOCATING TECHNIQUES 14. One simple solution is to use XPath functions and base the location on what you do know about the element. the contains function can be used.1.1 Useful XPATH patterns 14. • XPath: //div[contains(@class.1. 14.1. Useful for forms and tables.2 Starting to use CSS instead of XPATH 14. 
CHAPTER FOURTEEN: LOCATING TECHNIQUES

14.1 Useful XPATH patterns

14.1.1 text
Not yet written - locate elements based on the text content of the node.

14.1.2 starts-with
Many sites use dynamic values for element's id attributes, which can make them difficult to locate. One simple solution is to use XPath functions and base the location on what you do know about the element. For example, if your dynamic ids have the format <input id="text-12345" /> where 12345 is a dynamic number, you could use the following XPath: //input[starts-with(@id, 'text-')]

14.1.3 contains
If an element can be located by a value that could be surrounded by other text, the contains function can be used. To demonstrate, the element <span class="top heading bold"> can be located based on the 'heading' class without having to couple it with the 'top' and 'bold' classes, using the following XPath: //span[contains(@class, 'heading')]. Incidentally, this would be much neater (and probably faster) using the CSS locator strategy: css=span.heading

14.1.4 siblings
Not yet written - locate elements based on their siblings. Useful for forms and tables.

14.2 Starting to use CSS instead of XPATH

14.2.1 Locating elements based on class
In order to locate an element based on an associated class in XPath, you must consider that the element could have multiple classes, defined in any order. However, with CSS locators this is much simpler (and faster):

• XPath: //div[contains(@class, 'article-heading')]
• CSS: css=div.article-heading
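Why the naive XPath @class='heading' fails for multi-class elements can be shown with a few lines of plain Python that mimic the three matching strategies (an illustration of the matching logic only, not Selenium code; the helper names are invented):

```python
def exact_match(class_attr, name):
    # What @class='heading' does: the whole attribute must match.
    return class_attr == name

def substring_match(class_attr, name):
    # What contains(@class, 'heading') does: match anywhere in the attribute.
    return name in class_attr

def css_class_match(class_attr, name):
    # What css=span.heading does: match a whole whitespace-separated token.
    return name in class_attr.split()

attr = "top heading bold"
print(exact_match(attr, "heading"))      # -> False, why @class= fails here
print(substring_match(attr, "heading"))  # -> True
print(css_class_match(attr, "heading"))  # -> True
```

Note that substring matching is also why contains(@class, 'head') would over-match 'heading'; the CSS token strategy does not have that problem.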
CHAPTER FIFTEEN: MIGRATING FROM SELENIUM RC TO SELENIUM WEBDRIVER

15.1 How to Migrate to Selenium WebDriver
A common question when adopting Selenium 2 is what's the correct thing to do when adding new tests to an existing set of tests? Users who are new to the framework can begin by using the new WebDriver APIs for writing their tests, allowing all new tests to be written using the new features offered by WebDriver. But what of users who already have suites of existing tests? This guide is designed to demonstrate how to migrate your existing tests to the new APIs. The method presented here describes a piecemeal migration to the WebDriver APIs without needing to rework everything in one massive push. This means that you can allow more time for migrating your existing tests, which may make it easier for you to decide where to spend your effort.

This guide is written using Java, because this has the best support for making the migration. As we provide better tools for other languages, this guide shall be expanded to include those languages.

15.2 Why Migrate to WebDriver
Moving a suite of tests from one API to another API requires an enormous amount of effort. Why would you and your team consider making this move? Here are some reasons why you should consider migrating your Selenium tests to use WebDriver.

• Smaller, compact API. WebDriver's API is more Object Oriented than the original Selenium RC API. This can make it easier to work with.
• Better emulation of user interactions. Where possible, WebDriver makes use of native events in order to interact with a web page. This more closely mimics the way that your users work with your site and apps. In addition, WebDriver offers the advanced user interactions APIs, which allow you to model complex interactions with your site.
• Support by browser vendors. Opera, Mozilla and Google are all active participants in WebDriver's development, and each have engineers working to improve the framework. Often, this means that support for WebDriver is baked into the browser itself: your tests run as fast and as stably as possible.
15.3 Before Starting
In order to make the process of migrating as painless as possible, make sure that all your tests run properly with the latest Selenium release. This may sound obvious, but it's best to have it said!

15.4 Getting Started
The first step when starting the migration is to change how you obtain your instance of Selenium. When using Selenium RC, this is done like so:

Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.yoursite.com");
selenium.start();

This should be replaced like so:

WebDriver driver = new FirefoxDriver();
Selenium selenium = new WebDriverBackedSelenium(driver, "http://www.yoursite.com");

Once you've done this, run your existing tests. This will give you a fair idea of how much work needs to be done.

15.5 Next Steps
Once your tests execute without errors, the next stage is to migrate the actual test code to use the WebDriver APIs. Depending on how well abstracted your code is, this might be a short process or a long one. In either case, the approach is the same and can be summed up simply: modify code to use the new API when you come to edit it.

At some point, your codebase will mostly be using the newer APIs. When that happens, you can flip the relationship, using WebDriver throughout and instantiating a Selenium instance on demand:

Selenium selenium = new WebDriverBackedSelenium(driver, baseUrl);

This allows you to continue passing the Selenium instance around as normal, but to unwrap the WebDriver instance as required. If you need to extract the underlying WebDriver implementation from the Selenium instance, you can simply cast it to WrapsDriver:

WebDriver driver = ((WrapsDriver) selenium).getWrappedDriver();

15.6 Common Problems
Fortunately, you're not the first person to go through this migration, so here are some common problems that others have seen, and how to solve them.
15.6.1 Clicking and Typing is More Complete
A common pattern in a Selenium RC test is to see something like:

selenium.type("name", "exciting tex");
selenium.keyDown("name", "t");
selenium.keyPress("name", "t");
selenium.keyUp("name", "t");

This relies on the fact that "type" simply replaces the content of the identified element without also firing all the events that would normally be fired if a user interacts with the page. The final direct invocations of "key*" cause the JS handlers to fire as expected. When using the WebDriverBackedSelenium, the result of filling in the form field would be "exciting texttt": not what you'd expect! The reason for this is that WebDriver more accurately emulates user behavior, and so will have been firing events all along.

This same fact may sometimes cause a page load to fire earlier than it would do in a Selenium 1 test. You can tell that this has happened if a "StaleElementException" is thrown by WebDriver.

15.6.2 WaitForPageToLoad Returns Too Soon
Discovering when a page load is complete is a tricky business. Do we mean "when the load event fires", "when all AJAX requests are complete", "when there's no network traffic", "when document.readyState has changed", or something else entirely? WebDriver attempts to simulate the original Selenium behavior, but this doesn't always work perfectly for various reasons. The most common reason is that it's hard to tell the difference between a page load not having started yet, and a page load having completed between method calls. This sometimes means that control is returned to your test before the page has finished (or even started!) loading.

The solution to this is to wait on something specific. Commonly, this might be for the element you want to interact with next, or for some Javascript variable to be set to a specific value. An example would be:

Wait<WebDriver> wait = new WebDriverWait(driver, 30);
WebElement element = wait.until(visibilityOfElementLocated(By.id("some_id")));

Where "visibilityOfElementLocated" is implemented as:

public ExpectedCondition<WebElement> visibilityOfElementLocated(final By locator) {
  return new ExpectedCondition<WebElement>() {
    public WebElement apply(WebDriver driver) {
      WebElement toReturn = driver.findElement(locator);
      if (toReturn.isDisplayed()) {
        return toReturn;
      }
      return null;
    }
  };
}

This may look complex, but it's almost all boiler-plate code. The only interesting bit is that the "ExpectedCondition" will be evaluated repeatedly until the "apply" method returns something that is neither "null" nor Boolean.FALSE.
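The "evaluated repeatedly until apply returns something that is neither null nor false" behavior is just a polling loop. A compact sketch of that pattern in Python (an illustration only, not WebDriver's actual implementation; the names here are invented):

```python
import time

def wait_until(condition, timeout=30.0, poll=0.5):
    """Re-evaluate `condition` until it returns something that is neither
    None nor False, or until the timeout expires - the same contract as
    ExpectedCondition.apply above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result is not None and result is not False:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# A fake "element becomes visible" condition: succeeds on the third poll.
attempts = {"n": 0}
def element_visible():
    attempts["n"] += 1
    return "element" if attempts["n"] >= 3 else None

print(wait_until(element_visible, timeout=5, poll=0.01))  # -> element
```

The same loop works for any condition: waiting on a Javascript variable, an element's visibility, or anything else your test can probe.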
Of course, adding all these "wait" calls may clutter up your code. If that's the case, and your needs are simple, consider using the implicit waits:

driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);

By doing this, every time an element is located, if the element is not present, the location is retried until either it is present, or until 30 seconds have passed.

15.6.3 Finding By XPath or CSS Selectors Doesn't Always Work, But It Does In Selenium 1
In Selenium 1, it was common for xpath to use a bundled library rather than the capabilities of the browser itself. WebDriver will always use the native browser methods unless there's no alternative. That means that complex xpath expressions may break on some browsers.

CSS Selectors in Selenium 1 were implemented using the Sizzle library. This implements a superset of the CSS Selector spec, and it's not always clear where you've crossed the line. If you're using the WebDriverBackedSelenium and use a Sizzle locator instead of a CSS Selector for finding elements, a warning will be logged to the console. It's worth taking the time to look for these, particularly if tests are failing because of not being able to find elements.

15.6.4 There is No Browserbot
Selenium RC was based on Selenium Core, and therefore when you executed Javascript, you could access bits of Selenium Core to make things easier. As WebDriver is not based on Selenium Core, this is no longer possible. How can you tell if you're using Selenium Core? Simple! Just look to see if your "getEval" or similar calls are using "selenium" or "browserbot" in the evaluated Javascript.

You might be using the browserbot to obtain a handle to the current window or document of the test. Fortunately, WebDriver always evaluates JS in the context of the current window, so you can use "window" or "document" directly.

Alternatively, you might be using the browserbot to locate elements. In WebDriver, the idiom for doing this is to first locate the element, and then pass that as an argument to the Javascript. Thus:

String name = selenium.getEval("selenium.browserbot.findElement('id=foo', browserbot.getCurrentWindow()).tagName");

becomes:

WebElement element = driver.findElement(By.id("foo"));
String name = (String) ((JavascriptExecutor) driver).executeScript("return arguments[0].tagName", element);

Notice how the passed in "element" variable appears as the first item in the JS standard "arguments" array.

15.6.5 Executing Javascript Doesn't Return Anything
WebDriver's JavascriptExecutor will wrap all JS and evaluate it as an anonymous expression. This means that you need to use the "return" keyword:

String title = selenium.getEval("browserbot.getCurrentWindow().title");

becomes:

((JavascriptExecutor) driver).executeScript("return window.title;");
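The "anonymous wrapping discards the value unless you return it" behavior has a loose analogy in Python's exec/eval split, which may help make the rule memorable (a Python illustration only, not WebDriver code):

```python
# WebDriver wraps your JS in an anonymous function, so without an explicit
# "return" the value is thrown away. Python's exec behaves similarly:
# statements are executed, but no value comes back.
namespace = {}
result = exec("window_title = 'My Page'.upper()", namespace)
assert result is None                           # value discarded
assert namespace["window_title"] == "MY PAGE"   # side effects still happen

# eval of an expression hands the value back, like adding "return" in JS:
assert eval("'My Page'.upper()") == "MY PAGE"
```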
https://www.scribd.com/doc/63549518/Selenium-Documentation
somewhereinafrica asked on 12/7/2010

need not in depth guide to DFS

All guides to implementing DFS are long and irrelevant. I just want two folders to replicate between two servers, does anyone know of a step-by-step guide that shows you the basic combinations with DFS? I have read the microsoft one, and it starts with "to do this you need 3 servers..." and then goes on with all these examples that are only useful in a lab environment. I have googled my eyes tired, help... (server 2008 std ed)

Tags: Windows Server 2008, Active Directory

netcmh 12/7/2010 Have you seen the step by step from Microsoft? very easy to follow

netcmh 12/7/2010 Sorry, I now notice that you did read it

netcmh 12/7/2010 however the link i sent you has this line "you need a minimum of two servers configured in the test lab as follows"

netcmh 12/7/2010 explains on how to set it up with pictures :)

NashvilleGuitarPicker 12/7/2010 I'll assume that you want to use Windows 2008 DFS-R to replicate two folders on different servers. Try these out. They were written for Windows 2003 R2, but it's pretty much the same in Windows 2008. There are more articles linked to the end of these that might be of further help. Step 1: Setting up a DFS namespace. Step 2: Setting up replication.

Krzysztof Pytko 12/7/2010 Hi, do you have any older DFSes based on 2000 or 2003 in your network? Would you like to use domain based DFS or standalone DFS? What is your domain functional level (2000 native, 2003, 2008)? Thank you in advance. Regards, Krzysztof
Tom Scott 12/7/2010 A simpler approach is to download and use RichCopy from Microsoft. RichCopy replaced RoboCopy for this kind of functionality. An example command line might be as follows:

"\\server1\share1\folderpath" "\\server2\share1\folderpath" /FSD /NE /P /QA /QP "driveletter:\someserver\someshare\RichCopyLog.log" /UE /US /UD /UC /UPF /UFC /UCS /UET

This compares the two folders, copies/replicates what is different and logs the process. You can get all the help from the command line while you test how it works for you. You can enter something similar to the example as the command line to run in a Scheduled Task on one of the servers in question. - Tom

ASKER CERTIFIED SOLUTION: jakethecatuk 12/7/2010

ASKER somewhereinafrica 12/7/2010 @jakethecatuk Wow, that was super simple. How come not a single guide showed me this option? So what have I done by not setting up replication folders and all that other stuff in all the other guides? I mean your approach worked in a completely different folder than all the guides show?!?!? Feels like i cheated and did something wrong?

ASKER somewhereinafrica 12/7/2010 ok, i see what happens, I just set up 2 folders and they are synchronizing. Nothing more nothing less... So i guess the easy way out of this would be to now share the 'replication folder' on the new server and connect users to that share instead of the one here in the main office? Replication is taken care of by DFS and the sharing is just plain old sharing. Is this right thinking?

jakethecatuk 12/7/2010 You haven't done anything wrong. DFS comes in two parts:
- Part 1 - the distributed file system, which is what is set up above
- Part 2 - the namespace in AD. What this does is update AD with a new folder share that points to both servers. This allows you to map to one share only but you can connect to either server (in your case two, but as many servers as you want).
If you want users to connect to a common share for both servers, you need to set up a namespace.

ASKER somewhereinafrica 12/7/2010 I was just thinking about how that works. So explain the theory:
1 - I create a "name space" which is a sort of alias for my virtual file library host.
2 - now i attach a folder to this name space. lets call it "data" folder (c:\data) on the main server
3 - i now create another folder on my second server and call that "data" and sync it with the main server
4 - in the main office people can now either connect to the original "\\server\share_name" or the new "\\server.local\dfs-share-name"
5 - in the second office, people should now connect to?
A - the DFS share name on the local server (\\new_server.local\dfs-share-name)
B - the DFS name on the main server (\\server.local\dfs-share-name)
??

jakethecatuk 12/7/2010 two questions for the price of one is cheating :P
1. From Server Manager, expand Roles and then File Services - you should see DFS Management listed
2. Right click on Namespaces and choose New Namespace
3. On the Namespace Server screen, type in the name of the primary server and click on Next
4. On the Namespace Name and Settings screen, give it a share name - i.e. Data
5. Click on Edit Settings. Leave the local path of the shared folder as set. Make sure you choose All users have read and write permissions and click on OK
6. On the Namespace Type screen, this is how you share the namespace. If you have a domain, leave it as Domain-based namespace and you will see in the preview of domain based namespace how you connect to the share: \\domain.local\data. Click on Next
7. Click on Create to create the namespace.

You will now have a namespace - you can now add your second server to the namespace.
1. Right click on your new namespace and choose Add Namespace Server
2. On the Add Namespace Server screen, type in the name of your second server.
3. Click on Edit Settings. Leave the local path of the shared folder as set.
Make sure you choose All users have read and write permissions and click on OK
4. Click on OK to add the server.

You now have your two servers offering the namespace, which is good for redundancy. You can now add a folder:
1. Right click on your new namespace and choose New Folder
2. On the New Folder screen, type in the folder name.
3. Click on Add and on the Add Folder Target screen, enter the full path to the shared folder and click on OK
4. Click on OK again to create the folder. You will be prompted to create a replication group - click on No

I would suggest that when you create the shared folder on each server that you make it a hidden share by putting a $ on the end of the share name.

jakethecatuk 12/7/2010 BTW - clicking on Yes to create the replication group will do what was in my previous post - but you won't understand how it's set up if you do that :)

ASKER somewhereinafrica 12/7/2010 your answers are awesome, thank you so much for helping... So this is where you lose me:
1 - i create the 'name-space', I now have an AD FQDN to use (\\server.local)
2 - I create a new folder inside the DFS manager and give it a name (data for example)
3 - I now associate my original share with this folder inside the DFS as you say in step 3 in your last guide (the old share "\\server\share_name" can now be accessed through "\\server.local\dfs-share-name")
4 - I now also add a folder on the new server to this, so that the original share copies all its content to the new folder on the new server. This folder i can access via "\\new_server.local\dfs-share-name"

so here i am lost. I create a folder in DFS (step 1) to create a new "virtual folder". I then link the existing share to this virtual folder. Yet on the new server i need to create a new folder to physically copy the data to, but i also have to create a virtual folder on the new server?
So i now have this mess of 2 shared folders, 2 virtual dfs folders, and to reach any of them i need to put in the DFS name instead of simply sharing it. I guess what I don't understand is why would i not just simply set up the original copy scheme you taught me first, so that the folders are synced, then i just share the new folder on the new server and leave the original share alone. What is the gain in using the DFS name space thingimajig?

jakethecatuk 12/7/2010 DFS namespaces are great if you have a lot of servers providing the same data - you have one common share name to connect to regardless of location. If you have two servers, DFS namespaces can be a bit of overkill as you have worked out. There would be nothing wrong with just setting up the replication and attaching to the share on each server. DFS will keep the folders in sync and all will be good - I simply gave you the option of using namespaces. If it's too complicated - don't do it.

ASKER somewhereinafrica 12/7/2010 dude, you rocked my world with this. i wish people would be more like this when they answer instead of just sending links to other sites. I am clear on all except: "you have one common share name to connect to regardless of location". I don't get that, what is the difference between using '\\servername\sharename' versus "\\servername.local\dfs-share-name"? I mean if i have 2 servers, in 2 different locations, the data would still be accessed via \\server.local\dfsname on one side and \\server2.local\dfsname on the other side. I don't get the "great" part.

jakethecatuk 12/7/2010 OK...imagine this scenario. You have ten servers sharing files in different locations and you have users in those ten locations. As part of the login process, you map a drive to the users which is their S:\ drive for example.
If users don't move around, then they can be set to map to their local server for their share. Now imagine you have users moving around - how will you map the roaming user to the local server? User1 who is normally in office1 will always try and map to \\server1\data and if the links between offices aren't that fast, your users will complain about slow speeds. Or, you write a very complicated login script that can work out where they are and map the network drive accordingly. Or, the easy way is to use DFS Namespace. Instead of using \\{local server}\data or complicated scripting to map to the correct server, you use \\domain\data for every user and wherever your user is based, they always attach to the local share on the local server. Even if they change office, they will still map to the server in the local office. Does it make a bit more sense now?

ASKER somewhereinafrica 12/7/2010 that was how i was hoping it would work. But how does the client know which is the "closest" share?

i create a namespace server on the main server
i do the same on the new server
i now map the original share on the main server "\\server\data\" and create a DFS share called "datax"
I create a folder on the new server, and set up a synch between the new folder and the original share
I now have a perfectly synched folder in two locations.

Now one of my users goes to the other office with his laptop. when he fires up the computer, his "X:" looks for "\\server.local\datax". How will the local server know and divert his request to the local folder instead of going across the VPN? I mean, how does the server know which of the 2 synched folders is the local one?

ASKER somewhereinafrica 12/7/2010 and again, I can't thank you enough for taking the time...

jakethecatuk 12/7/2010 I'm guessing that your two locations are part of the same domain.
If you configure your users to connect to \\domain\data instead of \\server\data, the domain handles the mapping request. When you try and connect to \\domain\data, the PC will look at the local DC for this, and it knows what the local DC is because of the IP address.

Site 1: DC - 192.168.1.1
Site 2: DC - 192.168.2.1

When your PC has an IP address of 192.168.1.10, when it logs in, it will know that the DC is 192.168.1.1. Similarly, when your PC has an IP address of 192.168.2.10, when it logs in, it will know that the DC is 192.168.2.1. In simple terms, the namespace resolution will work in a similar way. Whatever you do...don't map directly to the server share or you will have problems. If you're using namespaces, always ALWAYS map to the domain share.

jakethecatuk 12/7/2010 Hit post too soon... So the local DC will know which file server is local and will always try and connect to that share over the remote share.

NashvilleGuitarPicker 12/7/2010 Make sure that you also set up your sites in Active Directory Sites and Services - define your subnets and assign them to the appropriate site. This helps DFS and other services figure out where the "closest" appropriate server is. If done properly, DFS will pick the closest available server when you use the DFS namespace to access shares.

ASKER somewhereinafrica 12/7/2010 ooooohhhh, I see, i don't do "\\servername.local\dfs-share", I do "\\domainname.local\dfs-share" - that makes a lot more sense. what if i had 2 servers in the same room, same subnet, which one would be used first (if they were both replicating and sharing the same folder)? I thought you said earlier that I could synch 2 folders and share them just fine. Is that not true? or is this only when you are dealing with folders active with DFS?
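The subnet-to-site matching described above can be sketched in a few lines of Python using the stdlib ipaddress module (a toy illustration reusing the example addresses; real AD/DFS site resolution is of course far more involved than this):

```python
import ipaddress

# Site definitions as in the example above: subnet -> file server share
# for that site. The share paths are just the ones used in this thread.
sites = {
    ipaddress.ip_network("192.168.1.0/24"): r"\\server1\data",
    ipaddress.ip_network("192.168.2.0/24"): r"\\server2\data",
}

def closest_share(client_ip):
    """Pick the share whose site subnet contains the client's IP address."""
    ip = ipaddress.ip_address(client_ip)
    for subnet, share in sites.items():
        if ip in subnet:
            return share
    raise LookupError("no site defined for %s" % client_ip)

print(closest_share("192.168.1.10"))  # -> \\server1\data
print(closest_share("192.168.2.10"))  # -> \\server2\data
```

The point is only that the client's IP, matched against the per-site subnets defined in Sites and Services, is enough to pick the nearest target for the same \\domain\data name.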
ASKER somewhereinafrica 12/7/2010 Yeah, I need to set up an additional site for the server in AD. Gonna look into that tomorrow. speaking of which, a different site is still the same domain, right? so if I set this server up on the DFS system it won't matter that i move this server to another office, change the IP and make it the controller for this "new AD site", because it will find it through DNS, right?
https://www.experts-exchange.com/questions/26662641/need-not-in-depth-guide-to-DFS.html
I'm trying to use the OpenCL SDK for the Terasic DE5-Net board. I'm using the BSP provided by Terasic on their website. But I start having (at least) two differences with the expected setup.

1) I am using the latest Quartus and OpenCL SDK versions (Version 17.0.2 Build 602).
2) I am using Ubuntu 16.10 with a 4.8.0-59-generic kernel version.

Apparently the first point is not problematic. The second point is more troublesome. When you try to compile the board kernel driver (>aocl install), the compilation crashes because the get_user_pages function (from mm.h) has fewer parameters in newer kernels. I have tried to implement the missing function (with more parameters), getting inspiration from older kernel sources. See the implementation below:

// @author David Castells-Rufas
// In new kernels (> 4.6) get_user_pages uses current task info,
// so go to the more complete function. I get some inspiration from
long get_user_pages_old_kernel(struct task_struct *tsk, struct mm_struct *mm,
                               unsigned long start, unsigned long nr_pages,
                               int write, int force, struct page **pages,
                               struct vm_area_struct **vmas)
{
    int flags = 0;
    if (write)
        flags |= FOLL_WRITE;
    if (force)
        flags |= FOLL_FORCE;
    return __get_user_pages(tsk, mm, start, nr_pages, flags, pages, vmas, NULL);
}

When I do this, the driver compiles, but when I execute I get some errors and the system finally crashes. With this [failing] driver you can still do >aocl diagnose (without parameters), but when you try ">aocl diagnose acl0" the system starts transferring memory and the application detects a lot of memory errors:

Transferring 8192 KBs in 16 512 KB blocks ...
Error! Mismatch at element 1008: 3f0 != 1efbf0, xor = 001ef800
Error! Mismatch at element 1009: 55b1a2e9 != 1229541c, xor = 4798f6f5
...

To solve these issues I have looked for a Debian package with a kernel having the get_user_pages function with the long list of parameters. I found it to be 4.4.0-24-generic.
So I installed it and recompiled the driver against it. After rebooting my system using the 4.4.0 kernel, the OpenCL driver works and ">aocl diagnose acl0" runs smoothly. However, I would like to work with the newer kernel as it has other implications (for other drivers). Does anyone know whether there are drivers adapted to newer Linux kernels? Or whether anyone has tried to do it, or either Altera/Intel or Terasic have internal people working on that? Thanks!

Terasic only officially supports CentOS 7 right now; you should not use any other OS unless you are willing to dig through and manually fix all the problems you will face like what you mentioned above. The reason why they specifically stick to CentOS is that they don't want to spend their resources on keeping the driver updated for the quickly-changing kernels on Debian/Ubuntu/etc. Needless to say, you will not receive any support from Terasic unless you use a supported OS, either; the first thing they will tell you is to switch to a supported OS. Furthermore, Terasic has not released a BSP for Quartus v17.0.2 yet. Even though their v16.1 BSP might work with the newer Quartus on their Stratix V board (but definitely not the Arria 10 one), I recommend sticking with Quartus v16.1.2 to avoid more headaches.

With the help of Michal Hocko and Lorenzo Stoakes I found the solution! Thank you so much guys!! In some newer kernels (I am using 4.8) you can simply use get_user_pages_remote instead of get_user_pages. For some kernels, the parameter list is exactly the same!! Be careful, though: more recent kernels have a slightly different parameter list, but I guess that can be easily adapted. To be able to support both (4.4) and (4.8) kernels I modified the code as follows...
#if LINUX_VERSION_CODE < KERNEL_VERSION(4,6,0)
    ret = get_user_pages(target_task, target_task->mm,
                         start_page + got * PAGE_SIZE,
                         num_pages - got, 1, 1, p + got, vma);
#else
    ret = get_user_pages_remote(target_task, target_task->mm,
                                start_page + got * PAGE_SIZE,
                                num_pages - got, 1, 1, p + got, vma);
#endif

Now ">aocl diagnose acl0" seems to be working smoothly on 4.8!!

Yes, you are right HRZ. But that's the great thing about Open Source software: the community can help Terasic and Intel support newer kernels. See my auto-response.

Great, I am glad you got it to work in the end.

Hi, always stick to the published operating system requirement; Ubuntu is not yet a supported operating system. Regards, CloseCL (This message was posted on behalf of Intel Corporation)

Now I switched to Ubuntu 17.10, new update in

Now that AOCL 17.1 supports Ubuntu 16.04.2, and the aclpci_cmd.c file uses get_user_pages_remote for newer kernels, aocl install still crashed with Ubuntu 16.04.3:

Building driver for BSP with name a10_ref
make: Entering directory '/usr/src/linux-headers-4.13.0-37-generic'
  CC  /tmp/opencl_driver_8bc8zu/aclpci_queue.o
  CC  /tmp/opencl_driver_8bc8zu/aclpci.o
/tmp/opencl_driver_8bc8zu/aclpci.c: In function 'aclpci_irq':
/tmp/opencl_driver_8bc8zu/aclpci.c:343:17: error: implicit declaration of function 'send_sig_info'
  int ret = send_sig_info(aclpci->signal_number, &aclpci->signal_info, aclp
                ^
cc1: some warnings being treated as errors
scripts/Makefile.build:308: recipe for target '/tmp/opencl_driver_8bc8zu/aclpci.o' failed
make: *** Error 1
Makefile:1550: recipe for target '_module_/tmp/opencl_driver_8bc8zu' failed
make: *** Error 2
make: Leaving directory '/usr/src/linux-headers-4.13.0-37-generic'
aocl install: failed.

Board: Arria 10GX

Is it because I'm using Ubuntu 16.04.3 instead of 16.04.2? (Rollback seems like a pain in the butt so I haven't tried it yet.) Anyone have an idea how to solve this? Thanks!
This happened to me and my solution is to add #include <linux/sched/signal.h> to the aclpci.h file. The location of the prototype for send_sig_info changed - it is not in linux/sched.h anymore in newer kernels, so the driver failed to compile.
https://community.intel.com/t5/Intel-Quartus-Prime-Software/DE5-NET-OpenCL-support-on-Ubuntu-16-10/td-p/247430?t=1541575434952
Re: if I open multiple files in Mathematica

As you have discovered, definitions (rules) are associated with a Mathematica session and NOT with a Mathematica notebook. One work-around is to use a different Mathematica context for each notebook. A context is essentially a "namespace". See the Mathematica Book's materials for details. Here's a simple example.

You have two notebooks. In one you evaluate the following input cells:

Begin["This`"]
x = 100;
End[]

If you evaluate any cells in that notebook after evaluating the Begin cell and the x = 100 cell, but before evaluating the End[] cell, x will take the value 100 there.

In the other notebook you evaluate input cells:

Begin["Other`"]
x = "hello";
End[]

If you evaluate any cells in that notebook after evaluating the Begin cell and the x = "hello" cell, but before evaluating the End[] cell, x will take the value "hello" there.

But: After you have evaluated both End[] cells, no matter in what notebook you do further evaluations, if you refer to x you will just get the symbol x and neither of those two values. But if you want to use one of the values from within those contexts, you qualify x with the context name:

This`x
100

This`x + 23
123

Other`x
"hello"

StringJoin[Other`x, ", world!"]
hello, world!

Jackie wrote:
> The variables in different files will be cross-referenced. I don't like
> this feature, What should I
http://forums.wolfram.com/mathgroup/archive/2006/Feb/msg00072.html
I just updated my project to Scala 2.10.0 using SBT 0.12. But now, when running sbt, I get the following error:

```
java.lang.NoClassDefFoundError: scala/reflect/ManifestFactory$
    at X.build.Unidoc$.<init>(Unidoc.scala:8)
    at X.build.Unidoc$.<clinit>(Unidoc.scala)
    at X.build.ServicesBuild$.<init>(Build.scala:25)
    at X.build.ServicesB
```

I've got some code that references scala.collection.jcl written against Scala 2.7.7. I'm now trying to compile it against Scala 2.8 for the first time, and I'm getting this error: "value jcl is not a member of package collection". Is there a substitute/replacement for jcl in 2.8?

I've installed ensime according to the README.md file; however, I get errors in the inferior-ensime-server buffer with the following:

Following the Scala mailing lists, different people often say: "the compiler rewrites this [Scala] code into this [Java/Scala?] code". For example, from one of the latest threads, if Scala sees

```scala
class C(i: Int = 4) { ... }
```

then the compiler rewrites this as (effectively):

```scala
class C(i: Int) { ... }
object C { def init$default$1: Int = 4 }
```

Final purpose of this feat: use an Android device for development by compiling quickly without needing to run ProGuard every single time (which causes huge delays). First try is on a Sony Ericsson Xperia Mini Pro. I have installed CyanogenMod on it, which is already rooted, and the root checker app has verified it. This is the app I used to embed the Scala library 2.9.1 inside

This is driving me crazy: I'm working on Play with Scala, SecureSocial, etc., and I keep getting this error when I'm trying to make a case class. Edit: similar question to this: Scala and Play2: ClassCastException: java.lang.Object cannot be cast to play.api.libs.json.JsValue. Edit: moving the method createIdentityFromUser to the static companion class seems to fix this.

fsc (fast scala compiler) is faster than scalac, but during TDD cycles I spend 3 seconds compiling sources versus less than 1 s running my tests. Any suggestions to reduce compile time to near 0?
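To make the default-parameter desugaring above concrete, here is a sketch you can try in the REPL. The name init$default$1 is a compiler-generated encoding and not public API, so the hand-written companion below is only an approximation of what scalac emits:

```scala
// What you write:
class C(i: Int = 4) {
  def value: Int = i
}

// Roughly what the compiler generates for the default (sketch):
class CDesugared(i: Int) {
  def value: Int = i
}
object CDesugared {
  def init$default$1: Int = 4  // default for constructor parameter 1
}

object Demo extends App {
  println(new C().value)   // 4, via the generated default
  println(new C(7).value)  // 7
}
```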
Obviously, "buy a faster computer" is not an answer :) I mean, use some Scala internals to compile source as fast as possible.

I am trying to figure out memory-efficient AND functional ways to process a large scale of data using strings in Scala. I have read many things about lazy collections and have seen quite a few code examples. However, I run into "GC overhead limit exceeded" or "Java heap space" issues again and again. Often the problem is that I try to construct a lazy collection, but evaluate each new

I have maven-eclipse-plugin I'm using Scala to write Specs BDD tests for my Java code, and the setup above is working very nicely so far. However, I have one puzzling problem and I would like to k

I'd been programming in C#, but was frustrated by the limitations of its type system. One of the first things I learned about Scala was that Scala has higher-kinded generics. But even after I'd looked at a number of articles, blog entries, and questions, I still wasn't sure what higher-kinded generics were. Anyway, I'd written some Scala code which compiled fine. Does this snippet use higher kind
http://bighow.org/tags/Scala/1
The primary goal of XML Schema was to provide a language to specify the structure of XML documents. We already had DTDs, but these did not have an XML syntax and needed to be updated with new features such as namespaces as well as additional datatypes. There was also a need for stronger typing. But when XML Schema finally appeared ([XML Schema Part 1: Structures Second Edition], [XML Schema Part 2: Datatypes Second Edition]), it was put to a host of other uses that the creators had not anticipated. In this paper we discuss two such uses: mapping XML documents to Java™ objects under the control of an XML Schema, and using XML Schemas to control the structure of XML documents shredded into a relational database. In each case we find some Schema features difficult to map and discuss possible workarounds.

In this section we discuss some XML Schema constructs that are difficult to map into object structures. The [XML Schema Part 1: Structures Second Edition] construct "choice" allows variations in the structure of an element. There is no corresponding construct in Java™, although other languages allow variant types. For example:

```xml
<xsd:complexType
  <xsd:sequence>
    <xsd:choice>
      <xsd:element
      <xsd:element
    </xsd:choice>
    <xsd:element
  </xsd:sequence>
</xsd:complexType>
```

The structure of the PurchaseOrder type can vary according to the options defined in the choice. There is no direct mapping of this feature into Java™, but a check can be performed at runtime. This means, however, that errors cannot be caught until runtime. An alternative would be to have no choice support and require that the choices in a schema be defined as separate types using derivation by extension. In the same way, derivation by restriction cannot be mapped to a Java™ construct and must be checked at runtime.
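As a sketch of the runtime check described above (the class and field names are invented for illustration; they are not from any actual schema-binding tool), a generated Java class can hold each branch of the choice as a nullable field and verify at runtime that exactly one branch is populated:

```java
// Illustrative mapping of an xsd:choice with two branches plus a trailing element.
class PurchaseOrder {
    // Exactly one of the two choice branches may be non-null.
    private String branchA;  // hypothetical first choice element
    private String branchB;  // hypothetical second choice element
    private String comment;  // element that follows the choice

    void setBranchA(String v) { this.branchA = v; }
    void setBranchB(String v) { this.branchB = v; }
    void setComment(String v) { this.comment = v; }

    /** Runtime validation standing in for what Java's type system cannot express. */
    boolean isValid() {
        return (branchA == null) != (branchB == null);
    }
}
```

Such a check only fires when validation is invoked, which is exactly the late-error drawback noted above.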
In [XML Schema Part 1: Structures Second Edition], the "fixed", "final", "finalDefault", "block" and "blockDefault" attributes can be used to control how and whether derivation is allowed from a complex or simple type. For example:

```xml
<complexType name="Address" final="restriction">
  <sequence>
    <element name="name" type="string"/>
    <element name="street" type="string"/>
  </sequence>
</complexType>
```

The only one of these attributes that has a direct mapping to Java™ is the "final" attribute with a value of "#all" on a complexType definition, which prohibits further derivation from that type. This corresponds to the "final" modifier that can be applied to a Java™ class. The "finalDefault", "block", and "blockDefault" attributes have no direct mapping to Java™ features, but they are very similar to what is provided with "final". We are not clear how an element with such controlling attributes can be implemented in Java™. The "fixed" feature relates to facets and, in our view, need not be supported.

Facets in [XML Schema Part 2: Datatypes Second Edition] allow datatypes to be constrained by value and by lexical form. For example:

```xml
<xsd:simpleType
  <xsd:restriction
    <xsd:maxExclusive
  </xsd:restriction>
</xsd:simpleType>
```

This cannot be expressed as constraints on Java™ types. The only way to implement facets is to create a configurable validator that checks the value of the datatype at runtime.

[XML Schema Part 1: Structures Second Edition] allows extensible elements via xs:any to be restricted to a namespace. For example:

```xml
<complexType name="foo">
  <sequence>
    <any namespace=""/>
  </sequence>
</complexType>
```

Again, this cannot be mapped into a Java™ feature and must be checked at runtime.

[XML Schema Part 2: Datatypes Second Edition] allows an annotation element to be specified for most elements but is ambiguous in some cases. The source of ambiguity is related to the specification of an annotation element for a reference to a schema element using the "ref" attribute.
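A minimal sketch of such a configurable runtime validator for a single facet (the class and method names here are invented for illustration):

```java
// Illustrative runtime check for an xsd:maxExclusive facet on an integer type.
class MaxExclusiveValidator {
    private final int maxExclusive;

    MaxExclusiveValidator(int maxExclusive) {
        this.maxExclusive = maxExclusive;
    }

    /** True when the value satisfies the facet, i.e. value < maxExclusive. */
    boolean isValid(int value) {
        return value < maxExclusive;
    }
}
```

A real binding layer would compose one such check per facet (pattern, length, minInclusive, and so on) and run them whenever a typed value is set.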
This arises in three cases: For example, consider the following schema fragment.

```xml
<xs:element
  <xs:complexType>
    <xs:element
    <xs:element
  </xs:complexType>
</xs:element>
```

The XML Schema spec is unclear on whether an annotation element can be specified on the reference to the "Name" element and whether it takes precedence over an annotation on the element referred to. In our products, we assume that an annotation element can be specified in each of the three cases mentioned above. Furthermore, the annotation element is assumed to be associated with the abstract schema component as follows:

The Oracle database products provide facilities for managing XML Schemas in the database as well as shredding XML instances into relational tables in a Schema-controlled manner. Product documentation is available at [Using Oracle XML DB and XML Schema]. A paper with the technical highlights is available at [XML Schemas in Oracle XML DB]. Support is provided for the following tasks:

We discuss below some XML Schema features that make the implementation of the above facilities particularly challenging. Although we attempt to push as many constructs as possible down to the SQL level, there are some schema constructs that either don't map well to SQL and/or don't affect the storage. Such constraints are enforced at the XML layer (above the O-R level).

The instance data is stored in the column/table corresponding to the head element. However, the actual element QName is stored in a system (binary) column. This system column contains other pieces of information that are not mapped directly, such as namespace prefixes, comments, PIs, etc., but need to be preserved to ensure fidelity of retrieved documents and fragments. We have seen heavy use of substitution groups in XBRL schemas.

Datatypes such as duration that don't have a SQL equivalent are stored in a VARCHAR2 column. Actually, this is an option for other datatypes, such as xsd:integer, also.
Though the default for xsd:integer is SQL NUMBER, users can choose to override the storage mechanism using a Schema annotation. Redefine is the only XML Schema feature that is not supported by XML DB. Interestingly, none of our customers have asked for it as yet. In contrast, we have seen customer schemas with heavy usage of every other schema feature, including type derivation, substitution groups, wildcards, etc.

Key/keyref is enforced at the XML layer and not directly captured at the SQL level. The main reason is that these express constraints at the level of a single document, whereas in most cases users want to express a constraint on the document collection. For example, /PurchaseOrder/@ID is unique across all documents stored in the PurchaseOrder table. It is not possible to express this in the Schema unless you define a virtual collection element, virtual documents representing a table, etc. Users are not very comfortable with this virtualization. Also, it does not make sense when the unit of store/fetch is a single purchase order.

A few other observations from our implementation experience: Though derivation by extension maps cleanly to the sub-type (UNDER) construct in SQL, there is no corresponding construct for derivation by restriction. We chose to map restricted complex types to a dummy sub-type, i.e. a derived SQL type with no extra fields. This provides a full-fledged SQL type corresponding to the restricted complex type, but does not enforce the restriction at the SQL level.

While mapping the [HTTP Extensions for Distributed Authoring -- WEBDAV] resource model to XML DB using XML Schema, we needed to define a wildcard which allows any elements not in our (Oracle) namespace but also permits the null namespace. This is not permitted in XML Schema: if you use ##other, it automatically excludes the null namespace as well. We had to work around this by adding proprietary extensions to our handling of wildcards.
This paper has discussed the mapping of XML Schema constructs into two other languages. Some XML Schema constructs do not map cleanly into facilities offered by other languages and we have discussed workarounds for some of them.
http://www.w3.org/2005/05/25-schema/oracle.html
Kubernetes Accelerator Helps in All Phases of Kubernetes Adoption

In this article, we discuss how a Kubernetes accelerator can help in all phases of Kubernetes adoption, as it significantly decreases complexity.

A Kubernetes accelerator tames a lot of Kubernetes' complexity. Complexity is one of the biggest barriers to organizations using Kubernetes in general. Even if tech groups like what Kubernetes can do, the inherent complexity puts off a lot of organizations. When more engineers and organizations can smooth out those pain points, there will be even greater adoption of Kubernetes.

Kubernetes modules can be configured to run both individual nodes and clusters of nodes, allowing users to choose which arrangement they want; Kubernetes will orchestrate all of them just fine. Here is an example: we bought a bunch of compute instances on AWS, and we wanted to instantiate those by creating nodes, each with their own characteristics for where to get data, which database to use, etc. Kubernetes clusters also have their own individual control software, which we used; we could have also used Helm.

We configured resource utilization so that if we got above 50% CPU utilization, we automatically spun up another node to offload some of the workload, so that no individual instance gets too bogged down. That can be configured automatically, whether for CPU usage, RAM usage, or other services; there are lots of different metrics you can configure with an accelerator. It allows us to just look at the console and see whether we've got 25 nodes because of a bunch of applications, or one big application. When we use Docker is where Kubernetes really shines, because we have Docker images and can easily load them into the cluster.
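The 50% CPU rule described above can be sketched at the pod level with a HorizontalPodAutoscaler. All names here are illustrative, and node-level scaling in EKS would instead be configured on the node group or a cluster autoscaler, but the resource below expresses the same idea:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp              # hypothetical workload
  minReplicas: 1
  maxReplicas: 25
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out above 50% CPU, as in the text
```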
Now, the biggest difference between EKS and something like Fargate is that with EKS you must specify various things using YAML configuration files. There is a ConfigMap file, which basically tells EKS how you want to do various things. With Fargate, a lot of that is managed for you: you can just give Fargate a Docker image, and it will automatically run it. The use case really depends on how much individual control you want. EKS is more suited for organizations with dedicated ops teams that know how to manage all the complexity of Kubernetes, because it can very quickly stop being simple depending on what you are doing. So, if you are not careful, you will really have to put a leash on it. Whether you want to run multiple clusters or not determines which sort of modules you enable.

An accelerator automatically provisions Kubernetes nodes in EKS, and once those are provisioned, you get a ConfigMap file that you can use to configure many things automatically. So you can get up to speed with Kubernetes in 15 minutes. We use a lot of Terraform, and most people use EKS to have a lot of control over what nodes are doing. Out of the box, if you spin up a cluster through the AWS console, you just get a cluster; you don't really get a whole lot of knowledge about what to do with it. But with a Kubernetes accelerator, a lot of those questions are solved for you, because we've done the work of specifying how many CPUs are needed per cluster, how much RAM, etc. It's configurable, so whatever resource usage typically fits your application, you can configure that. A plain EKS cluster does not come with that out of the box; the accelerator lays the groundwork for that configuration for you. An accelerator should help install and implement Kubernetes because it provisions a cluster, and you can configure it automatically to your liking.
It spins up a cluster, and because you can specify resource usage, you can configure it automatically as you're deploying the cluster, rather than deploying a blank cluster and having to do a lot of configuration after the fact. A Kubernetes accelerator should help you when you first decide to use Kubernetes, while you set it up, and while you are using it. It should save substantial time: since writing good YAML files can take a day or two, by using the defaults in the accelerator you can be up and running in 20 minutes instead. It's a good reason to want to use Kubernetes, because now it's super easy. Here's a specific example of why.

Example cluster config:

```hcl
resource "aws_eks_cluster" "default" {
  count                     = var.enabled ? 1 : 0
  name                      = module.label.id
  role_arn                  = join("", aws_iam_role.default.*.arn)
  version                   = var.kubernetes_version
  enabled_cluster_log_types = var.enabled_cluster_log_types

  vpc_config {
    security_group_ids      = [join("", aws_security_group.default.*.id)]
    subnet_ids              = var.subnet_ids
    endpoint_private_access = var.endpoint_private_access
    endpoint_public_access  = var.endpoint_public_access
  }
}
```

Example ConfigMap YAML:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - "groups":
        - "system:bootstrappers"
        - "system:nodes"
      "rolearn": "arn:aws:iam::450022699797:role/atlas-develop-eks-workers"
      "username": "system:node:{{EC2PrivateDNSName}}"
```

Look for integration with ECR, the Elastic Container Registry. If your application is already containerized, you can have it loaded into Kubernetes almost automatically: you load your image into ECR, it gives you a container URL that you can plug into EKS, and that will automatically have everything running as you deploy your stuff.
If you are already comfortable with the usual Kubernetes dashboard or control panel, managing clusters, and fine-tuning, an accelerator should also help you. If you know you need to expand, it will help you expand faster, and if you know you need more than one cluster, it should help you there as well. If you are already managing existing clusters, you can specify a lot of the configuration options through the YAML file, and it will manage that for you in code, rather than you as the DevOps person having to manually go into the AWS console and do various things.

One issue engineers should be aware of is that provisioning a cluster, while easy, can take a long time for AWS to complete when using Terraform. In our experience, provisioning often takes 15-20 minutes. This is an AWS limitation and not the result of having used Terraform.

All the options that you would typically specify in the console are configurable in the YAML file, so whatever options are supported in the console are supported in that file. This means you do not have to leave Kubernetes and go to AWS in order to specify some things. It also simplifies your version control, because it is easier to manage changes: you can make a change, commit it to a repository, and have your whole team manage all of that from one place. Because an accelerator leverages infrastructure as code, it is much easier to centralize and manage all the different options that DevOps people need.

The biggest barrier to organizations using Kubernetes in general is the time cost. Even if they like the idea of what Kubernetes does, the inherent complexity that can arise in Kubernetes puts off a lot of organizations.
And the more that people and organizations like us can smooth out those pain points, the more we will drive greater adoption of Kubernetes, both now and in the future. Please contact us for access to the repository, which you can find at:

Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/kubernetes-accelerator-helps-in-all-phases-of-kube
Explain Like I am 5. It is one of the basic tenets of learning for me, where I try to distill any concept into a more palatable form. As Feynman said: "I couldn't do it. I couldn't reduce it to the freshman level. That means we don't really understand it."

So, when I saw the ELI5 library, which aims to interpret machine learning models, I just had to try it out.

One of the basic problems we face while explaining our complex machine learning classifiers to the business is interpretability. Sometimes the stakeholders want to understand what is causing a particular result. It may be because the task at hand is very critical and we cannot afford a wrong decision; think of a classifier that takes automated monetary actions based on user reviews. Or it may be to understand a little bit more about the business or the problem space. Or it may be to increase the social acceptance of your model.

This post is about interpreting complex text classification models. To explain how ELI5 works, I will be working with the Stack Overflow dataset on Kaggle. This dataset contains around 40,000 posts and the corresponding tag for each post. This is how the dataset looks:

And given below is the distribution for the different categories. This is a balanced dataset and thus well suited for our purpose of understanding.

So let us start. You can follow along with the code in this Kaggle kernel.

Let us first try to use a simple scikit-learn pipeline to build our text classifier, which we will try to interpret later. In this pipeline, I will be using a very simple count vectorizer along with logistic regression.
```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline

# Creating train-test split
X = sodata[['post']]
y = sodata[['tags']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fitting the classifier
vec = CountVectorizer()
clf = LogisticRegressionCV()
pipe = make_pipeline(vec, clf)
pipe.fit(X_train.post, y_train.tags)
```

Let's see the results we get:

```python
from sklearn import metrics

def print_report(pipe):
    y_actuals = y_test['tags']
    y_preds = pipe.predict(X_test['post'])
    report = metrics.classification_report(y_actuals, y_preds)
    print(report)
    print("accuracy: {:0.3f}".format(metrics.accuracy_score(y_actuals, y_preds)))

print_report(pipe)
```

The above is a pretty simple logistic regression model, and it performs pretty well. We can check out its weights using the function below:

```python
for i, tag in enumerate(clf.classes_):
    coefficients = clf.coef_[i]
    weights = list(zip(vec.get_feature_names(), coefficients))
    print('Tag:', tag)
    print('Most Positive Coefficients:')
    print(sorted(weights, key=lambda x: -x[1])[:10])
    print('Most Negative Coefficients:')
    print(sorted(weights, key=lambda x: x[1])[:10])
    print("--------------------------------------")
```

Output:

```
Tag: python
Most Positive Coefficients:
[('python', 6.314761719932758), ('def', 2.288467823831321), ('import', 1.4032539284357077), ('dict', 1.1915110448370732), ('ordered', 1.1558015932799253), ('print', 1.1219958415166653), ('tuples', 1.053837204818975), ('elif', 0.9642251085198578), ('typeerror', 0.9595246314353266), ('tuple', 0.881802590839166)]
Most Negative Coefficients:
[('java', -1.8496383139251245), ('php', -1.4335540858871623), ('javascript', -1.3374796382615586), ('net', -1.2542682749949605), ('printf', -1.2014123042575882), ('objective', -1.1635960146614717), ('void', -1.1433460304246827), ('var', -1.059642972412936), ('end', -1.0498078813349798), ('public', -1.0134828865993966)]
--------------------------------------
Tag: ruby-on-rails
Most Positive Coefficients:
[('rails', 6.364037640161158), ('ror', 1.804826792986176), ('activerecord', 1.6892552000017307), ('ruby', 1.41428459023012), ('erb', 1.3927336940889532), ('end', 1.3650227017877463), ('rb', 1.2280121863441906), ('gem', 1.1988196865523322), ('render', 1.1035255831838242), ('model', 1.0813278895692746)]
Most Negative Coefficients:
[('net', -1.5818801311532575), ('php', -1.3483618692617583), ('python', -1.201167422237274), ('mysql', -1.187479885113293), ('objective', -1.1727511956332588), ('sql', -1.1418573958542007), ('messageform', -1.0551060751109618), ('asp', -1.0342831159678236), ('ios', -1.0319120624686084), ('iphone', -0.9400116321217807)]
--------------------------------------
.......
```

And that is all pretty good. We can see the coefficients make sense, and we can try to improve our model using this information. But that was a lot of code. ELI5 makes this exercise pretty simple for us; we just have to use the command below:

```python
import eli5
eli5.show_weights(clf, vec=vec, top=20)
```

Now, as you can see, the weight values for Python are the same as the values we got from the function we wrote manually. And it is much prettier and more wholesome to explore. But that is just the tip of the iceberg: ELI5 can also help us debug our models, as we can see below. Let us now try to find out why a particular example is misclassified.
I am using an example that was originally from the class Python but got misclassified as Java:

```python
y_preds = pipe.predict(sodata['post'])
sodata['predicted_label'] = y_preds

misclassified_examples = sodata[
    (sodata['tags'] != sodata['predicted_label']) &
    (sodata['tags'] == 'python') &
    (sodata['predicted_label'] == 'java')
]

eli5.show_prediction(clf, misclassified_examples['post'].values[1], vec=vec)
```

In the above example, the classifier predicts Java with low probability, and we can examine a lot of what is going on to improve our model. For example:

We see that the classifier takes a lot of digits into consideration (not good), which brings us to the conclusion of cleaning up the digits, or replacing datetime objects with a datetime token. We also see that while "dictionary" has a negative weight for Java, the word "dictionaries" has a positive weight, so maybe stemming could help too. There are also words like <pre><code> influencing our classifier; these should be removed while cleaning. And why is the word "date" influencing the results? Something to think about.

We can look at more examples to get more such ideas. You get the gist.

This is all good and fine, but what if the models we use don't provide weights for the individual features, like an LSTM? It is with these models that explainability can play a very important role.

To understand how to do this, we first create a TextCNN model on our data. I'm not showing the model creation process in the interest of preserving space, but think of it as a series of preprocessing steps followed by creating a deep learning model. If interested, you can check out the modelling steps in this Kaggle kernel.

Things get interesting from our point of view when we have a trained black-box model object. ELI5 provides us with eli5.lime.TextExplainer to debug our prediction and check what was important in the document for the prediction decision.
To use a TextExplainer instance, we pass a document to explain and a black-box classifier (a predict function that returns probabilities) to the fit() method. From the documentation, this is how our predict function should look:

predict (callable): the black-box classification pipeline. predict should be a function which takes a list of strings (documents) and returns a matrix of shape (n_samples, n_classes) with probability values, a row per document and a column per output label.

So to use ELI5 we need to define our own function that takes a list of strings (documents) and returns a matrix of shape (n_samples, n_classes). You can see how we first preprocess and then predict:

```python
def predict_complex(docs):
    # Preprocess the docs as required by our model
    val_X = tokenizer.texts_to_sequences(docs)
    val_X = pad_sequences(val_X, maxlen=maxlen)
    y_preds = model.predict([val_X], batch_size=1024, verbose=0)
    return y_preds
```

Given below is how we can use TextExplainer, using the same misclassified example as before with our simple classifier:

```python
import eli5
from eli5.lime import TextExplainer

te = TextExplainer(random_state=2019)
te.fit(sodata['post'].values[0], predict_complex)
te.show_prediction(target_names=list(encoder.classes_))
```

This time it doesn't get misclassified. You can see that the presence of the keywords dict and list is what influences the decision of our classifier. One can look at more examples to find more insights.

So how does this work exactly? TextExplainer generates a lot of texts similar to the document by removing some of the words, and then trains a white-box classifier which predicts the output of the black-box classifier, not the true labels. The explanation we see is for this white-box classifier. This is, in essence, a little bit like teacher-student model distillation, where we use a simpler model to predict the outputs of a much more complex teacher model.
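To make the perturbation idea concrete, here is a toy sketch in plain Python (no ELI5 or LIME involved; the "black box" is a fake one-rule classifier invented for illustration): generate variants of a document by dropping one word at a time, re-query the black box, and mark the words whose removal flips the prediction.

```python
def black_box(doc):
    """Stand-in for a trained classifier: says 'python' iff the word 'dict' appears."""
    return "python" if "dict" in doc.split() else "java"

def word_importance(doc):
    """Score each word by whether removing it changes the black-box output."""
    words = doc.split()
    base = black_box(doc)
    scores = {}
    for i, word in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores[word] = int(black_box(perturbed) != base)
    return scores

scores = word_importance("convert a dict to a list")
print(scores)  # only 'dict' matters: {'convert': 0, 'a': 0, 'dict': 1, 'to': 0, 'list': 0}
```

Real LIME is more refined: it samples many random perturbations and fits a weighted linear model over them, but the signal it extracts is the same kind of thing this loop computes.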
Put simply, it tries to create a simpler model that emulates the complex model, and then shows us the simpler model's weights.

Understanding is crucial. Being able to interpret our models can help us understand them better and, in turn, explain them better. ELI5 provides us with a good way to do this. It works for a variety of models, and the documentation for this library is one of the best I have ever seen. I love the decorated output the ELI5 library provides, along with the simple and fast way it lets me interpret my models, and debug them too.

To use ELI5 with your models, you can follow along with the code in this Kaggle kernel.

If you want to learn more about NLP and how to create text classification models, I would like to call out the Natural Language Processing course in the Advanced Machine Learning specialization. Do check it out. It covers a lot of beginner to advanced topics in NLP. You might also like to look at some of my posts on NLP in the NLP Learning series.

Thanks for the read. I am going to be writing more beginner-friendly posts in the future too. Follow me on Medium or subscribe to my blog.

Also, a small disclaimer: there might be some affiliate links in this post to relevant resources, as sharing knowledge is never a bad idea.
https://mlwhiz.com/blog/2019/11/08/interpret_models/
Learning functional programming has a high learning curve. However, if you have something familiar to base it on, it helps a lot. If you know React and Redux, this gives you a huge head start. Below, we'll cover the basics of Elm using React and Redux/Context as a basis, to make it easier to learn.

The below deviates a bit from the Elm guide, both in recommendations and in attitude. Elm development philosophy is about mathematical correctness, learning and comprehending the fundamentals, and keeping things as brutally simple as possible. I'm impatient, don't mind trying and failing at things 3 times to learn, and like immersing myself in complexity to learn why people call it complex and don't like it. I'm also more about getting things done quickly, so some of the build recommendations follow the more familiar toolchains React, Angular, and Vue developers are used to, which is pretty anti-Elm-simplicity.

Docs

To learn React, most start at the React documentation. It is _really_ good. It covers the various features, where they're recommended, and tips and caveats along the way. For Redux, I hate the new docs despite the team working extremely hard on them; I preferred the original egghead.io lesson on it by Dan Abramov.

To learn Elm, most recommend starting at the official guide. It starts at the very beginning by building a simple app and walks you through each new feature. It focuses (harps?) on ensuring you know and comprehend the fundamentals before moving on to the next section.

Tools

To build, compile, and install libraries for React apps, you install and use Node.js. It comes with a tool called npm (Node Package Manager), which installs libraries and runs build and other various commands.

For Elm, you install the elm tools. They're available via npm, but given the versions don't change often, it's easier to just use the installers.
They come with a few things, but the only ones that really matter day to day are the elm compiler and the elm REPL to test code quickly, like you'd do with the node command.

Developing

The easiest and most dependable long-term way to build and compile React applications is create-react-app. Webpack, Rollup, and bundlers are a path of pain and long-term technical-debt maintenance burdens... or adventure, joy, and efficient UIs, depending on your personality type. Using create-react-app, you write JavaScript/JSX, and the browser updates when you save your file. Without create-react-app, you'd manually start React by:

```javascript
ReactDOM.render(
  <h1>Hello, world!</h1>,
  document.getElementById('root')
)
```

Elm recommends you use only the compiler until your application's complexity grows enough that you require browser integration. Elm Reactor currently sucks, though, so elm-live will give you the lightest-weight solution to write code and have the browser automatically refresh like it does in create-react-app. It's like nodemon, or the browser-sync days of old. The story here isn't as buttoned up as create-react-app: you install elm-live, but are still required to finagle with HTML and a root JavaScript file. Same workflow, though; write some Elm code in Main.elm, and when you save your file, it refreshes the browser automatically.

Starting Elm on your page is similar to React:

```javascript
Elm.Main.init({
  node: document.getElementById('myapp')
})
```

Building

When you're ready to deploy your React app, you run npm run build. This creates an optimized JavaScript build of your React app in the build folder. There are various knobs and settings to tweak how this works through package.json and index.html modifications. Normally, the build folder will contain your root index.html file, the JavaScript code you wrote linked in, the vendor JavaScript libraries you reference, and various CSS files. You can usually just upload this folder to your web server.
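For reference, the two build commands side by side (the flags shown are typical, and the file names are whatever your project uses):

```shell
# React (create-react-app): emits an optimized bundle into build/
npm run build

# Elm: compile Main.elm (Elm runtime included) into a single optimized JS file
elm make src/Main.elm --optimize --output=elm.js
```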
The Elm compiler makes a single JavaScript file from an Elm file by running elm make. This includes the Elm runtime and your Elm code compiled to JavaScript, optionally optimized (but not uglified). Like React, you initialize it by calling an init function and passing in a root DOM node. Unlike create-react-app, you need to do this step yourself in your HTML file or another JavaScript file if you're not using the basic Elm app (i.e. Browser.sandbox).

Language

React is based on JavaScript, although you can utilize TypeScript instead. While React used to promote classes, they now promote functions and function components, although they still utilize JavaScript function declarations rather than arrow functions.

// declaration
function yo(name) {
  return `Yo, ${name}!`
}

// arrow
const yo = name => `Yo, ${name}!`

TypeScript would make the above a bit more predictable:

const yo = (name: string): string => `Yo, ${name}!`

Elm is a strongly typed functional language that is compiled to JavaScript. The typings are optional, as the compiler is pretty smart.

yo name =
  "Yo, " ++ name ++ "!"

Like TypeScript, it can infer a lot; you don't _have_ to add types on top of all your functions.

yo : String -> String
yo name =
  "Yo, " ++ name ++ "!"

Notice there are no parentheses or semicolons in Elm functions. The function name comes first, any parameters come after, then an equals sign. Notice that, like arrow functions, there is no return keyword. All functions are pure with no side effects or I/O, and return _something_, so the return is implied.

Both languages suffer from String abuse. The TypeScript crew are focusing on adding types to template strings, since this is an extremely prevalent thing to do in the UI space: changing strings from back-end systems to show users. Most fans of types think something typed with a String is effectively untyped, which is why they do things like Solving the Boolean Identity Crisis.
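One practical difference between the two JavaScript forms above (and a plausible reason the React docs stick with declarations) is hoisting. A quick runnable check — `yoArrow` is just an illustrative name:

```javascript
// Function declarations are hoisted: this call works even though the
// definition appears later in the file.
console.log(yo("Jesse")) // "Yo, Jesse!"

function yo(name) {
  return `Yo, ${name}!`
}

// Arrow functions bound with const are NOT hoisted; calling yoArrow
// above its definition would throw a ReferenceError.
const yoArrow = name => `Yo, ${name}!`
console.log(yoArrow("Jesse")) // "Yo, Jesse!"
```

Behavior-wise the two are interchangeable here; the difference only matters for where in the file you may call them (and for `this` binding, which doesn't apply to these examples).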
Mutation

While much of React encourages immutability, mutation is much easier for many people to understand. This is why tools like Immer are so popular for use in Redux. In JavaScript, if you want to update some data on a Person Object, you just set it.

person = { name: "Jesse" }
person.name = "Albus"

However, with the increase in support for immutable data, you can use object spread in a destructuring assignment to avoid mutating the original object:

personB = { ...person, name: "Albus" }

In Elm, everything is immutable. You cannot mutate data. There is no var or let, and everything is a const that is _actually_ constant (as opposed to JavaScript's const myArray = [], which you can still myArray.push to). To update data, you use record update syntax, which looks similar:

{ person | name = "Albus" }

HTML

React uses JSX, which is an easier way to write HTML with JavaScript integration that enables React to ensure your HTML and data are always in sync. It's not HTML, but it can be used inside of JavaScript functions, making the smallest React apps just 1 file. All JSX is assumed to have a root node, often a div if you don't know semantic HTML like me. Just about all HTML tags, attributes, and events are supported. Here is an h1 title:

<h1>Hello, world!</h1>

Elm uses pure functions for everything. This means HTML elements are also functions. Like React, all HTML tags, attributes, and events are supported. The difference is they are imported from the Html module at the top of your main Elm file.

h1 [] [ text "Hello, world!" ]

Components

In React, the draw is creating components, specifically function components. React is based on JavaScript. This means you can pass dynamic data to your components, and you have flexibility in what those Objects are and how they are used in your component. You can optionally enforce types at runtime using prop types.
function Avatar(props) {
  return (
    <img className="Avatar"
      src={props.user.avatarUrl}
      alt={props.user.name}
    />
  )
}

In Elm, there are 2 ways of creating components. The first is a function. The other, more advanced way when your code gets larger is a separate file, exposing the function and wiring it up via Html.map. Elm is strictly typed, and types are enforced by the compiler, so there is no need for runtime enforcement. Thus there are no dynamic props; rather, you just define function arguments. You don't have to put a type definition above your function; Elm is smart enough to "know what you meant".

avatar user =
  img
    [ class "Avatar"
    , src user.avatarUrl
    , alt user.name
    ]
    []

View

In React, your View is typically the root component and some type of Redux wrapper, like a Provider.

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  rootElement
)

In Elm, this is a root function called view that gets the store, or Model as it's called in Elm, as the first parameter. If any child component needs it, you can just pass the model to that function.

view model =
  app model

mapStateToProps vs Model

In React, components that are connected use mapStateToProps to have an opportunity to snag off the data they want, or just use it as an identity function and get the whole model. Whatever mapStateToProps returns is what your component gets passed as props.

const mapStateToProps = state => state.person.name // get just the name
const mapStateToProps = state => state // get the whole model

In Elm, your Model is always passed to the view function. If your view function has any components, you can either give them just a piece of data:

view model =
  app model.person.name

Or you can give them the whole thing:

view model =
  app model

In React, you need to configure the connect function to take this mapStateToProps function when exporting your component. In Elm, you don't have to do any of this.
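Since mapStateToProps is just a plain function from state to props, it can be exercised without Redux or connect at all — which is part of why selectors are easy to unit test. A tiny runnable sketch (distinct names used so both variants can coexist; the connect wiring is omitted):

```javascript
const state = { person: { name: "Jesse" }, todos: ["clean my desk"] }

// get just the name
const mapStateToPropsName = s => s.person.name

// get the whole model (identity function)
const mapStateToPropsAll = s => s

console.log(mapStateToPropsName(state))          // "Jesse"
console.log(mapStateToPropsAll(state) === state) // true — same reference
```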
Action Creator vs Messages

In React, if you wish to update some data, you're going to make that intent known formally in your code by creating an Action Creator. This is just a pattern name for making a function return an Object that your reducers will know what to do with. The convention is that, at a minimum, this Object contains a type property as a String.

const addTodo = content => ({
  type: ADD_TODO,
  content
})

// Redux calls for you
addTodo("clean my desk")

In Elm, you just define a type of message called Msg and, if it carries data, the type of data it will hold.

type Msg
  = AddTodo String

-- to use
AddTodo "clean my desk"

In React, Action Creators were originally liked because unit testing them + reducers was really easy, and they were a gateway drug to pure functions. However, many view them as overly verbose. This has resulted in many frameworks cropping up to "simplify Redux", including React's built-in Context getting popular again. In Elm, they're just types, not functions. You don't need to unit test them. If you misspell or misuse them, the compiler will tell you.

View Events

In React, if a user interacts with your DOM, you'll usually wire that up to some event.

const sup = () => console.log("Clicked, yo.")

<button onClick={sup} />

In Elm, same, except you don't need to define the handler; Elm automatically calls the update function for you. You just use a message you defined. If the message doesn't match the type, the compiler will yell at you.
type Msg
  = Pressed
  | AddedText String

button [ onClick Pressed ] []   -- works
input [ onInput Pressed ] []    -- fails to compile: input passes text, but Pressed has no parameter
input [ onInput AddedText ] []  -- works because the input changing will pass text, and AddedText holds a String

mapDispatchToProps vs Msg

In React Redux, when someone interacts with your DOM and you want that event to update your store, you use the mapDispatchToProps object to say that a particular event fires a particular Action Creator, and in your component you wire it up as an event via the props. Redux will then call your reducer functions.

const increment = () => ({ type: 'INCREMENT' }) // action creator

const mapDispatchToProps = { increment }

const Counter = props => (
  <button onClick={props.increment} />
)

export default connect(
  null,
  mapDispatchToProps
)(Counter)

In Elm, we already showed you; you just pass your message in the component's event. Elm will call update automatically. The update is basically Elm's reducer function.

type Msg
  = Increment

button [ onClick Increment ] []

Store vs Model

In Redux, your store abstracts over "the only variable in your application" and provides an abstraction API to protect it. It represents your application's data model. The data it starts with is the default value your reducer (or many combined reducers) function has, since it's called with undefined at first. There is a bit of plumbing to wire up this reducer (or combined reducers) which we'll ignore.

const initialState = { name: 'unknown' }

function(state = initialState, action) {...}

In Elm, you first define your Model's type, and then pass it to your browser function as the init value, or "the thing that's called when your application starts".
Many tutorials will show an initialModel function, but for smaller models you can just define it inline like I did below:

type alias Model =
  { name : String }

main =
  Browser.sandbox
    { init = { name = "Jesse" }
    , view = view
    , update = update
    }

There isn't really a central store that you directly interact with in Redux. While it does have methods you could use before Hooks became commonplace, most of the best practices are just dispatching Action Creators from your components. It's called a store, but really it's just 1 or many reducer functions. You can't really see the shape of it until runtime, especially if you have a bunch of reducer functions. In Elm, it's basically the same, but the Model DOES exist. It's a single thing, just like your store is a single Object. That type and initial model you can see, both at the beginning of your app and at runtime.

Reducers vs Update

The whole reason you use Redux is to ensure your data model is immutable and to avoid a whole class of bugs that arise using mutable state. You also make your logic easier to unit test. You do that via pure functions, specifically the reducer functions that make up your store. Every Action Creator that is dispatched will trigger one of your reducer functions. Whatever that function returns, that's your new Store. It's assumed you're using object spread, Immutable.js, or some other Redux library to ensure you're not using mutation on your state. If you're using TypeScript, you can turn on strict mode in the compiler settings to ensure your switch statement doesn't miss a possible eventuality.

const updatePerson = (state, action) => {
  switch (action.type) {
    case 'UPDATE_NAME':
      return { ...state, name: action.newName }
    default:
      return state
  }
}

Elm has no mutation, so no need to worry about that. Whenever a Msg is dispatched from your view, the Elm runtime will call update for you. Like Redux reducers, your job is to return the new Model from that function.
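To see why spread-based reducers count as immutable updates, here's a small runnable sketch (plain Node, no Redux; the updatePerson reducer is copied from above, and the sample state values are just illustrative):

```javascript
// A pure reducer: returns a new state object via spread instead of
// mutating the old one.
const updatePerson = (state, action) => {
  switch (action.type) {
    case 'UPDATE_NAME':
      return { ...state, name: action.newName }
    default:
      return state
  }
}

const original = { name: 'Jesse', house: 'Gryffindor' }
const updated = updatePerson(original, { type: 'UPDATE_NAME', newName: 'Albus' })

console.log(original.name) // "Jesse" — the input was not mutated
console.log(updated.name)  // "Albus"
console.log(updated.house) // "Gryffindor" — other fields are copied over

// Unknown actions fall through to default and return the same reference.
console.log(updatePerson(original, { type: 'NOPE' }) === original) // true
```

This is also what makes reducers trivial to unit test: no mocks, no setup, just inputs and outputs.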
Like TypeScript's switch statement strictness, Elm's built-in pattern matching will ensure you cannot possibly miss a case. Note that there is no need for a default, because that can't happen.

update msg model =
  case msg of
    UpdateName name ->
      { model | name = name }

JavaScript, TypeScript, and Elm can still, however, end up in impossible states. You should really think about using types fully to make impossible states impossible.

Thunk & Saga vs Elm

In React, as soon as you want to do something asynchronous in Redux, you need to reach for some way to make your Action Creator plumbing async. Thunks are the easiest; you offload the async stuff to the code in your Components, and it's just a normal Promise that pops out Action Creators at various times: before, during, after success, after failure.

Sagas are more advanced and follow the saga pattern. For situations where the back-end APIs are horrible, and you have to do most of the heavy lifting of orchestrating various services on the front-end, Sagas offer a few advantages. First, they allow you to write asynchronous code in a pure-function way. Second, they maintain state _inside_ the functions. Like closures, they persist this state when you invoke them again and still "remember" where you were. In side-effect-heavy code where you don't always have a lot of idempotent operations, this helps you handle complex happy and unhappy paths to clean up messes and still inform the world of what's going on (i.e. your Store). They even have a built-in message bus for these Sagas to talk to each other with a reasonable amount of determinism. They're hard to debug, a pain to test, verbose to set up, and a sign you need heavier investment in tackling your back-end-for-your-front-end story.

Elm has no side effects. Calling Http.get doesn't actually make an HTTP XHR/fetch call; it just returns an Object describing what you want done. While you can do async things with Task, those are typically edge cases.
So there is no need for libraries like Thunk or Saga. Whether the action is sync, like calculating some data, or async, like making an HTTP call, Elm handles all that for you using the same API. You'll still need to create, at minimum, 2 Msgs: one for initiating the call, and one for getting a result back, whether the HTTP call worked or not.

Both React & Elm still have the same challenge of defining all of your states, and having a UI designer capable of designing for those. Examples include loading screens, success screens, failure screens, no-data screens, unauthorized-access screens, logged-out re-authentication screens, effectively articulating to Product/Business why modals are bad, and API throttling screens. No one has figured out race conditions.

Error Boundaries

React has error boundaries, a way for components to capture an error from children and show a fallback UI vs. the whole application exploding. While often an afterthought, some teams build in these Action Creators and reducers from the start for easier debugging in production and a better overall user experience.

Elm does not have runtime exceptions, so there is no need for this. However, if you utilize ports and talk to JavaScript, you should follow the same pattern as in Redux and create a Msg in case the port you're calling fails "because JavaScript". While Elm never fails, JavaScript does, and will.

Adding a New Feature

When you want to add a new feature to React Redux, you typically go, in order:

- create a new component(s)
- add new hooks/action creators
- update your mapDispatchToProps
- add a new reducer
- re-run your test suite in hopes you didn't break anything

To add a new feature to Elm, in order:

- create a new component(s)
- add a new Msg type
- add that Msg type to your component's click, change, etc.
- update your update function to include the new Msg
- the compiler will break, ensuring that when it compiles, your app works again

That #5 for Elm is huge.
Many have learned about this after working with TypeScript for a while. At first, battling an app that won't compile all day feels like an exercise in futility. However, they soon realize that is a good thing, and the compiler is helping them a ton, quickly (#inb4denorebuilttscompilerinrust). When it finally does compile, the amount of confidence they have is huge. Unlike TypeScript, Elm guarantees you won't get exceptions at runtime. Either way, this is a mindset change of expecting the compiler to complain. This eventually leads you to extremely confident, massive refactoring of your application without fear.

Updating Big Models

React and Elm both suffer from being painful when updating large data models. For React, you have a few options. Two examples: just use a lens function like Lodash's set, which supports dynamic, deeply nested paths in 1 line of code… or use Immer. For Elm, lenses are an anti-pattern because the types ensure you don't get "undefined is not a function"… which means everything has to be typed, which is awesome… and brutal. I just use helper functions.

Testing

For React, the only unit tests you typically need are around your reducer functions. If those are solid, then most bugs are caused by your back-end breaking, or changing the JSON contract on you unexpectedly. The minor ones, like misspelling a click handler, are better found through manual & end-to-end testing vs. mountains of Jest code. End-to-end / functional tests using Cypress can tell you quickly if your app works or not. If you're not doing pixel-perfect designs, then snapshot tests add no value, and they don't often surface what actually broke. The other myriad of JavaScript scope/closure issues are found faster through manual testing or Cypress. For useEffect, godspeed.

For Elm, while they have unit tests, they don't add a lot of value unless you're testing logic, since the types solve most issues. Unit tests are poor at validating correctness and race conditions.
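The "helper functions" approach for big models can be sketched in plain JavaScript. setIn here is a hypothetical name, not a Lodash API (note that Lodash's _.set mutates its input by default, which is exactly why a copying helper like this is handy):

```javascript
// Immutably set a value at a deep path, copying only the objects along
// that path; everything off the path keeps its original reference.
const setIn = (obj, path, value) => {
  if (path.length === 0) return value
  const [key, ...rest] = path
  return { ...obj, [key]: setIn(obj[key] ?? {}, rest, value) }
}

const state = { person: { name: 'Jesse', address: { city: 'Atlanta' } } }
const next = setIn(state, ['person', 'address', 'city'], 'Savannah')

console.log(next.person.address.city)  // "Savannah"
console.log(state.person.address.city) // "Atlanta" — original untouched
console.log(next.person.name)          // "Jesse" — copied along the way
```

The Elm equivalent is a set of typed helper functions doing record updates at each level, which is more verbose but checked by the compiler.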
Typically, strongly typed functional programming languages are ripe for property / fuzz testing: giving your functions a bunch of random inputs with a single test. However, in practice this typically only happens when you're parsing a lot of user input for forms. Otherwise, the server is typically doing the heavy lifting on those types of things. Instead, I'd focus most of your effort on end-to-end tests here as well, with unhappy paths, to surface race conditions.

Conclusions

React and Elm both have components. In both languages, they're functions. If you use TypeScript in React, then they're both typed. Your Action Creators are a Msg type in Elm. If you use TypeScript, they're a simpler discriminated union.

In React, you have a Store, which is 1 big Object representing your application's data model. Through Event Sourcing, it's updated over time. In Elm, you have a single Model, and it's updated over time as well.

In React, through a ton of plumbing, your Action Creators are dispatched when you click things to run reducer functions. These pure functions return data to update your store. Elm is similar; clicking things in your view dispatches a Msg, and your update function is called with this message, allowing you to return a new model.

Both require good UI designers to think about all the possible states, and both get good returns on investment in end-to-end / functional tests. For Elm, you don't need to worry about error boundaries or async libraries.
https://jessewarden.com/2020/10/react-developers-crash-course-into-elm.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+jessewarden+%28Flex+and+Flash+Developer+-+Jesse+Warden+dot+Kizz-ohm%29
From: John Maddock (john_at_[hidden])
Date: 2006-06-04 11:12:58

AlisdairM wrote:
> Several TR1 tests are failing due to an old workaround that took swap
> out of namespace std::tr1.
>
> The latest compiler is much more compatible with the overloads for
> complex math, so updated that workaround too.

Good.

> .

Yep, I usually remember not to do that, but sometimes they slip in.

> Diffs follow against 1_34_0 branch. OK to commit?
> [mainline, then merge to branch]

Almost certainly, but I don't see the diffs attached?

John.

Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
http://lists.boost.org/Archives/boost/2006/06/105770.php
Banner selector in ActionScript 3, part 2

This is part 2 of the banner selector in ActionScript 3 tutorial, where the images will transition to the next image. Make sure you have completed the previous tutorial before attempting this one, as additional code and modifications are added. I have also used the TweenLite plugin for the tweening, which can be downloaded here.

Step 1

Open up your banner selector FLA file. On the timeline, select the Images layer and remove all the keyframes (Shift + F6). This will leave you with an empty Images layer on the timeline and nothing on the stage. Your timeline should look like below.

Step 2

Still on the Images layer, drag your images from the library onto the stage so they sit horizontally next to each other. Make sure your first image exactly matches the dimensions of the stage area, so its x and y positions are at 0.

Step 3

Select all your images on the stage, convert them into a movie clip (F8), and then give it the instance name banner_mc.

Step 4

Open the rulers by selecting View > Rulers on the menu bar, then find the horizontal starting position of each of your images. My image starting positions are 0, 400, 800, and 1200, but yours will likely be different.

Step 5

On the timeline, select the Actions layer, then open up the Actions panel (F9) and make the following modifications. I assume you have completed the previous banner selector tutorial, as most of the code has been left out.

//Import the TweenLite packages.
import com.greensock.*;

//This function tweens to the appropriate image on the banner. Note my image starting
//positions are negative, as the banner is moving to the left, which is backwards.
function playBanner(event:MouseEvent):void
{
	switch (event.target)
	{
		case button1 :
			TweenLite.to(banner_mc, 2, {x:0});
			break;
		case button2 :
			TweenLite.to(banner_mc, 2, {x:-400});
			break;
		case button3 :
			TweenLite.to(banner_mc, 2, {x:-800});
			break;
		case button4 :
			TweenLite.to(banner_mc, 2, {x:-1200});
			break;
	}
}

Step 6

Test your movie (Ctrl + Enter). You should notice that the images now transition instead of jumping to the next image.
http://www.ilike2flash.com/2010/02/banner-selector-in-actionscript-3-part.html
The recent CTP release of ADO.NET Data Services (Astoria) V1.5 contains a nice feature that allows result-set counting on the server side. In this blog post, I'll go over the various ways you can benefit from this feature in your data service.

Types of Row Count

Before going into details on how to use the feature, we should first take a look at what the server will count for you. There are two scenarios here: counting the total result set, and counting what the server actually puts on the wire.

Counting the total result set is helpful when you need to know exactly how many entities exist in a set, regardless of any paging constraints you put on the query. Take the classic Northwind database, for example. There are 91 Customers in the database. If you ask the server "How many customers are there in total?", you are asking for the total count on the set "Customers", and the result will always be 91, even if you instruct the server to only give you the first 10 customers back (via the $top=10 query option). This type of counting is very useful when you want to do client-driven paging, since you must know the total number of entities before you can calculate how many page links to render on the control. Astoria supports counting the total result set via the $inlinecount=allpages option. As the name suggests, the count value will be returned inline with the actual result set.

The other type of counting is when you want to know beforehand how many entities the server will (eventually) put on the wire when you execute the query. The reason I put "(eventually)" in that sentence will be explained later. This type of counting happens when you ask the server "How many orders will you give back if I want the first 100 orders made by customer 'ALFKI'?".
The server will answer "6" in this case because there are only 6 orders under customer 'ALFKI', but if you change the question to "How many orders will you give back if I want the first 5 orders made by 'ALFKI'?", then the server will tell you "5". This type of counting is supported in Astoria by the $count segment, and since you are only interested in the count value, the result is just a plain-text number.

In other words, $inlinecount=allpages is a query option that gives you the count neglecting any paging effects ($skip, $top, etc.), while the $count segment takes explicitly stated paging effects into account.

Inline Counting

When you specify $inlinecount=allpages in the query option string, the server will respond with the normal syndication of the result set (or a JSON block) plus the count value embedded in the results. In ATOM serialization, the count is returned in a feed-level tag called "count", under the "Metadata V2" namespace. For example, the query "/Customers?$inlinecount=allpages&$top=5" will give you the following tag plus 5 customers:

<m2:count xmlns:m2=" dataservices/metadata">91</m2:count>

Inline Counting on the Client

On our client, if the resulting feed contains the count tag, you can extract the value by accessing the "TotalCount" property on the QueryOperationResponse object. Of course, if the count tag is not present, getting the property value will cause an InvalidOperationException to be thrown.

There are many ways for the client to generate a request that causes the server to respond with the count tag. We have provided a new API for ALinq users called IncludeTotalCount(). The method exists on the DataServiceQuery class and returns a new instance of DataServiceQuery that will cause the $inlinecount query option to be generated.
For example, you can write:

var q = (from c in ctx.CreateQuery<Customers>("/Customers").IncludeTotalCount().Take(5)
         select c) as DataServiceQuery<Customers>;
var results = q.Execute() as QueryOperationResponse<Customers>;
long countValue = results.TotalCount;

Another way to achieve this is to use the AddQueryOption API and manually add the "inlinecount" option with the value "allpages". Of course, if you just specify a URI with the correct counting option and directly execute it from the context, you will also be able to use the TotalCount property to access the count tag in the result. It is also possible to batch requests generated by IncludeTotalCount; the individual count values can be accessed in each of the QueryOperationResponse objects in the batch response.

Value Counting

The $count segment can be added to any entity set on the server; the result is a plain-text response that represents the count of entities in that set. Different from inline counting, query options can be added to this segment to modify the number of entities to be counted. For example, given that there are 91 customers in total, /Customers/$count returns 91, while /Customers/$count?$top=5 returns 5, because $count respects explicitly stated paging options.

Since the response is in plain-text format, the Accept header should be compatible with "text/plain". For example, "*/*" will work, but "application/json" will cause an exception to be thrown.

Value Counting on the Client

In V1, calling Count and LongCount on a DataServiceQuery<> would throw NotSupportedException. These two APIs are now implemented and mapped to the $count segment on the corresponding entity set. Note that these APIs cause the query to execute immediately and return the count value; hence it's a synchronous operation and thus not available on the Silverlight client.
Here's an example of how you can use these APIs:

var q = (DataServiceQuery<Customers>)(from c in ctx.CreateQuery<Customers>("Customers")
                                      select c);
long countValue = q.LongCount();

You can call Count or LongCount on any DataServiceQuery<> that doesn't already have a counting option (i.e., one that's generated by IncludeTotalCount).

Counting on the Links Endpoint

The "inlinecount" query option and the $count segment can also be applied to $links endpoints. However, there are no equivalent APIs on the client side. Here are some examples illustrating how:

/Customers('ALFKI')/$links/Orders?$inlinecount=allpages returns plain XML with the <m2:count> tag embedded.

/Customers('ALFKI')/$links/Orders/$count returns the plain-text response "6" (there are 6 links to customer ALFKI's orders).

Counting with Expansion

When you specify inline counting together with expansions (via $expand), the count value represents the number of entities that exist in the outermost set. Each entity will be expanded as normal, but there won't be a count tag for each of the expanded sets. When you specify value counting (the $count segment), expansion is entirely ignored.

Counting with Server-Driven Paging

Server-driven paging (SDP) is a new feature in V1.5 that allows server-side enforced paging. This means that on SDP-enabled services, you may be given back a partial set for any request you make, together with a link to where to get the next part of the set. Ideally, SDP would not affect counting; hence, at the beginning of the article, I wrote "how many entities the server will (eventually) give back". However, this is not the case right now. When you enable SDP, you will find that both $inlinecount and $count are affected by the partial-set effect. This, however, is a known issue, and we are tracking it right now.
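To make the inline-count wire format concrete, here's a small illustrative sketch in Python (not the actual .NET client — the real TotalCount property does this for you). The feed below is a hypothetical trimmed-down payload, and the metadata namespace URI is a placeholder stand-in, since the snippet earlier shows it truncated; the parser therefore matches on the tag's local name only:

```python
import xml.etree.ElementTree as ET

# A trimmed-down ATOM feed as a service might return it (illustrative).
SAMPLE_FEED = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:m2="urn:example:dataservices-metadata">
  <m2:count>91</m2:count>
  <entry><title>ALFKI</title></entry>
</feed>"""

def extract_inline_count(feed_xml):
    """Return the inline count as an int, or None if the tag is absent
    (mirroring how TotalCount throws when no count was requested)."""
    root = ET.fromstring(feed_xml)
    for elem in root.iter():
        # Match on the local name so we don't hard-code the namespace URI.
        if elem.tag.rsplit('}', 1)[-1] == 'count':
            return int(elem.text)
    return None

print(extract_inline_count(SAMPLE_FEED))  # 91
```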
http://blogs.msdn.com/b/peter_qian/archive/2009/03/18/getting-row-count-in-ado-net-data-services.aspx
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also

SYNOPSIS

     #include <stdlib.h>

     int setenv(const char *envname, const char *envval, int overwrite);

DESCRIPTION

     The setenv() function updates or adds a variable in the environment of
     the calling process. The envname argument points to a string containing
     the name of an environment variable to be added or altered. The
     environment variable is set to the value to which envval points. The
     function fails if envname points to a string which contains an '='
     character. If the environment variable named by envname already exists
     and the value of overwrite is non-zero, the function returns successfully
     and the environment is updated. If the environment variable named by
     envname already exists and the value of overwrite is zero, the function
     returns successfully and the environment remains unchanged.

     If the application modifies environ or the pointers to which it points,
     the behavior of setenv() is undefined. The setenv() function updates the
     list of pointers to which environ points. The strings described by
     envname and envval are copied by this function.

RETURN VALUES

     Upon successful completion, 0 is returned. Otherwise, -1 is returned,
     errno is set to indicate the error, and the environment is left
     unchanged.

ERRORS

     The setenv() function will fail if:

     EINVAL    The envname argument is a null pointer, points to an empty
               string, or points to a string containing an '=' character.

     ENOMEM    Insufficient memory was available to add a variable or its
               value to the environment.

ATTRIBUTES

     See attributes(5) for descriptions of the attributes of this function.

SEE ALSO

     getenv(3C), unsetenv(3C), attributes(5), standards(5)
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i099ic/index.html
Created attachment 623363 [details]
compartment merging

Here's another idea for reducing CC pauses when tearing down pages, which is a less radical version of bug 751283: if we somehow decide we're killing a particular compartment (perhaps by doing something similar to bug 695480), we represent the entire compartment as a single node in the CC graph. We treat pointers to objects in that compartment as pointers to the compartment. We implement a special JSCompartment participant. I'm hoping this can be implemented more easily than some of my other ideas for speeding up the CC, and that it will help in the worst cases.

A) Out edges

Instead of tracing through individual objects in a compartment, we directly scan the compartment for out edges. There are two kinds of out edges of a compartment.

1) Cross-compartment wrappers (CCWs). Each compartment keeps a table of wrappers, so we can NoteJSChild all of them. Any wrapper that is the target of a marked object must itself also be marked, and thus won't be added to the CC graph.

2) C++ pointers. For this, we call nsXPConnect::Traverse on all gray objects in the compartment, forwarding the NoteXPCOMChildren we get and ignoring the rest.

Each compartment is treated as unmarked. This is only okay for non-WANT_ALL_TRACES CCs, but we avoid that problem by not doing compartment blobbing in W_A_T CCs. In essence, we are only dealing with the portion of the compartment that is marked gray. Unlinking a compartment, like unlinking a JS object, doesn't do anything, so this should be okay, as long as we respect the reachability relation for all paths we do take. We don't add any edges from black objects to C++ objects, so they will miss a reference and be kept alive. For CCWs, if the source in our compartment is black, then the target in another compartment should also be black, and thus won't be added to the graph.

B) Performance

Scanning compartments in this way should hopefully be faster than tracing through them.
1) We don't ever have to examine non-objects (shapes, type objects, strings, etc.). 2) Locality should be better because we're tracing linearly through arenas instead of randomly, and intermixed with other things outside of the compartment. 3) There is less XPConnect-CC glue. C) Experiment I did a very crude experiment which suggests that in the case of closing a tab you can determine that a compartment is garbage without examining its interior geometry. See the attached CC graph, which only includes the largest globs in the graph. What I did here was modify the CC logging to print out the compartment of each JS object, then I merged together all JS things in a single compartment together in the graph. The squares are JS things that are garbage. There are two very large ones, one with 9933 objects in it, and another with 2472 objects. In this particular case, it appears that these compartment blobs would not be kept alive by anything, even with this imprecise information about reachability. Side node: the hexagon with 1517 nodes in it is a DOM. Separately, I'd like to look into khuey's idea for merging together DOMs in the CC graph. A DOM is strongly connected, so merging a DOM in that way is precise. 
Created attachment 623712 [details] [diff] [review]
WIP, doesn't crash immediately

To my great shock, this seems to work reasonably well, at least for a simple example of closing a single tab with this page on it:

without merging:
CC(T+31.1) duration: 22ms, suspected: 66, visited: 2226 RCed and 10395 GCed, collected: 1678 RCed and 9954 GCed (11632 waiting for GC)
ForgetSkippable 5 times before CC, min: 0 ms, max: 5 ms, avg: 1 ms, total: 7 ms, removed: 1310

with merging:
CC(T+39.8) duration: 8ms, suspected: 79, visited: 2257 RCed and 34 GCed, collected: 1663 RCed and 4 GCed (1667 waiting for GC)
ForgetSkippable 6 times before CC, min: 0 ms, max: 3 ms, avg: 1 ms, total: 9 ms, removed: 1594

This is a debug build, so the timing may be suspect, but you can see that this got rid of 80% of the graph. Traversing the compartment is probably going to be slower than traversing an individual object, but it looks like it might be faster overall.

Created attachment 623721 [details]
with compartment merging

This is the entire CC graph, with compartment merging. I still use post-facto DOM merging, as in the other PDF file. But you can see that the two blobs present in the normal graph (with approximately 10000 and 2500 nodes) have each been reduced to a single node.

Here's another fun little comparison. I opened 5 TechCrunch tabs, then a new empty tab, then let things settle out, then I did "close all other tabs" on the empty tab, closing the 5 TechCrunch tabs.
with merging:
CC(T+74.0) duration: 99ms, suspected: 1665, visited: 28869 RCed and 93 GCed, collected: 27962 RCed and 59 GCed (28021 waiting for GC)
ForgetSkippable 3 times before CC, min: 0 ms, max: 6 ms, avg: 4 ms, total: 13 ms, removed: 194

without merging:
CC(T+57.1) duration: 568ms, suspected: 1702, visited: 28911 RCed and 207627 GCed, collected: 28175 RCed and 207184 GCed (235359 waiting for GC)
ForgetSkippable 4 times before CC, min: 0 ms, max: 8 ms, avg: 4 ms, total: 19 ms, removed: 383

My experimental patch above always merges compartments. This is bad, because it is possible to have one cycle through a compartment that is garbage while another is not, so merging will cause us to hold memory alive for longer. It is also possible that if we have only a single object in a compartment to add to the CC graph, it would be faster to just add that object. I need to decide when we want to enable compartment merging. I've been trying it out in nsGlobalWindow::FreeInnerObjects(). Generally, we want to enable it for a compartment that we're pretty sure is going to be freed. We could enable compartment merging either globally or on a per-compartment basis. There's also the matter of how long it should remain enabled. If a compartment has been marked mostly black by CC optimizations, then we really want to keep merging it until there has been a GC.

When I was trying out a variant of my patch that set global merging for the first CC after a FreeInnerObjects(), I noticed a case that went:

FreeInnerObjects() -> merged CC that doesn't do anything -> GC -> big CC

So that's bad. I could wait until there has been a GC, but in the modern era, where the browser does compartmental GCs more often, that may not be sufficient. We may want to do something like suspect a compartment, and then keep merging until after a GC has been done on that compartment. I don't know how hard that is.
Anyways, as you can see there's a pretty huge state space here, and I'd appreciate any thoughts people may have. Also, FYI, a Linux64 Try run was green, except for some Moth tests that were crashing, probably due to my failure to support Weak Maps.

Olli tried out my patch, and it sounded like it was pretty leaky. Leaks caused by this patch won't show up as shutdown leaks, because I think the browser kind of throws out compartments pretty aggressively at shutdown. I think the leaks I saw were somehow related to a_chrome_compartment<->another_chrome_compartment edges.

(In reply to Olli Pettay [:smaug] from comment #7)
> I think the leaks I saw were somehow related to
> a_chrome_compartment<->another_chrome_compartment edges.

It might be a good idea to only merge content compartments at first, as those are probably larger and more common to tear down. Of course, we still want to not merge them all the time.

I think the strategy I'm going to go with is a global setting in a CC to merge compartments or not. The extra complexity from a more fine-grained enabling of merging doesn't seem to buy you anything, and this will let me hoist the decision about merging or not to nsJSEnvironment, which knows when GCs are scheduled. I did some timing of how long the compartment scanning was taking, and while it takes a decent chunk of time, it doesn't seem to take more than 10% of the CC time, so it isn't really worth trying to flip it on or off for individual compartments to try to eke out speed improvements in the case where only a few objects in a compartment would be visited.

Created attachment 624600 [details] [diff] [review]
revised compartment merging patch

This patch is similar to the previous version, but it adds a new argument to the cycle collector that controls whether compartments are merged, to allow us to control scheduling of merged compartments from nsJSEnvironment.
Created attachment 624604 [details] [diff] [review]
part 2: add simple heuristic for scheduling compartmental merging CCs

Here's a rough attempt at a fairly simple heuristic for controlling compartmental merging for CCs.

This adds a function |nsJSContext::PrepareForBigCC()| that is supposed to be called in places where you suspect a compartment is going to begin the process of being torn down. Right now, this is only called in nsGlobalWindow::FreeInnerObjects(). When that function is called, it sets a flag, |GotPrepForBigCC|. We don't necessarily do a merging CC at the next CC: we assume the compartment being torn down was subject to extensive CC optimizations, and thus likely has been marked black and won't truly be ready for being torn down until the GC can be run again.

At the start of the next GC where the flag has been set, we clear that flag and set another flag, NextCCIsBig. (Note that due to incremental GC, we want to do this at the start of the GC and not the end: any compartment that starts the teardown process during the GC won't necessarily get its black bits cleared during this GC.) At the start of a CC, if NextCCIsBig is set, we clear it and do a merging CC.

Having two flags allows us to cover the case where we signal a big CC, do a GC, then signal another big CC, then do a CC. In this event, we want to know that we should do a merging CC after the next GC, to cover the second signaled big CC.

One scenario I am somewhat concerned about, which this code does not cover, is non-merging-CC starvation: we could potentially end up doing merging CCs over and over, and not free up some memory. It didn't seem to be a big problem in some simple browsing I was doing, but maybe it could be in general. I'd probably solve this by adding yet another timer, that would force a non-merging CC at least once a minute or whatever.
Created attachment 625251 [details] [diff] [review]
part 1: add JS hooks for scanning CC things in a compartment

Created attachment 625259 [details] [diff] [review]
part 2: implement CompartmentParticipant, add hooks for it in CC

Created attachment 625260 [details] [diff] [review]
part 3: schedule merging CC after killing windows

Comment on attachment 625251 [details] [diff] [review]
part 1: add JS hooks for scanning CC things in a compartment

Bill, how reasonable/unreasonable does this look? I forgot when I was talking to you in IRC before that I'd made my custom scanning function by modifying IterateCells, the function you recommended I use. I can probably use IterateCells directly, with a callback wrapper, if that's cleaner.

Comment on attachment 625251 [details] [diff] [review]
part 1: add JS hooks for scanning CC things in a compartment

This seems okay. What do you need the wrapper map callback for?

(In reply to Bill McCloskey (:billm) from comment #16)
> This seems okay. What do you need the wrapper map callback for?.

(In reply to Andrew McCreight [:mccr8] from comment #17)
>.

That's most of them. Unfortunately there are more. My intention with bug 742841 is to gather all the cross-compartment references into a single map. It's stalled right now, but I could get back to it if you need it.

Ah, that's unfortunate. Are the debugger objects just the first ones you happened to start patching? Or can I just disable this optimization when there are any debugger objects. ;) I could also limit this to content compartments or something. Well, if it is just these debugger objects, then I can just disable it for the debugger compartment. Though I'm not sure I want to add jsfriendapi hooks for telling if a compartment is a debugger compartment or not.

The debugger is the only thing I know of that has cross-compartment pointers that aren't in the wrapper map. I'll try to get that patch ready soon. The alternatives don't sound very appealing.

Awesome, thanks!
My current heuristic for triggering these isn't perfect:

1. GC(T+473.5) Total Time: 169.2ms, Compartments Collected: 271, Total Compartments: 314 // SET_NEW_DOCUMENT
2. CC(T+478.4) duration: 70ms, suspected: 2303, visited: 32071 RCed and 95 merged GCed, collected: 1838 RCed and 10 GCed (1848 waiting for GC)
3. GC(T+482.5) Total Time: 42.7ms, Compartments Collected: 244, Total Compartments: 314 // CC_WAITING
4. GC(T+483.5) Total Time: 88.9ms, Compartments Collected: 287, Total Compartments: 287 // FULL_GC_TIMER
5. CC(T+484.8) duration: 395ms, suspected: 211, visited: 30187 RCed and 183877 GCed, collected: 29395 RCed and 183419 GCed (212814 waiting for GC)

What happens here is that we trigger a GC after closing 4 TC tabs, then do our merged CC, but for some reason the compartmental GC didn't unmark enough to actually collect much. Then we have another few GCs, including a full GC, and finally we get a CC that frees things, but we already did our merged CC, so we get a big CC. This doesn't seem to happen much, though, even in the presence of compartmental GCs. One possible approach would be to keep a set of globals we've designated as being cleared, and then track on an individual basis whether they have been GCed, and somehow use that to drive our decision to merge or not.

One possible cause for this problem is that I opened a bunch of TC tabs, then opened a blank tab, which I kept in the foreground. Then I let the TC tabs sit for a while. If this caused them to stop running JS (because they were in the background and I wasn't interacting with them), then the compartmental GC wouldn't visit them in step 1. There seems to be some provision for that in nsJSContext::GC, but maybe with CPG that isn't sufficient. I'll have to investigate further.

Should we mark closed nsJSContexts active?

(In reply to Olli Pettay [:smaug] from comment #25)
> Should we mark closed nsJSContexts active?

That sounds like a good idea!
That will avoid the problem in comment 24 without any further changes, I think, and sounds like a good idea to boot. I filed bug 757884 for that.

I kind of want to use JSTracer instead of the new GCThingCallback (to make handling these children more uniform), but it uses indirect marking, and passing in a reference to a key of crossCompartmentWrappers seems a little sketchy.

new rebased and refactored stack of patches:

Oddly, this caused a lot of bug 625273 failures. I still need to make forced CCs non-merging. Maybe that will help.

Two more try runs, two more rounds of the weird failure on OS X. I get some other failures in Mochitest-chrome on Linux, including (two times) a crash from OOM. That at least is more sensible! I may work around this by only merging content compartments. Those are probably most of what we care about anyways. I just hope that that isn't merely papering over failures.

(In reply to Andrew McCreight [:mccr8] from comment #31)
>.

Doesn't the PAGE_HIDE GC happen 5 seconds after the page is closed? That seems like more than enough time to close things down. Do the intervening GC or CC somehow free up something that allows us to recognize that the window is gray?

The PAGE_HIDE GC is the first GC or CC thing that runs after the page close. I'm not sure why the window ends up black. The GC heap dump from the start of the CCs wasn't very useful.

It turns out it is easy enough to look for inactive windows with gray global objects, so I think I'll do that to trigger merging, rather than a weird heuristic about killing windows and tracking whether we've GCed in between.

I added some logging to see how long CCs were, and I think I was getting oranges because merged CC times were becoming horrifically long, a few seconds. More disturbingly, they were intermixed with a few unmerged CCs that were very fast, which maybe could happen if you have huge compartments that are mostly black, but have a few gray objects in them.
Anyways, never merging system compartments seems to have fixed the oranges, but maybe I can do some further tweaking.

Created attachment 636472 [details] [diff] [review]
part 1: add JS hooks for scanning CC things in a compartment

Created attachment 636726 [details] [diff] [review]
part 2: add flag to control JS CC traversal behavior

Created attachment 636727 [details] [diff] [review]
part 3: add shim to NoteJSChild to allow it to be reused

Created attachment 636751 [details] [diff] [review]
part 4: define CompartmentParticipant

Created attachment 636755 [details] [diff] [review]
part 5: add support for JSCompartment merging to the CC

Created attachment 636756 [details] [diff] [review]
part 6: indicate in error console if we did a merging CC

DoMergingCC() is a stub that will be filled in later.

Created attachment 636759 [details] [diff] [review]
part 7: indicate if CycleCollectNow is forced

If a cycle collection is forced, you probably care more about thoroughness than performance, so we shouldn't merge compartments. This patch lays the groundwork for that by adding an argument to CycleCollectNow that indicates if it is forced or not.

Created attachment 636764 [details] [diff] [review]
part 8: do a merging CC when there's a gray global

This is the patch that actually turns on compartment-merging CCs. It schedules a merged CC when the parent of a JSContext's "global object" is gray, and that parent is in a non-system compartment. This is the best way I could come up with to find the global object of the nsGlobalWindow. The idea is that when a window object is marked gray, we're probably going to have a ton of content JS that needs to be cleaned up. This open-ended approach is nice, because it does not rely on trying to guess when that will happen. On the minus side, if a window is leaking, we could end up merging all the time. The next patch attempts to mitigate that problem.
Here's what the first 3 CCs after a page close of TechCrunch look like, without merging:

CC(T+30.5) duration: 13ms, suspected: 369, visited: 5000 RCed and 4831 GCed, collected: 4123 RCed and 1196 GCed (5319 waiting for GC)
ForgetSkippable 12 times before CC, min: 0 ms, max: 2 ms, avg: 0 ms, total: 9 ms, removed: 3779

CC(T+37.0) duration: 59ms, suspected: 350, visited: 5960 RCed and 29541 GCed, collected: 5375 RCed and 26634 GCed (32009 waiting for GC)
ForgetSkippable 3 times before CC, min: 0 ms, max: 2 ms, avg: 1 ms, total: 4 ms, removed: 295

CC(T+43.4) duration: 7ms, suspected: 63, visited: 1193 RCed and 3060 GCed, collected: 24 RCed and 0 GCed (24 waiting for GC)
ForgetSkippable 2 times before CC, min: 0 ms, max: 2 ms, avg: 1 ms, total: 3 ms, removed: 198

-----

Here's what they look like with merging:

CC(T+42.7) duration: 10ms, suspected: 229, visited: 2002 RCed and 3057 merged GCed, collected: 361 RCed and 9 GCed (370 waiting for GC)
ForgetSkippable 8 times before CC, min: 0 ms, max: 2 ms, avg: 0 ms, total: 7 ms, removed: 2613

CC(T+49.1) duration: 18ms, suspected: 331, visited: 6582 RCed and 3168 merged GCed, collected: 5412 RCed and 110 GCed (5522 waiting for GC)
ForgetSkippable 2 times before CC, min: 1 ms, max: 3 ms, avg: 2 ms, total: 4 ms, removed: 121

CC(T+55.1) duration: 7ms, suspected: 36, visited: 1147 RCed and 3056 GCed, collected: 3 RCed and 0 GCed (3 waiting for GC)
ForgetSkippable 2 times before CC, min: 0 ms, max: 2 ms, avg: 1 ms, total: 2 ms, removed: 132

-----

The first one isn't much faster, but the middle one, where the work is actually done, takes less than a third of the time with merging (it has about 1/10th the JS nodes in the graph). Then the third CC for both (after the merging is done because the page has been cleaned up) is the same in both cases. I should say that TechCrunch is an unusually JS-heavy webpage, so this is a best-case scenario. I tried this before with the HTML5 spec and it made no difference.
Comment on attachment 636726 [details] [diff] [review]
part 2: add flag to control JS CC traversal behavior

Parts 2 and 3 shouldn't change behavior.

Comment on attachment 636751 [details] [diff] [review]
part 4: define CompartmentParticipant

This is the most technically complex patch, even though it is pretty short in terms of lines of code, so I'd appreciate it if the two of you could look at it.

Comment on attachment 636755 [details] [diff] [review]
part 5: add support for JSCompartment merging to the CC

Most of the changes here (but not all!) are just threading around aMergeCompartment.

Comment on attachment 636726 [details] [diff] [review]
part 2: add flag to control JS CC traversal behavior

>+enum TraverseSelect {
>+  TRAVERSE_CPP,
>+  TRAVERSE_FULL
>+};

Nit: I don't remember xpconnect's coding style rules, but elsewhere { should be on its own line.

Comment on attachment 636756 [details] [diff] [review]
part 6: indicate in error console if we did a merging CC

Sorry for all the bug spam...

Comment on attachment 636764 [details] [diff] [review]
part 8: do a merging CC when there's a gray global

This patch is a bit hacky. Let me know if you have any better ideas how to trigger a merged CC...

Created attachment 636815 [details] [diff] [review]
part 9: don't merge too much

I almost forgot this patch. The purpose of this patch is to avoid extreme behavior in the merging scheduling. We shouldn't merge more than 3 times in a row, and when we don't merge, we should stay unmerged for at least 3 CCs in a row, to make sure we actually break up garbage. The main bad scenario I am thinking of here is that when we leak a document, we could end up having a gray document all of the time. We don't want to merge all the time, because then we may leak. With this patch, we'll get 3 merged CCs in a row, then 3 unmerged. It may also help with automated tests that open and close pages rapidly.
Comment on attachment 636764 [details] [diff] [review]
part 8: do a merging CC when there's a gray global

>+AnyGrayGlobalParent()
>+{
>+  if (!nsJSRuntime::sRuntime) {
>+    return false;
>+  }
>+  JSContext *iter = nsnull;
>+  JSContext *cx;
>+  while ((cx = JS_ContextIterator(nsJSRuntime::sRuntime, &iter))) {
>+    if (JSObject *global = JS_GetGlobalObject(cx)) {
>+      if (JSObject *parent = js::GetObjectParent(global)) {
>+        if (js::GCThingIsMarkedGray(parent) &&
>+            !js::IsSystemCompartment(js::GCThingCompartment(parent))) {
>+          return true;
>+        }
>+      }
>+    }
>+  }
>+  return false;
>+}

Comment on attachment 636815 [details] [diff] [review]
part 9: don't merge too much

> DoMergingCC(bool aForced)
> {
>-  return !aForced && AnyGrayGlobalParent();
>+  // Don't merge too many times in a row, and do at least a minimum
>+  // number of unmerged CCs in a row.
>+  static const int minConsecutiveUnmerged = 3;
>+  static const int maxConsecutiveMerged = 3;
>+
>+  static int unmergedNeeded = 0;
>+  static int mergedInARow = 0;

PRInt32? And static variables should be sVariableName.

Comment on attachment 636472 [details] [diff] [review]
part 1: add JS hooks for scanning CC things in a compartment

Review of attachment 636472 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsfriendapi.cpp
@@ +465,5 @@
>      return reinterpret_cast<gc::Cell *>(thing)->isMarked(gc::GRAY);
>  }
>
> +JS_FRIEND_API(JSCompartment*)
> +js::GCThingCompartment(void *thing)

I think we've been using GetX in the APIs (e.g., GetContextCompartment). So this should probably be GetGCThingCompartment.

@@ +482,5 @@
> +  }
> +}
> +
> +JS_FRIEND_API(void)
> +js::VisitGrayObjectCells(JSCompartment *compartment, GCThingCallback *cellCallback, void *data)

No need for a wrapper around the jsgc.cpp version. It's fine to have a JS_FRIEND_API function in jsgc.cpp.
::: js/src/jsgc.cpp
@@ +4090,5 @@
>
>  namespace gc {
>
> +void
> +VisitGrayObjectCells(JSCompartment *compartment, GCThingCallback *cellCallback, void *data)

Could you call this IterateGrayCells instead, and move it out of the gc namespace so it's like IterateCells?

Comment on attachment 636755 [details] [diff] [review]
part 5: add support for JSCompartment merging to the CC

>+  JSCompartment *MergeCompartment(void *gcthing) {
>+    if (!mMergeCompartments)
>+      return nsnull;
>+    JSCompartment *comp = js::GCThingCompartment(gcthing);
>+    if (js::IsSystemCompartment(comp)) {
>+      return nsnull;
>+    }
>+    return comp;
>+  }

Please be consistent with {} usage.

(In reply to Bill McCloskey (:billm) from comment #55)
> Could you call this IterateGrayCells instead, and move it out of the gc
> namespace so it's like IterateCells?

Maybe IterateGrayObjects? It only iterates over objects. Or at least, that is the intent. ;) Because objects are the only GC things that can contain pointers to C++ objects.

Comment on attachment 636751 [details] [diff] [review]
part 4: define CompartmentParticipant

Review of attachment 636751 [details] [diff] [review]:
-----------------------------------------------------------------

This all seems surprisingly reasonable. I realize now that VisitGrayObjectCells only visits objects. So maybe a better name would be IterateGrayObjects.

::: js/xpconnect/src/nsXPConnect.cpp
@@ +2667,5 @@
> + * compartment, where one is garbage and the other is live. If we merge the entire
> + * compartment, the cycle collector will think that both are alive.
> + *
> + * We don't have to worry about losing track of a garbage cycle, because any such garbage
> + * cycle incorrectly identified as live must pass contain at least one C++ to JS edge,

"pass contain" -> "contain"

(In reply to Bill McCloskey (:billm) from comment #58)
> This all seems surprisingly reasonable.

Yeah, deciding how to schedule merging CCs ended up being the most annoying part.
I think the core merging code worked the very first time I tried it.

Previously I'd only been testing closing a tab, but I checked just now and merging works fine when navigating, too. I navigated a bit, then I saw some --DOMWINDOW in the log, then we got a merging CC. There were then some more --DOMWINDOW for various follow-button things, then a non-merging CC, but the non-merging CC was pretty fast and didn't have that many JS objects, so it looks like the first merging one caught everything important.

Created attachment 636877 [details] [diff] [review]
(folded patch for testing)

Thanks for the quick reviews, everybody!

Comment on attachment 636815 [details] [diff] [review]
part 9: don't merge too much

Review of attachment 636815 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/base/nsJSEnvironment.cpp
@@ +2987,5 @@
> +  static int unmergedNeeded = 0;
> +  static int mergedInARow = 0;
> +
> +  MOZ_ASSERT(0 <= unmergedNeeded <= minConsecutiveUnmerged);
> +  MOZ_ASSERT(0 <= mergedInARow <= maxConsecutiveMerged);

You know that doesn't work, right?

(In reply to :Ms2ger from comment #64)
> You know that doesn't work, right?

Oops, I guess I need to && stuff together? I've spent too much time using theorem provers...

Created attachment 637926 [details] [diff] [review]
part 10: followup to fix assertion

Comment on attachment 637926 [details] [diff] [review]
part 10: followup to fix assertion

Review of attachment 637926 [details] [diff] [review]:
-----------------------------------------------------------------

rs=me

Thanks for catching this.

Looks like there is a small drop in P75 and P95, and VISITED_GCED is lower too. So, the patch seems to actually work :)
https://bugzilla.mozilla.org/show_bug.cgi?id=754495
Hi,

If anyone would like to try out this new development version of clucene and let us know how well it works with sword (especially on non-English languages), that would be great.

Regards,
Daniel

-------- Original Message --------
Subject: [CLucene-dev] clucene 0.9.0
Date: Mon, 28 Mar 2005 01:57:44 +0200
From: Ben van Klinken

Hi All,

If anyone is interested in trying out my branch, please go to: and download one of the clucene-0.9.0 files

I've changed a lot of things in it, including many performance changes. You should notice a considerable performance increase: in some places 20%, and even up to 100% increases in speed... and more to come.

I've done a major restructure of the configure system, so it should be more x-platform. I've managed to get it to compile on all the compile farms except the PPCs, which are complaining about the _T's. That shouldn't be too hard to fix, although I'm not sure about endianness compatibility...

I've pasted below my change log for this branch. There are a lot of changes now, including some fairly major header changes. There are still a few more changes to go before I suggest people start using it, but if you want to experiment, please go for it.

Note: You'll have to bootstrap to use the Linux version.

Also, I've got the beginnings of a documentation script running, and also an automated build script. Go to clucene.sourceforge.net to see an 'under construction' page.

----

Have changed the clucene interface significantly. All functions that used references have been changed to pointers. Where a function returns a reference to an internal variable, a reference is used. When a function "consumes" a variable, i.e. takes responsibility for deleting the object, the function will be (should be) a reference. This has not been applied rigorously yet, but will be done over a period of time.

Pyclene: I have managed to compile pyclene.
This is what I needed to do:

- I had problems compiling pyclene because ProcessorNameString wasn't defined in my registry. Putting in a semi-valid value seemed to fix it (in my case "amd athlon").
- I replaced all the relevant function names using the list in CLBackwards as a guide.
- By default now, clucene is built with unicode if supported, so an _ASCII preprocessor flag had to be added.
- I had to do some post-processing on the _clucene_wrap.cpp. I was getting two static definitions for many of the functions. For example: static static lucene_util_FileReader___eq__... I removed one of the statics for each function. This was also occurring
- float_t had to be defined in pyclene.i, not sure why.
- HitCollector::collect would not generate its director interface.
- Had to explicitly set up the typemap for TCHAR*.

change log:

Large commit: I realise that some of these changes will break people's code, but I think that in the interest of creating a clucene which is more 'accessible' to new developers, these changes should be made. Please let me know what you think.

* In order to make clucene a more standard release, with which developers can more easily begin programming clucene, some of the exotic functions have been converted to their TCHAR equivalents, which should be more familiar to some and will at least be a more standardised version of the former functions. This change also changes char_t to TCHAR. Include the file CLucene/CLBackwards.h after StdHeader.h to (hopefully) maintain backwards compatibility. See notes in CLBackwards.h for more information.

* Linux Unicode version. Removed UTF8 code in favour of real unicode. Ensure _UNICODE is defined in config.h to enable unicode. _UNICODE uses 2 bytes to store its characters. Note that this is only clucene's internal representation; characters are still stored in the index as UTF8. This brings the Linux version of clucene up to the same character capabilities as java-lucene.

* Implemented debug reference counting.
This allows developers to see which clucene classes have not been properly deleted. Define LUCENE_ENABLE_LUCENEBASE to enable this functionality. Note that this only counts clucene objects: it does not include undeleted non-clucene memory (and strings returned from clucene), nor does it guarantee that memory leaks within clucene objects haven't occurred. Nonetheless, it is still useful for general clucene usage.

* Closely related to the lucenebase functionality, there is a pseudo-reference-counting mechanism. This mechanism is not used internally, but can be used by developers to ensure that their objects are not deleted by clucene's internal handling. A Document returned from a Hits object, for example, will only last as long as it is valid in the hits cache. Calling __cl_addref() on the Document object will ensure it is not deleted by clucene's internals. The Hits cache will call _DELETE on the Document, which will not actually delete the object, and only when the 'owner' calls _DECDELETE (or __cl_decref(), then _DELETE) will the object truly be deleted. Define LUCENE_ENABLE_REFCOUNT to enable this functionality.

* This change has made it important to call _CLNEW when creating clucene objects. _DELETE should always be called for pointers, or _LDELETE for l-values (returned from a function, for example). _DECDELETE calls __cl_decref() and, if the refcount is 0, deletes the object. (Note: make sure you don't use a function value here, because the function will be called twice, which may have undesirable results.)

* A fairly major rework of StdHeader.h has been done. This should help in cross-platform compilation. I have used some ideas from the STLport code to identify platforms. The configuration should be a lot more accurate, maintainable and cleaner (maybe *grin*).

* To use an alternative CLConfig.h, define OVERRIDE_DEFAULT_CLCONFIG. A file called AltCLConfig.h will be included. Make sure you define all the required definitions in this file. If anyone can think of a better way of doing this, please let me know.

* I have made CLucene MSVC 6 compatible. Quite a few changes were made to make the code MSVC 6 compatible. Most of these are transparent, except the VoidList and VoidMap classes. Now the value and key respectively for these classes are assigned as pointers, and thus VoidList<Directory> will be a list of Directory pointers (previously VoidList<Directory*>).

* The Reader class has been reworked. The Reader does not use the FSInputStream anymore; instead, the character encoding can be specified as ASCII, UTF8, 8859_1, UNICODEBIG or UNICODELITTLE. This should fix some of the 'read past EOF' errors that have been occurring because the files were assumed to be UTF8. PLATFORM_DEFAULT_READER_ENCODING can be used as a default character encoding. LUCENE_OOR_CHAR is used when converting between a larger character type and a smaller type.
If anyone can think of a better way of doing this, please let me know. * I have made CLucene MSVC 6 compatible. Quite a few changes were made to make the code MSVC 6 compatible. Most of these are transparent, except the VoidList and VoidMap classes. Now the value and key respectively for these classes are now assigned as a pointer and thus VoidList<Directory> will be a list of Directory pointers (previously VoidList<Directory*>). * The Reader Class has been reworked. The Reader does not use the FSInputStream anymore, instead the character encoding can be specified as ASCII,UTF8,8859_1,UNICODEBIG or UNICODELITTLE. This should fix some of the 'read past EOF' errors that have been occuring because the files are assumed to be utf8. PLATFORM_DEFAULT_READER_ENCODING can be used as a default character encoding. LUCENE_OOR_CHAR is used when converting between a larger character type and a smaller type. * Changed the character conversion function names. STRDUP_XtoX and STRCPY_XtoX where X is A(ascii) W(unicode) or T(the current character type) * Changed CND_DEBUG functionality so that users can implement their own debug function. See _CND_DEBUG_DONTIMPLEMENT_OUTDEBUG in the CLConfig.h * changed float_t to double_t - TODO: is this better? i think it has better accuracy??? * Changed the thread implementation so that users can implement their own thread handling functionality in their own code. See _LUCENE_DONTIMPLEMENT_THREADMUTEX in CLConfig.h * Examples/Util has an example of using incremental indexing. You can also use groups to increment only certain parts of the index. Use the syncronize command to remove documents that have changed or that have been deleted from the index. Merge has been fixed to use the proper addIndexes function. * To reduce incompatibilities, all using namespace lucene::* have been removed from headers. === Other things === * I've begun some of the process of gnu'ifying clucene. By making it a more standard package. Please make suggestions on this. 
John Wheeler (I think it's you working on some of the documentation), can you give us an idea, so I can update my build script to include documentation in the release? Is this a good idea? Or should we just provide help on how to create the documentation - what tools, etc.?
* Please look through files like AUTHORS, etc., and check for mistakes and things that have been left out. I'd like to see the basic documentation, at least, correct and 'helpful'.
* Added a 'monolithic' MSVC project. This is based on David Rushby's idea - all the .cpp files are compiled into one object, thus speeding up the compilation enormously. It is good for quickly building the project, but not as good for developing and debugging.
* Changed all internal file representations from TCHAR to char. Having TCHAR representations of files is not necessary and only makes porting to Unicode for *nix more difficult. Some changes will need to be made to client code for this to work.
* Implemented a Java-style string interning function. This is used in Term (and for caching in the future). This will save some memory and might increase performance a bit. Functions that compare field names now compare with == instead of _tcscmp.
* Option of pre-allocating memory for Terms. This can increase performance *a lot*, but will increase memory use. See the Term.h file for more information. This feature can be disabled to save memory (see CLConfig.h).
* Fixed StringBuffer.append(double). This has the side effect of changing the query.toString value, which should now be more similar to the results that the Java version returns.
* Made some significant changes to WildcardTermEnum. This should speed up this query a lot.
-- Ben van Klinken
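The pseudo-reference-counting protocol described above can be modeled in a few lines of Python. This is an illustration only - the real macros (_CLNEW, _DELETE, _DECDELETE, __cl_addref) are C++, and the names here merely echo the changelog:

```python
class CLuceneObject:
    """Toy model of the changelog's pseudo-reference counting.

    Illustrates the ownership protocol only; not the actual CLucene API."""

    def __init__(self):          # models _CLNEW: object starts with one reference
        self.refcount = 1
        self.deleted = False

    def addref(self):            # models __cl_addref(): caller takes shared ownership
        self.refcount += 1

    def delete(self):            # models _DELETE: storage freed only at refcount zero
        self.refcount -= 1
        if self.refcount == 0:
            self.deleted = True

doc = CLuceneObject()
doc.addref()     # your code keeps the Document beyond the Hits cache
doc.delete()     # the Hits cache "deletes" it -> object still alive
assert not doc.deleted
doc.delete()     # the owner's final delete -> truly destroyed
assert doc.deleted
```

The point of the protocol is exactly this asymmetry: any number of internal _DELETE calls are harmless while an outside owner still holds a reference.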
http://www.crosswire.org/pipermail/sword-devel/2005-March/021926.html
MetaCPAN::API - A comprehensive, DWIM-featured API to MetaCPAN

version 0.44

This module will be updated regularly on every MetaCPAN API change, and intends to provide the user with as much of the API as possible - no shortcuts. If it's documented in the API, you should be able to do it. Because of this design decision, this module has an official MetaCPAN namespace with the blessing of the MetaCPAN developers. Note that this module currently only provides the beta API, not the old, soon-to-be-deprecated API.

While it's possible to access the methods defined by the API spec, there's still the matter of what you're really trying to achieve. For example, when searching for "Dave", you want to find both Dave Cross and Dave Rolsky (and any other Dave), but you also want to search for a PAUSE ID of DAVE, if one exists. This is where DWIM comes in. This module provides you with additional generic methods which will try to do what they think you want.

Of course, this does not prevent you from manually using the API methods. You still have full control over that, if that's what you wish. You can (and should) read up on the generic methods, which will explain how their DWIMish nature works, and what searches they run.

THIS MODULE IS DEPRECATED, DO NOT USE! This module has been completely rewritten to address a multitude of problems, and is now available under the new official name: MetaCPAN::Client. Please do not use this module.
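To make the DWIM idea concrete - one user query fanned out into several concrete searches - here is a language-neutral sketch in Python. This is purely illustrative: it is not MetaCPAN::API's actual implementation, and the PAUSE-ID heuristic is an assumption:

```python
def dwim_author_queries(term):
    """Fan one user query out into the searches a DWIM layer might try:
    a free-text name search plus, when the term could plausibly be a
    PAUSE ID, an exact-ID lookup. Illustrative only."""
    queries = [('name', term)]
    if term.isalpha():  # assumed heuristic: PAUSE IDs are plain letters
        queries.append(('pauseid', term.upper()))
    return queries

print(dwim_author_queries('Dave'))
# -> [('name', 'Dave'), ('pauseid', 'DAVE')]
```

Searching for "Dave" would then hit both the name index (matching Dave Cross and Dave Rolsky) and the author index for the PAUSE ID DAVE.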
http://search.cpan.org/~xsawyerx/MetaCPAN-API/lib/MetaCPAN/API.pm
Flask-FeatureFlags 0.5
======================

Enable or disable features in Flask apps based on configuration.

[![Build Status]]

This is a Flask extension that adds feature flagging to your applications. This lets you turn parts of your site on or off based on configuration. It's useful for any setup where you deploy from trunk but want to hide unfinished features from your users, such as continuous integration builds. You can also extend it to do simple A/B testing or whitelisting.

Installation
============

Installation is easy with pip:

    pip install flask_featureflags

To install from source, download the source code, then run this:

    python setup.py install

Flask-FeatureFlags supports Python 2.6, 2.7, and 3.3+, with experimental support for PyPy. Version 0.1 of Flask-FeatureFlags supports Python 2.5 (but not Python 3), so use that version if you need it. Be aware that both Flask and Jinja have dropped support for Python 2.5.

Docs
====

For the most complete and up-to-date documentation, please see: []

Setup
=====

Adding the extension is simple:

    from flask import Flask
    from flask_featureflags import FeatureFlag

    app = Flask(__name__)

    feature_flags = FeatureFlag(app)

In your Flask app.config, create a ``FEATURE_FLAGS`` dictionary, and add any features you want as keys. Any UTF-8 string is a valid feature name.

For example, to have 'unfinished_feature' hidden in production but active in development:

    class ProductionConfig(Config):
        FEATURE_FLAGS = {
            'unfinished_feature': False,
        }

    class DevelopmentConfig(Config):
        FEATURE_FLAGS = {
            'unfinished_feature': True,
        }

**Note**: If a feature flag is used in code but not defined in ``FEATURE_FLAGS``, it's assumed to be off. Beware of typos.

If you want your app to throw an exception in dev when a feature flag is used in code but not defined, add this to your configuration:

    RAISE_ERROR_ON_MISSING_FEATURES = True

If ``app.debug=True``, this will throw a ``KeyError`` instead of silently ignoring the error.
Usage
=====

Controllers/Views
-----------------

If you want to protect an entire view:

    from flask import Flask
    import flask_featureflags as feature

    @feature.is_active_feature('unfinished_feature', redirect_to='/old/url')
    def index():
        # unfinished view code here

The redirect_to parameter is optional. If you don't specify it, the URL will return a 404.

If your needs are more complicated, you can check inside the view:

    from flask import Flask
    import flask_featureflags as feature

    def index():
        if feature.is_active('unfinished_feature') and some_other_condition():
            # do new stuff
        else:
            # do old stuff

Templates
---------

You can also check for features in Jinja template code:

    {% if 'unfinished_feature' is active_feature %}
        new behavior here!
    {% else %}
        old behavior...
    {% endif %}

Using other backends
====================

Want to store your flags somewhere other than the config file? There are third-party contrib modules for other backends. Please see the documentation here: []

Feel free to add your own - see CONTRIBUTING.rst for help.

Customization
=============

If you need custom behavior, you can write your own feature flag handler. A feature flag handler is simply a function that takes the feature name as input, and returns True (the feature is on) or False (the feature is off).

For example, if you want to enable features on Tuesdays:

    from datetime import date

    def is_it_tuesday(feature):
        return date.today().weekday() == 1  # Monday is 0, so Tuesday is 1

You can register the handler like so:

    from flask import Flask
    from flask_featureflags import FeatureFlag

    app = Flask(__name__)

    feature_flags = FeatureFlag(app)

    feature_flags.add_handler(is_it_tuesday)

If you want to remove a handler for any reason, simply do:

    feature_flags.remove_handler(is_it_tuesday)

If you try to remove a handler that was never added, the code will silently ignore you.
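The introduction mentions whitelisting as another use for custom handlers. A hypothetical sketch of such a handler follows - ``get_current_user()`` is a stand-in for however your app identifies the current user (e.g. ``flask.g``), and none of these names come from Flask-FeatureFlags itself:

```python
# Hypothetical whitelist: feature name -> users allowed to see it.
WHITELIST = {
    'unfinished_feature': {'rachel', 'qa-bot'},
}

def get_current_user():
    """Placeholder; a real app would consult the session or flask.g."""
    return 'rachel'

def whitelisted_features(feature):
    # Handler contract: feature name in, True (on) or False (off) out.
    return get_current_user() in WHITELIST.get(feature, set())
```

You would register it like any other handler: ``feature_flags.add_handler(whitelisted_features)``.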
To clear all handlers (thus effectively turning all features off):

    feature_flags.clear_handlers()

Clearing handlers is also useful when you want to remove the built-in behavior of checking the ``FEATURE_FLAGS`` dictionary. To enable all features on Tuesdays, no matter what the ``FEATURE_FLAGS`` setting says:

    from flask import Flask
    from flask_featureflags import FeatureFlag

    app = Flask(__name__)

    feature_flags = FeatureFlag(app)
    feature_flags.clear_handlers()
    feature_flags.add_handler(is_it_tuesday)

Chaining multiple handlers
--------------------------

You can define multiple handlers. If any of them returns true, the feature is considered on.

For example, if you want features to be enabled on Tuesdays *or* Fridays:

    feature_flags.add_handler(is_it_tuesday)
    feature_flags.add_handler(is_it_friday)

**Important:** the order of handlers matters! The first handler to return True stops the chain. So given the above example, if it's Tuesday, ``is_it_tuesday`` will return True and ``is_it_friday`` will not run.

You can override this behavior by raising the StopCheckingFeatureFlags exception in your custom handler:

    from flask_featureflags import StopCheckingFeatureFlags

    def run_only_on_tuesdays(feature):
        if date.today().weekday() == 1:  # Monday is 0, so Tuesday is 1
            return True
        else:
            raise StopCheckingFeatureFlags

If it isn't Tuesday, this will cause the chain to return False and any other handlers won't run.

Acknowledgements
================

A big thank you to LinkedIn for letting me opensource this, and to my coworkers for all their feedback on this project. You guys are great. :)

Questions?
==========

Feel free to ping me on twitter [@trustrachel] or on the [Github] project page.

Changes
=======

0.1 (April 17, 2013)
--------------------

Initial public offering.

0.2 (June 20, 2013)
-------------------

Revved the version number so I could re-upload to PyPI. No real changes other than that.
:/ 0.3 (June 27, 2013) ------------------- * Dropped support for Python 2.5, and added support for Python 3.3 and Flask 0.10 * Now testing with PyPy in Travis! * Added ``RAISE_ERROR_ON_MISSING_FEATURES`` configuration to throw an error in dev if a feature flag is missing. 0.4 (April 8, 2014) ------------------- * General code cleanup and optimization * Adding optional redirect to is_active_feature, thank you to michaelcontento * Fixed syntax error in docs, thank you to iurisilvio 0.5 (August 7, 2014) ------------------- Official support for contributed modules, thank you to iurisilvio! He contributed the first for SQLAlchemy, so you can store your flags in the database instead. Other contributions welcome. - Downloads (All Versions): - 107 downloads in the last day - 656 downloads in the last week - 2660 downloads in the last month - Author: Rachel Sanders - License: Apache - Platform: any - Categories - Development Status :: 3 - Alpha - Environment :: Web Environment - Intended Audience :: Developers - License :: OSI Approved :: Apache Software License - Operating System :: OS Independent - Programming Language :: Python - Topic :: Internet :: WWW/HTTP :: Dynamic Content - Topic :: Software Development :: Libraries :: Python Modules - Package Index Owner: trustrachel - DOAP record: Flask-FeatureFlags-0.5.xml
https://pypi.python.org/pypi/Flask-FeatureFlags/0.5
Hi All,

I have developed an application that contains many Alert messages, and I want to display all of them at a particular position on their respective screens. I have tried the following code, but it only works for individual alert messages. I don't want to set the x and y properties individually; I want to set them globally. Is there a way I can apply this to all the alert messages in my application?

myAlert = Alert.show('Hello World');
PopUpManager.centerPopUp(myAlert);
myAlert.x = 0;
myAlert.y = 0;

Thanks in advance

Have you thought about just creating a Panel that is not visible? Then, when you need to display the alert, display the panel and append the text. You could then add a button to the panel acknowledging the alert, which would set the panel's visible attribute to false and clear the text. That way you only have one alert window to close.

You could override the Alert class. This would look something like:

public class MyAlert extends Alert {
    override public function get x():Number {
        return 0;
    }
    override public function get y():Number {
        return 0;
    }
}
https://forums.adobe.com/thread/1171612
Kubernetes Advanced Practice reading notes: Resource Management Basics (1)

I. Resource objects and API groups

1. Representational state transfer

The basic element is the resource (resource). A resource is an object: typically an object with a type, associated data, supported operations, and relationships to other objects. Resources are stateful things - the S (state) in REST.

Representation (representation): REST components use representations to capture the current or intended state of a resource, and transfer that representation between components in order to operate on the resource. A representation is a byte sequence made up of data, metadata describing the data, and occasionally metadata describing that metadata. The common representation formats are JSON and XML. API clients cannot access resources directly; they perform actions to change a resource's state.

Action (action).

2. Resource classification

Kubernetes abstracts everything as an API resource. Resources can be grouped into collections, with each collection containing only a single type of resource. The relationships among collections, resources, and sub-resources are illustrated in the book's accompanying figure.

II. Kubernetes resource objects

1. Commonly used resource objects
2. Workload resources: daemonset
3. Service discovery and load balancing
4. Configuration and storage: volume
5. Cluster-level resources
6. Metadata resources

III. How resources are organized in the API

1. How resources are organized in the API
2. Resource types
3. Kind (kind)
4. Collection (collection)
5. Resource or object

IV. Accessing the Kubernetes REST API

With the help of the kubectl proxy command, you can start a proxy gateway for the API Server on the local host; it lets you talk to the API Server over plain HTTP. Its working logic is illustrated in the book's accompanying figure.

For example, start a proxy gateway for the API Server on port 8080 of 127.0.0.1:

[root@master ~]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080

1. List all Namespace objects in the cluster

[root@master ~]# curl localhost:8080/api/v1/namespaces
{
  "kind": "NamespaceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces",
    "resourceVersion": "329417"
......

2. Install jq, a command-line JSON processor

Install the EPEL repository: yum install epel-release -y
Install jq: yum install jq -y

3. Show only the member objects of the returned NamespaceList

[root@master ~]# curl -s localhost:8080/api/v1/namespaces/ | jq .items[].metadata.name
"default"
"ingress-nginx"
"kube-node-lease"
"kube-public"
"kube-system"
"weave"

4. Given the name of a specific Namespace resource object, you can fetch that resource's information directly - taking the kube-system namespace as an example:

[root@master ~]# curl -s localhost:8080/api/v1/namespaces/kube-system
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system",
    "uid": "1e69045d-bfea-4292-b4e8-1fbaaefaae22",
    "resourceVersion": "14",
    "creationTimestamp": "2020-08-03T15:20:46Z",
    "managedFields": [
      {
        "manager": "kube-apiserver",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2020-08-03T15:20:46Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:status":{"f:phase":{}}}
      }
    ]
  },
  "spec": {
    "finalizers": [
      "kubernetes"
    ]
  },
  "status": {
    "phase": "Active"
  }

V. Resource configuration manifests

1. Resource manifest: namespaces kube-system

[root@master ~]# kubectl get namespaces kube-system -o yaml
apiVersion: v1
kind:
Namespace
metadata:
  creationTimestamp: "2020-08-03T15:20:46Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: kube-apiserver
    operation: Update
    time: "2020-08-03T15:20:46Z"
  name: kube-system
  resourceVersion: "14"
  selfLink: /api/v1/namespaces/kube-system
  uid: 1e69045d-bfea-4292-b4e8-1fbaaefaae22
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

With very few exceptions, the vast majority of resources on a Kubernetes system are created by its users. To create one, you define the resource's configuration data - the desired target state - in YAML or JSON serialization, in a form similar to the output above; the underlying Kubernetes components then ensure that the live object's runtime state converges as closely as possible on the state defined in the manifest.

2. Resource manifest: deployments.apps myapp-deploy

[root@master chapter5]# kubectl get deployments.apps myapp-deploy -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  ......
  name: myapp-deploy
  namespace: default
  resourceVersion: "361135"
  selfLink: /apis/apps/v1/namespaces/default/deployments/myapp-deploy
  uid: 74ac89a8-0f06-43a4-81bc-c39fddbde74d
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp
  .......
status:
  availableReplicas: 3
  conditions:
  .......

In fact, for almost every resource the apiVersion, kind, and metadata fields serve essentially the same purpose; spec describes the resource's desired state, while status records the live object's current state.

VI. Object resource format

1. All first-level fields
2. Nested fields under metadata: required fields; optional fields
3. The spec field

VII. Reference documentation for the manifest format

1. Understanding first-level fields

[root@master ~]# kubectl explain pods
......
status       <Object>
     Most recently observed status of the pod. This data may not be up to date.
     Populated by the system. Read-only. More info:

2. Understanding second-level fields

kubectl explain pods.spec

[root@master ~]# kubectl explain pods.spec
KIND:     Pod
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the pod. More info:

     PodSpec is a description of a pod.

FIELDS:
   activeDeadlineSeconds        <integer>
     Optional duration in seconds the pod may be active on the node relative to
     StartTime before the system will actively try to mark it failed and kill
     associated containers. Value must be a positive integer.
affinity <Object> If specified, the pod's scheduling constraints automountServiceAccountToken <boolean> AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers <[]Object> -required- List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. dnsConfig <Object> Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. dnsPolicy <string>'. enableServiceLinks <boolean> EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers <[]Object> List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature. hostAliases <[]Object> HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostIPC <boolean> Use the host's ipc namespace. Optional: Default to false. hostNetwork <boolean> Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID <boolean> Use the host's pid namespace. Optional: Default to false. hostname <string> Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. 
imagePullSecrets <[]Object> ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored. More info: initContainers <[]Object> List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: nodeName <string> NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector <map[string]string> NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: overhead <map[string]string> Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. 
The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: This field is alpha-level as of Kubernetes v1.16, and is only honored by servers that enable the PodOverhead feature. preemptionPolicy <string> PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. This field is alpha-level and is only honored by servers that enable the NonPreemptingPriority feature. priority <integer> The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName <string> If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates <[]Object> If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: restartPolicy <string> Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: runtimeClassName <string> RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. 
If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: This is a beta feature as of Kubernetes v1.14. schedulerName <string> If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. securityContext <Object> SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccount <string> DeprecatedServiceAccount is a depreciated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName <string> ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: shareProcessNamespace <boolean>. Optional: Default to false. subdomain <string> If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds <integer> Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations <[]Object> If specified, the pod's tolerations. topologySpreadConstraints <[]Object> TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. 
This field is only honored by clusters that enable the EvenPodsSpread feature. All topologySpreadConstraints are ANDed.

   volumes      <[]Object>
     List of volumes that can be mounted by containers belonging to the pod.
     More info:

3. Understanding third-level and deeper fields

kubectl explain pods.spec.containers

[root@master ~]# kubectl explain pods.spec.containers
......
   command      <[]string>
     Entrypoint array. Not executed within a shell. The docker image's
     ENTRYPOINT:

   env  <[]Object>
     List of environment variables to set in the container. Cannot be updated.

   envFrom      <[]Object>.

   image        <string>
     Docker image name. More info: This field is optional to allow higher level
     config management to default or override container images in workload
     controllers like Deployments and StatefulSets.

   imagePullPolicy      <string>
     Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
     if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
     More info:

   lifecycle    <Object>
     Actions that the management system should take in response to container
     lifecycle events. Cannot be updated.

   livenessProbe        <Object>
     Periodic probe of container liveness. Container will be restarted if the
     probe fails. Cannot be updated. More info:.

   readinessProbe       <Object>
     Periodic probe of container service readiness. Container will be removed
     from service endpoints if the probe fails. Cannot be updated. More info:

   resources    <Object>
     Compute Resources required by this container. Cannot be updated. More
     info:

   securityContext      <Object>
     Security options the pod should run with. More info: More info:

   startupProbe <Object>
     StartupProbe indicates that the Pod has successfully initialized. If
     specified, no other probes are executed until this completes successfully.
     If this probe fails, the Pod will be restarted, just as if the
     livenessProbe failed. This can be used to provide different probe
     parameters at the beginning of a Pod's lifecycle, when it might take a
     long time to load data or warm a cache, than during steady-state
     operation. This cannot be updated.
This is a beta feature enabled by the StartupProbe feature flag. More info: stdin <boolean> Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce <boolean> Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath <string> Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy <string> Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty <boolean> Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. 
   volumeDevices        <[]Object>
     volumeDevices is the list of block devices to be used by the container.

   volumeMounts <[]Object>
     Pod volumes to mount into the container's filesystem. Cannot be updated.

   workingDir   <string>
     Container's working directory. If not specified, the container runtime's
     default will be used, which might be configured in the container image.
     Cannot be updated.

4. Advantages

kubectl get deployments.apps myapp-deploy -o yaml --export > deploy-demo.yaml

The command line exposes only some attributes of some resource objects, whereas a resource manifest can configure every attribute field a resource supports; keeping manifests as files also enables advanced workflows such as version tracking and review.

VIII. Ways of managing resource objects

1. Declarative style
2. Imperative style
3. Command classification
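The jq filter used with the proxy earlier - .items[].metadata.name - does nothing Kubernetes-specific, so the same extraction is easy to reproduce in Python. The payload below is a trimmed, inlined stand-in for what GET /api/v1/namespaces returns (in practice you would fetch it over HTTP through kubectl proxy):

```python
import json

# Trimmed stand-in for the NamespaceList returned by /api/v1/namespaces.
payload = '''
{
  "kind": "NamespaceList",
  "apiVersion": "v1",
  "items": [
    {"metadata": {"name": "default"}},
    {"metadata": {"name": "kube-system"}},
    {"metadata": {"name": "kube-public"}}
  ]
}
'''

# Equivalent of: jq .items[].metadata.name
names = [item["metadata"]["name"] for item in json.loads(payload)["items"]]
print(names)  # -> ['default', 'kube-system', 'kube-public']
```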
https://www.cnblogs.com/luoahong/p/13439280.html
Most operating systems have some way of keeping track of the current date and time. ANSI C makes this information available in various formats through the library functions defined in time.h. The time function returns a value of type time_t (usually a long), which is an implementation-dependent encoding of the current date and time. You in turn pass this value to other functions which decode and format it. The program in Listing 1 uses the functions time, localtime and strftime to print the current date and time in various formats. The localtime function breaks the encoded time down into

struct tm {
    int tm_sec;    /* (0 - 61) */
    int tm_min;    /* (0 - 59) */
    int tm_hour;   /* (0 - 23) */
    int tm_mday;   /* (1 - 31) */
    int tm_mon;    /* (0 - 11) */
    int tm_year;   /* past 1900 */
    int tm_wday;   /* (0 - 6) */
    int tm_yday;   /* (0 - 365) */
    int tm_isdst;  /* daylight savings flag */
};

localtime overwrites a static structure each time you call it, and returns its address (therefore only one such structure is available at a time in a program without making an explicit copy). The ctime function returns a pointer to a static string which contains the full time and date in a standard format (including a terminating newline). strftime formats a string according to user specifications (e.g., %A represents the name of the day of the week). See Table 1 for the complete list of format descriptors.

Time/Date Arithmetic

You can do time/date arithmetic by changing the values in a tm structure. The program in Listing 2 shows how to compute a date a given number of days in the future, as well as the elapsed execution time in seconds. Note the optional alternate syntax for the time function (the time_t parameter is passed by reference instead of returned as a value). The mktime function alters a tm structure so that the date and time values are within the proper ranges, after which the day-of-week (tm_wday) and day-of-year (tm_yday) fields are updated accordingly.
mktime returns (time_t)-1 if it cannot encode the requested date. This occurs when a date falls outside the range that your implementation supports. My MS-DOS-based compiler, for example, cannot encode dates before January 1, 1970, but VAX C can process dates as early as the mid-1800s. The asctime function returns the standard string for the time represented in its tm parameter (so ctime(&tval) is equivalent to asctime(localtime(&tval))). The function difftime returns the difference in seconds between two time_t encodings as a double.

If you need to process dates outside your system's range or calculate the interval between two dates in units other than seconds, you need to roll your own date encoding. The application in Listing 3 through Listing 5 shows a technique for determining the number of years, months and days between two dates, using a simple month-day-year structure. It subtracts one date from another, much as you might have done in elementary school (i.e., it subtracts the days first, borrowing from the month's place if necessary, and so on). Note that leap years are taken into account. For brevity, the date_interval function assumes that the dates are valid and that the first date entered precedes the second. Following the lead of the functions in time.h, it returns a pointer to a static Date structure which holds the answer.

File Time/Date Stamps

Most operating systems maintain a time/date stamp for files. At the very least, you can find out when a file was last modified. (The common make facility uses this information to determine if a file needs to be recompiled, or if an application needs to be relinked.) Since file systems vary across platforms, there can be no universal function to retrieve a file's time/date stamp, so the ANSI standard doesn't define one.
However, most popular operating systems (including MS-DOS and VAX/VMS) provide the UNIX function stat, which returns pertinent file information, including the time last modified expressed as a time_t. The program in Listing 6 uses stat and difftime to see if the file time1.c is newer than (i.e., was modified more recently than) time2.c.

If you need to update the time/date stamp of a file to the current time, simply overwrite the first byte of a file with itself. Although the contents haven't changed, your file system will think it has, and will update the time/date stamp accordingly. (Know your file system! Under VAX/VMS, you get a newer version of the file, while the older version is retained.) This is sometimes called "touching" a file. The implementation of touch in Listing 7 creates the file if it doesn't already exist. Note that the file is opened in "binary" mode, indicated by the character b in the open mode string (I'll discuss file processing in detail in a future capsule).

Table 1 Format descriptors for strftime

Code   Sample Output
---------------------------------------------
%a     Wed
%A     Wednesday
%b     Oct
%B     October
%c     Wed Oct 07 13:24:27 1992
%d     07 (day of month [01-31])
%H     13 (hour in [00-23])
%I     01 (hour in [01-12])
%j     281 (day of year [001-366])
%m     10 (month [01-12])
%M     24 (minute [00-59])
%p     PM
%S     27 (second [00-59])
%U     40 (Sunday week of year [00-52])
%w     3 (day of week [0-6])
%W     40 (Monday week of year [00-52])
%x     Wed Oct 7, 1992
%X     13:24:27
%y     92
%Y     1992
%Z     EDT (daylight savings indicator)

Listing 1 time1.c prints the current date and time in various formats

#include <stdio.h>
#include <time.h>

main()
{
    time_t tval;
    struct tm *now;
    char buf[81];

    time(&tval);
    now = localtime(&tval);

    strftime(buf, sizeof buf, "%x %X", now);
    printf("The current date and time:\n%s\n", buf);

    printf("Or in default system format:\n%s", ctime(&tval));

    strftime(buf, sizeof buf,
             "%A, %B %d, day %j of %Y.\nThe time is %I:%M %p.", now);
    printf("Or getting really fancy:\n%s\n", buf);
    return 0;
}

/* Output
The current date and time:
10/06/92 12:58:00
Or in default system format:
Tue Oct 06 12:58:00 1992
Or getting really fancy:
Tuesday, October 06, day 280 of 1992.
The time is 12:58 PM.
    */
    /* End of File */

Listing 2  time2.c shows how to compute a date a given number of days in the future, as well as the elapsed execution time in seconds

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    main()
    {
        time_t start, stop;
        struct tm *now;
        int ndays;

        /* Get current date and time */
        time(&start);
        now = localtime(&start);

        /* Enter an interval in days */
        fputs("How many days from now? ", stderr);
        if (scanf("%d", &ndays) != 1)
            return EXIT_FAILURE;
        now->tm_mday += ndays;
        if (mktime(now) != -1)
            printf("New date: %s", asctime(now));
        else
            puts("Sorry. Can't encode your date.");

        /* Calculate elapsed time */
        time(&stop);
        printf("Elapsed program time in seconds: %f\n",
               difftime(stop, start));
        return EXIT_SUCCESS;
    }

    /* Output
    How many days from now? 45
    New date: Fri Nov 20 12:40:32 1992
    Elapsed program time in seconds: 1.000000
    */
    /* End of File */

Listing 3  date.h a simple date structure

    struct Date
    {
        int day;
        int month;
        int year;
    };

    typedef struct Date Date;

    Date* date_interval(const Date *, const Date *);
    /* End of File */

Listing 4  date_int.c computes time interval between two dates

    /* date_int.c: Compute duration between two dates */
    #include "date.h"

    #define isleap(y) \
        ((y)%4 == 0 && (y)%100 != 0 || (y)%400 == 0)

    static int Dtab [2][13] =
    {
        {0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31},
        {0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}
    };

    Date *date_interval(const Date *d1, const Date *d2)
    {
        static Date result;
        int months, days, years, prev_month;

        /* Compute the interval - assume d1 precedes d2 */
        years = d2->year - d1->year;
        months = d2->month - d1->month;
        days = d2->day - d1->day;

        /* Correct the days first, borrowing from the month's
           place if necessary; loop because the borrowed month
           may be a short one (e.g., February) */
        prev_month = d2->month - 1;
        while (days < 0)
        {
            if (prev_month == 0)
                prev_month = 12;
            --months;
            days += Dtab[isleap(d2->year)][prev_month--];
        }

        /* Then correct the months, borrowing from the year's place */
        if (months < 0)
        {
            --years;
            months += 12;
        }

        result.day = days;
        result.month = months;
        result.year = years;
        return &result;
    }
    /* End of File */

Listing 5  tdate.c illustrates the date_interval function

    /* tdate.c: Test date_interval() */
    #include <stdio.h>
    #include <stdlib.h>
    #include "date.h"

    main()
    {
        Date d1, d2, *result;

        fputs("Enter a date, MM/DD/YY> ", stderr);
        if (scanf("%d/%d/%d", &d1.month, &d1.day, &d1.year) != 3)
            return EXIT_FAILURE;
        fputs("Enter a later date, MM/DD/YY> ", stderr);
        if (scanf("%d/%d/%d", &d2.month, &d2.day, &d2.year) != 3)
            return EXIT_FAILURE;

        result = date_interval(&d1, &d2);
        printf("years: %d, months: %d, days: %d\n",
               result->year, result->month, result->day);
        return EXIT_SUCCESS;
    }

    /* Sample Execution:
    Enter a date, MM/DD/YY> 10/1/51
    Enter a later date, MM/DD/YY> 10/6/92
    years: 41, months: 0, days: 5
    */
    /* End of File */

Listing 6
ftime.c determines if time1.c is newer than time2.c

    /* ftime.c: Compare file time stamps */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <time.h>

    main()
    {
        struct stat fs1, fs2;

        if (stat("time1.c", &fs1) == 0 &&
            stat("time2.c", &fs2) == 0)
        {
            double interval = difftime(fs2.st_mtime, fs1.st_mtime);

            printf("time1.c %s newer than time2.c\n",
                   (interval < 0.0) ? "is" : "is not");
            return EXIT_SUCCESS;
        }
        else
            return EXIT_FAILURE;
    }

    /* Output
    time1.c is not newer than time2.c
    */
    /* End of File */

Listing 7  touch.c updates time stamp by overwriting old file or creating the file if it doesn't exist

    /* touch.c: Update a file's time stamp */
    #include <stdio.h>

    void touch(char *fname)
    {
        FILE *f = fopen(fname, "r+b");

        if (f != NULL)
        {
            /* Overwrite the first byte with itself */
            char c = getc(f);
            rewind(f);
            putc(c, f);
        }
        else
            /* Create the file */
            f = fopen(fname, "wb");

        if (f != NULL)
            fclose(f);
    }
    /* End of File */
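The asctime/ctime equivalence claimed earlier is easy to check. One subtlety: both functions may return a pointer to a shared static buffer (the same "static area" caveat the time.h functions carry generally), so the first string must be copied before the second call overwrites it. This helper is my addition, not one of the article's listings:

```c
#include <string.h>
#include <time.h>

/* Return nonzero if ctime(&t) and asctime(localtime(&t)) produce
   the same string. ctime and asctime may share one static buffer,
   so copy the first result before making the second call. */
int ctime_matches_asctime(time_t t)
{
    char first[64];

    strcpy(first, ctime(&t));
    return strcmp(first, asctime(localtime(&t))) == 0;
}
```

On a conforming implementation, ctime_matches_asctime(time(NULL)) should return 1 for any representable time.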
This module gives you oauth authentication for Flickr API keys if you're using it server-side (authenticated calls from the browser are too insecure to support for the moment, and will throw an error). You also get API route proxying, so you can call the Flickr methods through your own server and get Flickr responses back for free. Super handy!

You don't want to read this entire README.md, so just find what you want to do in this handy quick start guide right here at the start of the README, and off you go!

Script-load the browser/flickrapi.dev.js library for development work, and use the browser/flickrapi.js library in production. You can access Flickr by creating an API instance as:

    var flickr = new Flickr({
      api_key: "1234ABCD1234ABCD1234ABCD1234ABCD"
    });

Then query Flickr using the API as described over at - for instance, to search for all photographs that text-match the terms "red panda", you call:

    flickr.photos.search({
      text: "red+panda"
    }, function(err, result) {
      if(err) { throw new Error(err); }
      // do something with result
    });

All calls are asynchronous, and the callback handling function always has two arguments. The first, if an error occurs, is the error generated by Flickr; the second, if the call succeeds, is the result sent back from Flickr, as a plain JavaScript object.

Note: this is not secure. People will be able to see your API key, and this is pretty much the worst idea(tm), so you probably want to use this library...

Script-load the browser/flickrapi.dev.js library for development work, and use the browser/flickrapi.js library in production, but don't use your API key. Instead, point to your server as a flickr API proxy:

    var flickr = new Flickr({
      endpoint: "http://yourhostedplace/services/rest/"
    });

To make this work, have flickrapi running on your server with a proxy route enabled, and you'll be able to make use of all the Flickr API calls, without having to put your credentials anywhere in your client-side source code.
Proxy mode is explained below, but is essentially a one-liner add to your regular connect/express app.

Install like any other package:

    $> npm install flickrapi --save

After that, you have two choices, based on whether you want to authenticate or not. Both approaches require an API key, but using OAuth authentication means you get access to the full API, rather than only the public API.

To suppress the progress bars in stdout, you can include a progress attribute when initializing:

    var flickr = new Flickr({
      api_key: "1234ABCD1234ABCD1234ABCD1234ABCD",
      progress: false
    });

Without authentication:

    var Flickr = require("flickrapi"),
        flickrOptions = {
          api_key: "API key that you get from Flickr",
          secret: "API key secret that you get from Flickr"
        };

    Flickr.tokenOnly(flickrOptions, function(error, flickr) {
      // we can now use "flickr" as our API object,
      // but we can only call public methods and access public data
    });

With authentication:

    var Flickr = require("flickrapi"),
        flickrOptions = {
          api_key: "API key that you get from Flickr",
          secret: "API key secret that you get from Flickr"
        };

    Flickr.authenticate(flickrOptions, function(error, flickr) {
      // we can now use "flickr" as our API object
    });

With custom request defaults:

    var Flickr = require("flickrapi"),
        flickrOptions = {
          api_key: "API key that you get from Flickr",
          secret: "API key secret that you get from Flickr",
          requestOptions: {
            timeout: 20000,
            /* other default options accepted by request.defaults */
          }
        };

    Flickr.tokenOnly(flickrOptions, function(error, flickr) {
      // we can now use "flickr" as our API object,
      // but we can only call public methods and access public data
    });

That's it, that's all the quickstart guiding you need. For more detailed information, keep reading. If you just wanted to get up and running, then the preceding text should have gotten you there!
Calling API functions is then a matter of calling the functions as they are listed on, so if you wish to get all your own photos, you would call:

    flickr.photos.search({
      user_id: flickr.options.user_id,
      page: 1,
      per_page: 500
    }, function(err, result) {
      // result is Flickr's response
    });

Simply add an authenticated: true pair to your function call. Compare:

    flickr.people.getPhotos({
      api_key: ...,
      user_id: <your own ID>,
      page: 1,
      per_page: 100
    }, function(err, result) {
      /* This will give public results only, even if we used
         Flickr.authenticate(), because the function does not
         *require* authentication to run. It just runs with
         fewer permissions. */
    });

To:

    flickr.people.getPhotos({
      api_key: ...,
      user_id: <your own ID>,
      authenticated: true,
      page: 1,
      per_page: 100
    }, function(err, result) {
      /* This will now give all public and private results,
         because we explicitly ran this as an authenticated call */
    });

If your app is a connect or express app, you get Flickr API proxying for free. Simply use the .proxy() function to set everything up, and then call your own API route in the same way you would call the Flickr API, minus the security credentials, since the server-side Flickr api object already has those baked in. As an example, the test.js script for node-flickrapi uses the following code to set up the local API route:

    var express = require("express");

    Flickr.authenticate(FlickrOptions, function(error, flickr) {
      var app = express();
      app.configure(function() {
        ...
        flickr.proxy(app, "/service/rest");
        ...
      });
      ...
    });

This turns the /service/rest route into a full Flickr API proxy, which the browser library can talk to, using POST operations. To verify your proxy route works, simply use cURL in the following fashion:

    curl -X POST -H "Content-Type: application/json" \
         -d '{"method":"flickr.photos.search", "text":"red+pandas"}'

Note that the proxy is "open" in that there is no explicit user management.
If you want to make sure only "logged in users" get to use your API proxy route, you can pass an authentication middleware function as third argument to the .proxy function:

    function authenticator(req, res, next) {
      // assuming your session management uses req.session:
      if(req.session.authenticated) {
        return next();
      }
      next({status:403, message: "not authorised to call API methods"});
    }

    flickr.proxy(app, "/service/rest/", authenticator);

If you're running the code server-side, and you've authenticated with Flickr already, you can use the Flickr.upload function to upload individual photos, or batches of photos, to your own account (or, the account that is tied to the API key that you're using).

    Flickr.authenticate(FlickrOptions, function(error, flickr) {
      var uploadOptions = {
        photos: [{
          title: "test",
          tags: [
            "happy fox",
            "test 1"
          ],
          photo: __dirname + "/test.jpg"
        },{
          title: "test2",
          tags: "happy fox image \"test 2\" separate tags",
          photo: __dirname + "/test.jpg"
        }]
      };

      Flickr.upload(uploadOptions, FlickrOptions, function(err, result) {
        if(err) {
          return console.error(err);
        }
        console.log("photos uploaded", result);
      });
    });

For the list of available upload properties, see the Flickr Upload API page.

You can use this module to very easily download all your Flickr content, using the built-in downsync function:

    var Flickr = require("flickrapi"),
        flickrOptions = { ... };

    Flickr.authenticate(flickrOptions, flickrapi.downsync());

That's all you need to run. The package will generate a data directory with your images in ./data/images (in several sizes), and the information architecture (metadata, sets, collections, etc) in ./data/ia. If you want this in a different directory, you can pass the dir as an argument to the downsync function:

    var Flickr = require("flickrapi"),
        flickrOptions = { ...
    };

    Flickr.authenticate(flickrOptions, flickrapi.downsync("userdata/me"));

This will now create a ./data directory for the flickr API information, but also a ./userdata/me/ directory that contains the images and ia dirs with your personal data.

If you just want to immediately downsync all your data right now, simply use the test.js application with the --downsync runtime argument: add your Flickr API key information to the .env file and then run:

    $> node test --downsync

Run through the authentication procedure, and then just wait for it to finish. Once it's done, you should have a local mirror of all your Flickr data.

(Re)syncing is mostly a matter of running the downsync function again. This will update anything that was updated or added on Flickr, but will not delete anything from your local mirror that was deleted from Flickr unless specifically told to do so, by passing a second argument (internally known as the "removeDeleted" flag in the code) to the downsync function call:

    var Flickr = require("flickrapi"),
        flickrOptions = { ... };

    Flickr.authenticate(flickrOptions, flickrapi.downsync("userdata/me", true));

If true, this will delete local files that were removed on Flickr (e.g. photos that you didn't like anymore, etc). If false, or omitted, no pruning of the local mirror will be performed.

If you downloaded all your Flickr data, you can use these in your own node apps by "dry loading" Flickr:

    var Flickr = require("flickrapi"),
        flickrData = Flickr.loadLocally();

This will give you an object with the following structure:

    {
      photos: [photo objects],
      photo_keys: [photo.id array, sorted on publish date],
      photosets: [set objects],
      photoset_keys: [set.id array, sorted on creation date],
      collections: [collection objects],
      collection_keys: [collection.id array, sorted on title],
    }

Not sure what these objects look like? Head over to your ./data/ia directory and just open a .json file in your favourite text editor.
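A quick sketch of walking that structure. The data below is a made-up stand-in for what loadLocally returns, and I'm assuming photos is keyed by photo id (which is what photo_keys being an array of ids suggests) — check a .json file in ./data/ia to confirm the exact shape:

```javascript
// Stand-in for Flickr.loadLocally()'s return value, shaped like the
// structure documented above (assumption: photos is keyed by photo id).
var flickrData = {
  photos: {
    "101": { id: "101", title: "red panda" },
    "102": { id: "102", title: "sunset" }
  },
  photo_keys: ["102", "101"]  // sorted on publish date
};

// List titles in publish-date order by walking photo_keys.
function titlesInOrder(data) {
  return data.photo_keys.map(function(id) {
    return data.photos[id].title;
  });
}

console.log(titlesInOrder(flickrData)); // [ 'sunset', 'red panda' ]
```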
The loadLocally function can take two arguments, namely a location where the ia data can be found, and an options object. If you want to pass in an options object, you must supply a location, too.

    flickrData = Flickr.loadLocally("./userdata", { loadPrivate: false });

Currently the options object only has one meaningful property, loadPrivate, which determines whether or not photos and photosets that are marked "not public" in Flickr show up in the photo_keys and photoset_keys lists.

On first run, the package will fetch all known methods from Flickr, and cache them for future use. This can take a bit, as there are a fair number of methods, but is inconsequential on subsequent package loading.

On first run, the authentication function will notice that there are no access_token and access_token_secret values set, and will negotiate these with Flickr using their oauth API, based on the permissions you request for your API key. By default, the only permissions are "read" permissions, but you can override this by adding a permissions property to the options object:

- permissions: "read" will give the app read-only access (default)
- permissions: "write" will give it read + write access
- permissions: "delete" will give it read, write and delete access

Note that you cannot make use of the upload functions unless you authenticate with write or delete permissions.

Running the app will show output such as the following block:

    $> node app
    { oauth_callback_confirmed: 'true',
      oauth_token: '...',
      oauth_token_secret: '...' }
    prompt: oauth_verifier: _

Once the app reaches this point, it will open a browser, allowing you to consent to the app accessing your most private of private parts. On Flickr, at least. If you agree to authorize it, you will get an authorisation code that you need to pass so that the flickrapi can negotiate access tokens with Flickr. Doing so continues the program:

    $> node app
    { oauth_callback_confirmed: 'true',
      oauth_token: '...',
      oauth_token_secret: '...'
    }
    prompt: oauth_verifier: 123-456-789

Add the following variables to your environment:

    export FLICKR_USER_ID="12345678%40N12"
    export FLICKR_ACCESS_TOKEN="72157634942121673-3e02b190b9720d7d"
    export FLICKR_ACCESS_TOKEN_SECRET="99c038c9fc77673e"

These are namespaced environment variables, which work really well with env packages like habitat, so if you're going to use a namespace-aware environment parser, simply add these variables to your environment, or put them in an .env file and then parse them in. If you would prefer to use plain process.env consulting, remove the FLICKR_ namespace prefix, and then pass process.env as the options object.

Alternatively, if you don't mind hardcoding values (but be careful never to check that code in, because github gets mined by bots for credentials), you can put them straight into your source code:

    var FlickrOptions = {
      api_key: "your API key",
      secret: "your API key secret",
      user_id: "...",
      access_token: "...",
      access_token_secret: "..."
    }

The flickrapi package will now be able to authenticate with Flickr without constantly needing to ask you for permission to access data.

By default the oauth callback is set to "out-of-band". You can see this in the .env file as the FLICKR_CALLBACK="oob" parameter, but if this is omitted, the code falls back to oob automatically. For automated processes, or if you don't want your users to have to type anything in a console, you can override this by setting your own oauth callback endpoint URL. Using a custom callback endpoint, the oauth procedure will contact the indicated endpoint with the authentication information, rather than requiring your users to manually copy/paste the authentication values. Note your users will still need to authenticate the app from a browser!

To use a custom endpoint, add the URL to the options as the callback property:

    var options = ...;
    options.callback = "...";

    Flickr.authenticate(options, function(error, flickr) {
      ...
    }

You can make your life easier by using an environment variable in the .env file rather than hardcoding your endpoint url:

    export FLICKR_CALLBACK="http://..."

The callback URL handler will, at minimum, need to implement the following middleware function:

    function(req, res) {
      res.write("");
      options.exchange(req.query);
    }

However, having the response tell the user that authorisation was received and that they can safely close this window/tab is generally a good idea.

If you wish to call the exchange function manually, the object expected by options.exchange looks like this:

    {
      oauth_token: "...",
      oauth_verifier: "..."
    }

If all you wanted to know was how to use the flickrapi library, you can stop reading. However, there's some more magic built into the library that you might be interested in, in which case you should totally keep reading.

There are a number of special options that can be set to effect different authentication procedures. Calling the authenticate function with an options object means the following options can also be passed:

    options = {
      ...
      // console.logs the auth URL instead of opening a browser for it.
      nobrowser: true,
      // only performs authentication, without building the Flickr API.
      noAPI: true,
      // suppress the default console logging on successful authentication.
      silent: true,
      // suppress writing progress bars to stdout
      progress: false
      ...
    }

If you use the noAPI option, the authentication credentials can be extracted from the options object inside the callback function that you pass along. The options.access_token and options.access_token_secret will contain the result of the authentication procedure.

If, for some reason, you want to (re)compile the client-side library, you can run the

    $> node compile

command to (re)generate a flickrapi.js client-side library, saved to the browser directory.
This generates a sparse library that will let you call all public methods (but currently not any method that requires read-private, write, or delete permissions), but will not tell you what's wrong when errors occur. If you need the extended information, for instance in a dev setting, use

    $> node compile dev

to generate a flickrapi.dev.js library that has all the information needed for development work; simply use this during development and use the flickrapi.js library in production. Note that no min version is generated; for development there is no sense in using one, and the savings on the production version are too small to matter (it's only 10kb smaller). If your server can serve content gzipped, the minification will have no effect on the gzipped size anyway (using gzip, the plain library is ~4.5kb, with the dev version being ~30kb).

Once you have a Flickr API object in the form of the flickr variable, the options can be found as flickr.options, so you don't need to pass those on all the time. This object may contain any of the following values (some are quite required, others are entirely optional, and some are automatically generated as you make Flickr API calls):

###api_key
your API key.

###secret
your API key secret.

###user_id
your user id, based on your first-time authorisation.

###access_token
the preauthorised Flickr access token.

###access_token_secret
its corresponding secret.

###oauth_timestamp
the timestamp for the last flickr API call.

###oauth_nonce
the cryptographic nonce that request used.

###force_auth
true or false (defaults to false) to indicate whether to force oauth signing for functions that can be called both key-only and authenticated for additional data access (like the photo search function)

###retry_queries
if used, Flickr queries will be retried if they fail.

###afterDownsync
optional; you can bind an arg-less callback function here that is called after a downsync() call finishes.

###permissions
optional 'read', 'write', or 'delete'. defaults to 'read'.
###nobrowser
optional boolean, console.logs the auth URL instead of opening a browser window for it.

###noAPI
optional boolean, performs authentication without building the Flickr API object.

###silent
optional boolean, suppresses the default console logging on successful authentication.

###progress
optional boolean, suppresses writing progress bars to stdout if set to false

###requestOptions
adds ability to pass default options for request module
Dissecting Google's Billion Word Language Model Part 1: Character Embeddings

Earlier this year, some researchers from Google Brain published a paper called Exploring the Limits of Language Modeling, in which they described a language model that improved perplexity on the One Billion Word Benchmark by a staggering margin (down from about 50 to 30). Last week, they released that model. As someone with an interest in character-aware language models, I've been looking forward to sniffing around this thing. In this post, I'll go into the very first layer of the model: character embeddings.

Background - language models

To begin with, let's define what we mean by a language model. A language model is just a probability distribution over sequences of words. Given a sentence like "Hello world", or "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo", the model outputs a probability, telling us how likely that sentence is. Language models are evaluated by their perplexity on heldout data, which is essentially a measure of how likely the model thinks that heldout data is. Lower is better.

The lm_1b language model takes one word of a sentence at a time, and produces a probability distribution over the next word in the sequence. Therefore it can calculate the probability of a sentence like "Hello world." as...

    P("<S> Hello world . </S>") =
        product(P("<S>"),
                P("Hello" | "<S>"),
                P("world" | "<S> Hello"),
                P("." | "<S> Hello world"),
                P("</S>" | "<S> Hello world ."))

("<S>" and "</S>" are beginning and end of sentence markers.)

The lm_1b architecture

The lm_1b architecture has three major components, shown in the image on the right:

- The 'Char CNN' stage (blue) takes the raw characters of the input word and produces a word-embedding.
- The LSTM (yellow) takes that word representation, along with its state vector (i.e. its memory of words it's seen so far in the current sentence), and outputs a representation of the word that comes next.
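The decomposition above is just a product of per-word conditional probabilities; in practice you sum log-probabilities to avoid numerical underflow on long sentences. A minimal sketch — the probability values here are made up for illustration, not produced by lm_1b:

```python
import math

def sentence_logprob(cond_probs):
    """Log-probability of a sentence, given the model's conditional
    probability P(w_i | w_1 .. w_{i-1}) for each token in turn.
    Summing logs is equivalent to multiplying the raw probabilities."""
    return sum(math.log(p) for p in cond_probs)

# Made-up conditionals for the five tokens of "<S> Hello world . </S>"
probs = [0.9, 0.01, 0.05, 0.3, 0.8]
logp = sentence_logprob(probs)
print(math.exp(logp))  # the joint probability of the whole sentence
```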
- A final softmax layer (green) learns a distribution over all the words of the vocabulary, given the output of the LSTM.

Char CNN?

This is short for character-level convolutional neural network. If you don't know what that means, forget I said anything - because in this post, I'll be focusing on what happens before the network does any convolving. Namely, character embeddings.

Character embeddings?

The most obvious way to represent a character as input to our neural network is to use a one-hot encoding. For example, if we were just encoding the lowercase Roman alphabet, we could say...

    onehot('a') = [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
    onehot('c') = [0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]

And so on. Instead, we're going to learn a "dense" representation of each character. If you've used word embedding systems like word2vec, then this will sound familiar. The first layer of the Char CNN component of the model is just responsible for translating the raw characters of the input word into these character embeddings, which are passed up to the convolutional filters.

In lm_1b, the character alphabet is of size 256 (non-ascii characters are expanded into multiple bytes, each encoded separately), and the space these characters are embedded into is of dimension 16. For example, 'a' is represented by the following vector:

    array([ 1.10141766, -0.67602301,  0.69620615,  1.96468627,
            0.84881932,  0.88931531, -1.02173674,  0.72357982,
           -0.56537604,  0.09024946, -1.30529296, -0.76146501,
           -0.30620322,  0.54770935, -0.74167275,  1.02123129],
          dtype=float32)

That's pretty hard to interpret. Let's use t-SNE to shrink our character embeddings down to 2 dimensions, to get a sense of where they fall relative to one another. t-SNE will try to arrange our embeddings so that pairs of characters that are close together in the 16-dimensional embedding space are also close together in the 2-d projection.
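The relationship between the two representations is worth making concrete: looking up a 16-d embedding is the same as multiplying the 256-d one-hot vector by a 256x16 embedding matrix. The matrix below is a random stand-in, since the real one is learned during training:

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET, DIM = 256, 16                   # lm_1b's alphabet and embedding sizes
E = rng.standard_normal((ALPHABET, DIM))  # stand-in for the learned matrix

def onehot(byte_value):
    v = np.zeros(ALPHABET)
    v[byte_value] = 1.0
    return v

def embed(char):
    # A "dense" lookup is just selecting row ord(char) of E.
    return E[ord(char)]

# The one-hot matrix product selects exactly that row.
assert np.allclose(onehot(ord('a')) @ E, embed('a'))
```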
A few interesting regularities jump out here:

- not only are digits clumped closely together, they're basically arranged in order along a snaky number line!
- in many cases, the uppercase and lowercase versions of a letter are very close. However, a few, such as "k/K", are widely separated. In the original embedding space, 50% of lowercase letters have their uppercase counterpart as their nearest alphabetical neighbor.
- in the upper-right corner, all the ASCII punctuation marks that can end a sentence ( . ? ! ) are in a tight huddle
- meta characters (in salmon-pink) form a loose cluster. Non-terminal punctuation forms an even looser one (with "%" and ")" as outliers).

There's also a lack of regularity that's worth noting here. Other than the (inconsistent) association of uppercase/lowercase pairs, alphabetical characters seem to be arranged randomly. They're well-separated from one another, and are smeared all across the projected space. There's no island of vowels, for example, or liquid consonants. Nor is there a clear overall separation between uppercase and lowercase characters.

It could be that this information is present in the embeddings, but that t-SNE just doesn't have enough degrees of freedom to preserve those distinctions in a 2-d planar projection. Maybe by inspecting each dimension in turn, we can pick up on some more subtleties in the embeddings?

Hm, maybe not. You can check out (bigger) plots of all 16 dimensions here - I haven't managed to extract much signal from them, however.

Vector math

Perhaps the most famous feature of word embeddings is that you can add and subtract them, and (sometimes) get results that are semantically meaningful. For example:

    vec('woman') + (vec('king') - vec('man')) ~= vec('queen')

It'd certainly be interesting if we could do the same with character vectors. There aren't a lot of obvious analogies to be made here, but what about adding or subtracting 'uppercaseness'?
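The "50% of lowercase letters have their uppercase counterpart as nearest alphabetical neighbor" figure is easy to recompute. A sketch, using random vectors as stand-ins for the real 16-d embeddings (with the actual lm_1b matrix you'd index rows by byte value instead):

```python
import numpy as np
from string import ascii_lowercase, ascii_uppercase

rng = np.random.default_rng(1)
letters = ascii_lowercase + ascii_uppercase
emb = {c: rng.standard_normal(16) for c in letters}  # stand-in vectors

def nearest_alpha(c):
    """Nearest alphabetical neighbor of c by Euclidean distance."""
    others = [o for o in letters if o != c]
    return min(others, key=lambda o: np.linalg.norm(emb[c] - emb[o]))

frac = sum(nearest_alpha(c) == c.upper() for c in ascii_lowercase) / 26
print(frac)  # ~0.5 for the real embeddings; near 0 for these random ones
```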
    def analogy(a, b, c):
        """a is to b, as c is to ___,
        Return the three nearest neighbors of c + (b-a) and their distances.
        """
        # ...

'a' is to 'A' as 'b' is to...

    >>> analogy('a', 'A', 'b')
    b: 4.2
    V: 4.2
    Y: 5.1

Okay, not a good start. Let's try some more:

    >>> analogy('b', 'B', 'c')
    c: 4.2
    C: 5.2
    +: 5.9

    >>> analogy('b', 'B', 'd')
    D: 4.2
    ,: 4.9
    d: 5.0

    >>> analogy('b', 'B', 'e')
    N: 4.7
    ,: 4.7
    e: 5.0

Partial success? Repeating this a bunch of times, we get the 'right' answer every once in a while, but it's not clear if it's any better than chance. Remember that half of lowercase letters have their uppercase counterpart as their nearest neighbour. So if we strike off a short distance from a letter in a random direction, we'll probably land near its counterpart a decent proportion of the time for that reason alone.

Vector math - for real this time

I guess the only thing left to try is...

    >>> analogy('1', '2', '2')
    2: 2.4
    E: 3.6
    3: 3.6

    >>> analogy('3', '4', '8')
    8: 1.8
    7: 2.2
    6: 2.3

    >>> analogy('2', '5', '5')
    5: 2.7
    6: 4.0
    7: 4.0

    # It'd be really surprising if this worked...
    >>> nearest_neighbors(vec('2') + vec('2') + vec('2'))
    2: 6.0
    1: 6.9
    3: 7.1

Okay, note to self: do not use character embeddings as tip calculator.

It seems useful to embed digits of similar magnitude close to each other, for reasons of substitutability. '36 years old' is pretty much substitutable for '37 years old' (or even '26 years old'), '$800.00' is more like '$900.00' or '$700.00' than '$100.00'. And based on our t-SNE projection, it seems like the model has definitely done this. But that doesn't mean that it's arranged the digits on a line. (For one thing, there are probably some digit-specific quirks the model needs to learn. For example, that years usually start with '20' or '19'.)

Making sense of it all

Before guessing at why certain characters were embedded in such-and-such a way, we should probably ask: why are they using character embeddings in the first place?
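The post elides the body of the analogy function; one way to fill it in (the Euclidean distance metric and the tie-breaking are my assumptions — the post doesn't say exactly what it used):

```python
import numpy as np

def analogy(a, b, c, emb, k=3):
    """a is to b, as c is to ___
    Print the k characters nearest to c + (b - a), with distances."""
    target = emb[c] + (emb[b] - emb[a])
    ranked = sorted(emb, key=lambda ch: np.linalg.norm(emb[ch] - target))
    for ch in ranked[:k]:
        print("%s: %.1f" % (ch, np.linalg.norm(emb[ch] - target)))

# With a toy embedding where the analogy holds exactly,
# 'B' comes back first, at distance 0:
emb = {'a': np.array([0., 0.]), 'A': np.array([1., 0.]),
       'b': np.array([0., 1.]), 'B': np.array([1., 1.])}
analogy('a', 'A', 'b', emb, k=2)
```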
One reason could be that it reduces the model complexity. The feature detectors in the Char CNN part of the model only need to learn 16 weights for every character they're looking at, rather than 256. Removing the character embedding layer increases the number of feature detector weights by 16x, from ~460k (4096 filters * max width of 7 * 16-dimensional embedding) to ~7.3m. That sounds like a lot, but the total number of parameters for the whole model (CNN + LSTM + Softmax) is 1.04 billion! So a few extra million shouldn't be a big deal.

In fact, lm_1b uses char embeddings because their Char CNN component is modeled after Kim et al. 2015, who used char embeddings. A footnote from that paper gives this explanation:

    Given that |C| is usually small, some authors work with onehot
    representations of characters. However we found that using lower
    dimensional representations of characters (i.e. d < |C|) performed
    slightly better.

(Presumably they mean better performance in the sense of perplexity, rather than e.g. training speed.)

Why would character embeddings improve performance? Well, why do word embeddings improve performance in natural language problems? They improve generalization. There are a lot of words out there, and a lot of them occur infrequently. If we tend to see "raspberry", "strawberry", and "gooseberry" in similar contexts, we'll give them nearby vectors. Now if we've seen the phrases "strawberry jam" and "raspberry jam" a few times, we can guess that "gooseberry jam" is a reasonably probable phrase, even if we haven't seen it in our corpus once.

Generalizing over characters?

At first, the analogy with word vectors doesn't seem like a good fit. The billion word benchmark corpus has 800,000 distinct words, whereas we're dealing with a mere 256 characters. Is generalization really a concern? And how do we generalize from a "g" to other characters?

The answer seems to be that we don't really.
We can sometimes generalize between uppercase and lowercase versions of the same character, but other than that, alphabetical characters have distinct identities, and they're going to occur so frequently that generalization isn't a concern. But are there characters that occur infrequently enough that generalization is important? Let's see...

Well, it's not quite Zipfian (we get closer to a straight line with only the y-axis being on a log scale, as above, rather than with a log-log scale), but there's clearly a long tail of infrequent characters (mostly non-ASCII code points, and some rarely occurring punctuation). Maybe our embeddings are helping us generalize our reasoning about those characters.

In the t-SNE plot above, I only showed characters that occur fairly frequently (I set the threshold at the least frequent alphanumeric character, 'X'). What if we plot the embeddings for all characters that appear in the corpus at least 50 times?

This would seem to support our hypothesis! As before, our letters (green markers) are pretty antisocial, and rarely in touching range of one another. But in several places, our long-tail pink characters form tight clusters or lines.

My (handwavey) best guess: alphabetical characters get distinct, widely-separated embeddings, but characters that occur infrequently (the pinks) and/or characters with a high degree of substitutability (digits, terminal punctuation) will tend to be placed together.

That's it for now. Thanks to the Google Brain team for releasing the lm_1b model. If you want to do your own experiments on it, be sure to check out their instructions here. I've made the scripts I used to generate the visualizations in this post available here - feel free to re-use/modify them, though they're messy as hell.

Tune in next time, when we'll look at the next stage of the Char CNN pipeline - convolutional filters!

Tagged: Machine Learning, Data Visualization
http://colinmorris.github.io/blog/1b-words-char-embeddings
ldr.ld

ENTRY(_start)

SECTIONS
{
	. = 0x25800;
	.text : { *(.text) }
	.data : { *(.data) *(.rodata) }
	.bss : { bss = .; *(.bss) }
}

types.h

#ifndef _TYPES_H_
#define _TYPES_H_

typedef char s8;
typedef unsigned char u8;
typedef short s16;
typedef unsigned short u16;
typedef int s32;
typedef unsigned int u32;
typedef long long int s64;
typedef unsigned long long int u64;

#endif

start.S

.text

/* Loader entry. */
.global _start
_start:
	/* Setup stack pointer. */
	ila sp, 0x3DFA0

	/* Well... */
	brsl lr, main

_hang:
	br _hang

main.c

#include "types.h"

void *_memcpy(void *dst, void *src, u32 len);

void main()
{
	//Copy eid root key/iv to shared LS.
	_memcpy((u8 *)0x3E000, (u8 *)0x00000, 0x30);

	//Hang (the PPU should copy the key/iv from shared LS now).
	while(1);
}

void *_memcpy(void *dst, void *src, u32 len)
{
	u8 *d = (u8 *)dst;
	u8 *s = (u8 *)src;
	u32 i;

	for(i = 0; i < len; i++)
		d[i] = s[i];

	return dst;
}
http://www.ps3news.com/ps3-cfw-mfw/video-irismanager-3-56-jfw-dh-ps3-cfw-backups-open-pstore/page-9/
YARP ROS Interoperation Past and Future But Not Now This page gives historical steps in YARP/ROS interoperation, and is occasionally used during development of new features. You probably want which gives the current steps needed for interoperation. This Is Not the Page You Are Looking For - See for how to use ROS with YARP. - This page is historical, and contains layer after layer of improving methods which will just confuse you. - From time to time this page contains development information for a new, otherwise undocumented layer that will also confuse you. - Might we suggest you visit. Setup: Compiling YARP to support ROS You'll need to turn the following flags on in CMake before (re-)compiling YARP: * CREATE_OPTIONAL_CARRIERS (for support of ROS wire protocols) * CREATE_IDLS (for support of ROS .msg/.srv files) Those flags in turn create extra options, and the following should be turned on: * ENABLE_yarpcar_tcpros_carrier (the usual protocol for topics) * ENABLE_yarpcar_rossrv_carrier (tcpros with minor tweaks for services) * ENABLE_yarpcar_xmlrpc_carrier (used by ros nameserver and node interfaces) * ENABLE_yarpidl_rosmsg (a program to convert .msg/.srv files into YARP-usable form) At some point, all these will become available by default, but that is not yet the case. Setup: directing YARP clients to use the ROS name-server In fact, you can run this server on a machine *without* a ROS installation, and it will fall back on interrogating the website for type information. This will work for types used in ROS packages documented on that site. Let's start a ROS program to print out strings: $ rosrun roscpp_tutorials listener This program subscribes to a topic called "/chatter". Let's publish something there from YARP: $ yarp write /chatter@/yarp_writer yarp: Port /yarp_writer active at tcp://192.168.1.2:54473 yarp: Port /chatter+@/yarp_writer active at tcp://192.168.1.2:50896 yarp: Sending output from /chatter+@/yarp_writer to /listener using tcpros hello? 
Once we type a message (here "hello?") and hit return, we should see it echoed on the listener's console: [ INFO] [1386605949.838711935]: I heard: [hello?] Under the hood, the yarp port has found the type of data expected (from the listener) and is matching what you enter with that type. If you try to send a number, you'll get a message like this: 42 yarp: Structure of message is unexpected (expected std_msgs/String) (If you actually want to send a string that looks like an integer, just put it in quotes). In this case, the listener is expecting a message of type std_msgs/String. Trying things out: using ROS services Let's start a ROS service that adds two integers: rosrun rospy_tutorials add_two_ints_server This creates a service named /add_two_ints that expects two integers and gives back an integer. We can use it from yarp rpc, for example: $ yarp rpc /add_two_ints 22 20 Response: 42 1 -10 Response: -9 This looks straightforward, but it relies critically on YARP being able to determine ROS types. If you were to shut down the yarpidl_rosmsg server, here is what you would see: $ yarp rpc /add_two_ints Do not know anything about type 'rospy_tutorials/AddTwoInts' Could not connect to a type server to look up type 'rospy_tutorials/AddTwoInts' With the type server running, YARP can make the needed translations. Let's try the opposite direction, setting up a ROS-style rpc server, for example this: $ yarp rpcserver /add_two_ints@/my_int_server --type rospy_tutorials/AddTwoInts Notice that we specify a node name for the server here, and also we give the type of data it expects and returns (ROS-style clients expect us to know that). Now from ROS we can do: $ rosrun roscpp_tutorials add_two_ints_client 3 4 On the "yarp rpcserver" terminal we will now see: Waiting for a message... yarp: Receiving input from /add_two_ints_client to /add_two_ints-1@/my_int_server using tcpros Message: 3 4 Reply: It is waiting for us to reply. 
We can type in whatever number we like ("7" if we are feeling constructive), and it will be reported on the "roscpp_tutorials" terminal: [ INFO] [1386793386.003997149]: Sum: 7 Trying things out: sending/receiving simple messages from code Please see "example/ros" in the YARP source code for full examples. For simple cases, we can just use YARP Bottles whose content matches ROS types. For example, to call a ROS service that adds two integers, we could do this (error checking abbreviated, see example/ros directory for full code): #include <stdio.h> #include <stdlib.h> #include <yarp/os/all.h> using namespace yarp::os; int main(int argc, char *argv[]) { if (argc!=3) return 1; // expect two integer arguments Network yarp; RpcClient client; if (!client.open("/add_two_ints@/yarp_add_int_client")) return 1; Bottle msg, reply; msg.addInt(atoi(argv[1])); msg.addInt(atoi(argv[2])); if (!client.write(msg,reply)) return 1; printf("Got %d\n", reply.get(0).asInt()); return 0; } An example CMakeLists.txt file to compile this and link with YARP would be: cmake_minimum_required(VERSION 2.8.7) find_package(YARP REQUIRED) include_directories(${YARP_INCLUDE_DIRS}) add_executable(add_int_client_v1 add_int_client_v1.cpp) target_link_libraries(add_int_client_v1 ${YARP_LIBRARIES}) On the ROS side we'd do: rosrun rospy_tutorials add_two_ints_server Then on the YARP side we can try it out (assume the above program is compiled as add_int_client_v1): $ ./add_int_client_v1 4 6 yarp: Port /yarp_add_int_client active at tcp://192.168.1.2:35731 yarp: Port /add_two_ints+1@/yarp_add_int_client active at tcp://192.168.1.2:35004 Got 10 Now looking at the code, there are some things to note. We use an RpcClient. This is a regular yarp Port, configured for writing to a single target. 
Equivalently we could have used this: Port client; client.setRpcClient(); If we try to use a regular port without telling YARP how we'll be using it (by calling one of setRpcClient(), setRpcServer(), setWriteOnly(), setReadOnly()), YARP will complain because it won't know how to describe it to ROS. However, if you have a YARP-using program that you can't easily modify to add such a call, you can sneak the needed information into the port name. For new code, it may be convenient to create ROS-like nodes explicitly rather than having names that bundles node and topic/service names together. YARP now has a yarp::os::Node class that can be used like this (modifying our example for adding two integers): #include <stdio.h> #include <stdlib.h> #include <yarp/os/all.h> using namespace yarp::os; int main(int argc, char *argv[]) { if (argc!=3) return 1; // expect two integer arguments Network yarp; Node node("/yarp_add_int_client"); RpcClient client; if (!client.open("add_two_ints")) return 1; // names that omit leading "/" belong to node Bottle msg, reply; msg.addInt(atoi(argv[1])); msg.addInt(atoi(argv[2])); if (!client.write(msg,reply)) return 1; printf("Got %d\n", reply.get(0).asInt()); return 0; } Here's a YARP equivalent of the ROS listener/talker tutorial: // listener #include <stdio.h> #include <yarp/os/all.h> using namespace yarp::os; int main(int argc, char *argv[]) { Network yarp; Node node("/yarp/listener"); Port port; port.setReadOnly(); if (!port.open("chatter")) return 1; while (true) { Bottle msg; if (!port.read(msg)) continue; printf("Got [%s]\n", msg.get(0).asString().c_str()); } return 0; } // talker #include <stdio.h> #include <yarp/os/all.h> using namespace yarp::os; int main(int argc, char *argv[]) { Network yarp; Port port; port.setWriteOnly(); if (!port.open("/chatter@/yarp/talker")) return 1; for (int i=0; i<1000; i++) { char buf[256]; sprintf(buf,"hello ros %d", i); Bottle msg; msg.addString(buf); port.write(msg); printf("Wrote: [%s]\n", buf); 
Time::delay(1); } return 0; } One thing to watch out for is that if you stop a program using ^C or if it crashes, YARP will not yet unregister your ports with ROS. If this bothers you, either add a signal handler, or run "rosnode cleanup" from time to time - and my apologies. Trying things out: sending/receiving complicated messages from code When dealing with larger, more complex messages (e.g point clouds), Bottles get awkward and it is desirable to be able to access parts of the message in a more structured way. For this case, you can use the yarpidl_rosmsg program. We met this program before when we ran it as a type server. We can also use it as a utility to translate ROS .msg/srv files into YARP-compatible header files. Given a .msg file (or a type name that rosmsg can find), yarpidl_rosmsg will produce a C++ class that implements the yarp::os::Portable interface and so can be written to or read from a YARP port. Likewise for a .srv file. If the file uses nested types, a header file will be generated for each type needed. As a very simple example, to translate the AddTwoInts type we've been using in examples, we could do: yarpidl_rosmsg --web true rospy_tutorials/AddTwoInts (I use --web true because I'm testing on a machine without a full ROS install; if you have all the needed packages then that could be omitted). This would generate the following header file: class rospy_tutorials_AddTwoInts : public yarp::os::Portable { public: // ... yarp::os::NetInt64 a; yarp::os::NetInt64 b; bool read(yarp::os::ConnectionReader& connection) { ... } bool write(yarp::os::ConnectionWriter& connection) { ... } }; Note we can directly access to 64-bit integers, a and b, and have read/write serialization methods that YARP ports can use. We also get a class for the reply: class rospy_tutorials_AddTwoIntsReply : public yarp::os::Portable { public: // ... yarp::os::NetInt64 sum; bool read(yarp::os::ConnectionReader& connection) { ... 
} bool write(yarp::os::ConnectionWriter& connection) { ... } }; In code, we could use an AddTwoInts service as follows: rospy_tutorials_AddTwoInts msg; rospy_tutorials_AddTwoIntsReply reply; msg.a = 20; msg.b = 22; port.write(msg,reply); printf("Sum: %d\n", reply.sum); // should print 42 Code generation can be automated using the yarp_idl_to_dir macro, for example: include(YarpIDL) set(generated_libs_dir "${CMAKE_CURRENT_BINARY_DIR}") yarp_idl_to_dir(Demo.msg ${generated_libs_dir}) See src/idls/rosmsg/tests/demo/ in the YARP source code for examples of usage. ...
http://wiki.icub.org/wiki/YARP_ROS_Interoperation_Past_and_Future_But_Not_Now
=head1 NAME Enbugger - Enables the debugger at runtime. =head1 SYNOPSIS my $ok = eval { ...; 1 }; if ( not $ok ) { # Oops! there was an error! Enable the debugger now! require Enbugger; Enbugger->stop; } =head1. =head1 INSTALLATION To install this module, run the following commands: perl Makefile.PL make make test make install =head1 USING THE DEBUGGER =head2 Loading'; =head2 Unloading the debugger You wish. There is no implemented way to unload the debugger. Here's how you'd do it if you wanted to implement this feature. =over =item # Set the various C pointers set by Perl_init_debugger to NULL =item # Clear the DB:: package. Beware of the C<DB> and C<sub> functions. If you ever load another debugger again you'll need to ensure you have at least stub functions left or you could suffer a fatal, deadly death. =item # Change all C<dbstate> B::COP nodes back to be C<nextstate> ops. =back =head1 GETTING INTO THE DEBUGGER =head2 Programatically Call the public class method C<< Enbugger->stop >>. At a minimum, it will just request that your current debugger stop execution. If needed, it'll go as far as loading a debugger. =head3 An example if ( ... ) { # an unlikely occurance I'd like to manually inspect if or when # it happens. Enbugger->stop; } =head2 On %SIG events If you load the ); =head3$ =head2 (L<>): L =head1 PUBLIC API Enbugger has a public API where you as the user can trigger the debugger from your code or affect which debugger is loaded. =over =item CLASS-E<gt>stop Stops execution and signals your debugger. Loads a debugger with C<< CLASS->load_debugger >> if one hasn't been loaded yet. =item CLASS-E<gt>load_debugger( DEBUGGER ) =item CLASS-E<gt>load_debugger Loads your requested debugger. Defaults to using C<$Enbugger::DefaultDebugger> if you don't specify a debugger. If a debugger has already been loaded, either returns silently if the current debugger is what you requested or throws an exception if you requested a different debugger. 
=item $Enbugger::DefaultDebugger The default debugger. This is C<perl5db> unless you change it. =item CLASS-E<gt>write( TEXT ) Writes some thing to the console or wherever is appropriate for your current debugger. =item CLASS-E<gt>DEBUGGER_CLASS Returns the class name for the currently loaded debugger class. If no debugger has been loaded yet, this contrives to load the default debugger. =back =head1 PLUGGABLE DEBUGGERS Enbugger supports registering debuggers. Any debugger intended to be used must be registered first. The default, proper behavior is to register all possible debuggers. =head2 Registered debuggers The following is a list of all default, registered debuggers. So far only the L<perl5db.pl> debugger has received any testing. =over =item perl5db This is the default perl debugger. See also L<Enbugger::perl5db> and L<perl5db.pl>. =item trepan This is the L<Devel::Trepan> debugger. See also L<Devel::Trepan> or L<>. =cut #=item ebug # #This is the L<Devel::ebug> debugger. See also L<Enbugger::ebug>. # #=item sdb # #This is the L<Devel::sdb> debugger. See also L<Enbugger::sdb>. # #=item ptkdb # #This is the L<Devel::ptkdb> debugger. See also L<Enbugger::ptkdb>. =back =head2. =over =item CLASS-E<gt>register_debugger( DEBUGGER ) Register a debugger with L<Enbugger>. =back =head3 Required methods You must implement the following methods. =over =item CLASS-E<gt>_stop Your debugger must implement a C<_stop> method. This method will be called by the Enbugger-E<gt>stop method. When this method is called, you should stop the current process and invoke your debugger. =item CLASS-E<gt>_load_debugger Your debugger must implement a C<load_debugger> method. It will be called when your debugger should be loaded. Your method is responsible for loading the debugger. =item CLASS-E<gt>_write( TEXT ) Your debugger must implement a C<_write> method. This method should accept text to log to the console or whatever is appropriate. 
=back

=head1 UTILITY FUNCTIONS

=over

=item CLASS-E<gt>load_source

Loads the source code for the program.

=item CLASS-E<gt>load_file( FILE )

Loads the source code for a specific file.

=item CLASS-E<gt>instrument_runtime

Sets all available breakpoints to be either breakable or not. This
avoids making any part of the Enbugger:: or DB:: packages a part of
something that's visible to the debugger.

=item instrument_op( B::*OP )

A function that modifies L<B::COP> objects.

=back

=head1 PRIVATE METHODS

The following methods exist but I'm not sure whether they'll continue
to exist in their current form so they're private for now.

=over

=item CLASS-E<gt>_compile_with_nextstate

=item CLASS-E<gt>_compile_with_dbstate

=item CLASS-E<gt>_instrumented_ppaddr

=item CLASS-E<gt>_uninstrumented_ppaddr

=back

=head1 DEPENDENCIES

A C compiler.

=head1 SUPPORT AND DOCUMENTATION

After installing, you can find documentation for this module with the
perldoc command.

    perldoc Enbugger::Restarts

You can also look for information at:

=over

=item RT, CPAN's request tracker

L<>

=item AnnoCPAN, Annotated CPAN documentation

L<>

=item CPAN Ratings

L<>

=item Search CPAN

L<>

=back

=head1 AUTHOR

Joshua ben Jore E<lt>jjore@cpan.orgE<gt>

=head1 ACKNOWLEDGEMENTS

=over

=item Brock Wilcox (awwaiid)

=item R. Bernstein (rocky)

=item Erkan Yilmaz

=item Robert Messer at IntelliSurvey for sponsoring breakpoints

=back

=head1 COPYRIGHT AND LICENCE.

=begin emacs

## Local Variables:
## mode: pod
## mode: auto-fill
## End:

=end emacs
https://metacpan.org/release/Enbugger/source/lib/Enbugger.pod
[SOLVED] UART.read and UART.write not cooperating with modbus slave

Hi, I'm having trouble getting my LoPy to work as a simple modbus RTU master. I have the LoPy UART1 connected to a weather station via a max3232 RS232<-->TTL converter (set to 3.3V TTL). I've then tried the following code:

from machine import UART
import struct
import time

port = UART(1,19200)
port.init(19200,bits=8,parity=UART.EVEN,stop=1)

while 1:
    port.write(b'\x01\x03\x00\x03\x00\x16\x34\x04')
    print('data sent')
    time.sleep(0.5)
    for i in range(5000):
        if port.any() > 0:
            print(port.read(port.any()))
        time.sleep_ms(1)

This sends a modbus request to read holding registers. The message is taken directly from a Python script that I have working on my PC using serial.Serial, so I'm confident that the message should be correct. I've used a serial sniffer to confirm that the message sent by the LoPy looks the same as that sent from the PC using serial.Serial. Loopback tests have worked and I can UART.read/UART.write data to/from the LoPy from/to my PC as expected, so I'm confident that the baud rate, parity, etc. are all good.

However, when I run this script, nothing is read. So the message is either being written incorrectly (incorrect timing, possibly?) or the read command is just not picking up the incoming data... I've tried setting the weather station and LoPy up with different baud rates, parity, etc. with no luck - I never see any data come in. I also tried sending data continuously from the weather station (every second), and still can't read anything on the LoPy, which makes me think it must be an issue with UART.read (or its compatibility with the weather station somehow...). Python on my PC can read the continuous stream of data using Serial.read. Any help would be great! I've been banging my head against this for a few days now...

Problem solved thanks to @daniel's suggestion.
Playing around with the oscilloscope, I realised that the TX and RX wires on the RS232 side of the max3232 converter needed to be swapped (even though the old configuration had worked for weather station-PC communication and LoPy-PC communication). I have data coming in fine now. :)

@daniel OK, I found an oscilloscope, and can see the request but no response from the weather station. So there must be an issue in the message itself, or the UART settings, I guess? Could it be the change from \x16\x34 to \x164 mentioned below? There isn't any flow control that I'm aware of - I'm only using RX and TX wires, with the default 'flow' in UART (which I think is 0?). I don't have a logic analyser at hand, but I'll try to borrow one today and will report back with results.

One thing that I have noticed is that micropython interprets b'\x01\x03\x00\x03\x00\x16\x34\x04' as b'\x01\x03\x00\x03\x00\x164\x04'. I don't have very much experience with hex, binary, etc. - are these two messages equivalent?

@daniel do you have some kind of flow control enabled on the bus? Maybe the LoPy is not setting the control pin properly? Can you hook up a logic analyser and check that data is actually flowing into the LoPy?
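As an aside, the trailing \x34\x04 in the request frame can be checked independently: Modbus RTU appends a CRC-16/MODBUS checksum of the preceding bytes, low byte first. A quick sketch in plain Python (runs on the desktop as well as the LoPy):

```python
def crc16_modbus(data):
    """CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

frame = b'\x01\x03\x00\x03\x00\x16'   # slave 1, function 3, start 0x0003, count 0x16
crc = crc16_modbus(frame)
print(hex(crc))                       # 0x434
print(bytes([crc & 0xFF, crc >> 8]))  # b'4\x04', i.e. the bytes 0x34 0x04
```

That also answers the repr question in the thread: b'\x34' and b'4' are the same byte, since 0x34 is the ASCII code for '4', so b'\x164\x04' and b'\x16\x34\x04' really are identical.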
https://forum.pycom.io/topic/2095/solved-uart-read-and-uart-write-not-cooperating-with-modbus-slave
Would be nice to have something similar for vim :-) -- --) > > Hello all, > > Attached is an updated script for generating PI files to provide > autocomplete on standard .NET objects. > > It now handles all the standard .NET member types (including static > properties, enumeration fields, indexers, events and so on). > > It also recurses into sub-namespaces generating new pi-files > for all of > them. > > This script is hardcoded to add references to, and then generate PI > files for: > > System > System.Data > System.Drawing > System.Windows.Forms > > It generates 90 pi files (90 namespaces) taking up 24mb! The > autocomplete it provides is awesome though. :-) > > I had to do a fair bit of violence to the standard > generate_pi.py script > so I *doubt* it is desirable to merge it back in. Obviously > very happy > for this to be included with Wing if you want, or merged if > you think it > is worth it. Is it ok for me to offer this for download from > my site? If > I make further changes I will email this list. > > The big thing to add is the return type for methods. > > Is it possible to specify return types for properties? (Currently any > attribute without an obvious parallel in Python I have turned into a > property in the PI files). > > The only real caveat with the current script (that I am aware > of - bug > reports and contributions welcomed) is that None is a common > enumeration > field member. This is invalid syntax in Python, so I rename > these to None_. > > There are quite a few minor changes sprinkled through the code - plus > the __main__ part of the script is very different. I have > tried to mark > changes with a # CHANGE: comment, but it should be relatively > amenable > to diffing anyway... > > For reference I was using IronPython 2.0.1, with .NET 3.5 > installed and > Wing 3.2beta 1. 
> > All the best, > > Michael Foord > > Michael Foord wrote: > > Hello all, > > > > I've created a modified version of the 'generate_pi.py' which > > generates the interface files for .NET libraries. It is attached. > > > > At the moment it generates PI files for the following assemblies / > > namespaces (hardwired at the bottom of the code): > > > > System > > System.Data > > System.Drawing > > System.Windows.Forms > > > > To run it, create a new directory and add this to the > 'Interface File > > Path' (File menu -> Preferences -> Source Analysis -> Advanced -> > > Insert). > > > > Then from the command line switch to this directory (if you are on > > Vista you will need to run cmd with admin privileges due to > a defect > > explained below). Execute the command: > > > > ipy generate_pi_for_net.py > > > > This generates the pi files. It doesn't work *as well* on 64 bit > > windows because the .NET XML help files (or whatever they > are called) > > are in a different location so the docstrings are not > always available > > - which is why I am not just distributing the pi files yet. > > > > The script doesn't yet understand static properties on > classes - so it > > actually *fetches* static properties rather than looking at the > > descriptor (which is available in the class __dict__ so > should be easy > > to fix). This is what causes inadvertent registry lookups etc and > > therefore requires admin privileges. > > > > It doesn't yet understand multiple overloads. This may require a > > change to Wing or may not matter. > > > > It isn't yet able to do anything with the information about return > > types - which would allow Wing to know the type of objects > returned by > > methods. This may be easy to add? > > > > It is late so I am going to bed. At some point I will explain the > > simple changes I had to make to the standard generate_pi.py script > > (although they are mostly straightforward). 
> > I will also do further work on it as it will be very useful to me...
> >
> > All the best,
> >
> > Michael
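The static-property pitfall discussed in the quoted mail (fetching an attribute runs its getter, while looking it up in the class __dict__ merely returns the descriptor) has a direct pure-Python analogue; the real script does the equivalent inspection on .NET types through IronPython. A small illustrative sketch:

```python
import inspect

class Example(object):
    @property
    def expensive(self):
        # Stands in for a .NET static property whose getter has
        # side effects (e.g. registry lookups).
        raise RuntimeError("getter ran!")

# Looking the attribute up in the class __dict__ returns the
# descriptor object itself; the getter never runs:
descr = Example.__dict__["expensive"]
print(isinstance(descr, property))      # True
print(inspect.isdatadescriptor(descr))  # True

# ...whereas fetching it on an instance executes the getter:
try:
    Example().expensive
except RuntimeError as e:
    print(e)                            # getter ran!
```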
https://mail.python.org/pipermail/ironpython-users/2009-April/010132.html
Hello there! I'm using JBoss ESB 4.10 and have read a lot about how exception handling is done. I know that a custom composer class can be defined for my gateway (which receives ESB-unaware messages) and in that custom composer I can set the faultTo of the call. The problem is that the posts I read about this topic are mainly 3-4 years old and I'm afraid they are quite outdated, since when I try to assign a custom composer to my JMS listener in the mentioned way, it simply runs into a validation error during deployment. The JMS listener definition looks like this (it's a working ESB without the custom composer property, of course):

<jms-listener
    <jms-message-filter
    <property name="composer-class" value="MyMessageComposer"></property>
</jms-listener>

And the validation error message which I got from eclipse (the later one during deployment is not too talkative):

"cvc-complex-type.2.4.d: Invalid content was found starting with element 'property'. No child element is expected at this point."

The MyMessageComposer looks like this:

public class MyMessageComposer extends AbstractMessageComposer<Object> {

    @Override
    protected void populateMessage(Message message, Object payload) throws MessageDeliverException {
        System.out.println("Payload is :" + payload.toString());
        message.getBody().add(payload);
    }
}

For a possible solution I could think about creating a proxy service which connects to the gateway and in a custom action does nothing but set the faultTo of the call and then invoke the service I'm in trouble with now, but this sounds like a bit of a hack to me.

Please explain the workaround for the exception handling with the custom message composer approach; some opinion about the proxy service idea would also be nice. Thanks a lot in advance!

Regards,
Peter

I was able to resolve this issue by modifying my jboss-esb.xml NOT directly but strictly from the JBoss ESB Editor (eclipse).
When I made the property settings on the jms-listeners from the editor, the output was exactly the same in the source file, but the validation error did not occur. I guess there's some schema file generation in the background when you edit the jboss-esb.xml with the editor, which of course does not happen if you just edit the plain xml. Hope this will help others who ran into this.
https://developer.jboss.org/message/649050?tstart=0
Technical Support

Information in this article applies to:

I am using existing code for the I2C interface for a Philips LPC2000 device. However, when I execute the following function, my software jumps to the Data Abort Handler (DAbt_Handler). What could be wrong?

#include "LPC22xx.h"

#define REG(addr) (*(volatile unsigned long *)(addr))

void InitialiseI2C (void)
{
  REG(I2C_I2CONCLR) = 0xFF;
  REG(PINSEL0) = 0x50;       // Set output pin SCL and SDA
  REG(I2C_I2CONSET) = 0x40;
  REG(I2C_I2CONSET) = 0x64;
  REG(I2C_I2DAT) = 0x42;
  REG(I2C_I2CONCLR) = 0x08;
  REG(I2C_I2CONCLR) = 0x20;
}

You need to remove the REG() macros in your source code, since the pointer construct is already part of the register definition file LPC22xx.h that is provided with the Keil CARM Compiler. Instead of REG(I2C_I2CONCLR), just use I2C_I2CONCLR. Example:

void InitialiseI2C (void)
{
  I2C_I2CONCLR = 0xFF;
  PINSEL0 = 0x50;            // Set output pin SCL and SDA
  I2C_I2CONSET = 0x40;
  I2C_I2CONSET = 0x64;
  I2C_I2DAT = 0x42;
  I2C_I2CONCLR = 0x08;
  I2C_I2CONCLR = 0x20;
}

Last Reviewed: Tuesday, July 26, 2005
http://www.keil.com/support/docs/2924.htm
In this tutorial about the Silverlight PivotViewer control I would like to explain how to make use of Custom Actions. Small labels are placed on top of items in the viewer and clicking them will trigger an event. The only downside of these actions is that they aren't fully implemented, so you'll have to extend the PivotViewer control yourself. If you haven't worked with the PivotViewer before, you might want to have a look at the Building your first PivotViewer application tutorial I wrote earlier first. This tutorial continues on that.

Start by adding a new class to a Silverlight project and naming it PivotViewerEx. Make this class inherit from PivotViewer. Create an override of the GetCustomActionsForItem method. This method is called by the PivotViewer when it is in need of some custom actions. The id of the item the user is hovering is passed to this method; you can do some filtering of the actions based on that. In this example no filtering takes place and the whole collection of custom actions is returned. The list of custom actions is defined as a property of this class. It is instantiated by the constructor.

public class PivotViewerEx : PivotViewer
{
    public List<CustomAction> CustomActions { get; set; }

    public PivotViewerEx()
    {
        CustomActions = new List<CustomAction>();
    }

    protected override List<CustomAction> GetCustomActionsForItem(string itemId)
    {
        return CustomActions;
    }
}
The constructor of the CustomAction class takes four parameters: public MainPage() InitializeComponent(); Pivot.CustomActions.Add( new CustomAction("Add to cart", new Uri(""), "Add to cart","Add")); new CustomAction("Details", new Uri(""), "Details","Details")); Pivot.LoadCollection("", string.Empty); When the user clicks on the actions the CustomActionClicked event is fired. The event gets an instance of the ItemActionEventArgs class as event arguments. This class contains two properties. ItemId contains the the ID of the item the action was placed over. You can get the actual item by calling the GetItem method on the PivotViewer control with this ID. The second property of the ItemActionEventArgs class is CustomActionId. This is the ID of the custom action that was clicked. In the example below one of two a message boxes is shown depending on the action clicked. It won’t be hard to extend this further to your needs. private void CustomActionClicked(object sender, ItemActionEventArgs e) PivotItem item = Pivot.GetItem(e.ItemId); if (e.CustomActionId == "Add") MessageBox.Show(string.Format("Add {0} to Cart", item.Name)); if (e.CustomActionId == "Details") MessageBox.Show(string.Format("Show details of {0}", item.Name)); } CustomActions provide a great way to add some extra functionality to the PivotViewer control. Too bad they aren’t fully implemented (yet?). The simple method shown in this tutorial uses code behind to add the actions. It would be great to be able to add these actions in XAML one day. The next tutorial will be about using the Silverlight PivotViewer control in an MVVM application. If you want to be the first to know when it is available, you can subscribe to the RSS-Feed or follow me on twitter.
http://geekswithblogs.net/tkokke/archive/2010/08/30/pivotviewer-ndash-custom-actions.aspx
I have wanted to try Python, and now I've had enough time to try it out. My goal was to learn why it is so popular and widely used. I started out with Django, but soon found out that it was too much. I needed to learn the basics of Python first. I found Dive Into Python, which is a tutorial for Python 2.x. There is already Python 3.x, but the majority of programs and libraries work only with 2.x.

Python Programming Language

Python is an interpreted, high-level and very readable programming language. It supports object-oriented, functional and imperative programming. It does not include curly brackets to indicate scope. Python uses indentation to separate code blocks. The language rejects the Perl philosophy "there is more than one way to do it" in favor of "there should be one, and preferably only one, obvious way to do it". Python is a dynamically typed and strongly typed programming language.

Python implementations come with an interactive REPL, which is an excellent way to experiment with the language's features. Just invoke a Python executable without any parameters and the REPL will fire up by itself. If you want more than instant executions, you should try out some IDE. I used a JetBrains PyCharm 30-day evaluation version.

First Python Program

Python is very simple to learn and write. Here's an example program:

    print "Hello"

That's it. You can execute that in the REPL or write the snippet to a file that has the filename extension .py. Run it by executing python hello.py or run it inside PyCharm.

Defining a Function

Defining a single function without any class definitions is a no-brainer:

    def multiplyByTwoAndPrint(number):
        """ Multiply given number by two and print the result """
        print number * 2

    print multiplyByTwoAndPrint(10)
    print multiplyByTwoAndPrint.__doc__

Every function starts with the keyword def followed by the name of the function and its argument(s) in parentheses. Multiple arguments are separated with commas. Function definitions end with a colon. In Python you don't specify a return type.
Every function in Python returns a value, and if there's no return keyword defined, None is returned. None is Python's null. Remember to indent your code correctly! Functions can define comments that begin and end with """. It's a sort of javadoc, but you can access it at run-time using the built-in __doc__ attribute. The example above will print:

    20
    None
     Multiply given number by two and print the result

The first line is printed by our newly defined function. The second line will print None because we did not return anything; Python returns None for us. The third line is the docstring of the function.

Conclusion

Python is a very easy language to start programming with. It holds very powerful ways to manipulate data, which are essential for a good programming language. This was the first part of my Python tutorial. In the next part I will introduce dictionaries, lists and ways to manipulate them.
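One footnote for anyone following along on a newer interpreter: the examples above use Python 2.x syntax. As a rough sketch (names here are my own, illustrative ones), the same function in Python 3, where print is a function, looks like this:

```python
# Python 3 version of the function example above; the docstring is
# still reachable through the built-in __doc__ attribute.
def multiply_by_two_and_print(number):
    """Multiply the given number by two and print the result"""
    print(number * 2)

result = multiply_by_two_and_print(10)   # prints 20
print(result)                            # no return keyword, so this is None
print(multiply_by_two_and_print.__doc__)
```

The behavior is identical: a function without a return statement hands back None in Python 3 just as it does in 2.x.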
https://dzone.com/articles/python-tutorial-basics
I have Exchange 2K Enterprise Edition on Win2K SP3 in a front-end/back-end configuration. I have installed Exchange 2K into an existing 5.5 site. All went well except for the front-end server not directing the traffic to the back-end server. I have seen some postings about disjointed DNS namespaces causing issues. I do have a disjointed DNS namespace for Active Directory (corp.xxx.com) versus the SMTP domain (xxx.com).

The issue is that OWA does not seem to work. If I connect to the back-end server, OWA logs me in fine. It seems the front-end does not know about the back end. Any troubleshooting tips or ideas, please?
http://www.verycomputer.com/421_a69f58cdcf18095f_1.htm
Now that you can install and run Python in a variety of ways, it's time to get a real handle on the language itself. This section goes over Python's types, carries on from last chapter's section on math and loops, and also introduces a few new concepts.

As you have seen from the previous Hello World! examples, Python doesn't need a lot of punctuation. In particular, Python doesn't use the semicolon (;) to mark the end of a line. Unlike C, Perl, or a number of other languages, the end of a line is actually marked with a newline, so the following is a complete command in Python:

    print "hello"

Code blocks are indicated in Python by indentation following a statement ending in a colon, for example:

    if name == "this is true":
        # run this block of code
    else:
        # run this block of code

Getting used to white space that actually means something is probably the most difficult hurdle to get over when switching to Python from another language.

CAUTION: UNIX, Windows, and the Macintosh Operating System all have different conventions for how to terminate lines in text files. This is an unfortunate feature of multi-platform programming, and since Python uses terminated lines as syntax, your Python scripts written in text editors may not work on different platforms. The Macintosh version of Python recently fixed this problem; it now checks line endings when it opens a file and adjusts them on a per-file basis. It may be possible to find or write a filter that substitutes end-of-line characters for different platforms. Compiling scripts to byte-code before platform-hopping is another possible workaround.

Python includes a handful of built-in data types (see Table 3.1); the most commonly used of these data types are numbers, strings, lists, dictionaries, and tuples. Numbers are fairly obvious, although there are several different number types, depending upon the complexity and length of the number that needs to be stored. Strings are simply rows of letters.
Lists are groups that are usually comprised of numbers or letters. Dictionaries and tuples are advanced variable types that are similar to lists and comparable to arrays in other languages. These types all have built-in operations, and some have built-in modules or methods for handling them.

Python has several basic numeric types; they are listed in Table 3.2. Integers are the most commonly used math construct and are comparable to C's long integer. Long integers are size-unlimited integers and are marked by an ending L. Floating points are integers that need a floating decimal point and are equivalent to C's double type. Octal numbers always start with a 0, and hexadecimal integers always begin with a 0x in Python.

Numbers can be assigned just like you would in a high school algebra math problem:

    X = 5

The basic math operators (+, -, *, /, **, %, and so on), which were listed in Chapter 2, can be used in the standard mathematical sense:

    # Make x equal to 2 times 6
    x = (2 * 6)
    # Make y equal to 2 to the power of 6
    y = (2 ** 6)
    print y

Python always rounds down when working with integers, so if you divide 1 by 20 you will always get 0 unless you use floating point values. To change over to floating point math, simply place the decimal point in one of the equation's numbers somewhere, like so:

    # This will equal 0
    x = (1/20)
    print x
    # This will get you a floating point
    y = (1.0/20)
    print y

In addition to your basic math operators, comparison operators (>, <, !=, ==, >=, and <=) and logical operators (and, or, not) can be used with basic math in Python. These operators can also compare strings and lists.

NOTE: The truncation, or "rounding down," during integer division is one of the more common stumbling blocks for new users to Python.

Python comes with a built-in math module that performs most of the complex constant functions. The more common constants are listed in Table 3.3. Python also has a built-in random module just for dealing with random numbers.
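The tables themselves aren't reproduced here, but a quick sketch of both modules follows. (Exactly which entries Tables 3.3 and 3.4 list is an assumption; these are typical members, and the sketch uses Python 3 print syntax while the chapter's own examples use Python 2.)

```python
import math
import random

# A couple of the constants and helpers the math module provides.
print(math.pi)        # 3.141592653589793
print(math.sqrt(16))  # 4.0

# The random module; seeding makes the "random" sequence repeatable,
# which is handy when testing.
random.seed(42)
n = random.randint(1, 6)  # a die roll, between 1 and 6 inclusive
print(n)
```

Seeding is optional in real programs; without it, the module seeds itself from the system clock or OS entropy.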
A few of the more common random functions are listed in Table 3.4.

You designate strings in Python by placing them within quotes (both single and double quotes are allowed):

    print "hello"
    'hello'

Strings store, obviously, strings of characters. Occasionally you will want to print a special character, like a quote, and Python accepts the traditional backslash as an escape character. The following line:

    print "\"hello\""

prints the word hello in quotes. A few other uses of the escape sequence are illustrated in Table 3.5.

Like with variables, you can manipulate strings with operators. For instance, you can concatenate strings with the + operator:

    # This will print mykonos all together
    print 'my' + 'konos'

Anything you enter with print automatically has a newline, \n, appended to it. If you don't want a newline appended, then simply add a comma to the end of the line with your print statement (this only works in non-interactive mode):

    # These three print statements will all print on one line
    print "I just want to fly",
    print "like a fly",
    print "in the sky"

Lists were introduced in Chapter 1. In Python, lists are simply groups that can be referenced in order by number. You set up a list within brackets [] initially. Integer-indexed arrays start at 0. The following code snippet creates a list with two entries, entry 0 being "Ford", and entry 1 being "Chrysler", and then prints entry 0:

    cars = ["Ford", "Chrysler"]
    print cars[0]

In Python, there are a number of intrinsic functions, or methods, that allow the user to perform operations on the object for which they are defined. Common list methods are listed in Table 3.6. So, for instance, you can add to the list simply by using the append method:

    cars.append("Toyota")
    print cars

Or you can slice up lists by using a colon. Say you want to print just the first through the second item from the cars list.
Just do the following:

    print cars[0:2]

Lists can contain any number of other variables, even strings and numbers mixed in the same list, and they can even contain tuples or other nested lists. Once created, lists can be accessed by name, and any entry in a list can be accessed with its index number. You can also reference the last item in a list by using -1 as its reference number.

    # This line prints the last entry in the cars list:
    print cars[-1]

You can also use the basic operators explained in Chapter 2 to perform logic on lists. Say you need to print the cars list twice. Just do this:

    print cars + cars

Lists can also be compared. In a case like this:

    [1, 2, 3, 4] > [1, 2, 3, 5]

the first values of each list are compared. If they are equal, the next two values are compared. If those two are equal, the next values are compared. This continues until the value in one is not equal to the value in the other; if all of the items in each list are equal, then the lists are equal.

CAUTION: Characters in a string act just like elements in a list, and can be manipulated in many of the same ways, but you cannot replace individual elements in a Python string like you can with a list.

If you need to iterate over a sequence of numbers, the built-in function range() is extremely useful. It generates lists containing arithmetic progressions, for instance:

    # This snippet assigns the numbers 0 through 9 to list1 and then prints them
    list1 = range(10)
    print list1

It is possible to let range start at another number, or to specify a different increment:

    # The following line assigns the numbers 5-9 to list2
    list2 = range(5, 10)
    print list2
    # The following line creates a list that jumps by 5s from 0 through 45 and assigns it to list3
    list3 = range(0, 50, 5)
    print list3
    # The following line does the same only in negative numbers
    list4 = range(0, -50, -5)
    print list4

Python also has a structure called a tuple.
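Before moving on to tuples, the range() results above can be checked quickly. One caveat I'm assuming readers may hit: in Python 3, range() returns a lazy range object rather than a list, so wrapping it in list() reproduces the Python 2 output shown in this chapter.

```python
# Python 3 sketch of the range() examples above; in Python 2 the
# list() wrapper is unnecessary because range() returns a list directly.
list1 = list(range(10))
list2 = list(range(5, 10))
list3 = list(range(0, 50, 5))
list4 = list(range(0, -50, -5))

print(list1)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(list2)  # [5, 6, 7, 8, 9]
print(list3)  # 0 through 45 in steps of 5
print(list4)  # 0 through -45 in steps of -5
```

Note that the stop value (50 or -50) is never included; the progression halts just before reaching it.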
Tuples are similar to lists and are treated similarly, except that they are designated by parentheses instead of brackets:

    tuple1 = (a, b, c)

You don't actually need parentheses to create a tuple, but it is considered thoughtful to include them:

    tuple1 = a, b, c

You can create an empty tuple by not including anything in parentheses:

    tuple1 = ()

There is also a version of the tuple, called a singleton, that only has one value:

    singleton1 = a,

While lists normally hold sequences of similar data, tuples (by convention) are normally used to hold sequences of information that aren't necessarily similar. For example, while a list may be used to hold a series of numbers, a tuple would hold all of the data on a particular student (name, address, phone number, student ID, and so on) all in one sequence.

So what makes tuples so special and different? Well, for one thing, tuples can be nested in one another:

    tuple1 = (1, 2, 3)
    tuple2 = (4, 5, 6)
    tuple3 = tuple1, tuple2
    print tuple3

When you enter the last line and print out tuple3, the output is: ((1, 2, 3), (4, 5, 6)).

TIP: For convenience, there is a tuple() function that converts any old list into a tuple. You can also perform the opposite operation, using the list() function to convert a tuple to a list.

You can see how Python continues to bracket and organize the tuples together. Nesting tuples together in this way, also called packing, can provide a substitute for things like two-dimensional arrays in C.

There is one more interesting feature, called multiple assignment, in tuples:

    X, Y = 0, 1

Python assigns X and Y different values, but on the same line of code. Multiple assignments can be very useful and quite a timesaver.

Python has a third structure that is also similar to a list; these are called dictionaries and are indexed by assigned keys instead of automatic numeric indexes.
Often called associative arrays or hashes in other languages, dictionaries are created in Python in much the same way as lists, except that they are used to create indexes that can be referenced by corresponding keys. An example of this might be a phone directory, where each telephone number (value) can be referenced by a person's name (key). Dictionaries are designated with curly braces instead of brackets. The keys used to index the items within a dictionary are usually tuples, so you will see them put together often.

You can create an empty dictionary in the same way you create empty tuples, except that you replace the parentheses with curly braces, like so:

    dictionary1 = {}

You assign keys and values into a dictionary using colons and commas, like so:

    key : value, key : value, key : value

So for instance, in the phone number directory example:

    directory = {"Joe" : 5551212, "Leslie" : 5552316, "Brenda" : 5559899}

Then you can access specific indexes by placing the key into brackets. If I wanted to reference Brenda's phone number later on, the following snippet would do the job and give me 5559899:

    directory["Brenda"]

If I had mistyped the number, I could change it to a new value like this:

    directory["Brenda"] = 5558872

Dictionaries have a number of standard methods associated with them; these are listed in Table 3.7.

Identifiers are used in Python to name variables, methods, functions, or modules. Identifiers must start with a non-numeric character, and they are case sensitive, but they can contain letters, numbers, and underscores (_). There are also a handful of words Python reserves for other commands. These are listed below:

    and       elif      else      except    exec      finally   for
    from      global    if        import    in        is        lambda
    not       or        assert    pass      break     print     class
    raise     continue  return    def       try       del       while

As a convention (but not necessarily a rule), identifiers that begin with two underscores (__) have special meanings or are used as built-in symbols.
For instance, the __init__ identifier is designated for startup commands.

Python's variables are loosely typed, and you can assign any type of data to a single variable. So, you can assign the variable x a numeric value, and then turn around later in the same program and assign it a string:

    x = 111
    print x
    x = "Mythmaker"
    print x

NOTE: Not realizing that Python's variable names are case-sensitive seems to be one of the most common mistakes new users to the language suffer from.

The very common if, elif, and else statements showed up in Chapter 2. These are used in Python to control program flow and make decisions:

    if x == 1:
        print "odd"
    elif x == 2:
        print "even"
    else:
        print "Unknown"

if can also be used with Boolean expressions and comparison operators to control which blocks of code execute. Unlike with most other languages, you'll see that parentheses aren't commonly used to separate blocks in Python, but colons, tabs, and newlines are.

    if 1 > 2:
        print "One is greater than two"
    else:
        print "One is not greater than two"

You saw how Python's for loop is used in Chapter 2. for is fairly versatile, and works with lists, tuples, and dictionaries:

    for x in cars:
        print x

The following example uses for to loop through the numbers 0-9 and then print them:

    for x in range(0, 10):
        print x

This same example can be rewritten with a while loop:

    x = 0
    while x < 10:
        print str(x)
        x += 1

The else clause will not fire if the loop is exited via a break statement.

A number of convenient shortcuts exist for use with Python for loops; you'll get used to using them after a while.
For instance, Python will run through each item in a string or list and assign it to a variable with very little necessary syntax:

    for X in "Hello":
        # In two lines you can print out each item of a string
        print X

You can use a few borrowed C statements in for and while loops in order to control iterations, including the break statement, which breaks out of the current for or while loop, and the continue statement, which jumps to the next iteration of a loop. You can also add to the loop an else clause that will execute after the loop is finished (in the case of for loops) or when the while condition becomes false (in the case of while loops).

    x = 0
    while x <= 10:
        if x == 22:
            # this breaks out of this while loop
            break
        print str(x)
        x += 1
        if x <= 11:
            # this jumps to the next loop iteration
            continue
    else:
        # This happens when x <= 10 becomes false
        print "Done"

CAUTION: It's a common mistake, when first playing with loops, to create a never-ending loop that locks out any program control. For instance, the following code will never encounter a condition to exit and will therefore execute forever:

    while 1 == 1:
        print "Endless loop."

Python is based on modules. What this means is that when a Python source file needs a function that is in another source file, it can simply import the function. This leads to a style of development wherein useful functions are gathered together and grouped in files (called modules) and then imported and used as needed.

For instance, let's say the source file MyFile.py has a useful function called Useful1. If you want to use the Useful1 function in another script, you just use an import command and then call the function, like so:

    import MyFile
    MyFile.Useful1()

For instance, create a file called TempModule.py with the following four lines:

    def one(a):
        print "Hello"
    def two(c):
        print "World"

This file defines two functions: the first function prints "Hello" and the second one prints "World".
To use the two functions, import the module into another program by using the import command, and then simply call them, like so:

    import TempModule
    TempModule.one(1)
    TempModule.two(1)

The (1) is included here because each function must take in one argument. Note that the import statement uses the module name without the .py extension.

You can also use dir() to print out the functions of an imported module. These will include whatever has been added and also a few built-in ones (namely __doc__, __file__, __name__, and __builtins__).

Module-based programming becomes particularly useful in game programming. Let's say you like the Useful1 function, but it really hinders performance when it runs in a game because it makes a lot of intense graphical calls or does a lot of complex math. You can fix Useful1 by simply rewriting the necessary functions in MyFile.py as C++ code (or another language like assembly) and then registering the functions with the same module name. The original Python script doesn't even have to change; it just now calls the new, updated, faster C++ code. Modules make it possible to prototype the entire game in Python first and then recode bits and pieces in other, more specialized programming languages.

Python has a large selection of modules built into the default distribution, and a few of the commonly used ones are listed in Table 3.8. Python ships with a number of great, well-documented libraries. Some of these libraries are providers of Python's much-celebrated flexibility. The library list is constantly growing, so you may want to check out the Python library reference before embarking on any major projects.
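The TempModule walkthrough above can be reproduced end to end. The sketch below (Python 3 syntax; in practice you would simply save TempModule.py next to your script rather than generating it in a temporary directory, which is done here only so the example is self-contained) writes the module, imports it, calls both functions, and inspects it with dir():

```python
import importlib
import os
import sys
import tempfile

# Create TempModule.py on the fly, mirroring the chapter's four-line module.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "TempModule.py"), "w") as f:
    f.write("def one(a):\n    print('Hello')\n\ndef two(c):\n    print('World')\n")

# Make the directory importable, then import by module name (no .py suffix).
sys.path.insert(0, workdir)
TempModule = importlib.import_module("TempModule")

TempModule.one(1)  # prints Hello
TempModule.two(1)  # prints World

# dir() shows the user-defined names alongside the built-in dunder entries.
print([name for name in dir(TempModule) if not name.startswith("__")])
```

Swapping the module's body for a compiled extension later would leave the importing script untouched, which is the prototyping workflow the chapter describes.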
https://flylib.com/books/en/1.77.1.37/1/
Hi,

I've got a problem implementing an ADT in my program. Below I posted my PQ.c and PQ.h, which form the ADT (not a full ADT; you'll see why in my code), and office.c + office.h, which is the program using the functions implemented in PQ.

This is the code (not the full code, but it will give you the idea of my problem):

PQ.h
Code:
typedef struct workers PQItem;
//typedef WRecord PQItem;

struct pqueue {
    int size;
    PQItem *item;
};
typedef struct pqueue PQ;

PQ *initPQ( void );
void swapArray( PQItem *arr1, PQItem *arr2 );

PQ.c
Code:
#include <stdio.h>
#include <stdlib.h>
#include "PQ.h"

PQ *initPQ( void )
{
    PQ *pq;

    pq = malloc( sizeof(PQ) );
    if( pq == NULL ) {
        fprintf( stderr, "ERROR: Memory allocation for priority queue failed;"
                 "program terminated.\n" );
        exit( EXIT_FAILURE );
    }
    return pq;
}

void swapArray( PQItem *arr1, PQItem *arr2 )
{
    PQItem temp;

    temp = *arr1;
    *arr1 = *arr2;
    *arr2 = temp;
    return;
}

office.h
Code:
#include "PQ.h"

#define NAMESIZE 100
#define BASE 100

/* Structure Template */
typedef int Time;

struct workers {
    char *name;
    Time starttime;
    Time stoptime;
};
typedef struct workers WRecord;
//typedef struct workers PQItem;

/* Functions Prototype */
void usage( char *progname );
FILE *open_file( char *progname, char *fname, const char *mode );
char *alloc_string_memory( int len );
char *get_name( FILE *fp );
WRecord *read_worker( FILE *fp, int workers_total );
Time add_time( Time time1, Time time2 );

office.c
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "office.h"

int main (argc....)
{
    /* main program here, not related to the problem */
}

/* functions here (from the function prototypes declared in office.h), which are not related either, IMO */

My problem is PQItem, which is the same type as struct workers, but I don't know how to link the files so that PQ recognizes struct workers (or PQItem or WRecord). In my program you'll see two appearances of

Code:
typedef struct workers PQItem;

one with // and one without... that's one of the combinations I've tried to make the linking work. But so far what I get is these errors:
- redefinition error
- or missing semicolon at the end of the struct
- or dereferencing pointer to INCOMPLETE type (this is in the swapArray function in PQ.c, where it doesn't recognize PQItem, and I believe this is the problem: HOW do I make PQItem recognized??)

I hope someone can help me or give me any suggestion. I don't need a full ADT... just a semi ADT.

Thanks,
Ferdinand
https://cboard.cprogramming.com/c-programming/27315-half-adt-nested-struct-problem-printable-thread.html
Applies to: C166 C Compiler

Is there a way in C to use the PRIOR assembly instruction to normalize a value?

Yes. The _prior_ intrinsic library routine uses the PRIOR instruction to count the number of shifts required to normalize a number. The return value is a number from 0 to 15 which indicates how many times the number must be shifted left before the MSB is set. For example:

    #include <intrins.h>

    void testprior (int val)
    {
      volatile int temp;

      temp = _prior_ (val);
    }

Article last edited on: 2005-10-20 07:42:31
http://infocenter.arm.com/help/topic/com.arm.doc.faqs/ka9969.html
XmlWriter.WriteAttributeString Method (String, String, String)

When overridden in a derived class, writes an attribute with the specified local name, namespace URI, and value.

Namespace: System.Xml
Assembly: System.Xml (in System.Xml.dll)

Parameters

localName
    Type: System.String
    The local name of the attribute.

ns
    Type: System.String
    The namespace URI to associate with the attribute.

value
    Type: System.String
    The value of the attribute.

This method writes out the attribute with a user-defined namespace prefix and associates it with the given namespace. If localName is "xmlns" then this method also treats this as a namespace declaration. In this case, the ns argument can be Nothing.

For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
https://msdn.microsoft.com/en-us/library/xkd34zdt(v=vs.95)?cs-save-lang=1&cs-lang=vb
See .

Hello Justin - I've been digging into Vibrate and Battery, since they seemed similar to adding in lights, though I've also been looking at radio and audio. The two paths seem to be:

Add Lights into the HAL. This is done by adding functionality into hal/gonk and then calling the hal_impl version from Hal.cpp. Do I need to stub out the same functions in hal/linux, windows, fallback, android, just so those will compile? Also, do I need to do the loop back in hal/sandbox? From there, create a manager in dom/lights (similar to the battery manager already there). Then add the new interfaces into dom/base/Navigator.cpp and idl.

The alternative is to add in to dom/system/b2g (or maybe create a new dom/lights/b2g) and create an idl there and either directly handle the lights or create a proxy or daemon to control the lights (though I think that is overkill in this case).

The latter seems more self contained and probably cleaner. However, it is a bit more opaque as to how it all hooks up and becomes available to the javascript. Probably need to read more of the XPCOM interface. Thoughts?

-Jim Straus

Let's do the following:
- add interface to Hal.h
- add a Lights.cpp gonk impl to hal/gonk
- add Lights.cpp fallback stubs to hal/fallback. Use this everywhere !gonk.
- add a forwarding impl to hal/sandbox/SandboxHal.cpp

Trust me, that will be much simpler than implementing this in dom/system.

Does comment 2 answer all your questions, Jim?

I don't think we want DOM apis for most of this. The lights will be driven from inside Gecko. If wifi is on, we turn on the wifi light. If a notification is pending, we turn on the notification light. For now just expose Gecko-internal privileged APIs.

We do want DOM APIs for the backlight, keyboard light, and maybe softkey backlight, though, right? (We could hardcode that the softkey backlight is on whenever the screen is on -- this is what my Nexus S does, and I think it's virtuous.
But it's a policy decision, so I think this is better left to JS code.)

Hi Jim, how is this work coming along? We need these changes to better support the backlight of the "maguro" device.

Created attachment 587822 [details] [diff] [review]
Patch to Gecko

This adds GetLight and SetLight, modifies GetBrightness and SetBrightness to use the new interface, and should expose the functions to js.

Created attachment 587824 [details] [diff] [review]
Patch to glue/gonk/device/samsung/c1-common

Extends liblights to support getting the state of a light.

Created attachment 587825 [details] [diff] [review]
Patch to glue/gonk/hardware/libhardware

Extends liblights to support getting the state of a light.

Added patches. Note that this extends the liblights interface to support reading the state of the lights. On the Samsung, the button lights can't be read, but the backlight can. Can someone please review?

Jim, patches against all the non-gecko stuff are best submitted as GitHub pull requests. (Also, you don't get to set r+ yourself ;). You need to ask somebody to review your patch, by setting review to '?' and entering their email address in the corresponding field.)
This may sound a little odd, but we typically only test external interfaces; tests of internal interfaces like hal tend to not be worth their value. Comment on attachment 587825 [details] [diff] [review] Patch to glue/gonk/hardware/libhardware We can't extend the android hal API, because libhardware is typically provided as a proprietary blob. Unfortunately we need to stay synced with upstream on this :(. The "r-" here means, "this patch isn't going in the right direction, so let's take a different approach." Comment on attachment 587824 [details] [diff] [review] Patch to glue/gonk/device/samsung/c1-common Is this in support of the added "get_light" hal API? If so, the same comment applies here. I'm clearing the request flag here to mean, "there are questions I need answered". Do we have to strictly support the Android libraries (I assume this is what you mean by staying in synch with upstream) or can we ask manufacturers to have extended versions of the libraries (that are fully backward compatible)? Right now, I don't believe there is an abstract way to retrieve the state of the lights, which means that everyone has to build custom functions to control things like the backlight independent of libhardware. We can ask, but for now we can't rely on it. For example, the patch here would cause us to crash on the maguro, as things stand currently. We should assume we can't change android-hal/libhardware.so at all until proven otherwise. Yes, we'll need to track the light state in gecko. C'est la guerre ;). Comment on attachment 587822 [details] [diff] [review] Patch to Gecko Hi Jim, It's pretty hard for me to review concatenated patches like this. Can you either flatten or separate them into logically distinct pieces, posted separately? Thanks! 
Created attachment 589365 [details] [diff] [review] Patch to Gecko to allow light controls Notes for review: SetLight takes parameters of which light is being set the mode for setting, the flash mode for setting the flashOMS and flashOffMS for the user flash mode the color (32-bit values of ARGB, converted to a brightness if that's all that's supported) Hal.cpp - Added in calls to SetLight and GetLight Hal.h - Defines the calls to SetLight and Get Light, the various lights that can be set, the light modes and flash modes. FallbackHal.cpp Default implementations of SetLight and GetLight. SetLight does nothing and GetLight returned full on. GonkHal.cpp Does the actual work, calling liblights. I removed the references to get_light and am now caching the last value set in SetLight, returning that. If/when we extend liblights.h, we can convert back. There are conditional HAVEGETLIGHT in the code to allow for easy conversion. Note that SetScreenBrightness uses SetLight and GetScreenBrightness uses GetLight. Also note that the old GetScreenBrightness actually read the screen brightness, but not in a generic way. PHal.ipdl Defines a structure and functions for GetLight and SetLight across process boundaries. nsIHal.idl Constants and functions for access from JS Chris. Hopefully this will be easier to review. No patches to gonk. I'll create a new bug for that. Comment on attachment 589365 [details] [diff] [review] Patch to Gecko to allow light controls Hi Jim, Looks very good, thanks! Some comments are below. >diff --git a/dom/system/b2g/nsIHal.idl b/dom/system/b2g/nsIHal.idl Since we don't have a consumer of this interface yet, let's pull the nsIHal part of this patch out and save it for if we do. Filing that as a separate bug, like you did for bug 718897, would be great. (We may not ever need to use this interface directly from JS.) 
>diff --git a/hal/Hal.h b/hal/Hal.h >--- a/hal/Hal.h >+++ b/hal/Hal.h >@@ -41,16 +41,17 @@ > #define mozilla_Hal_h 1 > > #include "mozilla/hal_sandbox/PHal.h" > #include "base/basictypes.h" > #include "mozilla/Types.h" > #include "nsTArray.h" > #include "prlog.h" > #include "mozilla/dom/battery/Types.h" >+#include "nsString.h" I don't believe that this #include is needed. >+enum { >+ HAL_HARDWARE_UNKNOWN = -1, >+ HAL_HARDWARE_FAIL = 0, >+ HAL_HARDWARE_SUCCESS = 1, >+ HAL_LIGHT_ID_BACKLIGHT = 0, >+ HAL_LIGHT_ID_KEYBOARD = 1, >+ HAL_LIGHT_ID_BUTTONS = 2, >+ HAL_LIGHT_ID_BATTERY = 3, >+ HAL_LIGHT_ID_NOTIFICATIONS = 4, >+ HAL_LIGHT_ID_ATTENTION = 5, >+ HAL_LIGHT_ID_BLUETOOTH = 6, >+ HAL_LIGHT_ID_WIFI = 7, >+ HAL_LIGHT_ID_COUNT = 8, >+ HAL_LIGHT_MODE_USER = 0, >+ HAL_LIGHT_MODE_SENSOR = 1, >+ HAL_LIGHT_FLASH_NONE = 0, >+ HAL_LIGHT_FLASH_TIMED = 1, >+ HAL_LIGHT_FLASH_HARDWARE = 2 >+}; >+ Since we're in C++ here, we can make these separate named |enum| types. For example, enum LightType { LIGHT_BACKLIGHT, LIGHT_KEYBOARD, //... }; This will help the C++ compiler catch abuses of the API. >+/** >+ *. >+ */ >+long SetLight(const long& light, const long& mode, const long& flash, const long& flashOnMS, const long& flashOffMS, const long& color); >+long GetLight(const long& light, long *mode, long *flash, long *flashOnMS, long *flashOffMS,long *color); Couple of things here - |long| is guaranteed to be the same size or smaller (Windows 64-bit, sigh) than the ISA word size, so |const long&| doesn't save any stack space. Using plain |long| arguments is fine, or |const long| if you want the C++ compiler to check immutability of the arguments within the function definition. - but, since you've already defined |struct LightConfiguration| for IPC, please use it here for the hal:: API. 
That would allow writing the cleaner API bool SetLightConfig(LightType aWhich, const hal::LightConfiguration& aConfig); bool GetLightConfig(LightType aWhich, hal::LightConfiguration* aConfig); (I wrote |bool| return values here because I'm not sure we need to distinguish between HAL_HARDWARE_UNKNOWN and HAL_HARDWARE_FAIL. We can always change this later.) >diff --git a/hal/gonk/GonkHal.cpp b/hal/gonk/GonkHal.cpp >-const char *screenBrightnessFilename = "/sys/class/leds/lcd-backlight/brightness"; > double > GetScreenBrightness() > { > void > SetScreenBrightness(double brightness) > { \o/, these changes are righteous! >+ >+struct Devices { >+ light_device_t* lights[HAL_LIGHT_ID_COUNT]; >+}; >+ >+static Devices* devices = NULL; >+ Another couple of small nits - the Gecko style for static variables is |static Foo sFoo| - what's the intended usage of the Devices struct? We're not trying to free it on shutdown, and indeed that might be pretty hard. I would recommend either * keep |struct Devices|, and have it use ClearOnShutdown to free the memory (xpcom/base/ClearOnShutdown.h) * or, changing |struct Devices| into static light_device_t sLights[LIGHT_COUNT]; and not bothering with freeing the memory for now. >+light_device_t* get_device(hw_module_t* module, char const* name) A few more nits - |static light_device_t*| - Gecko style for naming functions is LikeThis(). I don't like it personally, but that's the style. >+/** >+ * The state last set for the lights until liblights supports >+ * getting the light state. >+ * >+ * @author jstraus (1/13/2012) We track author information using the " * Contributor(s):" section in the file header, and we track blame using our version control tools. Feel free to add yourself to the " * Contributor(s):" list in this file! :) But, this is annotation isn't necessary. >+static light_state_t StoredLightState[HAL_LIGHT_ID_COUNT]; >+ Nit: naming style is |sStoredLightState|. 
>+long >+SetLight(const long& light, const long& mode, const long& flash, const long& flashOnMS, const long& flashOffMS, const long& color) >+{ >+ light_state_t state; >+ >+ if (!devices) { Please refactor this initialization code into a helper function. >+ int err; >+ hw_module_t* module; >+ >+ devices = (Devices*)malloc(sizeof(Devices)); If you keep |struct Devices|, please make this a call to |new Devices()|, and memset all the pointers to 0 in the constructor. >+ err = hw_get_module(LIGHTS_HARDWARE_MODULE_ID, (hw_module_t const**)&module); >+ if (err == 0) { >+ devices->lights[HAL_LIGHT_ID_BACKLIGHT] >+ = get_device(module, LIGHT_ID_BACKLIGHT); This could be written more compactly with an auxiliary data structure like struct LightTypeName { LightType mType; const char* mName; } kLightIds[] = { LIGHT_BACKLIGHT, LIGHT_ID_KEYBOARD, //... LIGHT_COUNT, nsnull }; and then in this code, a |for| loop over the kLightIds. (In Gecko style, "k" means "constant".) >+ memset(&state, 0, sizeof(light_state_t)); >+ state.color = color; >+ state.flashMode = flash; >+ state.flashOnMS = flashOnMS; >+ state.flashOffMS = flashOffMS; >+ state.brightnessMode = mode; >+ Adding a helper to convert between |LightConfiguration| and |light_state_t| might be useful. >+long >+GetLight(const long& light, long *mode, long *flash, long *flashOnMS, long *flashOffMS, long *color) >+{ >+ *color = state.color; >+ *flash = state.flashMode; >+ *flashOnMS = state.flashOnMS; >+ *flashOffMS = state.flashOffMS; >+ *mode = state.brightnessMode; >+ And similarly here, a helper for converting light_state_t -> LightConfiguration. 
>diff --git a/hal/sandbox/PHal.ipdl b/hal/sandbox/PHal.ipdl >--- a/hal/sandbox/PHal.ipdl >+++ b/hal/sandbox/PHal.ipdl >@@ -43,16 +43,25 @@ include protocol PBrowser; > namespace mozilla { > > namespace hal { > struct BatteryInformation { > double level; > bool charging; > double remainingTime; > }; >+ struct LightInformation { This is just a naming nit, but I think |LightConfiguration| might be clearer here. This is used to get/set the requested parameters of a particular light, not query any varying state. >+ long light; I don't feel particularly strongly about whether the light ID is part of the configuration or not. I could see arguments both ways. I'll leave that up to your judgment. This should use the LightType enum we add to Hal.h >+ long mode; >+ long flash; Similarly, these should use more specific enum types. >+ long flashOnMS; >+ long flashOffMS; Since |long| is an architecture-specific type, and our IPC system works across processes that run with different architecture types (x86 vs. x86-64, currently) I generally discourage use of variable-sized types in IPC decls. I think uint32_t would work just as well here. >+ long color; This definitely needs to be a fixed-size type, uint32_t. >+ long status; Since this is the status of a particular request, not a general status, I don't think it should live in LightConfiguration. You can "return" as many values in IPC response messages as you want, so if this was added here to ensure only one return value, that's not necessary. For example, sync GetLight(LightType light) returns (bool status, LightInformation aLightInfo); is perfectly legal. >+ sync SetLight(long light, long mode, long flash, long flashOnMS, long flashOffMS, long color) returns (long status); >+ sync GetLight(long light) returns (LightInformation aLightInfo); > Let's use LightConfiguration for these, and split out status per above. This was a fair number of comments, but it's really a bunch of small style stuff. 
This patch is close to ready to land. (I cleared the review request to mean, "I would like to see an updated patch with these comments addressed".) Thanks!

Created attachment 590422 [details] [diff] [review]
Patch to Gecko to allow light controls

Notes for review: nsIHal is pulled and in a separate bug for future inclusion. Personally, I think we should expose as much as we can, so I don't want it to get lost. You never know when someone will make creative use of a device. Enums were created for the various constants and used throughout. LightConfiguration (formerly LightInformation) is the interface and is used throughout. I kept the hardware fail vs. hardware unknown. If the light doesn't exist you get hardware unknown. If there is a problem actually controlling a light, you get hardware fail. May not make a difference, but conceivably one could iterate over the lights with GetLight and see which ones exist. Fixed the naming conventions and use of uint32_t. I made the allocation a static. It's small and should never go away once initialized. Thanks for the info on the ipdl being able to return more than one value. Question: Can enums be in the ipdl? I didn't see an example, so they aren't in there now.

Comment on attachment 590422 [details] [diff] [review]
Patch to Gecko to allow light controls

Jim, you have to set the requestee so Chris gets notified of your review request. Add his email in the requestee field next to the ?.

Comment on attachment 590422 [details] [diff] [review]
Patch to Gecko to allow light controls

I posted a couple comments. Chris is the module owner, so I can't actually review this.

Comment on attachment 590422 [details] [diff] [review]
Patch to Gecko to allow light controls

>diff --git a/hal/Hal.h b/hal/Hal.h
>+enum HALStatus {

Call this LightStatus. You didn't convince me that this enum return is useful, because you didn't describe a concrete use within Gecko. The current users of SetLight() don't even check the return value.
But this bug is dragging on too long so let's just get the code landed. I'll leave it up to you whether you think the complication to this interface is worth a potential use in the future. You know my opinion ;). >+enum LightMode { The semantics of this isn't obvious, please document it. >+/** >+ *. >+ */ >+uint32_t SetLight(const hal::LightType& light, const hal::LightConfiguration &aConfig); >+uint32_t GetLight(const hal::LightType& light, hal::LightConfiguration &aConfig); What's the returned value here mean? Docs need to describe that. Is it LightStatus? If so, use the C++ type. Or, save yourself and future users some trouble and use bool ;). >diff --git a/hal/fallback/FallbackHal.cpp b/hal/fallback/FallbackHal.cpp >+#include "nsIHal.h" This doesn't exist anymore. >+uint32_t >+SetLight(const LightType& light, const hal::LightConfiguration& aConfig) >+{ >+ return HAL_HARDWARE_SUCCESS; According to your docs above, this should have returned UNKNOWN. This doesn't help convince me that the return enum is useful ;). >diff --git a/hal/gonk/GonkHal.cpp b/hal/gonk/GonkHal.cpp >+ int status; >+ hal::LightType light = hal::HAL_LIGHT_ID_BACKLIGHT; >+ >+ status = hal::GetLight(light, aConfig); |status| is going to generate a compiler warning because it's a dead variable. Remove it. Still not arguing for an enum return type ... ;) >+ int brightness = aConfig.color() & 0xFF; >+ return brightness / 255.0; Add a note that we assume that the backlight is monochromatic so it doesn't matter which color component we return. This assumption is maintained by SetScreenBrightness(). > void > SetScreenBrightness(double brightness) > // Convert the value in [0, 1] to an int between 0 and 255, then write to a > // string. This comment isn't true anymore. > int val = static_cast<int>(round(brightness * 255)); uint32_t val = int32_t(round(brightness * 255)); >+ int color = (val<<16) + (val<<8) + val; >+ uint32_t. According to lights.h, you have to set the high byte to 0xff. 
We have a nice helper to manage color components somewhere in Gecko, but I don't remember where and it's probably not worth the trouble here. >diff --git a/hal/sandbox/PHal.ipdl b/hal/sandbox/PHal.ipdl >+ struct LightConfiguration { >+ uint32_t light; >+ uint32_t mode; >+ uint32_t flash; You didn't address my previous comment to use the C++ types here. >+ sync SetLight(uint32_t light, LightConfiguration aConfig) returns (uint32_t status); >+ sync GetLight(uint32_t light) returns (LightConfiguration aConfig, uint32_t status); Same here. >diff --git a/hal/sandbox/SandboxHal.cpp b/hal/sandbox/SandboxHal.cpp >+long Wrong return value. >+SetLight(const hal::LightType& light, const hal::LightConfiguration &aConfig) >+{ >+ uint32_t status = -1; Don't hard-code this value. Either switch to bool or stick to the named enum values. Per your documentation above, I think you should use ERROR as the default return here. This is also not helping your case for the enum return ;). >+long Same as above. >+GetLight(const hal::LightType& light, hal::LightConfiguration &aConfig) >+{ >+ uint32_t status = -1; Same as above. I'm a bit concerned about numerous comments that weren't addressed here. I'm going to need to see another version of the patch. Please comment here, or e-mail or ping me on IRC if you have questions. Created attachment 591334 [details] [diff] [review] Patch to Gecko to allow light controls Figured out how to add enums to ipdl, changed the names to fit more the style (starting with leading "e", camel cased). Changed code to make use of it. Fixed up comments for the enumeration. Changed return type to bool, false = failed, true = succeed. HalStatus doesn't exist any more. Comment on attachment 591334 [details] [diff] [review] Patch to Gecko to allow light controls >diff --git a/hal/Hal.h b/hal/Hal.h >+/** >+ * GGET the value of a light returninn a particular color, with a specific flash pattern. Couple of typos here. >+ *. 
You don't need to duplicate this part of the comment from SetLight(). >diff --git a/hal/HalTypes.h b/hal/HalTypes.h >@@ -0,0 +1,108 @@ >+/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ >+/* ***** BEGIN LICENSE BLOCK ***** Please use the new license block /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this file, * You can obtain one at. */ I just switched my emacs macro over to it today. Sorry for not pointing this out before. Please keep the modelines. Looks good! Please just fix up the minor nits above and let's get this landed! :D Created attachment 591621 [details] [diff] [review] Patch to Gecko to allow light controls Fixed typos, reduced comment, updated licenses across files Comment on attachment 591621 [details] [diff] [review] Patch to Gecko to allow light controls Oh sorry ... I meant only update the license header on the new file you added. But, thanks for doing this anyway, needed to be done. :) Also, for future reference, if I mark a patch "r+" that means I don't need to see the changes you make to address my review comments. If you feel like the changes should get another look-over, then by all means request one. But it's not required. Hi Jim, this patch doesn't apply anymore over mozilla-central revision 5b0900b3e71c (hg) user: Kyle Huey <khuey@kylehuey.com> date: Tue Jan 31 11:38:24 2012 -0500 summary: Bug 563318: Switch to MSVC 2010 on trunk. 
r=ted $ hg qpush applying 712378 patching file hal/sandbox/PHal.ipdl Hunk #2 FAILED at 66 1 out of 2 hunks FAILED -- saving rejects to file hal/sandbox/PHal.ipdl.rej patching file hal/sandbox/SandboxHal.cpp Hunk #2 FAILED at 119 Hunk #3 FAILED at 234 2 out of 3 hunks FAILED -- saving rejects to file hal/sandbox/SandboxHal.cpp.rej patch failed, unable to continue (try -v) patch failed, rejects left in working dir errors during apply, please fix and refresh 712378 Please update the patch and I'll push it to tryserver for you. Created attachment 593725 [details] [diff] [review] patch Created attachment 593726 [details] [diff] [review] patch Rebased. We desperately need this patch. Created attachment 593743 [details] [diff] [review] updated Rebased, lots of build-bustage fixes (Jim, need to make sure patches you post build! :) ), and addressed some review comments that were missed. Sorry had to backout since it conflicted with bug 697641's backout, which was causing failures on all native Android tests. (Can land after rebase). Created attachment 593960 [details] [diff] [review] Patch to Gecko to allow light controls Merged with latest m-c trunk Chris, I always do a build before submitting patches. Jim, please rebase attachment 593743 [details] [diff] [review], which contains numerous build fixes and addresses some review comments that were overlooked. Thanks!
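One recurring review point in the thread above, avoiding architecture-dependent types like |long| across the IPC boundary, can be illustrated outside C++ as well. The Python sketch below is purely illustrative (none of these names come from the patch): explicitly sized, explicit-endian wire formats have the same layout on every platform, which is the reason the review asks for uint32_t in PHal.ipdl.

```python
import struct

# Native C 'long' varies by ABI: 4 bytes on Windows/ILP32, 8 on LP64 Linux.
# Explicitly sized, explicit-endian formats are identical everywhere.
def pack_light_config(light, mode, flash, flash_on_ms, flash_off_ms, color):
    """Pack a hypothetical LightConfiguration as six little-endian uint32 fields."""
    return struct.pack("<6I", light, mode, flash, flash_on_ms, flash_off_ms, color)

def unpack_light_config(buf):
    return struct.unpack("<6I", buf)

wire = pack_light_config(0, 0, 1, 250, 750, 0x00FF0000)
assert len(wire) == 24                      # 6 fields * 4 bytes, on every platform
assert unpack_light_config(wire)[3] == 250  # round-trips exactly
```

The same reasoning is why the color field in particular must be a fixed-size type: a 32-bit ARGB value packed by a 64-bit parent process must unpack identically in a 32-bit child.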
https://bugzilla.mozilla.org/show_bug.cgi?id=712378
This page uses content from Wikipedia and is licensed under CC BY-SA. The Rijndael S-box is a square matrix (square array of numbers) used in the Rijndael cipher, which the Advanced Encryption Standard (AES) cryptographic algorithm was based on.[1] The S-box (substitution box) serves as a lookup table. The S-box is generated by determining the multiplicative inverse for a given number in GF(2^8) = GF(2)[x]/(x^8 + x^4 + x^3 + x + 1), Rijndael's finite field. Zero, which has no inverse, is mapped to zero. The multiplicative inverse is then transformed using the following affine transformation:

s_i = x_i XOR x_((i+4) mod 8) XOR x_((i+5) mod 8) XOR x_((i+6) mod 8) XOR x_((i+7) mod 8) XOR c_i

where [x7, ..., x0] is the multiplicative inverse as a vector and c = 01100011. This affine transformation is the sum of multiple rotations of the byte as a vector, where addition is the XOR operation. The matrix multiplication can be calculated by the following algorithm, where <<< denotes a left rotation of the byte:

s = x XOR (x <<< 1) XOR (x <<< 2) XOR (x <<< 3) XOR (x <<< 4)

After the matrix multiplication is done, XOR the value by the decimal number 99 (the hexadecimal number 0x63, the binary number 0b01100011, the bit string 11000110 representing the number in LSb first notation). This will generate the S-box (the 16x16 hexadecimal table is not reproduced in this copy). The column is determined by the least significant nibble, and the row is determined by the most significant nibble. For example, the value 0x9a is converted into 0xb8 by Rijndael's S-box. For C/C++, here is the initialization of the table (the 256 entries are the table values, row by row; only the first few are shown here):

unsigned char sbox[256] = { 0x63, 0x7c, 0x77, 0x7b, /* ... remaining 252 values ... */ };

The inverse S-box is simply the S-box run in reverse. For example, the inverse S-box of 0xb8 is 0x9a. It is calculated by first calculating the inverse affine transformation of the input value, followed by the multiplicative inverse. The inverse affine transformation is as follows:

x_i = s_((i+2) mod 8) XOR s_((i+5) mod 8) XOR s_((i+7) mod 8) XOR d_i

where d = 00000101 (0x05). (The table of Rijndael's inverse S-box is likewise not reproduced in this copy.) For the C/C++ implementation, here is the initialization of the table:

unsigned char inv_sbox[256] = { 0x52, 0x09, 0x6a, 0xd5, /* ... remaining 252 values ... */ };

The Rijndael S-Box was specifically designed to be resistant to linear and differential cryptanalysis.
This was done by minimizing the correlation between linear transformations of input/output bits, and at the same time minimizing the difference propagation probability. The Rijndael S-Box can be edited, which defeats the suspicion of a backdoor built into the cipher that exploits a static S-box. The authors claim that the Rijndael cipher structure should provide enough resistance against differential and linear cryptanalysis if an S-Box with "average" correlation / difference propagation properties is used. An equivalent equation for the affine transformation is

b'_i = b_i XOR b_((i+4) mod 8) XOR b_((i+5) mod 8) XOR b_((i+6) mod 8) XOR b_((i+7) mod 8) XOR c_i

where b is the input, b' is the output, b, b' and c are 8-bit arrays, and c is 01100011.[2] [3]

The following C code calculates the S-box:

#include <stdint.h>
#define ROTL8(x,shift) ((uint8_t) ((x) << (shift)) | ((x) >> (8 - (shift))))

void initialize_aes_sbox(uint8_t sbox[256]) {
	uint8_t p = 1, q = 1;

	/* loop invariant: p * q == 1 in the Galois field */
	do {
		/* multiply p by 3 */
		p = p ^ (p << 1) ^ (p & 0x80 ? 0x1B : 0);

		/* divide q by 3 (equals multiplication by 0xf6) */
		q ^= q << 1;
		q ^= q << 2;
		q ^= q << 4;
		q ^= q & 0x80 ?
		0x09 : 0;

		/* compute the affine transformation */
		uint8_t xformed = q ^ ROTL8(q, 1) ^ ROTL8(q, 2) ^ ROTL8(q, 3) ^ ROTL8(q, 4);

		sbox[p] = xformed ^ 0x63;
	} while (p != 1);

	/* 0 is a special case since it has no inverse */
	sbox[0] = 0x63;
}

The following Python code calculates the S-box:

def ROTL8(x, shift):
    return 0xff & (((x) << (shift)) | ((x) >> (8 - (shift))))

def initialize_aes_sbox():
    sbox = [None] * 256
    p = q = 1
    firstTime = True
    # loop invariant: p * q == 1 in the Galois field
    while p != 1 or firstTime:  # To simulate a do/while loop
        # multiply p by 3
        p = p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)
        p = p & 0xff
        # divide q by 3
        q ^= q << 1
        q ^= q << 2
        q ^= q << 4
        q ^= 0x09 if q & 0x80 else 0
        q = q & 0xff
        # compute the affine transformation
        xformed = q ^ ROTL8(q, 1) ^ ROTL8(q, 2) ^ ROTL8(q, 3) ^ ROTL8(q, 4)
        sbox[p] = xformed ^ 0x63
        firstTime = False
    # 0 is a special case since it has no inverse
    sbox[0] = 0x63
    return sbox
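As a cross-check on the listings above, the Python sketch below (helper names are mine, not from the article) rebuilds the forward table with the article's construction, derives the inverse table by reversing the mapping, and verifies the worked example in the text (0x9a maps to 0xb8, and back).

```python
def rotl8(x, shift):
    return 0xff & ((x << shift) | (x >> (8 - shift)))

def build_sbox():
    # Same construction as the article's initialize_aes_sbox().
    sbox = [0] * 256
    p = q = 1
    first = True
    while p != 1 or first:  # simulate do/while
        first = False
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xff  # p *= 3 in GF(2^8)
        q ^= q << 1; q ^= q << 2; q ^= q << 4                  # q /= 3 in GF(2^8)
        q ^= 0x09 if q & 0x80 else 0
        q &= 0xff
        affine = q ^ rotl8(q, 1) ^ rotl8(q, 2) ^ rotl8(q, 3) ^ rotl8(q, 4)
        sbox[p] = affine ^ 0x63
    sbox[0] = 0x63  # 0 has no multiplicative inverse
    return sbox

sbox = build_sbox()
inv_sbox = [0] * 256
for x, s in enumerate(sbox):
    inv_sbox[s] = x  # the inverse S-box just reverses the mapping

assert sbox[0x9a] == 0xb8    # the article's worked example
assert inv_sbox[0xb8] == 0x9a
assert all(inv_sbox[sbox[x]] == x for x in range(256))
```

Because the loop walks through every nonzero field element exactly once, the resulting table is a permutation of 0..255, which is what makes the inverse table well defined.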
https://readtiger.com/wkp/en/Rijndael_S-box
This is my first article here on CodeProject. In the life of developing software, sometimes we can't make things work properly. Whatever we think is simple may not be simple for us, and whatever we think is difficult may not be difficult for us; it all depends on a person's mental approach and logic for solving. Today I am going to discuss the basic task of removing decimal points from a price, because I spent about 4 hours just to solve this matter. So I thought I would discuss these basics with others who will not want to spend 4 hours solving the same problem. I am working with a company which develops products for stock exchanges and financial institutes, and my job is very critical: to make EOD, or END OF DAY, files. This involves a lot of familiarity with the database design and schema, with relations. Basically, the files are needed by firms at the end of the day for settlement and record keeping of all the database fields which they have used for stock buying or selling, etc. An EOD file is character specific, meaning every field should be checked against its length constraints, and the fields taken from the DB must follow the firm's format, which may differ for different firms or companies. My problem is that I need all numerical values to be converted to a positive integer (i.e. if I find the price "19.99", it is to be converted to "1999"), with leading and trailing zeros. I have solved my problem and want others who get irritated solving this same thing to find a solution that will check a numerical value to see if a decimal point is present, and if it is, then remove it. The solution to the above problem for some developers is just to multiply decimal values by 100 to get these kinds of values converted to a positive integer. If we want this price (19.99 or 56.23) to become this (1999 or 5623), just multiply by 100.
But my problem is to get values as a positive integer with leading and trailing zeros. For example, if I have the values 0.23456 and 504.34678, my end results will be 00000234560 and 00504346780. In my scenario I have a price constraint of 11 characters without the decimal point, and the format should be like nnnnn.nnnnnn (which, without the decimal point, is equal to 11 characters). I have to write data to a file, and for that I used a StreamWriter. First, initialize the StreamWriter:

private StreamWriter sw1;

sw1 = new StreamWriter(AppDomain.CurrentDomain.BaseDirectory + "\\Temp.txt");

I want to create the text file in the project's Debug folder; you can specify any path for creating the text file. I am omitting most of the things like the connection string, DataSet, DataAdapter, etc., to concentrate on the problem of removing the decimal point from the price. Take strings gn, gn1, gn2 and an int i:

string gn, gn1, gn2;
int i;

In my case I have taken the value from the DB, for the field Price:

gn = dr["Price"].ToString(); // you can use a hard-coded string as well; "dr" is a DataRow

i = gn.IndexOf("."); // this line returns the position of the decimal "." in the string gn

As the IndexOf function gives numeric values: if i = -1, there is no decimal within the string; if i = 1, the decimal is after the first character; if i = 2, after the second character; and so on.
if (i > 0)
{
    gn1 = gn.Remove(i, gn.Length - i);
    gn2 = gn.Remove(0, i + 1);
}
else
{
    gn1 = gn;
    gn2 = ""; // no fractional part when there is no decimal point
}

gn1 = gn1.PadLeft(5, '0');
gn2 = gn2.PadRight(6, '0');
string gn3 = string.Concat(gn1, gn2);
sw1.WriteLine(gn3);

I hope this article will help those who get upset in the same situation. Although this article is at beginner level, I found it interesting to post here. I hope you will enjoy it, and please feed back your comments. I am Muhammad Saffi Hussain, working as a Developer & Analyst in Dubai and Pakistan. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

A one-line alternative that appears below the article achieves the same result:

((double) dr["Price"]).ToString("00000.000000").Replace(".", "")
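For readers outside C#, the same split-pad-concatenate logic can be sketched in Python; the function name below is mine, and it mirrors the article's nnnnn.nnnnnn (11-character) constraint.

```python
def price_to_fixed11(price: str) -> str:
    """Remove the decimal point and pad to the nnnnn.nnnnnn (11-char) layout."""
    i = price.find(".")             # like IndexOf: -1 when no decimal point
    if i > 0:
        whole, frac = price[:i], price[i + 1:]
    else:
        whole, frac = price, ""     # no fractional part
    # 5 integer digits (left-padded) + 6 fractional digits (right-padded)
    return whole.rjust(5, "0") + frac.ljust(6, "0")

assert price_to_fixed11("19.99") == "00019990000"
assert price_to_fixed11("504.34678") == "00504346780"
assert price_to_fixed11("0.23456") == "00000234560"
```

Note that, like the original C#, this assumes the input already satisfies the field constraints (at most 5 integer and 6 fractional digits); a production version would validate that before padding.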
https://www.codeproject.com/Articles/14622/To-Remove-Decimal-From-Price
Don't try to be too smart. Be boring, predictable and consistent.

Recently, I had to do some work with PHP/WordPress at my day job and encountered a couple of examples of an API doing the unexpected, which gives us something to learn from.

array_rand

The idea is simple. You pass an array, specify a number of random values to return (optionally, defaults to 1) and you get them back. Since it might return multiple values, it sounds reasonable to expect to get an array back, especially when passing the optional parameter, right? Well, not really. It returns a single key (not an array) if the number of random values requested was 1. It would make sense if the method always returned one key, obviously, but that's not the case. So the developer now has to check if the code requested a single value and handle this case separately. Actually, this method does a bit more. When thinking about returning random things from an array, the most predictable option seems to be to return random values. However, this method returns keys to random values and not the values themselves. According to the docs, the motivation is "[t]his is done so that random keys can be picked from the array as well as random values.". Again, it is all with good intentions and makes sense at first. But it would simplify things if it just returned a list of random values from the array; if you need random keys, you can pass an array that consists of all the keys available.

update_post_meta

This WordPress method updates a meta value (a key-value store associated with a post), given a post id, key and new value. Sounds simple, and it does what it says on the box. The problem comes with the return value: "Returns meta_id if the meta doesn't exist, otherwise returns true on success and false on failure. NOTE: If the meta_value passed to this function is the same as the value that is already in the database, this function returns false.".
So, it returns the same value in the case of 1) a database error, and 2) an update which updated nothing. Sometimes it's definitely useful to know if any actual update was performed, and you might want to inform the user that no values were changed. But that's application specific, and there are many applications where it simply doesn't matter and the API consumer/developer is in a good position to decide if it's required. What's much more important is the ability to properly handle errors. In the case of a database failure, you might want to inform the user, halt the execution, or log it somewhere. However, this implementation makes a straightforward check much more difficult. How could it be solved? A simple true on success (success as in the database operation was successful) and false on error might do the job. To achieve similar functionality, you can return the number of rows updated (0 if none, instead of false) and boolean false on a database error. In PHP, 0 !== false, so the two cases remain distinguishable.

These two examples show how well-intended extra smarts in an API call can just make it more difficult to use. Just something to have in mind when implementing the next function that will be widely used in the wild, often in unexpected use cases.

Thanks for reading this post! If you have ideas and want to share them with me, feel free to contact me at gediminas [dot] rap [at] google's little mail service.
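The fix the post argues for, distinct and type-stable return values, can be sketched in any language. The Python below is a hypothetical stand-in, not WordPress code: it returns the number of rows changed, and signals database errors by raising instead of overloading one false value for two unrelated conditions.

```python
class DatabaseError(Exception):
    pass

_FAKE_DB = {"post:1": {"color": "red"}}  # stand-in for the real meta table

def update_meta(post_id, key, value):
    """Hypothetical, predictable variant of update_post_meta.

    Returns the number of rows changed (0 when the stored value already
    matches), and raises DatabaseError on failure, so a no-op update can
    never be mistaken for an error.
    """
    row = _FAKE_DB.get(f"post:{post_id}")
    if row is None:
        raise DatabaseError(f"no such post: {post_id}")
    if row.get(key) == value:
        return 0  # nothing changed: distinguishable from an error
    row[key] = value
    return 1

assert update_meta(1, "color", "blue") == 1  # value changed
assert update_meta(1, "color", "blue") == 0  # same value: no-op, not an error
```

Callers who don't care about the no-op case can ignore the count; callers who do can check it, and error handling stays a separate, explicit path.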
https://medium.com/@GedRap/dont-try-to-be-too-smart-be-boring-predictable-and-consistent-d63ff2a8e5d1
24 April 2012 04:33 [Source: ICIS news] SINGAPORE (ICIS)--Sinopec has settled its April purified terephthalic acid (PTA) contract at yuan (CNY) 9,000/tonne ($1,426/tonne), which is CNY150/tonne lower than its March contract, a company source said on Tuesday. "Many players are still facing a loss [because] the selling [spot] prices are lower than the costs," he added. The player said the exact cost of contracted cargoes, which excludes freight charges, interest and discounts, is CNY8,730/tonne. The spot prices of PTA in east China were at CNY8,650-8,700/tonne EXW (ex-works) on 23 April, according to Chemease, an ICIS service in China. The spot prices in the domestic market will remain stable in the near term, another market player said. Other major PTA producers in China, including Yisheng Petrochemical, Xiang Lu Petrochemical, Zhejiang Yuan Dong Petrochemical, BP Zhuhai and Oriental Petrochemical, are expected to announce their April PTA contract prices in one or two days, an industry source said. Sinopec settled its March contract at CNY9,150/tonne.
http://www.icis.com/Articles/2012/04/24/9552867/chinas-sinopec-settles-april-pta-contract-at-cny9000tonne.html
Up to [cvs.NetBSD.org] / src / sys / ddb

Request diff between arbitrary revisions

Default branch: MAIN
Current tag: netbsd-1-5-ALPHA2

Revision 1.13 / (download) - annotate - [select for diffs], Tue Jun 6 18:50:56 2000 UTC (20 years, 4 months ago) by sore
Changes since 1.12: +2 -1 lines
Diff to previous 1.12 (colored)

#include <sys/systm.h> for the snprintf() prototype.

This form allows you to request diffs between any two revisions of a file. You may select a symbolic revision name using the selection box or you may type in a numeric name using the type-in text box.
http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/ddb/db_lex.c?sortby=date&only_with_tag=netbsd-1-5-ALPHA2
On Fri, 21 Jul 2000, Kristoffer Lawson wrote:
> Just tested the software out briefly and based on what I've seen of it the
> programme really does look and feel good.
>
> A few initial questions:
>
> - As you use Tcl throughout the system (a good thing indeed!) I wondered
> if it might be possible to build a parser in Tcl instead of
> C? Specifically, is there a Tcl API like the C API for handling the
> project database? (True, building one on top of the C API is not a big
> chore). Specifically I would like support for XOTcl and
> OTcl.

See the documentation for the Tcl interface to the DB. That should at least get you started. As for extending the Tcl support, we have talked about this at length and we think the best approach would be to use the Tcl parser API exposed by regular Tcl at the C layer. We currently use our own parser written for Tcl. To be honest, it would be better to toss out our Tcl parser and use the one from the Tcl core, and since the Tcl core is already part of SN, it is just waiting for someone to come along and use it :)

> OTOH it probably wouldn't be horribly difficult to edit the Tcl parser.
> The idea would be that:
>
> Class Foo -superclass Bar ;# Create Foo class, the superclass is Bar
> Foo instproc ;# Create method in Foo
> Foo proc ;# Create procedure in Foo (not seen in the object instance)
> ;# Basically just means the parser should handle local variables
> ;# as local instead of global (as it does now with anything that
> ;# doesn't occur inside a proc command parameter).
> Foo ob ;# Create object 'ob' of type 'Foo'
>
> The simple case should not be difficult but of course the problem with the
> dynamic nature of XOTcl is that you can add and remove methods, change an
> object's class or superclass at any point during run-time. How have you dealt
> with it when dealing with Tcl namespaces, which are quite dynamic
> themselves?

Yes, this is another weak area in our Tcl parser.
It was written before Tcl namespaces were in Tcl, so it has no namespace support. > - Is the Tcl parser behaviour correct when assuming that any "set" > statement inside curly brackets is actually setting a global variable? I > think by default it shouldn't do anything (ie. not add a variable to the > variable list) and have exceptions to the rule when dealing with while, > for, proc etc. The reason for this is that inside curlies I might have > data that might look like I'm setting a variable but actually I'm not. It > might just be plain data, or maybe I'm sending the code to another > interpreter or whatever. Tcl is very dynamic, so in general you can not assume that any command does anything. You need to make some assumptions otherwise you will get nowhere fast. Set should at least know if it is a local var, a global var, or a class instance or class common (static) var, but, this is not always possible. > This is related to the previous comment because currently XOTcl instance > variables appear as "global variables" which is not correct. While it > naturally would be nice if the environment recognized them as instance > variables I believe it's better not to recognize them at all than to mark > them as globals. You will need to hack it to fix that. > - I seem to be having problems with emacs/Xemacs and the IDE (btw. I think > it's great that you have put in extra effort to get emacs to interact with > the IDE). When looking up for a symbol with M-. I get the following error: > > (1) (error/warning) Error in process filter: (void-variable DisplayableOb) The emacs/xemacs stuff is in there for hackers, so if you have any problems you will need to fix them on your own. When you do, please send us the patches so the next guy will not need to. later Mo DeJong Red Hat Inc
https://sourceware.org/pipermail/sourcenav/2000q3/000053.html
: myInterpreter
  begin 32 word find dup
  if execute else number then
  again ;

And here would be a compiler:

: my-]
  begin 32 word find dup
  if
    dup immediate? if execute else compile then
  else
    compile-number
  then again ;

A word such as [ would be an immediate word, which manipulates the return stack (or in ANSI-like dialects, would change the state of a variable imaginatively called STATE) to break out of the compiler loop, while a word like ] usually is the compiler itself. This preserves Forth's [ and ] semantics. Of course, you'll also need to re-implement : and :NONAME as well, but these are rather trivial.

For a concrete example, here's how to change the compiler so that all words prefixed by the back-tick are POSTPONEd:

\ not tested code; but it would look/feel a lot like this.
create macro-buffer 80 allot

: create-postpone-macro
  S" POSTPONE " macro-buffer 1+ swap move   ( embed the "POSTPONE " part into the buffer )
  count dup -rot macro-buffer 10 + swap move   ( embed the name of the word after POSTPONE )
  9 + macro-buffer c! ;   ( and set the length of the whole string. )

: word-starts-with-`?  dup 1+ c@ [char] ` = ;

: new-]
  state on
  begin
    32 word find dup
    if
      dup 0< if
        execute   ( it was an immediate word )
      else
        compile   ( it wasn't an immediate, so compile it instead )
      then
    else
      word-starts-with-`? if
        create-postpone-macro macro-buffer count evaluate
      else
        number literal
      then
    then
  state @ 0= until ;

: : (:) new-] ;   ( most Forth systems have a word like (:), that implements the core of : without actually invoking the compiler )
: ] new-] ; immediate   ( note this word compiled with the "new" :-compiler! )

And that should be it. Yeah, it's a bit more complex than just creating a reader macro in Lisp, but hey, when the whole thing compiles to maybe 300 bytes or less on a 32-bit system, you could afford some 15 of these things in memory before you even hit the complexity of the reader itself, let alone the interpreter and compiler logic. :-)

(Not sure who changed the ] to [ characters. In Forth, [ is used to enter into interpreter mode from inside a definition. This allows compile-time pre-computed constants with zero run-time overhead, like this: : abc blah [ 2 4 * 5 6 * + ] literal blort ;. Hence, ] is the entry-point to the compiler, NOT [.) --SamuelFalvo?

def python_interpreter(code):
    exec code

def ruby_interpreter code
  eval code
end

How is this any different from implementing a Lisp interpreter using Lisp's eval? --DavidMcLean?
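The outer-interpreter loop sketched at the top of the page (read a word; if it is in the dictionary, execute it, otherwise treat it as a number) is small enough to model in a few lines of Python, in the spirit of the python_interpreter/ruby_interpreter snippets above. This is an illustrative sketch only; the dictionary and names here are ours, not part of the page:

```python
# A minimal Python model of the Forth outer interpreter loop:
# "begin 32 word find dup if execute else number then again".
stack = []

# The dictionary maps word names to executable behaviour ("find ... execute").
words = {
    "+": lambda: stack.append(stack.pop() + stack.pop()),
    "*": lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
}

def interpret(source):
    for word in source.split():      # "32 word": read the next blank-delimited word
        if word in words:
            words[word]()            # found in the dictionary: execute it
        else:
            stack.append(int(word))  # "else number": push it as a number

interpret("2 4 * 5 6 * +")           # the constant expression from the [ ... ] example
print(stack)  # prints [38]
```

Note how the compile-time constant from the `: abc blah [ 2 4 * 5 6 * + ] literal blort ;` example evaluates to 38 here as well; the difference in Forth is merely *when* the loop runs (interpretation time vs. compilation time).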
http://c2.com/cgi-bin/wiki?MetaCircularEvaluator
PHP Dependency Management with Composer

Installing Composer

Composer is bundled as an executable Phar archive, so make sure you have the Phar extension enabled in your php.ini file (uncomment extension=phar.so). I recommend downloading the latest snapshot of the Composer executable directly from the project's website. Alternatively, there is an installer script that you can run. If you're comfortable with the issues surrounding such installers, you can cut and paste the following taken from the Composer website:

curl -s | php

To make Composer globally accessible on your system, move the resulting composer.phar file to a suitable location, like so:

sudo mv composer.phar /usr/local/bin/composer

Using Composer

If you've done any Ruby or Node.js programming then Composer may seem a bit familiar to you. The dependency manager was inspired by Bundler and npm. You first create a composer.json file that lists all of your project's dependencies, and then with a simple command you can fetch or maintain them.

To add a library to your project, create a file named composer.json with content that resembles this example:

{
    "require": {
        "illuminate/foundation": "1.0.*"
    },
    "minimum-stability": "dev"
}

The require key lists the project's dependencies. The dependency in this example is Illuminate (version 4 of the Laravel framework). Of course Illuminate depends on a whole lot of other packages, and Composer will install these too. Following the package name you see the required version number. I've specified that the application can use any minor update in the 1.0 branch. You can also specify specific versions or versions within a given range. You can find more information on package versions on the Composer website.

The minimum-stability key is present because not all of Illuminate's dependencies are stable yet. If omitted, the rule's default value is "stable" and the install would fail.
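For reference, here are a few of the other constraint styles Composer understands besides the wildcard used above. This snippet is not from the original article, and the package names are placeholders:

```json
{
    "require": {
        "vendor/exact": "1.0.2",
        "vendor/range": ">=1.0,<1.1",
        "vendor/wildcard": "1.0.*",
        "vendor/tilde": "~1.0.2"
    }
}
```

The tilde form `~1.0.2` allows any release from 1.0.2 up to, but not including, 1.1 — handy when a library follows semantic versioning and you want patch updates only.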
Now to actually install Illuminate, run the following in your project's directory:

php composer.phar install

Composer creates a folder named vendor and downloads the dependencies into it. As a convenience, Composer also creates a PSR-0 autoloader you can use to pull the libraries into your code; simply require vendor/autoload.php in your code to use them.

<?php
require_once "vendor/autoload.php";

// your code here

Composer stores all of its data on installed libraries in the file composer.lock. Composer tracks which versions of libraries are currently installed and what their VCS URL is. It's like a registry with all information about the local libraries in it. When installing or updating libraries, Composer also updates this file. You can then keep the packages up to date by running composer.phar update.

Packaging Your Own Code

You might be thinking: how does Composer know where to download the code just by the name "illuminate/foundation"? Composer has an official repository named Packagist that it connects to. You can search there to see what libraries are available for management through Composer. You can even create your own packages and submit them to Packagist, making them available to others.

Creating your own libraries is quite simple, since your project can be viewed as a library with its dependencies already listed in composer.json. In fact, the documentation says: "the only difference between your project and libraries is that your project is a package without a name."

In order to package your code for others to use, you need to define some additional keys:

{
    "name": "AlexCogn/Illumination",
    "version" : "1.0.0",
    "require": {
        "illuminate/foundation": "1.0.*"
    },
    "minimum-stability": "dev"
}

If you have your project on GitHub, it is recommended to use your account name in the project's namespace. If you are indeed using GitHub, Packagist can fetch the version numbers from there, so you don't really have to define the version explicitly as I did above.

Now you can publish the VCS link of your project to Packagist. Packagist makes this really easy; either register a new account or log in with GitHub and then click the giant Submit Package button. Provide the repository's URL and Packagist will crawl it.

In Closing

Today you might have been introduced to another great project you didn't know about. Or maybe you've heard about Composer but haven't had time to check it out. In either case, I hope you learned something today: Composer can be very useful in automating the management of your project's dependencies. Make sure you also check out the official Composer documentation for updates and techniques I couldn't discuss in this article. See you next time!
http://www.sitepoint.com/php-dependency-management-with-Composer/
Insert space after a certain character in C++

In this tutorial, we will learn how to insert a space after a certain character in a string in C++.

Before we start writing our program, it is good practice to work out an algorithm that determines how to achieve the objective of the program. The algorithm for this program is as given below:

- Take an input string containing a certain character.
- Take an empty string.
- Use a for loop to access each of its characters. If the character is not that certain character, concatenate it to the empty string; else concatenate it with an additional space.

To implement this, we should iterate in a loop throughout the string length to find that certain character. Then, we add a space after that character to achieve the given objective. For this, we will use the concatenation operator. The sample code illustrates it:

C++ program to insert a space after a certain character in a string

#include <iostream>
using namespace std;

string replace(string str, char c)
{
    string s1 = "";
    for (int i = 0; i < str.length(); i++)
    {
        if (str[i] != c)
            s1 = s1 + str[i];
        else
            s1 = s1 + str[i] + " ";
    }
    return s1;
}

int main()
{
    string s = "Hi:Bye:Hello:Start:End";
    char c = ':';
    cout << "Input string:" << s << endl;
    s = replace(s, c);
    cout << "Updated string:" << s << endl;
    return 0;
}

Output:

Input string:Hi:Bye:Hello:Start:End
Updated string:Hi: Bye: Hello: Start: End

Program explanation:

Consider an input string 's' and a certain character (say ':'). Define a function replace with two arguments, an input string and a character, and with the return type string. Now, take another empty string 's1'. Then, iterate throughout the input string using a for loop, checking each of its characters. If the character is not a ':', just concatenate it with the string 's1'. If the character is a ':', then concatenate it with an additional space, which is the objective of the program.
Call the replace function with the input string and ':' as arguments, store the returned value in the string 's', and display the result on the screen.

I hope this post was helpful and that it cleared your doubts! Thanks for reading. Happy coding!

Recommended Posts:
- Adding comments in C++
- Adding A Character To A String in C++
https://www.codespeedy.com/insert-space-after-a-certain-character-in-cpp/
When you import MARC records from your library system, Accelerated Reader Bookguide looks for matching books. When matching books are found, the software marks them as books that you own. To see the matching results, you can either create a book list as part of the import or generate a results report.

When you import MARC record files, it's helpful to create a list from the results of the import. You can use the list to print book labels. The list can only be created as you import; you cannot create a list of the imported records later.

When you import MARC records from your library circulation system into Accelerated Reader Bookguide, Accelerated Reader Bookguide separates the titles in the import into two groups:

Usually, if a book in your collection does not match a Reading Practice Quiz, you must manually mark that book as one you own. See Marking Books and Quizzes for more information.

You can generate Import Results reports for successful MARC record imports. See Checking Import History and Printing Results for more information.
https://help.renlearn.co.uk/ARBGAU/MARC_Records
Those people who develop the ability to continuously acquire new and better forms of knowledge that they can apply to their work and to their lives will be the movers and shakers in our society for the indefinite future.

Let's start this topic with an example: we have some tasks to do; some tasks are independent of each other, but some tasks depend on other tasks, and those other tasks must be done first. We can model this problem with a graph: the tasks are vertices and the dependencies are edges. Topological sorting is an algorithm that helps us find an ordering of tasks such that for every task $u$ that has to be done before task $v$, $u$ comes before $v$ in the ordering. Topological sorting works on DAGs, that is, graphs with no cycles; if the graph contains a cycle, it's impossible to find a solution. For example, if task $A$ depends on task $B$ and task $B$ depends on task $A$, then it's impossible to determine which task has to be done first; see Chicken or Egg.

Example problem

Here you have a problem where you have to literally find an ordering for tasks: 10305 - Ordering Tasks. Here is a working implementation of topological sorting.
#include <iostream>
#include <cstdio>
#include <algorithm>
#include <vector>

using namespace std;

class Graph {
private:
    vector<vector<int> > G;

public:
    Graph() {}

    Graph(int nodes) {
        G.resize(nodes);
    }

    void addEdge(int u, int v) {
        G[u].push_back(v);
    }

    void dfs(int u, vector<bool> &visited) {
        visited[u] = true;
        for (int v : G[u]) {
            if (!visited[v]) {
                dfs(v, visited);
            }
        }
        printf("%d ", u + 1);
    }

    void sort() {
        vector<bool> visited(G.size(), false);
        for (int u = 0; u < int(G.size()); u++) {
            if (!visited[u]) {
                dfs(u, visited);
            }
        }
    }
};

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    while (n != 0 || m != 0) {
        Graph graph(n);
        for (int i = 0; i < m; i++) {
            int u, v;
            scanf("%d %d", &u, &v);
            graph.addEdge(v - 1, u - 1);
        }
        graph.sort();
        scanf("%d %d", &n, &m);
    }
    return 0;
}

The solution works as follows:

- If task $u$ has to be done before task $v$, create a directed edge from $v$ to $u$.
- Create a table $visited[]$ such that $visited[u]$ is true if node $u$ has been visited.
- Run a Depth-First Search on all nodes that have not been visited yet and print the nodes in reverse order; we can do that by printing node $u$ before leaving the recursive function.

The resulting sequence of nodes is a valid ordering. Notice that it's possible to have multiple valid solutions.

The previous problem is easy because it is the classical example used to explain topological sorting. However, most of the time these problems are not so obvious and we have to work a bit to uncover the underlying problem, for example 1034 - Hit the Light Switches (hint: think about connected components).
https://letmethink.mx/posts/toposort
ScalaCheck is a well-known property-based testing library, based on ideas from Haskell's QuickCheck. It is also a Typelevel project. In this post, I'd like to show some of the underlying mechanisms, stripped down to the bare minimum.

Testing with properties is well-understood in academia and widely used in parts of the industry, namely the parts which embrace functional programming. However, the design space of property-testing libraries is rather large. I think it is high time to talk about the various tradeoffs made in libraries. Here, I'd like to contribute by implementing a ScalaCheck clone from scratch using a very similar design and explaining the design choices along the way. This is not an introduction to property testing. However, it can be read as a guide to implementation ideas. QuickCheck, ScalaCheck and the like are nice examples of functional library design, but their internals are often obscured by technicalities. I hope that by clearing up some of the concepts it will become easier to read their code and perhaps design your own property-testing library.

The basic point of a property testing library is providing an interface looking roughly like this:

class Prop {
  def check(): Unit = ???
}

object Prop {
  def forAll[A](prop: A => Boolean): Prop = ???
}

Now, you can use that in your test code:

Prop.forAll { (x: Int) =>
  x == x
}

This expresses that you have a property which is parameterized on a single integer number. Hence, the library must somehow provide these integer numbers. The original Haskell QuickCheck, ScalaCheck and many other libraries use a random generator for this. This comes with a number of advantages:

But it is also not without problems:

Of course, there are other possible design choices:

For this post, we're assuming that random generation is a given.

Do we want to do this purely or poorly?

Of course, this motto is tongue-in-cheek. Just because something isn't pure doesn't mean that it is poor.
To understand the design space here, let's focus on the smallest building block: a primitive random generator. There are two possible ways to model this.

The mutable way is what Java, Scala and many other languages offer in their libraries:

trait Random {
  def nextInt(min: Int, max: Int): Int
  def nextFloat(): Float
  def nextItem[A](pool: List[A]): A
}

By looking at the types alone, we can already see that two subsequent calls of nextInt will produce different results; the interface is thus impure.

The pure way is to make the internal state (also known as "seed" in the context of random generators) explicit:

trait Seed {
  def nextInt(min: Int, max: Int): (Int, Seed)
  def nextFloat: (Float, Seed)
  def nextItem[A](pool: List[A]): (A, Seed)
}

object Seed {
  def init(): Seed = ???
}

Because this is difficult to actually use (don't mix up the Seed instances and use them twice!), one would wrap this into a state monad:

class Random[A](private val op: Seed => (A, Seed)) { self =>
  def run(): A =
    op(Seed.init())._1

  def map[B](f: A => B): Random[B] = new Random[B]({ seed0 =>
    val (a, seed1) = self.op(seed0)
    (f(a), seed1)
  })

  def flatMap[B](f: A => Random[B]): Random[B] = new Random[B]({ seed0 =>
    val (a, seed1) = self.op(seed0)
    f(a).op(seed1)
  })

  override def toString: String = "<random>"
}

object Random {
  def int(min: Int, max: Int): Random[Int] =
    new Random(_.nextInt(min, max))

  val float: Random[Float] =
    new Random(_.nextFloat)
}

Now we can use Scala's for comprehensions:

for {
  x <- Random.int(-5, 5)
  y <- Random.int(-3, 3)
} yield (x, y)
// res2: Random[(Int, Int)] = <random>

The tradeoffs here are the usual ones when we're talking about functional programming in Scala: reasoning ability, convenience, performance, … In the pure case, there are also multiple other possible encodings, including free monads. Luckily, this blog covers that topic in another post.

How do other libraries fare here?

- ScalaCheck, up to version 1.12.x, uses the mutable scala.util.Random.
- ScalaCheck 1.13.x uses an immutable Seed trait like the one from above, and there is also an additional state-monadic layer on top of it.
- QuickCheck also uses a pure Seed, but they don't use the updated seed. Instead, their approach is via an additional primitive split of type Seed => (Seed, Seed), which gets used to "distribute" randomness during composition (see the paper by Claessen & Pałka about the theory behind that). It is worth noting that Java 8 introduced a SplittableRandom class.

For this post, we're assuming that mutable state is a given. We'll use scala.util.Random (because it's readily available) in a similar fashion to ScalaCheck 1.12.x.

Asynchronous programming is all the rage these days. This means that many functions will not return plain values of type A, but rather Future[A], Task[A] or some other similar type. For our testing framework, this poses a challenge: if our properties call such asynchronous functions, the framework needs to know how to deal with a lot of Future[Boolean] values. On the JVM, although not ideal, we could fall back to blocking on the result and proceed as usual. On Scala.js, this won't fly, because you just can't block in JavaScript. Most general-purpose testing frameworks, like Specs2, have a story about this, enabling asynchronous checking of assertions. In theory, it's not a problem to support this in a property testing library. But in practice, there are some complications:

- General-purpose frameworks already deal with asynchrony, e.g. Futures in ScalaTest.
- Should generators be able to produce Future values? We can easily imagine wanting to draw from a pool of inputs stemming from a database, or possibly to get better randomness from random.org. (The latter is a joke.)
- Which Task? fs2's Task? All of them?

If in the first design decisions we had chosen exhaustive generators, this problem would be even tougher, because designing a correct effectful stream type (of all possible inputs) is not trivial.

For this post, we're assuming that we're only interested in synchronous properties, or can always block.
However, I'd like to add: I'd probably try to incorporate async properties right from the start if I were to implement a testing library from scratch.

What about the existing libraries?

- QuickCheck has morallyDubiousIOProperty (nowadays just ioProperty). But there is also more advanced support for monadic testing.

Let's summarize what we have so far:

Now, I'd like to talk about how to "package" random generators. Earlier, we've only seen random integer and floating-point numbers, but of course, we want something more complex, including custom data structures. It is convenient to abstract over this and specify the concept of a generator for type A. The idea is to make a generator for a type as "general" as possible and then provide combinators to compose them.

import scala.util.Random

trait Gen[T] {
  def generate(rnd: Random): T
}

object Gen {
  val int: Gen[Int] = new Gen[Int] {
    def generate(rnd: Random): Int = rnd.nextInt()
  }
}

An obvious combinator is a generator for tuples:

def zip[T, U](genT: Gen[T], genU: Gen[U]): Gen[(T, U)] = new Gen[(T, U)] {
  def generate(rnd: Random): (T, U) =
    (genT.generate(rnd), genU.generate(rnd))
}

But we still have a problem: there is currently no way to talk about the size of the generated inputs. Let's say we want to check an expensive algorithm over lists, for example with a complexity of $\mathcal O(n^3)$. A naive implementation of a list generator would take a random size, and then give you some generator for lists. The problem arises at the use site: whenever you want to change the size of the generated inputs, you need to change the expression constructing the generator. But we'd like to do better here:

For this post, there should be a way to specify a maximum size of generated values, together with a way to influence that size in the tests without having to modify the generators.
Here's how we can do that:

import scala.util.Random

trait Gen[T] {
  def generate(size: Int, rnd: Random): T

  override def toString: String = "<gen>"
}

object Gen {
  val int: Gen[Int] = new Gen[Int] {
    def generate(size: Int, rnd: Random): Int =
      rnd.nextInt(2 * size + 1) - size
  }

  def list[T](genT: Gen[T]): Gen[List[T]] = new Gen[List[T]] {
    def generate(size: Int, rnd: Random): List[T] =
      List.fill(rnd.nextInt(size + 1))(genT.generate(size, rnd))
  }
}

We can now check this (note that for the purpose of this post we'll be using fixed seeds):

def printSample[T](genT: Gen[T], size: Int, count: Int = 10): Unit = {
  val rnd = new Random(0)
  for (i <- 0 until size)
    println(genT.generate(size, rnd))
}

scala> printSample(Gen.int, 10)
2
6
-6
-8
1
4
-8
5
-4
-8

scala> printSample(Gen.int, 3)
2
-1
1

scala> printSample(Gen.list(Gen.int), 10)
List()
List(-6, -8, 1, 4, -8, 5)
List(-8, -2)
List(4, 6)
List(4, 0)
List(10, 4, -9, -7, 7)
List(-8, 6, -4, 9, -1, 10, 4, 7, -8)
List(1, 7, -7, 4, 0, 5, 4, 9, 7, 4)
List(5, 9, -3, 3, -10)
List()

scala> printSample(Gen.list(Gen.int), 3)
List(-1, 1)
List(1, -3)
List(-2, 3)

That's already pretty cool. But there's another hidden design decision here: we're using the same size on all sub-elements in the generated thing. For example, in Gen.list, we're just passing the size through to the child generator. SmallCheck does that differently: the "size" is defined to be the total number of constructors in the generated value. For integer numbers, the "number of constructors" is basically the number itself. For example, the value List(1, 2) has size $2$ in our framework (length of the list), but size $1 + 2 + 2 = 5$ in SmallCheck (roughly: size of all elements plus length of list). Of course, our design decision might mean that stuff grows too fast.
The explicit size parameter can be used to alleviate that, especially for writing recursive generators:

def recList[T](genT: Gen[T]): Gen[List[T]] = new Gen[List[T]] {
  // extremely stupid implementation, don't use it
  def generate(size: Int, rnd: Random): List[T] =
    if (rnd.nextInt(size + 1) > 0)
      genT.generate(size, rnd) :: recList(genT).generate(size - 1, rnd)
    else
      Nil
}
// recList: [T](genT: Gen[T])Gen[List[T]]

printSample(recList(Gen.int), 10)
// List()
// List(-6, 1, -6)
// List(-8, 8, 4, 7, 0, 5, -4)
// List(-8)
// List(9, -3, 4)
// List(1, 3, -7, -2, -3, 0, -3, 3)
// List()
// List(10, 5, -8, -4, -5, 4, -1)
// List(-5, 9, 7)
// List(-8)

We can also provide a combinator for this:

def resize[T](genT: Gen[T], newSize: Int): Gen[T] = new Gen[T] {
  def generate(size: Int, rnd: Random): T =
    genT.generate(newSize, rnd)
}

That one is useful because in reality ScalaCheck's generate method takes some more parameters than just the size. Some readers might be reminded that this is just the reader monad and its local combinator in disguise.

In order to make these generators nicely composable, we can leverage for comprehensions.
We just need to implement map, flatMap and withFilter:

import scala.util.Random

trait Gen[T] { self =>
  def generate(size: Int, rnd: Random): T

  // Generate a value and then apply a function to it
  def map[U](f: T => U): Gen[U] = new Gen[U] {
    def generate(size: Int, rnd: Random): U =
      f(self.generate(size, rnd))
  }

  // Generate a value and then use it to produce a new generator
  def flatMap[U](f: T => Gen[U]): Gen[U] = new Gen[U] {
    def generate(size: Int, rnd: Random): U =
      f(self.generate(size, rnd)).generate(size, rnd)
  }

  // Repeatedly generate values until one passes the check
  // (We would usually call this `filter`, but Scala requires us to
  // call it `withFilter` in order to be used in `for` comprehensions)
  def withFilter(p: T => Boolean): Gen[T] = new Gen[T] {
    def generate(size: Int, rnd: Random): T = {
      val candidate = self.generate(size, rnd)
      if (p(candidate))
        candidate
      else
        // try again
        generate(size, rnd)
    }
  }

  override def toString: String = "<gen>"
}

object Gen {
  // unchanged from above
  val int: Gen[Int] = new Gen[Int] {
    def generate(size: Int, rnd: Random): Int =
      rnd.nextInt(2 * size + 1) - size
  }

  def list[T](genT: Gen[T]): Gen[List[T]] = new Gen[List[T]] {
    def generate(size: Int, rnd: Random): List[T] =
      List.fill(rnd.nextInt(size + 1))(genT.generate(size, rnd))
  }
}

Look how simple composition is now:

case class Frac(numerator: Int, denominator: Int)
// defined class Frac

val fracGen: Gen[Frac] =
  for {
    num <- Gen.int
    den <- Gen.int
    if den != 0
  } yield Frac(num, den)
// fracGen: Gen[Frac] = <gen>

printSample(fracGen, 10)
// Frac(2,6)
// Frac(-6,-8)
// Frac(1,4)
// Frac(-8,5)
// Frac(-4,-8)
// Frac(-2,-8)
// Frac(4,6)
// Frac(1,4)
// Frac(0,-10)
// Frac(10,4)

And we can even read the construction nicely: "First draw a numerator, then draw a denominator, then check that the denominator is not zero, then construct a fraction."

However, we need to be cautious with the filtering. If you look closely at the implementation of withFilter, you can see that there is potential for an infinite loop, for example when you pass in the filter _ => false. It will just keep generating values and then discard them. How do existing frameworks alleviate this?

- QuickCheck offers two filtering combinators: one that returns Gen[A] as above, and one that returns Gen[Option[A]].
The latter uses a number of tries and if they all fail, terminates and returns None. The former uses the latter, but keeps increasing the size parameter. Of course, this might not terminate.
- ScalaCheck's filter method returns Gen[A], but the possibility of failure is encoded in the return type of its equivalent of the generate method, which always returns Option[T]. But there is also a combinator which retries until it finds a valid input, called retryUntil.

As a side note: Gen as it is right now is definitely not a valid monad, because it internally relies on mutable state. But in my opinion, it is still justified to offer the map and flatMap methods, but not give a Monad instance. This prevents you from shoving Gen into functions which expect lawful monads.

It's still tedious to construct these generators by hand. Both QuickCheck and ScalaCheck introduce a thin layer atop generators, called Arbitrary. This is just a type class which contains a generator, nothing more. Here's how it would look in Scala:

trait Arbitrary[T] {
  def gen: Gen[T]
}

// in practice we would put that into the companion object
//object Arbitrary {
  implicit val arbitraryInt: Arbitrary[Int] = new Arbitrary[Int] {
    def gen = Gen.int
  }
//}

Based on this definition, ScalaCheck provides a lot of pre-defined instances for all sorts of types. For your custom types, the idea is that you define a low-level generator and wrap it into an implicit Arbitrary. Then, in your tests, you just use the implicitly provided generator, and avoid dropping down to constructing them manually. The purpose of the additional layer is explained easily: it is common to have multiple Gen[T] for the same T depending on which context it is needed in. But there should only be one Arbitrary[T] for each T. For example, you might have Gen[Int] for positive and negative integers, but you only have a single Arbitrary[Int] which covers all integers.
You use the latter when you actually need to supply an integer to your property, and the former to construct more complex generators, like for Frac above.

This is where everything really comes together. We're now looking at how to use Gen to implement the desired forAll function we've seen earlier in the introduction of the post, and how that is related to the Prop type I didn't define. I'll readily admit that the following isn't really a design decision per se, because we'll be guided by the presence of type classes in Scala. Still, one could reasonably structure this differently, and in fact, the design of the Prop type in e.g. QuickCheck is much more complex than what you'll see. The rest of this post will now depart from the way it's done in ScalaCheck, although the ideas are still similar. Instead, I'll try to show a simplified version without introducing the complications required to make it work nicely.

Let's start with the concept of a property. A property is something that we can run and which returns a result. The result should ideally be something like a boolean: either the property holds or it doesn't. But one of the main features of any property testing library is that it will return a counterexample for the inputs where the property doesn't hold. Hence, we need to store this counterexample in the failure case. In practice, the result type would be much richer, with attached labels, reasons, expectations, counters, … and more diagnostic fields.

sealed trait Result

case object Success extends Result

final case class Failure(counterexample: List[String]) extends Result

object Result {
  def fromBoolean(b: Boolean): Result =
    if (b)
      Success
    else
      // if it's false, it's false; no input has been produced,
      // so the counterexample is empty
      Failure(Nil)
}

You'll note that I've used List[String] here, because in the end we only want to print the counterexample on the console. ScalaCheck has a dedicated Pretty type for that.
We could do even more fancy things here if we wanted to, but let's keep it simple. Now we define the Prop type:

trait Prop {
  def run(size: Int, rnd: Random): Result

  override def toString: String = "<prop>"
}

What's missing is a way to construct properties. Sure, we could implement the trait manually in our tests, but that would be tedious. Type classes to the rescue! We call something testable if it can be converted to a Prop:

trait Testable[T] {
  def asProp(t: T): Prop
}

// in practice we would put these into the companion object
//object Testable {
  // Booleans can be trivially converted to a property:
  // They are already basically a `Result`, so no need
  // to run anything!
  implicit val booleanIsTestable: Testable[Boolean] = new Testable[Boolean] {
    def asProp(t: Boolean): Prop = new Prop {
      def run(size: Int, rnd: Random): Result =
        Result.fromBoolean(t)
    }
  }

  // Props are already `Prop`s.
  implicit val propIsTestable: Testable[Prop] = new Testable[Prop] {
    def asProp(t: Prop): Prop = t
  }
//}

Now we're all set:

def forAll[I, O](prop: I => O)(implicit arbI: Arbitrary[I], testO: Testable[O]): Prop = new Prop {
  def run(size: Int, rnd: Random): Result = {
    val input = arbI.gen.generate(size, rnd)
    val subprop = testO.asProp(prop(input))
    subprop.run(size, rnd) match {
      case Success =>
        Success
      case Failure(counterexample) =>
        Failure(input.toString :: counterexample)
    }
  }
}

Let's unpack this step by step.

- The argument is of type I => O. This is supposed to be our parameterized property, for example { (x: Int) => x == x }. Because we abstracted over values that can be generated (Arbitrary) and things that can be tested (Testable), the input and output types are completely generic. In the implicit block, we're taking the instructions of how to fit everything together.
- The result is a Prop; that is, a thing that we can run and that produces a boolean-ish Result.
- To run it, we first generate a random input using the Gen[I] which we get from the Arbitrary[I].
- We then plug that input I into the parameterized property.
- To stick with the example, we evaluate the anonymous function `{ (x: Int) => x == x }` at input 5, and obtain `true`.
- The output of the property is converted to a `Prop` again. This allows us to recursively nest `forAll`s, for example when we need two inputs.

At this point we should look at an example.

```scala
val propReflexivity = forAll { (x: Int) => x == x }
// propReflexivity: Prop = <prop>
```

Cool, but how do we run this? Remember that our tool is supposed to evaluate a property on multiple inputs. All these evaluations will produce a `Result`. Hence, we need to merge those together into a single result. We'll also define a convenient function that runs a property multiple times on different sizes:

```scala
def merge(rs: List[Result]): Result =
  rs.foldLeft(Success: Result) {
    case (Failure(cs), _)       => Failure(cs)
    case (Success, Success)     => Success
    case (Success, Failure(cs)) => Failure(cs)
  }

def check[P](prop: P)(implicit testP: Testable[P]): Unit = {
  val rnd = new Random(0)
  val rs =
    for (size <- 0 to 100)
      yield testP.asProp(prop).run(size, rnd)
  merge(rs.toList) match {
    case Success =>
      println("✓ Property successfully checked")
    case Failure(counterexample) =>
      val pretty = counterexample.mkString("(", ", ", ")")
      println(s"✗ Property failed with counterexample: $pretty")
  }
}
```

What is happening here?

- The `merge` function takes a list of `Result`s and returns the first `Failure`, if it exists. Otherwise it returns `Success`. In case there are multiple `Failure`s, it doesn't care and just discards the later ones.
- The `check` function initializes a fresh random generator.

Let's check our property!

```scala
scala> check(propReflexivity)
✓ Property successfully checked
```

… and how about something wrong?

```scala
scala> check(forAll { (x: Int) =>
     |   x > x
     | })
✗ Property failed with counterexample: (0)
```

Okay, we're almost done. The only tedious thing that remains is that we have to use the `forAll` combinator, especially in the nested case. It would be great if we could just use `check` and pass it a function. But since we've used type classes for everything, we're in luck!
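Before moving on, the claim that only the first `Failure` survives merging can be checked in isolation (the snippet re-declares `Result` so it stands alone; `MergeDemo` is mine, not part of the post):

```scala
sealed trait Result
case object Success extends Result
final case class Failure(counterexample: List[String]) extends Result

object MergeDemo extends App {
  // identical fold to the merge above: the first Failure wins
  def merge(rs: List[Result]): Result =
    rs.foldLeft(Success: Result) {
      case (Failure(cs), _)       => Failure(cs)
      case (Success, Success)     => Success
      case (Success, Failure(cs)) => Failure(cs)
    }

  assert(merge(Nil) == Success)
  assert(merge(List(Success, Success)) == Success)
  // the first failure is reported; later ones are discarded
  assert(merge(List(Success, Failure(List("3")), Failure(List("9"))))
    == Failure(List("3")))
  println("ok")
}
```

This is also why `check` reports at most one counterexample even when several sizes fail.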
```scala
implicit def funTestable[I : Arbitrary, O : Testable]: Testable[I => O] =
  new Testable[I => O] {
    def asProp(f: I => O): Prop =
      // wait for it ...
      // ...
      // ...
      // it's really simple ...
      forAll(f)
  }
```

Now we can check our functions even easier!

```scala
scala> check { (x: Int) =>
     |   x == x
     | }
✓ Property successfully checked

scala> check { (x: Int) =>
     |   x > x
     | }
✗ Property failed with counterexample: (0)

scala> check { (x: Int) => (y: Int) =>
     |   x + y == y + x
     | }
✓ Property successfully checked

scala> check { (x: Int) => (y: Int) =>
     |   x + y == x * y
     | }
✗ Property failed with counterexample: (0, 1)
```

Now, if you look closely, you can basically get rid of the `Prop` class and define it as

```scala
type Prop = Gen[Result]
```

If you think about this for a moment, it makes sense: a "property" is really just a thing which feeds on randomness and produces a result. The only thing left is to define a driver which runs a couple of iterations and gathers the results; in our implementation, that's the `check` function. I encourage you to spell out the other functions (e.g. `forAll`), and you will notice that our `Prop` trait is indeed isomorphic to `Gen[Result]`. In practice, QuickCheck uses such a representation (although with some more contraptions).

It turns out that it's not that hard to write a small property-testing library. I'm going to stop here with the implementation, although there are still some things to explore.

Finally, I'd like to note that there are many more libraries out there than I've mentioned here, some of which depart more, some less, from the original Haskell implementation. They even exist for not-so-functional languages, e.g. Javaslang for Java or Hypothesis for Python.

Correction: In a previous version of this post, I incorrectly stated that ScalaCheck uses a mutable random generator. This is only true up to ScalaCheck 1.12.x. I have updated that section in the post.

Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.
12 April 2012 10:55 [Source: ICIS news]

SINGAPORE (ICIS)--The chlor-alkali facility in Yeosu consists of a 275,000 tonne/year ethylene dichloride (EDC) plant, vinyl chloride monomer (VCM) and polyvinyl chloride (PVC) units with capacities of 750,000 tonnes/year each, as well as a 230,000 dry metric tonne/year caustic soda plant. The EDC and caustic soda units at Yeosu are currently undergoing expansion work that is slated to be completed by the end of this year. LG Chem will expand its EDC plant capacity to 575,000 tonnes/year while the capacity of its caustic soda unit will be ramped up to 500,000 dmt
On 09.09.2011 20:21, Jacob Holm wrote:

Of course. I'm not new to PEP 342 :) But I have to apologize: what I did test was the confusingly similar

    return yield - principal

which isn't allowed (yes, I know that even return (yield -principal) isn't allowed, but that's not for syntactical reasons.)

Now I checked properly: In fact, "yield" expressions after assignment operators are special-cased by the grammar, so that they don't need to be parenthesized [1]. In all other places, yield expressions must occur in parentheses. For example:

    myreturn = principal - yield

Georg

[1] I guess that's because it was thought to be a common case. I agree that it's not really helping readability.
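The grammar special case Georg describes can be checked directly in any modern Python (the thread itself predates return-with-value in generators): a bare yield is allowed immediately after an assignment operator, but in other expression positions it must be parenthesized. A small runnable illustration, with hypothetical function names of my own choosing:

```python
# A bare "yield" expression is allowed directly after "=",
# so no parentheses are needed here:
def account(principal):
    payment = yield principal          # special-cased by the grammar
    yield principal - payment

# In any other expression position, an unparenthesized yield is a
# SyntaxError, which we can demonstrate by compiling source at runtime:
bad = "def f():\n    x = 1 - yield 2\n"
try:
    compile(bad, "<test>", "exec")
    raised = False
except SyntaxError:
    raised = True

gen = account(100)
assert gen.send(None) == 100   # advance to the first yield
assert gen.send(30) == 70      # send a payment, receive the remainder
assert raised                  # the unparenthesized form did not compile
```

With parentheses, `x = 1 - (yield 2)` compiles fine; the special case only removes the parentheses requirement in the plain assignment position.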
A class library DLL is a cleaner, simpler interface for programmers in all .NET languages including C# and VB.NET. Our class library diCrSysPKINet.dll provides an interface for .NET programmers to our CryptoSys PKI Toolkit. All methods in diCrSysPKINet.dll are straight static methods, so there is no need even to create any objects. Just add a reference to it in your project, use the CryptoSysPKI namespace, and you are away.

The C# source code for the class library is available so you can change or add to it if you want. If you want to link statically instead in your C# project, then just include the CryptoSysPKI.cs module from the source code in your project and don't bother adding any references to the class library.

To use the class library:

1. Copy the diCrSysPKINet.dll library file into a convenient folder (Hint: the original is included in the main installation for all versions and should be in the folder C:\Program Files\CryptoSysPKI\DotNet unless you installed it in a different directory).
2. In your project, add a reference to diCrSysPKINet.dll.
3. Add using CryptoSysPKI; or (for VB.NET) Imports CryptoSysPKI to your code.

For example, in C#:

    using CryptoSysPKI;
    int n = General.Version();
    Console.WriteLine("Version = {0}", n);

or in VB.NET:

    Imports CryptoSysPKI
    Dim n As Integer
    n = General.Version()
    Console.WriteLine("Version = {0}", n)

See the example test code provided in the distribution download for both C# and VB.NET.

To use the .NET class library, you need to download and install the trial version of the core CryptoSys PKI Toolkit, unless you already have a copy installed.

    Your          (References)                        (Calls)
    .NET project -------------> diCrSysPKINet.dll --------> diCrPKI.dll
                                [Class Library]            [Core Win32 DLL]

    using CryptoSysPKI;

    OR

    Imports CryptoSysPKI

This page last updated 21 February 2009
Dreamhost Broke Virtual Python Installations

Yesterday I was puzzled to note my site horsetrailratings.com was reporting an internal server error. Looking at the server logs did not help. Running the FastCGI script by hand gave a mysterious traceback that ended with:

    ImportError: /home/.../lib/python2.4/lib-dynload/collections.so: undefined symbol: _PyArg_NoKeywords

Experimenting further, it seemed just importing many stdlib modules resulted in the same thing; for example, import threading failed with that. It seems others have run into the same problem with their virtual python installations. I filed a support request at Dreamhost, but they responded saying Python is working just fine. And it is; it is just that virtual python installations broke.

I decided I would use this as an excuse to try and get Python 2.5 running on Dreamhost and migrate my application to that. It turned out to be easy with these instructions. I had tried compiling Python on Dreamhost before, but had always ended up with a binary that did not work. The working incantation is:

    ./configure --prefix=$HOME/opt/ --enable-unicode=ucs4

I guess I will need to set up some kind of monitoring system for my web app so that I will know about outages sooner rather than later…
In some cases, a BooleanField returns an integer instead of a bool. The following example uses a MySQL backend:

    # models.py
    from django.db import models

    class Simple(models.Model):
        b = models.BooleanField()

    $ ./manage.py syncdb
    Creating table djest_simple
    $ python
    Python 2.5 (r25:51908, Sep 10 2007, 13:30:49)
    [GCC 3.4.6 20060404 (Red Hat 3.4.6-8)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from djest.models import Simple
    >>> simple = Simple.objects.create(b=True)
    >>> simple.b
    True
    >>> simple_2 = Simple.objects.get(pk=simple.pk)
    >>> simple_2.b
    1

This may not be a problem for normal usage, but makes testing, particularly using doctest, much less elegant.

patch for django/db/models/fields/__init__.py

This is kind of tricky, because under the hood in Python bool is a subclass of int that only ever has two instances (which test equal to 0 and 1 for False and True, respectively). Not all DBs actually store a boolean value, either; some store a 0 or a 1, and return that. Given that, and Python's general bent toward duck typing, I'm not sure whether we should strictly ensure that it always returns a bool instance.
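The reason this bites doctests rather than ordinary code is that 1 and True compare equal in Python but print differently, so equality-based logic keeps working while doctest's textual comparison fails. A quick illustration (plain Python, no Django required; the to_python function below is a hypothetical sketch of the coercion a field-level fix would add, not Django's actual code):

```python
# MySQL stores BOOLEAN as TINYINT(1), so a freshly loaded model may hand
# back the int 1 where the bool True went in.
stored = 1        # what the backend returns
original = True   # what was assigned

# Ordinary comparisons and truth tests still behave:
assert stored == original
assert bool(stored) is True

# But doctest compares printed output, and the reprs differ:
assert repr(stored) == "1"
assert repr(original) == "True"
assert repr(stored) != repr(original)

# A field-level fix would coerce during conversion to Python values:
def to_python(value):
    # hypothetical coercion step; anything equal to True/False
    # becomes a genuine bool, everything else passes through
    if value in (True, False):
        return bool(value)
    return value

assert to_python(1) is True
assert to_python(0) is False
```

Because bool is a subclass of int, the coercion is lossless for 0 and 1, which is what makes a fix like this safe for backends that store tiny integers.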
Hello good Monks. First let me start out with the problem: given an arbitrary list of strings, find the longest common substring. My approach to this problem is to grab one of the strings from the list, scan through it with successively decreasing substring lengths, and check the list for matches. This is simple, and seems quite effective, but I'm wondering if there are any problems with this approach? Could it be done simpler and more efficiently? Is there a well-known algorithm to do this, and I am too daft to find it? Here is the working code I came up with. I do not have any particular problem with this code, I just wanted to run it by the Monastery to see if anyone could give me suggestions, or find any lurking problems:

    #!/usr/bin/perl
    use warnings;
    use strict;
    use Data::Dumper;

    for ([ qw(fooabc123 fooabc321 foobca232) ],
         [ qw(abcfoo123 bcafoo321 foo123abc) ],
         [ qw(foo bor boz bzo) ]) {
        print Dumper($_);
        print findlcs(@{ $_ }), "\n";
        print "---\n";
    }

    sub findlcs {
        my $substr = $_[0];
        my $len    = length $_[0];
        my $off    = 0;
        while ($substr) {
            my @matches = grep /\Q$substr/, @_;
            #printf "%s%-".(length($_[0])-$off)."s matches %d\n",
            #    " " x $off, $substr, scalar @matches;
            last if @matches == @_;
            $off++;
            $len-- and $off=0 if $off+$len > length $_[0];
            $substr = substr $_[0], $off, $len;
        }
        return $substr;
    }

It seems I was a bit hasty in posting this question, and probably should have done a little more research first. gmax has pointed out Longest Common Substring, and etcshadow has pointed out Algorithm::Diff (which I have seen before, and is undoubtedly the reason the phrase "longest common substring" was floating around in my brain to begin with). Also, I updated my node mentioning japhy's Longest common substring, but apparently the update got eaten. (Is it just my imagination, or didn't we recently gain the ability to update root nodes?) While Algorithm::Diff doesn't do exactly the same thing I am doing, it would have probably been useful at least.
Sorry for wasting any time, and many thanks for the replies that did come! :-)

Maybe this is more of a meditation, but I don't really consider it to be horribly obnoxious to ask a question that might be answerable via documentation search. Especially when you consider how frequently people post ludicrously simple questions like "how do I write a regexp that does this: ...?" The other monks don't jump on those folks' cases, with "perldoc perlre... RTFM!" I don't understand why some of them are upset when someone asks a far more intelligent and informed question that might be answered by doc-diving. The truth of the matter is that doc-diving is often very difficult for finding where to start. You tend to run into the all-too-frequent problem of "I can find out what $module does, but I can't find which module does $thing". Anyway, don't apologize, it was a perfectly reasonable question. I (and the other folks) were just pointing you towards resources that might help in your search for more info.

Can anyone tell me how to use Algorithm::Diff::LCS to find the longest common substring of an array of strings?

cheers

tachyon

Building on japhy's regex, this seems to work and it's fairly simple, though I haven't tested its efficiency. It returns undef if there is no common substring. Caveat: The strings mustn't contain nulls.

    #! perl -slw
    use strict;

    sub lcs{
        my $strings = join "\0", @_;
        my $lcs;
        for my $n ( 1 .. length $strings ) {
            my $re = "(.{$n})" . '.*\0.*\1' x ( @_ - 1 );
            last unless $strings =~ $re;
            $lcs = $1;
        }
        return $lcs;
    }

    my @a = <DATA>;
    chomp @a;
    print "lcs: ", lcs( @a );

    __DATA__
    The quick brown fox jump over the lazy dog
    The quick brown fox jumps over the lazy
    jumps over the lazy dog The quick brown fox
    quick brown fox jumps over the lazy dog

While significantly faster than the OP's, I believe it suffers from some of the same scalability issues.
That's one thing I hate about regular expressions: determining the complexity of an algorithm built on them tends to be very difficult. Anyway, that also suffers from another problem. Try it with

    my @a = ('a' . 'X' x 32768, 'a' . 'O' x 32768);

which dies with:

    Quantifier in {,} bigger than 32766 in regex; marked by <-- HERE in
    m/(.{ <-- HERE 32767}).*\0.*\1/

-sauoq
"My two cents aren't worth a dime.";

"While significantly faster than the OP's"

Actually, simply using index instead of m// in the grep makes my algorithm a bit faster than BrowserUk's regex. Granted, there's still a scalability problem here, but with the small tweak suggested by CombatSquirrel, it's much faster than the original, and works fine on data that is representative of what I'm actually using this for. Here is the benchmark code I used: And here are the results I got: I guess a length limit of 32k for the longest common substring could be a concern for some applications -- but nothing I regularly have to deal with. Do you have an alternative?
This might be a better starting point. Or, if you prefer, the PDF version. There's also a powerpoint version in there. One method that might work well enough would be to concatenate the strings and then look for the longest non-overlapping repetition. You'll have to worry about string boundaries though. Here's some code for finding the longest repeated substring in case you take to the idea: sub repeated_substring { my ($ssl,$pos) = (1,-1); my $len = length($_[0]); my $i = 0; while ($i < $len-2*$ssl) { if (index($_[0],substr($_[0],$i,$ssl),$i+$ssl) == -1) { $i++; ne +xt } $pos = $i; $ssl++; } return $pos == -1 ? "" : substr($_[0], $pos, $ssl-1); } [download] If you joined the strings with some character that won't appear in the strings (a colon say), then you could modify the above such that as soon as you hit a colon, stop Update: I just noticed that you're blindly grabbing the first element in the list. An optimization would be to sort the list of strings by length and always start with the shortest one (assuming you continue using your method). DOH! I just realized that my method won't work at all! Update: Okay ... I'm stubborn. I know it. Here's how to *make* it work with the repeated_substring() routine: sub findlcs { my @ret; for my $i (0..$#_-1) { for my $j ($i..$#_) { my $str = join ":", @_[$i,$j]; my $ans = repeated_substring($str); push @ret, $ans; } } return (sort { length($a) <=> length($b) } @ret)[0]; } [download] revdiablo, you did it well. Don't knock your implementation. Another! update: I realized on my way to pick up the kids that even this method fails. I sure hope revdiablo is watching this to see what could have happened to him :-) -QM -- Quantum Mechanics: The dreams stuff is made of Here is my suffix trees based QOTW entry updated for LCS. 
In a sense this is the fastest type of algorithm possible, since it's guaranteed linear in both time and space relative to the combined string length (add one termination character to each string, and actually there is also O(number_of_strings) due to the way I currently mark the strings, though that can be improved upon) Unfortunately the needed complex datastructures make it use so much memory and need so much setup that many of the clever solutions in QOTW 14 can be converted to effectively faster solutions. In fact, my unsubmitted entry to QOTW #14 is where my repeated_substring() routine came from I'm sure that this approach could be made much faster and clearer. This is an interesting problem and that's an intruiging algorithm. It would be really nice to see these all implemented in C and compared. I suspect that yours would fair much better. Which set of data you consider to be more realistic, will determine which algorithm/implementation is better suited to your application I guess. It's not very often that the choice of best algorithm varies so wildly with the input data. 
fooabc123 fooabc321 foobca232 Rate Tilly revdiablo CombatSqu BrowserUk Tilly 89.6/s -- -45% -85% -85% revdiablo 163/s 81% -- -72% -72% CombatSqu 581/s 549% 257% -- -1% BrowserUk 587/s 554% 261% 1% -- abcfoo123 bcafoo321 foo123abc Rate Tilly revdiablo CombatSqu BrowserUk Tilly 91.7/s -- -37% -82% -84% revdiablo 145/s 58% -- -72% -74% CombatSqu 514/s 460% 255% -- -9% BrowserUk 563/s 514% 289% 10% -- foo bor boz bzo Rate Tilly revdiablo BrowserUk CombatSqu Tilly 408/s -- -43% -59% -82% revdiablo 719/s 76% -- -28% -68% BrowserUk 995/s 144% 38% -- -56% CombatSqu 2236/s 449% 211% 125% -- The quick brown fox jump over the lazy dog The quick brown fox jumps over the lazy jumps over the lazy dog The quick brown fox quick brown fox jumps over the lazy dog Rate Tilly revdiablo CombatSqu BrowserUk Tilly 3.59/s -- -59% -89% -95% revdiablo 8.75/s 143% -- -74% -87% CombatSqu 33.4/s 829% 282% -- -51% BrowserUk 68.6/s 1807% 683% 105% -- The quick brown fox jump over the lazy dog The quick brown fox jumps over the lazy jumps over the lazy dog The quick brown fox quick brown fox jumps over the lazy dog x Rate revdiablo CombatSqu BrowserUk Tilly revdiablo 3.79/s -- -71% -91% -95% CombatSqu 12.9/s 240% -- -69% -85% BrowserUk 41.0/s 984% 219% -- -51% Tilly 83.2/s 2098% 546% 103% -- The quick brown fox jump over the lazy dog xThe quick brown fox jump over the lazy dog xxThe quick brown fox jump over the lazy dog xxxThe quick brown fox jump over the lazy dog xxxxThe quick brown fox jump over the lazy dog Rate Tilly BrowserUk revdiablo CombatSqu Tilly 0.761/s -- -98% -100% -100% BrowserUk 44.2/s 5714% -- -99% -99% revdiablo 4728/s 621405% 10589% -- -25% CombatSqu 6305/s 828641% 14153% 33% -- [download] Benchmark Incidentally my solution compares much better than others if you make the input strings much longer than the common substrings. For instance if the common match is "The quick brown fox" and the strings are each a few hundred characters, I win by a wide margin. 
The code below is broken! Please see Re: Re: Re: finding longest common substring (ALL common substrings) for details, and the update at the bottom for a couple of 'fixed' versions. This will never win the "fastest longest common substring" accolade, but it is interesting in that in a list context, it returns a list of all common substring sorted by length (longest first). I was also surprised how simple it was to code, and fairly surprised by how efficient it was given what it does. sub lcs{ our %subs = (); my $n = @_; shift =~ m[^.*(.+)(?{ $subs{ $^N }++ })(?!)] while @_; my @subs = sort{ length $b <=> length $a } grep{ $subs{ $_ } == $n } keys %subs; return wantarray ? @subs : $subs[ 0 ]; } [download] Update: The following two versions work, in as much as they will return the longest common substring if called in a scalar context. They will also return all common substrings (ordered longest to shortest) when called in a list context. As lcs routines, they are both slow, with lcs3() being marginally quicker than lcs2(). I'm not sure how they compare performance-wise with other mechanism for generating all common substrings. As implemented, they also do not preserve the value of two (unavoidable?) globals %subs & $n. This could be fixed by judicious use of local if it is of concern. sub lcs2{ our %subs = (); my $selector = ''; for our $n ( 0 .. $#_ ) { vec( $selector, $n, 1 ) = 1; $_[ $n ] =~ m[^.*?(.+?)(?{ $subs{ $^N } = '' unless exists $subs{ $^N }; vec( $subs{ $^N }, $n, 1 ) = 1 })(?!)]; } return wantarray ? sort{ length $b <=> length $a } grep{ $subs{ $_ } eq $selector } keys %subs : reduce{ length $a > length $b ? $a : $b } grep{ $subs{ $_ } eq $selector } keys %subs; } sub lcs3{ our %subs = (); my $selector = ' ' x @_; for our $n ( 0 .. $#_ ) { substr( $selector, $n, 1, '1' ); $_[ $n ] =~ m[^.*?(.+?)(?{ $subs{ $^N } = ' ' x @_ unless exists $subs{ $^N }; substr( $subs{ $^N }, $n, 1, '1' ); })(?!)]; } return wantarray ? 
sort{ length $b <=> length $a } grep{ $subs{ $_ } eq $selector } keys %subs : reduce{ length $a > length $b ? $a : $b } grep{ $subs{ $_ } eq $selector } keys %subs; } [download] Whether the above code has any merits I'm not sure, but it's here should anyone find a good use for it. This is indeed interesting. With my test data, it's actually a bit faster than my original version (though we've seen how much different data will affect the various algorithms). Pretty impressive, considering what it does. There appears to be a problem, however. It returns undef if you feed it qw(foo bor boz bzo), but works fine with qw(foo boor booz bzoo) and qw(fo bor boz bzo). So if there are any mismatching number of o's, it returns undef. I don't see why offhand; maybe you have some ideas? Sorry. The code is flawed. It does produce all the common substrings, but it will often select the wrong "longest". The problem occurs because if a substring occurs twice in one of the input strings, and not at all in one of the others, it's count will be the same as if it had appeared once in both, The selection mechanism, the longest key who's count is equal to the number of input strings is bogus, but suffuciently convincing that it worked for all 5 sets of test data I tried it on! I'm trying to think of an efficient way of counting how many of the original strings each substring is found in, but the only one I've come up with so far would limit the number of input strings to 32. A couple of other ideas I tried worked, but carry enough overhead to make the method less interesting. I'll keep looking at it, but maybe my "surprise at the simplicity and efficiency" was the red flag that should have told me that I was missing something! Still, nothing ventured, nothing gained. details. 
#!perl -wl my @L1 = qw/A b c de fgh ijk lmno pqrst uvwxyz/; my @L2 = qw/a b c de fgh ijk lmnO pqrsT uvwxyZ/; my @MATCHES = (); my $LMATCH = 0; for $I1 (sort{length($b)<=>length($a)}@L1){ last if length($I1) < $LMATCH; for $I2 (grep{$_ eq $I1}sort{length($b)<=>length($a)}@L2){ $LMATCH = length($I2); push @MATCHES, $I1; } } print "LIST1: " . scalar @L1 . " items LIST2: " . scalar @L2 . " items MATCHES: @MATCHES"; [download] perl -e"map print(chr(hex(( q{6f634070617a6d692e7273650a}=~/../g)[hex]))), (q{375542349abb99098106c}=~/./g)" use strict; use warnings; my @first = qw/ short bigger longest superbig /; my @second = qw/ short longer even longer longest /; my $string = ""; foreach my $item ( @first ) { $string = $item if length $item > length $string and grep { $item eq $_ } @second; } print $string, "\n"; [download] One way to improve the algorithm to scale better may be to keep the arrays ordered in decending order of length so that you could just stop searching on the first match. That would require more overhead at "insertion" time, but much less at searching time. Dave I haven't looked at your solution in depth, but upon initial inspection it seems to do the same thing as pizza_milkshake's at Re: finding longest common substring. I must admit that -- at least to my eyes -- your version looks cleaner and is more understandable than his, but it doesn't seem like either of these solve the problem I was asking about. Perhaps my quick summary of the problem was unclear, or perhaps I am simply missing something. I was looking for the longest substring that is common to all elements of the list. In my example, I have 3 lists, but each one is analyzed independently. Both of your solutions seem to be searching two lists in parallel for the longest matching element. Maybe I'm just not looking at them from the right perspective? 
    use strict;
    use warnings;

    my @array = (
        "this is a string is",
        "a string this is",
        "tie a string together"
    );

    my $string = join "|", @array;
    my $repeat = '\|.*?\1.*?' x ( scalar(@array) - 1 );

    for my $n ( reverse ( 1 .. length( $string ) ) ) {
        next unless $string =~ /(.{$n}).*?$repeat/;
        print $1, "\n";
    }

Ob-Update: This assumes the substrings don't overlap. And I've tinkered with the code to make it keep track of what the original substrings looked like. In so doing, I used the | character as a delimiter, which means it shouldn't appear in the original substrings. Thanks danger for the polite nudge. ;)
Card dealing exercise.
Started by Donniedarko, Nov 16 2011 05:18 AM
5 replies to this topic

#1 Posted 16 November 2011 - 05:18 AM

Hello! I'm in the process of learning C++ and boy am I having fun! The logic is overwhelming. I'm stuck at an example of a card dealing program where the program prompts the user asking how many cards she wants to draw from a card deck. Naturally the same card can't show up twice. Now I understand most of the code, but the process of choosing the next available card is a little confusing. The specific function for choosing the next available card is:

    int select_next_available(int n)
    {
        int i = 0;
        while (card_drawn[i])
            i++;
        while (n-- > 0) {
            i++;
            while (card_drawn[i])
                i++;
        }
        card_drawn[i] = true;
        return i;
    }

The first while-loop finds the first available "false" card, i.e. a card that is not drawn yet. But I can't understand the two other while loops. The author explains it like this: "It then counts n more available cards, each time skipping over items already drawn." But I don't really understand the need for doing this. The function's "job" is to find the first available card that has the value false in the array, make it true and return it to the calling function, which adds a suit and rank to it. Now the program knows that that specific place in the array is true and can't draw it again? I'm confused, and since I don't have any teacher I thought you guys could help. Let me know if you need the rest of the code also. Btw this is my first post, so hi!

#2 Posted 16 November 2011 - 06:25 AM

I don't see the need either. You return the position of the first available card, which as you state is the purpose of the function. The other information does not add to the problem and changes your i value, unless we are missing something. :confused:

Put your code in code tags next time. Use the # sign.

    code goes here

Perfection of means and confusion of ends seem to characterize our age.
Albert Einstein :confused:

#3 Posted 16 November 2011 - 07:38 AM

Okay.
Maybe it's better I post the whole program:

    #include <iostream>
    #include <cstdlib>
    #include <ctime>
    #include <cmath>
    using namespace std;

    int rand_0toN1(int n);
    void draw_a_card();
    int select_next_available(int n);

    bool card_drawn[52];
    int cards_remaining = 52;
    char *suits[4] = {"hearts", "diamonds", "spades", "clubs"};
    char *ranks[13] = {"ace", ;; // Return this number.
    }

    int rand_0toN1(int n) {
        return rand() % n;
    }

Most likely me not understanding some part of the code making use of those two last while-loops in the select_next_available function.

#4 Posted 16 November 2011 - 09:51 AM

Ok, I see. Although the algorithm gets the next available card, he does not want it to get them in a sequential manner. So he uses a random value n to offset the position that will be returned. The function returns the nth available unused card.

#5 Posted 16 November 2011 - 11:27 AM

Okay, I understand. Correct me if I'm wrong, but that part of the function is not really necessary since the randomization of the argument passed when the call occurs is already done in a previous stage of the program? If so, does it add more "randomness" to the program?

+Thanks btw.

#6 Posted 16 November 2011 - 05:38 PM

The call to the random function in the 'main' part of the code only generates a random number, with its max possible value being the amount of possible cards you can choose from. It then passes this value to the get next selection function to work as an offset for the card to be drawn. Without passing the random value to the get next selection function you will return the first card that has not been chosen. That makes the result predictable.

Perfection of means and confusion of ends seem to characterize our age.
Albert Einstein :confused:
Copyright ©2001. This work is part of the W3C Semantic Web Activity. It has been produced by the RDF Core Working Group; comments may be sent to www-rdf-comments@w3.org, a mailing list with public archive. The editors and the Working Group plan to address feedback in future revisions of this document.

0. Introduction
   0.1 Model-theoretic semantics
   0.2 Graph Syntax
1. Interpretations
   1.1 Technical Note
   1.2 URIs, resources and literals
   1.3 Ground RDF Interpretations
   1.4 Example
2. Unlabeled nodes as existential assertions
   2.1 Comparison with logic
3. Entailment in RDF
   3.1 Skolemization
   3.2 Merging RDF graphs
4. RDFS interpretations
5. RDFS entailment
6. RDF containers
Appendix A. Summary of model theory
Appendix B. Acknowledgements
References

URIs are used here in the sense of [RFC 2396]. This model theory does not provide any analysis of time-varying data or of changes to URI denotations. It does not support some of the uses of rdf containers described in [RDFMS], such as the use of properties applied to containers to indicate properties of the members, and it ignores reification. Some of these may be covered by future extensions of the model theory.

Any semantic theory must be attached to a syntax. Of the several syntactic forms for RDF, we have chosen the RDF graph as described in [RDFMS] as the primary syntax, largely for its simplicity. We understand linear RDF notations such as N-Triples and rdf/xml [RDF/XML] as lexical notations for describing an RDF graph. An RDF graph is a partially labeled directed graph in which nodes are either unlabeled - these are also called anonymous or blank nodes - or else labeled with either URIs or literals; arcs are labeled with URIs; and distinct nodes have distinct labels. We use the N-triples syntax described in [RDFTestCases] to describe RDF graphs. Note that while this syntax uses bNode expressions to identify a particular unlabeled node, those expressions are not considered to be the label of that node. In particular, two N-triples documents which differ only in their bNode expressions will be understood to describe the same RDF graph.
(bNode expressions are local identifiers used by the notation, not labels of the nodes they identify.) Each triple is represented by a single arc, directed from the node corresponding to the subject to the node corresponding to the object. Nodes corresponding to urirefs and literals are labeled with the corresponding URI or literal, and arcs are labeled with the property URI from the corresponding triple. Notice that this requires that all occurrences of a particular uriref or bNode identifier in a document be mapped to a single node in the graph. Some definitions will be useful in what follows. An RDF graph will be said to be ground if every node in the graph is labeled. The vocabulary of a graph is the set of URIs that it contains. Notice that the question of whether or not a class contains itself as a member is quite different from the question of whether or not it is a subclass of itself. Similarly, the model theory makes no assumptions about the exact nature of literals, other than that they denote themselves. (This memo uses the QName syntax to refer to certain URIs defined by RDF. The following namespace prefix and URI values are assumed throughout: Prefix: rdf, namespace URI: Prefix: rdfs, namespace URI:) The use of the phrase "asserted triple" is a deliberate weasel-worded artifact, to allow an RDF graph or document to contain triples which are being used for some non-assertional purpose. Strict conformity to the RDF 1.0 specification [RDFMS] assumes that all triples in a document are asserted triples, but making the distinction allows RDF parsers and inference engines to conform to the RDF syntax and to respect the RDF model theory without necessarily being fully committed to it. One could treat unlabeled nodes exactly like URIs, semantically speaking, by extending the IS mapping to include them as well as URIs. That would amount to adopting the view that an unlabeled node is equivalent to a node with an unknown label. However, it seems to be more in conformance with [RDFMS] to treat unlabeled nodes as existentially bound variables. This will require some definitions, as the theory so far provides no meaning for unlabeled nodes.
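To make these conventions concrete, here is a small N-Triples illustration (the urirefs ex:a, ex:b and ex:c are hypothetical placeholders, not examples from this document):

```
# These two documents differ only in their bNode identifiers,
# so by the conventions above they describe the same RDF graph:
_:xxx <ex:a> <ex:b> .
_:yyy <ex:a> <ex:b> .

# A ground instance of that graph. Read existentially, the graph
# above says only that something stands in relation ex:a to ex:b,
# so the ground triple entails it -- but not the other way around:
<ex:c> <ex:a> <ex:b> .
```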
Notice that we have not changed the definition of an interpretation. The same interpretation that provides a truth-value for ground graphs also assigns truth-values to graphs with unlabeled nodes, even though it provides no interpretation for the unlabeled nodes themselves. Notice also that the identity of the particular unlabeled nodes can be omitted. The merge of a set S of RDF graphs is the graph obtained from the union of the graphs in S, in which nodes with the same label have been merged. Merging lemma. The merge of a set S is entailed by S, and entails every member of S. Notice that unlabeled nodes are not identified with other nodes in the merge, and indeed this reflects a basic principle of RDF graph inference: nodes with the same URI must be identified (i.e. the graph must be tidy), but unlabeled nodes must not be identified with other nodes or re-labeled with URIs. Anonymity lemma 1. Suppose E' is obtained from E in such a way that some unlabeled node in E is labeled with a URI in E'. Then E does not entail E'. Anonymity lemma 2. Suppose E' is obtained from E by identifying two distinct unlabeled nodes of E. Then E does not entail E', since E' must reflect the addition of new information about the identity of two unlabeled nodes. One might think, for example, that the expression _:xxx foo baz . could be distinguished from _:yyy foo baz . Recall however that by our conventions, these two N-triples documents describe the same RDF graph, and any graph is both a subgraph and an instance of itself. RDF Schema [RDFSchema] adds further vocabulary and further constraints on interpretations. The interpretations given here correspond more closely to the meanings assumed by DAML+OIL [DAML]. In the closure rules below, xxx and yyy stand for any uriref, bNode or literal expression, and uuu for any uriref or bNode (but not a literal). It is easy to see that this process will terminate on any finite RDF graph, since there are only finitely many possible triples that can be formed from a given finite vocabulary. For example, the schema-closure of the graph consisting of the single triple foo bar baz . contains the following N-triples (among others): foo bar baz . foo rdf:type rdfs:Resource . baz rdf:type rdfs:Resource . bar rdf:type rdf:Property .
Particular thanks to Dan Connolly for clarifying the relationship between RDF and RDFS, Ora Lassila for the idea of using graph syntax, Sergey Melnik for the translation into logic, and Jos De Roo and Graham Klyne for finding errors in earlier drafts. The use of an explicit extension mapping to allow self-application without violating the axiom of foundation was suggested by Chris Menzel. Peter Patel-Schneider found an error in an earlier draft.
https://www.w3.org/TR/2001/WD-rdf-mt-20010925/
In this article let’s learn how to convert a Python dictionary into JSON. Let us first understand what JSON is. JSON stands for JavaScript Object Notation. It is generally used to exchange information between web clients and web servers. The structure of JSON is similar to that of a dictionary in Python. The conditions are that every JSON key must be a string in double quotation marks, and the value corresponding to a key can be of any data type, like string, integer, nested JSON, etc. Serializing to JSON in Python produces a ‘str’ object. Also read: How to convert JSON to a dictionary in Python? Example: import json a = '{ "One":"A", "Two":"B", "Three":"C"}' Dictionary in Python is a built-in data type used to store data as keys associated with values. A dictionary is mutable, its keys are always unique (values can be repeated), and since Python 3.7 it preserves insertion order. The return type of a dictionary is the ‘dict’ object type. Example: #Dictionary in python is a built-in datatype so #no need to import anything. dict1 = { 'One' : 1, 'Two' : 2, 'C': 3} Convert Dict to JSON Python has a built-in module called “json” that helps us convert different data forms into JSON. The function we are using today is json.dumps(). The json.dumps() method allows us to convert a Python object (in this case a dictionary) into an equivalent JSON string. - First, we import the json module - Assign a variable name to the dictionary that has to be converted into a JSON string. - Use json.dumps(variable) to convert Note: Do not get confused between json.dumps and json.dump. json.dumps() is a method that can convert a Python object into a JSON string, whereas json.dump() is a method that is used for writing/dumping JSON into a file.
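The file-writing counterpart mentioned in the note above can be sketched like this (the file name is arbitrary; a temporary directory is used here just to keep the example self-contained):

```python
import json
import os
import tempfile

data = {"Name": "Adam", "Roll No": "1", "Class": "Python"}

# json.dump() serializes straight into an open file object.
path = os.path.join(tempfile.gettempdir(), "student.json")
with open(path, "w") as fh:
    json.dump(data, fh, indent=3)

# Reading it back with json.load() recovers an equal dictionary.
with open(path) as fh:
    restored = json.load(fh)

print(restored == data)  # True
```

Note the symmetry: json.dumps()/json.loads() work on strings, while json.dump()/json.load() work on file objects.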
Syntax of json.dumps() json.dumps(dict, indent) - dict – The Python dictionary we need to convert - indent – Number of spaces of indentation (the space at the beginning of each code line) import json dict1 ={ "Name": "Adam", "Roll No": "1", "Class": "Python" } json_object = json.dumps(dict1, indent = 3) print(json_object) Output: { "Name": "Adam", "Roll No": "1", "Class": "Python" } Convert dictionary to JSON using the sort_keys attribute Using the sort_keys attribute in the previously discussed dumps() method returns a JSON object with its keys in sorted order. If the attribute is set to True, the dictionary is sorted and then converted into a JSON object. If it is set to False, the dictionary is converted as-is, without sorting. import json dict1 ={ "Adam": 1, "Olive" : 4, "Malcom": 3, "Anh": 2, } json_object = json.dumps(dict1, indent = 3, sort_keys = True) print(json_object) Output: { "Adam": 1, "Anh": 2, "Malcom": 3, "Olive": 4 } Convert a nested dict into JSON A dict declared inside a dict is known as a nested dict. The dumps() method can also convert such a nested dict into JSON. import json dict1 ={ "Adam": {"Age" : 32, "Height" : 6.2}, "Malcom" : {"Age" : 26, "Height" : 5.8}, } json_object = json.dumps(dict1, indent = 3, sort_keys = True) print(json_object) Output: { "Adam": { "Age": 32, "Height": 6.2 }, "Malcom": { "Age": 26, "Height": 5.8 } } Summary In this article, we’ve discussed how to convert a dictionary data structure to JSON for further processing. We use the json module to serialize the dictionary into JSON.
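For the "further processing" mentioned in the summary, the JSON string produced by json.dumps() can be parsed back with json.loads(); a quick round-trip sketch:

```python
import json

original = {"Adam": {"Age": 32}, "Malcom": {"Age": 26}}

# Serialize to a JSON string, then parse it back into Python objects.
payload = json.dumps(original, sort_keys=True)
decoded = json.loads(payload)

print(decoded == original)  # True
print(type(payload).__name__, type(decoded).__name__)  # str dict
```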
https://www.askpython.com/python/dictionary/convert-dictionary-to-json
Gitweb:;a=commit;h=bdf88217b70dbb18c4ee27a6c497286e040a6705
Commit: bdf88217b70dbb18c4ee27a6c497286e040a6705
Parent: 0ddc9cc8fdfe3df7a90557e66069e3da2c584725
Author: Roland McGrath <[EMAIL PROTECTED]>
AuthorDate: Wed Jan 30 13:31:44 2008 +0100
Committer: Ingo Molnar <[EMAIL PROTECTED]>
CommitDate: Wed Jan 30 13:31:44 2008 +0100

x86: user_regset header

The new header <linux/regset.h> defines the types struct user_regset and struct user_regset_view, with some associated declarations. This new set of interfaces will become the standard way for arch code to expose user-mode machine-specific state. A single set of entry points into arch code can do all the low-level work in one place to fill the needs of core dumps, ptrace, and any other user-mode debugging facilities that might come along in the future. For existing arch code to adapt to the user_regset interfaces, each arch can work from the code it already has to support core files and ptrace. The formats you want for user_regset are the core file formats. The only wrinkle in adapting old ptrace implementation code as user_regset get and set functions is that these functions can be called on current as well as on another task_struct that is stopped and switched out as for ptrace. For some kinds of machine state, you may have to load it directly from CPU registers or otherwise differently for current than for another thread. (Your core dump support already handles this in elf_core_copy_regs for current and elf_core_copy_task_regs for other tasks, so just check there.) The set function should also be made to work on current in case that entails some special cases, though this was never required before for ptrace. Adding this flexibility covers the arch needs to open the door to more sophisticated new debugging facilities that don't always need to context-switch to do every little thing.
The copyin/copyout helper functions (in a later patch) relieve the arch code of most of the cumbersome details of the flexible get/set interfaces. Signed-off-by: Roland McGrath <[EMAIL PROTECTED]> Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]> Signed-off-by: Thomas Gleixner <[EMAIL PROTECTED]> --- include/linux/regset.h | 206 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 206 insertions(+), 0 deletions(-) diff --git a/include/linux/regset.h b/include/linux/regset.h new file mode 100644 index 0000000..85d0fb0 --- /dev/null +++ b/include/linux/regset.h @@ -0,0 +1,206 @@ +/* + *. + */ + +#ifndef _LINUX_REGSET_H +#define _LINUX_REGSET_H 1 + +#include <linux/compiler.h> +#include <linux/types.h> +struct task_struct; +struct user_regset; + + +/** + * user_regset_active_fn - type of @active function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * + * Return -%ENODEV if not available on the hardware found. + * Return %0 if no interesting state in this thread. + * Return >%0 number of @size units of interesting state. + * Any get call fetching state beyond that number will + * see the default initialization state for this data, + * so a caller that knows what the default state is need + * not copy it all out. + * This call is optional; the pointer is %NULL if there + * is no inexpensive check to yield a value < @n. 
+ */ +typedef int user_regset_active_fn(struct task_struct *target, + const struct user_regset *regset); + +/** + * user_regset_get_fn - type of @get function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * @pos: offset into the regset data to access, in bytes + * @count: amount of data to copy, in bytes + * @kbuf: if not %NULL, a kernel-space pointer to copy into + * @ubuf: if @kbuf is %NULL, a user-space pointer to copy into + * + * Fetch register values. Return %0 on success; -%EIO or -%ENODEV + * are usual failure returns. + */ +typedef int user_regset_get_fn(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + void *kbuf, void __user *ubuf); + +/** + * user_regset_set_fn - type of @set function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * @pos: offset into the regset data to access, in bytes + * @count: amount of data to copy, in bytes + * @kbuf: if not %NULL, a kernel-space pointer to copy from + * @ubuf: if @kbuf is %NULL, a user-space pointer to copy from + * + * Store register values. Return %0 on success; -%EIO or -%ENODEV + * are usual failure returns. + */ +typedef int user_regset_set_fn(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf); + +/** + * user_regset_writeback_fn - type of @writeback function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * @immediate: zero if writeback at completion of next context switch is OK + * + * This call is optional; usually the pointer is %NULL. When + * provided, there is some user memory associated with this regset's + * hardware, such as memory backing cached register data on register + * window machines; the regset's data controls what user memory is + * used (e.g. via the stack pointer value). + * + * Write register data back to user memory.
If the @immediate flag + * is nonzero, it must be written to the user memory so uaccess or + * access_process_vm() can see it when this call returns; if zero, + * then it must be written back by the time the task completes a + * context switch (as synchronized with wait_task_inactive()). + * Return %0 on success or if there was nothing to do, -%EFAULT for + * a memory problem (bad stack pointer or whatever), or -%EIO for a + * hardware problem. + */ +typedef int user_regset_writeback_fn(struct task_struct *target, + const struct user_regset *regset, + int immediate); + +/** + * struct user_regset - accessible thread CPU state + * @n: Number of slots (registers). + * @size: Size in bytes of a slot (register). + * @align: Required alignment, in bytes. + * @bias: Bias from natural indexing. + * @core_note_type: ELF note @n_type value used in core dumps. + * @get: Function to fetch values. + * @set: Function to store values. + * @active: Function to report if regset is active, or %NULL. + * @writeback: Function to write data back to user memory, @pos argument must be aligned according to @align; the @count + * argument must be a multiple of @size. These functions are not + * responsible for checking for invalid arguments. + * + * When there is a natural value to use as an index, @bias + * @bias from a segment selector index value computes the regset slot. + * + * If nonzero, @core_note_type + * (@n * @size) and nothing else. The core file note is normally + * omitted when there is an @active function and it returns zero. + */ +struct user_regset { + user_regset_get_fn *get; + user_regset_set_fn *set; + user_regset_active_fn *active; + user_regset_writeback_fn *writeback; + unsigned int n; + unsigned int size; + unsigned int align; + unsigned int bias; + unsigned int core_note_type; +}; + +/** + * struct user_regset_view - available regsets + * @name: Identifier, e.g. UTS_MACHINE string. + * @regsets: Array of @n regsets available in this view. 
+ * @n: Number of elements in @regsets. + * @e_machine: ELF header @e_machine %EM_* value written in core dumps. + * @e_flags: ELF header @e_flags value written in core dumps. + * @ei_osabi: ELF header @e_ident[%EI_OSABI] value written in core dumps. + * + * A regset view is a collection of regsets (&struct user_regset, + * above). This describes all the state of a thread that can be seen + * from a given architecture/ABI environment. More than one view might + * refer to the same &struct user_regset, or more than one regset + * might refer to the same machine-specific state in the thread. For + * example, a 32-bit thread's state could be examined from the 32-bit + * view or from the 64-bit view. Either method reaches the same thread + * register state, doing appropriate widening or truncation. + */ +struct user_regset_view { + const char *name; + const struct user_regset *regsets; + unsigned int n; + u32 e_flags; + u16 e_machine; + u8 ei_osabi; +}; + +/* + * This is documented here rather than at the definition sites because its + * implementation is machine-dependent but its interface is universal. + */ +/** + * task_user_regset_view - Return the process's native regset view. + * @tsk: a thread of the process in question + * + * Return the &struct user_regset_view that is native for the given process. + * For example, what it would access when it called ptrace(). + * Throughout the life of the process, this only changes at exec. + */ +const struct user_regset_view *task_user_regset_view(struct task_struct *tsk); + + +#endif /* <linux/regset.h> */ - To unsubscribe from this list: send the line "unsubscribe git-commits-head" in the body of a message to [EMAIL PROTECTED] More majordomo info at
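To make the get/set calling convention concrete, here is a small user-space mock (not kernel code — the struct names, register block, and error value are stand-ins) showing how a @get-style function copies @count bytes starting at byte offset @pos out of a register block, after checking the alignment rules described in the header:

```c
#include <assert.h>
#include <string.h>

/* Stand-in register block: 4 "registers" of sizeof(unsigned int) bytes. */
struct mock_regs { unsigned int r[4]; };

struct mock_regset {
    unsigned int n;     /* number of slots (registers) */
    unsigned int size;  /* size in bytes of one slot   */
};

/* Mimics user_regset_get_fn: copy bytes [pos, pos+count) into kbuf.
 * Returns 0 on success, -1 (standing in for -EIO) on a bad range.
 * Unlike the real interface, this mock *does* validate its arguments,
 * just to make the alignment rules visible. */
static int mock_regset_get(const struct mock_regs *regs,
                           const struct mock_regset *regset,
                           unsigned int pos, unsigned int count,
                           void *kbuf)
{
    if (pos % regset->size || count % regset->size)
        return -1;                          /* caller broke alignment rules */
    if (pos + count > regset->n * regset->size)
        return -1;                          /* out of range */
    memcpy(kbuf, (const char *)regs->r + pos, count);
    return 0;
}
```

A partial read at pos = size fetches the second register onward, which is exactly how callers such as ptrace can read a single register out of the set.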
https://www.mail-archive.com/git-commits-head@vger.kernel.org/msg35816.html
Orr: It seems only fair. After Arya’s brutally aborted reunion with her family last week, tonight we were offered a few literal and metaphorical homecomings. In descending order of satisfaction: Sam and Jon find each other once again at Castle Black; Daenerys adopts a city's worth of new dependents; Davos is accepted back into Stannis’s embrace about five minutes after the latter sentenced him to death; Jaime (or most of him) reunites with a less-than-entirely-ecstatic Cersei; and Theon (or at least his self-described "best part") is delivered to his dad in a box. Oh, and Tywin Lannister is restored to his rightful place as the man in Westeros on whose bad side you least want to be, king or no king. (Paying attention, Joffrey?) We've been here before, of course, in both Seasons 1 and 2: the fallout from the season's climactic ninth episode—in this case, the Red Wedding—competed for attention with a variety of subsidiary narratives. It's as if showrunners David Benioff and D. B. Weiss miscounted their seasonal allotment and belatedly realized that they had to cram four episodes' worth of plot down into two. That said, there was plenty to like tonight. Like the season overall, it offered glimpses of both the potential promise and the potential pitfalls of Benioff and Weiss making the story their own, rather than adhering slavishly to the blueprint provided by George R. R. Martin. Take, for example, the Tyrion-Sansa "sheep shift" scene, which is not in the books. I have my quibbles with the execution (the handling of Sansa among them). On the other hand—and in keeping with the theme of The Many Loves of Tyrion Lannister—I increasingly dislike the portrayal of proud-but-resentful Shae, whose motivations are becoming ever more far-fetched. Let's see: Accept a large bag of diamonds to restart your life as a wealthy and beautiful woman who can pick among her suitors?
Or remain a sulky, secret servant to the wife of your ex-lover (and, as such, a pretty clear candidate to be killed at any time) just because you want him to "tell you himself" that it's not working out? Perhaps it's just me, but having a gimlet-eyed, wise-to-the-world gal like Shae (also, relevantly, a career prostitute) choose Door Number Two seems a considerable reach. Elsewhere tonight—and there were a lot of elsewheres tonight—I thought that Samwell Tarly and Bran Stark's intersecting storylines provided some of the best material either has had to date (which is not, alas, saying all that much), and it was nice to see Peter Vaughan's Maester Aemon again after a long absence. Liam Cunningham, who plays Davos Seaworth, continues to make the most of pretty much every onscreen opportunity he gets, even if his Dragonstone costars (Stephen Dillane's Stannis and Carice van Houten's Melisandre) remain decidedly uneven. And the culmination of Daenerys's soft conquest of Yunkai was both a bit underwhelming—certainly relative to her incendiary capture of Astapor—and featured substantially too much crowd surfing. I give way to no one when it comes to appreciation of Rose Leslie's Ygritte, but her Angry Cupid act with Jon Snow felt a bit off to me. In the books, she puts an arrow in his leg during his direwolf-assisted escape from the wildlings, which took place last episode. Having her track him down to do the same at a later date (plus, um, where are all the other wildlings?) seemed an unnecessary indulgence, however striking she may be with a bow in her hand and tears in her eyes. Charles Dance, meanwhile, was in exquisite Tywin-form taking his family back in hand as Hand at King's Landing, and never more so than when he trained his cool, death-ray gaze on his putative ruler (and grandson twice-over) Joffrey: “The King is tired. Take him to his chambers”—where's this guy when it's bedtime at my house?
Which brings me to the interminably awaited (and widely guessed) revelation that Theon's torturer is, in fact, Ramsay Snow, the Bastard of Bolton, renowned flayer of skin and eater of pork sausages. His treatment represents perhaps the greatest departure that Benioff and Weiss have yet made in adapting Martin's novels, and I have a bone I've been waiting to pick for some time about how they went about it. (Those readers uninterested in questions about the transition from page to screen may want to skip the next few paragraphs.) In the Martin novels, we meet Ramsay in book two. A renowned rapist and murderer, he usurped the identity of his dead manservant/accomplice "Reek" in order to escape execution. In this guise, he's brought as a prisoner to Winterfell, where he ingratiates himself with Theon after the latter takes the city. It's Ramsay/Reek who, after Bran and Rickon Stark escape, persuades Theon to kill two peasant boys, mutilate their bodies, and claim that they are the Stark heirs. (In the show, this falls to a minor Ironborn character, Dagmer Cleftjaw.) Even setting aside the show's graphic (and to my mind unnecessary) depictions of Ramsay's torture of Theon—which take place "offscreen" in the books—there are plenty of narrative complaints that can be made regarding Ramsay's portrayal. Principally, his refabricating Theon as a broken creature called Reek (as we saw tonight) is vastly more interesting if you know that 1) he previously had a degenerate servant named Reek whom he sacrificed in his employ; and 2) when he initially met Theon he was pretending to be that servant. Of course, there are counterarguments, too (the Ramsay/Reek subterfuge is reasonably complicated, and Benioff and Weiss needed to cut where possible), and counterarguments to those counterarguments (if you had to cut something from Season Two, couldn't it have been from the Qarth subplot?). But my dismay at the Ramsay adjustments is less narrative than philosophical. 
In the novels, Ramsay is both the figure who tempts Theon across a moral boundary from which he can never return (the killing of the peasant boys) and the one who subsequently punishes him for his transgressions. By cutting out the first half of this equation, Benioff and Weiss have reduced Ramsay to a garden-variety psychosexual sociopath—more accomplished than, say, Joffrey, but not appreciably different in kind. In Martin's vision, by contrast, he is a chillingly literal embodiment of the Devil: He seduces you to evil and then sentences you to Hell. This is not a small alteration. All in all, I thought Season 3 was stronger than Season 2 (which was still pretty damn good), even if it failed to rise to the near-perfection of Season 1 (which was immensely aided by its relative simplicity). To my mind, the fundamental question moving forward will be how well Benioff and Weiss can pare and adapt Martin's ever-more-sprawling (and unlikely to be completed) opus, and on this score I think Season 3 offered both reasons to be optimistic, and reasons to be less optimistic. Before asking what you guys thought, though, let me say thanks to you both for doing this—it's been an utter blast—and particular thanks for putting up with my occasionally idiosyncratic obsessions. Thanks, too, to our readers, for more of the same, for your many sharp comments and close analyses on subjects from Pod's penis to Talisa's fate, and most of all for not telling Spencer that everyone was going to die in Episode 9. So, what did you guys think? Kornhaber: What do I think? I think I'm still recovering from the massive twist Benioff and Weiss just served up: having the capstone to their show's arguably best season, and the follow-up to their most shocking and eventful episode yet, be one in which the most exciting developments revolve around the opening of mail. As you point out, Chris, this Thrones finale was in full Thrones finale form, hopscotching frantically across plotlines to tie a cliffhanger-y bow on each of them.
In this case, though, that meant not much happened—games were not changed like they were when, say, Joffrey swapped Sansa for Margaery at the end of last season. Rather, they reached their long-foreseen conclusions with that series of homecomings you mention, and the not-even-shocking-to-newbie-me reveal of Torture Boy as a Bolton Man. But as an epilogue to the Red Wedding, the show's writers, interestingly, used all these plotlines to slyly confront a widespread viewer reaction to the carnage that had just unfolded. Last week, the two of you may have been slightly underwhelmed, and I may have been more-than-slightly blown away, but a lot of newcomers felt pure outrage. A common sentiment on Twitter: What kind of sicko was George R. R. Martin to groom a group of likable, noble, on-the-side-of-right protagonists only to viciously butcher them at the end of Season 3? The finale itself offers a rebuttal. “Explain to me why it is more noble to kill tens of thousands of men in battle than a dozen at dinner,” Tywin retorts. That made me pause. The thing is, in the week since “The Rains of Castamere” aired, I've found myself more fully realizing the deep and layered ways that the slaying of the Starks was, yes, ignoble. Bran's fable about the cook who angered the gods by serving up a guest highlights one of them. And yet for how wrong these murders were, in the show's universe they fit right in. Tywin's moral calculation may even have been correct. We've known this is a brutal, loser-lose-all world at least since Eddard lost his head. Why, exactly, wouldn't Tywin seize the opportunity to snuff out our putative heroes—his enemies? Why, exactly, should we be shocked by these killings above all others?
Characters as diverse as Walder Frey, Shae, and Gendry air angst about how the more-privileged think they're intrinsically more valuable. Ramsay Snow strips Theon of his lordship to remind him, and the audience, that even the seemingly anointed are “just meat.” Tywin shares the tale of sparing his disappointing son an early death, while Balon Greyjoy essentially sentences his to one. And at Dragonstone, Davos and Stannis again debate the fate of a man whose blood, according to Melisandre, matters a great deal more than most. “What is the life of one bastard boy against a kingdom?” Stannis asks. Davos's reply: “Everything.” I'd like to say that I unequivocally side with Davos against Stannis, with Tyrion against Tywin, with Yara—who pledges to rescue Theon—against Balon. But Martin, Benioff, and Weiss's great triumph is in how thoroughly they've communicated, in Davos's words, “the world has got so far bent.” By the end of Thrones’ third season, we've seen the extent to which making choices based on love, or notions of honor, or ideas of fixed morality, can make one vulnerable—and as Robb sadly learned, vulnerability can have horrifying consequences. At least at this point in the series, realpolitik seems like not only the practical philosophy, but often the more righteous one too. That could shift, though. Even though relatively little happened in this episode, the finale's many raven-delivered tidings of foreboding did impart a sense that change is coming. Thrones’ storylines to the north of the Wall and to the south of Westeros have inched closer and closer to the mainland. Jaime's back at King's Landing, having been fundamentally altered in more ways than one. And Roose Bolton may be shaping up as an even more fascinating villain than Tywin. After all, he's kind of like Tywin, but creepier: even more strategically laconic, brimming not with scorn for his lessers but rather with clear-eyed opportunism. Plus, he—or at least his son—skins people, not deer.
Here's hoping the next person this betrayer betrays is Walder Frey. Would that be the moral outcome? Who knows. Transfixing? Totally. Ross, your thoughts? Douthat: Gentlemen, it's been the greatest of pleasures, and I'm grateful that unlike poor Theon we've all come through this season in one piece. Since you both have covered the finale's details so rigorously, I thought I'd use my last raven-carried missive to step back and ask: What is this tangled, bloody story actually about? The strongest critique of Game of Thrones is that having peeled away some of the oversimplified good-versus-evil mythos from the fantasy genre and substituted a darker realism instead, its story ends up just going in grisly, increasingly depressing circles: The strong devour the weak and are devoured in their turn, the surprise of seeing characters we love perish becomes a kind of predictable narrative crutch in its own right, and each new installment brings nothing save (as one critic put it, in a review with some spoilers for non-book-readers) “a repetition of intrigue, treachery, and the stab in the back.” Of course, the story does actually promise to have some sort of endpoint, even if neither the novels nor the show have reached it yet—the moment when the “real war,” as Melisandre put it last night, will finally be joined, and some sort of Tolkienesque clash between White Walkers and dragons and whomever else will eclipse the bloody power plays of Lannisters and Boltons and Tyrells. But I don't think I'm alone in doubting that either Martin or Benioff and Weiss regard the story of warring nobles the same way Melisandre does—as just a long digressive prelude to the real magical action that will kick off whenever the Wall is finally breached. So far the story has offered multiple models of how would-be rulers might approach that task, and all of them have proven insufficient.
We were conditioned to root for Ned Stark and then his eldest son, both the embodiments of aristocratic honor, but the events of the show have provided a clinic in why honor without wisdom, guile, and cunning is just a sure path to the grave. The Stark lords were basically innocents abroad, and their guilelessness delivered the kingdom into the hands of the Lannisters, who began as cardboard villains but are now much more interesting because we understand their choices better: Their (literally) incestuous approach to politics is less high-minded than the Winterfell Way, but infinitely more effective. But the Lannister Way, too, has its limits, as Tyrion and Cersei's conversation this week suggests: If you treat everyone outside the family as an enemy, you'll eventually have to kill them all, and if you decide that the ends always justify the means you'll end up empowering true savages (like the Boltons, fils and pere) and you'll find yourself vulnerable to the wiles of men who don't even have families to feel loyalty toward (like Littlefinger, now offstage but no doubt still plotting his ascent). So there has to be an alternative. Is it the Tyrell Way, which involves buying the people's loyalty with bread and circuses and beauty? Varys's attempts to serve the common good from the shadows? Stannis's constant seesawing between Melisandre and Davos's competing views of political morality? 
More likely, the alternative is what Jon Snow, Tyrion and Daenerys—our three true protagonists, I think—are all groping toward: Some kind of balancing of honor and morality with ruthlessness and guile, in which vows are betrayed provisionally (as Jon did with Ygritte) in order that they might ultimately be fulfilled, the interests of family are served by serving the interests of the realm (as Tyrion did last season, and as Varys hopes he'll have a chance to do again), and the desires of ordinary people are served but also harnessed (as Daenerys is trying to do with the ex-slaves of Yunkai and Astapor) to the sovereign's political purposes. But none of them are on the throne just yet. And that last shot of a crowdsurfing Daenerys, now the “mother” of a hundred thousand slaves, was ecstatic but also appropriately ambiguous: an image of her growing power, but also a reminder of how power can seduce and corrupt long before true statesmanship is learned.
https://www.theatlantic.com/entertainment/archive/2013/06/-i-game-of-thrones-i-hectic-morally-complex-crowdsurfing-season-finale/276696/
v.Audio.Dispose(); v.Dispose();

The code:

    public Audio Audio
    {
        get
        {
            Audio tempAudio;
            try { tempAudio = new Audio(this); }
            catch (Exception) { tempAudio = null; }
            return tempAudio;
        }
    }

breaks the design rule that a property shouldn't return a new instance of a class. Sounds like the developers weren't following .NET guidelines…

Hi there, Thanks for an excellent article. Granted it's reasonably simple, but it's exactly what I'm looking for at the moment, and I have not found many other good resources with examples to play video files; those that did complained about bugs with memory allocation that you seem to have nailed down. In particular, though, I liked the idea of browsing the MSIL code to see what's happening. Having coded in assembler many years ago, I should be able to find my way around, so it's an excellent tip, which I don't think I would have thought of straight away. Thanks, Adrian.

How did you determine that resources were not being freed up?

1) Even after running the GC, waiting for pending finalizers, and running the GC again, the hundreds of megabytes of memory consumed by my application never decreased. After loading a few videos, playing them, and "releasing" them, my process was still consuming close to half a gig of virtual memory.

2) My MPEG2 decoder is configured to display an icon in the systray when active, and each instance of the decoder displays its own icon. So, for example, after playing three videos, three decoder icons still remained.

Hi Stephan, Thanks for your article.
In your code sample you change the volume of the video. I'm running into the following problem using the Audio object: the video events are not fired anymore after accessing the video.Audio object.

    oVideo = new Video(ofdOpen.FileName);
    // Adding the event handlers
    oVideo.Ending += new System.EventHandler(this.ClipEnded);
    oVideo.Starting += new System.EventHandler(this.ClipStarting);
    oVideo.Pausing += new System.EventHandler(this.ClipPausing);
    oVideo.Stopping += new System.EventHandler(this.ClipStopping);
    oVideo.Owner = this;
    // Start playing now
    oVideo.Play();

This works fine and the events are fired correctly. But after accessing the audio object this way:

    MessageBox.Show(oVideo.Audio.Volume.ToString());

(the result is 0 and not correct, because a new instance of Audio was created) or after accessing the audio object this way:

    oAudio = oVideo.Audio;
    MessageBox.Show(oAudio.Volume.ToString());

the events are not fired anymore. Do you know why?

Hi Achim… I had the same problems… Maybe you could find an answer here:

Hi! It was a good discussion. Could you let me know how to merge two video files, or merge an image to the end of a video? Thanks.

Hi there. This is a great article about AudioVideoPlayback. When I read what the problem was, I literally wanted to bang my head against the wall. And there are other problems with the Video class: Video.RenderToTexture() ruins your original rendering loop.

Achim, I think I have a solution to the event-firing problem: after a = v.Audio you should set the Ending event the same as the v.Ending event. I did it like this:

    aud = m_video.Audio;
    aud.Volume = GeneralVolume;
    aud.Ending += new EventHandler(m_video_Ending);
    m_video.Owner = owner;
    m_video.Ending += new EventHandler(m_video_Ending);

And the events fire.

Oh sh..! After 70000 consecutive video loads my app used 200 MB more than normally. Even if we work around the Audio property, we can't avoid a minor memory leak. How come Microsoft won't recompile this DLL with the correct code???
They would save us from a lot of headache.

How can I insert an "audio level meter" by using Managed DirectX? Thanks.

This also solved my problem where the video file was locked (could not be deleted) until the application was terminated.

Hi, I'm French, so sorry for my English. I've tried to do the same, but it doesn't work. Can you send the DLL by email to hager.jc@gmail.com? Thanks.

Nice article, but it doesn't work on my project. I'm working with VB.NET and I do:

    If (_video IsNot Nothing) Then
        _video.Stop()
        Dim a As Audio = _video.Audio
        a.Dispose()
        a = Nothing
        Dim v As Video = _video
        v.Dispose()
        v = Nothing
    End If

but memory is still allocated: the FFDSHOW tray icons don't disappear (neither audio nor video) and memory usage doesn't decrease. Thanks for your help.

Don't know if it fits here, but calling Dispose is totally wrong because the GC does that automatically. If you look for the amount of used memory, first look for allocated memory, then used memory. It's a big difference.

I have done the above mentioned method, which was:

    Audio a = v.Audio;
    // … do whatever I need to do with a instead of v.Audio …
    a.Dispose();
    v.Dispose();

but the problem remains. Any other solution to this problem?

I don't really understand the Volume function… It seems it works somewhere between -10000 and -500? Can someone tell me the exact range of that function? (I'm sorry for the bad English and noob question.) Regards, Tom

Do you have an idea why the .Ending and .Pausing events don't fire? Simple example:

    using System;
    using Microsoft.DirectX.AudioVideoPlayback;

    namespace Test
    {
        class Program
        {
            static Audio audioObject;

            static void Main(string[] args)
            {
                audioObject = new Audio(@"C:\test.mp3");
                audioObject.Starting += new EventHandler(audioObject_Pausing);
                audioObject.Play();
                System.Threading.Thread.Sleep(1000);
                audioObject.Pause();
                System.Threading.Thread.Sleep(1000);
                audioObject.Play();
            }

            static void audioObject_Pausing(object sender, EventArgs args)
            {
                Console.WriteLine("Pause Event!");
            }
        }
    }

Thanks for posting this.
Unfortunately I did the same digging you did and discovered the same problem. And THEN I decided to ask Google. Sigh. One of these days I'll learn 😉 BTW, this is one of my favorite namespaces!

Why can't I use: using Microsoft.DirectX.AudioVideoPlayback; ? I get a red line under DirectX.

Thanks for your posting! It cleared up my terrible problem.

Thanks for this article, it was really helpful :). Now, some answers to the other comments.

Events of the video object aren't fired if you set the video's volume using the Audio.Volume property. Often it's needed to use it, though, so you have to create solutions that don't rely on those events.

From what I've found, the range of the Volume is [-10000; 0]. However, I used -6000 as a lower bound and it's still silence on my machine. Let me know if you hear something with Volume set to -6000.

"…. …" As to the issue with C#, I'm not sure what you are talking about. You should try creating a new Video object each time a new video is loaded. When you create a Video, the Audio object associated with it will be initialized as well. I used the described solution in a C# application, and after loading several videos the memory usage doesn't increase noticeably (it did before applying the fix described).

"… Why can't I use: using Microsoft.DirectX.AudioVideoPlayback; ? I get a red line under DirectX… …" Make sure that you've added a reference to Microsoft.DirectX.AudioVideoPlayback in the project. To add a reference you have to right-click on the project in the Solution Explorer, choose Add Reference…, and choose the reference you want to add from the list (.NET tab). You may also need to download the DirectX SDK to make it work.

I experienced this problem when I set the Audio.Volume of the video; if you don't touch Audio.Volume, the video Dispose works fine…
https://blogs.msdn.microsoft.com/toub/2004/04/16/fun-with-audiovideoplayback/
TinyGPS

A Compact Arduino GPS/NMEA Parser

TinyGPS is designed to provide most of the NMEA GPS functionality I imagine an Arduino user would want – position, date, time, altitude, speed and course – without the large size that seems to accompany similar bodies of code. To keep resource consumption low, the library avoids any mandatory floating point dependency and ignores all but a few key GPS fields.

Usage

To use, simply create an instance of an object like this:

#include "TinyGPS.h"
TinyGPS gps;

Feed the object serial NMEA data one character at a time using the encode() method. (TinyGPS does not handle retrieving serial data from a GPS unit.) When encode() returns "true", a valid sentence has just changed the TinyGPS object's internal state. For example:

#include <SoftwareSerial.h>

#define RXPIN 3
#define TXPIN 2
SoftwareSerial nss(RXPIN, TXPIN);

void loop() {
    while (nss.available()) {
        int c = nss.read();
        if (gps.encode(c)) {
            // process new gps info here
        }
    }
}

You can then query the object to get various tidbits of data. To test whether the data returned is stale, examine the (optional) parameter "fix_age" which returns the number of milliseconds since the data was encoded.

long lat, lon;
unsigned long fix_age, time, date, speed, course;
unsigned long chars;
unsigned short sentences, failed_checksum;

// retrieves +/- lat/long in 100000ths of a degree
gps.get_position(&lat, &lon, &fix_age);

// time in hhmmsscc, date in ddmmyy
gps.get_datetime(&date, &time, &fix_age);

// returns speed in 100ths of a knot
speed = gps.speed();

// course in 100ths of a degree
course = gps.course();

Statistics

The stats method provides a clue whether you are getting good data or not. It provides statistics that help with troubleshooting.
// statistics
gps.stats(&chars, &sentences, &failed_checksum);

- chars – the number of characters fed to the object
- sentences – the number of valid $GPGGA and $GPRMC sentences processed
- failed_checksum – the number of sentences that failed the checksum test

Integral values

Values returned by the core TinyGPS methods are integral. Angular latitude and longitude measurements, for example, are provided in units of millionths of a degree, so instead of 90°30'00", get_position() returns a longitude value of 90,500,000, or 90.5 degrees.

But… Using Floating Point

…for applications which are not resource constrained, it may be more convenient to use floating-point numbers. For these, TinyGPS offers several inline functions that return more easily-managed data. Don't use these unless you can afford to link the floating-point libraries. Doing so may add 2000 or more bytes to the size of your application.

float flat, flon;

// returns +/- latitude/longitude in degrees
gps.f_get_position(&flat, &flon, &fix_age);

float falt = gps.f_altitude();    // +/- altitude in meters
float fc = gps.f_course();        // course in degrees
float fk = gps.f_speed_knots();   // speed in knots
float fmph = gps.f_speed_mph();   // speed in miles/hr
float fmps = gps.f_speed_mps();   // speed in m/sec
float fkmph = gps.f_speed_kmph(); // speed in km/hr

Date/time cracking

For more convenient access to date/time use this:

int year;
byte month, day, hour, minute, second, hundredths;
unsigned long fix_age;

gps.crack_datetime(&year, &month, &day, &hour, &minute, &second, &hundredths, &fix_age);

Establishing a fix

TinyGPS objects depend on an external source, i.e. its host program, to feed valid and up-to-date NMEA GPS data. This is the only way to make sure that TinyGPS's notion of the "fix" is current. Three things must happen to get valid position and time/date:

- You must feed the object serial NMEA data.
- The NMEA sentences must pass the checksum test.
- The NMEA sentences must report valid data.
If the $GPRMC sentence reports a validity of "V" (void) instead of "A" (active), or if the $GPGGA sentence reports fix type "0" (no fix), then those sentences are discarded. To test whether the TinyGPS object contains valid fix data, pass the address of an unsigned long variable for the "fix_age" parameter in the methods that support it. If the returned value is TinyGPS::GPS_INVALID_AGE, then you know the object has never received a valid fix. If not, then fix_age is the number of milliseconds since the last valid fix. If you are "feeding" the object regularly, fix_age should probably never get much over 1000. If fix_age starts getting large, that may be a sign that you once had a fix, but have lost it.

float flat, flon;
unsigned long fix_age;

// returns +- latitude/longitude in degrees
gps.f_get_position(&flat, &flon, &fix_age);

if (fix_age == TinyGPS::GPS_INVALID_AGE)
    Serial.println("No fix detected");
else if (fix_age > 5000)
    Serial.println("Warning: possible stale data!");
else
    Serial.println("Data is current.");

Interfacing with Serial GPS

To get valid and timely GPS fixes, you must provide a reliable NMEA sentence feed. If your NMEA data is coming from a serial GPS unit, connect it to Arduino's hardware serial port, or, if using a "soft" serial port, make sure that you are using a reliable SoftSerial library. As of this writing (Arduino 0013), the SoftwareSerial library provided with the IDE is inadequate. It's best to use my NewSoftSerial library, which builds upon the fine work ladyada did with the AFSoftSerial library.

Library Version

You can retrieve the version of the TinyGPS library by calling the static member library_version().

int ver = TinyGPS::library_version();

Resource Consumption

Linking the TinyGPS library to your application adds approximately 2500 bytes to its size, unless you are invoking any of the f_* methods. These require the floating point libraries, which might add another 600+ bytes.
Download

The latest version of TinyGPS is available here: TinyGPS13.zip

Change Log

- initial version
- << streaming, supports $GPGGA for altitude, floating point inline functions
- also extract lat/long/time from $GPGGA for compatibility with devices with no $GPRMC
- bug fixes
- API re-org, attach separate fix_age's to date/time and position.
- Prefer encode() over operator<<. Encode() returns boolean indicating whether TinyGPS object has changed state.
- Changed examples to use NewSoftSerial in lieu of AFSoftSerial; rearranged the distribution package.
- Greater precision in latitude and longitude. Angles measured in 10^-5 degrees instead of 10^-4 as previously. Some constants redefined.
- Minor bug fix release: the fix_age parameter of get_datetime() was not being set correctly.
- Added Maarten Lamers' distance_to() as a static function.
- Arduino 1.0 compatibility
- Added satellites(), hdop(), course_to(), and cardinal()
- Improved precision in latitude and longitude rendering. get_position() now returns angles in millionths of a degree.

Acknowledgements

Many thanks to Arduino forum users mem and Brad Burleson for outstanding help in alpha testing this code. Thanks also to Maarten Lamers, who wrote the wiring library that originally gave me the idea of how to organize TinyGPS. Thanks also to Dan P. for suggesting that I increase the lat/long precision in version 8. Thanks to many people who suggested new useful features for TinyGPS, especially Matt Monson, who wrote some nice sample code to do so. All input is appreciated.

Mikal Hart
http://arduiniana.org/libraries/tinygps/comment-page-12/
Hibernate vs. Rails: The Persistence Showdown

The wires were all abuzz about Rails... Much like a few other Java folks, such as Bruce Tate and David Geary, I have been taking a look at a new web framework, Rails. Of particular interest to me is its ORM, a technology area I'm very familiar with.

Back Story

One of the hotter topics on the minds of developers of late has been the rise in discussion about a new framework, Ruby on Rails. Rails is an MVC web framework, conceptually similar to Struts or WebWork, but written in the scripting language Ruby instead of Java. Beyond just a web framework, it offers a number of integrated technologies, like built-in code generation and its own ORM (Object Relational Mapping) tool, the ActiveRecord. By combining a number of tools into a single elegant integrated framework, it has led to claims that Rails development is "at least ten times faster with Rails than you could with a typical Java framework". Now the claims and counterclaims might lead one to believe that Rails is either the reincarnated Savior itself or, alternatively, a foul creature from the 66th layer of the Abyss. Which one you believe probably depends on whether your favorite programming language starts with a J or an R.

What this isn't...

This is not a "Hibernate is l33t, Rails suxxor" article. Rails appears to have some interesting concepts, but I'm very suspicious when people start throwing around 10x productivity figures. After a first pass, the framework appears to be very fast out of the gate, especially if you are building a pure CRUD-model web database application. However, it's hard to see whether the development velocity remains that high, though.

Overview

Based on the research I've done so far, Rails seems very grounded in a single table/single object mindset. This makes it very well suited for simple models, with a few associations.
The assumptions it makes about the database are reasonable and simple to deal with for a green-field project. Hibernate shines as the object model gets more complicated, or against existing databases. Having been around a while longer, it is more mature and has a lot more features. But let's get into the weeds and look at some of the differences.

Basic Architecture Differences

The core difference between Rails' ActiveRecord and Hibernate is the architectural patterns the two are based off of. Rails, obviously, is using the ActiveRecord pattern, whereas Hibernate uses the Data Mapper/Identity Map/Unit of Work patterns. Just knowing these two facts gives us some insight into potential differences.

Active Record Pattern

ActiveRecord is "an object that wraps a row in a database table or view, encapsulates database access and adds domain logic on that data" [Fowler, 2003]. This means the ActiveRecord has "class" methods for finding instances, and each instance is responsible for saving, updating and deleting itself in the database. It's pretty well suited for simpler domain models, those where the tables closely resemble the domain model. It is also generally simpler than the more powerful, but complex, Data Mapper pattern.

Data Mapper Pattern

The Data Mapper is "a layer of mappers that moves data between objects and a database while keeping them independent of each other and the mapper itself" [Fowler, 2003]. It moves the responsibility of persistence out of the domain object, and generally uses an Identity Map to maintain the relationship between the domain objects and the database. In addition, it often uses a Unit of Work (as Hibernate's Session does) to keep track of objects which have changed and make sure they persist correctly.

General Pattern Implications

So that covers the basic differences, the general implications of which should be fairly obvious.
The ActiveRecord (Rails) will likely be easier to understand and work with, but past a certain point more advanced/complex usages will likely be difficult or just not possible. The question, of course, is when or if many projects will cross this line. Let's look at some specifics of the two frameworks. To illustrate these differences, we will be using code from my "Project Deadwood" sample app. (Guess what I've been watching lately.) :)

The Value of Explicitness

So what is explicitness worth? One of the key "features" of Ruby's ActiveRecord is that you don't need to specify the fields on the class; rather, it dynamically determines its fields based on the columns in the database. So if you had a table "miners" that looked like this:

create table miners (
    id BIGINT NOT NULL AUTO_INCREMENT,
    first_name VARCHAR(255),
    last_name VARCHAR(255),
    primary key (id)
)

the ActiveRecord class and its usage look like this:

class Miner < ActiveRecord::Base
end

miner.first_name = "Brom"

On the other hand, your Hibernate class (Miner.java), which specifies the fields, getters/setters and XDoclet tags, looks like so:

package deadwood;

/**
 * @hibernate.class table="miners"
 */
public class Miner {
    private Long id;
    private String firstName;
    private String lastName;

    /**
     * @hibernate.id generator-class="native"
     */
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    /**
     * @hibernate.property column="first_name"
     */
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    /**
     * @hibernate.property column="last_name"
     */
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}

miner.setFirstName("Brom");

Now, as I have mentioned before, both Rails and Hibernate really only need to specify the field name once. Hibernate details are in the code, ActiveRecord details are in the database. So when you are browsing through your Ruby code and you see this...
class Miner < ActiveRecord::Base
  belongs_to :gold_claim
end

Not much real difference here; both do functionally the same thing.
Hibernate uses explicit mapping to specify the foreign key column, as well as the cascade behavior, which we will talk about next. Saving a Miner will save its associated GoldClaim, but updates and deletes to it won't affect the associated object.

Transitive Persistence

Non-demo applications tend to work with big graphs of connected objects. It's important for an ORM solution to provide a way to detect and cascade changes from in-memory objects to the database, without the need to manually save() each one. Hibernate features a flexible and powerful version of this via declarative cascading persistence. Rails seems to offer a limited version of this, based on the type of association. For example, Rails seems to emulate Hibernate's cascade="save" behavior for the belongs_to association by default, as the following Rails listing demonstrates...

miner = Miner.new("name" => "Brom Garrott")
miner.gold_claim = GoldClaim.new("name" => "Western Slope")
miner.save     # This saves both the Miner and GoldClaim objects
miner.destroy  # Deletes only the miner row from the database

Deleting

In Hibernate, a cascade="all" setting on the GoldClaim association makes deletes cascade from the Miner to its GoldClaim as well:

Miner miner = new Miner();
miner.setGoldClaim(new GoldClaim());
session.save(miner);   // Saves Miner and GoldClaim objects.
session.delete(miner); // Deletes both of them.

Updating

Updating a Miner's GoldClaim through the belongs_to association does not cascade in Rails:

miner = Miner.find(@params['id'])
miner.gold_claim.name = "Eastern Slope"
miner.save

This does not update the gold_claim.name. From the opposite direction (has_one), this does work...

class GoldClaim < ActiveRecord::Base
  has_one :miner
end

claim = GoldClaim.find(@params['id'])
claim.miner.name = "Seth Bullock"
claim.save # Saves the miner's name

By using cascade="save-update", you could get this behavior on any association, regardless of which table the foreign key lives in. Hibernate doesn't base the transitive persistence behavior off the relationship type, but rather the cascade style, which is much more fine grained and powerful.
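To make the cascade behavior concrete, here is a small self-contained Ruby sketch — not ActiveRecord or Hibernate code; the ToyRecord classes and the saved flag are invented for illustration — of a parent save propagating to its child, which is what Rails gives belongs_to by default and what Hibernate's cascade attribute controls explicitly:

```ruby
# Toy in-memory stand-in for a persistent record. The `saved` flag
# plays the role of a row actually being written to the database.
class ToyRecord
  attr_accessor :saved
  def initialize
    @saved = false
  end
end

class GoldClaim < ToyRecord
end

class Miner < ToyRecord
  attr_accessor :gold_claim
  def save
    @saved = true
    # Cascade the save to the associated object, the way Rails saves a
    # newly built belongs_to association along with its owner.
    gold_claim.saved = true if gold_claim
  end
end

miner = Miner.new
miner.gold_claim = GoldClaim.new
miner.save
puts miner.saved             # => true
puts miner.gold_claim.saved  # => true
```

Running it prints true twice: saving the miner marked the associated claim as saved too, with no explicit claim.save. A non-cascading setup would leave the second flag false until the claim was saved on its own.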
Next, let's look at how each framework finds the objects we have persisted.

Query Languages

While there have been a number of similarities to this point between the two frameworks, when the topic turns to query languages, capabilities and usage start to differ rapidly. Rails essentially uses SQL, the well-known standard for getting data in and out of databases. In addition, via the use of dynamic finder methods, it has what I think of as its own 'mini' language which lets developers write simplified queries by basically inventing methods. But ultimately everything is expressed in terms of tables and columns. On the other hand, Hibernate has its own object-oriented query language (Hibernate Query Language - HQL), which is deliberately very similar to SQL. How it differs is that it lets developers express their queries in terms of objects and fields instead of tables and columns. Hibernate translates the query into SQL optimized for your particular database. Obviously, inventing a new query language is a very substantial task, but the expressiveness and power of it is one of Hibernate's selling points. Now let's take a look at some samples of each of them.

Rails Insta-Finders

For simple queries, like 'find by property x and y', Rails lets you add dynamic finder methods which it will translate into SQL for you. Suppose, for example, we want to find Miners based on first name and last name; you would write something like this.

@miners = Miner.find_by_first_name_and_last_name("Elma", "Garrott")

Finders can also take a where clause fragment and other options:

# Returns only the first record
@miner = Miner.find_first("first_name = ?", "Elma")

# Finds up to 10 miners older than 30, ordered by age.
@miners = Miner.find_all ["age > ?", 30], "age ASC", 10

# Like find all, but need complete SQL
@minersWithSqA = Miner.find_by_sql [
  "SELECT m.*, g.square_area FROM gold_claims g, miners m " +
  " WHERE g.square_area = ? and m.gold_claim_id = g.id", 1000]

The big thing to realize is that since Rails classes have dynamic fields, all columns returned by the result set are smashed onto the Miner object. In the last query, the Miner gets a square_area field that it doesn't normally get. This means the view might have to change, like so...

# Normal association traversing
<%= miner.gold_claim.square_area %>

# Altered query for @minersWithSqA
<%= miner.square_area %>

Querying for Objects with HQL

As mentioned before, being able to express queries in terms of objects and fields is really powerful. While simple queries are definitely easier with Rails, when you have to start navigating across objects with SQL, HQL can be a very convenient alternative. Let's take a look at our sample queries in HQL.

// Find first Miner by name
Query q = session.createQuery("from Miner m where m.firstName = :name");
q.setParameter("name", "Elma");
Miner m = (Miner) q.setMaxResults(1).uniqueResult();

// Finds up to 10 miners older than 30, ordered by age.
Integer age = new Integer(30);
Query q = session.createQuery(
    "from Miner m where m.age > :age order by age asc");
List miners = q.setParameter("age", age).setMaxResults(10).list();

// Similar to the join query above, but no need to manually join
Query q = session.createQuery(
    "from Miner m where m.goldClaim.squareArea = :area");
List minersWithSqA = q.setParameter("area", new Integer(1000)).list();

Since HQL queries return real Miner objects, the view can traverse the association normally:

${miner.goldClaim.squareArea} <%-- Traverse fields normally --%>

Having covered some of the basics of fetching objects, let's turn our attention to how we can make fetching objects fast. The next section covers the means by which we can tune the performance.

Performance Tuning

Beyond just mapping objects to tables, robust ORM solutions need to provide ways to tune the performance of the queries. One of the risks of working with ORMs is that you often pull back too much data from the database.
This tends to happen because it is very easy to pull back several thousand rows, with multiple SQL queries, from a simple statement like "from Miner". Common ORM strategies for dealing with this include lazy fetching, outer join fetching and caching.

Rails is Very, Very Lazy

What I mean by lazy is that when you fetch an object, the ORM tool doesn't fetch data from other tables until you request the association. This prevents loading too much unneeded data. Both Rails and Hibernate support lazy loading associations, but Hibernate allows you to choose which associations are lazy. For example, here's how it works with Rails...

@miner = Miner.find(1)       # select * from miners where id = 1
@claim = @miner.gold_claim   # select * from gold_claims where id = 1

This leads us to one of the great fallacies of ORM: that lazy loading is always good. In reality, lazy loading is only good if you didn't need the data. Otherwise, you are doing with 2-1000+ queries what you could have done with one. This is the dreaded N+1 select problem, where getting all the objects requires N selects + 1 original select. This problem gets much worse when you deal with collections.

Outer Joins and Explicit Fetching

Generally, one of the best ways to improve performance is to limit the number of trips to the database. Better one big query than a few small ones. Hibernate has a number of ways it handles the N+1 issue. Associations can be explicitly flagged for outer join fetching (via outer-join="true"), and you can add outer join fetching to HQL statements. For example...

/**
 * @hibernate.many-to-one column="gold_claim_id"
 *     cascade="save-update" outer-join="true"
 */
public GoldClaim getGoldClaim() {
    return goldClaim;
}

// This does one select and fetches both the Miner and GoldClaim
// and maps them correctly.
Miner m = (Miner) session.load(Miner.class, new Long(1));

In addition, when selecting lists or dealing with collection associations, you can use an explicit outer join fetch, like so...
// Issues a single select, instead of 1 + N (where N is the # of miners)
List list = session.find("from Miner m left join fetch m.goldClaim");

The performance savings from this can be very significant. On the other hand, Rails suffers badly from N+1 issues, and has limited means to solve the problem other than writing explicit SQL joins, referred to as piggy-back queries. The trouble is that because Rails maps all the result-set fields onto the Miner object, you lose the association objects, meaning you need to alter your views and how you work with the domain model. Also, the query is fairly complicated, particularly if there is more than one association to be fetched. The @minersWithSqA query we did above is an example of a piggy-back query. In addition, all the additional fields are strings, losing their original types. Queries get progressively worse as you add more associations.

Caching

While object caching isn't always going to be helpful or a performance silver bullet, Hibernate has a huge potential advantage here. It provides several levels of caching, including a session (Unit of Work) level as well as an optional second-level cache. You always use the first-level cache, as it prevents circular references and multiple trips to the database for the same object. Using a second-level cache can allow much of the database state to stay resident in memory. This is especially useful for frequently read and reference data. Rails essentially has no options for caching at the database level. (Though it does support caching for the web tier.)

Conclusion

While this is by no means a complete coverage, we have looked at some of the high-level differences between the two frameworks. Hopefully you should now have a basic understanding of what the opportunity costs of either framework are. I have covered the basic architectural patterns which underlie both Rails and Hibernate, as well as how explicitness applies to both frameworks' basic persistent classes.
For associations, there are quite a few more mappings that are possible with an ORM, but that covers the basics that most developers use.

References

- The Art of .war - Patrick's Weblog
- Hibernate - Project Homepage
- Ruby on Rails - Project Homepage
- Patterns of Enterprise Application Architecture - Martin Fowler
- Getting Rolling with Rails - Curt Hibbs

About the Author

Patrick Peak is co-author of Hibernate Quickly. Though he feels very weird about referring to himself in the third person, Patrick Peak is currently the Chief Technology Officer of Browsermedia, a Web Development/Design firm in Bethesda, MD. His focus is on using open source as a building block for rapid web software development.
http://www.theserverside.com/news/1364757/Hibernate-vs-Rails-The-Persistence-Showdown
Terminal

In order to begin practicing using ObjectScript, you need to start the Terminal. Click the InterSystems IRIS launcher in the task bar and select Terminal from the menu, and authenticate if necessary (try SuperUser/SYS for Username/Password). This brings up the Terminal window, and you can see from the prompt that you are in the USER namespace. Namespaces are logical directories within InterSystems IRIS, containing code and globals (data). The Terminal allows you to type one or more ObjectScript commands on a single line (the Terminal does not support multiline input) and see the results immediately.

In this tutorial, most pages will contain one or two simulated Terminal sessions like the one below demonstrating several examples mentioned in the text. Using VS Code - ObjectScript, you can create your own class definition, copy and paste any example code into a method, and compile the class. Then you can run the examples yourself from the Terminal.

When you first start the Terminal, you will be in the USER namespace. There are several ways to change to another namespace. You can use do ^%CD, set $namespace = "namespace", or znspace "namespace" (usually abbreviated zn "namespace"). During the tutorial, you'll work in the USER namespace.

USER>do ^%CD
Namespace: %SYS
You're in namespace %SYS
Default directory is c:\InterSystems\IRIS\mgr\

%SYS>set $namespace = "USER"

USER>zn "%SYS"

%SYS>do ^%CD
Namespace: ?
'?' for help.
'@' (at-sign) to edit the default, the last namespace name attempted.
Edit the line just as if it were a line of code.
<RETURN> will leave you in the current namespace.
Here are the defined namespaces:
%SYS
USER
Namespace: USER
You're in namespace USER
Default directory is c:\InterSystems\IRIS\mgr\user\

USER>
https://docs.intersystems.com/healthconnectlatest/csp/docbook/Doc.View.cls?KEY=TOS_Terminal
Hi, I have a programming project in which I have to create a game where the user can select the upper bound of numbers and difficulty they want and then guess the secret number within that bound and difficulty. So far, this is my code. It compiles and allows the user to select the upper bound. However, I am having trouble adding difficulty levels to my game. The difficulty levels limit the number of guesses the user has: easy giving 20 guesses, medium giving 10, hard giving 5. The user must be able to choose the difficulty as a command line parameter. Thus if the program is run as "./guess -d easy" it will play the easy mode. Does anyone have any ideas on how I should do this? I tried doing an if statement with: if (argc=='easy') but I don't know where to go from there. or a for loop with: for (int guesses=10; guesses <=10; guesses--) but it doesn't work. #include <iostream> #include <time.h> #include <cmath> using namespace std; int main(int argc, char **argv) { cin >> argc; if (argc== argc) do { int secret_number; int guesses; int user_guess; srand ( time(NULL) ); int upper_bound=argc; secret_number= rand () % upper_bound; cout << secret_number; do { cout << "I'm thinking of a number between 0 and " << upper_bound <<". Can you guess it?" << endl; cin >> user_guess; if (user_guess>secret_number) cout << "I'm thinking of a smaller number. Please try again" << endl; else if (user_guess<secret_number) cout << "I'm thinking of a larger number. Please try again" << endl; } while (user_guess!=secret_number); cout << "Correct! You guessed the secret number!" << endl; return 0; } while (argc==argc); } Thanks
https://www.daniweb.com/programming/software-development/threads/342385/need-help-with-guessing-game-code
CC-MAIN-2017-26
refinedweb
272
75.3
I've searched and read some of the topics. I've read this. and it's still not working. I can't seem to split up my program into 3 pieces. I've managed to split the program into 2, seperating the header and the class functions/main. But once I seperate main from the class functions, it says Linker Error. I've just about given up. I'm wondering if it just my problem. Is someone could please help, that would great. Here is my simple program (there's no comments). //Bank.h Code:#ifndef BANK_H #define BANK_H class Bank { public: Bank() { balance = 1000; transactions = 0; interest = .15;} Bank(float itsBalance) { balance = itsBalance; transactions = 0; interest = .15;} ~Bank() {std::cout << "Destructor called";} void deposit(float depAmount) { balance += depAmount; transactions++; } void withdraw(float wdrawAmount) { balance -= wdrawAmount; transactions++; } float getInterest() { return interest * balance; } float getBalance() { return balance; } int getTransactions() { return transactions; } void displayMenu(); private: float balance; int transactions; float interest; }; #endif // Bank.cpp // main.cpp// main.cppCode:#include <iostream> #include <conio.h> #include "Bank.h" using namespace std; void Bank::displayMenu() { cout << "\n\n1: View Balance\n" << "2: Deposit money\n" << "3: Withdraw\n" << "4: View Interest\n" << "5: View Transactions\n" << "6: Exit\n\n"; } Code:#include <iostream> #include <conio.h> #include "Bank.h" using namespace std; int main() { Bank account(5000); int choice, amount; while(1) { account.displayMenu(); cin >> choice; switch(choice) { case 1: cout << "You're balance is $" << account.getBalance(); break; case 2: cout << "How much money would you like to deposit?\n"; cin >> amount; account.deposit(amount); break; case 3: cout << "How much money would you like to withdraw?\n"; cin >> amount; if(amount > account.getBalance()) { cout << "Sorry, you don't have enough money in the bank\n"; break;} account.withdraw(amount); break; case 4: cout << 
"Your interest is $" << account.getInterest() << endl; break; case 5: cout << "You've made " << account.getTransactions() << " transactions.\n"; break; case 6: exit(0); default: cout << "Please enter a valid number!"; } }// while (choice !=6); getch(); return 0; }
http://cboard.cprogramming.com/cplusplus-programming/55777-multiple-source-files-one-program.html
CC-MAIN-2016-07
refinedweb
334
61.33
IRC log of html-wg on 2008-02-21 Timestamps are in UTC. 00:02:24 [mjs] mjs has joined #html-wg 00:08:20 [mjs] mjs has joined #html-wg 00:40:14 [jgraham__] jgraham__ has joined #html-wg 00:41:54 [jgraham] jgraham has joined #html-wg 00:58:54 [olivier] olivier has joined #html-wg 01:14:24 [Lachy] Lachy has joined #html-wg 01:15:13 [jgraham_] jgraham_ has joined #html-wg 01:16:16 [MikeSmith] MikeSmith has joined #html-wg 01:16:52 [jgraham] jgraham has joined #html-wg 01:34:04 [DanC_lap] DanC_lap has joined #html-wg 01:36:22 [hyatt_] hyatt_ has joined #html-wg 01:41:13 [olivier] olivier has joined #html-wg 01:49:51 [jgraham__] jgraham__ has joined #html-wg 01:50:00 [hyatt] hyatt has joined #html-wg 01:51:23 [jgraham] jgraham has joined #html-wg 02:12:39 [Thezilch] Thezilch has joined #html-wg 02:31:57 [RelDrgn] RelDrgn has joined #html-wg 02:37:18 [RelDrgn] hm, I predict that this is probably a wholly inappropriate place to mention this, but the last revision of Overview.html truncated the file in the middle (Changes since 1.425: +1 -30610 lines) 02:40:20 [RelDrgn] (referring to ) 02:40:23 [RelDrgn] RelDrgn has left #html-wg 03:14:22 [hyatt] hyatt has joined #html-wg 04:00:46 [mjs_] mjs_ has joined #html-wg 04:40:36 [mjs] mjs has joined #html-wg 07:00:07 [mjs] mjs has joined #html-wg 07:15:35 [hyatt] hyatt has joined #html-wg 07:17:48 [hyatt] hyatt has joined #html-wg 07:18:23 [hyatt] hyatt has left #html-wg 07:39:05 [hyatt] hyatt has joined #html-wg 08:11:55 [aroben] aroben has joined #html-wg 08:30:51 [tH_] tH_ has joined #html-wg 09:12:48 [Lachy_] Lachy_ has joined #html-wg 09:22:26 [ROBOd] ROBOd has joined #html-wg 09:24:26 [peepo] peepo has joined #html-wg 09:45:52 [Lachy] Lachy has joined #html-wg 09:46:55 [aaronlev] aaronlev has joined #html-wg 10:03:26 [hsivonen] are the video workshop minutes Member-only on purpose? I though they were supposed to be public after a review 10:09:20 [MikeSmith] hsivonen - URL? 
10:10:02 [hsivonen] MikeSmith: 10:10:28 [hsivonen] linked from the w3.org front page 10:10:56 [MikeSmith] OK, I'll ask Philippe now 10:11:05 [hsivonen] MikeSmith: thanks 10:16:14 10:16:18 [MikeSmith] thanks for the heads-up 10:24:06 [paullewis] paullewis has joined #html-wg 10:27:02 [hsivonen] MikeSmith: thanks 10:41:28 [zcorpan] zcorpan has joined #html-wg 11:02:16 [myakura] myakura has joined #html-wg 11:14:45 [Lachy] Lachy has joined #html-wg 12:03:43 [paullewis] paullewis has joined #html-wg 12:14:59 [Julian] Julian has joined #html-wg 13:42:03 [petersn] petersn has joined #html-wg 13:43:49 [Julian_Reschke] Julian_Reschke has joined #html-wg 13:48:29 [matt] matt has joined #html-wg 14:44:34 [Lachy] Lachy has joined #html-wg 14:46:34 [Lachy] Lachy has joined #html-wg 15:01:31 [DanC_lap] DanC_lap has joined #html-wg 15:19:07 [peepo] peepo has joined #html-wg 15:23:15 [xover] xover has joined #html-wg 15:23:16 [billmason] billmason has joined #html-wg 15:34:41 [Julian] Julian has joined #html-wg 15:39:25 [Lachy] Lachy has joined #html-wg 15:54:41 [Lachy] Lachy has joined #html-wg 16:00:08 [matt_] matt_ has joined #html-wg 16:05:54 [gsnedders] gsnedders has joined #html-wg 16:26:44 [zcorpan] i apologize for initiating a bikeshed topic on public-html 16:30:22 [MikeSmith] MikeSmith has joined #html-wg 16:31:55 [DanC_lap] cite element? or something else, zcorpan ? (I keep forgetting the relationship between IRC nicks and email names) 16:32:14 [zcorpan] DanC_lap: target=_blank 16:32:35 [oedipus] oedipus has joined #html-wg 16:32:37 [zcorpan] simonp@opera.com 16:45:00 [jdandrea] jdandrea has joined #html-wg 16:48:12 [Julian] zcorpan: I think it's a good discussion to have (target=_blank) 16:50:27 [zcorpan] Julian: sure 16:50:45 [Laura] Laura has joined #html-wg 16:50:50 [zcorpan] Julian: though, that thread seems to have turned into a bikeshed by now :) 16:52:45 [Steve_f] Steve_f has joined #html-wg 16:54:10 [Julian] zcorpan: I admit I didn't read all of it. 
16:54:32 [dbaron] dbaron has joined #html-wg 16:56:00 [DanC_lap] the _blank thread looks mostly healthy; I just wish some test-case-elves were following along. 16:56:43 [sampablokuper] sampablokuper has joined #html-wg 16:57:15 [Philip] What kind of test case would help that thead? 16:57:55 [Gerrie] Gerrie has joined #html-wg 16:58:19 [DanC_lap] one where the input document has target="_blank" and the output is "conforming" or "non-conforming" 16:58:51 [DanC_lap] better if the input documents capture some use cases, such as gmail 16:59:49 [DanC_lap] the output could also be: new window allowed/required/forbidden 17:00:42 [DanC_lap] allowed + browser-makes-new-window = pass 17:00:50 [DanC_lap] allowed + brosers-uses-same-window = pass 17:01:00 [DanC_lap] required + browser-uses-same-window = fail 17:01:12 [DanC_lap] forbidden + browser-mades-new-window = fail 17:02:06 [zcorpan] the thread isn't about what browsers do, but about whether _blank should be conforming for authors 17:02:07 [DanC_lap] it makes a WG decision straghtforward to phrase 17:02:25 [Steve_f] dan: is the meeting happening now or have i got the times wrong? 17:02:30 [DanC_lap] oops 17:02:31 [sampablokuper] Ditto 17:02:33 [Zakim] Zakim has joined #html-wg 17:02:43 [DanC_lap] RRSAgent, pointer? 17:02:43 [RRSAgent] See 17:02:46 [DanC_lap] Zakim, this is html 17:02:46 [Zakim] ok, DanC_lap; that matches HTML_WG()12:00PM 17:02:52 [DanC_lap] Zakim, who's on the phone? 
17:02:52 [Zakim] On the phone I see +1.218.340.aaaa, +1.858.354.aabb, +1.212.830.aacc, ??P2, [Microsoft], +049251280aaee 17:03:03 [DanC_lap] Zakim, call DanC-BOS 17:03:03 [Zakim] ok, DanC_lap; the call is being made 17:03:05 [Julian] Zakim, +049251280aaee is me 17:03:05 [Zakim] +DanC 17:03:07 [Zakim] +Julian; got it 17:03:13 [ChrisWilson] zakim, microsoft is me 17:03:13 [Zakim] +ChrisWilson; got it 17:03:30 [Zakim] +??P6 17:03:43 [DanC_lap] Zakim, ??P6 is SteveF 17:03:43 [Zakim] +SteveF; got it 17:04:13 [DanC_lap] Zakim, aaaa is Laura 17:04:13 [Zakim] +Laura; got it 17:04:23 [DanC_lap] Zakim, aabb is Jerry_S 17:04:23 [Zakim] +Jerry_S; got it 17:04:36 [DanC_lap] Zakim, aacc is Dave_B 17:04:36 [Zakim] +Dave_B; got it 17:04:43 [Gerrie] Gerrie Shults, not Jerry 17:05:31 [DanC_lap] agenda + Convene HTML WG teleconference of 2008-02-21T17:00:00Z 17:05:36 [DanC_lap] Zakim, take up item 1 17:05:36 [Zakim] agendum 1. "Convene HTML WG teleconference of 2008-02-21T17:00:00Z" taken up [from DanC_lap] 17:06:27 [DanC_lap] Zakim, who's on the phone? 17:06:27 [Zakim] On the phone I see Laura, Jerry_S, Dave_B, ??P2, ChrisWilson, Julian, DanC, SteveF 17:07:00 [DanC_lap] Laura Carlson 17:07:34 [Gerrie] Gerrie Shulte 17:07:44 [Gerrie] Gerrie Shults 17:07:47 [DanC_lap] Gerrie Shults / HP 17:08:12 [DanC_lap] David Bills 17:08:28 [DanC_lap] agenda + SQL statement support [Dave_B] 17:08:46 [DanC_lap] Zakim, ??P2 is Josh 17:08:46 [Zakim] +Josh; got it 17:09:02 [DanC_lap] Joshue O Connor 17:10:26 [DanC_lap] Steve Faulkner 17:11:00 [DanC_lap] agenda + orientation, process 17:11:15 [dfbills] dfbills has joined #html-wg 17:11:25 [DanC_lap] agenda + ISSUE-31 missing-alt 17:11:41 [DanC_lap] 17:12:01 [DanC_lap] agenda + ISSUE-34 commonality 17:12:22 [DanC_lap] agenda + ISSUE-35 aria-processing 17:12:41 [DanC_lap] agenda 2 = ISSUE-36 client-side-storage-sql 17:12:56 [DanC_lap] 17:13:28 [DanC_lap] is issue ISSUE-14 aira-role the same as issue-35? 
17:14:06 [DanC_lap] agenda 6 = ISSUE-35 aria-processing , ISSUE-14 aira-role 17:15:39 [DanC_lap] agenda + semantic elements (cite thread, etc.) ACTION-48 17:16:16 [DanC_lap] agenda + canvas mailing list (which action?) 17:16:30 [DanC_lap] Zakim, agenda? 17:16:30 [Zakim] I see 8 items remaining on the agenda: 17:16:31 [Zakim] 1. Convene HTML WG teleconference of 2008-02-21T17:00:00Z [from DanC_lap] 17:16:34 [Zakim] 2. ISSUE-36 client-side-storage-sql 17:16:36 [Zakim] 3. orientation, process [from DanC_lap] 17:16:37 [Zakim] 4. ISSUE-31 missing-alt [from DanC_lap] 17:16:38 [Zakim] 5. ISSUE-34 commonality [from DanC_lap] 17:16:39 [Zakim] 6. ISSUE-35 aria-processing , ISSUE-14 aira-role 17:16:40 [Zakim] 7. semantic elements (cite thread, etc.) ACTION-48 [from DanC_lap] 17:16:42 [Zakim] 8. canvas mailing list (which action?) [from DanC_lap] 17:17:06 [DanC_lap] agenda + ISSUE-32 table-summary 17:17:45 [DanC_lap] next meeting 28 Feb, Chris W. to chair (4pm Pacific time) 17:17:58 [milesdefeyter] milesdefeyter has joined #html-wg 17:18:10 [Steve_f] dan: for issue 35 17:18:16 [DanC_lap] Zakim, next item 17:18:16 [Zakim] agendum 2. "ISSUE-36 client-side-storage-sql" taken up 17:19:00 [DanC_lap] issue-36 is a design issue... 17:19:13 [DanC_lap] see also ISSUE-16 (edit) 17:19:13 [DanC_lap] offline-applications-sql 17:19:35 [DavidFBills] DavidFBills has joined #html-wg 17:19:47 [smedero] smedero has joined #html-wg 17:19:59 [DanC_lap] issue-16 is a requirements/scope issue 17:20:28 [DanC_lap] Zakim, mute me 17:20:28 [Zakim] sorry, DanC_lap, I do not know which phone connection belongs to you 17:20:32 [DanC_lap] Zakim, mute DanC 17:20:32 [Zakim] DanC should now be muted 17:21:08 [DanC_lap] Zakim, unmute DanC 17:21:08 [Zakim] DanC should no longer be muted 17:21:47 [DanC_lap] Dave, is the message you're speaking of? 17:22:47 [ChrisWilson] agrees with Dan - but doesn't understand if "SQL-lite" is descriptive enough to go with currently. 
17:23:19 [anne] it's SQLite 17:23:42 [ChrisWilson] sorry, someone else mis-typed in an email, and I haven't finished my first cup of coffee. 17:23:43 [DanC_lap] action-13? 17:23:43 [trackbot-ng] ACTION-13 -- Chris Wilson to talk to WebAPI and WAF WGs about their role in offline API stuff and how they work with and contribute to the discussion -- due 2008-02-21 -- OPEN 17:23:43 [trackbot-ng] 17:23:52 [gsnedders] What version of SQLite? Do we have to copy bugs from SQLite? 17:23:53 [gsnedders] etc. 17:24:12 [anne] ChrisWilson, no need for apologies, just making sure everyone is talking about the same thing :) 17:24:23 [DanC_lap] close action-13 17:24:23 [trackbot-ng] ACTION-13 Talk to WebAPI and WAF WGs about their role in offline API stuff and how they work with and contribute to the discussion closed 17:24:31 [DavidFBills] I guess the real question would be whether it would follow the feature set or actually implement the actual code 17:24:42 [ChrisWilson] So Anne, since you're not on the phone - the question is "is 'SQLite' a defined enough, interoperable spec to refer to?" 17:25:22 [anne] Probably not. As I said on the mailing list, the plan is to wait for two implementations and to define it then 17:25:24 [gsnedders] ChrisWilson: SQLite supports most of SQL92 — I think requiring SQL92 support would be easier 17:25:36 [gsnedders] < > 17:25:53 [anne] You don't want all of SQL anyway, as some features don't make sense for client-side storage 17:26:26 [anne] (encodings, transactions, etc. should probably all be banned from the language as far as this API is concerned) 17:26:27 [gsnedders] That's true, but it's probably better to define what we need in terms of SQL92 than any implementation 17:26:43 [aroben] aroben has joined #html-wg 17:26:55 [anne] I don't see why it needs to be in terms of something else, one more level of indirection 17:26:58 [DanC_lap] CW: I talked with Chaals about WebAPI/HTML boundaries... they're being re-chartered... 
17:27:14 [DanC_lap] CW: I think we can find editors too...
17:27:32 [DavidFBills] anne: I agree
17:28:29 [Josh] Josh has joined #html-wg
17:28:41 [DanC_lap] ACTION: Dan check for offline api stuff in WebAPI proposed charter
17:28:41 [trackbot-ng] Created ACTION-53 - Check for offline api stuff in WebAPI proposed charter [on Dan Connolly - due 2008-02-28].
17:28:59 [Josh] IRC is go ;-)
17:29:15 [DanC_lap] Zakim, next item
17:29:15 [Zakim] agendum 3. "orientation, process" taken up [from DanC_lap]
17:29:37 [ChrisWilson] agrees with anne - I'd rather NOT have a level of indirection, unless the redirection is to a very definitive specification.
17:29:50 [DanC_lap]
17:30:35 [oedipus] Extensible HyperText Markup Language Vocabulary namespace
17:31:02 [DanC_lap] Zakim, next item
17:31:02 [Zakim] agendum 4. "ISSUE-31 missing-alt" taken up [from DanC_lap]
17:31:11 [Sander] Sander has joined #html-wg
17:31:22 [DanC_lap]
17:33:18 [Zakim] -SteveF
17:33:40 [deltab] deltab has joined #html-wg
17:33:41 [DanC_lap] a periodic survey of top web sites
17:33:51 [DanC_lap] * M. Jackson Wilkinson
17:33:51 [DanC_lap] * Sean Fraser
17:33:51 [DanC_lap] * Terry Morris
17:33:51 [DanC_lap] * Serdar Kiliç
17:33:52 [DanC_lap] * Rene Saarsoo
17:33:54 [DanC_lap] * Patrick Taylor
17:33:55 [Zakim] +??P6
17:33:56 [DanC_lap] * Roman Kitainik
17:33:58 [DanC_lap] * James VanDyke
17:34:00 [DanC_lap] * Craig Saila
17:34:02 [DanC_lap] * Michael Turnwall
17:34:04 [DanC_lap] * Benjamin Hedrington
17:34:06 [DanC_lap] * Karl Dubost
17:34:08 [DanC_lap] * Marco Battilana
17:34:12 [DanC_lap] * Andrew Smith
17:34:12 [DanC_lap] * Shawn Medero
17:34:14 [DanC_lap] * Eric Eggert
17:34:16 [DanC_lap] * Ben Millard
17:34:18 [DanC_lap] * Thomas Bradley
17:34:20 [DanC_lap] * Mark Martin
17:34:22 [DanC_lap] * Balakumar Muthu
17:34:24 [DanC_lap] * Justin Thorp
17:34:28 [DanC_lap] * Samuel Santos
17:34:30 [DanC_lap] * Karl Groves
17:34:32 [DanC_lap] Zakim, ??P6 is SteveF
17:34:32 [Zakim] +SteveF; got it
17:34:56 [DanC_lap] DanC: Joshue, did you look at missing alt in your video survey?
17:35:01 [DanC_lap] Joshue: no, but could do...
17:36:09 [Josh] Would be glad to provide video footage to supplement discussion on @alt issue
17:36:13 [DanC_lap] "the failure of the HTML5 draft to make
17:36:13 [DanC_lap] @alt on <img> an across-the-board requirement (even if sometimes
17:36:13 [DanC_lap] it has the value of "") is a bug."
17:36:22 [DanC_lap]
17:37:05 [ChrisWilson] if it's required but allowed to be "", why is that any better? Is there a pointer to that discussion somewhere?
17:38:15 [Josh] The null alt value is ignored by Assistive Technology for one thing.
17:38:16 [gsnedders] ChrisWilson: the rational seems to be it "is inconsistent with WCAG"
17:38:43 [ChrisWilson] @Josh I think you're agreeing?
17:38:44 [gsnedders] (I've never seen any better rational than that, even if there is some)
17:39:00 [ChrisWilson] @gsnedders umm. yeah.
17:39:29 [smedero] What is considered a valid source of the "top web sites" if anyone wanted to fire off a crawling job? Alexa's "Top 500 Global Sites"?
17:39:38 [zcorpan] ChrisWilson:
17:39:40 .
17:39:49 [Josh] @Chris, am not sure yet what I am agreeing to ;-)
17:39:50 [anne] the PFWG actually failed to address our concerns
17:39:51 )
17:39:59 [anne] their argument boils down to: "it should be invalid"
17:40:01 [oedipus] how so, anne
17:40:50 [Josh] @Chris One concern is that making the alt value optional send the wrong message to bad developers
17:41:00 [sampablokuper] Are spacer graphics still considered okay?
17:41:25 [zcorpan] sampablokuper: no (they never were)
17:41:34 [sampablokuper] right; thanks for confirming.
17:41:34 [Josh] That they can somehow ignore it, which in some cases they can (Spacers or presentation graphics/icons with null values are Ok etc)
17:41:54 [oedipus] no, spacer gifs are supposed to be supplanted by CSS, but then again, HTML4.x also formally deprecated use of BLOCKQUOTE for stylistic reasons, but like spacers, they continue to be everywhere
17:41:55 [anne] they can also ignore it by setting alt=randomValue
17:42:05 [anne] which is the point the PFWG failed to address
17:42:14 [Philip] sampablokuper, the current HTML5 draft says "The img must not be used as a layout tool. In particular, img elements should not be used to display fully transparent images, as they rarely convey meaning and rarely add anything useful to the document."
17:42:24 [Josh] @sampa Do you mean spacers GIFs as opposed to icon type or button graphics?
17:42:54 [Philip] (though that makes some of my canvas test cases non-conforming)
17:42:58 [sampablokuper] @Josh I was referring to the comment by oedipus above
17:43:12 [zcorpan] Philip: (test cases don't need to be conforming)
17:43:14 [Josh] @sampa no worries
17:43:15 [oedipus] sampablokuper, did you get my reply
17:43:40 [sampablokuper] oedipus, yes, thanks
17:44:03 [oedipus] when you use @foo all 4 screen readers i have here choke and don't speak the word -- that's why i don't like the @foo attribute shorthand
17:44:04 [DanC_lap] ACTION: SteveF draft text for HTML 5 spec to require producers/authors to include @alt on img elements
17:44:04 [trackbot-ng] Sorry, couldn't find user - SteveF
17:44:28 [Josh] back in a minnute
17:45:54 [DanC_lap] oedipus, can I assign this action to you, for admin purposes?
17:46:03 [oedipus] yes, dan
17:46:09 [DanC_lap] ACTION: Gregory work with SteveF draft text for HTML 5 spec to require producers/authors to include @alt on img elements
17:46:09 [trackbot-ng] Created ACTION-54 - Work with SteveF draft text for HTML 5 spec to require producers/authors to include @alt on img elements [on Gregory Rosmaita - due 2008-02-28].
17:47:40 [DanC_lap] Zakim, next item
17:47:40 [Zakim] agendum 5. "ISSUE-34 commonality" taken up [from DanC_lap]
17:48:00
17:48:28 [dfbills] oh the dreaded spacer.gif
17:50:04 [DanC_lap] "Can we get access to tools that determine how often markup is used on the web?"
17:50:20 [dfbills] I've always found it interesting that img alt tags are designed for accessibility, but the most common use would be in lightweight or mobile browsers where the overhead in byte-count actually slows the loading of the code over the network.
17:50:27 [DanC_lap] CW: clearly these tools are valuable; Microsoft has some and in some cases I might be able to use them for HTML WG purposes
17:50:42 [DanC_lap] [somebody]: and Ian Hickson can do queries at google sometimes
17:51:10 [dfbills] Is anyone providing it on the web? or at least statistics similar to netcraft with the server stats?
17:51:32 [DanC_lap] ACTION: Dan ask the TAG about tag soup measurement techniques
17:51:33 [trackbot-ng] Created ACTION-55 - Ask the TAG about tag soup measurement techniques [on Dan Connolly - due 2008-02-28].
17:51:39 [gsnedders] dfbills: Phillip Taylor has some data at < >
17:51:45 [smedero] It seems like the real issue with the markup analyzation isn't the tools so much as the web-scale data mining hassles.
17:51:50 [gsnedders] s/Phillip/Philip/
17:52:06 [DanC_lap] Zakim, next item
17:52:06 [Zakim] agendum 6. "ISSUE-35 aria-processing , ISSUE-14 aira-role" taken up
17:52:11 [Steve_f] apologies, have to leave now.
17:52:27 [DanC_lap] Zakim, agenda?
17:52:27 [Zakim] I see 4 items remaining on the agenda:
17:52:29 [Zakim] 6. ISSUE-35 aria-processing , ISSUE-14 aira-role
17:52:30 [Zakim] 7. semantic elements (cite thread, etc.) ACTION-48 [from DanC_lap]
17:52:31 [Zakim] 8. canvas mailing list (which action?) [from DanC_lap]
17:52:32 [Zakim] 9. ISSUE-32 table-summary [from DanC_lap]
17:52:36 [oedipus] GJR: i just commented on this issue - i think it is malformed
17:52:47 [dfbills] smedero: sure- that's why we should target someone with access to search engine data or browser development tools
17:53:05 [DanC_lap] ADJOUN.
17:53:07 [Zakim] -Julian
17:53:08 [Josh] Bye all
17:53:08 [Zakim] -Dave_B
17:53:08 [Zakim] -ChrisWilson
17:53:10 [Zakim] -SteveF
17:53:14 [Zakim] -Laura
17:53:26 [oedipus]
17:53:28 [Zakim] -Jerry_S
17:53:39 [Zakim] -Josh
17:54:15 [DanC_lap] RRSAgent, draft minutes
17:54:15 [RRSAgent] I have made the request to generate DanC_lap
17:54:30 [DanC_lap] RRSAgent, make logs world-access
17:54:39 [sampablokuper] Sorry, what just happened?
17:54:56 [oedipus] i think the meeting ended and they are wrapping the minutes from the IRC log
17:55:03 [gsnedders] gsnedders has left #html-wg
17:55:10 [DanC_lap] I proposed to postpone the remaining agenda and adjourn, and all agreed, sampablokuper
17:55:12 [gsnedders] gsnedders has joined #html-wg
17:55:19 [sampablokuper] Thanks, DanC
17:56:11 [DanC_lap] the IRC channel was used for a mix of ordinary IRC chat and transcribing the teleconference; I didn't recruit a dedicated scribe, so it's a bit chaotic.
17:56:26 [DanC_lap] Meeting: HTML WG Weekly
17:56:33 [DanC_lap] Chair: DanC
17:57:03 [DanC_lap] ah. I wonder how to fix that, oedipus
17:57:17 [DanC_lap] Zakim, list participants
17:57:17 [Zakim] As of this point the attendees have been +1.218.340.aaaa, +1.858.354.aabb, +1.212.830.aacc, +049251280aadd, DanC, Julian, ChrisWilson, SteveF, Laura, Jerry_S, Dave_B, Josh
17:57:18 [oedipus] present+ Gerrie_Shults
17:57:25 [oedipus] present- Jerry_S
17:57:26 [DanC_lap] RRSAgent, draft minutes
17:57:26 [RRSAgent] I have made the request to generate DanC_lap
17:57:47 [DanC_lap] yup; thanks.
17:57:52 [oedipus] no problem
17:58:08 [DanC_lap] now... how to get the minutes to be clear that we didn't actually take up item 6?
17:58:46 [oedipus] hey, i just know the outsiders' tricks - the only ,tool i can use as non-staff is the plain text generator
17:59:23 [DanC_lap] well, I'd clean up the minutes manually, but my next teleconference convenes in ... um... now.
17:59:24 [sampablokuper] DanC, I'm happy to scribe meetings occasionally if you'd like. The only problem is that although I can dial in, I can't necessarily talk much as I'm in a shared office.
17:59:57 [DanC_lap] good to know, sampablokuper ; were you on the call today?
18:00:17 [sampablokuper] No, I wasn't, but I can be in future.
18:00:57 [sampablokuper] (As long as it's understood that during work hours I won't be able to say much on the phone, to avoid disturbing my colleagues in Cambridg University Library)
18:01:12 [sampablokuper] Oops Cambridge :)
18:01:23 [oedipus] you don't need to minutes the "shush"es
18:01:43 [sampablokuper] ;)
18:01:50 [oedipus] by the way, to correct when minuting the syntax is "s/wrongword/rightword
18:01:57 [oedipus] without quotes
18:02:08 [gsnedders] it's full sed syntax, AFAIK
18:02:28 [oedipus] Scribe's Quick Start Guide:
18:02:35 [oedipus]
18:02:40 [oedipus] RRSAgent IRC Bot Guidebook:
18:02:40 [oedipus]
18:02:45 [oedipus] Zakim Bridge Guidebook:
18:02:45 [oedipus]
18:02:57 [oedipus] bookmark those, and you'll be scribing with the stars...
18:03:03 [sampablokuper] Thank you, oedipus. I didn't realise sed syntax was popular in irc. Those links are very useful.
18:03:34 [oedipus] you are quite welcome - as those docs say, they represent the collective wisdom of w3c participants...
18:04:40 [sampablokuper] Great. Okay, back to work now... Bye, and thanks again.
18:04:44 [oedipus] danC, did you want to dismiss zakim or let him leave when he gets tired of monitoring the bridge?
18:04:48 [oedipus] bye
18:05:00 [gsnedders] oedipus: he normally just lets him leave
18:05:17 [oedipus] ok, thanks, gsnedders
18:05:29 [mjs] mjs has joined #html-wg
18:06:12 [oedipus] oedipus has left #html-wg
18:12:44 [mjs_] mjs_ has joined #html-wg
18:13:54 [jgraham_mibbit] jgraham_mibbit has joined #html-wg
18:14:38 [edas] edas has joined #html-wg
18:35:00 [Zakim] disconnecting the lone participant, DanC, in HTML_WG()12:00PM
18:35:02 [Zakim] HTML_WG()12:00PM has ended
18:35:05 [Zakim] Attendees were +1.218.340.aaaa, +1.858.354.aabb, +1.212.830.aacc, +049251280aadd, DanC, Julian, ChrisWilson, SteveF, Laura, Jerry_S, Dave_B, Josh
18:52:10 [adele] adele has joined #html-wg
18:56:45 [hyatt] hyatt has joined #html-wg
19:03:05 [mjs] mjs has joined #html-wg
19:08:37 [mjs] mjs has joined #html-wg
19:10:31 [Laura] Laura has joined #html-wg
19:27:08 [dbaron] dbaron has joined #html-wg
19:52:16 [DanC_lap] DanC_lap has joined #html-wg
19:53:06 [Philip] (and 57 with class=address)
19:53:51 [Hixie] hm, better than i expected
19:53:59 [Hixie] still pretty abysmal though
19:56:13 [Philip] (There's 7 with class=vcard, plus 12 on Wikipedia and 5 on Blogspot)
19:58:04 [Philip] ((counting people with class="contact vcard" etc too))
20:01:34 [milesdefeyter] milesdefeyter has joined #html-wg
20:08:22 [Hixie] oh well do the cases with class=vcard and the cases with class=adr overlap?
20:08:33 [Hixie] because adr is a part of vcard that was taken out into its own mf
20:14:00 [Philip] Of the 6 non-Wikipedia with adr, 4 also have vcard, 1 is which looks like it's not an address, and 1 is which looks like it's not a microformatted address
20:14:46 [mjs] mjs has joined #html-wg
20:15:15 [Philip] I think adr isn't meant to be used by itself - it's just been split out so it can be reused as a component of other microformats (which currently is only hcard)
20:15:32 [Philip] Oh, but I think wrong
20:15:56 [Philip] because explicitly mentions some adr-not-in-hcard examples
20:19:52 [Philip] It's used much more than <q> anyway, so it's not doing too badly
20:21:09 [Hixie] yeah
20:37:27 [Lachy_] Lachy_ has joined #html-wg
20:45:31 [mjs] mjs has joined #html-wg
21:20:54 [Sander] Sander has joined #html-wg
21:23:22 [Sander] Sander has joined #html-wg
21:39:57 [mjs_] mjs_ has joined #html-wg
21:55:28 [hyatt] hyatt has joined #html-wg
22:13:18 [sbuluf] sbuluf has joined #html-wg
22:29:10 [aroben] aroben has joined #html-wg
22:31:31 [aroben_] aroben_ has joined #html-wg
22:32:17 [ChrisWilson] ChrisWilson has joined #html-wg
22:48:54 [Lachy_] Lachy_ has joined #html-wg
23:21:27 [mjs] mjs has joined #html-wg
23:23:43 [Lachy] Lachy has joined #html-wg
23:28:52 [mjs_] mjs_ has joined #html-wg
23:48:57 [hyatt] hyatt has joined #html-wg
23:58:15 [aroben__] aroben__ has joined #html-wg
http://www.w3.org/2008/02/21-html-wg-irc
Development teams provisioning software services face a constant trade-off between speed and accuracy. New features should be made available in the least possible time with a high amount of accuracy, meaning no downtime. Unforeseen downtime due to human error is common for any manual integration processes your team uses to manage codebases. This kind of unexpected interruption can be one of the key drivers for a team to take on the challenge of automating their integration process.

Continuous Integration (CI) occupies a sweet spot between speed and accuracy where features are made available as soon as possible. If a problem with the new code causes the build to fail, the contaminated build is not made available to the customer. The result is that the customer experiences no downtime.

In this article, I will use CircleCI to demonstrate how CI can be applied to a Deno project. Deno is a simple, modern, and secure runtime for JavaScript and TypeScript. Using Deno gives you these advantages when you create a project:

- Deno is secure by default. There is no file, network, or environment access, unless you explicitly enable it.
- It supports TypeScript out of the box.
- It ships as a single executable file.

The sample project for this tutorial will be an API built with Oak. This API has one endpoint which returns a list of quiz questions. A test case will be written for the endpoint using SuperOak.

Prerequisites

Before you start, make sure these items are installed on your system:

For repository management and continuous integration, you need:

- A GitHub account. You can create one here.
- A CircleCI account. You can create one here. To easily connect your GitHub projects, you can sign up with your GitHub account.

Getting started

Create a new directory to hold all the project files:

mkdir deno_circleci
cd deno_circleci

To keep things simple, you can import a hard-coded array of questions in our project.
Create a file named questions.ts and add this:

export default [
  {
    id: 1,
    question: "The HTML5 standard was published in 2014.",
    correct_answer: "True",
  },
  {
    id: 2,
    question: "Which computer hardware device provides an interface for all other connected devices to communicate?",
    correct_answer: "Motherboard",
  },
  {
    id: 3,
    question: "On which day did the World Wide Web go online?",
    correct_answer: "December 20, 1990",
  },
  {
    id: 4,
    question: "What is the main CPU in the Sega Mega Drive / Sega Genesis?",
    correct_answer: "Motorola 68000",
  },
  {
    id: 5,
    question: "Android versions are named in alphabetical order.",
    correct_answer: "True",
  },
  {
    id: 6,
    question: "What was the first Android version specifically optimized for tablets?",
    correct_answer: "Honeycomb",
  },
  {
    id: 7,
    question: "Which programming language shares its name with an island in Indonesia?",
    correct_answer: "Java",
  },
  {
    id: 8,
    question: "What does RAID stand for?",
    correct_answer: "Redundant Array of Independent Disks",
  },
  {
    id: 9,
    question: "Which of the following computer components can be built using only NAND gates?",
    correct_answer: "ALU",
  },
  {
    id: 10,
    question: "What was the name of the security vulnerability found in Bash in 2014?",
    correct_answer: "Shellshock",
  },
];

Setting up a Deno server

In this section, we will use the Oak middleware framework to set up our Deno server. It is a framework for Deno's HTTP server. It is comparable to Koa and Express.
To begin, create a file named server.ts and add this code to it:

import { Application, Router } from "https://deno.land/x/oak/mod.ts";
import questions from "./questions.ts";

const app = new Application();
const port = 8000;
const router = new Router();

router.get("/", (context) => {
  context.response.type = "application/json";
  context.response.body = { questions };
});

app.addEventListener("error", (event) => {
  console.error(event.error);
});

app.use(router.routes());
app.use(router.allowedMethods());

app.listen({ port });
console.log(`Server is running on port ${port}`);

export default app;

In this example, we are using the Application and Router modules imported from the Oak framework to create a new application that listens to requests on port 8000. Then we declare a route that returns a JSON response containing the questions stored in questions.ts.

We also add an event listener to be triggered every time an error occurs. This will be helpful if you need to debug when there is an error.

Running the application

We can run the application to see what has been done so far using this command:

deno run --allow-net server.ts

Navigate to http://localhost:8000 to review the response.

Writing tests for the application

Now that we have set up the server and run the app, we can write a test case for our API endpoint. Create a new file called server.test.ts and add this code to it:

import { superoak } from "https://deno.land/x/superoak/mod.ts";
import { delay } from "https://deno.land/std/async/delay.ts";
import app from "./server.ts";

Deno.test(
  "it should return a JSON response containing questions with status code 200",
  async () => {
    const request = await superoak(app);
    await request
      .get("/")
      .expect(200)
      .expect("Content-Type", /json/)
      .expect(/"questions":/);
  }
);

// Forcefully exit the Deno process once all tests are done.
Deno.test({
  name: "exit the process forcefully after all the tests are done\n",
  async fn() {
    await delay(3000);
    Deno.exit(0);
  },
  sanitizeExit: false,
});

In this example, we import the superoak module and the application we created in server.ts.
Then we declare a test case where we create a request using SuperOak and our application. We then make a GET request to the application index route and make the following assertions:

- An HTTP OK response (200) is returned
- The response received is a JSON response
- The JSON response received has a node named questions

Running the test locally

Navigate to the terminal. From the root of the application, stop the server from running using CTRL + C. Then issue this command to run the test:

deno test --allow-net server.test.ts

This should be the response:

test it should return a JSON response containing questions with status code 200 ... ok (30ms)
test exit the process forcefully after all the tests are done ...

With your test cases in place, you can add the CircleCI configuration.

Adding CircleCI configuration

In your project root directory, create a folder named .circleci. Add a file called config.yml to that directory.

mkdir .circleci
touch .circleci/config.yml

In .circleci/config.yml add:

# Use the latest 2.1 version of CircleCI pipeline process engine.
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: denoland/deno:1.10.3
    steps:
      - checkout
      - run: |
          deno test --allow-net server.test.ts
workflows:
  sample:
    jobs:
      - build-and-test

In this example, the first thing we do is to specify the version of the CircleCI pipeline process engine. Always specify the latest version (2.1 at the time this article was written).

After specifying the CircleCI version, we specify a job named build-and-test. This job has two key blocks: docker and steps. The docker block specifies the images we need for our build process to run successfully. In this case, we are using the official Deno Docker image.

The steps block does the following:

- Checks out the latest code from our GitHub repository
- Runs the tests in server.test.ts

The build-and-test job is executed as specified in the workflows block. Next, we need to set up a repository on GitHub and link the project to CircleCI.
For help, review this post: Pushing your project to GitHub.

Adding the project to CircleCI

Log into your CircleCI account. If you signed up with your GitHub account, all your repositories will be displayed on your project's dashboard. Next to your deno_circleci project, click Set Up Project.

CircleCI will detect the config.yml file within the project. Click Use Existing Config and then Start Building. Your first build process will start running and complete successfully.

Click build-and-test to review the job steps and the status of each job.

Conclusion

In this tutorial, I have shown you how to set up a continuous integration pipeline for a Deno application using GitHub and CircleCI. While our application was a simple one, with just one endpoint and one test case, we covered the key areas of pipeline configuration and feature testing.

CI builds on software development best practices in testing and version control to automate the process of adding new features to software. This removes the risk of human error causing downtime in the production environment. It also adds an additional level of quality control and assurance to the software being maintained.

Give continuous integration a try and make code base bottlenecks a thing of the past for your team!

The entire codebase for this tutorial is available on GitHub.
https://circleci.com/blog/continuous-integration-deno/
Custom Validation Rules In Laravel 5.5

Recently, Adam Wathan showed me a fresh approach to writing custom validation rules that he was implementing in his own projects. So, Adam and I decided to pair program the feature into Laravel one morning, and I'm really happy with the results.

Defining The Rule

I'll use some code I recently wrote in an actual application to demonstrate the feature. In my application, I need to verify that a given GitHub repository and branch actually exist. Of course, the only way to do this is by making an API call to GitHub. This validation requirement is a great candidate for wrapping inside a custom validation rule. To get started, we simply define a class with two methods: passes and message. I chose to place my class in the App\Rules namespace:

Let's digest this code. The passes method will receive the $attribute and $value arguments from the Laravel Validator. The $attribute is the name of the field under validation, while the $value is the value of the field. This method only needs to return true or false, depending on whether the given value is valid or not. In my example, the Source object is an Eloquent model that represents a source control provider such as GitHub. The message method should return the appropriate error message to be used when validation fails. Of course, within this method you may retrieve a string from your translation files.

Using The Rule

Once we have defined our custom validation rule, we can use it during a request. To assign the rule to an attribute, we simply instantiate it within our array of rules. In this example, I'll use the validate method, which is available directly from the Request object in Laravel 5.5:

Of course, you could also use your custom rule from within a Form Request or any other location where you perform validation. This new feature provides a quick, easy way to define custom validation rules, and I've already made heavy use of this feature in my own code. I hope you will too. Enjoy!
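The two code samples the article refers to were lost in extraction. Below is a reconstruction of what such a rule class and its usage could look like, based only on the surrounding description and the documented Laravel 5.5 Rule contract (passes/message). The class name ValidRepository, the client()->repositoryExists(...) call, and the error message text are illustrative assumptions, not necessarily the original code:

```php
<?php

namespace App\Rules;

use App\Source;
use Illuminate\Contracts\Validation\Rule;

class ValidRepository implements Rule
{
    protected $source;
    protected $branch;

    // The Source Eloquent model (the source control provider) and the
    // branch we want to verify the repository against.
    public function __construct(Source $source, $branch)
    {
        $this->source = $source;
        $this->branch = $branch;
    }

    // Return true when the given repository / branch exists on the provider.
    // The API client call here is hypothetical; the real check would hit GitHub.
    public function passes($attribute, $value)
    {
        return $this->source->client()->repositoryExists($value, $this->branch);
    }

    // The error message used when validation fails; this could also be
    // pulled from a translation file.
    public function message()
    {
        return 'The given repository is invalid.';
    }
}

// Using the rule during a request, via the validate method on the Request:
//
// $request->validate([
//     'repository' => ['required', new ValidRepository($source, $branch)],
// ]);
```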
https://medium.com/@taylorotwell/custom-validation-rules-in-laravel-5-5-c6cb250f65df
Yes, exactly! 100% of the items have to meet the condition :)

Hello forum, I am trying to check if ALL the elements of an array meet a condition. The pseudocode should be: if (all the elements in the array > value) { do something; }

There was a problem inside my PID_Compute function: I was uploading some global variables in a wrong way. I have built a void function that computes both PID controls inside it and now it works! :D

Thank you Paul. Actually there are some more rows of code that are needed to send the signal to the motors. The problem is that the code works properly with only the first PID function (if I...

Hi forum, Is it possible to call the same function two times in the same intervalTimer? I am trying to compute the same function with different input in an interval timer that runs at 1 kHz. ...

Hi alisondmurray, I think you have copied and pasted my post from this link:

Hi everybody, I have implemented both the data acquisition and the writing on the SD card in the void loop (no more timers and interrupts) and I am able to save the data on the card at 1 kHz....

wow, thank you. I will try it! :)

Thank you very much for the help! Actually, when I write in the buffer I can see the size of the buffer increasing from 0 to 8 but myBuffer.length_front() is always 0. This is my code: ...

I have a little trouble using Circular Buffer. I have initialized the buffer as follows: #include "circular_buffer.h" Circular_Buffer<byte, 8, 512> myBuffer;

Thank you for the suggestions. I will try to use Circular Buffer!

Hello forum, I have a very quick question that I was not able to solve online. Which is the maximum default size of a queue in the Queue Array library? And how can I set a different size of the...

Thank you everybody for your precious help. I am actually working on the code and I am trying to implement your suggestions. I will let you know any progress as soon as possible! :)

You are right rcarr, thank you. I do not know how I could manage the transfer of data between different interrupts without using noInterrupts(), interrupts() ...

Thank you all for the precious suggestions. I have modified the baud rate to 2000000, both T3.2 and T3.5 are working at 96 MHz, T3.2 sends data at 1 kHz (seems properly) and T3.5 tries to receive...

Thank you for the suggestions, I will try to implement both the 4th start byte in the packet and a sort of state machine to read the incoming data. Actually, Teensy 3.2 works at 96 MHz whereas...

Yes, I am sorry. I have tried to make it a little more comprehensible: 13419

Thank you very much for your valuable help.

Thank you very much. Yes, SERIAL_Incoming_Data. is a union that collects the incoming data. In the code there are two more timers: one for the control of two motors and one for the writing on SD...

Thank you Paul. Following your suggestions, I have moved the receiving part of the code in the void loop and implemented the check for the first three bytes. Now the baud rate is set to 230400 and...

Thank you very much for the useful information and suggestions Paul. I have already implemented the function "checkPacket(SERIAL_Incoming_Data.vData)" that checks the start and the end of the...

Do you mean setup the transmission in the main loop and the receiving in SerialEvent()? How can I be sure about the transmission frequency in this case? Thank you

Thank you for the interesting suggestions. The other SPI device is a Dual Encoder Breakout and I am using it to read two encoders at 2000 Hz (in an interval timer function). Then the encoder data...

Is the problem due to the fact that the protocol is inside an interval timer? There could be something wrong in the code :) I have implemented the sending and receiving function inside an interval timer because I really need to send data at a specific frequency. This...

Thank you for the suggestion. Unfortunately Teensy 3.2 has only one SPI port and I am already using it. I need to use a serial communication... Any suggestion on how to improve it? If you have...

Good evening forum! I am trying to send data from a Teensy 3.2 to a Teensy 3.5 through Serial communication. I am sending a 34 bytes package on Serial1 (to Serial1). The code for the T3.2 and...

Thank you for the suggestions. I have already implemented a timer priority (highest priority for the "small" timer that controls the motor and default priority for the timer reading data from serial...

It works with #pragma pack(push, 1) and #pragma pack(pop) ! :D Thank you very much for the suggestion KurtE!

Hi everybody, I am trying to use the library <PID_v1.h> in Teensy 3.5. In particular I would like to execute the myPID.Compute(); inside an IntervalTimer function at a frequency of 1 kHz. Inside...

Hello forum! I have a little problem with a union in Teensy 3.5. I am sending binary data packages from a Teensy 3.2 to a Teensy 3.5 via Serial communication. I am using a union structure to...

Thank you GremlinWrangler. I will try for sure to swap the array being written and write the old one. If I do not consider the SD card, is there a way to understand if my timers are really...

Thank you very much GremlinWrangler. I need deterministic writing in my code (1 kHz). If there is a way to easily write at 1 kHz in binary on the SD I would be grateful. Otherwise, could you give...

Hi everybody, I am developing a software on Teensy 3.5 in which I have implemented three different timers at 1 kHz. One of the timers should save data on a SD card. If I save the millis() at each...

Thank you rcarr! I have solved the problem with a TIP120 that I already had in the lab.

Hi everybody, I am using a driver Advanced Control Driver 12A8-QDI to control a DC motor from Teensy 3.5. I would like to control the inhibit pin of the driver to be sure that it is...

Thank you very much Manitou, so maybe the problem is not due to the PID function...

Hi everybody, I am trying to control the speed of a DC motor with Teensy 3.5 by reading data from the encoder. Everything seems working but, when the motor reaches the setpoint speed the...

Thank you very much Theremingenieur!! :D :D This solves my problem! I have added the following line at my code and the "begin" and "end" transactions lines: SPISettings setMAX532(2000000,...

Thank you very much turtle9er and WMXZ! :)

Sorry, could you explain it a little better and post your code? Thank you.

Hello forum, I am opening a new post about SPI communication with DAC since I have not been able to solve the following problem for two weeks. I am trying to communicate with a DAC MAX 532 from...

Hello everybody, Is there a way to add a delay in nanoseconds on Teensy 3.5? I have tried to search some examples but I could not find any. Thank you! :)

Good morning forum! I am trying to figure out how to enable and disable only one interrupt at a time on Teensy 3.5, because by using "noInterrupts();" and "interrupts();" all the interrupts are...

I have one more question: Is there a way to write binary data on the SD card without closing the file at the end of the writing? The opening and closing operations are becoming really...

Thank you very much WMXZ. I checked the file to be opened and I have also implemented the closing of the file after a number of cycles. In the previous script also the storage of DataPackets in...

I have tried the following code to write 512 bytes on the SD card but there is no file on the SD card. The 512 bytes (SDPacket) are composed by 16 packets (dataPacket) of 30 bytes each. Why is it...

Thank you very much Paul. Tomorrow I will try it on my Teensy 3.5. :) I have just a little question: If I would like to write on the SD card 8 values (or more) which are acquired at each loop,...

Hello forum! Where can I find a complete example on how to write a binary file on SD card in Teensy 3.5? Thank you very much in advance, Lorenzo

Hi everybody, I am trying to write data on a SD CARD SanDisk 16 GB U1 from Teensy 3.5. I would like to log data from different sensors and I was wondering which is the maximum frequency for...

Thank you for your help Donziboy2. Now my DAC MAX532 seems to work; the connections for the unipolar mode are as follows:

Pin  Connected to
1    Shorted to 3
2    10V (I used a KA78L05A linear...
https://forum.pjrc.com/search.php?s=a1abf230b4a123d3571c7a04dcf553ef&searchid=5422735
Alright, I'm new to the programming world... I've started learning by reading some online tutorials and the book 'C++ Primer'. I've just gone over some sections on functions and thought I'd make a little program of my own to test out some ideas and get used to functions. As I'm in Trigonometry right now, I thought it'd be fun to have a program convert angle measures in degrees (like 42.35 degrees) into degrees, minutes and seconds (42 degrees 21'). I did it, and for most problems it seems to work, but for this one it does not: for 42.35 degrees, instead of the 42 degrees and 21 minutes it should solve to, it comes up with 42 degrees, 20 minutes and 59 seconds. ?? Here is the code:

```cpp
#include <stdlib.h>
#include <iostream>

using namespace std;

int degr(float input);
int minu(float input);
int seco(float input);

int degr(float input)
{
    int degrees = input;
    return degrees;
}

int minu(float input)
{
    int in = input;
    float deci = input - in;
    int minutes = deci * 60;
    return minutes;
}

int seco(float input)
{
    int in = input;
    float deci = input - in;
    float semi = deci * 60;
    int minutes = semi;
    float b = semi - minutes;
    int seconds = b * 60;
    return seconds;
}

int main()
{
    float input = 0;
    int degrees = 0;
    int minutes = 0;
    int seconds = 0;

    cout << "Enter measure of angle in degrees for conversion to minutes and seconds: ";
    cin >> input;

    degrees = degr(input);
    minutes = minu(input);
    seconds = seco(input);

    cout << "\n\nMeasure of angle in minutes and seconds: " << degrees
         << " degrees " << minutes << " minutes " << seconds << " seconds"
         << endl << endl;

    system("PAUSE");
    return 0;
}
```

I know it's pretty messy now and that a lot of my methods are probably bad, but that's all I know how to do right now. Any help is appreciated. If it matters, I'm running XP using Bloodshed's Dev-C++ compiler. Oh, and after reading the thread at the top I thought I'd better say that this is not homework or anything... I'm learning it in my free time and this is just a little program of mine I thought I'd make, like trying to apply the stuff I've learned, you know? Thanks!
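For reference, the behaviour in question comes from 42.35 not being exactly representable in binary floating point: a single-precision `float` stores the nearest value just below 42.35, so each `int` conversion truncates downward. The effect can be reproduced outside C++ in a few lines of Python (the helper name `to_float32` is invented for this sketch):

```python
import struct

def to_float32(x):
    """Round-trip a Python float through IEEE 754 single precision,
    mimicking a C++ `float`."""
    return struct.unpack('f', struct.pack('f', x))[0]

angle = to_float32(42.35)   # the nearest float32 to 42.35, slightly below it
frac = angle - 42           # fractional degrees, ~0.34999847...
minutes = int(frac * 60)                    # 20.9999... truncates to 20
seconds = int((frac * 60 - minutes) * 60)   # 59.99... truncates to 59

print(angle)             # 42.349998474121094
print(minutes, seconds)  # 20 59
```

Because the stored angle is fractionally below 42.35, the minutes come out as 20.9999... and truncate to 20. Rounding to the nearest integer instead of truncating (or converting the whole angle to an integer number of seconds first) avoids the surprise.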
https://cboard.cprogramming.com/cplusplus-programming/30174-program-doesnt-always-do-whats-expected-any-ideas-newbie.html
Difference between revisions of "Mistral"

Revision as of 15:20, 28 October 2013

Contents

- 1 Mistral
- 2 Use cases
- 3 Rationale
- 4 Terminology
- 5 Design
- 6 Implementation
- 7 Links & IRC
- 8 FAQ

Mistral

Mistral is a proposed OpenStack service for task scheduling and distributed workflow execution.

Use cases

Tasks Scheduling - Cloud Cron

Problem Statement

Pretty often while administering a network of computers there's a need to establish periodic on-schedule execution of maintenance jobs for doing various kinds of work that otherwise would have to be started manually by a system administrator. The set of such jobs ranges widely, from cleaning up needless log files to health monitoring and reporting. One of the most commonly known tools in the Unix world to set up and manage those periodic jobs is Cron. It perfectly fits the use cases mentioned above. For example, using Cron we can easily schedule any system process to run every even day of the week at 2.00 am. For a single machine it's fairly straightforward to administer jobs using Cron, and the approach itself has been adopted by millions of IT folks all over the world.

Now what if we want to be able to set up and manage on-schedule jobs for multiple machines? It would be very convenient to have a single point of control over their schedule and the jobs themselves (i.e. "when" and "what"). Furthermore, when it comes to a cloud environment, the cloud provides additional RESTful services (and not only RESTful) that we may also want to call in an on-schedule manner along with local operating system processes.

Solution

The Mistral service for the OpenStack cloud addresses this demand naturally. Its capabilities allow configuring any number of tasks to be run according to a specified schedule at the scale of a cloud. Here's a list of some typical jobs we can choose from:

- Run a shell script on specified virtual instances (e.g. VM1, VM3 and VM27).
- Run an arbitrary system process on specified instances.
- Start/Reboot/Shutdown instances.
- Call accessible cloud services (e.g. Trove).
- Add instances to a load balancer.
- Deploy an application on specified instances.

This list is not exhaustive, and any other meaningful user jobs can be added. To make that possible Mistral provides a plugin mechanism, so that it's pretty easy to add new functionality by supplying new Mistral plugins. Basically, Mistral acts as a mediator between a user, virtual instances and cloud services, in the sense that it brings capabilities over them like task management (start, stop etc.), task state and execution monitoring (success, failure, in progress etc.) and task scheduling.

Since Mistral is a distributed workflow engine, the types of jobs listed above can be combined into a single logical unit, a workflow. For example, we can tell Mistral to take care of the following workflow for us:

- On every Monday at 1.00 am, start grepping the phrase "Hello, Mistral!" from log files located at /var/log/myapp.log on instances VM1, VM30 and VM54 and put the results in Swift.
  - On success: generate a report based on the data in Swift.
    - On success: send the generated report to an email address.
    - On failure: send an SMS with error details to a system administrator.
  - On failure: send an SMS with error details to a system administrator.

A workflow similar to the one described above may be of any complexity but is still considered a single task from a user perspective. However, Mistral is smart enough to analyze the workflow and identify individual sequences that can be run in parallel, thereby taking advantage of distribution and load balancing under the hood. It is worth noting that Mistral is nearly linearly scalable and hence capable of scheduling and processing virtually any number of tasks simultaneously.

Notes

So in this use case description we tried to show how Mistral capabilities can be used for scheduling different user tasks at cloud scale. Semantically it would be correct to call this use case Distributed Cron or Cloud Cron.
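Since no implementation exists yet and the DSL is still being designed (YAML is the leading candidate), the branching workflow above can only be sketched hypothetically. Here is one possible in-memory representation in Python; every task name and field below is invented for illustration and is not real Mistral syntax:

```python
# Hypothetical sketch only: task names and the "on-success"/"on-failure"
# fields are invented for illustration, not real Mistral DSL.
workflow = {
    "grep_logs": {
        "action": "run grep script on VM1, VM30, VM54; put results in Swift",
        "on-success": "generate_report",
        "on-failure": "notify_admin_sms",
    },
    "generate_report": {
        "action": "build report from the data in Swift",
        "on-success": "email_report",
        "on-failure": "notify_admin_sms",
    },
    "email_report": {"action": "send the generated report by email"},
    "notify_admin_sms": {"action": "send SMS with error details"},
}

def next_task(name, succeeded):
    """Pick the follow-up task based on the outcome of task `name`."""
    key = "on-success" if succeeded else "on-failure"
    return workflow[name].get(key)

print(next_task("grep_logs", True))     # generate_report
print(next_task("grep_logs", False))    # notify_admin_sms
print(next_task("email_report", True))  # None (workflow ends)
```

The real service would persist such a graph per session and dispatch each action to workers; the success/failure branching is the essential shape this sketch tries to capture.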
One of the advantages of using a service like Mistral in a case like this is that along with the base functionality to schedule and execute tasks it provides additional capabilities like navigating over task execution status and history (using a web UI or REST API), replaying already finished tasks, on-demand task suspension and resumption, and many other things that are useful for both system administrators and application developers.

Long-running business process

A user makes a request to run a complex multi-step business process and wants it to be fault-tolerant, so that if the execution crashes at some point on one node then another active node of the system can automatically take over and continue from the exact point where it stopped. In this use case the user splits the business process into a set of tasks and lets Mistral handle them, in the sense that it serves as a coordinator and decides what particular task should be started at what time, so that Mistral calls back with "Execute action X, here is the data". If an application that executes action X dies, then another instance takes the responsibility to continue the work.

BigData analysis & reporting

A data analyst can use Mistral as a tool for data crawling. For example, in order to prepare a financial report, the whole set of steps for gathering and processing the required report data can be represented as a graph of related Mistral tasks. As with other cases, Mistral makes sure to supply fault tolerance, high availability and scalability.

Live migration

A user specifies tasks for VM live migration triggered upon an event from Ceilometer (CPU consumption 100%).

Rationale

The main idea behind this service includes the following main points:

- Ability to upload custom task graph definitions. Graph definitions should be agnostic of any details of specific domains (like orchestration, deployment and so forth).
- The actual task execution is not performed by the service itself.
The service rather serves as a coordinator for other worker processes that do the actual work and notify back about task execution results. In other words, task execution should be asynchronous, thus providing flexibility for plugging in any domain-specific handling and opportunities to make the service scalable and highly available.

- The service must not contain a predefined set of actions that can be performed. All actions are specific to a particular task graph and described along with the graph itself using a simple DSL. Basically, actions represent generic actions that the state machine can schedule to be executed on a worker. The worker itself has the knowledge of how to interpret the task graph actions and do the specific work.

Terminology

Task graph: The graph of all possible tasks and valid transitions between them.

Flow: A route in a task graph that reflects one possible set of actions performed in a linear fashion. At the same time, the service can logically run individual flows independently, thereby leaving freedom for various optimizations on the implementation level, such as using multiple parallel worker threads.

Session: A particular execution. That is, for the given task graph definition and chosen task, the service should perform all required actions (subtasks) in order to complete this task. All transitions must be compliant with the allowed transitions configured in the task graph definition. Identified by session_id.

Task: Defines a flow execution step. Each task is defined with its dependent tasks, which the flow execution can jump from in order to reach that task. Identified by session_id + task_name.

Target task: The task that a client needs to execute at some point in time. Any task can be chosen as the target task in the task graph definition. Once this task has been processed with success, the session is considered completed.

Action: A particular instruction associated with a task that needs to be performed once the task dependencies are satisfied.
Task state: A task can be in a number of predefined states reflecting its current status:

- INACTIVE - task dependencies are not satisfied.
- PENDING - task dependencies are satisfied but the task hasn't started yet.
- RUNNING - task is currently being executed.
- SUCCESS - task has finished successfully.
- FAILURE - task has finished with an error.

All the actual task states belonging to the current Session are persisted in the DB under the session_id key.

Trigger: There are several types of conditions which cause a new session to be created when they are met. The actual condition can occur many times, and each time (with some limitations specified in the condition itself) a new session will be created.

Design

There is no final decision on the service design. It is actively discussed in mailing lists and on IRC in #openstack-mistral.

Implementation

There is no implementation yet.

Links & IRC

- Project at Launchpad:
- Weekly IRC meeting is held on Mondays at 16:00 UTC on #openstack-meeting at Freenode.
- Weekly IRC meeting agenda:

FAQ

Q: What is Mistral?

A: Mistral is a proposed OpenStack service for task scheduling and distributed workflow execution: users describe graphs of interdependent tasks, and Mistral coordinates when and where each task runs.

Q: Why offload business processes to a 3rd party service?

A: Reason 1: High Availability. A typical application's workflow consists of many independent tasks like collecting data, processing, resource acquiring, obtaining user input, reporting, sending notifications, replicating data etc. All of the steps must happen at the appropriate time, as they depend on each other. Many such processes can run in parallel. Now, if your application crashes somewhere in the middle or a power outage occurs, your business process terminates at an unknown stage in an unknown state. So you need to track the state of every single flow in a task graph in some external persistent storage, like a database, so that you can resume it (or roll it back) from the place where it crashed. You also need some health monitoring tool that would watch your app and, if it crashed, schedule unfinished flows on another instance.
This is exactly what Mistral can do out of the box, without reinventing the wheel for each application time and time again.

Reason 2: Scalability. Most task graphs have steps that can be performed in parallel (i.e. different routes in a graph, flows). Mistral can distribute execution of such tasks across your application's instances so that the whole execution scales.

Reason 3: Observable state. Because flow state is tracked outside of the application, it becomes observable. At any given moment a system administrator can access information on what is currently going on, what tasks are in a pending state and what has already been executed. You can obtain metrics on your business processes and profile them.

Reason 4: Scheduling. Using Mistral you can schedule your process to be run periodically or at a fixed moment in the future. You can have your execution triggered on an alarm condition from an external health monitoring system, or upon a new email in your mailbox.

Reason 5: Dependency management offloading. Because you offload task management to an external service, you don't have to specify all the triggers and actions in advance. For example, you may say "here is the task that must be triggered if my domain is down for 1 minute" without specifying how exactly the event is obtained. A system administrator can set up Nagios to watch your domain and trigger the action, and later replace it with Ceilometer without your application being affected or even aware of the change. The administrator can even trigger the task manually using the CLI or UI console. Another example is having a task that triggers each time a flow reaches some desired state, letting the administrator configure what exactly needs to happen there (like sending a notification mail, and later replacing it with SMS).

Reason 6: Open additional points for integration.
As soon as your business process is converted to a Mistral task graph that can be accessed by others, another application can set up its own workflow to be triggered by your application reaching a certain state. For example, suppose OpenStack Nova would declare a workflow for spawning a new VM instance. An application (or system administrator) can hook into the task "finish" so that every time Nova spawns another instance you receive a notification. Or suppose you want your users to have flexible quotas on how many instances one can spawn, based on information in an external billing system. Normally you would have to patch Nova to access your billing system, but with Mistral you can just alter Nova's task graph so that it includes your custom tasks that do it instead.

Reason 7: Formalized graphs of tasks are just easier to manage and understand. They can be visualized, analyzed and optimized. They simplify program development and debugging. You can model program workflows, replace task actions with stubs, easily mock external dependencies, and do task profiling.

Q: How do I make Mistral know about my task graphs?

A: Task graphs are described using the DSL. Currently YAML is considered the primary syntax for the Mistral DSL; however, other alternatives like JSON or XML can also be supported. There is a REST API that is used to upload task graphs, execute them and make run-time modifications to them. The DSL describes:

- Tasks.
- Dependencies between tasks (what tasks need to be run before this task can be executed).
- Triggers that start execution upon some conditions.

Q: What exactly are Mistral tasks?

A: Tasks are objects. Each such object has:

- A name.
- Optional tag names.
- A list of tasks it depends on. This can be either a fixed list or a YAQL expression (YAQL is a separate query language project). Basically, it's just a selector specifying the tasks this task depends on. For example, it may be built using task tag names.
- An optional YAQL expression that extracts data from the current data context, which goes in as the task execution input.
- An optional task action (a signal to notify a worker to do some actual work).

Q: What are Mistral workflows?

A: Interdependent tasks form a structure known as a graph. A workflow just describes what exactly in this graph should be run to achieve the user's goal (i.e. the whole graph may contain 20 interrelated tasks describing all possible steps of setting up a cluster, but the workflow for spawning a single VM may only include 3 steps, which form their own subgraph). When we start a workflow execution (open a new session) we say what node of that graph needs to be reached, and Mistral walks all possible paths (executes independent parallel flows) to that node (task), executing all the tasks that are within those paths.

Q: What are Mistral actions and how does Mistral execute them?

A: An action is what to do when an exact task is triggered. Mistral cannot execute domain-specific actions itself, nor can a user upload his own code. Instead, Mistral defines a set of common generic actions that can be used to signal your application to do the real task action. Those are:

- Call your app's URI.
- Send an AMQP (RabbitMQ) message to some queue.
- Other types of signaling (email, UDP message, polling etc.).

Mistral can be extended to include other general-purpose actions like:

- Calling Puppet, Chef, Murano, SaltStack etc.
- Executing generic REST API calls.
- Remote script execution via SSH.
- etc.

All Mistral actions must:

- Be generic and universal. No domain-specific actions in Mistral.
- Be secure to execute on shared servers.
- Not block (at least not for significant time). Ideally be asynchronous.

Q: Is it possible to organize a data flow between different tasks in Mistral?
A: Yes. Tasks belonging to the same task graph can take some input as a JSON structure (other formats are also possible), query the subset of this structure interesting for this particular task using a YAQL expression, and pass it along with a corresponding action to a worker. Once the worker has done its processing, it returns the result back in a similar JSON format. So in this case Mistral acts as a data flow hub, dispatching the results of some tasks to the inputs of other tasks.

Q: Does Mistral provide a mechanism to run nested workflows?

A: Instead of performing a concrete action associated with a task, Mistral can start a nested workflow. That is, given the input that came into the task, Mistral takes a new task graph and starts a new workflow with that input, and after completion execution jumps back to the parent flow and continues from the same point. The closest analogy in programming would be calling one method from another, passing all required parameters and optionally getting back a result. It's worth noting that the nested workflow runs in parallel with the rest of the activities belonging to the parent execution, and it has its own isolated execution context.

Q: What are some other potential Mistral capabilities?

A: The team is also considering some other capabilities that may be implemented in Mistral, or on top of the base functionality in the form of toolsets and frameworks:

- Managing task processing collocation within a cluster.
- Task priorities.
- Subscribing to Mistral events for arbitrary passive listeners.
- Namespaces (domains) to logically isolate task graphs from each other.
- Role Based Access Control for managing and executing workflows.
- Ability to start dedicated worker VMs able to perform a set of predefined (or configured) actions, like executing a specified script or any arbitrary code (in Python, Java etc.). That may be targeted at use cases where a user needs to do some sort of parallel execution on a temporarily created cluster.
For example, we may want to process a set of objects residing in Swift using 100 temporary worker VMs, so that we can logically split this set of objects into 100 segments and let the workers process them individually.

- A plugin system that would allow introducing additional means into the DSL and REST API via custom plugins (say, a plugin for connecting to Mule ESB using the namespace "mule:" in the DSL).

Q: Who are Mistral users?

A: Potential Mistral users are:

- Developers, both those who work on OpenStack services and those running in a tenant's VMs. Developers use the Mistral DSL/API to access it.
- System integrators. They customize task graphs related to deployment, using either special scripts or manually via the Mistral CLI/UI.
- System administrators, who can use Mistral via an additional toolset for common administrative tasks. This can be distributed cron, mass deployment tasks, backups etc.

Q: How does Mistral relate to OpenStack?

A: Although Mistral is quite generic, it is built to become a natural part of the OpenStack ecosystem. We are going to write Heat HOT templates for its installation, add support for it in Murano and have integration with Keystone. There also might be extensions (plugins) for Mistral that directly expose functionality provided by other OpenStack services like Trove or Heat.

Q: Is Mistral going to be an OpenStack infrastructure-layer service (as Nova is) or be deployed on user VMs inside OpenStack?

A: Both use cases are valid and we are going to support both scenarios.

Q: Why not just use TaskFlow?

A: Mistral and TaskFlow have many similarities but target different use cases. TaskFlow is a Python library that you can use inside your Python app to manage Python workflows. Mistral is an out-of-process service that is language-agnostic and cannot execute arbitrary Python code directly as TaskFlow does. But as an external service it can offer distributed task execution, scalability and HA. Under the hood, the TaskFlow library can be used for the Mistral implementation.
We also plan to develop a TaskFlow engine that would help schedule TaskFlow tasks over Mistral.

Q: How does Mistral relate to Convection?

A: We believe that Mistral is a Convection implementation that goes far beyond the initial proposal to address additional use cases. We work closely with the TaskFlow team, who are also the people behind Convection. Convection as a project was never started, and Mistral was designed to take its place, although under a different name for trademark reasons.

Q: Why not use Celery?

A: While Celery is a distributed task engine, it was designed to execute custom Python code on preinstalled private workers. Again, this is a different use case from Mistral's, which assumes the tasks can be executed on a shared service and do not require (or allow) custom code upload. In other words, Celery itself could be implemented on top of Mistral if it were started now.

Q: How does Mistral relate to Amazon SWF?

A: Amazon SWF shares many ideas with Mistral but, in fact, is designed to be language-oriented (Java, Ruby, Python). It is hard and mostly meaningless to use SWF without, for example, its Java SDK, which exposes its functionality as a set of Java annotations and interfaces. In this sense SWF is closer to Celery than to Mistral. Mistral, on the other hand, wants to be both simpler and more user-friendly. We want to have a service that is usable without an SDK in any programming language. At the same time, it's always possible to implement additional convenient language-oriented bindings based on cool features like Python decorators, Java annotations and aspects. At later stages Mistral may include an SWF API adapter so that SWF applications may be migrated to Mistral.
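To make the graph-walking idea from the FAQ concrete (Mistral "walks all possible paths to that node, executing all the tasks that are within those paths"), here is a hypothetical sketch in Python. The graph shape and task names are invented for illustration; nothing here is real Mistral code or API:

```python
def tasks_for_target(graph, target):
    """Collect every task that must run (in dependency order) to reach
    `target`, given `graph` mapping task name -> list of dependencies.
    Hypothetical illustration only; not real Mistral code."""
    ordered, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in graph.get(name, []):
            visit(dep)          # dependencies first
        ordered.append(name)    # then the task itself

    visit(target)
    return ordered

# A full cluster graph could hold 20 tasks; here is a tiny subgraph
# for spawning a single VM, as in the workflow FAQ answer above.
graph = {
    "create_vm": [],
    "attach_volume": ["create_vm"],
    "add_to_lb": ["create_vm"],
    "notify": ["attach_volume", "add_to_lb"],
}
print(tasks_for_target(graph, "notify"))
# ['create_vm', 'attach_volume', 'add_to_lb', 'notify']
```

The independent branches here (attach_volume and add_to_lb) could be executed as parallel flows, which is where the document's near-linear scalability claim comes from.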
https://wiki.openstack.org/w/index.php?title=Mistral&diff=next&oldid=34144
Cover Photo by Anas Alshanti on Unsplash

The “problem”

When using the static site generator Gatsby you don’t have a base “App” component to play with. That is, there’s no component that wraps around your whole application where you can put state that needs to be kept between routes/pages. Gatsby.js automatically (or automagically?) creates routes to the pages you put in the page folder of your installation. Or, you create pages programmatically from your gatsby-node.js file.

This will get us in trouble if we need, for example, a menu that should be visible and available for interaction on all our page routes. In my case, I had a mail form menu that could be shown or hidden in the lower right corner of my application. This component has a local state that decides whether the component is shown or not. The below image shows the menu closed and opened.

So… this is our problem. How can we tackle it? There are a number of ways to deal with this, but one way, the approach I took, is described below.

The Solution

I’ll go straight to the point. Gatsby has a file named gatsby-browser.js. We can use this file to make components wrap around our complete App and pages! This is great! This file lets us use the Gatsby Browser API. This API contains several useful functions, but there’s one in particular that fits our needs. It’s called wrapPageElement. Check out the code below. This is the actual code I used for my client’s app.
The wrapper component looks like this: // MailWidgetWrapper.js import React from 'react'; import MailWidget from './MailWidget'; const MailWidgetWrapper = ({ children }) => ( <> {children} <MailWidget /> </> ); export default MailWidgetWrapper; This is a really simple React Component who’s only function is to wrap our app and provide it with the MailWidget component. But how does wrapPageElement work? wrapPageElement First, I also highly recommend using gatsbyjs.org as much as you can for finding answers to anything regarding Gatsby. The site is excellent and full of really good and thorough explanations of most problems you will encounter. In our case, if you look at the code above, we have two parameters that get created for us in the wrapPageElement callback function: element and props. You should be familiar with props if you use React so they need no further introduction. In this case, the props are used by the page we’re currently on. We don’t need to use any of these props, as we only need to use the children (automatically created by React) prop. The MailWidgetWrapper just renders the children and the MailWidget. The children are the page we’re sending into the MailWidgetWrapper component from the gatsby-browser.js file, as shown below. The actual page lives in the element parameter and that’s the one we’re sending in with the expression {element}. <MailWidgetWrapper {…props}>{element}</MailWidgetWrapper> So in short, the parameters we get from wrapPageElement can be summarized: The props parameter are the props from the actual page we’re on. And the element parameter is the actual page we’re on The MailWidget Component My actual MailWidget component is quite large and has a lot of code that’s not relevant here. That’s why I'm just showing you a simple scaffolded example version of a MailWidget component below. This component is not actually relevant for the task of explaining the wrapPageElement function. 
The component can virtually be anything you like and has nothing to do with the implementation above. In my case it’s a MailWidget. It’s all up to you and what stateful component/s you need to be available on all your page routes. // MailWidget.js import React, { useState } from 'react'; const MailWidget = () => { const [isVisible, setIsVisible] = useState(false); const toggleVisible = () => { setIsVisible(!isVisible); }; return ( <div className={isVisible ? 'visible' : ''}> <button type="button" onClick={toggleVisible}> Hide/Show MailWidget </button> <h1>Hello, I'm your mailwidget</h1> </div> ); }; export default MailWidget; By the way, I’m all in on Hooks. I love Hooks and will use them in everything I do in React! That’s why I created my state with the useState hook in this one. The component above just uses a local state to decide if it should show itself or not. Conclusion There you have it! Hopefully, you’ve learned that it’s not difficult to have a component keeping its state between pages in Gatsby. And we all love Gatsby.js don’t we? ? Also, thank you for reading this post. I’m a Developer from Sweden that loves to teach and code. I also create courses on React and Gatsby online. You can find me on Udemy. Just search for Thomas Weibenfalk or hook me up on Twitter @weibenfalk I also have a Youtube channel were I teach free stuff, check it out here.
https://www.freecodecamp.org/news/keeping-state-between-pages-with-local-state-in-gatsby-js/
CC-MAIN-2021-25
refinedweb
877
66.54