text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Unlike desktop Linux systems, embedded Linux systems cannot afford to let applications eat up memory as they go or generate dumps because of illegal memory references. Among other things, there is no user to stop the offending applications and restart them. In developing applications for your embedded Linux system, you can employ special debugging libraries to ensure their correct behavior in terms of memory use. The following sections discuss two such libraries, Electric Fence and MEMWATCH. Though both libraries are worth linking to your applications during development, production systems should not include either library. First, both libraries substitute the C library's memory allocation functions with their own versions of these functions, which are optimized for debugging, not performance. Secondly, both libraries are distributed under the terms of the GPL. Hence, though you can use MEMWATCH and Electric Fence internally to test your applications, you cannot distribute them as part of your applications outside your organization if your applications aren't also distributed under the terms of the GPL. Electric Fence is a library that replaces the C library's memory allocation functions, such as malloc( ) and free( ), with equivalent functions that implement limit testing. It is, therefore, very effective at detecting out-of-bounds memory references. In essence, linking with the Electric Fence library will cause your applications to fault and dump core upon any out-of-bounds reference. By running your application within gdb, you can identify the faulty instruction immediately. Electric Fence was written and continues to be maintained by Bruce Perens. It is available from. Download the package and extract it in your ${PRJROOT}/debug directory. For my control module, for example, I used Electric Fence 2.1. Move to the package's directory for the rest of the installation: $ cd ${PRJROOT}/debug/ElectricFence-2.1 Before you can compile Electric Fence for your target, you must edit the page.c source file and comment out the following code segment by adding #if 0 and #endif around it: #if ( !defined(sgi) && !defined(_AIX) ) extern int sys_nerr; extern char * sys_errlist[ ]; #endif If you do not modify the code in this way, Electric Fence fails to compile. With the code changed, compile and install Electric Fence for your target: $ make CC=powerpc-linux-gcc AR=powerpc-linux-ar $ make LIB_INSTALL_DIR=${TARGET_PREFIX}/lib \ > MAN_INSTALL_DIR=${TARGET_PREFIX}/man install The Electric Fence library, libefence.a, which contains the memory allocation replacement functions, has now been installed in ${TARGET_PREFIX}/lib. To link your applications with Electric Fence, you must add the -lefence option to your linker's command line. Here are the modifications I made to my command module's Makefile: CFLAGS = -g -Wall ... LDFLAGS = -lefence The -g option is necessary if you want gdb to be able to print out the line causing the problem. The Electric Fence library adds about 30 KB to your binary when compiled in and stripped. Once built, copy the binary to your target for execution as you would usually. By running the program on the target, you get something similar to: # command-daemon Electric Fence 2.0.5 Copyright (C) 1987-1998 Bruce Perens. Segmentation fault (core dumped) Since you can't copy the core file back to the host for analysis, because it was generated on a system of a different architecture, start the gdb server on the target and connect to it from the host using the target gdb. As an example, here's how I start my command daemon on the target for Electric Fence debugging: # gdbserver 192.168.172.50:2345 command-daemon And on the host I do: $ powerpc-linux-gcc command-daemon (gdb) target remote 192.168.172.10:2345 Remote debugging using 192.168.172.10:2345 0x10000074 in _start ( ) (gdb) continue Continuing. Program received signal SIGSEGV, Segmentation fault. 0x10000384 in main (argc=2, argv=0x7ffff794) at daemon.c:126 126 input_buf[input_index] = value_read; In this case, the illegal reference was caused by an out-of-bounds write to an array at line 126 of file daemon.c. For more information on the use of Electric Fence, look at the ample manpage included in the package. MEMWATCH replaces the usual memory allocation functions, such as malloc( ) and free( ), with versions that keep track of allocations. It is very effective at detecting memory leaks such as when you forget to free a memory region or when you try to free a memory region more than once. This is especially important in embedded systems, since there is no one to monitor the device to check that the various applications aren't using up all the memory over time. MEMWATCH isn't as efficient as Electric Fence, however, to detect pointers that go astray. It was unable, for example, to detect the faulty array write presented in the previous section. MEMWATCH is available from its project site at. Download the package and extract it in your ${PRJROOT}/debug directory. MEMWATCH consists of a header and a C file, which must be compiled with your application. To use MEMWATCH, start by copying both files to your application's source directory: $ cd ${PRJROOT}/debug/memwatch-2.69 $ cp memwatch.c memwatch.h ${PRJROOT}/project/command-daemon Modify the Makefile to add the new C file as part of the objects to compile and link. For my command daemon, for example, I used the following Makefile modifications: CFLAGS = -O2 -Wall -DMEMWATCH -DMW_STDIO ... OBJS = daemon.o memwatch.o You must also add the MEMWATCH header to your source files: #ifdef MEMWATCH #include "memwatch.h" #endif /* #ifdef MEMWATCH */ You can now cross-compile as you would usually. There are no special installation instructions for MEMWATCH. The memwatch.c and memwatch.h files add about 30 KB to your binary once built and stripped. When the program runs, it generates a report on the behavior of the program, which it puts in the memwatch.log file in the directory where the binary runs. Here's an excerpt of the memwatch.log generated by running my command daemon: = == == == == == == MEMWATCH 2.69 Copyright (C) 1992-1999 Johan Lindh = == ==... ... unfreed: <3> daemon.c(220), 60 bytes at 0x10023fe4 {FE FE FE ... ... Memory usage statistics (global): N)umber of allocations made: 12 L)argest memory usage : 1600 T)otal of all alloc( ) calls: 4570 U)nfreed bytes totals : 60 The unfreed: line tells you which line in your source code allocated memory that was never freed later. In this case, 60 bytes are allocated at line 220 of daemon.c and are never freed. The T)otal of all alloc( ) calls: line indicates the total quantity of memory allocated throughout your program's execution. In this case, the program allocated 4570 bytes in total. Look at the FAQ, README, and USING files included in the package for more information on the use of MEMWATCH and the output it provides.
http://etutorials.org/Linux+systems/embedded+linux+systems/Chapter+11.+Debugging+Tools/11.4+Memory+Debugging/
CC-MAIN-2017-30
refinedweb
1,138
55.84
The Activity DesignerNot actually a WF 4.0 blog any more :) Evolution Platform Developer Build (Build: 5.6.50428.7875)2014-02-13T07:27:00ZSlimming down your build - don't copy the intellisense files!<p>So you know how a lot of nuget packages include intellisense XML files and they get copied to your output binaries folder during build?<br /><br />I did a quick web search but soon for how to fix this, but soon gave up and resorted to searching my msbuild .targets files instead. Finally I found it. The AllowedReferenceRelatedFileExtensions variable.</p> <p><br />Just add this to your .csproj (or .props or .targets file depending on how you're structuring your build).<br /><br /> <!-- note: This controls what files *related to a DLL* are copied to the output directory with the dll.<br /> Setting custom AllowedReferenceRelatedFileExtensions to just .pdb and .pri - EXCLUDE .xml so that<br /> all sorts of giant intellisense XML files are not copied to the bin directories!<br /> You can override this per-project if you really want the .xml files for some reason --><br /> <AllowedReferenceRelatedFileExtensions><br /> .pdb;<br /> .pri<br /> </AllowedReferenceRelatedFileExtensions></p> <p>Of course once I discovered the variable, it was *easy* to find many other people already knew about this. I was just failing to find it, sigh.</p> <p>Example: <a href=""></a></p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 Autofac magical - part II<p>Now that you've <a href="">read part I</a> perhaps you can answer this.</p> <p>What does this code do?</p> <p> class FO : IEnumerable<br /> {</p> <p> public IEnumerator GetEnumerator()<br /> {<br /> throw new NotImplementedException();<br /> }<br /> }</p> <p> class Program<br /> {<br /> static void Main(string[] args)<br /> {<br /> var cb = new ContainerBuilder();<br /> cb.Register(ctx => new FO()).As<IEnumerable>().InstancePerLifetimeScope();<br /> cb.Register<Func<FO>>(ctx => { return () => new FO(); }).As<Func<IEnumerable>>();<br /> var x = cb.Build();<br /> using (var s = x.BeginLifetimeScope())<br /> {<br /> var f = s.Resolve<Func<IEnumerable>>();<br /> var i1 = f();<br /> var i2 = f();<br /> var ie = x.Resolve<IEnumerable>();<br /> }<br /> }<br /> }</p> <p>In particular<br /><br />Are <strong>i1</strong>, <strong>i2</strong>, and <strong>ie</strong> distinct?<br /><br />What if you comment out the line cb.Register<Func<FO>>(ctx => { return () => new FO(); }).As<Func<IEnumerable>>();?</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 Autofac magical?<p>The answer is yes!</p> <p>OK let me explain. I never registered anything as Func<IObservable<object>> and yet the below code still works. It turns out to be that not only does autofac understand Func<Dependency>(), it understands a whole bunch of things that you may or may not have intended to be dependency relationships. :)</p> <p>The details [Copied straight from]</p> <table class="docutils" border="1"> <tbody valign="top"> <tr class="row-even"> <td><tt class="docutils literal"><span class="pre">B</span></tt></td> <td>Direct Dependency</td> </tr> <tr class="row-odd"> <td><em>A</em> needs <em>B</em> at some point in the future</td> <td><tt class="docutils literal"><span class="pre">Lazy<B></span></tt></td> <td>Delayed Instantiation</td> </tr> <tr class="row-even"> <td><em>A</em> needs <em>B</em> until some point in the future</td> <td><tt class="docutils literal"><span class="pre">Owned<B></span></tt></td> <td><a class="reference internal" href=""><em>Controlled Lifetime</em></a></td> </tr> <tr class="row-odd"> <td><em>A</em> needs to create instances of <em>B</em></td> <td><tt class="docutils literal"><span class="pre">Func<B></span></tt></td> <td>Dynamic Instantiation</td> </tr> <tr class="row-even"> <td><em>A</em> provides parameters of types <em>X</em> and <em>Y</em> to <em>B</em></td> <td><tt class="docutils literal"><span class="pre">Func<X,Y,B></span></tt></td> <td>Parameterized Instantiation</td> </tr> <tr class="row-odd"> <td><em>A</em> needs all the kinds of <em>B</em></td> <td><tt class="docutils literal"><span class="pre">IEnumerable<B></span></tt>, <tt class="docutils literal"><span class="pre">IList<B></span></tt>, <tt class="docutils literal"><span class="pre">ICollection<B></span></tt></td> <td>Enumeration</td> </tr> <tr class="row-even"> <td><em>A</em> needs to know <em>X</em> about <em>B</em></td> <td><tt class="docutils literal"><span class="pre">Meta<B></span></tt> and <tt class="docutils literal"><span class="pre">Meta<B,X></span></tt></td> <td><a class="reference internal" href=""><em>Metadata Interrogation</em></a></td> </tr> <tr class="row-odd"> <td><em>A</em> needs to choose <em>B</em> based on <em>X</em></td> <td><tt class="docutils literal"><span class="pre">IIndex<X,B></span></tt></td> </tr> </tbody> </table> <p>So now I know!</p> <p>using System;<br />using System.Collections.Generic;<br />using System.Linq;<br />using System.Text;<br />using System.Threading.Tasks;<br />using Autofac;</p> <p>namespace IsAutoFacMagical<br />{<br /> class CO : IObservable<object><br /> {<br /> public IDisposable Subscribe(IObserver<object> observer)<br /> {<br /> throw new NotImplementedException();<br /> }<br /> }</p> <p> class DO<br /> {<br /> public Func<IObservable<object>> Magic { get; set; }<br /> }</p> <p> class Program<br /> {<br /> static void Main(string[] args)<br /> {<br /> var cb = new ContainerBuilder();<br /> cb.RegisterType<CO>().As<IObservable<object>>();<br /> cb.RegisterType<DO>().PropertiesAutowired();<br /> var x = cb.Build();<br /> var d = x.Resolve<DO>();<br /> var m1 = d.Magic();<br /> }<br /> }<br />}<br /><br /></p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 some ideas for keeping that data access code (and test code) tidy...<p>I.</p> <p><span style="text-decoration: underline;">Principle #1 - Keep serialization knowledge where it belongs - in your data access layer!</span></p> <p>Problem manifestation #1: we were using Entity Framework with string fields and effectively storing enumeration values as strings. What this tends to lead to is little bits of smelly code everywhere like the following:</p> <p>deployment.Status = DeploymentStatus.Active.ToString();</p> <p>if (String.Equals(deployment.Status, DeploymentStatus.Suspended.ToString());</p> <p>DeploymentStatus status = (DeploymentStatus)Enum.Parse(typeof(DeploymentStatus), deployment.Status, true);<br />switch (status) {...}</p> <p>Problem manifestation #2: we were also using Entity Framework to store json serialized dictionaries of settings....</p> <p>Solution:</p> <p.</p> <p><span style="text-decoration: underline;">Principle #2 - Query objects can be testability goodness!</span></p> <p>Problem manifestation: (Basically we were half-way there, but realizing none of the benefits, leading to brittle mocking code.)</p> <p>Again with the Entity Framework, we had a bunch of smell test code which would be (Test code before:)<br /><br />var resourceSpec = new ResourceSpecification(resourceName, otherQueryParameters);<br />var fakeStore = A.Fake<DeploymentStore>();<br />A.CallTo(() => fakeStore.GetDeployment(resourceSpec)).Returns(new Deployment { ... });</p> <p 'how do I mock out the retrieval of the actual Deployment resource from the database'?<br /><br />Code before:</p> <p>var deployment = _context.GetDeployment(resourceSpec);<br /><br />Code after:</p> <p>var deployment = resourceSpec.GetDeployment(_context);</p> <p>Test code after:</p> <p>var resourceSpec = new MockResourceSpecification(new Deployment { });<br /><br /><span style="text-decoration: underline;">Principle #3 - Resolve Queries for objects up-front at the input validation layer, then the actual work can be done</span></p> <p>This may remind you of the 'query command separation' principle...<br />Code pattern to avoid:</p> <p>function DoSomethingWithDeployment(ResourceSpecification r, blah, blah, blah)<br />{<br /> ...<br /> CallOtherFunction(r, blah, blah, blah)<br /> ...<br />}</p> <p>function CallOtherFunction(ResourceSpecification r, blah, blah, blah)<br />{<br /> Deployment d = r.TryGetDeployment();<br /> if (d == null) throw new DeploymentDoesn'tExistException();<br />}<br /><br />Code pattern to use:</p> <p>function DoSomethingWithDeployment(ResourceSpecification r, blah, blah, blah)<br />{<br /> Deployment d = r.GetDeploymentOrBust(); //throws an exception if deployment doesn't exist<br /> ...<br /> CallOtherFunction(d, blah, blah, blah);<br /> ...<br />}</p> <p>Why is the second code so much better than the first? Because it's now much easier reasoning about and testing the code path for deployment doesn't exist. <br />-validation code is all right there in the function taking <em>user input</em>, which is where you <em>expect</em> that you have to write validation oriented test cases.<br />-contract between your functions got clearer and simpler<br />-the code path it is now one less function deep, so now less test cases for the deeper function!</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 and connection throttling when self hosting with OwinHttpListener<p>[Disclaimer before we begin: I'm not really an expert on OWIN (henceforth 'owin') or HttpListener - I just researched this as best I could myself so I may get some stuff wrong. Question my authority!]</p> <p><strong>Self-hosting </strong.</p> <p.</p> <p>For instance, recently we had a discussion around security of one such owin hosted HTTPS endpoint for diagnostics/management, and possible DOS attacks. While in our particular case we don't care that much if this particular endpoint goes down due to a DOS attack, we <em>do</em>.</p> <p>1. There only seem to be a few basic strategies to choose from:</p> <p><strong>Filtering out</strong> or blacklisting bad traffic:<br />-Examples: IP based filtering, request-header-based filtering. The idea here is that you can put a filter in your request pipeline that detects attacking requests/packets somehow by <em>what they look like</em>, and discarding them. Possibly you do this by manually configuring filtering <em>in reaction</em> to an identified DOS attack. In order to do this you must have logs which let you identify what the bad traffic looks like. Possibly you implement some kind of automatic filtering system, which might actually be more of a <em>throttling</em> system...<strong><br /><br />Throttling </strong>traffic:<br />-You can throttle the server overall and attempt to limit total incoming traffic on your server to what you know your server can handle. The point of this is to stop your application performance degrading by trying to handle too many requests at once...<br />-At application level, if you're in an authenticated app, you can throttle requests per-user, once you've authenticated their connection... <br />...<br /><strong><br />Out-scaling </strong>traffic:<br />-Throw so much servers and bandwidth into action that you can actually handle all the load coming at you. This one may get expensive, so someone is going to have to weigh up the cost of this vs the cost of not doing this and just being DOSed...</p> <p>2. In general, your best defense against DOS attacks is a multi-layered one.</p> <p>At the application level, you can easily write a request handler that throttles the number of simultaneous requests your application instance will try to serve. For instance see answers to:</p> <p><a href=""></a></p> ?</p> <p>BTW. IIS is apparently not vulnerable experimentally to e.g. slowloris. So what about HttpListener?<br /><br />3. In the case of HttpListener, it is built upon HTTP.SYS and there are already some throttling mechanisms in place, for you to use, with reasonable defaults. But if you are not in IIS and are leveraging HTTP.SYS via the <a href="">Http Server API</a> you should be able to exercise control too</p> <p>Here are <em>some</em> of the properties you can theoretically set, <a href="">once you find the right API</a>:</p> <p><strong>(per <br />EntityBody - </strong>The time, in seconds, allowed for the request entity body to arrive.<br /><strong>DrainEntityBody - </strong>The time, in seconds, allowed for the HTTP Server API to drain the entity body on a Keep-Alive connection.<br /><strong>RequestQueue - </strong>The time, in seconds, allowed for the request to remain in the request queue before the application picks it up.<br /><strong>IdleConnection - </strong>The time, in seconds, allowed for an idle connection.<strong><br />HeaderWait - </strong>The time, in seconds, allowed for the HTTP Server API to parse the request header.<br /><strong>MinSendRate - </strong>The minimum send rate, in bytes-per-second, for the response. The default response send rate is 150 bytes-per-second.<br /><strong><br /></strong><strong>MaxConnections (per url group)- </strong>The number of connections allowed. Setting this value to HTTP_LIMIT_INFINITE allows an unlimited number of connections.</p> <p><strong>HttpServerQueueLengthProperty (per request queue) - </strong>Modifies or sets the limit on the number of outstanding requests in the request queue.</p> <p>Note the documented defaults are:</p> <table> <tbody> <tr><th>Timer</th><th>HTTP Server API Default</th><th>HTTP Server API Wide Configuration</th><th>Application Specific Configuration</th></tr> <tr> <td>EntityBody</td> <td>2 Minutes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>DrainEntityBody</td> <td>2 Minutes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>RequestQueue</td> <td>2 Minutes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>IdleConnection</td> <td>2 Minutes</td> <td>Yes</td> <td>Limited</td> </tr> <tr> <td>HeaderWait</td> <td>2 Minutes</td> <td>Yes</td> <td>Limited</td> </tr> <tr> <td>MinSendRate</td> <td>150 bytes/second</td> <td>No</td> <td>Yes</td> </tr> </tbody> </table> <p> </p> <table> <tbody> <tr> <td>HttpServerQueueLengthProperty</td> <td>ULONG</td> <td>1000</td> </tr> </tbody> </table> <p>Now <em>luckily,</em> if you are using HttpListener you don't have to go read the unmanaged code docs and figure out how to P/Invoke everything, because there is a <strong>TimeoutManager</strong> on HttpListener which lets you set all these properties via the <a href="">HttpListenerTimeoutManager</a> class - although they are not as thoroughly documented in the .NET api. But of course the even better news I just implied is just that these default limits exist is already going to be giving your application some sensible filtering and throttling goodness.</p> <p><br />4. OwinHttpListener comes with some <em>additional </em>checks and balances you can use for throttling requests at its own entry gate (instead of your application's). The way to do this is</p> <p>owinHttpListener.SetRequestProcessingLimits(int maxAccepts, int maxRequests);</p> <p>also, for convenience, owinHttpListener <strong>also</strong> lets you do SetRequestQueueLimit(long length), which modifies the aforementioned RequestQueueLength property of the Http Server API which for some reason HttpListener does not expose.<br /><br /></p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 F# (1)<p>C#</p> <p><a title="In F# there are many different collection-looking syntaxes that compile into something. What do they mean?" href="">In F# there are many different collection-looking syntaxes that compile into something. What do they mean?</a><br /><br />{a, b, c, d}, [a, b, c, d], (a, b, c, d), [|a, b, c, d|]<br /><br />Which is really just another way of asking this: </p> <div><a class="question-hyperlink" href="">Why doesn't Seq.groupBy work like I think it will with lists? (F#)</a></div> <div><a class="question-hyperlink" href="">What is difference between lists, arrays, and tuples in F#?</a></div> <div>How do I filter a list in F#? Question and Answer: <a href="">Best Functional Way To Filter List in F#</a></div> <div><a title="What are the best ways to concatenate lists in F#?" href="">What are the best ways to concatenate lists in F#?</a><br />Why do I ask the question? Because I know the answer is going to be 'it depends...'</div> <div><a title="Does F# have something like ternary ?: operator?<br />" href="">Does F# have a ternary ?: operator?</a><br />(The answer is basically yes... but the syntax is quite different)</div> <div>If you particularly like/dislike this experimental blog post which is a collection of stack overflow links (looks like link spam doesn't it?) let me know in the comments by all means - your feedback is appreciated.</div> <div! :)</div><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 but the language design…?<p.</p> <p:</p> <p>2) Don’t be vaporware - have a real implementation early and make it as available as possible.</p> <p:</p> <p>3a) use a frontend compiler architecture generating C (or other highly portable language) code</p> <p>3b) generate code for a virtual machine [e.g. PL/0, Forth]</p> <p>3c) be a very compact language to define, and therefore easy to implement a boostrapper interpreter (e.g. LISP, Smalltalk)</p> <p>3d) be self-hosting (compiler written in its own language), so that if some small set of core-language features can be ported in a bootstrap compiler, it can then compile your main compiler can </p> <p>4) Have some major commercial backing to attach to that will promote your language! Seriously, this has helped a lot of languages over time. :)</p> <p.</p> <p>6) Be lucky.</p> <p:</p> <p>-he defines a simple abstract machine architecture <br />-he defines a simple and powerful language which supports structured programming and code reuse, without all these other complex features everyone wants to put in their languages <br />-he writes most of the compiler in self-hosting fashion <br />-any time he has a programming project he just ports the compiler bootstrapper and virtual machine to whatever architecture, and he can reuse all his knowledge on a brand new system. <br />-bliss…?</p> <p>Even if his language never catches on with the mainstream he can still be quite a productive programmer in his language anyway, especially since he can keep tweaking the language to his own liking.</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 performance counter data in .NET<p>Nothing you can’t easily write yourself, but while playing around, I wrote this one myself, and learned one or two things. <br /></p> <p>Firsltry, it demonstrates the wrinkle that when reading % processor time you need to specify the instance name of the processor, or the special instance name “_Total”.</p> <p>Secondly you’ll notice that the % CPU time is a performance counter of type Timer100NsInverse, which results in it showing weird large numbers. </p> <p>MDSN says “Counters of this type calculate active time by measuring the time that the service was inactive and then subtracting the percentage of active time from 100 percent.” </p> <p>In other words, it tells you your processor’s IDLE time, as a count of some units of 100Ns intervals, measured over some other arbitrarily sized interval which you presumably need to find out from the counter sample timestamps… complicated, sigh.</p> <p>I used <a href="">UltimateTimer</a> for the timer, of course...</p> <p>using System; <br />using System.Collections.Generic; <br />using System.Diagnostics; <br />using System.Linq; <br />using System.Text; <br />using System.Threading.Tasks;</p> <p>namespace pcpusher <br />{ <br />    class Program <br />    { <br />        static Dictionary<string, string> CountersToMetrics = new Dictionary<string,string> <br />        { <br />            { @"\Processor(_Total)\% Processor Time", "percentProcessorTime" } <br />         };</p> <p>        static Dictionary<string, PerformanceCounter> CountersToPC = new Dictionary<string,PerformanceCounter>();</p> <p>        static void Main(string[] args) <br />        { <br />            foreach (var k in CountersToMetrics.Keys) <br />            { <br />                 var parts = k.Split(new []{'\\'}, StringSplitOptions.RemoveEmptyEntries); <br />                try <br />                { <br />                     var safePart = parts[0].Contains("(") ? parts[0].Substring(0, parts[0].IndexOf("(")) : parts[0]; <br />                     var pc = new PerformanceCounter(safePart, parts[1], readOnly: true); <br />                    if (parts[0].Contains("(_Total)")) <br />                    { <br />                        pc.</div><img src="" width="1" height="1">tilovell09 should explain something<p>I sometimes see unit tests which are downright confusingly opaque. Being a unit test, they are testing some smallish piece of code. Unfortunately by the time I finish reading the test I am none the wiser for what the importance of the code in question is, and what role it is meant to play in the larger system.</p> <p>When I see a test like this, I feel like there has probably been a failure somewhere, for few possible reasons. <br />-It’s hard to code review the test. <br />-It’s hard to understand if the test failed in a way which <em>matters. <br /></em>-It’s possibly a <em>brittle </em>test which is closely tied to the implementation being tested and requires a lot of churn maintenance alongside its constantly churning implementation.</p> <p>I feel like the underlying cause of all this is that the person writing the test hasn’t really asked themselves the question ‘why is it important that this test be written, and pass?’ <br /> <br />The problem frequently shows itself in the <em>name</em> of the test, and the <em>assertion</em> section of the test.</p> <p>public void TestNotifyCondition() <br />{ <br />    obj = /* setup object */; <br />    Assert.Equal(true, obj.notifyCondition(true)); <br />}</p> <p>If you see this fail, will you instantly understand what you did wrong? Will you understand <em>why </em>someone asserts the return value is true? And how will you answer the question, 'Is it failing because I made a silly mistake, or because my intended design change to how this code works was a bad one?’</p> <p>If implementation says <em>what</em> your system does, tests should say <em>why</em> it does it that way.</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 IQueryable poisoning your interfaces?<p>Thanks indirectly to a comment on my previous post, today I read <a href="">‘IQueryable is tight coupling’</a> (disclaimer: his words). I feel like it contains an interesting mixture of truth and panic, and makes a fine discussion topic.</p> <p>The main interesting truth he mentions is: nobody implements IQueryable fully! </p> <p>Yes! If you’ve used an ORM, you’ve probably seen it happen many times. You’ve also maybe not seen it happen lately, because you’ve either trained yourself to think in terms of the feature set you know will work, or trained yourself to think in terms of the SQL that will be generated and then work backwards from there. Or just given up on ORMs completely because you think the abstraction is too leaky.</p> <p>OK now that the truth is mostly done, let the panicking begin!</p> <p><u>Panic attack! “Using IQueryable in your interfaces violates the liskov substitutability principle”</u></p> <p>Does this seem accurate? What does the principle state? </p> <p>’If an type A is a subtype of B then objects of type A may be used in <em>stead of</em> objects of type B’ is the headline summary. [Sideline philosophical question: if type B is an interface, do objects of type B even exist?] [<strong>Note</strong>: what interface is violating the principle here? He’s <strong>not</strong> saying <em>IQueryable itself </em>violates the interface (how can an interface violate itself?). He’s saying if you use IQueryable in <em>your</em> interface, you’re going to be violating the principle by accident.]</p> <p>Suppose you’re writing a new interface, there are three ways to use IQueryable in <em>your</em> interface. <br /> <br />a) <em>Provide</em> IQueryable</p> <p>Ploeh: You did know that query providers are evil because they don’t <em>really </em>allow arbitrary queries, right? LSP violated! <br /> <br />b) <em>Consume</em> IQueryable</p> <p>void DoStuff (IQueryable dataSource); How can this be evil? Is it bad because you’re encouraging people to implement yet more broken query providers, and violate LSP?! <br /> <br />finally c) <em>Transform </em>IQueryable</p> <p>IQueryable ModifiedDataSource(IQueryable realDataSource); Ploeh didn’t discuss this one but I feel it’s an important special case.</p> <p> </p> <p>OK, now the panicking has started, I just want to say 'Don’t Panic! (yet)’</p> <p>The value of LINQ in terms of using it in your interface is not that it makes a promise that ‘you can write any expression’. The beauty is its promise ‘you can write <em>a variety </em>of expressions <em>in normal C#’’</em>. Or, in other words, you can write <em>more expressive code</em>.</p> <p>Seriously.</p> <p>I mean, you are constantly faced with a choice between writing different pieces of code on the provider side, and a corresponding choice of different pieces of code on the consumer side too.</p> <p><u> <br />IQueryable world</u></p> <p>IQueryable<Package> GetPackages(); <br />var topPackages = GetPackages().Where(p => p.Author.Name == userId) <br /> <br />[or perhaps <br />IQueryable<Package> GetPackagesByAuthor(); <br />var topPackages = GetPackagesByAuthor().Take(50);] <br /> <br /><u> <br />Non-IQueryable world</u></p> <p>IEnumerable<Package> GetPackagesByAuthor(string userId, int count top = 50); <br />var topPackages = GetPackagesByAuthor(userId, 50); <br /> <br /> <br />Which of these interfaces is more <em>expressive?</em> Easy - The IQueryable one is more expressive. If you choose the IQueryable interface, your consuming code can literally <strong>do way more stuff, without having to rewrite the interface or the implementation of GetPackages. </strong>If you want to change the non-iqueryable GetPackagesByAuthor to return a different result set instead of Packages, good luck! </p> <p>Which of these interfaces is more <em>fragile</em>? If you want to change the business logic <em>consuming</em> GetPackagesByAuthor to get even packages which have already been deleted/unlisted<em>… oh dear! That’s a breaking interface change.</em></p> <p>What are we really scared of here?</p> <p><u>Diversion: Language designer diversion</u></p> <p>How would a language designer interpret LSP?  Well, LSP is also a guideline on <em>how your type checker might work. </em>I would suggest an alternative LSP phrasing aimed at the language designer: ‘When objects follow a protocol <em>where they are substitutable for each other at runtime</em>, your type system should allow you to <em>model that </em>with an appropriate common base type, so that programmers can write type expressions <em>such that type safety checking is satisfiable</em>’<em>.</em></p> <p>Yes, LSP really only makes sense when you’re talking about a statically type checked language.</p> <p><u>Second diversion: an interesting special case: transforming IQueryable?</u></p> <p>What about the case where we write something that transforms IQueryables like IQueryable SelectVersionsForPackages(IQueryable packages); <br /> <br />Logically, we are doing a <em>perfect </em>implementation of IQueryable! That is, we will provide you a perfect implementation of IQueryable in our output, as long as the implementation of the input IQueryable was perfect, and supports arbitrary expressions.</p> <p><u>Returning to the debate</u></p> <p>I think the LSP argument is really this: </p> <p>When you do IQueryable, this is bad because you are postponing <em>compile time </em>type safety checking to <em>runtime</em>. Yes, this is the <em>actual consequence</em> of the IQueryable fib<em>.</em></p> <p>I think my position is this:</p> <p>When you do IQueryables, yes, you are fibbing to the type checker, but it is a fib of great convenience because, it <em>lets every part in your system speak the same interoperable language if IQueryable while keeping the type checker happy, and losing no expressiveness.</em></p> <p>Let’s see what happens if we scale up the type-checking argument to look at the largest systems that are violators of the LSP – the ORMs themselves: what could any ORM have realistically done <em>instead of </em>implement IQueryable, that would have given you <em>type errors, instead of runtime errors.</em>? <br />-They could have written their own <em>new</em> interface to use instead of IQueryable. <br />-It would have been called IEntityFrameworkQuery or something. <br />-It would not allow you to pass it Expression<Func<T>> etc – because it <em>doesn’t really support arbitrary expressions</em>. In fact it would probably require you either to <em>manually build the expression trees, using code, in order that you can build type-checked expression trees that are not dangerous Expression<T>.</em></p> <p><em>Only… <br /></em><em>It wouldn’t really be an ORM any more then, would it? :-/</em></p> <p><em>And also since you had to use a custom interface, it’s highly unlikely that your code written to use it can interoperate with any other ORM…</em></p> <p>[So, just in case you are in doubt, let me point out the answer to my blog title is ‘No!’]</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 - how to make your EF queries testable<p>So, some time back I spent some time <a href="">agonizing around testability of code which has to talk to a database</a>. I now think I have found a somewhat reasonable answer, which assumes that you are using some kind of LINQ query relational database mapper like EntityFramework which my samples are based on.</p> <p>It has these basic features. <br />-Your queries are unit testable independent of a database. <br />-Your queries are usable/testable with a database, if you want to. <br />-Your queries aren’t automatically easy to mock out - at least the way I am doing it now. Maybe that can be improved, however the good news is that frequently you can just mock out your DbContext instead <br />-It leads you in a nice direction of <em>keeping the fully power of IQueryable at your fingertips.</em></p> <p>What do I mean by the last point?</p> <p>Well let’s see. I’m sure you’ve seen an interface like this:</p> <p>interface IPackagesService {</p> <p>     IList<PackageVersion> GetPackageVersionsById(string id); <br />     IList<Package> GetPackagesByAuthor(string author); <br />}</p> <p>This is basically done because it is a solution to a problem: you want to make it possible to mock out your queries, and make some business logic unit testable. But what you are also doing is trying to drink a firehose through a couple straws. The firehose is the full power of SQL. The straw is a (string author) filter on one hand, and a list of packages with a <em>predetermined set of eagerly-populated properties </em>on the other.</p> <p>Very soon these restrictions start to feel chafing, which leads the interface of your functions to mutate into more and more specialized and arcane looking function signatures:</p> <p>interface IPackagesService { <br />     IList<Package> GetPackagesByAuthor(string author, bool withAllPackageVersions = false); <br />}</p> <p>If you continue this and take it to its full extreme, you end up building a magical parameters object with many properties which you will set to try to <em>restore to yourself the full power of LINQ, which you took away by going through the (string author) straw in the first place.</em></p> <p>So what do you do instead? Anyway, I’ve basically given away what I think the answer already: IQueryable all the things!</p> <p>class IDbContext { <br />    IQueryable<Package> Packages { get; set; } <br />} <br /> <br />class PackageAndVersion { <br />     Package Package { get; set; } <br />     PackageVersion LatestVersion { get; set; } <br />}</p> <p>static class PackageQueries { <br />    static IQueryable<Package> ByAuthor(this IQueryable<Package> set, string author); <br />    static IQueryable<Packages> ById(this IQueryable<Package> set, string author); <br />    static IQueryable<PackageVersions> SelectAllVersions(this IQueryable<Package> set); <br />    static IQueryable<PackageAndVersion> SelectPackageWithLatestVersion(this IQueryable<Package> set); <br />} <br /></p> <p>When you consume it you are now writing: <br /> <br />var allUsersPackageVersions = _context.Packages.ByAuthor(user).SelectAllVersions(); <br />var allVersionsOfPackage = _context.Packages.ById(id).SelectAllVersions();</p> <p>and so on…</p> <p>By the way you’ll notice the implementation of any of those methods is trivial.</p> <p>ById(this IQueryable<Package> set, string author) { return set.Where(p =>p.Author.Name == author); } <br />SelectAllVersions(this IQueryable<Package> set) { return set.SelectMany(p => p.PackageVersions); }</p> <p>So you’ll probably say “It’ so simple. How could this possibly add any value?” Well, first of all, simplicity is kind of <strong>good </strong>in and of itself. “But I thought you were saying this is testable? How?” Now that’s a good question!</p> <p>Here are your options: <br />-Now it’s dead simple to test your query <em>in isolation against any IQueryable!</em> <br />-Testing your business logic <em>combined with your real query is also easy. </em>Just mocking out DbContext.Packages with a stub that returns a collection of packages. This is basically the ‘Database Faker’ strategy, when that is a good fit for what you want to test. <br />-You could also fake out your query the traditional way by having hookable methods that wrap the query at the top level, if you want to… I don’t think you do though. What you really want is… <br />-Or you could just do another magical thing: stub out your queries on demand and just give the results you want.</p> <p>Aha, this is a new idea to me! But how is that possible…? Well, luckily IQueryable is all virtual calls, so it’s mockable... <br /> <br />So now we can mock out out context <em>and the query in some fell swoops </em>by doing this:</p> <p>context = A.Fake<DbContext>();  <br />a.CallTo(() => context.Packages).Returns(new QueryFaker<Package>( <br />{ <br />   new string[] { “author1”, “author2”, “author3” }// expected results of the query at the point the query is enumerated - whatever the heck type you want – just put the expected results here <br />})); <br /> <br />var result = context.Packages.Top3().SelectAuthors().Select(author => author.name); <br />Assert.Equal(“author1”, result[0]);<t><t><t><t><t><t><t><t><t><telement><telement><telement><object>(expression, _results); } public TResult Execute<TResult>(Expression expression) { return (TResult)_results; } public object Execute(Expression expression) { return _results; } }</object></telement></telement></telement></t></t></t></t></t></t></t></t></t></p> <p> <br />Conclusion: <br />Dear reader, <br />The conclusion in your mind reading this should be <br />-YES! You really don’t have to insert interfaces between your classes <em>just so that your queries are mockable</em>. <br />-YES! You can still test your queries! <br /> <br />Which of the patterns discussed in the previous post does this come the closest to? You can debate, but I would say this is just a very lightweight variant/factoring of the QueryPattern, which in its more heavyweight version that I have seen before is similar in spirit, but requires you to model each query as a dependency object, and with the interesting bias that *not* mocking out the query is the ‘defaultest’ thing to do. You have to go the extra mile to mock out a query, and you’ll do it when you think it’s easier than providing fake data in the mock db. <br /> <br />Regards, <br />Tim</p> <p>PS, have I tried these ideas at more than toy implementation scale? Not yet. <img class="wlEmoticon wlEmoticon-smile" alt="Smile" src="" /> <br /></p> <p>Appendix: Bare bones implementation of QueryFaker: <br /> <br />class Query<t> : IQueryable<t>, IQueryable, IEnumerable<t>, IEnumerable, IOrderedQueryable<t>, IOrderedQueryable { public IQueryProvider Provider { get; protected set; } Expression expression; public Query() { this.expression = Expression.Constant(this); } public Query(Expression expression) { this.expression = expression; } Expression IQueryable.Expression { get { return this.expression; } } Type IQueryable.ElementType { get { return typeof(T); } } public IEnumerator<t> GetEnumerator() { return ((IEnumerable<t>)this.Provider.Execute(this.expression)).GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return ((IEnumerable)this.Provider.Execute(this.expression)).GetEnumerator(); } } class QueryFaker<t> : Query<t>, IQueryProvider { public object _results; public QueryFaker(object results) : base () { base.Provider = this; this._results = results; } public QueryFaker(Expression expression, object results) : base(expression) { base.Provider = this; this._results = results; } public IQueryable<telement> CreateQuery<telement>(Expression expression) { return new QueryFaker<telement>(expression, _results); } public IQueryable CreateQuery(Expression expression) { return new QueryFaker<object>(expression, _results); } public TResult Execute<TResult>(Expression expression) { return (TResult)_results; } public object Execute(Expression expression) { return _results; } }</object></telement></telement></telement></t></t></t></t></t></t></t></t></p> <p>Excuse the formatting – copy and paste sucks today. <img class="wlEmoticon wlEmoticon-surprisedsmile" style="style" alt="Surprised smile" src="" /></p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 code patterns I don't love in C#.<p>Pattern 1: TryGetFoo that returns boolean.</p> <p>MyEnum ret;<br />if (Enum.TryParse<MyEnum>(normalized, true, out ret)) { return ret; } else { return null; }</p> <p>You would think this slightly more concise line would work</p> <p>MyEnum ret;<br />return Enum.TryParse<MyEnum>(str, true, out ret) ? ret : null;<br /><br />But it turns out that is no good. Compiling the ternary operator, compiler can't even figure out the common type of null and MyEnum is Nullable<MyEnum>!<br /><br /.<br /><br />If javascript had enums it would probably be like:<br /><br />return enumType.TryParse(str, true);</p> <p>Of course javascript doesn't really have enums, so the function might either be returning some object, some string, or some number. But the point of parsing here is that at least you know it's constrained to a set of certain fixed values.<br /> </p> <p>Pattern 2: is/as/casting.</p> <p>if (reader.Value is Foo)<br />{<br /> (reader.Value as Foo).doSomething();<br />}</p> <p>If only the compiler and intellisense could auto-infer the Fooness in this scenario... of course there are reasons to do with method overloading that this would sometimes realy suck too.<br />Speaking of method overloading...<br /><br />Pattern 3: many slightly different method overloads with optional parameters<br /><br />LogException(a, b = null, c = null, d = null)<br />LogException(a, e, f = null)</p> <p><br />The thing I don't like about this pattern is that it always turns out that the particular parameters you want to use are not quite matching up with the order/selection of parameters that someone else thought would make good defaults. <br />To me it's awesome how this one gets solved in javascript. Parameter objects to the rescue!<br />LogException({ a: x, b: y, f: z});<br />Now you no longer have artificial strictures on which orders and subsets of parameters are valid to supply. Only the function logic itself will govern this (by throwing if it really must)<br />Of course for that to work, someone had to try hard to make one function work for all possible parameter sets. But at least there's much saved stress in the process of consumption. :-P</p> <p.<br />Now some debugging will confirm or deny this new hypothesis... denied. OK it was something else. But I had to consider the possibility. :p</p> <p><br /> </p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 Web Roles + RoleShared. Grrrrr…<p>I had a deployment to azure failing yesterday, and I thought ‘I know what caused this. It’s a dll dependency break from upgrading to Azure SDK 2.3.’</p> <p>Of course I was right.</p> <p>But looking at <em>how</em> I was getting screwed by the new sdk switch, I felt surprised at the mechanisms involved.</p> <p>Of course first I had to jump through some hoops to get remote desktop working – our logging wasn’t running either (SDK dependency), and configuring remote desktop after deployment from the portal tends to fail due to some timing issue with role restarts, so I had to configure remote desktop in the package and redeploy. Much time wasted.</p> <p>Once remoted in, I could see the event viewer. (Isn’t there a decent way to see windows event logs for your web roles without logging in via remote desktop? I feel sure there must be, but I don’t know about it. Why is that?)</p> <p>An unhandled exception occurred. Type: System.IO.FileLoadException Process ID: 3080 <br /> Process Name: WaIISHost <br /> Thread ID: 5 <br />AppDomain Unhandled Exception for role CacheExtension_IN_0 <br /> Exception: Could not load file or assembly 'Microsoft.WindowsAzure.Diagnostics, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies.</p> <p>Aha! I was right! I felt happy and secure. All I had to do was add some redirects to my web.config declaring assembly redirect policy to version 2.3.0.0, and then life would be good again, right? Wait… WTH? <strong>I already have this redirect in my web.config.</strong> <strong>Why doesn’t it work?!</strong></p> <p><dependentAssembly> <br />  < assemblyIdentity <br />  < bindingRedirect <br />< /dependentAssembly> <br />< dependentAssembly> <br />  < assemblyIdentity <br />  < bindingRedirect <br />< /dependentAssembly> <br /></p> <p>Well it turns out, if you look at the actual call stack of the exception above, that it’s being called from my WebRole.OnStart() which is itself ultimately called from Microsoft.WindowsAzure.ServiceRuntime.Implementation.Loader.RoleRuntimeBridge.<InitializeRole>b__0() - which is something that runs in a process called WAIISHost.exe, <em>not </em>the IIS process w3wp.exe.</p> <p>So, web.config just does not apply here.</p> <p>Well, I’ve seen how you fix this for worker roles though. You just add app.config in your worker role project, and it ends up in your CSPKG as MyWorkerRole.dll.config, and everything works. Do I need to add app.config to my web project? That would be silly! I try it anyway, and it doesn’t work:</p> <p>In fact, I notice that what really happens is: build will copy <em>web.config (not app.config) </em>to bin/debug/<em>MyWebSite.dll.config. </em>OK. So why the heck doesn’t this work then?</p> <p>Well, of course there is a problem. Somehow, cspack will not include the MyWebSite.dll.config in the output cspkg file.</p> <p>So. Yeah.</p> <p>OK, easiest way to solve this is just update my assembly references in our RoleShared.dll which is the one which thinks it depends on Azure SDK 2.2 so they point at Azure SDK 2.3. Of course notice the name is <em>RoleShared</em>. <em>None of our other roles which we deploy with this dll yet use Azure SDK 2.3</em>. So I’m really just punting the problem down the road. Ugh. Grrr…</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 SDK has trouble with worker roles whose names are different from the role’s project name – and how to fix it<p>I had seen this before, but today I became determined to figure out how to fix my targets file. <br />The problem is caused by this code in the msbuild targets which assumes that project name and role name are the same. <br />AFAICS Azure 2.2 SDK, 2.3 SDK and 2.4 SDK have this exact same problem.</p> <p><ItemGroup> <br />      < WorkerRoleReferences <br />        < RoleType>Worker</RoleType> <br />        <RoleName>$(_WorkerRoleProjectName)</RoleName> <br />        <ProjectName>$(_WorkerRoleProjectName)</ProjectName></p> <p>To fix it, you change it to</p> <p><ItemGroup> <br />  < WorkerRoleReferences <br />    < RoleType>Worker</RoleType> <br />    <font style="background-color: rgb(255, 255, 0);"><RoleName>$(_WorkerRoleProjectRoleName)</RoleName></font> <br />    < ProjectName>$(_WorkerRoleProjectName)</ProjectName> <br /></p> <p>and define WorkerRoleProjectRoleName and above:</p> <p><PropertyGroup> <br />     <_WorkerRoleProject>%(WorkerRoleProjects.Identity)</_WorkerRoleProject> <br />     <_WorkerRoleProjectName>%(WorkerRoleProjects.Name)</_WorkerRoleProjectName> <br /> +     <font style="background-color: rgb(0, 255, 0);"><_WorkerRoleProjectRoleName>%(WorkerRoleProjects.RoleName)</_WorkerRoleProjectRoleName></font> <br />     <_WorkerRoleConfiguration>%(WorkerRoleProjects.Configuration)</_WorkerRoleConfiguration> <br />     <_WorkerRolePlatform>%(WorkerRoleProjects.Platform)</_WorkerRolePlatform> <br />    </PropertyGroup> <br /></p> <p><Message Text="WorkerRoleProject=$(_WorkerRoleProject)" /> <br />< Message <br /><font style="background-color: rgb(0, 255, 0);">+<Message Text="WorkerRoleProjectRoleName=$(_WorkerRoleProjectRoleName)" /></font> <br />< Message <br />< Message <br /></p> <p>Web Roles probably have the same issue too. I haven’t checked.</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 lexer hack<p>I'</p> <p></p> <p. :)</p> <p>So - I wondered if given its elegance this technique is a better modern alternative to the old tokenizer/parser split. And whether it really helps you solve tricky tokenizing problems.</p> <p>It turns out the latter is true. It can help you solve tricky tokenizing problems if you want your tokenizer to do clever things like tell you the difference between a typename and a variable name.<br / >=.</p> <p>But that's kind of silly. Why would you overload the 3 symbols in this way and need the tokenizer to <em>tell you which meaning</em> the symbols have?<br /><br />So when would it <em>actually</em> be useful to be doing context sensitive tokenization? When you want to <em>split up the syntax elements differently based on context.</em>.</p> <p>A subtle distinction, and one which has me say to myself "Who would possibly want their compiler to be so lawyer-picky as this? That's too hard to imagine a dumb computer doing."</p> <p>So anyway. The outcome of all this reading in my mind is that separate tokenizers and parsers is still a perfectly valid design pattern for modern compiler writing. Because it is hopefully going to be OK for your tokenizer to be pretty dumb.</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 vs module systems...<p>A well known pain point of the CLR is that loading your program and running a few lines takes too long. While I don't know where exactly my beliefs came from, my beliefs are that this is because of<br /><br />a) assemblies needing to be JITTED from MSIL in order to execute<br />b) in order to JIT you need to load lots of types which reference other types that reference other types, partly so the CLR can try to enforce type safety, partly because that's just how CLR works. There's cross-referencing baked in.<br />c) therefore, basically needing to load and process data that's spread out in lots of files on your disk</p> <p>So we can say it's because you have to do lots of compilation of your code, not just running it. And hence a new trend towards .net language code (C# etc) that is actually precompiled to native code, with thrilling statistics like 'starts up to 60% faster and uses less memory!'</p> <p>Cool. But anyway. I have wondered, is a scripting language like javascript any better off in theory than this? On the face of it it seems that yes, it still needs to do lots of compilation to load your script and use a smart JIT to optimize it, but no, it doesn't have to do nearly as much this type references other type and type safety checking stuff.</p> <p>What javascript does have as downsides from no inbuilt cross-referencing of types is a certain lack of safety such as a) you don't make type errors, and b) you need to figure out yourself how to get all your code loaded in the order that it is needed, so that you don't have the problem of calling functions before they are fully defined yet, or accidentally undefine and redefine things.</p> <p>With recent advances in javascript though, it seems like there are kind of solutions to these problems. Typescript can help a lot with a), and a module system like ARM can help you a lot with b). So, trollish question - is there any reason now not to write your servers in typescript+node.js? :)</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 tokenizing...<p>So I flippantly said 'write a helper function that captures the right pattern for tokenizing' last post... But when you sit down to think about it, a helper function feels like the <em>opposite</em> of what you logically do when you are implementing a finite state machine... because there is no way to have helper functions be case statements in a switch block!</p> <p>i.e. you can do this:</p> <p>if (t = MatchStringLiteral()) return t;<br />if (t =TryMatchSymbol('+=')) return t;<br />if (t = TryMatchSymbol('+')) return t;</p> <p>And that almost looks clean. (But not really clean - it seems like the whole if (t = blah return t pattern) cannot be abstracted out further, in C# anyway) but then if you ever programmed C your next thought is probably 'Would it perform well? I thought tokenizers are supposed to be implemented by lots of switch statements because those are fast?'</p> <p>Well, you can try to mix it up:<br /><br />switch(peeknextchar())<br />{<br /> case '+':<br /> if (t = TryMatchSymbol('+=')) return t;<br /> if (t = TryMatchSymbol('+')) return t;<br /> case 'a..z': if (t == TryMatchIdent()) return t; //actually I don't think you can do case 'a..z' in C# unfortunately<br /> .... </p> <p>} </p> <p>- but really this is just getting less and less elegant. And breaking the golden rule of optimization which is measure first. :p<br />Perhaps there should be a golden rule of abstraction too. Anyone know what it is? I don't think I've heard one yet. The layer of indirection saying comes to mind, but it's hardly prescriptive of an ideal.<br />If I were to invent one, perhaps it would be 'abstractions should be solutions to an <em>expressiveness</em> problem in your code'</p> <p>So anyway...<br />How can we write expressive C# code for tokenizing things? Not worrying about whether it is optimizable...<br />Well. Regex are an obvious way of specifying patterns of things to accept without writing loops etc... but they work less well for symbols where you have to know all the escape characters...<br />You might even be able to optimize... see? I can't stop worrying about optimizing. :)<br /> <br />So anyway... these tokens I normally think of as outputs to the tokenizing problem. But maybe the token itself also describes how to solve the problem?<br /><br />Imagine the increment by token '+='. You can have an instance of the token at some points in the document. They could even all be the same singleton instance if you don't need to know their position in the document. The singleton could also know how to match itself to the next part of the untokenized input stream and say whether it is the next token or not.<br /> <br />There's a whole bunch of conceptual overloading going on here, saying one object knows how to perform many tasks. I feel like this overloading is sort of a natural thing to do in javascript, because it's prototype based, and you can come up with ad-hoc solutions if you suddenly realize a need for separate token instances with different text per instance for string literal tokens, or you realize you need to record position on some tokens. You just start doing it, and de-singletonize your object graph as necessary to express the required variation.<br />You don't have to think up types and interfaces that plan in advance the separation of roles and responsibilities... that's kinda nice.</p> <p>In C# OTOH I am like 'do I need to have both a Token type and a TokenGenerator type? And some subclass that implements both might be good, for the singleton-happy case?'<br />I.e.:</p> <pre> interface IToken<br /> {<br /> string Text { get; set; }<br /> }<br /><br /> interface ITokenGenerator<br /> {<br /> Func<TokenizingState, IToken> Match { get; set; }<br /> }<br /><br /> class Stok: IToken, ITokenGenerator // 'SymbolToken' or 'SingletonToken' - covers all simple symbolic tokens: + - * { } ( ) . , ; :<br /> // also +=, -=, *=, /=, => <br /> {<br /> public Stok(string tokText)<br /> {<br /> this.Text = tokText;<br /> this.Match = (s) => s.StartsWith(this.Text) ? this : null;<br /> }<br /><br /> public string Text { get; set; }<br /> public Func<TokenizingState, IToken> Match { get; set; }<br /> }<br /><br /></pre> <pre>Now I can model my tokenizer as just a collection of token generators. Hopefully... [spot any obvious problems yet? :) ]</pre><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 which I try to write a tokenizer, and fail...<p>I reread something by Steve Yegge, which I think was his NBL thing. Anyway, he said something to the effect of 'writing a programming language will make you a better programmer'. And I thought 'Really? Well, why not. Practicing any sort of programming probably helps, but if you're also doing that kind of introspection maybe it helps more.'</p> <p>Anyway. Thought process continues and eventually I spent my evening writing a tokenizer. A buggy one. By the time I went to bed my brain was too fried to see the bugs in my code.</p> <p>Tonight I was still having trouble seeing the bugs in my code, until I compared the input with the output more closely.</p> <p>And this lead me to find that the bugs were the kind of bugs that probably happen to people writing tokenizers <em>all the time (at least people who don't get much practice)</em>.</p> <p>They are<br />1) Consuming too many chars of input when building the token. This goes unnoticed when the next char is meant to be ignored anyway, like spaces. But it suddenly becomes very noticeable when you are missing an expected '(' token.<br />2) Not outputting the final token, because that's the end of the input stream!</p> <p>Now I've identified my bugs, it's time to think about why they happened, and what to do better instead.</p> <p>Why bug 1?</p> <p>I decided that I would write my tokenizer as an implicit state machine, where the states are the execution flow through the code, i.e. Program Counter. Which is just a fancy way of saying lots of if/switch statements and while loops, with nesting as deep as my tokens are complicated, which luckily isn't very. Now that, in itself is not the cause of the bug. The cause of the bug is that the logic in the while loops has to be exactly right, i.e. <em>peek</em> at characters as you go along, <em>then consume them if matched, </em>instead of<em> pull characters</em>, then try to match them and (bug) forget them otherwise.</p> <p>So you have to do<br />while (peekc().isAlphaNumeric) { token.append(nextC()); }</p> <p>not<br />while (c.isAlphaNumeric() && (c = nextC())) { token.append(C); }</p> <p>Of course once you've discovered the <em>right</em> pattern, you might as well codify it somehow as a helper function, so you don't keep forgetting and screwing it up.</p> <p>Why bug 2?</p> <p>I decided nextC() would throw at end of input, and I would catch it higher up. This would have maybe worked, if I had finally clause that would return the token being constructed....</p> <p>It may just be the case that exceptions are just a really silly way of handling end of input. Still thinking about this one.</p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 Quick Reference to Azure Diagnostics<p>If you’ve used Azure much, you may have eventually decided to use the DiagnosticsAgent plugin, as I did. However, you may also be dissatisfied (personally I am) with the amount of detail that’s out there about how it all works, what it normally does, etc. So I’m going to take a point-in-time snapshot of diagnostics agent files on my VM and poke through them and record what I find interesting and important, so that I or anyone can look it up later as quick reference (as with most of my blog posts). [Since I work at MS I should also mention this post is a personal project, <strong>not</strong> official documentation, and this may even be completely wrong.  End disclaimers.] </p> <p>So... what actually happened when I add the diagnostics plugin to my worker role? Well, it’s going to depend a bit on what configuration I do:</p> <p>1) I need to add some configuration for it in cscfg. Actually, the only bit of configuration which goes in cscfg is a ‘diagnostics storage account’.</p> <p>2) I can add some <em>more </em>configuration for it in a diagnostics.wadcfg file in my cloud service project’s role. That might look something like this:</p> <p><?xml version="1.0" encoding="utf-8"?> <br />< DiagnosticMonitorConfiguration"</a>> <br />  < DiagnosticInfrastructureLogs /> <br />  <Directories> <br />    < IISLogs <br />    < CrashDumps <br />  </Directories> <br />  <br />  <!-- Note that the fastest scheduled transfer can go is 1 minute --> <br />  <Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M" scheduledTransferLogLevelFilter="Verbose" /> <br />  < WindowsEventLog <br />    < DataSource <br />  </WindowsEventLog> <br />  <PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT5M"> Counters> <br />< /DiagnosticMonitorConfiguration></p> <p>If you are learning about diagnostics, you can <em>notice </em>a bunch of things about it from this file:</p> <ul> <li>you can set a ‘configuration change poll interval’. More on this later.</li> <li>there’s a quota of how much storage to use</li> <li>logs generated by <em>something</em> are transferred on a ‘scheduled transfer period’</li> <li>windows event logs for configurable source can be collected, and processed again according to a ‘scheduled transfer period’</li> <li>you can pick performance counters off of the machine, by name, which you want automatically logged</li> <li>it uses the XML timespan format PT1M and PT5M everywhere to mean ‘one minute’ and ‘five minutes’</li> </ul> <p>And that noticing raises an important question: what are the <Logs> and who generates them?</p> <p>Well, turns out it’s really easy to generate logs. You can use the System.Diagnostics.TraceListener to collect logs you generate in code.</p> <p>For that to work, you need either some code or some configuration in your role.dll.config or app.config that creates the Azure Diagnostics TraceListener and adds it to the trace listener collection. <br /> <br />< system.diagnostics> <br />  <trace autoflush="true"> <br />    <listeners> <br />      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics"> <br />        <filter type="" /> <br />      </add> <br />    </listeners> <br />  </trace> <br />< /system.diagnostics> <br /> <br />(Note that it’s using the Version 2.2.0.0 assembly because that’s the version of the Azure SDK we’re using to build and deploy the cloud service project, and so we can expect that is the right assembly version for what’s installed on the machine.) <br /> <br />OK, so that’s Logs. <br />So Azure Diagnostics trace listener is going to collect all your logs and performance counters and… do what with them? <br />So you knew the answer to that right? It’s going to (eventually) shove them Azure Tables. </p> <p>The table names are governed by convention, and I don’t know a way to reconfigure them. Which annoys me. But anyway, let’s accept that as a given for now. They are:</p> <p>-WADLogsTable <br />-WADPerformanceCountersTable</p> <p>You don’t need to do anything to create these tables, they will be automatically created in your storage account for you once your diagnostics are running.</p> <p>See I’m jumping ahead. Because we’re now done configuring, you want to deploy your role and run it, right? So what actually happens then?</p> <p>1) Azure runtime starts up the diagnostics agent plugin. Result of this is that DiagnosticsAgent.exe is launched, and it receives the diagnostics storage account info and .wadcfg that you configured as its initial configuration. <br />2) DiagnosticsAgent.exe sets up ‘configuration polling’ which monitors <em>something </em>for configuration changes. More about <em>something</em> later. <br />3) DiagnosticsAgent.exe launches the MonAgentHost.exe which I believe is the actual monitoring agent that polls your performance counters and logs, and transfers data to the table storage. MonAgentHost appears to be unmanaged code.</p> <p>Now this configuration polling thing is interesting. It’s possible to <em>change </em>the diagnostics configuration after your role is running. Basically the way you do this is you use the diagnostics configuration API ‘<a href="">Microsoft.WindowsAzure.Diagnostics.Management</a>’.</p> <p>In order to make changes using this API the basic flow you follow is: <br />1) get a RoleInstanceDiagnosticManager object that points at your particular role in that particular deployment <br />2) call its SetCurrentConfiguration() method</p> <p>What this then <em>actually does</em> it goes and updates (or creates) a specially named configuration blob in your diagnostics storage account, in the ‘wad-control-container’. It turns out that DiagnosticsAgent.exe has been pinging storage to try and read this configuration blob every minute or however often you set its configuration polling for. Once there’s new config here, that causes the diagnostics system to try to update itself with the new config.</p> <p>The slightly cool thing about this system is that you can use it to update the diagnostics configuration of <em>any role</em> at <em>any time from anywhere.</em> As long as you have permissions to update the blob in that container, of course. Your role can use this API to update its own config too, if that’s what you want, or perhaps you greatly prefer code-based configuration to .wadcfg XML.</p> <p>OK.</p> <p>So now my diagnostics are happily running and getting reconfigured and doing exactly what I want, right? Wait… what is it doing?</p> <p>Well, <br />1) Your app edm, and turns them into ETW events in the Windows event log <br />2) It collects</p> <p>They are just rows in Azure Tables? <br />Yes, well I mentioned that before, when I wrote <a href="">a quick tool for retrieving Azure Diagnostics logs</a>.</p> <p>But of course if you want to do anything <em>useful </em>with <em>large amounts of log data</em> you’re going to want to do some smarter analysis of the data… and so you’re going to want to know the schema of the table. Which I can’t find officially documented. Grr.</p> <p>So: here is the schema as mainly derived from the internet… (references: <br /> <br /><a title="" href=""></a> <br /><a title="" href=""></a> <br /><a title="" href=""></a> <br /><a title="" href=""></a></p> <p> </p> <p>The things that obviously matter for querying table storage are PartitionKey, RowKey, since they are what table storage can performantly query. Also Timestamp/EventTickCount matters to some degree, but you don’t want to be querying by Timestamp if you can query by PartitionKey+RowKey instead. <br /> <br />Consensus is that the way PartitionKey is generated is from DateTime.Ticks, plus a bunch of zeros: <br /> <br />0635313933600000000 <br /> <br />You’ll note that the values here tends to deviate slightly from the one in Timestamp. <br />Off the top of my head, I believe RowKey was just an arbitrary index value to ensure all entries in the partition have a unique key. <br /> <br />I’m of the belief that the difference is because EventTickCount is the timestamp of the actual log entry when its created on the machine. PartitionKey, on the other hand is either a rounded timestamp, or a timestamp of when it was uploaded by the monitoring agent. <br /> <br />Now remember, PartitionKeys and RowKeys are <strong>strings. </strong>So you don’t need to query for PartitionKey > 0635313933600000000, instead you can just search for > 06353139336.</p> <p>Aside from PartitionKey and RowKey, the table entries for WADLogsTable are: <br /> <br /> public long EventTickCount { get; set; } <br /> public string DeploymentId { get; set; } <br /> public string Role { get; set; } <br /> public string RoleInstance { get; set; } <br /> public int EventId { get; set; } <br /> public int Level { get; set; } <br /> public int Pid { get; set; } <br /> public int Tid { get; set; } <br /> public string Message { get; set; } </p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 code pattern of doing everything in IQueryProvider…<p>(Rambling) I’m taking another short foray into IQueryable land. From my <a href="">learnings last time</a>,.</p> <p>But once I start down that path, I have to think about how to implement QueryProvider.Execute(). So here is what my thoughts looked like:</p> <p>1) Visit current expression node. <br /.</p> <p>Example: The .Single() LINQ operator can easily be run as a client side post-processing step after requesting the first 2 elements of the sequence, if that’s something your backend supports.</p> <p.</p> <p?</p> <p <em>myExpressionTree….</em></p> <p>Then I think<em> wait… is it really easier to do that translation as a post-process? Why don’t we do it online, as the expression is built, in the IQueryable, itself?</em></p> <p><em>So…</em></p> <p>Basically, I’ve run around in a complete circle of self-doubt, and decided that it might be cleaner <em>not</em> to go with the trivial IQueryable implementation after all. Because it saves me from having to re-implement a deferred execution + visitor pattern in IQueryProvider.</p> <p>So, that’s the next avenue to explore! Which brings todays rambling to an end. If it made sense to you, you’re welcome.</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 it a bug?: ClientWebSocket<p>The following program always fails for me with the web socket reaching the aborted state within a couple seconds.</p> <p> <br /> class Program <br /> { <br />    static void Main(string[] args) <br />    { <br />        ClientWebSocket socket = new ClientWebSocket(); <br />        socket.ConnectAsync(new Uri("ws://localhost:8085/Echo.ashx"), CancellationToken.None).Wait(); <br />        var ob = new ArraySegment<byte>(Encoding.UTF8.GetBytes("hello")); <br />        var buffer = new ArraySegment<byte>(new byte[1024]); <br />        var sw = new Stopwatch(); <br />        sw.Start(); <br />        for (int i = 0; i < 100000; i++) <br />        { <br />            <font style="background-color: rgb(255, 255, 0);">socket.SendAsync(ob, WebSocketMessageType.Text, true, CancellationToken.None);</font> <br />            var rr = socket.ReceiveAsync(buffer, CancellationToken.None).Result; <br />            if (rr.EndOfMessage) <br />            { <br />                Console.WriteLine("received complete message: " + Encoding.UTF8.GetString(buffer.Array, 0, rr.Count)); <br />            } <br />        } <br />        sw.Stop(); <br />        Console.WriteLine("the end." + sw.ElapsedMilliseconds); <br />        Console.ReadLine(); <br />    } <br /> } <br /></p> <p>However, if I add a Wait() to the end of SendAsync(), then I can send/receive on the socket for a LONG time with no problems. <br />I was initially mystified as to why this happens. In fact I thought part of the design of websockets was that clients and servers can choose to send whenever they want, and for instance, clients can even pipeline multiple requests to the server.</p> <p>InnerException: System.Net.WebSockets.WebSocketException <br />       HResult=-2147467259 <br />       Message=The WebSocket is in an invalid state ('Aborted') for this operation. Valid states are: 'Open, CloseSent' <br />       StackTrace: <br />            at System.Net.WebSockets.WebSocket.ThrowOnInvalidState(WebSocketState state, WebSocketState[] validStates) <br />            at System.Net.WebSockets.WebSocketBase.<ReceiveAsyncCore>d__1.MoveNext() <br />       InnerException: <br /></p> <p>Am I misunderstanding something?</p> <p>Well this time I turn on first-chance exception debugging, and find that there is a previous exception which pushes the ClientWebSocket to the aborted state:</p> <p>There is already one outstanding 'SendAsync' call for this WebSocket instance. ReceiveAsync and SendAsync can be called simultaneously, but at most one outstanding operation for each of them is allowed at the same time.</p> <p>But wait… how can that be? My server only sends me a message in response to my request? So this can only fail if the socket completes receive the response to a message before it has finished sending the message… but actually that is possible! Because we never know whether the receive or send operation’s completion callback will be dispatched first by the underlying technology stack.</p> <p>OK. Well that’s certainly helpful, I just  have to fix the code slightly: <br /> <br />var sendResult = socket.SendAsync(ob, WebSocketMessageType.Text, true, CancellationToken.None); <br />var rr = socket.ReceiveAsync(buffer, CancellationToken.None).Result; <br />sendResult.Wait(); <br /> if (rr.EndOfMessage) <br /> { <br />    Console.WriteLine("received complete message: " + Encoding.UTF8.GetString(buffer.Array, 0, rr.Count)); <br /> } <br /> <br /> And my client app runs happily forever without aborting. Good.</p> <p>Still, I wonder – <em>is this throwing on concurrent sends behavior really the right thing? </em>Personally, I was very surprised with this - I was expecting that outgoing requests on a websocket would be queued at a lower level – should I really need to implement queuing in my application layer? But this bug may be in the eye of the beholder…</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 fast can your HTTP server go?<p>I have a burning question on my mind. How fast can an HTTP/HTTPS server go? When I say fast, I have some assumptions, which are based on removing all of the wishy-washy disclaimers. “That depends on what features of the framework you use.” “It depends on your database technology.” “It depends on your ORM" blah blah blah.</p> <p>You see actually I really don’t want to care about stuff because I want to first, know the answer to my question: for maximal scalability, how should I build my technology stack <em>from the bottom up? (</em>Not from the top down based on features.) Second, once I am really choosing a technology stack: I want to know <em>how much</em> scalability I gave up in exchange for the features I choose.</p> <p>Without that kind of knowledge, how can I evaluate the scalability of my app as I develop it, and figure out when I made the wrong technology chocie? <br /> <br />So anyway, the assumptions are:</p> <ul> <li>we’re running on a VM with modest specs, say anything up to 16GB ram and 4CPUs</li> <li>you can choose your technology/OS stack from anything</li> <li>the app is ‘hello world’ – no database, no backend of any kind, no session state, no auth</li> <li>you can have as many clients/connections as you want to saturate the server</li> <li>in the HTTPS scenario, you can reuse some connections</li> </ul> <p>I care about both</p> <ul> <li>throughput – request/sec</li> <li>latency, but only in the case where the server isn’t overloaded, skewing the latency numbers</li> </ul> <p>but throughput is probably more important… <br />So… <br />That was my burning question… <br />I spent quite a while researching this on the interwebs… <br />…and I eventually found one excellent page which takes a stab at answering my question for HTTP anyway, if not HTTPS</p> <p><a title="" href=""></a></p> <p> <br />But why did I have to search so long? Surely technology vendors realize this information is important in designing applications for scale, and people are going to find it out anyway. So why don’t they try to make it easy to find?</p> <p>Wouldn’t it be great if I can go to your top page, see a link or section saying ‘performance/scale’, and see a simple line of text that says ‘100K HTTP RPS Hello World - this is how great we can scale! Put us at the bottom of your technology stack!’</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 Testing DbContexts and queries – the status quo<p><span style="font-size: small;">So here’s the scene. I’ve been working on unit testing for a solid day, my percent coverage is up, my code is better factored, and now I do my ‘what is the most untested class I have’ analysis one more time, and discover that it is… ‘BillingEntitiesContext’.</span></p> <p><span style="font-size: small;">BillingEntitiesContext is my subclassed DbContext for describing my EF code first data model. (I am using EF 6.) This class logically has three sections:</span></p> <ol> <li><span style="font-size: small;">The constructor that calls the base class with a connection string (or connection string name). We can hope this is actually easy to unit tests, since it won’t talk to a database – database connection is lazy by default.</span></li> <li><span style="font-size: small;">The OnModelCreating() override. Which is where I declare the entities that are my data model. We can hope this is also actually easy to unit test, since it also doesn’t need to talk to a database!</span></li> <li><span style="font-size: small;">The kitchen sink! I.e. it is a bunch of prebaked LINQ queries exposed as helper methods for my application to use.</span></li> </ol> <p><span style="font-size: small;">Here’s an example of a kitchen sink helper, just so we start on the same page:</span></p> <p><span style="font-size: small;">internal Operation GetLastSuccessfulOperation() <br /> { <br /> return Set<Operation>() <br /> .Where(op => op.State == OperationStates.NotificationDone) <br /> .OrderByDescending(op => op.EndTime) <br /> .FirstOrDefault(); <br /> } <br /> <br /> OK. Anyway, the first half of the puzzle is how to test our Constructor and OnModelCreating(). We can do it easily, in one fell swoop:</span></p> <p><span style="font-size: small;">[Fact] <br /> public void TheBillingEntityDataModelCanBeInitialized() <br /> { <br /> Database.SetInitializer<BillingEntitiesContext>(null); <br /> var context = new BillingEntitiesContext("database;doesnt;exist"); <br /> context.Database.Initialize(false); <br /> }</span></p> <p><span style="font-size: small;">Code coverage tools verify that this does indeed call our desired constructor and OnModelCreating() override. <br />We can <em>guess</em> it never tried to touch a database, since a totally bogus connection string is accepted. But who really knows.</span></p> <p><span style="font-size: small;">So, anyway, that leaves me with 25 code blocks covered, and a hundred to go. An impressively easy 3% coverage gain in this tiny (almost toy sized) project. And I feel happy that it’s not just a pure numeric enhancement, it might theoretically even find a silly bug in model creation, should we ever accidentally introduce one… Of course F5 would have found it instantly also. OK.</span></p> <p><span style="font-size: small;">Well that was the <em>easy half of the problem, </em>what about the queries? The subject of unit testing such queries is something I’ve looked at before, and I’ve previously seen, and tried, following approaches:</span></p> <ul> <li><span style="font-size: small;"><strong>Integration Testing Optimist -</strong> Don’t unit test queries at all. In fact you don’t even need to unit test the code that <em>calls </em>the queries, as long as you have integration tests.</span></li> <li><span style="font-size: small;"><strong>Query Blind – a.k.a. Repository pattern - </strong>Encapsulate all your queries (or create an entire abstract data access layer) in an interface, and stub out the queries. You end up unit testing <em>everything except the queries. </em>I have found that this approach does enable you to write tests of the code <em>calling</em> the queries fairly easily. But Ayende has written very nice <a href="">lambasting</a> of repository wrappers in general, which my personal experience somewhat backs up. Maintaining repository interfaces and wrapper classes for every DbSet in your repository is no fun.</span></li> <li><span style="font-size: small;"><strong>Query pattern </strong>- Create a query class for every query, and treat each query as an injectable dependency which can be mocked out. You’re still testing <em>everything except the queries.</em></span></li> <li><span style="font-size: small;"><strong>Database Faker - </strong>Move the queries into to business logic and fake out the DBContext/DBSets<strong>. </strong>This is becoming a reasonably well-known approach, however, this comes with many possible leaky abstraction issues... In specific this works for testing simple queries but general it's actually quite hard to be sure the behavior of your fake matches the real DB.</span></li> </ul> <p><br /><span style="font-size: small;"> Now I have 2 gut feelings about what is the right way to test this query code. I feel 100% code coverage of all this EF code <em>should </em>be possible in a meaningful way. Testing queries makes <em>sense</em> as a thing to do because you can think of test cases to validate that they find the right subset of data. However I have <em>another </em>gut feeling that queries are <em>so </em>expressive, in order to <em>really </em>verify a moderately complex query it can take a <em>lot </em>of tests, or more obviously a lot of test data to do some querying on.</span></p> <p><span style="font-size: small;">Also, I received some advice about Database Faker. So far is the only approach of above that really comes close to testing the queries, unless you actually get to integration testing. And you can probably get it to work for the first few queries you try. </span><span style="font-size: small;">Unfortunately the idea of testing with a fake database has some serious downsides. As my manager once put it, you’re walking down a slippery slope of implementing an ever more fake database to emulate more and more behaviors of the real database that you discover you need – like foreign key references, transactions, ooh lah lah. Not all your code is select queries after all!</span></p> <p><span style="font-size: small;">So… who has found a better way?</span></p> <p><span style="font-size: small;">Some interesting proposals I have not yet truly tried:</span></p> <p><span style="font-size: small;">1) SQLite – as an in-memory database <br /></span><span style="font-size: small;">2) Effort – an in-memory database: <a title="" href=""></a></span></p> <p><span style="font-size: small;">Unfortunately, I so far cannot get Effort to work. And SQLite sounds like it will be perhaps a little bit too full-SQL in performance cost, requiring applying the database schema etc… So that’s where I am right now. Still trying to think of an <em>effective</em> way that takes those gut feelings into account.</span></p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 on unit tests (and more what happened next) and introducing ashmind and his Argument NuGet package<p>Here are a few thoughts about the refactoring process from going through my initial unit testing iteration. <br /></p> <ul> <li>Constructor Injection can make things look a lot more testable. But I think you want to be careful about jumping into this – don’t do it as the <em>first </em>step in refactoring. It’s probably better to first extracting bits of code that <em>don’t </em>have strong dependencies – and don’t need to know about a class state. Creating some beefy static methods that do a lot of work if you give them a few parameters, even if you could have gotten them from ‘this<strong>’</strong> if you did a non-static method, can just make life easier unit testing.</li> <li>Once you’ve extracted some dependency-lite code, what you should have left over is some dependency-heavier code where DI is a better, naturaler fit.</li> <li <em>types.</em></li> <li>Separating enumeration/grouping <em>control flow </em>logic from actual data processing can make code look really nice, where classes/methods have clearer responsibilities, and make tests simpler too. It can also lead to having more classes.</li> </ul> <p>None of that this refactoring should change your block count much. And it takes time, and then you still have to write the unit tests, so it feels frustrating.</p> <p.</p> <p?!</p> <p <em>almost</em> as good as actually covering that code. I think the reason is that I know this code runs every single time I run my app, so it’s going to get a lot of <em>real-world </em>coverage as opposed to unit test coverage. And I can feel that confidence in a way that I just don’t feel for an arbitrary factory class that is floating around.</p> <p.</p> <p>Aside from that there are two largish classes with lots of logic I can try to figure out how to test, that now have much nicer factored dependencies.</p> <p>While I’m not sure if it’s a valid topic for discussion I’ve also noticed a couple random <em>small things</em> you can do to reduce your overall # of code blocks along the way.</p> <ul> <li>Explicit default constructors increase your block count.</li> <li>Async/await increases your block count. And rarely leads to subtle bugs. Why use it if you don’t actually need it?</li> <li.</li> </ul> <p>The idea of creating either a MyProduct.ArgumentValidation library or a MyProduct.Comon library really doesn’t sound good. <br />Everyone knows this pattern of saying Require.NotNull(foo, “foo”), or Argument.Required(foo, “foo”) right? Which means you only have a single line of code [to test code coverage of] instead of a zillion <strong>if</strong> statements, in each of your methods? <br /> <br />But just how many times does this particular wheel need to be reinvented?</p> <p>Well it turns out, you might <em>finally </em>be able to stop and never write that class again. Someone called ashmind made a MIT-licensed open-sourced-on-github nuget package for that. <br /> <br /><a title="" href=""></a></p> <p>Jolly good, I say! Now, if our lawyers happen to think adopting this package this is as good an idea as I do, I should be able to avoid writing unit tests of another 12 blocks. <img class="wlEmoticon wlEmoticon-smile" alt="Smile" src="" /></p> <p>So honestly, I don’t know if I’ll <em>get to use it. </em>But I feel this package deserves some promotion. It is an earnest attempt to scratch out an itch, so I want to say try it out, and if you see anything it’s not good at, you can help make it better!</p> <p>[G’day ashmind – you didn’t ask for the publicity - I hope you don’t mind.]</p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09 with unit tests (what happened next)<p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">This is going to be a fairly uninteresting and hard to follow post, but its here for the record, I’ll try to distill something better out of it. This was a sort of as-it-happened log of my initial attempt following on from <a href="">my plan</a> earlier today. [Also I fail at Live Writer, and I overwrote this post, instead of creating a new post. And then undo that. D'oh.]</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">During refactoring for testability of key logic, block coverage sometimes changes a little before you write the test! <br /><br />Often the change is small, and often it is initially in the wrong direction.</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">Here I picked what I thought the most fragile, testable, and long-living piece of logic would be, and decided to initially pull it out to be a static function.<br /></span><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">Before refactor: 87.78% coverage of project not covered <br />After refactor, before writing a test: 88% of project not covered [net -2%] <br />After writing a test: 83.34% of project not covered [net +3%]</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">After that, I felt like it was time to see if refactoring for testability introduced or exposed any 'smell' elsewhere. <br /><br />But... refactoring itself can expose bugs in my understanding of the code.<br /></span><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">Before refactoring I had believed I was generating batches of 100 events where each batch had a unique ID. After, I realized I was actually generating batches of 100 events, where each batch had the SAME id as each OTHER batch in this set of generated batches. <br />So what was this batch ID actually used for? I mean clearly it's not useful for identifying or locating a single batch... <br />Perhaps Batch ID isn't really used for ANYTHING. It's generated, and persisted in our database, but never ever read again. Really?? <br />That can't be right can it? And I thought our database had a unique constraint on batch ids...???? In fact it's a primary key right? <br />Um.. yes. It's a primary key. <br />But yes, are we really generate batches with duplicate IDs? What the hell? <br />I told all my teammates I had found a bug! <br />My teammates politely told me that no, that scenario had been tested. <br />At which point I realized that I had not found a bug... in fact I had *introduced* a new bug while refactoring for testability. <br />I had been believing the ID generation function was a 1:1 function, butactually it was not a function at all - in the mathematical sense. <br />It is a GUID concatenating function that always generates unique values (A mathematical 'function' should have at most one single well-defined output value for any input. Like SQRT.)</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">And thus sanity was restored. <br />I fixed the new bug I had introduced. <br />This also led me to write a few more tests. <br /><br />The new tests provide no additional code coverage at all... but they verify the bugfix, and they help formalize the notion of what a batch ID is! <br />Our coverage takes a net small step backwards during the final bugfix. 83.54% uncovered, or 16.46% covered if you prefer.</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">Finally it was again time to take stock of my actual change and see whether the new code was overall feeling 'better' or 'worse'. </span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;"><strong>Moved:</strong> some code intimately related to BillingBatch class was moved over to the BillingBatch class. That's probably a win. <br /><strong>Separated:</strong> a glue 'data conversion' function became separated from the task of labelling records with batch IDs and became more of a pure and simple data conversion function. Separation of concerns. I call that a win. <br /><strong>Inlined:</strong> a rather trivial function that was called in two places since it is now called only in one place. <br /><strong>Overall feeling better or worse:</strong> just slightly better.</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">However...<br />suddenly I see things more clearly! <br />One of my other classes is basically just a factory, that turns configuration, into dependencies. It's not a 'mainloop' at all. <br />Refactoring commences, and suddenly, I've eliminated a bunch of blocks from my codebase and code coverage is still at... 16.4%. What the heck. <br /><br />But in the process I've now got a BillingOperations class with injectable dependencies that can itself be tested with mocks. Progress, sort of!</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">So since yesterday I’ve gone from having one BillingMainLoop class, which had a mixture of responsibilities around knowing data persistence details, knowing how to<br />iterate through events and group them into batches, and knowing how to take the app configuration and instantiate/configure all sorts of necessary dependencies, to having a much more specialized division of responsibilities:</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">-Batching class – knows how to group events into batches, but not what to do with batches<br />-BillingOperations – has a bunch of dependencies for getting events [input], and writing batches [output]. Delegates batching work to the Batching class. <br />-BillingOperationsFactory – creates BillingOperations given the app configuration.</span></p> <p><span style="font-family: 'Arial','sans-serif'; font-size: 10pt;">I am not a general fan of factory classes (surely a method would do the job?), but this iteration of the design does feel like it’s making it easier to think about how to test the code.</span></p><div style="clear:both;"></div><img src="" width="1" height="1">tilovell09
http://blogs.msdn.com/b/tilovell/atom.aspx
CC-MAIN-2015-27
refinedweb
14,925
54.93
On Wed, Aug 19, 2020 at 01:22:58PM +0200, Pavel Hrdina wrote: > On Wed, Aug 19, 2020 at 12:47:40PM +0200, Andrea Bolognani wrote: > > Right now we're unconditionally adding RPATH information to the > > installed binaries and libraries, but that's not always desired. > > > > Debian explicitly passes --disable-rpath to configure, and while > > I haven't been able to find the same option in the spec file for > > either Fedora or RHEL, by running > > > > $ readelf -d /usr/bin/virsh | grep PATH > > > > I can see that the information is not present, so I assume they > > also strip it somehow. > > > > Both Debian and Fedora have wiki pages encouraging packagers to > > avoid setting RPATH: > > > > > > > > > > Given the above, it looks like it's actually better to not go > > out of our way to include that information in the first place. > > I need to look into this because I remember adding the rpath there as > a result that something was not working correctly but now I don't > remember what was it. Originally I did not have it there. > > Pavel So I managed to remember what was the issue. If you run install libvirt into custom directory like this: meson build --prefix /my/custom/dir ninja -C build install and after that running: /my/custom/dir/bin/virsh will fail with: /lib64/libvirt.so.0: version `LIBVIRT_PRIVATE_6.7.0' not found (required by ./bin/bin/virsh) This is what autotools did by default as well and I did not know that there is an option --disable-rpath as it's not in output of `./configure --help`. If we don't care about the use case of installing libvirt into custom prefix and breaking it it should be OK to remove this from meson but my guess is that we should not do it. We can add an option like it was proposed in V1 but with the following changes. In meson.build we would have this: if get_option('rpath') libvirt_rpath = libdir else libvirt_rpath = '' endif and all places with install_rpath would use libvirt_rpath instead of libdir directly and we would not have to have the craze if-else. Pavel Attachment: signature.asc Description: PGP signature
https://listman.redhat.com/archives/libvir-list/2020-August/msg00613.html
CC-MAIN-2021-39
refinedweb
361
57.71
.algorithm.mutation This is a submodule of std.algorithm. It contains generic mutation algorithms. License: Authors: Source: std/algorithm/mutation.d - - The bringToFront function has considerable flexibility and usefulness. It can rotate elements in one buffer left or right, swap buffers of equal length, and even move elements across disjoint buffers of different types and different lengths.bringToFront takes two ranges front and back, which may be of different types. Considering the concatenation of front and back one unified range, bringToFront rotates that unified range such that all elements in back are brought to the beginning of the unified range. The relative ordering of elements in front and back, respectively, remains unchanged. Performs Ο(max(front.length, back.length)) evaluations of swap. Preconditions: Either front and back are disjoint, or back is reachable from front and front is not reachable from back.Parameters; import std.container : SList; auto list = SList!(int)(4, 5, 6, 7, 1, 2, 3); auto r1 = list[]; auto r2 = list[]; popFrontN(r2, 4); assert(equal(r2, [ 1, 2, 3 ])); bringToFront(r1, r2); assert(equal(list[], [ 1, 2, 3, 4, 5, 6, 7 ]));Examples:Elements can be swapped across ranges of different types: import std (areCopyCompatibleArrays!(SourceRange, TargetRange)); TargetRange copy(SourceRange, TargetRange)(SourceRange source, TargetRange target) if (!areCopyCompatibleArrays!(SourceRange, TargetRange) && isInputRange!SourceRange && isOutputRange!(TargetRange, ElementType!SourceRange)); - Copies the content of source into target and returns the remaining (unfilled) part of target. Preconditions: target shall have enough room to accommodate the entirety of source.Parameters:Returns:The unfilled part of targetSee]); - - Assigns value to each element of input range range.Parameters:See Also:Examples: int[] a = [ 1, 2, 3, 4 ]; fill(a, 5); assert(a == [ 5, 5, 5, 5 ]); - - Fills range with a pattern copied from filler. The length of range does not have to be a multiple of the length of filler. If filler is empty, an exception is thrown.Parameters:Examples: int[] a = [ 1, 2, 3, 4, 5 ]; int[] b = [ 8, 9 ]; fill(a, b); assert(a == [ 8, 9, 8, 9, 8 ]); - - Initializes all elements of range with their .init value. Assumes that the elements of the range are uninitialized.Parameters:See Also:Examples: import core.stdc.stdlib: malloc, free; struct S { int a = 10; } auto s = (cast(S*) malloc(5 * S.sizeof))[0 .. 5]; initializeAll(s); assert(s == [S(10), S(10), S(10), S(10), S(10)]); scope(exit) free(s.ptr); - -); assert(s21.a == 1 && s21.b == 2 && s22.a == 3 && s22.b == 4);Examples: struct S { @disable this(this); ~this() pure nothrow @safe @nogc {} } S s1; S s2 = move(s1); - - Similar to move but assumes target is uninitialized. This is more efficient because source can be blitted over target without destroying or initializing it first.Parameters:Examples: static struct Foo { pure nothrow @nogc: this(int* ptr) { _ptr = ptr; } ~this() { if (_ptr) ++*_ptr; } int* _ptr; } int val; Foo foo1 = void; // uninitialized auto foo2 = Foo(&val); // initialized // Using `move(foo2, foo1)` has an undefined effect because it destroys the uninitialized foo1. // MoveEmplace directly overwrites foo1 without destroying or initializing it first. assert(foo2._ptr is &val); moveEmplace(foo2, foo1); assert(foo1._ptr is &val && foo2._ptr is null); - - For each element a in src and each element b in tgt in lockstep in increasing order, calls move(a, b). Preconditions: walkLength(src) <= walkLength(tgt). This precondition will be asserted. If you cannot ensure there is enough room in tgt to accommodate all of src use moveSome instead.Parameters:Returns:The leftover portion of tgt after all elements from src have been moved.Examples: int[3] a = [ 1, 2, 3 ]; int[5] b; assert(moveAll(a[], b[]) is b[3 .. $]); assert(a[] == b[0 .. 3]); int[3] cmp = [ 1, 2, 3 ]; assert(a[] == cmp[]); - - Similar to moveAll but assumes all elements in target are uninitialized. Uses moveEmplace to move elements from source over elements from target.Examples:)); - - For each element a in src and each element b in tgt in lockstep in increasing order, calls move(a, b). Stops when either src or tgt have been exhausted.Parameters:Returns:The leftover portions of the two ranges after one or the other of the ranges have been exhausted.Examples: int[5] a = [ 1, 2, 3, 4, 5 ]; int[3] b; assert(moveSome(a[], b[])[0] is a[3 .. $]); assert(a[0 .. 3] == b); assert(a == [ 1, 2, 3, 4, 5 ]); - - Same as moveSome but assumes all elements in target are uninitialized. Uses moveEmplace to move elements from source over elements from target.Examples:[]); import std.algorithm.searching : all; assert(src[0 .. 3].all!(e => e._ptr is null)); assert(src[3]._ptr !is null); assert(dst[].all!(e => e._ptr !is null)); - - Defines the swapping strategy for algorithms that need to swap elements in a range (such as partition and sort). The strategy concerns the swapping of elements that are not the core concern of the algorithm. For example, consider an algorithm that sorts [ "abc", "b", "aBc" ] according to toUpper(a) < toUpper(b). That algorithm might choose to swap the two equivalent strings "abc" and "aBc". That does not affect the sorting since both [ "abc", "aBc", "b" ] and [ "aBc", "abc", "b" ] are valid outcomes.Some situations require that the algorithm must NOT ever change the relative ordering of equivalent elements (in the example above, only [ "abc", "aBc", "b" ] would be the correct result). Such algorithms are called stable. If the ordering algorithm may swap equivalent elements discretionarily, the ordering is called unstable. Yet another class of algorithms may choose an intermediate tradeoff by being stable only on a well-defined subrange of the range. There is no established terminology for such behavior; this library calls it semistable. Generally, the stable ordering strategy may be more costly in time and/or space than the other two because it imposes additional constraints. Similarly, semistable may be costlier than unstable. As (semi-)stability is not needed very often, the ordering algorithms in this module parameterized by SwapStrategy all choose SwapStrategy.unstable as the default. - -. In the simplest call, one element is removed. int[] a = [ 3, 5, 7, 8 ]; assert(remove(a, 1) == [ 3, 7, 8 ]); assert(a == [ 3, 7, 8, 8 ]);In the case above the element at offset 1 is removed and remove returns the range smaller by one element. The original array has remained of the same length because all functions in std.algorithm only change content, not topology. The value 8 is repeated because move was invoked to move elements around and on integers move simply copies the source to the destination. To replace a with the effect of the removal, simply assign a = remove(a, 1). The slice will be rebound to the shorter array and the operation completes with maximal efficiency. Multiple indices can be passed into remove. In that case, elements at the respective indices are all removed. The indices must be passed in increasing order, otherwise an exception occurs. int[] a = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]; assert(remove(a, 1, 3, 5) == [ 0, 2, 4, 6, 7, 8, 9, 10 ]);(Note how all indices refer to slots in the original array, not in the array as it is being progressively shortened.) Finally, any combination of integral offsets and tuples composed of two integral offsets can be passed in. int[] a = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]; assert(remove(a, 1, tuple(3, 5), 9) == [ 0, 2, 5, 6, 7, 8, 10 ]);In this case, the slots at positions 1, 3, 4, and 9 are removed from the array. The tuple passes in a range closed to the left and open to the right (consistent with built-in slices), e.g. tuple(3, 5) means indices 3 and 4 but not 5. If the need is to remove some elements in the range but the order of the remaining elements does not have to be preserved, you may want to pass SwapStrategy.unstable to remove. int[] a = [ 0, 1, 2, 3 ]; assert(remove!(SwapStrategy.unstable)(a, 1) == [ 0, 3, 2 ]);In the case above, the element at slot 1 is removed, but replaced with the last element of the range. Taking advantage of the relaxation of the stability requirement, remove moved elements from the end of the array over the slots to be removed. This way there is less data movement to be done which improves the execution time of the function. The function remove works on any forward range. The moving strategy is (listed from fastest to slowest): Parameters:Returns:a range containing all of the elements of range with offset removed -. - - Reduces the length of the bidirectional range range by removing elements that satisfy pred. If s = SwapStrategy.unstable, elements are moved from the right end of the range over the elements to eliminate. If s = SwapStrategy.stable (the default), elements are moved progressively to front such that their relative order is preserved. Returns the filtered range.Parameters:Returns:the range with all of the elements where pred is true removedExamples: ]); - - Reverses r in-place. Performs r.length / 2 evaluations of swap.Parameters:See Also:Examples: int[] arr = [ 1, 2, 3 ]; reverse(arr); assert(arr == [ 3, 2, 1 ]); - - Reverses r in-place, where r is a narrow string (having elements of type char or wchar). UTF sequences consisting of multiple code units are preserved properly.Parameters:Bugs:When passing a sting with unicode modifiers on characters, such as \u0301, this function will not properly keep the position of the modifier. For example, reversing ba\u0301d ("bád") will result in d\u0301ab ("d́ab") instead of da\u0301b ("dáb").Parameters:Returns:a Range with all of range except element at the start and endExamples))); - - Swaps all elements of r1 with successive elements in r2. Returns a tuple containing the remainder portions of r1 and r2 that were not swapped (one of them will be empty). The ranges may be of different types but must have the same element type and support swapping.Parameters:Returns:Tuple containing the remainder portions of r1 and r2 that were not swappedExamples: int[] a = [ 100, 101, 102, 103 ]; int[] b = [ 0, 1, 2, 3 ]; auto c = swapRanges(a[1 .. 3], b[2 .. 4]); assert(c[0].empty && c[1].empty); assert(a == [ 100, 2, 3, 103 ]); assert(b == [ 0, 1, 101, 102 ]); - - Initializes each element of range with value. Assumes that the elements of the range are uninitialized. This is of interest for structs that define copy constructors (for all other types, fill and uninitializedFill are equivalent).Parameters:See Also:Examples: import core.stdc.stdlib : malloc, free; auto s = (cast(int*) malloc(5 * int.sizeof))[0 .. 5]; uninitializedFill(s, 42); assert(s == [ 42, 42, 42, 42, 42 ]); scope(exit) free(s.ptr);
https://docarchives.dlang.io/v2.071.0/phobos/std_algorithm_mutation.html
CC-MAIN-2019-13
refinedweb
1,786
57.06
How do I loop in Python? The Python for loop has the following general form: for myvar in myiterator: # do something with myvar How do I loop over a list? If you coming from other languages such as C, you might be used to: for i in range(len(mylist)): myvar = mylist[i] # do something with myvar In Python, it's much better (that is, the intention is clearer) if you do the following instead: for myvar in mylist: # do something with myvar If you suddenly realise you want the value of the index i, you can easily adapt the previous loop: for i, myvar in enumerate(mylist):How do I loop over two lists simultaneously? # do something with myvar # do something with i Let's say you have a list of people's firstnames and another list containing their surnames, and you want to write them on the screen. Don't do the following: for i in range(len(firstnames)): print "%s %s" % (firstnames[i], secondnames[i]) Instead, you should make the intention of the code clear by using zip as follows: for firstname, secondname in zip(firstnames, secondnames): print "%s %s" % (firstname, secondname) Loop pattern #1 - Building a new list from an old newlist = [] for mynum in mylist: newnum = mynum*2 newlist.append(newnum) In the case of this simple example, it should be done as a list comprehension instead: newlist = [mynum*2 for mynum in mylist] Here is a slightly more complicated example, involving an "if" statement that acts as a filter for even numbers: newlist = [] for mynum in mylist: if mynum % 2 == 0: newnum = mynum*2 newlist.append(newnum) Again, this can be done instead as a list comprehension: newlist = [mynum*2 for mynum in mylist if mynum % 2 == 0]Loop pattern #2 - Summing things up Another common pattern is totalling things up using a list: total = 0 for mynum in mylist: total += mynum Although in this case, it would be easier to just use: total = sum(mylist) One thing to be careful of though is that using "+" to add strings is slow. If speed is important, avoid the following: longstring = ""Instead use pattern #1 to create a list of strings, and join them at the end: for myvar in mylist: # Create smallstring here somehow longstring += smallstring stringlist = []Loop pattern #3 - Filling in a dictionary for myvar in mylist: # Create smallstring here somehow stringlist.append(smallstring) longstring = "".join(stringlist) A common pattern is to update information in a dictionary using a loop. Let's take an example of counting up how many people have a particular firstname given a list of firstnames. The problem here is that it is necessary to check whether a particular key is in the dictionary before updating it. This can leads to awkward code like the following: name_freq = {} for firstname in firstnames: if firstname in name_freq: name_freq[firstname] += 1 else: name_freq[fistname] = 1 Now while this can be improved somewhat by using name_freq.get(firstname, 0), that's getting a bit complicated (and doesn't extend to dictionaries of lists). Instead you should use a defaultdict, a special dictionary that has a default value, as follows: from collections import defaultdict name_freq = defaultdict(int) for firstname in firstnames: name_freq[firstname] += 1 And what about where you wanted to store the corresponding surnames in a dictionary by firstnames? Use a defaultdict(list), of course: from collections import defaultdict same_firstname = defaultdict(list) for firstname, surname in zip(firstnames, surnames): same_firstname[firstname].append(surname) Loop pattern #4 - Don't use a loop Think dictionary, set, and sort. A lot of tricky algorithms can be implemented in a few lines with one or two of these guys. A trivial example is finding unique items in a list: set(mylist). Want to check whether that genome sequence only contains 4 letters?: assert len(set(mygenome))==4 The following example uses sort. Given a set of docking scores for 10 poses, find which poses have the top three scores. Here's a solution using the so-called decorate-sort-undecorate paradigm: # Decorate ("stick your data in a tuple with other stuff") tmp = [(x,i) for i,x in enumerate(pose_scores)] # Sort (uses the items in the first position of the tuple) tmp.sort(reverse=True) # Undecorate ("get your data back out of the tuple") top_poses = [z[1] for z in tmp] print top_poses[:3] Image credit: PanCa SatRio
http://baoilleach.blogspot.com/2009_02_01_archive.html
CC-MAIN-2014-42
refinedweb
730
50.3
If - or, in my opinion, when - universal binaries of Mozilla products ship, it will be useful to have them identify themselves in the User-Agent string. This will allow browser sniffers to determine if the running browser is universal or not, and tailor the options for the user accordingly. For example, the browser sniffer at will most likely want to offer a universal binary download to users already using a universal browser, but can offer architecture-specific downloads to users with thin versions. Created attachment 208657 [details] [diff] [review] Determine universal status at runtime Initially, we had planned on hardcoding the universal status of the app at build time, probably by having the build process check an environment variable (MOZ_MAC_UNIVERSAL) that would be set if the bits were destined to be merged into a universal binary. After much thought, I've decided that this is the wrong idea. - It prevents architecture-specific bits from being merged unless forethought is given to setting MOZ_MAC_UNIVERSAL; when MOZ_MAC_UNIVERSAL is set, those bits wouldn't be independently usable for thin single-architecture packages (without rebuilding necko and relinking the app). - It's probably the wrong thing to do in the XULRunner world, where presumably we don't care about putting the universal status of XR in the user-agent string, but we do care about the main app itself. This patch tailors the user-agent string at runtime based on whether the main executable is fat and contains PPC and x86 code. An immediate benefit of this is that existing PPC and x86 builds can be merged with no additional considerations* (other than a similar change needed to identify universalness for ASU, bug 323328,) and the user-agent string will be correct. * Merged PPC and x86 bits should have the same build ID. Either verify that the build ID is the same, or set MOZ_BUILD_DATE = yyyymmddhh in the environment before building to ensure parity. Comment on attachment 208657 [details] [diff] [review] Determine universal status at runtime Darin should review this: the useragent has some unique requirements in terms of web tracking and such; it's not clear to me that we ought to be altering the string that appears as "the OS"... rather we should add a modifier separately. But he knows the details and I don't. On the trunk I would ideally attach this to nsIXULRuntime and move that interface up into tier 9, but since we're pressed for time and can't change nsIXULRuntime on the branch let's get something coded from nsHttpHandler if that's the only expedient option. I'd encourage you to go for a shorter UA string if possible. " Universal" seems a bit long. Is that what Safari is sending? I don't think Safari indicates its universal status, but it's Apple and they're free to identify it some other way (like by build/version number). I'd happily go with " Uni" or even " U". This shouldn't be something that's easy to confuse with the " U; " (meaning strong encryption, short for USA) that's already in your UA string -- some banks, etc., may check for that in various not-very-robust ways. How about moving the IsUniversalBinary function into mozilla/xpcom/ somewhere? It seems like we will need to call it from mozilla/toolkit/xre/nsAppRunner.cpp as well, so it should live someplace common. David, let's say we go with " Uni". Is it safe to append to the OS/CPU identifier (currently "PPC Mac OS X Mach-O" or "Intel Mac OS X") or can you pinpoint a better spot to put it? Appending to OSCPU seems fine, as long as there's no semicolon, since I'd be afraid of adding the substring "; U". Is identifying the browser as being universal actually useful? I don't think adding it to the User-Agent string has much value. Safari certainly doesn't do it, even though both universal and non-universal versions of Safari exist. Users will care if their browser is native for their current system, not if it's universal. Installs on disks that could be shared are a reason for wanting something like this; I have no idea how common such things are. (A Web site offering a plugin could then provide a universal binary version of the plugin.) (In reply to comment #10) > Installs on disks that could be shared are a reason for wanting something like > this; I have no idea how common such things are. (A Web site offering a plugin > could then provide a universal binary version of the plugin.) I guess I should phrase what I said differently. Once a universal version of Firefox is released, presumably all subsequent versions will be universal until the Mozilla Foundation decides to drop PowerPC support. Similarly, once plug-in vendors release universal versions, I doubt they'll continue to provide non-universal versions. Building and qualifying two separate distributions of the same binary is too complicated for most companies. Another way to look at this is the example given in the first comment in this bug. The sniffer at probably wouldn't want to differentiate between users with universal and non-universal versions. Instead, you'd want to offer the universal build to everyone. Make that the One True Download just as the PowerPC version is today and the entire problem goes away. Eric, we're not sure we're going to need to do this. It will only be useful if there continue to be architecture-specific official released versions in addition to universals. That decision hasn't been made yet. I want to have this patch ready just in case. The mozilla bouncer script and similar UA detectors for sites that offer plugins are exactly why we'd want this in a world where any given release would be available in all of (ppc, x86, uni). Created attachment 209377 [details] [diff] [review] Make isUniversalBinary part of an XPCOM service Create a new xpcom service, and implement in nsMacUtils. isUniversalBinary is a read-only attribute. GetIsUniversalBinary is modified from the version in attachment 208657 [details] [diff] [review] to cache the result in a static. Created attachment 209378 [details] [diff] [review] Append " Uni" to nsHttpHandler::mOscpu when universal This depends on the previous patch, and calls into the new service to determine whether the application is universal or not when building the UA string. I'm soliciting review on this, as long as everyone understands that it may not be necessary. Comment on attachment 209377 [details] [diff] [review] Make isUniversalBinary part of an XPCOM service Please move the _CLASSNAME _CID _CONTRACTID defines out of the .idl file into nsMacUtils.h Created attachment 209472 [details] [diff] [review] Make isUniversalBinary part of an XPCOM service (v2) Created attachment 209473 [details] [diff] [review] Append " Uni" to nsHttpHandler::mOscpu when universal (v3) Comment on attachment 209472 [details] [diff] [review] Make isUniversalBinary part of an XPCOM service (v2) nsMacUtils.h should have include guards (i.e., #ifndef FOO, #define FOO, #endif). Comment on attachment 209472 [details] [diff] [review] Make isUniversalBinary part of an XPCOM service (v2) r=me with #includeguards Created attachment 209487 [details] [diff] [review] Make isUniversalBinary part of an XPCOM service (v3) (carrying forward bsmedberg's r+) At this point, it's looking increasingly like it won't be necessary to add " Uni" to the UA string, since releases from this point on will most likely be universal-only until either PPC becomes unsupported or the universal build falls out of fashon in favor of independent non-universal builds. The patch to add the service with the utility function is still needed for bug 323355 and anything else that might need to determine universalness of the app. I think that it might be useful to indicate in the UA (and only the UA, nowhere else) when the app is running under Rosetta. For the time being, at least, people will probably run under Rosetta for a variety of reasons, the most significant of which is that not all plugins have been updated yet. At the same time, I don't see the need to hunt down behavioral differences that only exist when running under Rosetta (yes, they exist). Exposing this in the UA makes it easier to identify bogus bug reports. I don't have any objection to a UA note that the app is running translated. I'm not sure how useful it'll be without adoption in other browsers, but you never know. By the way, it'd be great if you'd file bug reports about any behavioral differences you see aside from performance. They aren't something the Mozilla folks should care about, but Apple should care. Eric, so far the only one I came across was that an attempt to load Java crashes under Rosetta. Maybe it shouldn't crash and maybe that's our problem, but considering that Java is advertised to not work under Rosetta, this is at least an expected difference, and I doubt you guys care about this one much more than I do. (Although it would be nice if I could get a better stack to see if it's a crash that could be easily avoided.) I'll let you know for sure if I find any UNexpected behavior differences. Hmm...that sounds like a Firefox bug of some sort -- maybe a failure to check return codes or something like that. Java applets in Safari simply don't run when Safari's running translated. Comment on attachment 209473 [details] [diff] [review] Append " Uni" to nsHttpHandler::mOscpu when universal (v3) >-#elif defined (XP_MACOSX) && defined(__ppc__) >+#elif defined (XP_MACOSX) && (defined(__ppc__)||defined(__i386__)) spaces around "||", please. (I won't make you put them between "defined" and "(", though, even though that is local style, but feel free to do so.) >+ mOscpu.SetCapacity(24); I don't like magic numbers. How about: mOscpu.SetCapacity(sizeof("PPC Mac OS X Mach-O") - 1); or even better: #ifdef __ppc__ const char kMacOSCPU[] = "PPC Mac OS X Mach-O"; #else const char kMacOSCPU[] = "Intel Mac OS X"; #endif const char kMacUniversal[] = " Uni"; mOscpu.SetCapacity(sizeof(kMacOSCPU) + sizeof(kMacUniversal) - 2); mOscpu.AssignLiteral(kMacOSCPU); and then later (if universal) mOscpu.AppendLiteral(kMacUniversal); Created attachment 212220 [details] [diff] [review] As checked in Includes Makefile diff. This was checked in to support bug 323328. Looks like " Uni" won't be added to the UA. Requesting blocking in anticipation of bug 323328 being ready for the branch. I think this may have broken the build on XULRunner: Columbia. Ok, I'm going to update the reporter tool to recognize mac universl binaries. navigator.platform navigator.oscpu Someone want to help me out with the possible values for MacPPC, Universal Binaries, and x86 builds (assuming we have, or will have them)?. Robert, right now, it's looking like these values aren't going to help you determine universalness at all, the plan being to release universal-only. The values will indicate the host CPU: navigator.oscpu = "PPC Mac OS X Mach-O", "Intel Mac OS X" navigator.platform = "MacPPC", "MacIntel" If there's a major shift and attachment 209473 [details] [diff] [review] is taken, then the oscpu values will have " Uni" appended to them to signify universal builds. (In reply to comment #29) >. r+sr=dbaron, although you generally don't need (advance) review for simple changes to fix bustage Mark: just what I wanted. Just my $0.02 - IMHO browser shouldn't tell if it's universl or not... it's none of the website's business ;-). It should clearly state if it's running on intel, or ppc though, since there is a need for that. I don't think there's a precedent for displaying the build methodology of a client. Only the platform. Created attachment 212342 [details] [diff] [review] Regenerated patch Bustage fix checked in. This patch was regenerated by date and contains the new class name, suitable for branch use. Comment on attachment 209473 [details] [diff] [review] Append " Uni" to nsHttpHandler::mOscpu when universal (v3) approved for 180 branch, a=dveditz for drivers Comment on attachment 212342 [details] [diff] [review] Regenerated patch approved for 180 branch, a=dveditz for drivers "Regenerated patch" of "Make isUniversalBinary part of an XPCOM service" checked in on 1_8 and 1_8_0. "Append " Uni" to nsHttpHandler::mOscpu when universal" will NOT be checked in. Mark, Josh, where can QA get builds to verify this fix? For the moment, unofficial test builds come out hourly-ish at: Note that there's not much verifiable here from a user's perspective. We didn't change the UA string at all, so it still says "PPC Mac OS X Mach-O" or "Intel Mac OS X". The only place that this change should be at all perceptible is in the update service, which when universal, claims itself as "Darwin_Universal-gcc3", but that's bug 323328.
https://bugzilla.mozilla.org/show_bug.cgi?id=323657
CC-MAIN-2017-22
refinedweb
2,152
61.16
An object that represents a C++ lexical token. CPP_Token objects are usually obtained from token streams. See CPP_Token_Stream's for an example use. More... #include <cpp_token_stream.h> An object that represents a C++ lexical token. CPP_Token objects are usually obtained from token streams. See CPP_Token_Stream's for an example use. The fastest way to compare a token to an expected value is to first verify that its 'type_' is correct. Type is an integer so the comparison is quick. Next, verify that that the text_ is correct If the type is wrong, there's no reason to check the string value. To further speed up the comparison, if you want, you can verify that the first character of the text_ equals the desired string. So here is some code that compares a token to some possible known values: if(token.type_ == ';') return true; // no text comparison is needed if(token.type_ == CPP_Token::aln) return true; // any old identifier if(token.type_ == CPP_Token::der) return true; // its a :: operator if(token.type_ == CPP_Token::aln && token.text_[0] == 'i' && token.text_ == "if" ) return true; // it is the 'if' keyword Definition at line 58 of file cpp_token_stream.h. A list of names for the multi-character tokens and the end of file situation. Definition at line 106 of file cpp_token_stream.h. Automatic conversion to string. All you get is the token's string text_ value. Definition at line 98 of file cpp_token_stream.h. static method giving a string representation of a token's type. Definition at line 39 of file cpp_token_stream.cxx. what file was the token found in Definition at line 96 of file cpp_token_stream.h. what line does the token begin on Definition at line 94 of file cpp_token_stream.h. Definition at line 83 of file cpp_token_stream.h. be the exact text parsed -- including any quotes, \'s etc. An integer value definiting the token type. Single character tokens have a type value which equals their character value. Multi-character tokens have named values which are members of the token_types_ nested enumeration. Definition at line 87 of file cpp_token_stream.h.
http://www.bordoon.com/tools/structcxxtls_1_1CPP__Token.html
CC-MAIN-2019-18
refinedweb
348
52.97
First solution in Clear category for Say Hi by ikarus93 # 1. on CheckiO your solution should be a function # 2. the function should return the right answer, not print it. def say_hi(name, age): return "Hi. My name is {} and I'm {} years old".format(name, age) if __name__ == '__main__': #These "asserts" using only for self-checking and not necessary for auto-testing assert say_hi("Alex", 32) == "Hi. My name is Alex and I'm 32 years old", "First" assert say_hi("Frank", 68) == "Hi. My name is Frank and I'm 68 years old", "Second" print('Done. Time to Check.') Aug. 23, 2018 Forum Global Activity Jobs ClassRoom Manager Leaderboard Coding games Python programming for beginners
https://py.checkio.org/mission/say-history/publications/ikarus93/python-3/first/share/58ce3363d628d59b1efdb7a204e18209/
CC-MAIN-2018-51
refinedweb
116
77.74
I want this program to read a text file then target and replace anything start with < and end with > for example it finds <html>, replace that into **** but somehow i tested it and it didn't work than i expected. any suggestions? def remove_html(text): txtLIST = list(text) i = 0 while i < len(txtLIST): if txtLIST[i] == '<': while txtLIST[i] != '>': txtLIST.pop(i) txtLIST.pop(i) else: i = i + 1 replace = 4*'*' return replace.join(txtLIST) file = open('remHTML.txt','r') test = file display = remove_html(test) print display Lines 11 and 12 put "****" between every single character. Other than this, is the output what you expect? By the way, you may want to look at the BeautifulSoup Python library for working with html files (and extracting text from them). By()) you are not returning anything. Yeah I forgot to add that but the line 8-10 seems incorrect. You are modifying the passed in list text aren't you? I do not understand why you try to set other variable text_list. how do I run the procedure def remove_html(text)()) Related Articles
http://www.daniweb.com/software-development/python/threads/416862/remove-html-markup-in-the-input-text-return-a-plain-text-string
CC-MAIN-2013-48
refinedweb
182
74.59
. I did the last step turning off the pre-complied header. Thanks, did the trick. Now I can get on with C++ and I will look forward to the tutorial. Regards Logan. When Visual Studio has precompiled headers turned on (which it does by default), it expects the first line of any code (.cpp) file to be #include "stdafx.h". If this is missing, you will get an error. One option is to turn off precompiled headers. Another is to ensure the first line is #include "stdafx.h". Visual Studio should have created stdafx.h as part of your project when you created the project. I see, well the above code has it in the Header file, but not the corresponding .cpp file. It was done manually, I will experiment more with the add Class option. I take it the precompiled header is not an important thing to use, if I can just turn it off. Is it of any importance in the industry? I certainly want to be sure I understand the proper ways to do things. If it's okay to turn certain settings of, then great! Just so long as those settings are not an integral part of programming in a concise way. Least it works now. Nothing worse than wasting a load of time on code when you pretty much understand how it all works and then when you go to run it, the thing comes up with errors you can't comprehend. I had problems in the past with copying and pasting, that seems to be a hit and miss affair. Thanks all, you are all a great guiding beacon of hope. What the heck would I do without you guys! I can't reply to your newest comment, I guess there have been to many replies. Precompiled headers are meant to speed up compilation, you can turn them off without problems. I don't like stdafx.h, because it's Windows stuff. If you decide to use another compiler (Or share code with someone with another compiler) the #include has to be removed before your code can be compiled. Yes, they often use precompiled headers in the industry, as it speeds up compilation times. However, it's just a compilation optimization, so if you want to turn it off, there won't be any other ill effects. Hi there please can someone help me, I have a problem I cannot get this code to work. I am using Microsodt Visual Studio 2015. It is supposed to show a basic setup of a Class, with the Member variables places on the actual Header file. For whatever reason it keeps showing an error. Can you spot what is missing? I also left out the pragma code. Is it better to use it than the #ifndef option? Also wondering how I transfer the colour tags? It transfers when I copy and paste to a word file, but not here. Below is the BMI.h : #include "stdafx.h" #include <iostream> #include <string> using namespace std; #ifndef BMI_H #define BMI_H class BMI { public: // default constructor just to set member variables to null states BMI(); //overload constructor is a different way of calling a function, by adding parementers to it. //below the overload doesnt need to be passed by reference because we are only entering the value and sending it, we are not chaging it again, so it being a copy doesnt matter. BMI(string, int, double); //destructor, once the objects function is left it is destroyed out of memory ~BMI(); //Below are the accesor functions which return member variables //typicaly we use get in front as a universal way to identify it string getName() const; int getHeight() const; double getWeight() const; private: //Member variables string newName; int newHeight; double newWeight; }; #endif Below is the BMI.cpp : #include "BMI.h" //Below is default contructor coresponding code. BMI::BMI() { newHeight = 0; newWeight = 0.0; } //below is overload coresponding code. BMI::BMI(string name, int height, double weight) { newName = name; newHeight = height; newWeight = weight; } BMI::~BMI() { //leave empty } //below coresponding declaration for accesors string BMI::getName() const { return newName; } int BMI::getHeight() const { return newHeight; } double BMI::getWeight() const { return newWeight; } Below is the main.cpp : #include "stdafx.h" #include <iostream> #include <string> #include "BMI.h" int main() { string name; int height; double weight; cout << "Enter your name: "; cin >> name; cout << "Enter your height in inches: "; cin >> height; cout << "Enter your weight in pounds: "; cin >> weight; //BMI Student_1; //we create the object here that automaticly uses the default constructor //below if you want to also use overload constructor as well then add the following BMI Student_1(name, height, weight); cout << endl << "Patients name: " << Student_1.getName() << endl << "Height: " << Student_1.getHeight() << endl << "weight: " << Student_1.getWeight() << endl; return 0; } Hi Logan! > it keeps showing an error Your code compiles and runs fine for me (Using gcc 7.2.0), what's the error you're getting? > I also left out the pragma code. Is it better to use it than the #ifndef option? #ifndef is better, because #pragma once is not supported by all compilers (I don't know a single one that doesn't though) but #ifndef is more tedious. > Also wondering how I transfer the colour tags? The code highlighting is done by Visual Studio, it's not part of the source code. learncpp has it's own source code highlighter, just place CODE tags around your code (Yellow box below reply textfield) This is the error message I am getting: Severity Code Description Project File Line Suppression State Error C1010 unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source? BMI.main c:\users\logan\documents\visual studio 2015\projects\bmi.main\bmi.main\bmi.cpp 37 That appears to be a problem with visual studio. Glenn Teitelbaum @ Stackoverflow wrote Project Properties -> C++ -> Precompiled Headers set Precompiled Header to "Not Using Precompiled Header". I am at a loss. My entire venture into learning C++ is halted on this until I can get this to work. I don't see the options on my version to not have a precompiled Header, other than to create single files. But I don't see how that matters. This is basic stuff, every program is built using Classes, someone must know what exactly I need to do in order to make this program work. When you build these multiple files, do you build each one. On the tutorial I just saw him building the main one and it already worked with the Header and .cpp files. Maybe I am not doing something basic. If the code works for you, then perhaps my execution of the code is all wrong. Any basic steps I should be following? Should I put them into new files? Your IDE (Visual Studio) will do everything for you when you press build, there's no need to set up a compilation order and what not. The code works for me, because I am using a different compiler than you. Google "C1010" and see if you can find a solution to your problem, I don't have Visual Studio, I can't help any further. If you can't find a solution but want to continue the tutorials you can do so by using an online compiler (eg. ). Most online compilers don't support multiple files so you'll have to skip some tutorials. If you're lucky perhaps someone with Visual Studio will see your comment and help you. Thanks, I will checkout another IDE, and least it works the code. So must be something basic at fault. I will setup the files and remove any added settings, or change them around. Typo Spotted ! There is no need of (IntArray::) at Line No. 11 if you're defining the member function inside the class itself ! It gives an error ! Thanks For The Great Tutorials tho 🙂 Thanks. Visual Studio didn't complain about that. Oh ! It gave me an error thats why I put it in the comments 😉 dear alex, why the handler doesn't catch this exception? It's complaining about you trying to assign a char pointer to a string literal. This is dangerous, because string literals may end up in read-only memory. If you try and change them, your code will crash. You should use this syntax instead: That will ensure merhaba is allocated memory that you can safely change. Oh no, my question wasn't that. the question is why handler didn't catch it? the program is just running in screenshot (it's in the debug mode) here a new screen: Oh, sorry. Operator[] doesn't do any bounds checking, so it never throws an exception to be caught. you know what you are an amazing people for that support back! thank you, god will certainly remember your kindnesses! Hello Alex, others. It turns out that and have the same effect. Is there any difference that I'm not aware of? Not that I'm aware of.
http://www.learncpp.com/cpp-tutorial/145-exceptions-classes-and-inheritance/
CC-MAIN-2018-13
refinedweb
1,510
74.79
While working on the iteration of a blog with russian contents we had to change the slugs from cyrillic to transliterated russian. RusToLat is a great plugin that does just that, but unfortunately it only does the transliteration for new or "edited" posts (i.e. you have to open the post at least once and "edit" the permalink, then it will be transliterated). Since this blog more than 500 posts this manual updating wasn't an option so we wrote this simple script. Maybe it will save somebody some time. You'd probably better back-up your database before updating it (google for mysqldump syntax). Expand | Embed | Plain Text - <?php - - // just a helper func to see how long it took to process slugs - function microtime_float(){ - return ((float)$usec + (float)$sec); - } - $time_start = microtime_float(); - - // replace %root% and %secret% with your database credentials - $db = mysql_connect('localhost', 'root', 'secret'); - // replace %wordpress_db% with your database - mysql_select_db('wordpress_db', $db); - - $sql=""; - - // this dictionary is copied from the source of - // which is great, but only works for new posts - // in the situtation when I had > 500 posts it was easier to write this script - // than to go one by one updating slugs - "��"=>"YE","��"=>"I","��"=>"G","��"=>"i","�"=>"#","��"=>"ye","��"=>"g", - "��"=>"A","��"=>"B","��"=>"V","��"=>"G","��"=>"D", - "��"=>"E","��"=>"YO","��"=>"ZH", - "��"=>"Z","��"=>"I","��"=>"J","��"=>"K","��"=>"L", - "��"=>"M","��"=>"N","��"=>"O","��"=>"P","� "=>"R", - "�¡"=>"S","�¢"=>"T","�£"=>"U","�¤"=>"F","�¥"=>"X", - "�¦"=>"C","�§"=>"CH","�¨"=>"SH","�©"=>"SHH","�ª"=>"'", - "�«"=>"Y","�¬"=>"","Ã�ÂÂ"=>"E","�®"=>"YU","�¯"=>"YA", - "�°"=>"a","�±"=>"b","�²"=>"v","�³"=>"g","�´"=>"d", - "�µ"=>"e","��"=>"yo","�¶"=>"zh", - "�·"=>"z","�¸"=>"i","�¹"=>"j","�º"=>"k","�»"=>"l", - "�¼"=>"m","�½"=>"n","�¾"=>"o","�¿"=>"p","��"=>"r", - "��"=>"s","��"=>"t","��"=>"u","��"=>"f","��"=>"x", - "��"=>"c","��"=>"ch","��"=>"sh","��"=>"shh","��"=>"", - "��"=>"y","��"=>"","��"=>"e","��"=>"yu","��"=>"ya","�«"=>"","�»"=>"","�"=>"-" - ); - - // this is a name of file that will be generated to use later to actually update our DB - // it can be anything you want and by default it will be created in the same directory - // where this script is - $myFile = "slugs_fix.sql"; - - # slugs - $q = mysql_query("select * from wp_posts where post_status = 'publish' and post_type = 'post'", $db); - while ($row=mysql_fetch_assoc($q)){ - - $slug = $row["post_name"]; - $id = $row["ID"]; - - // post_name is url-encoded � it's stored in a format - // such as %D1%85%D0%B2%D0%BE%D1%81%D1%82 - // translate the string - - $sql .= "update wp_posts set post_name = '" . $slug . "' where id = '" . $id . "'; \n"; - - $stringData = $sql; - - $sql=""; - $slug=""; - - } - - mysql_close($old); - - - $time_end = microtime_float(); - $time = $time_end - $time_start; - - // okay, the file is written - // and now it can be used like this: - // mysql -u root -p wordpress_db < slugs_fix.sql - // after issuing this command your slugs should be updated Report this snippet Tweet
http://snipplr.com/view/50246/transliterate-existing-cyrillic-slugs-postname-in-wordpress/
CC-MAIN-2017-51
refinedweb
686
60.69
-14-2022 12:30 AM Hi, I'm trying to examine a malloc'ed array of struct in the variables window. When I try to set 'Estimated Number of Elements', I get a popup message 'a custom control callback raised an exception'. This occurs even if I set the estimated number to 1. When I close this popup, the 'Estimated Number of Elements' popup has a busy mouse pointer, and does not respond to OK or cancel buttons. At this point I am forced to restart CVI. CVI itself is not hung, the taskbar icon responds to right-click, close window. The issue occurs in both versions of CVI I have installed: 20.0.0 (49252) and 19.0.0 (49155). Has anyone seen this before or have any suggestions? Maybe related, maybe not, but the execution profiler does not work on either version of CVI on this PC. When enabled the code does nothing, forever.. min_rep_example: Note the problem does not occur if I remove d1 from the structure and code. #include <ansi_c.h> #include <utility.h> typedef struct struct_1_t { double d0; double d1; } struct_1_t; int MinRepEx(int sz) { struct_1_t *struct_arr_ptr = malloc(sz * sizeof(struct_1_t)); for(int i=0; i<sz; i++) { struct_arr_ptr[i].d0 = i * 1.0; struct_arr_ptr[i].d1 = i * 1.0 + 20.0; } for(int i=0; i<sz; i++) { printf("%.1f %.1f, ", struct_arr_ptr[i].d0, struct_arr_ptr[i].d1); } Breakpoint(); free(struct_arr_ptr); return 0; } int main (int argc, char *argv[]) { MinRepEx(5); return 0; } This site uses cookies to offer you a better browsing experience. Learn more about our privacy statement and cookie policy. What do you need our team of experts to assist you with? Thanks! We'll be in touch soon!
https://forums.ni.com/t5/LabWindows-CVI/a-custom-control-callback-raised-an-exception-when-Estimating/td-p/4203866
CC-MAIN-2022-27
refinedweb
287
67.76
Calling a Decrypt function within a class from a Data Binding expr Discussion in 'ASP .Net Security' started by kfrost, Dec 16, 2005.82 - Mike Wahler - Jan 10, 2005 Using operator new in function call and what the expr evaluates toEric Lilja, May 20, 2007, in forum: C++ - Replies: - 3 - Views: - 363 - Gianni Mariani - May 20, 2007 decrypt challenge - perl encrypt with ruby decrypt, Jun 13, 2007, in forum: Ruby - Replies: - 1 - Views: - 497 - Daniel Martin - Jun 16, 2007 Syntax bug, in 1.8.5? return not (some expr) <-- syntax error vsreturn (not (some expr)) <-- fineGood Night Moon, Jul 22, 2007, in forum: Ruby - Replies: - 9 - Views: - 353 - Rick DeNatale - Jul 25, 2007 Eval of expr with 'or' and 'and' withinNick the Gr33k, Jun 14, 2013, in forum: Python - Replies: - 40 - Views: - 363 - Nick the Gr33k - Jun 15, 2013
http://www.thecodingforums.com/threads/calling-a-decrypt-function-within-a-class-from-a-data-binding-expr.768096/
CC-MAIN-2015-18
refinedweb
139
53.89
Block swap algorithm for rotation of the array In this tutorial, we will learn about the block-swap algorithm for rotation of the array in C++. Also, we will have an array and we have to rotate the array by s elements. For Example: - Given array: {4, 5, 12, 8, 1, 9} - Here s=2 - Rotate elements by 2 in the array. - The output: { 12, 8, 1, 9, 4, 5 } Approach: - Firstly, initialize the array x and y. - Go for the following steps until the size of the array x becomes equal to the size of the array. - If x is shorter than y, then divide y into yl and yr such that yr is of the length equal to x. Now, Swap x and yr to change xylyr into yrylx. Now x is at the final position, so we will recur on y pieces. - If y is shorter than x, then divide x into xl and xr such that xl is of the length equal to y. Now, Swap xl and y to change xlxry into yxrxl. Now y is at the final position, so we will recur on x pieces. - Finally, the size of the array x is equal to the size of the array y. Now, Block swap them. This is the block swap algorithm. You may also like: How to Validate a phone number in C++? C++ program of Block-swap algorithm for rotation of the array Hence, you can see the recursive implementation here. #include<iostream> using namespace std; void printarr(int arr[], int sz); void swapfn(int arr[], int ff, int ss, int s); void lftrotation(int arr[], int s, int no) { // If number of elements that is rotated equal to zero or if equal to size of the array if(s == 0 || s == no) return; if(no-s == s) { swapfn(arr, 0, no-s, s); return; } /* If x is short*/ if(s< no-s) { swapfn(arr, 0, no-s, s); lftrotation(arr, s, no-s); } else /* If y is short than x*/ { swapfn(arr, 0, s, no-s); lftrotation(arr+no-s, 2*s-no, s); } } // print void printarr(int arr[], int sz) { int i; for(i = 0; i < sz; i++) cout<<arr[i]; } void swapfn(int arr[], int ff, int ss, int s) { int tmp; for(int i = 0; i<s; i++) { tmp = arr[ff + i]; arr[ff + i] = arr[ss + i]; arr[ss + i] = tmp; } } int main() { int arr[] = {4,1,8,0,5,6}; lftrotation(arr, 2, 6); printarr(arr, 7); getchar(); return 0; } OUTPUT EXPLANATION: INPUT: {4,1,8,0,5,6} s=2 OUTPUT: {8,0,5,6,4,1} You may also read:
https://www.codespeedy.com/block-swap-algorithm-for-rotation-of-the-array-cpp/
CC-MAIN-2020-40
refinedweb
446
73.41
I hope to write the join_lists function to take an arbitrary number of lists and concatenate them. For example, if the inputs are m = [1, 2, 3] n = [4, 5, 6] o = [7, 8, 9] then we I call print join_lists(m, n, o), it will return [1, 2, 3, 4, 5, 6, 7, 8, 9]. I realize I should use *args as the argument in join_lists, but not sure how to concatenate an arbitrary number of lists. Thanks. One way would be this (using reduce) because I currently feel functional: import operator from functools import reduce def concatenate(*lists): return reduce(operator.add, lists) However, a better functional method is given in Marcin's answer: from itertools import chain def concatenate(*lists): return chain(lists) although you might as well use itertools.chain(iterable_of_lists) directly. A procedural way: def concatenate(*lists): new_list = [] for i in lists: new_list.extend(i) return new_list A golfed version: j=lambda*x:sum(x,[]) (do not actually use this). Although you can use something which invokes __add__ sequentially, that is very much the wrong thing (for starters you end up creating as many new lists as there are lists in your input, which ends up having quadratic complexity). The standard tool is itertools.chain: def concatenate(*lists): return itertools.chain(*lists) or def concatenate(*lists): return itertools.chain.from_iterable(lists) This will return a generator which yields each element of the lists in sequence. If you need it as a list, use list: list(itertools.chain.from_iterable(lists)) If you insist on doing this "by hand", then use extend: def concatenate(*lists): newlist = [] for l in lists: newlist.extend(l) return newlist Actually, don't use extend like that - it's still inefficient, because it has to keep extending the original list. The "right" way (it's still really the wrong way): def concatenate(*lists): lengths = map(len,lists) newlen = sum(lengths) newlist = [None]*newlen start = 0 end = 0 for l,n in zip(lists,lengths): end+=n newlist[start:end] = list start+=n return newlist You'll note that this still ends up doing as many copy operations as there are total slots in the lists. So, this isn't any better than using list(chain.from_iterable(lists)), and is probably worse, because list can make use of optimisations at the C level. Finally, here's a version using extend (suboptimal) in one line, using reduce: concatenate = lambda *lists: reduce((lambda a,b: a.extend(b) or a),lists,[]) You can use sum() with an empty list as the start argument: def join_lists(*lists): return sum(lists, []) For example: >>> join_lists([1, 2, 3], [4, 5, 6]) [1, 2, 3, 4, 5, 6] This seems to work just fine: def join_lists(*args): output = [] for lst in args: output += lst return output It returns a new list with all the items of the previous lists. Is using + not appropriate for this kind of list processing? Another way: >>> m = [1, 2, 3] >>> n = [4, 5, 6] >>> o = [7, 8, 9] >>> p = [] >>> for (i, j, k) in (m, n, o): ... p.append(i) ... p.append(j) ... p.append(k) ... >>> p [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> Or you could be logical instead, making a variable (here 'z') equal to the first list passed to the 'join_lists' function then assigning the items in the list (not the list itself) to a new list to which you'll then be able add the elements of the other lists: m = [1, 2, 3] n = [4, 5, 6] o = [7, 8, 9] def join_lists(*x): z = [x[0]] for i in range(len(z)): new_list = z[i] for item in x: if item != z: new_list += (item) return new_list then print (join_lists(m, n ,o) would output: [1, 2, 3, 4, 5, 6, 7, 8, 9]
http://m.dlxedu.com/m/askdetail/3/7765b1376cb05221b0a5927324e2e845.html
CC-MAIN-2019-04
refinedweb
637
64.54
Since Python Tk widgets are classes, we can use inheritance to specialize widgets for our applications. A common use case is specifying themes for our widgets so that our GUI controls look consistent. In this tutorial, I’ll explain how to make themed Tk widgets. themed_buttons.py from tkinter import * class ThemedFrame(Frame): def __init__(self, parent=None, **configs): Frame.__init__(self, parent, **configs) self.config(bg='Red', borderwidth=10) self.pack(expand=YES, fill=BOTH) class ThemedButton(Button): def __init__(self, parent=None, **configs): Button.__init__(self, parent, **configs) self.config(font=('Arial', 32)) self.pack() if __name__ == '__main__': frame = ThemedFrame() ThemedButton(frame, text='Quit', command=(lambda: sys.exit())) frame.mainloop() The above code makes the following window. The background is red and the button has its font set to Arial 32. All of the ThemedButtons and ThemedFrames in this application will adhere to a consistent styling. Making the ThemedFrame and ThemedButton are fairly straightforward. For ThemedFrame, we create a ThemedFrame class and have it extend Frame. Line 6 calls the Frame’s __init__ method and then we start our custom configuration on line 7. In this case, we set the frame’s background to red and give it a border that is 10 pixels thick. Then we pack the frame and set it’s expand and fill options so that the frame always resizes with the window. ThemedButton follows the same pattern as ThemedFrame. The ThemedButton class extends Button. On line 12, we call Button’s __init__ method followed by configuration options on line 14. In this case, we set the button’s font to Arial 32. Then we call the pack() method. The demonstration part is found on lines 18-21. We create a ThemedFrame object on line 19. It’s made the same way as a regular Frame. Line 20 makes a ThemedButton. The constructor is consistent with Button’s constructor, so we are free to pass attributes such as the text and callback handlers to the button. Finally, we call mainloop() on ThemedFrame. All of this works because ThemedButton and ThemedFrame are simply specialization of their parent classes.
https://stonesoupprogramming.com/tag/tkinter/page/2/
CC-MAIN-2019-13
refinedweb
353
69.28
Extract custom metrics from any OpenMetrics endpoints. Note: All the metrics retrieved by this integration are considered as custom metrics. The OpenMetrics check is packaged with the Datadog Agent starting version 6.6.0. Edit the openmetrics.d/conf.yaml file at the root of your Agent’s configuration directory to add the different OpenMetrics instances you want to retrieve metrics from. Each instance is at least composed of the following parameters: prometheus_url: Points to the metric route (Note: it must be unique) namespace: Namespace to be prepended to all metrics (allows to avoid metrics name collision) metrics: A list of metrics that you want to retrieve as custom metrics, for each metric you can either simply add it to the list - metric_nameor renaming it like - metric_name: renamed. It’s also possible to use a *wildcard such as - metric*that fetches all matching metrics (to use with caution as it can potentially send a lot of custom metrics). There is also a couple of more advanced settings ( ssl, labels joining, tags,…) that are documented in the conf.yaml example configuration. Mistake in the docs? Feel free to contribute!
https://docs.datadoghq.com/integrations/openmetrics/
CC-MAIN-2019-26
refinedweb
188
54.93
multiple queries (1) By anonymous on 2020-12-11 17:38:04 [link] [source] The CLI evaluates multiple selects e.g. select * from table1;select * from table2; sqlite3_prepare does not complain when the sql statement contains multiple selects. When I invoke sqlite3_step, it has the first of the multiple select statements. How do I move to the second & subsequent statements? i.e What API do I call? (2) By Richard Hipp (drh) on 2020-12-11 18:56:45 in reply to 1 [link] [source] That's what the pzTail parameter to sqlite3_prepare() is for - it returns a pointer to the remaining text in the input that has not yet been prepared. So you do a loop. You run "sqlite3_prepare()" on the whole SQL, but you remember the pzTail. Then you run sqlite3_step() on the prepared statement until it finishes. Then you run sqlite3_finalize() on the prepared statement. Then as long as you have more SQL text to process, you do the whole thing again. (3) By anonymous on 2020-12-11 20:51:06 in reply to 2 [link] [source] (4) By anonymous on 2020-12-12 13:15:44 in reply to 2 [link] [source] */ ); I can retrieve the SQL statement that is executed (in the first pass of the prepare ... step ... finalize loop) from **pmtStmt; using the same code with argument **pzTail returns null or "". (I've tried several other C# expressions without success). Is there an SQLite3 function to retrieve the string from the pointer **pzTail? If there isn't, any clues on how to retrieve the string from the pointer **pzTail with C#? (5) By Larry Brasfield (LarryBrasfield) on 2020-12-12 16:22:35 in reply to 4 [link] [source] This is more of a C# question than a SQLite API question. That said, there is not going to be a good way to use that pzTail out pointer from the C# calling context. At the C API level, it will be pointing within the range of chars referenced by zSql. But at the C# calling level, the string parameter passed whose content ultimately becomes something referenced by a zSql is likely to be stored in a temporary whose lifetime expires before or during the return to the C# calling context. Hence, the pzTail value coming out of the C-level call will be referencing memory that likely will not be allocated to hold zSql content when the C# calling code regains control. If you have control of the adapter layer between the C# interface and the SQLite C library, you could create there a new C# string reflecting the content portion referenced by pzTail, and make that an out parameter of the C# interface. The means of copying C string content to C# string objects should be easily found (and they are off-topic here.) Is there an SQLite3 function to retrieve the string from the pointer **pzTail? No. That would be a simple C expression too trivial to merit an API entry. (6) By anonymous on 2020-12-12 16:50:27 in reply to 5 [link] [source] whose lifetime expires before or during the return to the C# calling context. Thanks for the insight; that is what seems to be happening although the pzTail pointer remains non-zero on return to C#. To me that suggests that the pointer is still alive in the DLL. If that is the case, then an API to return the remaining portion of the SQL would be handy. Unless I find a way (still trying to find one) to get the pzTail string value, I have a (I think neat) workaround, as below in pseudo code: while (sql.Length!=0) /* sql = multiple queries separated by ; { prepare ... step ... finalize ... executedSQL = the SQL statement that ppStmt executed /*I can get this */ sql = sql.Replace(executedSQL,""); } (7) By Larry Brasfield (LarryBrasfield) on 2020-12-12 18:44:14 in reply to 6 [link] [source] To me that suggests that the pointer is still alive in the DLL. If that is the case, then an API to return the remaining portion of the SQL would be handy. The pointer is nothing more than a single value, easily passed by value. The issue is whether that pointer points to something that can be referenced. My point is that is probably does not upon return to the C# call site. There is no way for a heretofore nonexistent SQLite API to later "return the remaining portion of the SQL" unless the connection were to store it away for future reference, which would also require a way to cease storing it. I would bet long odds against that happening, particularly because the existing API already permits the operations you would like to perform. (It does not support them in quite the manner you are thinking, but it does support the work-around that I suggested earlier.) Your possibly neat workaround could be made to work. With very similar memory allocation, you could just extract the single SQL statements from the statement glom and do the prepare/step/finalize on each one. That would be more straightforward, IMO. (8) By Keith Medcalf (kmedcalf) on 2020-12-12 18:57:56 in reply to 7 [link] [source] With very similar memory allocation, you could just extract the single SQL statements from the statement glom and do the prepare/step/finalize on each one. That would be more straightforward, IMO. Though of course you would have to parse the statement to find the endings if the statement contained any quotes (quote, double-quote, square-brackets or backticks) and make sure they were nested/unnested properly in order to find the end of a statement. Alternatively I suppose you could split on a semicolon and re-assemble complete statements using the sqlite3_complete API to fixup improper divisions... (9) By Larry Brasfield (LarryBrasfield) on 2020-12-12 19:13:23 in reply to 8 [link] [source] The OP had claimed " /*I can get this */", which I elected to bypass because I suspect it is the same glutton for tedious work responsible for several other threads here lately. Using sqlite3_complete() would certainly work, albeit without the clarity one might like. Not glomming statements together from the outset would be clearer yet. Or, if the OP in fact is creating the C#/SQLite-C adapter layer, it would be very simple for it to have an out integer parameter which returns the number of characters consumed. Given that the zSql content is utf-8, computing that may take a bit more than pointer differencing, but it would be at least clean. (10) By anonymous on 2020-12-12 21:49:36 in reply to 9 [link] [source] it would be very simple for it to have an out integer parameter which returns the number of characters consumed. I am unclear as to where (prepare or step or finalize or else where .. by reference to the pseudo code above) you would specify this? (12) By Larry Brasfield (LarryBrasfield) on 2020-12-13 19:44:58 in reply to 10 [link] [source] public int RunOneStatement( SQLiteConnection db, string sqlGlom, out int charsUsed, ... ) { char * zSql = sqlGlom.?; rc = sqlite3_prepare_v2(db.?, zSql, sqlGlom.bytelength, & pStmt, & pzTail); step ...; finalize ...; charsUsed = 0; while (*zSql && zSql < pzTail){ // Advance zSql by one utf-8 code. ++charsUsed; } // Free zSql if necessary. } The '...' in the signature would likely be a delegate to handle per-step actions that are needed. The '.?' methods are whatever it takes to get representations usable in native (or C) code. This would enable the same sort of loop, consuming a single SQL statement per iteration, that you envisioned when you asked about getting/using pzTail. The difference here is that pzTail is still a valid pointer where used. At the C# level calling the above function, just lop off as many character codes as charsUsed indicates, or terminate the loop when it equals zero. (13) By anonymous on 2020-12-14 14:45:57 in reply to 12 [link] [source] The value for ++charsUsed I get is none of these: - The length of the original SQL statement - The length of the executed SQL (in the current iteration) - The length of the remaining SQL (for the subsequent iterations) Possibly coding error on my part but the fact that it is a non-zero value is intriguing. I can workaround by simply replacing the executed SQL from the original SQL in every iteration. (14) By Keith Medcalf (kmedcalf) on 2020-12-14 15:15:54 in reply to 13 [link] [source] zTail needs to be a char* eg: The size of zTail and zSql are the size of a pointer (32-bit for 32-bit or 64-bit for 64-bit). bytesused will contain the number of bytes used, or -1 if all of them was used.The size of zTail and zSql are the size of a pointer (32-bit for 32-bit or 64-bit for 64-bit). bytesused will contain the number of bytes used, or -1 if all of them was used. char *zSql = <the SQL CString> char *zTail = 0; int bytesused = -1; sqlite3_prepare(... zSql ... &zTail); if (zTail) bytesused = (int)((IntPtr)zTail-(IntPtr)zSql); ... NB: I don't know C# -- as a Microsoft language I assume it is completely broken for all purposes so I have no idea how you interface C# with a actual computer code (15) By Larry Brasfield (LarryBrasfield) on 2020-12-14 15:55:27 in reply to 14 [link] [source] I believe that zSql, and hence zTail also, are pointers to a UTF-8 code sequence. [a] The reason that I did not advise the pointer arithmetic you suggest is because it does not yield the number of characters consumed unless they happened to be restricted to the ASCII subset of UTF-8 code points. [a. Per the sqlite3_prepare doc, zSql is "/* SQL statement, UTF-8 encoded */". ] (16) By Keith Medcalf (kmedcalf) on 2020-12-14 16:11:11 in reply to 15 [link] [source] But C# strings, like everything Microsoft, are UTF-16. So you have to convert it to some kind of "array of bytes" that are UTF-8 encoded. So since you are fiddling with an array of bytes then simply knowing the difference in bytes is meaningful. Knowing the number of "UTF-8" (Unicode) codepoints does not help with handling UTF-16 character-points. Unless you are expecting to be lucky. (17) By Larry Brasfield (LarryBrasfield) on 2020-12-14 16:34:46 in reply to 16 [link] [source] I have no expectation of luck or desire to rely on it. At the point I suggested, to the OP, a way of knowing how much of a multi-statement string was accepted by sqlite3_prepare, the "strings" are simple char* but known to be referencing UTF-8 code sequences. Computing the accepted number of "characters" (or UTF-8 code points) has nothing to do with C# at that level, with this tiny proviso: Once control gets back to the C# domain, where we can presume the multi-statement string appears as a CLR String type, it is quite easy to lop off the accepted portion using the .Substring(int startIndex) method, where that index is not a count of UTF-16 words but is a zero-based "character position". That .Substring() method can be safely used without having to anticipate that the UTF-16 encoding used for CLR String objects will result in some other number of characters being lopped off or that a UTF-16 code point representation will be sliced into pieces. It is because zSql (and zTail) point to possibly multi-byte character representations that my suggested code did not do simple pointer arithmetic. Doing that might have worked, but only by luck. As we know, having something work by luck is often a form of bad luck. (11) By anonymous on 2020-12-12 21:57:38 in reply to 8 [link] [source] glom Never seen this word being used; I had to look it up! Alternatively I suppose you could split on a semicolon I did consider this but getting the executed statement from ppStmt seems easier. (18) By anonymous on 2020-12-18 07:07:13 in reply to 2 [link] [source] I found further sound advice here. (22) By David Jones (vman59) on 2020-12-19 21:23:23 in reply to 18 [link] [source] I wrote my own support library that lets my applications deal with SQLite in a style in between the simplicity of sqlite3_exec() and tediousness of the standard prepare,bind,step,column_xxx loop. The basic pattern for this library is: Notes:Notes: #include "statement_store.h" sps_store sps; struct sps_context *ctx; int emp_num, dept_num; char *first, *last; sps_define1 (sps, "emp-by-last", "SELECT * FROM employee WHERE last LIKE ?1", "s"); ctx = sps_execute2 (sps, "emp-by-last","%son"); while (sps_next_row3(ctx,"itti",&emp_num,&first,&last,&dept_num)) { printf("%8d %-12s %-15s %5d\n", emp_num, first, last, dept_num); free ( first ); free ( last ); } if (ctx->rc != SQLITE_DONE) printf ("Error retrieving data"); printf ("Rows retrieved: %d\n", ctx->count); rc = sps_rundown4(ctx); - Pepares the SQL statement and saves the resulting sqlite3_stmt object in the sps_store object, associating it with a tag ("emp-by-last"). - Lookup the statement object and bind the callers arguments to the statement's parameters. The function uses the bind map passed to sps_define() ("s") to determine the number and data types of these arguments. - Retrieve the next row in the result set and convert the column values to the caller's arguments based on the conversion codes specified in the second argument ("itti" -> int, text, text, int). - Reset the statement and free the sps_context object. The statement is not finalized and may be reused by subsequent calls to sps_execute. (23) By anonymous on 2020-12-19 22:31:27 in reply to 22 [link] [source] I'm having a hard time following your code since I have virtually nil experience of C/C++. Nonetheless I think I get the idea. Is your approach executing each SQL statement ('fully') twice thus doubling runtime? ('fully' i.e. without LIMITing the number of records returned). This grabbed my attention: printf ("Rows retrieved: %dn", ctx->count); Is the value of ctx on reaching this statement always 1 or is it the number of rows returned by the SQL statement? I didn't think there was a way of retrieving the number of rows in a result except by reiteration. Looks like ctx = sps_execute2 (sps, "emp-by-last","%son"); is returning the row count with sps_execute2 reiterating the result. Is that correct? (24) By David Jones (vman59) on 2020-12-20 10:57:04 in reply to 23 [source] sps_execute() binds the parameter values and returns a context for retrieving the result rows of a query, no rows have been returned yet by the SQLite VDBE. The sps_next_row() loop retrieves at most one row each call and increments ctx->count if successful. My library has another function, sps_execute_rowless(), which does run the statement 'fully' before returning, but that's used where you expect sqlite3_step() to immediately return SQLITE_DONE rather than one or more SQLITE_ROW return values first (e.g. "COMMIT;"). (19) By anonymous on 2020-12-19 18:19:54 in reply to 2 [link] [source] I've sorted the loop using pzTail. SQL is: select * from highscores;select * from ajay;select * from highscores where 0 = 1;select date("now"); SQL executed is: select * from highscores; SQL remaining is: select * from ajay;select * from highscores where 0 = 1;select date("now"); SQL executed is: select * from ajay; SQL remaining is: select * from highscores where 0 = 1;select date("now"); SQL executed is: select * from highscores where 0 = 1; SQL remaining is: select date("now"); SQL executed is: select date("now"); SQL remaining is: Having started with multiple statements, every iteration progresses to the next SQL statement. However, if a statement encounters an error, the SQL in that iteration and those following is wrong. I introduced a deliberate error in the second SQL statement ... selectx instead of select as the first time round: SQL is: select * from highscores;selectx* from ajay;select * from highscores where 0 = 1;select date("now"); SQL executed is: select * from highscores; SQL remaining is: selectx* from ajay;select * from highscores where 0 = 1;select date("now"); SQL executed is: /* goes wrong hereon */ SQL remaining is: * from ajay;select * from highscores where 0 = 1;select date("now"); SQL executed is: SQL remaining is: from ajay;select * from highscores where 0 = 1;select date("now"); SQL executed is: SQL remaining is: ajay;select * from highscores where 0 = 1;select date("now"); SQL executed is: SQL remaining is: ;select * from highscores where 0 = 1;select date("now"); SQL executed is: ;select * from highscores where 0 = 1; SQL remaining is: select date("now"); SQL executed is: select date("now"); SQL remaining is: Any guidance on overcoming invalid SQL statements when in a prepare/step/finalize loop? (20) By Larry Brasfield (LarryBrasfield) on 2020-12-19 20:14:33 in reply to 19 [link] [source] You might do a search on "parser" and "error recovery" (together.) As you will see, it is not a trivial task. I am not surprised that the SQLite parser has not been made to somehow figure out what should be consumed as erroneous while leaving what is maybe not. If you insist on solving the problem, finding a statement separator not embedded in quoting delimiters is likely your best bet. You will need to replicate the SQLite scanner (or "lexical analyzer") for that. I am curious as to what the application is that makes solving your stated problem preferable to just complaining about the whole conglomerated statement sequence. (21) By anonymous on 2020-12-19 21:22:05 in reply to 20 [link] [source] Given multiple statements, the CLI either completes or stops at the first error. (I'd use the CLI for building/testing scripts). Programmatically, stopping at the first error is an option (examine the return code from sqlite3_step and stop if not zero). However, this makes the user experience terse/repetitive in that subsequent statements in the multiple queries statement may also be 'incorrect'. My point in asking was to see if there was some way of resuming with the next statement on encountering an error. From what you say, there isn't. Fair enough! To validate each statement in a multiple query statement, I am contemplating splitting at ; and executing each statement and reporting failures in the format "statements nx, ny failed" thereby allowing the user to revisit each failing statement in one go. Of course there is no guarantee that the revision will correct every invalid statement successfully; given this, stopping on the first error is a very reasonable option. (25) By Larry Brasfield (LarryBrasfield) on 2020-12-20 12:20:16 in reply to 21 [link] [source] If you merely split on semicolons, consider how that scheme will treat this: select "silly;column;name" from "silly;table;name" where "silly;column;name" not like '%;%' (26) By anonymous on 2020-12-20 12:55:51 in reply to 25 [link] [source] I know; thanks for making it obvious. Splitting at ; is not a straightforward solution since - there can be a literal value with semi-colon(s) embedded - there can be a literal value with semi-colon AND the literal value can be badly specified i.e. quotation marks are unbalanced. Looks like aborting the loop might be the only practical option when sqlite3_step does not return an expected return code On the back burner for now! (27) By Richard Hipp (drh) on 2020-12-20 13:22:36 in reply to 21 [link] [source] You can search ahead for either end-of-string or a semicolon. If you find a semicolon, then you also need to check that you have a complete SQL statement using the sqlite3_complete() interface, because otherwise the semicolon you found might be in the middle of a string literal or a trigger. (28) By Ryan Smith (cuz) on 2020-12-20 13:46:34 in reply to 27 [link] [source] ... the semicolon you found might be in the middle of a string literal or a trigger. or indeed inside an object name or inside a comment. I had to make an SQLite-like SQL parser for the SQLiteSpeed project long ago and the semi-colon test-case was very similar to Larry's example, only also inside a trigger definition with commented out semi-colons both EOL comments (-- ..;. EOL) and in-line comments (/* ..;. */) and so on. It seems such a simple rule, but parsing it can get hairy if tried straight from the text. (29) By Richard Hipp (drh) on 2020-12-20 14:07:51 in reply to 28 [link] [source] parsing it can get hairy That's why there is the sqlite3_complete() interface! The sqlite3_complete() function takes care of all the messy details for you and lets you know, whether or not the semicolon you found is the end of an SQL statement, or if it is embedded in the middle of an identifier or string literal or trigger. (30) By Ryan Smith (cuz) on 2020-12-20 14:20:20 in reply to 29 [link] [source] Well exactly - and a magnificent API it is. Perhaps my intent wasn't clear - I posted that specifically to discourage self-parsing of the SQL when such marvelous API's exist. As an aside... The reason why I know the trouble of doing it the other way, is that when I started doing work on said project, that was now more than 10 years ago (how time flies!) and at the time, this API was very much not available yet (at least, I believe it wasn't, right? - else I'm just a masochist). :) (32) By Richard Hipp (drh) on 2020-12-20 15:05:28 in reply to 30 [link] [source] this API was very much not available yet The sqlite3_complete() interface (or its pre-version-3 incarnation of "sqlite_complete()") has been available since the very first check-in of SQLite code on 2000-05-29. It's one of the first things I wrote, as it is important for the operation of the CLI. Perhaps it has not been sufficiently publicized... (31) By anonymous on 2020-12-20 14:46:27 in reply to 29 [link] [source] Time to take a look at sqlite3_complete. Question: sqlite3_step appears not to use sqlite3_complete. It appears to parse using each discreet word in the SQL string incrementally until it either fails to parse or hits a semi-colon. Is this correct? (33) By Keith Medcalf (kmedcalf) on 2020-12-20 18:23:09 in reply to 31 [link] [source] sqlite3_step does not use sqlite3_complete nor does it parse anything. sqlite3_step causes the VDBE program created by sqlite3_prepare* to execute to the point at which the next row of output is available, or the execution of the program completes -- or perhaps returns an error if the program execution detects an error condition. sqlite3_prepare_v2 is the thing which parses SQL statements and outputs VDBE programs to be executed by sqlite3_step. It returns it own error returns indicating if a problem was detected in preparation of the VDBE program. They are quite separate and distinct things. If sqlite3_prepare_v2 returns an error, then you have nought to be executing so the question is like saying "so if the house falls down then the store was out of green paint". It rests on a false premise. It is hard to tell if your real underlying problem is merely a failure to check error returns because you appear to be getting to somewhere that should be impossible for a rational person to get to -- or you are continually mistaking sqlite3_step for sqlite3_prepare_v2 -- and it is very difficult to tell what your problem is because of that. (34) By anonymous on 2020-12-20 23:26:00 in reply to 33 [link] [source] or you are continually mistaking sqlite3_step for sqlite3_prepare_v2 True - my mistake. Now sorted. (35) By Keith Medcalf (kmedcalf) on 2020-12-21 07:27:46 in reply to 34 [link] [source] You also suffer from a false premise in that you somehow assume the multiple statements executed in a batch are independent and that a failure in one of them means that you should just carry on with the rest of the batch. This is foolish and likely to lead to significant problems. When a series of statements are submitted to be executed "side by each" as the newfy's say, then when the second one of the sequence fails then those following after that are obviously not going to work as intended? So if a "batch of statements" are submitted is not the fact that a syntax error is detected somewhere in the batch properly an indication that the batch is in error? Why on earth would you want to ignore errors in a non-interactive program? Should not the executor of the batch of commands say "Hold on there matey, the engine did not start so the rest of your instructions cannot be executed until you fix that error condition"? There is no point starting the engine and steering to the open sea if the docking lines could not be untied. There may be exceptional situations in which someone batches together a bunch of non-dependent commands in sequence, but that is probably a very rare exception and will never be the general rule. (36) By anonymous on 2020-12-21 08:04:55 in reply to 35 [link] [source] Your assumptions & you are entitled. Now imagine this: the prepare/step/finalize loop reiterates through 10 queries. Assume that queries 2 5 7 9 are faulty i.e. cannot be prepared. My view is this: 1.logging queries 2 5 7 9 as faulty up front & in one go is a better response than logging query 2 is faulty, then query 5 is faulty, ... etc. 2.reiterating through all queries & executing ones that are valid means that 6 queries (1, 3, 4 6, 8 & 10 ) are processed. Stopping at 2 means that just 1 query is processed. However, I concede that the merits of the 2 approaches is debatable & dependent on the context ... for practical purposes, the SQL scripts will have been tested/proven so the likelihood of failure because of syntax is minimal but runtime failure because of data remains. If the task were to collate daily sales data from 10 community pharmacies stopping at 2 will discard data from 9 of them, completing the loop means that data from only 4 of them is discarded. I think that it is the business logic tier rather than the data tier that has the final say. (37) By Ryan Smith (cuz) on 2020-12-21 10:23:47 in reply to 36 [link] [source] Again, you may be able to craft a set of queries that are independent, but that cannot shape engine behaviour. 1.logging queries 2 5 7 9 as faulty up front & in one go is a better response than logging query 2 is faulty, then query 5 is faulty, ... etc. Imagine that query 1 in the list created a table and query 2 then needs to insert some values into that table - actually this is a very common way to start scripts on this forum intended to explain some bug report. Your idea that we can "check" if query 2 will fail regardless of whether query 1 failed or not becomes very obviously flawed - The Insert would fail horribly, yet if query 1 did actually succeed, query two would be perfectly executable. This is and always has been the premise of "ERROR" generation in all software I think, stop the moment we reach something that cannot be done (and perhaps cannot be handled by any of the provided levels of error-handling). Even your example sends chills down my spine: If the task were to collate daily sales data from 10 community pharmacies stopping at 2 will discard data from 9 of them, completing the loop means that data from only 4 of them is discarded. What financial manger would OK this behaviour? Seeing a report that only includes financials from those business units where "the query succeeded for"... That's Enron-level math. Fix the whole thing first, and only then use it. (38) By anonymous on 2020-12-21 12:48:15 in reply to 37 [link] [source] Fix the whole thing first, and only then use it. There is nothing to fix. The whole thing works to start with. And there is no practical way to envisage every single scenario of data that can ensue in real life. I think that it is the business logic tier rather than the data tier that has the final say.
https://sqlite.org/forum/info/1aa65114ef8631dd
CC-MAIN-2021-49
refinedweb
4,811
66.17
An array can have one dimension or more than one. If it has more than one, it is called a multidimensional array. Note that having multiple dimensions is not the same thing as a jagged array, which has other arrays as its elements. The dimensionality or rank of an array corresponds to the number of indexes used to identify an individual element. You can specify up to 32 dimensions, although more than three is rare. The following example declares a two-dimensional array variable and a three-dimensional array variable. Dim populations(200, 3) As Long Dim matrix(5, 15, 10) As Single The total number of elements is the product of the lengths of all the dimensions. In the preceding example, populations has a total of 804 elements (201 x 4), and matrix has 1056 elements (6 x 16 x 11). Each index ranges from 0 through the length specified for its dimension. A two-dimensional array is also called a rectangular array. When you add dimensions to an array, the total storage needed by the array increases considerably, so use multidimensional arrays with care. All arrays inherit from the Array class in the System namespace, and you can access the methods and properties of Array on any array. The following members of Array can be useful: The Rank property returns the array's rank (number of dimensions). The GetLength method returns the length along the specified dimension. The GetUpperBound method returns the highest index value for the specified dimension. The lowest index value for each dimension is always 0. The Length property returns the total number of elements in the array. The System.Array.Sort method sorts the elements of a one-dimensional array. Note that GetLength and GetUpperBound take a 0-based argument for the dimension you are specifying.
http://msdn.microsoft.com/en-us/library/d2de1t93(VS.80).aspx
crawl-002
refinedweb
302
55.84
In this problem, we are given an integer n. And we have to print all substrings of a number that can be formed but converting the string are not allowed i.e. we cannot convert an integer into string or array. Let’s take an example to understand the topic better − Input: number =5678 Output: 5, 56, 567, 5678, 6, 67, 678, 7, 78, 8 In order to solve this problem, we will need to use mathematical logic. Here, we will print the most significant bit first, then successive bits are printed. Step1: Take a 10’s power number based on the number of digits. Step2: print values recursively and divide the number by 10 and repeat until the number becomes 0. Step3: Eliminate the MSB of the number and repeat step 2 with this number. Step4: Repeat till the number becomes 0. #include <iostream> #include<math.h> using namespace std; void printSubNumbers(int n) ; int main(){ int n = 6789; cout<<"The number is "<<n<<" and the substring of number are :\n"; printSubNumbers(n); return 0; } void printSubNumbers(int n){ int s = log10(n); int d = (int)(pow(10, s) + 0.5); int k = d; while (n) { while (d) { cout<<(n / d)<<" "; d = d / 10; } n = n % k; k = k / 10; d = k; } } The number is 6789 and the substring of a number are − 6 67 678 6789 7 78 789 8 89 9
https://www.tutorialspoint.com/print-all-substring-of-a-number-without-any-conversion-in-cplusplus
CC-MAIN-2021-49
refinedweb
234
69.82
Compiling ScummVM/Wii Compiling ScummVM for Wii or Gamecube This page describes how you build a Wii or Gamecube binary from the ScummVM source tree. First, you have to choose how to obtain the sources: - a downloaded sources archive - a SVN checkout (latest is trunk) Required tools and/or libraries might change at times, and differences for various ScummVM versions are pointed out when necessary. Mandatory tools and libraries The latter two libraries are part of devkitPPC and are already installed with it. However, official ScummVM Wii and Gamecube binaries use unofficial versions, available via git here. Reasons: - libogc's malloc() has been modfied to utilize MEM2. Without this patch, a single binary with all game engines won't be able to run all games (like COMI) due to memory limits - libfat gained a read-ahead cache, without it video sequences will stutter Nevertheless, ScummVM should build just fine with either versions. Versions differences - v0.12.0 (first official version) is built with devkitPPC r15 - trunk (and eventually v0.13.0) changed to devkitPPC r16 (rev 15592 as of this writing): svn co patch it: --- Tremor-vanilla/misc.h 2008-12-20 17:09:56.000000000 +0100 +++ Tremor/misc.h 2008-12-23 22:44:58.000000000 +0100 @@ -48,7 +48,7 @@ #include <sys/types.h> -#if 0 +#if BYTE_ORDER==LITTLE_ENDIAN union magic { struct { ogg_int32_t lo; @@ -58,7 +58,7 @@ }; #endif -#if 1 +#if BYTE_ORDER==BIG_ENDIAN This port doesn't utilize ScummVM's configure system,, keep an engine toggle at STATIC_PLUGIN or disable it by just commenting that line - Vanilla scalers do not work, the code is i386 only - zlib and MAD are part of libogc - MPEG2 support has been dropped from the Wii port, if you want supports for it, you have to cross compile libmpeg2 When you're done with the Makefile, save it and run GNU make..
https://wiki.scummvm.org/index.php?title=Compiling_ScummVM/Wii&oldid=9669
CC-MAIN-2021-39
refinedweb
310
60.04
main.py This file contains the main entry point of the lambda function that is called for the Alexa skill. Python module with a base class to support an Alexa skill set, and scripts to build an AWS python distibution PyAlexa-Skill is an Open Source licensed Python package with a base class that support the necessary methods for an Alexa Skill set and two scripts to help with the creation of the main entry point and the packaging of the AWS Lambda Function for the Alexa Skill Set. The AlexaBaseHandler class is an abstract class that provides the necessary framework to build the necessary response hooks for an Alexa application. All of the abstract methods of this class must be implemented by the concrete implementation class. See the base class for details on the abstract methods. This method will take the 2 parameters that are sent to the lambda function and determine which of the Alexa handlers to invoke. For the Amazon Built-in requests such as AMAZON.YesIntent, process request will call a method of the form: on_<intentname>_intent, e.g. on_yes_intent(). It is expected that the concrete implementation will have a the necessary methods to support the built-in intents. For custom intents, such as, MyCustomIntent, process request will call a method of the form: on_<intentname>_intent, e.g. on_mycustomintent_intent(). In this case, no assumption is made about the custom intent name, so the entire name is lower cased, then used in the creation of the dynamic method call. For Amazon Built-in requests such as AudioPlayer.PlaybackStarted, process request will call a method of the form: on_<major name>_<minor name>, e.g. on_audioplayer_playbackstarted(). If any of the dynamically called methods is not found, a NotImplementedError exception is raised. This method ( from the Alexa color example ) will put together the speechlet portion into a properly formatted json message. This is typically called by the concrete implementations of the AlexaBaseHandler. This method (from the Alexa color example ) will construct a properly formatted response message so the Amazon Echo knows what to respond with. This class is a reference implementation that does nothing useful. All Alexa handlers are handled the same way. To create the concrete implementation use the following: from pyalexaskill import AlexaBaseHandler class MyConcreteAlexaHandler(AlexaBaseHandler.AlexaBaseHandler): # implement the abstract methods This file contains the main entry point of the lambda function that is called for the Alexa skill. This method ( which can be called anything, you just need to configure it in the lambda handler ), is the method that is called with the 2 parameters. This method will typically instantiate an concrete implementation of the AlexaBaseHandler and delegate to the process_request method. This file is the standard Python requirements file. This file is used by the create_deployment.py script to install the necessary 3rd party libraries that your Alexa skill might need. Any library specified in the requirements.txt file will be installed into your deployment directory. This script creates a zip file per the Amazon lambda specification, such that it is suitable to upload as your lambda function implementation. activate your virtualenv and execute like: create_aws_lambda.py -r <rootdir> -i "list,of,all,python,files,to,include" This script creates a template main entry point All deployments are stored in the deployments subdirectory and follow the naming convention of ‘deployment_n’ and ‘deployment_n.zip’, where ‘n’ is automatically calculated to the next largest ‘n’ in the directory. Right now it does this based on the name of the subdirectories of deployments - NOT - the names of the zip files. The deployment script will create a deployment directory and zip file for everything in the requirements.txt file AND the files in the deployment_files variable in the create_deployment.py file. When this script is done running, there should be a ‘deployment_n.zip’ file in the deployments directory. It is that file that needs to be upload to the Amazon Lambda console. activate your virutal env and execute like: create_aws_main.py This script creates a template concrete handler class. This template can be used as the starting point to create the necessary implementation details for the handler. activate your virtualenv and execute like: create_alexa_handler.py This script creates a template utterance and intent schema. This template can be used as the starting point to create the necessary implementation details for an actual utterance and intent schema. activate your virtualenv and execute like: create_alexa_test_skills.py Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyalexa-skill/
CC-MAIN-2017-26
refinedweb
757
55.95
I'm trying to figure out a way to return the value from a specific position inside a 2D array in JAVA. I've been searching for hours. I might be tired, or using the wrong terms but I can't find anything useful so far... So for example I have a variable "a" and I want it to receive the value that is contained at a specific array position. So for example, I want the value contained at the position : array[1][1] You just assign the value from the array as desired: public class Foo { public static void main(String[] args) { int[][] arr = {{1,2,3},{4,5,6},{7,8,9}}; int a = arr[1][1]; System.out.println(a); } } // outputs : 5 Note that if a value hasn't been put in an array position then it will be in the uninitialized state. For an int this is 0. int[][] arr = new int[9][9]; // all values in arr will be 0
https://codedump.io/share/fwJ9nhYQSfFw/1/return-specific-value-from-a-2d-array
CC-MAIN-2017-39
refinedweb
165
70.23
Are you sure? This action might not be possible to undo. Are you sure you want to continue? are right & how much money you lose when you are wrong is important. George Soros 1 TABLE OF CONTENTS Certificate Acknowledgement Executive Summary Objective of the Study Methodology Job Description Chapter 1. 2 Chapter 3. Introduction to Futures and Options Forward Contracts Future Contracts Options Payoffs for Derivative Contracts Chapter 4. Hedging, Arbitrage and Speculation Strategies Hedging Strategies with examples Arbitrage Strategies with examples Speculation Strategies with examples Chapter 5. Applicability of Derivative Instruments Risk Management: Concept and Definition Risk Management with Future Contract Risk Management with Options Introduction to Option Strategies Chapter 6. Achievements in Futures and Options Segment Comparative Analysis of F&O Segment with Cash Segment NSE Position Top 5 Traded Symbols Chapter 7. Conclusion Chapter 8. Suggestions and Recommendations Chapter 9. The Reference Material Glossary Bibliography 3 mission. Also it gives special emphasis on the selling of products and management of the company. It provides knowledge to the readers regarding the company‟s history. It also describe about the objective of this study. vision.EXECUTIVE SUMMARY Conceptually the mechanism of stock market is very simple. It also suggests some of the strategies that can be applied to earn more even when the market is too much 4 . customer base and the reasons to be associated with the company. People who are exposed to the same risk come together and agree that if anyone of the person suffers a loss the other will share the loss and make good to the person who lost. Further the project tells us about the profile of the company (Sharekhan Ltd. The next few chapters are devoted to the study of the Derivative Market and Derivative Instruments in a very basic way. It also enlightens the readers about Sharekhan Limited‟s strategies to acquire new customers. The initial part of the project focuses on the job and responsibilities I was allotted as a summer trainee.). It also makes the readers aware about the techniques and methodology used to bring this report alive. the Internet. and advice about derivative instruments are there in stock market. there are not too many people one can listen to if he want to avoid confusing. As a result one can have conviction in his portfolio in the hugely volatile stock market because a difficult and serious problem for all investors today is that there is entirely too much free information. the objective of the Dissertation is to do in depth research on these derivative instruments. Therefore. So one need to confine himself to just a very few sources of relevant facts and data and a sound system that has proven to be accurate and profitable over time. Realistically. people at work. It can be very risky and potentially dangerous. hype. promotion. brokers. advisers.volatile. The next part of the project throws light upon my findings and analysis about the company and the suggestions for the company for better performance. personal opinion. entertaining cable TV market programs. The readers can also find the comparative analysis of the Derivative Market and the Cash Market in the Indian context. OBJECTIVE OF THE STUDY To find out whether the Derivative Instruments are applicable in the Indian Stock Market which can work both in good and bad times so that it can minimize our risk and maximize our returns. and faulty personal market opinions. and other media. One get it from friends. 5 . contradictory. relatives. stock analysts. I have tried to identify various terms related to derivative trading. arbitrage and speculation strategies using both futures and options. 6 . I have given a brief introduction about the instruments. Segregation involved a thorough study of the strategies and possible use. data regarding the traded volume and number of contracts traded from December 2007 till May 2008. so that the reader is aware of basics of the subject. I identified hedging. which includes the comparison of derivative market with cash market. I have tried to analyze the instruments as per the Market Participant and the Market Trend. “Terms related to derivative market” Then I have tried to segregate the use of Instruments as per the Market Participants and Market Trend. Then I have done a secondary data based study on growth of Indian Derivative Market. I have also analyzed the top five most traded symbols in futures and options segment.METHODOLOGY During this project. Initially. and then segregated them into a chapter each. I have analyzed the Futures and Options. for which I have introduced a separate chapter. I have been handling the Following responsibilities: My job profile was to sale the products of the organization. Ghaziabad. canopies. My job profile was to coordinate the team and also help them to sale the product and also help them in field. My job profile was to do sales promotion through e-mails. 7 . AREA ASSIGNED I covered areas like Delhi. making cold calling. distributing pamphlets and etc. Faridabad and NCR. My job profile was to generate the leads by cold calling. My job profile was to understand customers‟ needs and advising them to make a portfolio as per their investment.JOB DESCRIPTION The company placed me as a Summer Trainee. Gurgaon. 30 AM Fixing appointment with clients. TARGET MARKET Different properties dealers. 8 .TARGET ASSIGNED To sell 18 accounts per month. Cold calling. Visit clients place. Lawyers Travel agencies Transport business House wives Businessmen Corporate Employees etc. Completing the formalities like filling the application form and documentation. Charted accountants. Demonstrate the product on Internet to the client. DAY TO DAY JOB DESCRIPTION Reporting time: 9. Chapter 1 9 . online trading. investment advice etc. NSE.Introduction of Sharekhan ltd.www. Derivatives. The site gives access to superior content and transaction facility to retail customers across the country. ABOUT SHAREKHAN LIMITED Sharekhan Ltd. Known for its 10 . The firm‟s online trading and investment site .com . is one of the leading retail stock broking house of SSKI Group which is running successfully since 1922 in the country. which has over eight decades of experience in the stock broking business. INTRODUCTION OF SHAREKHAN LTD. It is the retail broking arm of the Mumbai-based SSKI Group. Sharekhan offers its customers a wide range of equity related services including trade execution on BSE.sharekhan.was launched on Feb 8. 2000. depository services. jargon-free. It has 60 institutional clients spread over India. Verisign Financial Technologies India Ltd. Planetasia. Presently SSKI is one of the leading players in institutional broking and corporate finance activities. Spider Software Pvt Ltd. the site has a registered base of over one lakh customers. UK and US. like Sun Microsystems. With a legacy of more than 80 years in the stock markets. The company has used some of the best-known names in the IT industry. Oracle. the SSKI group ventured into institutional broking and corporate finance 18 years ago. SSKI holds a sizeable portion of the market in each of these segments. Sharekhan has always believed in investing in technology to build its business. PROFILE OF THE COMPANY 11 . 2002 Sharekhan launched Speed Trade. On April 17. a net-based executable application that emulates the broker terminals along with host of other information relevant to the Day Traders. Cambridge Technologies. and Shopper‟s Stop. The group has placed over US$ 1 billion in private equity deals. The content-rich and research oriented portal has stood out among its contemporaries because of its steadfast dedication to offering customers best-of-breed technology and superior market information. in terms of the size of deal. Essar. sector tapped etc. In the last six months Speed Trade has become a de facto standard for the Day Trading community over the net. Hutchison. This was for the first time that a netbased trading station of this caliber was offered to the traders. Sharekhan‟s ground network includes over 640 centers in 280 cities in India which provide a host of trading related services. Microsoft. investor friendly language and high quality research. Gujarat Pipavav. to build its trading engine and content. The objective has been to let customers make informed decisions and to simplify the process of investing in stocks. Intel & Carlyle are the other investors. Foreign Institutional Investors generate about 65% of the organization‟s revenue. The Corporate Finance section has a list of very prestigious clients and has many „firsts‟ to its credit. Some of the clients include BPL Cellular Holding. Far East. SSKI‟s institutional broking arm accounts for 7% of the market for Foreign Institutional portfolio investment and 5% of all Domestic Institutional portfolio investment in the country. The Morakhiya family holds a majority stake in the company. HSBC. Nexgenix. Vignette. with a daily turnover of over US$ 2 million.. 12 13. SHAREKHAN LIMITED’S MANAGEMENT TEAM Dinesh Murikya Tarun Shah Shankar Vailaya : : : : : : : Owner of the company CEO of the company Director (Operations) Director (Products & Technology) Head of Research Vice President of Equity Derivatives Vice President of Research Jaideep Arora Pathik Gandotra Rishi Kohli Nikhil Vora 14 are as follows: Equity and derivatives trading Depository services Online services Commodities trading Dial-n-trade Portfolio management Share shops Fundamental research Technical research 15 .PRODUCTS AND SERVICES OF SHAREKHAN LIMITED The different types of products and services offered by Sharekhan Ltd. com and is suitable for the retail investor who is risk-averse and hence prefers to invest in stocks or who does not trade too frequently. This account allow investors to buy and sell stocks online along with the following features like multiple watch lists.sharekhan.TYPES OF ACCOUNT IN SHAREKHAN LIMITED Sharekhan offers two types of trading account for its clints Classic Account (which include a feature known as Fast Trade Advanced Classic Account for the online users) and Speed Trade Account CLASSIC ACCOUNT This is a User Friendly Product which allows the client to trade through website www. Integrated 16 . 17 . NSE F&O & BSE. f. Demat and digital contracts. tic-by-tic charts. highest value etc. e. e. Online trading account for investing in Equities and Derivatives b. b. which enables one to buy and sell in an instant. d. This account comes with the following features: a.30 am c.) Hot keys similar to broker‟s terminal. It is ideal for active traders and jobbers who transact frequently during day‟s session to capitalize on intra-day price movement.Banking. This account comes with the following features: a. Instant cash transfer facility against purchase & sale of shares. get the trusted. After hours order placement facility between 8. Instant order Execution and Confirmation. Automatic funds transfer with phone banking facilities (for Citibank and HDFC bank customers) III. Free trading through Phone (Dial-n-Trade) I. professional advice of Sharekhan limited‟s Tele Brokers V. Two dedicated numbers(1800-22-7500 and 39707500) for placing the orders using cell phones or landline phones II. f. Real-time streaming quotes. Simple and Secure Interactive Voice Response based system for authentication IV. Technical Studies. d. g. Market summary (Cost traded scrip. c. Real-time portfolio tracking with price alerts and Instant credit & transfer. g.00 am and 9. Multiple Charting. Single screen trading terminal for NSE Cash. IPO investments. Instant order and trade confirmations by e-mail. Integration of: Online Trading +Saving Bank + Demat Account. Single screen interface for cash and derivatives. SPEED TRADE ACCOUNT This is an internet-based software application. h.10% HOW TO OPEN AN ACCOUNT WITH SHARE KHAN LIMITED? For online trading with Sharekhan Ltd. NIL first year Rs. from second calendar year onward Delivery . 18 . investor has to open an account. j.: One need to call them at phone number provided below and asks that he want to open an account with them.10 % Brokerage Delivery . Or If one stays in Mumbai. 750/= Intra-day – 0.0. Sharekhan has a huge network all over India (640 centers in 280 cities). he can call on 022-66621111 One can visit any one of Sharekhan Limited‟s nearest branches.a. Live market debts. One can call on the Toll Free Number: 1-800-22-7500 to speak to a Customer Service executive b.. CHARGE STRUCTURE Fee structure for General Individual: Charge Account Opening Classic Account Rs. Following are the ways to open an account with Sharekhan Ltd.50 % Depository Charges: Account Opening Charges Annual Maintenance Charges Rs. 300/= p. One can also log on to “. Alerts and reminders.0. 1000/= Intra-day . Back-up facility to place trades on Direct Phone lines.0. i. NIL Rs.com/Locateus.50% Speed Trade Account Rs.aspx” link to find out the nearest branch. a. sharekhan. a Welcome Kit is dispatched from Mumbai to the clients‟ address mentioned in the documents provided by them.com to know about their products and services. is scrutinized in the branch and then it is sent to Mumbai for further processing where after a few days the clients‟ account are generated and activated. pin code of the city. shop and his preferences regarding the type of account he wants. Generally the process of opening an account follows the following steps: LEAD MANAGEMENT SYSTEM (LMS) / REFERENCES CONTACT THE PERSON OVER PHONE OR THROUGH EMAIL FIXING AN APPOINTMENT WITH THE PERSON GIVING DEMONSTRATION YES 19 NO . These information are compiled in the headquarter of the company that is in Mumbai from where it is distributed through out the country‟s branches in the form of leads on the basis of cities and nearest share shops. One can send them an email at info@sharekhan. they can start trading and investing in shares. One can also visit the site www. After that the executives of the respective branches contact the prospective clients over phone or through email and give them information regarding the various types of accounts and the documents they need to open an account and then fix appointment with the prospective clients to give them demonstration and making them undergo the formalities to open the account. After that the forms that has collected from the clients. After the accounts are activated. his email address. city he lives in.com and click on the option “Open an Account” to fill a small query form which will ask the individual to give details regarding his name. phone number. his nearest Sharekhan Ltd. As soon as the clients receive the Welcome Kit. which contains the clients‟ Trading ID and Trading Password. DOCUMENTATION FILLING UP THE FORM SUBMISSION OF THE FORM LOGIN OF THE FORM SENDING ACCOUNT OPENING KIT TO THE CLIENT TRADING Apart from two passport size photographs. c. one needs to provide with the following documents in order to open an account with Sharekhan Limited. Flat Maintenance Bill (should be latest and should be in the name of the client) 20 . d. Passport (valid) Voter‟s ID Card Ration Card Driving License (valid) Electricity Bill (should be latest and should be in the name of the client) f. b.: Photocopy of the clients‟ PAN Card which should be duly attached Photo copy of any of the following documents duly attached which will serve as correspondence address proof: a. e. Telephone Bill (should be latest and should be in the name of the client) g. 5000). one for the Account Opening Fees and the other for the Margin Money (the minimum margin money is Rs. Eagle Eye. 33 hit the profit target. Investors Eye. 21 . Lease or Rent Agreement. ** A cancelled cheque should be given by the client if he provides Saving Bank Statement as a proof for correspondence address. High Noon. Out of 37 trading calls given by Sharekhan in the month of November 2007. These exclusive trading picks come only to Sharekhan Online Trading Customer and are based on in-depth technical analysis. j.h. Insurance Policy (should be latest and should be in the name of the client) i. It comprises a team of experts who constantly keep an eye on the share market and do research on the various aspects of the share market. Sharekhan's trading calls in the month of November 2007 has given 89% strike rate. RESEARCH SECTION IN SHAREKHAN LIMITED Sharekhan Limited has its own in-house Research Organisation which is known as Valueline. These reports are named as Pre-Market Report. one receives daily 5-6 Research Reports on their emails which they can use as tips for investing in the market. Saving Bank Statement** (should be latest) Two cheques drawn in favour of Sharkhan Limited. Generally the research is based on the Fundamentals and Technical analysis of different companies and also taking into account various factors relating to the economy. Sharekhan Limited‟s research on the volatile market has been found accurate most of the time. As a customer of Sharekhan Limited. NOTE: Only Saving Bank Account cheques are accepted for the purpose of Opening an account. 69% 121 13. twice by Euromoney Survey and four times by Asiamoney Survey. Sharekhan Limited won the CNBC AWARD for the year 2004.20% 6. AWARDS AND ACHIEVEMENTS SSKI has been voted as the Top Domestic Brokerage House in the research category.29% 5.Daring Derivatives and Post-Market Report.67% 192 21. POLL RESULTS: BROKER PREFERENCE 5paise Sharekhan Motilal oswal ICICI Direct HDFC Indiabulls Kotak Others 119 13.45% 194 21.67% 22 116 13.92% 38 46 59 4.11% . Apart from these. Sharekhan Limited issues a monthly subscription by the name of Valueline which is easily available in the market. Chapter 2 23 . However. most notably forwards. it is possible to partially or fully transfer price risks by locking–in asset prices. By their very nature. by lockingin asset prices.Introduction to Derivatives INTRODUCTION TO DERIVATIVES The emergence of the market for derivative products. these generally do not influence the fluctuations in the underlying asset prices. can be traced back to the willingness of risk-averse economic agents to guard themselves against uncertainties arising out of fluctuations in asset prices. 24 . futures and options. the financial markets are marked by a very high degree of volatility. As instruments of risk management. Through the use of derivative products. derivative products minimize the impact of fluctuations in asset prices on the profitability and cash flow situation of risk-averse investors. A contract which derives its value from the prices. etc. EMERGENCE OF DERIVATIVES Derivative products initially emerged as hedging devices against fluctuations in commodity prices. in a contractual manner. crude oil. rupee dollar exchange rate. wheat farmers may wish to sell their harvest at a future date to eliminate the risk of a change in prices by that date. The price of this derivative is driven by the spot price of wheat which is the “underlying”. and commodity-linked derivatives remained the sole form of such products for almost three hundred years. bond. soybean. since their emergence. For example. Financial derivatives came into spotlight in the post-1970 period due to growing instability in the financial markets. However. A very simple example of derivatives is curd. commodity or any other asset. index. index. 2. they accounted for about two-thirds of total transactions in derivative products. their 25 . which is derivative of milk. or index of prices. coffee. risk instrument or contract for differences or any other form of security. the market for financial derivatives has grown tremendously in terms of variety of instruments available. A security derived from a debt instrument. sugar. share. to buy or sell an asset in future. The asset can be a share. In recent years. interest rate. Such transaction is an example of a derivative. In simple word it can be said that Derivatives are financial contracts whose value/price is dependent on the behavior of the price of one or more basic underlying assets (often simply known as underlying). loan whether secured or unsecured. These contracts are legally binding agreements. called bases (underlying asset. In the Indian context the Securities Contracts (Regulation) Act. these products have become very popular and by 1990s. or reference rate). cotton. made on the trading screen of stock exchanges. of underlying securities. forex. The underlying asset can be equity. The price of curd depends upon the price of milk which in turn depends upon the demand and supply of milk. 1956 (SC(R) A) defines “derivative” to include – 1.DERIVATIVES DEFINED Derivative is a product whose value is derived from the value of one or more basic variables. futures and options on stock indices have gained more popularity than on individual stocks. indeed the two largest “financial” exchanges of any kind in the world today. the approximate size of global derivatives market was US$ 109. The CBOT and the CME remain the two largest organized futures exchanges. HISTORY OF DERIVATIVES Early forward contracts in the US addressed merchants‟ concerns about ensuring that there were buyers and sellers for commodities. who are major users of index-linked derivatives. In 1865. According to the Bank for International Settlements (BIS). However “credit risk” remained a serious problem.5 trillion as at end–December 2000. In 1919. Its name was changed to Chicago Mercantile Exchange (CME). The primary intention of the CBOT was to provide a centralized location known in advance for buyers and sellers to negotiate forward contracts. MATIF in France. Chicago Butter and Egg Board. GLOBAL DERIVATIVE MARKETS The derivatives markets have grown manifold in the last two decades.complexity and also turnover. The lower costs associated with index derivatives vis–a–vis derivative products based on individual securities is another reason for their growing use. The first stock index futures contract was traded at Kansas City Board of Trade. Currently the most popular stock index futures contract in the world is based on S&P 500 index. Even small investors find these useful due to high correlation of the popular indexes with various portfolios and ease of use. financial futures became the most active derivative instruments generating volumes many times more than the commodity futures. In the class of equity derivatives the world over. SGX in Singapore. these contracts were called “futures contracts”. especially among institutional investors. traded on Chicago Mercantile Exchange. TIFFE in Japan. was reorganized to allow futures trading. The total estimated notional amount of outstanding over–the–counter (OTC) contracts 26 . the CBOT went one step further and listed the first “exchange traded” derivatives contract in the US. futures on T-bills and Euro-Dollar futures are the three most popular futures contracts traded today. a group of Chicago businessmen formed the Chicago Board of Trade (CBOT) in 1848. DTB in Germany. During the mid eighties. Index futures.. Eurex etc. a spin-off of CBOT. To deal with this problem. Other popular international exchanges that trade derivatives are LIFFE in England. Growth in OTC derivatives market is mainly attributable to the continued rapid expansion of interest rate contracts. The report.J.2). the turnover in exchange– traded derivative markets rose by a record amount in the first quarter of 2001. however. which reflected growing corporate bond markets and increased interest rate uncertainty at the end of 2000. The turnover data are available only for exchange–traded derivatives contracts. as there was no regulatory framework to govern trading of derivatives.3 trillion as at end–December 2000. The amount outstanding in organized exchange markets increased by 5. According to BIS. The SCRA was amended in December 1999 to include derivatives within the ambit of „securities‟ and the regulatory framework was developed for governing 27 . The market for derivatives. methodology for charging initial margins.8% from US$ 13. The committee recommended that derivatives should be declared as „securities‟ so that regulatory framework applicable to trading of „securities‟ could also govern trading of securities. did not take off.L. deposit requirement and real–time monitoring requirements. SEBI set up a 24–member committee under the Chairmanship of Dr.9% over end–December 1999. SEBI also set up a group in June 1998 under the Chairmanship of Prof. 1995.C. which was submitted in October 1998.8% during 2000 to US$ 384 trillion as compared to US$ 350 trillion in 1999(Table 1.stood at US$ 95. an increase of 7. to recommend measures for risk containment in derivatives market in India. The turnover in derivative contracts traded on exchanges has increased by 9. While interest rate futures and options accounted for nearly 90% of total turnover during 2000. The committee submitted its report on March 17. the popularity of stock market index futures and options grew modestly during the year.Gupta on November 18.Varma. DERIVATIVE MARKET IN INDIA The first step towards introduction of derivatives trading in India was the promulgation of the Securities Laws (Amendment) Ordinance.R. worked out the operational details of margining system.2 trillion as at end–December 2000. which withdrew the prohibition on options in securities. 1996 to develop appropriate regulatory framework for derivatives trading in India.5 trillion as at end December 1999 to US$ 14. broker net worth. 1998 prescribing necessary pre–conditions for introduction of derivatives trading in India. while there was some moderation in the OTC volumes. Three contracts are available for trading. PARTICITANTS AND FUNCTIONS PARTICIPANTS Derivative contracts have several variants. The act also made it clear that derivatives shall be legal and valid only if such contracts are traded on a recognized stock exchange. the three–decade old notification. byelaws. The trading in index options commenced on June 4. The government also rescinded in March 2000. The derivatives trading on the exchange commenced with S&P CNX Nifty Index futures on June 12. 2001. To begin with. The following three broad categories of participants – 28 . thus precluding OTC derivatives. with 1 month. 2001. futures. 2001 and trading in options on individual securities commenced on July 2. the futures contracts have a maximum of 3-month expiration cycles. and their clearing house/corporation to commence trading and settlement in approved derivatives contracts. SEBI permitted the derivative segments of two stock exchanges. and regulations of the respective exchanges and their clearing house/corporation duly approved by SEBI and notified in the official gazette. 2000. Trading and settlement in derivative contracts is done in accordance with the rules. which prohibited forward trading in securities. This was followed by approval for trading in options based on these two indexes and options on individual securities. Derivatives trading commenced in India in June 2000 after SEBI granted the final approval to this effect in May 2000. A new contract is introduced on the next trading day following the expiry of the near month contract. Currently. NSE and BSE. The trading in index options commenced in June 2001 and the trading in options on individual securities commenced in July 2001. 2 months and 3 months expiry. The most common variants are forwards. Futures contracts on individual stocks were launched in November 2001. The index futures and options contract on NSE are based on S&P CNX Nifty Index. Single stock futures were launched on November 9.derivatives trading. SEBI approved trading in index futures contracts based on S&P CNX Nifty and BSE–30 (Sensex) index. options and swaps. 3. 1. The prices of derivatives converge with the prices of the underlying at the expiration of the derivative contract. They use futures or options markets to reduce or eliminate this risk Speculators: . monitoring and surveillance of the activities of various participants become extremely difficult in these kind of mixed markets. Derivatives. for example. that is. speculators trade in the underlying cash markets. they will take offsetting positions in the two markets to lock in a profit. the underlying market witnesses higher trading volumes because of participation by more players who would not otherwise participate for lack of an arrangement to transfer risk. Arbitrageurs: .Arbitrageurs are in business to take advantage of a discrepancy between prices in two different markets. due to their inherent nature.Hedgers: . FUNCTIONS The derivatives market performs a number of economic functions. 29 . are linked to the underlying cash markets. Speculative trades shift to a more controlled environment of derivatives market. 2. If. Margining. they see the futures price of an asset getting out of line with the cash price. In the absence of an organized derivatives market. Prices in an organized derivatives market reflect the perception of market participants about the future and lead the prices of underlying to the perceived future level. they can increase both the potential gains and potential losses in a speculative venture. Futures and options contracts can give them an extra leverage. The derivatives market helps to transfer risks from those who have them but may not like them to those who have an appetite for them. 4.Hedgers face risk associated with the price of an asset. With the introduction of derivatives.Speculators wish to bet on future movements in the price of an asset. An important incidental benefit that flows from derivatives trading is that it acts as a catalyst for new entrepreneurial activity. Transfer of risk enables market participants to expand their volume of activity. The two commonly used swaps are: 30 . 6. at a given price on or before a given future date. Puts give the buyer the right. They can be regarded as portfolios of forward contracts. The underlying asset is usually a moving average of a basket of assets.calls and puts. but not the obligation to sell a given quantity of the underlying asset at a given price on or before a given date. the majority of options traded on options exchanges having a maximum maturity of nine months. Equity index options are a form of basket options. where settlement takes place on a specific date in the future at today‟s preagreed price. Options: Options are of two types . LEAPS: The acronym LEAPS means Long-Term Equity Anticipation Securities. Calls give the buyer the right but not the obligation to buy a given quantity of the underlying asset. They often energize others to create new businesses. Longer-dated options are called warrants and are generally traded over-the-counter. Baskets: Basket options are options on portfolios of underlying assets. These are options having a maturity of up to three years. Derivatives markets help increase savings and investment in the long run. Warrants: Options generally have lives of up to one year. creative. TYPES OF DERIVATIVE INSTRUMENTS Forwards: A forward contract is a customized contract between two entities. new products and new employment opportunities. Futures contracts are special types of forward contracts in the sense that the former are standardized exchange-traded contracts. well-educated people with an entrepreneurial attitude. the benefit of which are immense. Futures: A futures contract is an agreement between two parties to buy or sell an asset at a certain time in the future at a certain price. Swaps: Swaps are private agreements between two parties to exchange cash flows in the future according to a prearranged formula.5. The derivatives have a history of attracting many bright. and order and trade management. Thus a swaption is an option on a forward swap. A payer swaption is an option to pay fixed and receive floating. 2001.a. with the cash flows in one direction being in a different currency than those in the opposite direction. 2 months and 3 months expiry. called NEAT-F&O trading system. A new contract is introduced on the next trading day following the expiry of the near month contract.Currency swaps: These entail swapping both principal and interest between the parties. Interest rate swaps: These entail swapping only the interest related cash flows between the parties in the same currency. 2001 and trading in options on individual securities commenced on July 2. 2001. It is similar to that of trading of equities in the Cash Market (CM) segment. the swaptions market has receiver swaptions and payer swaptions. Swaptions: Swaptions are options to buy or sell a swap that will become operative at the expiry of the options. The NEAT-F&O trading system is accessed by two types of users. Three contracts are available for trading. the futures contracts have a maximum of 3-month expiration cycles. A receiver swaption is an option to receive fixed and pay floating. The trading in index options commenced on June 4. It supports an anonymous order driven market which provides complete transparency of trading operations and operates on strict price–time priority. APPROVAL FOR DERIVATIVE TRADING TRADING MECHANISM The futures and options trading system of NSE. with 1 month. The index futures and options contract on NSE are based on S&P CNX Nifty Index. Currently. 2000. provides a fully automated screen–based trading for Nifty futures & options and stock futures & options on a nationwide basis and an online monitoring and surveillance mechanism. It provides tremendous flexibility 31 . b. Rather than have calls and puts. DERIVATIVE MARKET AT NSE The derivatives trading on the exchange commenced with S&P CNX Nifty Index futures on June 12. Single stock futures were launched on November 9. order matching. The Trading Members(TM) have access to functions such as order entry. It acts as legal counterparty to all deals on the F&O segment and guarantees settlement. Good till. Trading Member Clearing Member: TM–CM is a CM who is also a TM. banks or custodians could become a PCM and clear and settle for TMs. etc. Professional Clearing Member: PCM is a CM who is not a TM. undertakes risk management and performs actual settlement. Trading and clearing members are admitted separately. There are three types of CMs: Self Clearing Member: A SCM clears and settles trades executed by him only either on his own account or on account of his clients. Limit/Market price. TM–CM may clear and settle his own proprietary trades and client‟s trades as well as clear and settle for other TMs. The TM–CM and the PCM are required to bring in additional security deposit in respect of every TM whose trades they undertake to clear and settle. CLEARING AND SETTLEMENTS NSCCL undertakes clearing and settlement of all deals executed on the NSEs F&O segment. Typically. trading members are required to have qualified users and sales persons. Additionally. The Clearing Members (CM) uses the trader workstation for the purpose of monitoring the trading member(s) for whom they clear the trades.Date. Stop loss. Those interested in taking membership on F&O segment are required to take membership of CM and F&O segment or CM. which a trading member can take. can be built into an order. Essentially. MEMBERSHIP CRITERIA NSE admits members on its derivatives segment in accordance with the rules and regulations of the exchange and the norms specified by SEBI. Immediate or Cancel. Good-till-Cancelled.to users in terms of kinds of orders that can be placed on the system. WDM and F&O segment. who have passed a Certification programme approved by SEBI. NSE follows 2–tier membership structure stipulated by SEBI to enable wider participation. a clearing member (CM) does clearing for all his trading members (TMs). Besides this. they can enter and set limits to positions. Various conditions like Good-till-Day. Clearing: 32 . therefore. A CM‟s open position is arrived at by aggregating the open position of all the TMs and all custodial participants clearing through him. However. These contracts. Dr. all CMs are required to open a separate bank account with NSCCL designated clearing banks for F&O segment. The underlying for index futures/options of the Nifty index cannot be delivered. have to be settled in cash. A TM‟s open position is the sum of proprietary open position. in the contracts in which they have traded. Institutional and large equity-holders need portfolio-hedging facility. INDEX DERIVATIVES Index derivatives are derivative contracts which derive their value from an underlying index. whether proprietary (if they are their own trades) or client (if entered on behalf of clients). In his report. Pension funds in the US are known to use stock index futures for risk hedging purposes. Index–derivatives are more suited to them and more cost–effective than derivatives based on individual stocks. The two most popular index derivatives are index futures and index options.C.Gupta attributes the popularity of index derivatives to the advantages they offer.The first step in clearing process is working out open positions or obligations of members. Futures and options on individual securities can be delivered as in the spot market. A TM‟s open position is arrived at as the summation of his proprietary open position and clients open positions. premium and final exercise settlement.e. client open long position and client open short position. it has been currently mandated that stock options and futures would also be cash settled. Index derivatives offer ease of use for hedging any portfolio irrespective of its composition. in the contracts in which they have traded. The settlement amount for a CM is netted across all their TMs/clients in respect of MTM. Settlement: All futures and options contracts are cash settled. Clients‟ positions are arrived at by summing together net (buy-sell) positions of each individual client for each contract. i. For the purpose of settlement.L. Proprietary positions are calculated on net basis (buy-sell) for each contract. Index derivatives have become very popular worldwide. 33 . through exchange of cash. TMs are required to identify the orders. This implies much lower capital adequacy and margin requirements. 34 . is much less volatile than individual stock prices. This insulates a participant from credit risk of another. liquid index ensures that hedgers and speculators will not be vulnerable to individual or industry risk. and the possibility of cornering is reduced. which can be cornered. TRADING Here. This is partly because an individual stock has a limited supply. A well diversified. 3. Hence the need to have strong surveillance on the market both at the exchange level as well as at the regulator level. 4. the best way to get a feel of the trading system is to actually watch the screen and observe how it operates. Index derivatives are cash settled. forged/fake certificates. buying from the seller and selling to the buyer. and hence do not suffer from settlement delays and problems related to bad delivery. 2. The clearing corporation interposes itself into every transaction. trading of index futures and index options commenced at NSE in June 2000 and June 2001 respectively. more so in India. being an average. Index: The choice of an index is an important factor in determining the extent to which the index derivative can be used for hedging. A critical element of financial sector reforms is the development of a pool of human resources with strong skills and expertise to provide quality intermediation to market participants. Strong surveillance mechanism: Derivatives trading brings a whole class of leveraged positions in the economy. Requirements for an index derivatives market 1. With the entire above infrastructure in place. Education and certification: The need for education and certification in the derivatives market can never be overemphasized. Clearing corporation settlement guarantee: The clearing corporation eliminates counterparty risk on futures markets. Stock index. I shall take a brief look at the trading system for NSE‟s futures and options market. However. speculation and arbitrage. Stock index is difficult to manipulate as compared to individual stock prices. The exchange assigns a Trading member ID to each trading member. professional clearing members and participants. They carry out risk management activities and confirmation/inquiry of trades through the trading system. Professional clearing members: A professional clearing members is a clearing member who is not a trading member. Entities in the trading system There are four entities in the trading system. modifications have been performed in the existing capital market trading system so as to make it suitable for trading futures and options. They can trade either on their own account or on behalf of their clients including participants. banks and custodians become professional clearing members and clear and settle for their trading members. called NEAT-F&O trading system. Trading members. It is similar to that of trading of equities in the cash market segment. Each trading member can have more than one user. provides a fully automated screen-based trading for Nifty futures & options and stock futures & options on a nationwide basis as well as an online monitoring and surveillance mechanism. 3. Typically. clearing members. It is the responsibility of the trading member to maintain adequate control over persons having access to the firm‟s User IDs. It supports an order driven market and provides complete transparency of trading operations. 1.Futures and options trading system The futures & options trading system of NSE. This ID is common for all users of a particular trading member. 2. Clearing members: Clearing members are members of NSCCL. Each user of a trading member must be registered with the exchange and is assigned a unique user ID. 35 . Trading members: Trading members are members of NSE. The number of users allowed for each trading member is notified by the exchange from time to time. Keeping in view the familiarity of trading members with the current capital market trading system. The software for the F&O market has been developed to facilitate efficient and transparent trading in futures and options instruments. The unique trading member ID functions as a reference for all orders/trades of different users. The exchange notifies the regular lot size and tick size for each of the contracts traded on this segment from time to time. Order matching is essentially on the basis of security. The lot size on the futures market is for 200 Nifties. wherein orders match automatically. if not traded on the day the order is entered. These clients may trade through multiple trading members but settle through a single clearing member. as the name suggests is an order which is valid for the day on which it is entered. Consequently. Participants: A participant is a client of trading members like financial institutions. BASIS OF TRADING The NEAT F&O system supports an order driven market. it spans trading days. it is an active order. Time conditions Day order: A day order. the system cancels the order automatically at the end of the day.4. the order becomes passive and goes and sits in the respective outstanding order book in the system. If the order is not executed during the day. These conditions are broadly divided into the following categories: Time conditions Price conditions Other conditions Several combinations of the above are allowed thereby providing enormous flexibility to the users. If it does not find a match. The order types and conditions are summarized below. time and quantity. It tries to find a match on the other side of the book. its price. If it finds a match. Good till canceled (GTC): A GTC order remains in the system until the user cancels it. a trade is generated. When any order enters the trading system. All quantity fields are in units and price in rupees. The maximum number of days an order can 36 . ORDER TYPES AND CONDITIONS The system allows the trading members to enter orders with various conditions attached to them as per their requirements. remain in the system is notified by the exchange from time to time after which the order is automatically cancelled by the system. Each day counted is a calendar day inclusive of holidays. The days counted are inclusive of the day on which the order is placed and the order is cancelled from the system at the end of the day of the expiry period. Good till days/date (GTD): A GTD order allows the user to specify the number of days/date till which the order should stay in the system if not executed. The maximum days allowed by the system are the same as in GTC order. At the end of this day/date, the order is cancelled from the system. Each day/date counted are inclusive of the day/date on which the order is placed and the order is cancelled from the system at the end of the day/date of the expiry period.. Other conditions Market price: Market orders are orders for which no price is specified at the time the order is entered (i.e. price is market price). For such orders, the system determines the price. 37 Trigger price: Price at which an order gets triggered from the stop–loss book. Limit price: Price of the orders after triggering from stop–loss book. Pro: Pro means that the orders are entered on the trading member‟s own account. Cli: Cli means that the trading member enters the orders on behalf of a client. 38. 39 Each futures contract has a separate limit order book. exclusive 40 . each with five different strikes available for trading.The Instrument type refers to “Futures contract on index” and Contract symbol . All passive orders are stacked in the system in terms of price-time priority and trades take place at the passive order price (similar to the existing capital market trading system). then the appropriate value of a single index futures contract would be Rs. Trading is for a minimum lot size of 200 units. contracts at different strikes.10.NIFTY denotes a “Futures contract on Nifty index” and the Expiry date represents the last date on which the contract will be available for trading.200. Contract specifications for stock options Trading in stock options commenced on the NSE from July 2001. Contract specification for index options On NSE‟s index options market.e.e. NSE provides a minimum of five strike prices for every option type (i. The expiration cycle for stock options is the same as for index futures and index options. Charges The maximum brokerage chargeable by a TM in relation to trades effected in the contracts admitted to dealing on the F&O segment of NSE is fixed at 2. two-month and three-month expiry cycles are available for trading. Thus if the index level is around 1000. Thus a single move in the index value would imply a resultant gain or loss of Rs.5% of notional value of the contract[(Strike price + Premium) * Quantity] in case of index options. The minimum tick size for an index future contract is 0.05 units. These contracts are American style and are settled in cash. having onemonth. 0. A new contract is introduced on the trading day following the expiry of the near month contract.05*200 units) on an open position of 200 units. two out–of– the–money contracts and one at–the–money contract available for trading. call and put) during the trading month. two-month and three-month options.000. There are typically one-month. There are at least two in–the– money contracts.00 (i.5% of the contract value in case of index futures and 2. The best buy order for a given futures contract will be the order to buy the index at the highest index level whereas the best sell order will be the order to sell the index at the lowest index level. The Board desired that these issues be reconsidered by the Advisory Committee on Derivatives (ACD) and requested a detailed report on the aforesaid issues for the consideration of the Board.1 of its report. The TMs contribute to Investor Protection Fund of F&O segment at the rate of Rs.of statutory levies. Norms for use of derivatives by mutual funds.002%)(Each side) or Rs. The transaction charges payable by a TM for the trades executed by him on the F&O segment are fixed at Rs.0001%).10 per crore of turnover (0. whichever is higher. Well-defined goals. We therefore reproduce this paragraph of the LCGC Report: “The Committee believes that regulation should be designed to achieve specific. Use of sub-brokers in the derivative markets. REGULATORY OBJECTIVES The LCGC outlined the goals of regulation admirably well in Paragraph 3.2 per lakh of turnover (0. 2002 considered some important issues relating to the derivative markets which include: Physical settlement of stock options and stock futures contracts.1 lakh annually. SEBI ADVISORY COMMITTEE ON DERIVATIVES The SEBI Board in its meeting on June 24. The recommendations of the Advisory Committee on Derivatives on some of these issues were also placed before the SEBI Board. It has been guided by the following objectives: (a) Investor Protection: Attention needs to be given to the following four aspects: (i) Fairness and Transparency (ii) Safeguard for clients‟ moneys 41 . Review of the eligibility criteria of stocks on which derivative products are permitted. It is inclined towards positive regulation designed to encourage healthy activity and behavior. We endorse these regulatory principles completely and base our recommendations also on these same principles. (iii) Competent and honest service (b) Quality of markets: The concept of “Quality of Markets” goes well beyond market integrity and aims at enhancing important market qualities.” Chapter 3 42 . (c) Innovation: While curbing any undesirable tendencies. the regulatory framework should not stifle innovation which is the source of all economic progress. This is a much broader objective than market integrity. more so because financial derivatives represent a new rapidly developing area. such as cost-efficiency. aided by advancements in information technology. and price-discovery. price-continuity. While futures and options are now actively traded on many exchanges. FORWARD CONTRACT 43 .Introduction to Futures and Options INTRODUCTION TO FUTURES AND OPTIONS In recent years. forward contracts are popular on the OTC market. In this chapter we shall study in detail these three derivative contracts. derivatives have become increasingly important in the field of finance. Each contract is custom designed. index. the contract has to be settled by delivery of the asset. sugar. He is exposed to the risk of exchange rate fluctuations. Forward contracts are very useful in hedging and speculation. Illiquidity. the futures contracts are standardized and exchange traded. crude oil. Other contract details like delivery date. which often results in high prices being charged. The salient features of forward contracts are: They are bilateral contracts and hence exposed to counter–party risk. soybean. But unlike forward contracts. interest rate. In simple words. price and quantity are negotiated bilaterally by the parties to the contract. If the party wishes to reverse the contract. Futures are exchange-traded contracts to buy or sell an asset in future at a price agreed upon today. The classic hedging application would be that of an exporter who expects to receive payment in dollars three months later. expiration date and the asset type and quality. and hence is unique in terms of contract size. The other party assumes a short position and agrees to sell the asset on the same date for the same price. it has to compulsorily go to the same counterparty. bond. coffee etc. The contract price is generally not available in public domain. The asset can be share. 44 . A futures contract is an agreement between two parties to buy or sell an asset at a certain time in the future at a certain price. and Counterparty risk FUTURE CONTRACT Futures markets were designed to solve the problems that exist in forward markets. rupee-dollar exchange rate. The forward contracts are normally traded outside the exchanges. On the expiration date.A forward contract is an agreement to buy or sell an asset on a specified date for a specified price. cotton. Limitations of forward markets Forward markets world-wide are afflicted by several problems: Lack of centralization of trading. One of the parties to the contract assumes a long position and agrees to buy the underlying asset on a certain specified future date for a certain specified price. 4. of course.10-0. the exchange specifies certain standard features of the contract. 2. Profit in Both Bull & Bear Markets: In futures trading. High Leverage: The primary attraction. High Liquidity: Most futures markets are very liquid. ADVANTAGES OF FUTURE TRADING IN INDIA 1. Lower Transaction Cost: Another advantage of futures trading is much lower relative commissions. To „own‟ a futures contract an investor only has to put up a small fraction of the value of the contract (usually around 10-20%) as „margin‟. This ensures that market orders can be placed very quickly as there are always buyers and sellers for most contracts. (or which can be used for reference purposes in settlement) and a standard timing of such settlement.20%). a standard quantity and quality of the underlying instrument that can be delivered. you can make money whether prices go up or down. The reason that futures trading can be so profitable is the high leverage. More than 99% of futures transactions are offset this way. i. 3.To facilitate liquidity in the futures contracts.e. 45 . By choosing correctly. is the potential for large profits in a short period of time. Your commission for trading a futures contract is one tenth of a percent (0. A futures contract may be offset prior to maturity by entering into an equal and opposite transaction.. it is as easy to sell (also referred to as going short) as it is to buy (also referred to as going long). there are huge amounts of contracts traded every day. It is a standardized contract with standard underlying instrument. The confusion is primarily because both serve essentially the same economic functions of allocating risk in the presence of future price uncertainty. the exchange becomes counter party to each trade and guarantees settlement. July 2001 saw the launch of options on individual securities (herein referred to as stock options) and the onset of rolling settlement. DIFFERENCES BETWEEN FORWARD AND FUTURE CONTRACT Forward contracts are often confused with futures contracts. A purchase or sale of futures on a security gives the trader essentially the same price exposure as a purchase or sale of the security itself. But this is not so in the case of forward contract. However futures are a significant improvement over the forward contracts. Counter Party Risk: In forward contracts there is a risk of counter party default. Besides speculation. stock futures are particularly appealing due to familiarity and ease in understanding. In case of futures. A future contract is nothing but a form of forward contract. A year later. One can differentiate a forward contract from a future contract on the following lines: Customized vs Standardized: Forward contracts are customized while future contracts are standardized. With the launch of futures on individual securities (herein referred to as stock futures) on the 9th of November. options on index were available for trading. Of the above mentioned products. In this regard. 2001. 46 . Terms of forward contracts are negotiated between the buyer and the seller. the basic range of equity derivative products in India seems complete. Liquidity: Futures are much more liquid and their price is transparent as their price and volume are reported in media. stock futures can be effectively used for hedging and arbitrage reasons. While the terms of future contracts are decided by the exchange on which these are traded. trading stock futures is no different from trading the security itself.USING FUTURES ON INDIVIDUAL SECURUTIES Index futures began trading in India in June 2000. The index futures contracts on the NSE have one-month. Futures price: The price at which the futures contract trades in the futures market. On the Friday following the last Thursday. Squaring off: A forward contract can be reversed with only the same counter party with whom it was entered into. FUTURES TERMINOLOGIES Spot price: The price at which an asset trades in the spot market. 47 . In general. a new contract having a three-month expiry is introduced for trading. Revenue may be in the form of dividend. the actual price may vary depending upon the demand and supply of the underlying asset. Futures Price = Spot Price + Cost of Carry The Cost of Carry is the sum of all costs incurred if a similar position is taken in cash market and carried to expiry of the futures contract less any revenue that may arise out of holding the asset. Thus a January expiration contract expires on the last Thursday of January and a February expiration contract ceases trading on the last Thursday of February. Though one can calculate the theoretical price. THEORETICAL WAY OF PRICING FUTURES The theoretical price of a futures contract is spot price of the underlying plus the cost of carry. Contract cycle: The period over which a contract trades. two-months and threemonth expiry cycles which expire on the last Thursday of the month. The cost typically includes interest cost in case of financial futures (insurance and storage costs are also considered in case of commodity futures). A future contract can be reversed on the screen of the exchange as the latter is the counter party to all futures trades. Please note that futures are not about predicting future prices of the underlying assets. This reflects that futures prices normally exceed spot prices. For instance. the investor receives a margin call and is expected to top up the margin account to the initial margin level before trading commences on the next day. the contract size on NSE‟s futures market is 200 Nifties. Initial margin: The amount that must be deposited in the margin account at the time a futures contract is first entered into is known as initial margin. basis can be defined as the futures price minus the spot price. at the end of which it will cease to exist. the margin account is adjusted to reflect the investor‟s gain or loss depending upon the futures closing price. An option gives the holder of the option the right to do something. This is called marking–to– market. Marking-to-market: In the futures market. the 48 . Maintenance margin: This is somewhat lower than the initial margin. OPTIONS Options are fundamentally different from forward and futures contracts. basis will be positive. Cost of carry: The relationship between futures prices and spot prices can be summarized in terms of what is known as the cost of carry. This is the last day on which the contract will be traded. This is set to ensure that the balance in the margin account never becomes negative. Contract size: The amount of asset that has to be delivered less than one contract. There will be a different basis for each delivery month for each contract. Basis: In the context of financial futures. This measures the storage cost plus the interest that is paid to finance the asset less the income earned on the asset. If the balance in the margin account falls below the maintenance margin. In a normal market. at the end of each trading day. Expiry date: It is the date specified in the futures contract. In contrast. in a forward or futures contract. The holder does not have to exercise this right. which the option buyer pays to the option seller. Call option: A call option gives the holder the right but not the obligation to buy an asset by a certain date for a certain price. There are two basic types of options. Expiration date: The date specified in the options contract is known as the expiration date. the purchase of an option requires an up–front payment. call options and put options. Like indexing futures contracts. It is also referred to as the option premium. Stock options: Stock options are options on individual stocks. 49 . Writer of an option: The writer of a call/put option is the one who receives the option premium and is thereby obliged to sell/buy the asset if the buyer exercises on him. the exercise. the strike date or the maturity. Whereas it costs nothing (except margin requirements) to enter into a futures contract. Some options are European while others are American. Option price: Option price is the price. indexing options contracts are also cash settled. Options currently trade on over 500 stocks in the United States. Put option: A put option gives the holder the right but not the obligation to sell an asset by a certain date for a certain price. A contract gives the holder the right to buy or sell shares at the specified price.two parties have committed themselves to doing something. OPTIONS TERMINOLOGIES Index options: These options have the index as the underlying. the call is said to be deep OTM. the call is said to be deep ITM. At expiration. In the case of a put. The longer the time to expiration. Most exchange-traded options are American. An option on the index is at-the-money when the current index equals the strike price (i. An option that is OTM or ATM has only time value. If the index is much lower than the strike price. A call option on the index is out-of-the-money when the current index stands at a level. the greater is an option‟s time value. the put is ITM if the index is below the strike price. spot price > strike price).e. Both calls and puts have time value. Time value of an option: The time value of an option is the difference between its premium and its intrinsic value. In the case of a put. which is less than the strike price (i. If the index is much higher than the strike price. Usually. the maximum time value exists when the option is ATM. spot price = strike price).e.e. Out-of-the-money option: An out-of-the-money (OTM) option is an option that would lead to a negative cash flow if it were exercised immediately. American options: American options are options that can be exercised at any time upto the expiration date. spot price < strike price). the put is OTM if the index is above the strike price. an option should have no time value. TYPES OF OPTIONS 50 . European options are easier to analyze than American options. At-the-money option: An at-the-money (ATM) option is an option that would lead to zero cash flow if it were exercised immediately. In-the-money option: An in-the-money (ITM) option is an option that would lead to a positive cash flow to the holder if it were exercised immediately. and properties of an American option are frequently deduced from those of its European counterpart. A call option on the index is said to be in-the-money when the current index stands at a level higher than the strike price (i. all else equal. European options: European options are options that can be exercised only on the expiration date itself. Suppose stock price is Rs. Call Options. 300.e. 25/-. 51 .& exercises his option selling the Reliance share at Rs 300 to the option writer thus making a net profit of Rs. if at the time of expiry stock price falls below Rs. If the market price of Infosys on the day of expiry is more than Rs. at a premium of Rs. paid which should be the profit earned by the seller of the call option. 260/. on the day of expiry is less than Rs.. Put Options.1. 3000. The seller of the put option (one who is short Put) however. the option will be exercised. 2.. 3600 (Strike Price + Premium i. the option can be exercised as it is 'in the money'. The seller (one who is short call) however. In this case the investor loses the premium (Rs 100). 3500. The investor will earn profits once the share price crosses Rs. 200 {(Spot price .A call option gives the holder (buyer/ one who is long call).Spot Price) . 300/. the right to buy specified quantity of the underlying asset at the strike price on or before expiration date. the buyer of the call option will choose not to exercise his option. 3500+100).Strike price) Premium}. Suppose stock price is Rs. 3800. has the obligation to buy the underlying asset at the strike price if the buyer decides to exercise his option to sell. The investor's Break-even point is Rs. 260.Premium paid}. Example: An investor buys one European Put option on Reliance at the strike price of Rs. 3500 at a premium of Rs.e. 275/ (Strike Price . the right to sell specified quantity of the underlying asset at the strike price on or before an expiry date. 15 {(Strike price . If the market price of Reliance. investor will earn profits if the market falls below 275. In another scenario. 100.premium paid) i. the buyer of the Put option immediately buys Reliance share in the market @ Rs. 3500 say suppose it touches Rs. Example: An investor buys One European call option on Infosys at the strike price of Rs.A Put option gives the holder (buyer/ one who is long Put). the option will be exercised and the investor will buy 1 share of Infosys from the seller of the option at Rs 3500 and sell it in the market at Rs 3800 making a profit of Rs. has the obligation to sell the underlying asset if the buyer of the call option decides to exercise his option to buy. An uncovered option writer. which shall be the profit earned by the seller of the Put option.e. may face unlimited risk. Rs 25/-). In this case the investor loses the premium paid (i. if the owner's options expire with no value. set risk. if at the time of expiry. Option Has the obligation to Has the obligation to seller or sell the underlying buy the underlying option writer asset (to the option asset (from the holder) at the option holder) at the specified price specified price LEVERAGE AND RISK Options can provide leverage. on the other hand. this loss can be the entire amount of the premium paid for the option. Options offer their owners a predetermined. Leverage also has downside implications.Option Buys the right to buyer or buy the underlying option holder asset at the specified price Buys the right to sell the underlying asset at the specified price 2. This means an option buyer can pay a relatively small premium for market exposure in relation to the contract value (usually 100 shares of underlying stock). 52 . An investor can see large percentage gains from comparatively small. the buyer of the Put option will choose not to exercise his option to sell as he can sell in the market at a higher rate. leverage can magnify the investment's percentage loss. If the underlying stock price does not rise or fall as anticipated during the lifetime of the option. market price of Reliance is Rs 320/ -.In another scenario. (Please see table) THE OPTIONS GAME Call Option Put Option 1. However. favorable percentage moves in the underlying index. A call option is said to be „in-the-money‟ when the strike price of the option is less than the underlying asset price. Out-of-the-money An option is said to be „at-the-money‟. (Please see table) Striking the price Call Option Put Option 1. if the Sensex falls to 3700. a call option is „out-of-the-money‟ when the strike price is greater than the underlying asset price. the call option no longer has positive exercise value. For example. At-the-money Strike Price equal to Spot Price of underlying asset Strike Price equal to Spot Price of underlying asset 53 . The call holder has the right to buy a Sensex at 3900. no matter how much the spot market price has risen. when the option's strike price is equal to the underlying asset price. a Sensex call option with strike of 3900 is „in-the-money‟. Using the earlier example of Sensex call option. a profit can be made by selling Sensex at this higher price. The call holder will not exercise the option to buy Sensex at 3900 when the current price is at 3700. when the spot Sensex is at 4100 as the call option has value. On the other hand.In-the-money Strike Price less than Strike Price greater Spot Price of than Spot Price of underlying asset underlying asset 2. At-the-money. And with the current price at 4100.In-the-money. This is true for both puts and calls. is in-the-money at any given moment is called its intrinsic value. an amount greater than the current Sensex of 4100.3. a put option is out-of-the-money when the strike price is less than the spot price of underlying asset. the time value is the total option premium. a Sensex put at strike of 4400 is in-the-money when the Sensex is at 4100. therefore affecting the premium at which they are traded. Options are said to be deep in-the-money (or deep out-of-the-money) if the exercise price is at significant variance with the underlying asset price. Out-of-themoney Strike Price greater than Spot Price of underlying asset Strike Price less than Spot Price of underlying asset A put option is in-the-money when the strike price of the option is greater than the spot price of the underlying asset. by definition. Option Premium = Intrinsic Value + Time Value FACTORS THAT AFFECT THE VALUE OF AN OPTION PREMIUM 54 . This does not mean. It is the time value portion of an option's premium that is affected by fluctuations in volatility. all of these factors determine time value. Likewise. Thus. these options can be obtained at no cost. The put no longer has positive exercise value. the buyer of Sensex put option won't exercise the option when the spot is at 4800. interest rates. Together. dividend amounts and the passage of time. the put option has value because the put holder can sell the Sensex at 4400. call or put. Any amount by which an option's total premium exceeds intrinsic value is called the time value portion of the premium. For example. an at-the-money or out-of-the-money option has no intrinsic value. however. In the above example. The amount by which an option. When this is the case. There are other factors that give options value. 5. Non-Quantifiable Factors: 1.There are two types of factors that affect the value of the option premium: Quantifiable Factors: 1. 55 . The time to expiration and. The strike price of the option. The risk free interest rate. based on fundamental or technical analysis 3. 4. The two most popular option pricing models are: Black Scholes Model which assumes that percentage change in the price of underlying follows a normal distribution. 2.both in the options marketplace and in the market for the underlying asset 4. Market participants' varying estimates of the underlying asset's future volatility 2.the number of transactions and the contract's trading volume on any given day. Binomial Model which assumes that percentage change in price of the underlying follows a binomial distribution. Underlying stock price. The volatility of the underlying stock. The effect of supply & demand. The "depth" of the market for that option . 3. An option pricing model assists the trader in keeping the prices of calls & puts in proper numerical relationship to each other & helping the trader make bids & offer quickly. Individuals' varying estimates of future performance of the underlying asset. DIFFERENT PRICING MODELS FOR OPTIONS The theoretical option pricing models are used by option traders for calculating the fair value of an option on the basis of the earlier mentioned influencing factors. 56 . whether it is bullish.Pricing models include the binomial options model for American options and the Black-Scholes model for European options. together. These combinations enable a trader to develop an option-trading model which meets the trader's specific trading needs. They can be used for a multitude of purposes. and style. long a put. to hedge an existing position in an asset. to hedge other option positions. choppy. b. OPTIONS TRADING As described earlier. or in conjunction with other financial instruments to create a number of option-trading strategies. short a put. or neutral. to generate income by writing options against different quantities of options strategies that arise from these applications and the fact that the scope of this book is limited. c. Options are unique trading instruments. four possible option selections exist for a trader: a. WHY TO USE OPTIONS? There are two main reasons why an investor would use options: to Speculate and to Hedge. and enables him or her to anticipate every conceivable situation in the market. These four can be used independently. bearish. we will devote coverage to a cursory explanation of two of the most popular strategies which are designed to take advantage of market movement: spreads and straddles. providing tremendous versatility and utility. expectations. This trading structure can be adapted to handle any type of market outlook. and d. long a call. Among their multiple applications are the following: to speculate on the movement of an asset. short a call. Even the individual investor can benefit. So why do people speculate with options if the odds are so skewed? Aside from versatility. Because of the versatility of options. especially for large institutions.Speculation One can think of speculation as betting on the movement of a security. you'd also have to take commissions into account. it doesn't take much of a price movement to generate substantial profits. This is because when one buys an option. one can also make money when the market goes down or even sideways. On the other hand. which indicates that the expiration is the third Friday of July and the strike price is $70.and lost. but also the magnitude and the timing of this movement. 57 . there is no doubt that hedging strategies can be useful. One can imagine that he wanted to take advantage of technology stocks and their upside. To succeed. he shouldn't make the investment. he must correctly predict whether a stock will go up or down. Speculation is the territory in which the big money is made . Critics of options say that if he is so unsure of his stock pick that he needs a hedge.15 x 100 = $315. HOW OPTIONS WORKS? Let's say that on May 1. and he have to be right about how much the price will change as well as the time frame it will take for all this to happen. The total price of the contract is $3. it's all about using leverage. In reality. Hedging The other function of options is hedging. he have to be correct in determining not only the direction of the stock's movement.15 for a July 70 Call. but we'll ignore them for this example. Just as one insures his house or car. By using options. When one is controlling 100 shares with one contract. the stock price of L&T is $67 and the premium (cost) is $3. The advantage of options is that one isn‟t limited to making a profit only when the market goes up. he would be able to restrict his downside while enjoying the full upside in a cost-effective way. The use of options in this manner is the reason options have the reputation of being risky. but say he also wanted to limit any losses. options can be used to insure your investments against a downturn. Think of this as an insurance policy. the option contract is worthless. so the option is worthless. of course. When the stock price is $67. You almost doubled our money in just three weeks! You could sell your options. let's say we let it ride.Remember. a stock option contract is the option to buy 100 shares. here is what happened to our option investment: Date Stock Price Option Price Paper Gain/Loss May 1 $67 $3. Exercising Versus Trading-Out So far we've talked about options as the right to buy or sell (exercise) the underlying.25 x 100 = $825. furthermore.unless. For the sake of this example. We are now down to the original investment of $315. Subtract what you paid for the contract.25 $825 $510 Expiry Date $62 worthless $0 -$315 Contract Value $315 $0 The price swing for the length of this contract from high to low was $825.25 . the break-even price would be $73. But don't forget that you've paid $315 for the option.$3.15) x 100 = $510. By the expiration date. and your profit is ($8. the price drops to $62. This is leverage in action.15 per share." and take your profits . but in reality. it's less than the $70 strike price. The options contract has increased along with the stock price and is now worth $8. To recap. because the contract is $3. The strike price of $70 means that the stock price must rise above $70 before the call option is worth anything. a majority of options are not actually 58 . you think the stock price will continue to rise. so you are currently down by this amount.15. This is true. which are called "closing your position. which would have given us over double our original investment. that's why you must multiply the contract by 100 to get the total price. Because this is less than our $70 strike price and there is no time left. Three weeks later the stock price is $78.15 May 21 $78 $8. and 30% expire worthless. Avoid buying or selling options based upon anticipated news (buyouts in particular). Time value represents the possibility of the option increasing in value. If you are wondering. Besides bordering on unethical trading.exercised. Remember. WHEN NOT TO BUY AN OPTION? It is also important to consider the time or the date at which one should enter the option market. This means that holders sell their options in the market. intrinsic value is the amount in-the-money. However. Avoid trading in an illiquid option market. an option's premium is its intrinsic value + time value.15 to $8. In our example. the price of the option in our example can be thought of as the following: Premium = Intrinsic Value + Time Value $8. 60% are traded out. These fluctuations can be explained by intrinsic value and time value. the information received is more likely to be rumor than correct. Basically. for a call option. knowing you were able to buy it at a discount to the present value. which. about 10% of options are exercised.25 In real life options almost always trade above intrinsic value. means that the price of the stock equals the strike price. 59 . So.25. According to the CBOE. Intrinsic Value and Time Value At this point it is worth explaining more about the pricing of options.25 = $8 + $0. You could also keep the stock. In our example the premium (price) of the option went from $3. the majority of the time holders choose to take their profits by trading out (closing out) their position. we just picked the numbers for this example out of the air to demonstrate how options work. and writers buy their positions back to close. you could make money by exercising at $70 and then selling the stock back in the market at $78 for a profit of $8 a share. Avoid purchasing call options just prior to a stock going ex-dividend. Avoid purchasing options well after the market has established a defined trend - this is especially true when day trading, as any option premium advantage will have dissipated. Avoid purchasing way out-of-the-money options when day trading, as any favorable price movement will have a negligible effect upon premium. Avoid purchasing call options when the underlying security is up for the day versus the prior day's close, unless one intends to take a trendfollowing stance. Avoid purchasing put options when the underlying security is down for the day versus the prior day's close, unless one intends to take a trendfollowing stance. Be careful when holding long option positions beyond Friday's trading day's close unless one is option position trading. Many option theoreticians recalculate their volatility, delta, and time decay numbers once a week, usually after the close of trading on Fridays or over the weekend. The resulting adjustments in these values most often have a negative effect on the value of the long option, which may be acceptable when holding an option over an extended period of time but is detrimental when day trading. HOW TO READ AN OPTION TABLE? Column 1: Strike Price - This is the stated price per share for which an underlying stock may be purchased (for a call) or sold (for a put) upon the exercise of the option contract. Option strike prices typically move by increments of $2.50 or $5 (even though in the above example it moves in $2 increments). Column 2: Expiry Date - This shows the termination date of an option contract. Remember that U.S.-listed options expire on the third Friday of the expiry month. 60 Column 3: Call or Put - This column refers to whether the option is a call (C) or put (P). Column 4: Volume - This indicates the total number of options contracts traded for the day. The total volume of all contracts is listed at the bottom of each table. Column 5: Bid - This indicates the price someone is willing to pay for the options contract. Column 6: Ask - This indicates the price at which someone is willing to sell an options contract. Column 7: Open Interest - Open interest is the number of options contracts that are open; these are contracts that have neither expired nor been exercised. PAYOFF FOR DERIVATIVES CONTRACT A payoff is the likely profit/loss that would accrue to a market participant with change in the price of the underlying asset. This is generally depicted in the form of payoff diagrams which show the price of the underlying asset on the X– axis and the profits/losses on the Y–axis. PAYOFF FOR FUTURES Futures contracts have linear payoffs. In simple words, it means that the losses as well as profits for the buyer and the seller of a futures contract are 61 unlimited. These linear payoffs are fascinating as they can be combined with options and the underlying to generate various complex payoffs. twomonth Nifty index futures contract when the Nifty stands at 1220. The underlying asset in this case is the Nifty portfolio. When the index moves up, the long futures position starts making profits, and when the index moves down it starts making twomonth Nifty index futures contract when the Nifty stands at 1220. The underlying asset in this case is the Nifty portfolio. When the index moves down, the short futures position starts making profits, and when the index moves up, it starts making losses. OPTIONS PAYOFF The optionally characteristic of options results in a non-linear payoff for options. In simple words, it means that the losses for the buyer of an option are limited; however the profits are potentially unlimited. For a writer, the payoff is exactly the opposite. His profits are limited to the option premium; however his losses are potentially unlimited. These non-linear payoffs are fascinating as they lend themselves to be used to generate various payoffs by using combinations of options and the underlying. We look here at the six basic payoffs. Payoff profile of buyer of asset: Long asset In this basic position, an investor buys the underlying asset, Nifty for instance, for 1220, and sells it at a future date at an unknown price, once it is purchased, the investor is said to be “long” the asset. 62 the writer of the option charges a premium. His loss in this case is the premium he paid for buying the option. Payoff profile for writer of call options: Short call A call option gives the buyer the right to buy the underlying asset at the strike price specified in the option. Payoff profile for buyer of put options: Long put A put option gives the buyer the right to sell the underlying asset at the strike price specified in the option. the spot price is below the strike price. the spot price exceeds the strike price. If upon expiration the spot price of the underlying is less than the strike price. 63 . for 1220. Hence as the spot price increases the writer of the option starts making losses. and buys it back at a future date at an unknown price. If upon expiration. The profit/loss that the buyer makes on the option depends on the spot price of the underlying. For selling the option. If the spot price of the underlying is less than the strike price. His loss in this case is the premium he paid for buying the option. the buyer lets his option expire unexercised and the writer gets to keep the premium. he lets his option expire un-exercised. Whatever is the buyer‟s profit is the seller‟s loss. Lower the spot price more is the profit he makes. Payoff profile le for buyer of call options: Long call A call option gives the buyer the right to buy the underlying asset at the strike price specified in the option. an investor shorts the underlying asset. Higher the spot price more is the profit he makes. If upon expiration. the spot price exceeds the strike price. the buyer will exercise the option on the writer. If the spot price of the underlying is higher than the strike price. The profit/loss that the buyer makes on the option depends on the spot price of the underlying. The profit/loss that the buyer makes on the option depends on the spot price of the underlying. he makes a profit.Payoff profile for seller of asset: Short asset In this basic position. If upon expiration. Nifty for instance. he makes a profit. Higher the spot price more is the loss he makes. he lets his option expire un-exercised. If upon expiration the spot price of the underlying is more than the strike price. For selling the option.Payoff profile for writer of put options: Short put A put option gives the buyer the right to sell the underlying asset at the strike price specified in the option. the buyer lets his option expire un-exercised and the writer gets to keep the premium. the spot price happens to be below the strike price. Whatever is the buyer‟s profit is the seller‟s loss. The profit/loss that the buyer makes on the option depends on the spot price of the underlying. the writer of the option charges a premium. If upon expiration. Chapter 4 64 . the buyer will exercise the option on the writer. 65 . When someone mentions hedging. Arbitrage and Speculation Strategies HEDGING. A hedge is just a way of insuring an investment against risk. ARBITRAGE AND SPECULATION STRATEGIES HEDGING Hedging is a way of reducing some of the risk involved in holding an investment.Hedging. There are many different risks against which one can hedge and many different methods of hedging. think of insurance. chances are that any particular stock will fall too. because by intention. you may be well advised to hedge your position. If the market moves up. So if you own a stock with good prospects but you think the stock market in general is overpriced. When doing so. you may need to advance more margin to cover your short position. i. (It's most expensive because you're buying insurance not only against market risk but against the risk of the specific security as well. short Nifty futures Investors studying the market often come across a security which they believe is intrinsically undervalued. but most expensive method. There are many ways of hedging against market risk.e. you will not participate in the rally. A stock picker carefully purchases securities based on a sense that they are worth more than the market price. and will not be able to use your stocks to cover the margin calls. HEDGING STRATEGIES WITH EXAMPLES Hedging: Long security. The efficiency of the hedge is strongly dependent on your estimate of the correlation between your high-beta portfolio and the broad market index. Much of the risk in holding any particular stock is market risk. It may be the case that the profits and the quality of the company make it seem worth a lot more than what the market thinks. is to buy a put option for the stock you own. you've set up your futures position as a complete hedge. if the market falls sharply. he faces two kinds of risks: 66 . The simplest.Consider a simple (perhaps the simplest) case. But keep in mind the following points. If the market goes up.) If you're trying to hedge an entire portfolio. futures are probably the cheapest way to do so. A person may buy SBI at Rs. There is a simple way out. to completely remove the hidden Nifty exposure. i.e. The position LONG SBI+ SHORT NIFTY is a pure play on the value of SBI. 2. 2.000 67 . you will have a position.00. We need to know the “beta” of the security. Nifty drops. When this is done.e.000. at the cost of lower risk.1. Rs 4. is 1. as incidental baggage. a LONG SBI position is not a focused play on the valuation of SBI.2 *3.000. and suppose we have a LONG SBIN position of Rs. This offsets the hidden Nifty exposure that is inside every long–security position. In this sense. The stock picker may be thinking he wants to be LONG SBI. but a long position on SBI effectively forces him to be LONG SBI + LONG NIFTY. There is a peculiar problem here. you should sell some amount of Nifty futures. A few days later. picking securities.33.670 thinking that it would announce good results and the security price would rise.2. Once this is done. or. the stock picker has “hedged away” his index exposure. so he makes losses.3. the average impact of a 1% move in Nifty upon the security. The entire market moves against him and generates losses even though the underlying idea was correct.33. i. Methodology 1. The basic point of this hedging strategy is that the stock picker proceeds with his core skill.e. It carries a LONG NIFTY position along with it. The second outcome happens all the time. i. which is purely about the performance of the security. it is generally safe to assume the beta is 1. Every buy position on a security is simultaneously a buy position on Nifty. If betas are not known. even if his understanding of SBI was correct. Suppose we take SBIN. The size of the position that we need on the index futures market. and the company is really not worth more than the market price. where the beta is 1. His understanding can be wrong. This is because a LONG SBI position generally gains if Nifty rises and generally loses if Nifty drops. Every time you adopt a long position on a security. without any extra risk from fluctuations of the market index. without “panic selling” of shares.000 of Nifty we need to sell one market lot.00. they may see that the market is in for a few days or weeks of massive volatility. with the index futures market. The union budget is a common and reliable source of such volatility: market volatility is always enhanced for one week before and two weeks after a budget. Many investors simply do not want the fluctuations of these three weeks. and the market lot on the futures market is 200. We sell one market lot of Nifty (200 nifties) to get the position: LONG SBIN Rs.33. In addition. 4. Nifty has dropped sharply. This is particularly a problem if you need to sell shares in the near future. Suppose Nifty is at 2000. This sentiment generates “panic selling” which is rarely optimal for the investor. Hedging: Have portfolio. i. suffer the pain of the volatility.3. Hence each market lot of Nifty is Rs 4. and they do not have an appetite for this kind of volatility. 68 .00. Sometimes. When you have such anxieties. This leads to political pressures for government to “do something” when security prices fall. short Nifty futures The only certainty about the capital market is that it fluctuates! A lot of investors who own portfolios experience the feeling of discomfort about overall market movements. It allows an investor to be in control of his risk. there are two alternatives: Sell shares immediately. instead of doing nothing and suffering the risk.4. in order to finance a purchase of a house.3. This allows rapid response to market conditions. Returns on the position will be roughly neutral to movements of Nifty. for example. Do nothing. hence only successful forecasts about SBIN will benefit from this position.00.000 This position will be essentially immune to fluctuations of Nifty. To short Rs. a third and remarkable alternative becomes available: Remove your exposure to index fluctuations temporarily using index futures. This planning can go wrong if by the time you sell shares. they may have a view that security prices will fall in the near future. At other times.000.000 SHORT NIFTY Rs.4. The profits/losses position will fully reflect price changes intrinsic to SBIN.e. 25 million of Nifty futures. Every portfolio contains a hidden index exposure. then the portfolio beta is (1 * 1.3 million on the Nifty futures. Hence we need to sell 12 market lots. which has a beta of 0. This statement is true for all portfolios. the portfolio is Rs.250. most of the portfolio risk is accounted for by index fluctuations (unlike individual securities. the futures gain and the portfolio loses.e.1 million of Hindalco.2 million of Hindustan Lever. the portfolio gains and the futures lose.000. Suppose we have a portfolio composed of Rs. If the beta of any securities is not known. This position will be essentially immune to fluctuations of Nifty. which completely removes the hidden Nifty exposure. It is easy to calculate the portfolio beta: it is the weighted average of securities betas.The idea here is quite simple.8)/3 or 1.4 and Rs. The complete hedge is obtained by adopting a position on the index futures market. i. In the case of portfolios. hence we would need a position of Rs. which has a beta of 1. In either case.3. Then a complete hedge is obtained by selling Rs. If Nifty goes down.e.Hence a position LONG PORTFOLIO + SHORT NIFTY can often become one–tenth as risky as the LONG PORTFOLIO position! Suppose we have a portfolio of Rs. In the above case. i.4 + 2 * 0. Each market lot of Nifty costs Rs. and the market lot on the futures market is 200. 2.25. Suppose Nifty is 1250. or (b) when his financial planning involves selling shares at a future date and would be affected if Nifty drops. where only 30–60% of the securities risk is accounted for by index fluctuations). The investor should adopt this strategy for the short periods of time where (a) the market volatility that he anticipates makes him uncomfortable. 3.000.3 million with a beta of 1.3. 2400 Nifties to get the position: LONG PORTFOLIO Rs.1 million which has a beta of 1. It does not make sense to use this strategy for long periods of time – if a two–year 69 . whether a portfolio is composed of index securities or not. If Nifty goes up. Methodology 1.000. it is safe to assume that it is 1. We need to know the “beta” of the portfolio.8.1.000. the investor has no risk from market fluctuations when he is completely hedged. the average impact of a 1% move in Nifty upon the portfolio.000 SHORT NIFTY Rs. For that time. A person may have made up his mind on what portfolio he seeks to buy. The land deal is slow and takes weeks to complete. The complete hedge may require selling Rs. partial hedging is appropriate. two–thirds of his portfolio is hedged and one– third of the portfolio is held unhedged. The execution would be improved substantially if he 70 . he is exposed to the risk of missing out if the overall market index goes up.hedging is desired. the investor is partly invested in cash and partly invested in securities. Complete hedging eliminates all risk of gain or loss. In that case. but the investor may choose to only sell Rs. Some common occurrences of this include: _ A closed-end fund. It takes several weeks from the date that it becomes sure that the funds will come to the date that the funds actually are in hand. An open-ended fund has just sold fresh units and has received funds. Another important choice for the investor is the degree of hedging.3 million of the futures. buy Nifty futures Have you ever been in a situation where you had funds. 2. This process takes time.2 million of the futures. which just finished its initial public offering. and carefully pick securities that are expected to do well. and buy back shares after two years. This strategy makes the most sense for rapid adjustments. Hedging: Have funds. which needed to get invested in equity? Or of expecting to obtain funds in the future which will get invested in equity. The exact degree of hedging chosen depends upon the appetite for risk that the investor has. but going to the market and placing market orders would generate large „impact costs‟. invest the proceeds. has cash. which is not yet invested. it is better to sell the shares. Suppose a person plans to sell land and buy shares. Sometimes the investor may be willing to tolerate some risk of loss so as to hang on to some risk of gain. During this time. A person may need time to research securities. In this case. Getting invested in equity ought to be easy but there are three problems: 1. As and when shares are obtained. one would scale down the LONG NIFTY position correspondingly. This takes time. In some cases. or to suffer the risk of staying in cash.5 million. With Nifty futures. Similarly. A person who expects to obtain Rs. Later. he is exposed to the risk of missing out if the Nifty goes up. No matter how slowly securities are purchased. Hence it is equally important for the owner of money to use index futures to hedge against a rise in Nifty! 71 . which has just finished its initial public offering and has cash. can immediately enter into a LONG NIFTY to the extent it wants to be invested in equity. and during this time. such as the land sale above. in India. this strategy allows the investor to take more care and spend more time in choosing securities and placing aggressive limit orders. Hedging is often thought of as a technique that is used in the context of equity exposure.5 million by selling land would immediately enter into a position LONG NIFTY worth Rs. So far. Hence. a closed-end fund. It is common for people to think that the owner of shares needs index futures to hedge against a drop in Nifty. immediately. He is exposed to the risk of missing out if Nifty rises. Holding money in hand. the person may simply not have cash to immediately buy shares. which is not yet invested. is a risk because Nifty may rise. this strategy would fully capture a rise in Nifty. The index futures market is likely to be more liquid than individual securities so it is possible to take extremely large positions at a low impact cost. 3. a third alternative becomes available: The investor would obtain the desired equity exposure by buying index futures. we have had exactly two alternative strategies.could instead place limit orders and gradually accumulate the portfolio at favorable prices. so there is no risk of missing out on a broad rise in the securities market while this process is taking place. the investor/closed-end fund can gradually acquire securities (either based on detailed research and/or based on aggressive limit orders). when you want to be invested in shares. hence he is forced to wait even if he feels that Nifty is unusually cheap. which an investor can adopt: to buy liquid securities in a hurry. ARBITRAGE Arbitrage is the practice of taking advantage of a state of imbalance between two (or possibly more) markets. he obtained or paid the „mark–to–market margin‟ on his outstanding futures position. the profit being the difference between the market prices.8 million. thus capturing the gains on the index. With the help of the arbitrage strategies discussed above. By 25 Mar 2005 he had fully invested in all the shares that he wanted (as of 13 Mar) and had no futures position left. a strategist can make risk-less profits by making use of mispricing in the market. his long position was worth 4. we can exploit the market condition and earn risk-free return. totaling Rs. Arbitrage is the safest way to make money in the market. A person who engages in arbitrage is called an arbitrageur.e. At that time Nifty was at 2000. 4. A combination of matching deals are struck that exploit the imbalance. From 14 March 2005 to 25 March 2005 he gradually acquired the securities. On each day. He entered into a LONG NIFTY MARCH FUTURES position for 2400 Nifties. 5. the scope for making money is diminutive.4.80. On each day. 72 . He made a list of 14 securities to buy.8 million on 13th March 2005. the securities purchased were at a changed price (as compared to the price prevalent on 13 March). at 13 March prices. 2. i. Arbitrage is game of strategy and also funds. 3. However.Methodology 1.000. he purchased one securities and sold off a corresponding amount of futures. A person obtained Rs.4. On each day. On the other hand. A participant with ample funds can easily earn risk-free returns. 2. There is no credit risk since the counter party on both legs is the NSCCL which supplies clearing services on NSE. a single keystroke can fire off these 50 orders in rapid succession into the NSE or BSE trading system. 3. Methodology 1. The below stated strategies cover all the types of arbitrage possibilities using equity derivatives. Using the NEAT or BOLT software. and simultaneously sells them at a future date on the futures market. Round off the number of shares in each security. such as traditional banks and the most conservative corporate treasuries. bank and financial institution are very active in arbitrage activities. It is an ideal lending vehicle for entities which are shy of price risk and credit risk. This gives you the buy position 73 . It is like a repo. The lender buys all 50 securities of Nifty on the cash market. and without bearing any credit risk. Borrowing and lending is a common practice in arbitrage transaction. What is new about the index futures market is that it supplies a technology to lend money into the market without suffering any exposure to Nifty. Cash and F&O. ARBITRAGE STRATEGIES WITH EXAMPLES Arbitrage: Have funds. NSE and BSE. There is no price risk since the position is perfectly hedged. lend them to the market. therefore. i. Calculate a portfolio which buys all the 50 securities in Nifty in correct proportion. The basic idea is simple.Arbitrage could be inter-exchange. Traditional methods of loaning money into the security market suffer from (a) price risk of shares and (b) credit risk of default of the counter-party. Arbitrage could also be between two segments of the market. without suffering the risk. where the money invested in each security is proportional to its market capitalization.e. Most people would like to lend funds into the security market. A person wants to earn this return (60/2400 for 27 days). 5. At this point. A few days later.3% higher. A moment later. He sells Rs. In this case. so fluctuations in Nifty do not affect you. Nifty is at 2400. if the Nifty spot is 2100. Some days later (anytime you want).4. the difference between the futures price and the cash Nifty is the return to the moneylender. Now your position is down to 0. He buys Rs. 9. What is the interest rate that you will receive? We will use one specific case. he has obtained the Nifty spot for 2407.e. with two complications: the moneylender additionally earns any dividends that the 50 shares pay while he has held them. 8. sell Nifty futures of equal value. you will have to make delivery of the 50 securities and receive money for them. This is the point at which “your money is repaid to you”. This is the point at which you are “loaning money to the market”. brokerage) in doing these trades. 7. In doing this. A few days later. He takes delivery of the shares and waits. Example On 1 August. A moment later. and the moneylender suffers transactions costs (impact cost. he places 50 market orders and ends up paying slightly more. where you will unwind the transaction on the expiration date of the futures. 1. you will have to take delivery of the 50 securities and pay for them. 74 . On 1 March 2005. His average cost of purchase is 0. i. The futures market is extremely liquid so the market order for Rs.3 million goes through at near–zero impact cost. Now you are completely hedged.3 million of the futures at 2460. and the Nifty March 2005 futures are at 2142 then the difference (2% for 30 days) is the return that the moneylender obtains.3 million of Nifty on the spot market. A futures contract is trading with 27th August expiration for 2460. 6. reverse the futures position. you will unwind the entire transaction. 2. use NEAT to send 50 sell orders in rapid succession to sell off all the 50 securities. 88% In addition. You would sell off all 50 securities in Nifty and buy them back at a future date using the index futures.14. You can deploy this money. The index futures market offers a riskless mechanism for (effectively) loaning out shares and earning a positive return for them. The futures position spontaneously expires on 27 August at 2420 (the value of the futures on the last day is always equal to the Nifty spot). 75 . and pay for them.6 (0. In the above case. He has gained Rs. On 27 August.000. The basic idea is quite simple.3. The dividends work out to Rs.4%.40 (1. 4. risk free. you would buy back your shares. On this date. Nifty happens to have closed at 2420 and his sell orders (which suffer impact cost) goes through at 2413. It is like a repo. the return is roughly 2460/2400 or 2.14000 or 0. 6. he has gained Rs. putting 50 market orders to sell off all the shares. and we subtract 0. as you like until the futures expiration. Arbitrage: Have securities.23% owing to the dividends for a total return of 2.4% for transactions costs giving 2. To do this. However. you would sell off your certificates and contract to buy them back in the future at a fixed price. 5. While waiting.25%) on the spot Nifty and Rs. It is easier to make a rough calculation of the return. at 3:15. stocklending schemes that are widely accessible do not exist in India.63%) on the futures for a return of near 1. he puts in market orders to sell off his Nifty portfolio. we ignore the gain from dividends and we assume that transactions costs account for 0. lend them to the market Owners of a portfolio of shares often think in terms of juicing up their returns by earning revenues from stocklending. You would soon receive money for the shares you have sold. There is no price risk (since you are perfectly hedged) and there is no credit risk (since your counterparty on both legs of the transaction is the NSCCL). a few dividends come into his hands.11% for 27 days.5% for 27 days.1% for 27 days. 4.X% . This can also be interpreted as a mechanism to obtain a cash loan using your portfolio of Nifty shares as collateral. if the spot-futures basis is 1% per month and you are loaning out money at 1.0.5% per month and you are loaning out the money at 1. When is this worthwhile? When the spot-futures basis (the difference between spot Nifty and the futures Nifty) is smaller than the riskless interest rate that you can find in the economy. at 3:15 PM. with each share being present in the portfolio with a weight that is proportional to its market capitalization). A few days later. 6. 2. 3. Buy index futures of an equal value at a future date. This can be done using a single keystroke using the NEAT software. Example 76 . Invest this money at the riskless interest rate.5 million of the NSE-50 portfolio (in their correct proportion. Sell off all 50 shares on the cash market. Suppose the spot–futures basis is X% and suppose the rate at which funds can be invested is Y %. we assume that transactions costs account for 0. To do this. you will receive money and have to make delivery of the 50 shares. 5. 1.4%. this stock lending could be profitable It is easy to approximate the return obtained in stock lending. On the date that the futures expire.2% per month. you will need to pay in the money and get back your shares. Then the total return is (Y . In this case.Methodology Suppose you have Rs. Conversely.5% per month. put in 50 orders (using NEAT again) to buy the entire NSE-50 portfolio.4%) over the time that the position is held. it is not profitable. If the spot–futures basis is 2. A few days later. it may be worth doing even if the spot–futures basis is somewhat wider. this is 1098* or 1120. he puts in 50 orders. Suppose a person has Rs. he makes delivery of shares and receives Rs. sell futures 77 . 4.71% over the two–month period.1153 or 7. he puts in a market order to buy Rs.4 million of Nifty using the feature in NEAT to rapidly place 50 market orders in quick succession. On a base of Rs. The seller always suffers impact cost.400. on the entire transaction. Suppose cash can be riskless invested at 1% per month.4 or 0.9-0.Suppose the Nifty spot is 1100 and the two–month futures are trading at 1110. owing to impact cost.9%.99 million (assuming an impact cost of 2/1100).3. A moment later. he get back Rs. funds invested at 1% per month yield 2. but the difference is exactly offset by profits on the futures contract.4 million of the Nifty futures. At the end of two months. he is completely hedged.4%. Assume that the transactions costs are 0. suppose he ends up paying 1153 and not 1150. Hence the spot– futures basis (10/1100) is 0. this is Rs. A few days later. 3. which he would like to lend to the market. 6.4 million. of 1120 + 40 .4 million of the Nifty portfolio. On the expiration date of the futures.01-0. Suppose Nifty has moved up to 1150 by this time.199.40.25. Translated in terms of Nifty. Let us make this concrete using a specific sequence of trades. placing market orders to buy back his Nifty portfolio. He puts in sell orders for Rs. The order executes at 1110.01%. He has funds in hand of 1120.70. Hence the total return that can be obtained in stock lending is 2. and the futures contract pays 40 (1150-1110) so he ends up with a clean profit. 1. Suppose he lends this out at 1% per month for two months. 5. Over two months. using NEAT. Arbitrage: Overpriced futures: buy spot. This makes shares are costlier in buying back. At this point. suppose he obtains an actual execution at 1098. 2. When the market order is placed. Arbitrage: Underpriced futures: buy futures. how can you cash in on this opportunity to earn riskless profits? Say for instance. 2. If you notice that futures on a security that you have been observing seem overpriced.1015.1025 and seem overpriced.1000. Whenever the futures price deviates substantially from its fair value. Simultaneously. The result is a riskless profit of Rs. borrow funds. sell spot 78 . ABB trades at Rs. one has to build in the transactions costs into the arbitrage strategy. As an arbitrageur. Sell the security. When does it make sense to enter into this arbitrage? If your cost of borrowing funds to buy the security is less than the arbitrage profit possible. 4. Remember however. you can make riskless profit by entering into the following set of transactions.15 on the spot position and Rs. 6. Return the borrowed funds. the spot and the futures price converge. One–month ABB futures trade at Rs.10 on the futures position. 3. On day one. This is termed as cash–and–carry arbitrage. the cost-of-carry ensures that the futures price stay in tune with the spot price. 8. sell the futures on the security at 1025. 7. In the real world. it makes sense for you to arbitrage. On the futures expiration date.10. Now unwind the position.As we discussed earlier. buy the security on the cash/spot market at 1000. Futures position expires with profit of Rs. Say the security closes at Rs. Take delivery of the security purchased and hold the security for a month. arbitrage opportunities arise. 1. 5. that exploiting an arbitrage opportunity involves trading on the spot and futures market. because buying an option is highly leveraged transaction. 79 . Simultaneously. On day one. we will see increased volumes and lower spreads in both the cash as well as the derivatives market. The futures position expires with a profit of Rs.1000. This is termed as reverse–cash–and–carry arbitrage. Speculation in option is not very common. arbitrage opportunities arise. Major part of the market volumes come from speculation. Say the security closes at Rs.Whenever the futures price deviates substantially from its fair value. The result is a riskless profit of Rs.10 on the futures position. As more and more players in the market develop the knowledge and skills to do cash–and–carry and reverse cash–and–carry. Make delivery of the security. sell the security in the cash/spot market at 1000.975. Now unwind the position. 4. It is this arbitrage activity that ensures that the spot and futures prices stay in line with the cost–of–carry. it makes sense for you to arbitrage. As we can see. Market participants to speculate extensively use Index futures and stock futures. SPECULATIONS Speculation has a lot of risks involved. Speculator is responsible for liquidity in the market. As an arbitrageur. exploiting arbitrage involves trading on the spot market. 3. 2. 1. buy the futures on the security at 965.10. On the futures expiration date. 7. Buy back the security. Specially speculation in derivates is even more riskier as the derivatives are leveraged instruments. 965 and seem underpriced. If the returns you get by investing in riskless instruments is less than the return from the arbitrage trades. the spot and the futures price converge. How can you cash in on this opportunity to earn riskless profits? Say for instance. One–month ABB futures trade at Rs. be it cash market or the F&O segment. you can make riskless profit by entering into the following set of transactions.25 on the spot position and Rs. Index futures attract the maximum volumes in the derivatives segment. It could be the case that you notice the futures on a security you hold seem underpriced. ABB trades at Rs. 5. 6. Methodology 80 . Buy selected liquid securities which move with the index. Once a person is LONG NIFTY using the futures market. and sell them at a later date: or. After a good budget. SPECULATION STRATEGIES WITH EXAMPLES Speculation: Bullish Index. Speculation in individual securities attracts highest risk. However. he gains if the index rises and loses if the index falls. Taking a position on the index is effortless using the index futures market. The above-discussed strategies are responsible for liquidity in the Derivatives segment hence leading to volumes in the cash segment also.Speculation in options is naked positions. index is less volatile and index movement is easy to analyze than the individual stock movements. these positions run the risk of making losses owing to company–specific news. Buy the entire index portfolio and then sell it at a later date. which are very risky. or the onset of a stable government. The first alternative is widely used – a lot of the trading volume on liquid securities is based on using these liquid securities as an index proxy. Using index futures. a person has two choices: 1. many people feel that the index would go up. The second alternative is cumbersome and expensive in terms of transactions costs. an investor can “buy” or “sell” the entire index by trading on one single security. are individual securities are more volatile than the market index. they are not purely focused upon the index. Speculation in the market index is very common. long nifty futures Sometimes we think that the market index is going to rise and that we can make a profit by adopting a position on the index. 2. How does one implement a trading strategy to benefit from an upward movement in the index? Today. or good corporate results. which is something like Rs. short Nifty futures Sometimes we think that the market index is going to fall and that we can make profit by adopting a position on the index. if Nifty is at 1200.240. After a bad budget.980. Hence.000.35. Sell selected liquid securities which move with the index. The Nifty July contract has risen to Rs.000. Example 1.980. 5. Nifty has risen to 967.000.When you think the index will go up. and buy them at a later date: or. 3.20. Speculation: Bearish index. 4.000. Shorter dated futures tend to be more liquid. He sells off his position at Rs.192. The choice is basically about the horizon of the investor.4000. The investor can choose any of them to implement this position.4 million.20. When the trade takes place. His profits from the position are Rs. the Nifty July contract costs Rs. or bad corporate results. 2. Futures are available at several different expirations. the investor gets a claim on the index worth Rs. buy the Nifty futures. At this time. Similarly. Longer dated futures go well with long–term forecasts about the movement of the index.960 so his position is worth Rs. How does one implement a trading strategy to benefit from a downward movement in the index? Today a person has two choices: 1.200. the investment is done in units of Rs. by paying up Rs. 81 . On 14 July 2001. 4. The minimum market lot is 200 Nifties. the investor gets a claim on Nifty worth Rs. On 1 July 2001. Hence. He buys 200 Nifties with expiration date on 31st July 2001. many people feel that the index would go down.240.000. or the onset of a coalition government. by paying an initial margin of Rs.000. a person feels the index will rise. the investor is only required to pay up the initial margin.2. 3. 000. which is something like Rs. Hence.060 so his position is worth Rs. Shorter dated futures tend to be more liquid. 2. 4. the investor is only required to pay up the initial margin. Once a person is SHORT NIFTY using the futures market. On 10 June 2001.90. Example 1.212.200. Taking a position on the index is effortless using the index futures market. Nifty has fallen to 962.000.990. This strategy is also cumbersome and expensive in terms of transactions costs. an investor can “buy” or “sell” the entire index by trading on one single security.20. if Nifty is at 1200. by paying an initial margin of Rs. The second alternative is hard to implement. he squares off his position. The investor can choose any of them to implement this position. the investor gets a claim on Nifty worth Rs. He sells 200 Nifties with a expiration date of 26th June 2001. When the trade takes place. The choice is basically about the horizon of the investor. Methodology When you think the index will go down. these positions run the risk of making losses owing to company–specific news.240.4 million. a person feels the index will fall. The first alternative is widely used – a lot of the trading volume on liquid securities is based on using these securities as an index proxy.2.000. On 1 June 2001. by paying up Rs. the investment is done in units of Rs. the Nifty June contract costs Rs. Using index futures. 5. Similarly. 3.2.20. Longer dated futures go well with long–term forecasts about the movement of the index.000.1. Futures are available at several different expirations. they are not purely focused upon the index. 82 .000. Hence. At this time.000 the investor gets a claim on the index worth Rs. sell the Nifty futures. However. The minimum market lot is 200 Nifties.240. The Nifty June contract has fallen to Rs. Sell the entire index portfolio and then buy it at a later date. he gains if the index falls and loses if the index rises. 000.14.His profits from the position work out to be Rs. Chapter 5 83 . Applicability of Derivative Instruments APPLICABILITY OF DERIVATIVE INSTRUMENTS RISK MANAGEMENT: CONCEPT AND DEFINATION In recent years mangers have become increasingly aware of how their organizations can be affected by risks beyond their control. interest rates and commodity prices have destabilizing affects on performance and corporate strategy. In many cases fluctuations in economic and financial variables such as exchange rates. 84 . and The organization is encouraged to manage proactively rather than reactively. the benefits to be gained. and what level of Risk Management it is prudent to apply. Effective and systematic Risk Management yields the following key benefits: Forward. Identifying threats to the company. OBJECTIVES AND BENEFITS OF RISK MANAGEMENT The key objectives of Risk Management are: Providing a basis for informed decision making as a consequence of creating transparency in the company‟s risk situation. Clearly prioritizing risks to the company and their mitigation strategies: and Creating opportunities through improving risk mitigation capabilities. They are possibilities not included in forecast/budget and represent an upside or downside to the forecast/ budget. Balance thinking – a trade – off must be struck between the cost of managing risk. WHAT IS RISK MANAGEMENT? Risk management embraces the whole spectrum of activities and measures associated with the identification. its assets and its financial and earning potential.WHAT IS RISK? Risks are defined as internal or external causes of and reasons for deviations in actual results and forecasts/budgets. Implementing proper mechanism for the identification. responsible thinking. so the organization is prepared for what might happen and is better prepared for making decisions to improve the effectiveness and efficiency of performance. 85 . evaluation and handling of opportunities and risks. or factors that can lead to changes in the forecast. analysis and mitigation of potential risks. rigorous. It embraces the whole spectrum of activities and measures concerned with systematic management of risks within the organization. Furthermore. organization of the company Risk Identification Identification of risks and of their sources 86 Risk Evaluation Evaluation of risks concerning their impact & probability . and ensure ongoing reporting for informed decision making.It helps to speed up the decision making process. The efficiency of the process is the responsibility of all managers within the organization and cannot be viewed as the sole responsibility of the Risk Manager. giving clear priorities to each type of activity or project requiring management attention and thus giving a clear cut advantage to the business. strategies. the risk management process must remain sufficiently flexible to accommodate new situations as they arise. The process contains four main stages: Risk Risk Risk Risk Identification Evaluation Handling. since risks and risk structures change continuously. The overall objective of the risk management process is to optimize the risk return relationship and reject unacceptable risks. Risk Management Process Objectives. All levels of management should manage risks. The risk management process must be established as a permanent and integral part of business process if it to be fully effective. treat and monitor these risks efficiently and effectively. and Controlling Controlling the Risk Management Process means monitoring whether the management process is actively and effectively lived throughout the organization. RISK MANAGEMENT PROCESS The objective of risk management process is to identify and evaluate the key risks. This can be interpreted as an advance payment made to take a larger position. This is determined by the exposure limit assigned to the investor. For example. Also the strict margining system followed in the futures market worldwide. Depending on the position taken an initial margin is charged on the investor. Calculating the net loss associated with a position does the calculation of MTM margin. The focus is on calculating the net loss on all contracts entered by the client. if the exposure limit is 33 times the base capital given by the investor. This is paid up each evening after trading ends.33 is required. the net profit or loss on a position is paid out to or in by the investor on the very same day in the form of daily mark-to-market margins (MTM). reduces the default risk associated with the futures. The MTM is made compulsory to remove any default on large losses if the position is accumulated for several days. then it means that an initial margin of 3. ADVANTAGES AND RISKS OF TRADING IN FUTURES OVER CASH 87 . the clearing corporation of the exchange by granting credit guarantee nullifies the counter party risk. The general margining system that is followed in the futures market is as follows.RISK MANAGEMENT WITH FUTURES CONTRACT As the futures are exchange-traded. More than the initial margin collected. This can enhance the return on capital deployed. about 33% returns. so his returns are leveraged. Increase risk: The buyer of a call wants the upside risk of an asset. which is not possible in the cash segment because of rolling settlement. The biggest advantage of futures is that short selling is allowed without having stock and position can be carried for a long time. futures position in the stock by paying about Rs30 toward initial and mark-to-market margin the same profit of Rs10 can be made on the investment of Rs30. in exchange for the premium." Reduce risk: The seller of a covered call exchanges his upside risk (gains above the strike price) for the certainty of cash in hand (the premium). Upside risk is the possibility of gain. i.just like buying fire insurance for your house. Certainty is exchanged with other players who assume the risk in hope of big gains. Alternatively. One half the reasons to use options (like other derivatives) is to reduce risk. 88 .e. One way is to buy the stock in the cash segment by paying Rs100. if any. For example. Downside risk is the possibility of loss. In this way the profit will be Rs10 on investment of Rs100. It is wrong to state that "options are risky. RISK MANAGEMENT WITH OPTIONS Risk is concerned with the unknown. giving about 10% returns. Further futures positions are leveraged positions. you can even lose your full capital in case the price moves against your position. unlike in the cash segment where it delivery has to be taken because of rolling settlement. The seller of a put accepts the downside risk of locking in his purchase price of an asset. meaning a position can be taken for Rs100 by paying Rs25 margin and daily mark-to-market loss. but will only pay a small percentage of its current value. the expectation for a Rs100 stock is to go up by Rs10. Conversely futures can be bought and position can be carried for a long time without taking delivery. The buyer of a covered put limits his downside risk for a price . Please note that taking leveraged position is very risky. Buyers start out-of-pocket. because for the same amount of money. A trader might buy the option instead of shares.To understand risk. Both tactics are generally considered inappropriate for 89 . These graphs either flat line or go down on either side of the spot price. only the right to do so until the expiry date. But going forward. If the stock price decreases. he will thus realize a larger gain than if he had purchased shares. He would have no obligation to buy the stock. he will profit. look at the four standard graphs of options (put-call-buysell). This is an example of the principle of leverage. they have no upside risk. In all cases. If the stock price increases over the exercise price by more than the premium paid. the premium was a certainty. Long Call A trader who believes that a stock's price will increase might buy the right to purchase the stock (a call option) rather than just buy the stock. he will let the call contract expire worthless. Buyers/sellers of calls have unlimited upside/downside risk as the asset price increases. he can obtain a larger number of options than shares. Going forward. and only lose the amount of the premium. The extent of risk varies. the option buyer has no downside risk. but close enough. If the stock rises. Sellers start with a gain. The graph either flat lines or goes up on either side of the spot price. The value of the options in the interim between purchase and expiration will not be exactly like these graphs. Short Call (Naked short call) A trader who believes that a stock's price will decrease can short sell the stock or instead sell a call. Payoffs and profits from a long call. Buyers/sellers of puts have upside/downside risk limited to the spot price of the asset (less the premium). Short Put (Naked put) A trader who believes that a stock's price will increase can sell the right to sell the stock at a fixed price. The trader selling a call has an obligation to sell the stock to the call buyer at the buyer's option. However. the short will lose money.small investors. Unless a trader already owns the shares which he may be required to provide. Payoffs and profits from a short call. This trade is generally considered inappropriate for a small investor. If the stock price decreases below the exercise price by more than the premium paid. If the stock price increases over the exercise price by more than the amount of the premium. he will just let the put contract expire worthless and only lose his premium paid. If the stock price increases. If the stock price increases. He will be under no obligation to sell the stock. the short put position will make a profit in the amount of the 90 . The trader now has the obligation to purchase the stock at a fixed price. the short call position will make a profit in the amount of the premium. the potential loss is unlimited. he will profit. The trader has sold insurance to the buyer of the put requiring the trader to insure the stockholder below the fixed price. such a trader who sells a call option for those shares he already owns has sold a covered call. If the stock price decreases. but has the right to do so until the expiry date. Long Put A trader who believes that a stock's price will decrease can buy the right to sell the stock at a fixed price. Payoffs and profits from a long put. Writing out-of-the-money covered calls is a good example of such a strategy. It is necessary to assess how high the stock price can go and the timeframe in which the rally will occur in order to select the optimum trading strategy. The bull call spread and the bull put spread are common examples of moderately bullish strategies. Mildly bullish trading strategies are options strategies that make money as long as the underlying stock prices do not go down on options expiration date. INTRODUCTION TO OPTIONS STRATEGIES Bullish Strategies Bullish options strategies are employed when the options trader expects the underlying stock price to move upwards. they usually cost less to employ. Payoffs and profits from a short put. the short position will lose money. In most cases.premium. Moderately bullish options traders usually set a target price for the Bull Run and utilize bull spreads to reduce risk. While maximum profit is capped for these strategies. Bearish Strategies Bearish options strategies are employed when the options trader expects the underlying stock price to move downwards. The most bullish of options trading strategies is the simple call buying strategy used by most novice options traders. It is necessary to assess how low 91 . These strategies usually provide a small downside protection as well. If the stock price decreases below the exercise price by more than the premium. stocks seldom go up by leaps and bounds. long strangle. short strangle. stock price seldom make steep downward moves. ratio spreads. Bullish on Volatility Neutral trading strategies those are bullish on volatility profit when the underlying stock price experience big moves upwards or downwards. These strategies usually provide a small upside protection as well. Mildly bearish trading strategies are options strategies that make money as long as the underlying stock prices do not go up on options expiration date. long condor and long butterfly. They include the long straddle. Also known as non-directional strategies. they are so named because the potential to profit does not depend on whether the underlying stock price will go upwards or downwards. The most bearish of options trading strategies is the simple put buying strategy utilized by most novice options traders. Bearish on Volatility Neutral trading strategies those are bearish on volatility profit when the underlying stock price experiences little or no movement. Neutral or Non-Directional Strategies Neutral strategies in options trading are employed when the options trader does not know whether the underlying stock price will rise or fall. the correct neutral strategy to employ depends on the expected volatility of the underlying stock price. and short condor and short butterfly. While maximum profit is capped for these strategies. they usually cost less to employ. Moderately bearish options traders usually set a target price for the expected decline and utilize bear spreads to reduce risk.the stock price can go and the timeframe in which the decline will happen in order to select the optimum trading strategy. Such strategies include the short straddle. Rather. In most cases. Combining any of the four basic kinds of option trades (possibly with different exercise prices) and the two basic kinds of stock trades (long and short) allows 92 . The bear call spread and the bear put spread are common examples of moderately bearish strategies. while more complicated strategies can combine several. 'if exercised. with a strike price (pre-set. The put buyer has no obligation to sell the stock. That seller grants the buyer the right. will fall prior to expiration can buy the right to sell the stock at a fixed price. The price rises above the strike price by more than the premium. to buy at a pre-set price. must-be-bought-at-price') of $31. he is said to be selling a 'naked' call. Long Put Traders who anticipate that the future market price of an asset. Long Calls The most basic. Short ('Naked') Calls When the option seller (the 'writer') doesn't own the underlying stock he's obligated to sell (if the option is exercised). Buying a call confers the right. 93 . Since he's on the selling side of the contract. the short call position will profit by the amount of the premium. The most basic are the call and the put. MSFT (Microsoft). but not the obligation.a variety of options strategies. and takes on an obligation to fulfill the other side of the trade. say a stock. and easiest to understand. but in order to execute any of them successfully an investor new to options will need to know some elementary concepts. OPTIONS TRADING STRATEGIES There are several basic Options Trading Strategies. but simply the right. have June 31 options that expire on the third Friday of June. currently trading at $28. If the market price of the underlying asset decreases. his position is said to be 'short'. But options are sold as well as bought. Puts grant the buyer the right to sell at a pre-set price. There are several basic variations. Simple strategies usually combine only a few trades. the short position incurs a loss. is the (long) call. he profits.) If the price falls below the strike price by more than the premium. the 'writer' loses money. 'Bear spreads'. the short put position makes a profit equal to the amount of the premium. Several basic trading strategies utilize the characteristics of these four basc positions. If the price increases. If the asset's market price rises. (Excluding any transaction costs. volume.can result in profit (or loss). use a long call with a low strike price in combination with a short call at a higher strike price and a short put with a higher strike price.speculating on coming out on the plus side of the equation . These strategies are either pure profit plays . etc in combination with different expiration dates and strike prices . but make up for it by offloading some risk. Hedging involves taking positions that tend to move in opposite directions. for example. involve a short call with a low strike price and a long call with a higher strike price. the trader lets the contract 'expire worthless'.or combinations of speculation and hedging. They profit less than pure speculation. or doesn't fall enough to cover the premium. Options trading software can demonstrate several concrete examples of how any of these . LONG CALL 94 . Short Put Traders who speculate that the future market price will increase. 'Bull spreads'. can sell the right to sell an asset at a pre-determined price. such as commissions.If. Current Strategies 1. in fact. the market price does fall below the strike price (prior to expiration of the option) by more than the premium paid. by contrast. An alternative method uses a short put with low strike price and a long put with a higher strike price.under different assumptions about future prices. 46 18% Share Price 300 Share price < 260 Stock price > 274 Possible Outcomes at expiry Option worth 40. L&T is quoting at Rs 254 and the January 260 (strike price) call costs Rs 14 (premium). Sell 1Jan contract (Expiry) 2. Before moving into more complex bullish and bearish strategies. 254 Rs. Situation: On 1 November.Market View Bullish Potential Profit Unlimited Potential Loss Limited Purchasing calls has remained the most popular strategy with investors since listed options were first introduced. Cost =14.260 x 1000 units = 40. 14. 300 Option Market Buy 1 Jan 260 call at Rs 14.000 Return 186% 1-Nov 20-Jan Analysis Rises by Return Rs.000 (premium) Net profit = intrinsic value of (Break even = 260+14) option i.000) Net profit= 26. an investor should thoroughly understand the fundamentals about buying and holding call options.000. The loss is Rs.000) Your gain is: Option sale = 40. You expect the share price to rise significantly and want to profit from the increase Action: Buy 1 L&T call at 14. Closing the position now will produce a net profit of 26.000 Premium paid = (14.000 If the L&T shares do go up you can close your position either by selling the option back to the market or exercising your right to buy the underlying shares at the exercise price. Net Gain 40 (300 .000 1. by whatever amount the share price exceeds 274 95 . Net outlay is Rs 14. Share Price Rs.000 Option expires worthless.e. Recover intrinsic value of premium. 96 . an investor should thoroughly understand the fundamentals about buying and holding put options. 2. Before moving into more complex bearish strategies.000 1.000 Net profit 12. Share Price Rs. 270 Rs.Although the profit is on expiry day. Situation: An investor thinks L&T. If the L&T shares do go down you can close your position either by selling the option back to the market or exercising your right to buy the underlying shares at the exercise price.240 x 1000 = 20. He therefore decides to buy Puts to gain exposure to its anticipated fall. the investor is obviously able to sell his option at any time prior to expiry. 240 Option Market Buy 1 L&T Oct 260 put at Rs 8 Total Outlay = 8. currently trading at Rs 270.000) Effective profits Option purchase (8. 2.000) Option sale 20.000. Net gain 20 (260 . LONG PUT Market View Bearish Potential Profit Unlimited Potential Loss Limited A long put can be an ideal tool for an investor who wishes to participate profitably from a downward price move in the underlying stock. and such sale will result in the receipt of time value in addition to any intrinsic value. Action: Buy 1 L&T October 260 Put at Rs 8 for a total consideration of Rs 8. Sell 1 Oct contract. is overvalued and may fall substantially. if the position is closed out.000 or 150% 1-Aug 20-Oct Analysis Fall of profit Rs effective 30 Share Price 240 Share price 240260 Possible Outcomes at expiry The 260 put will be trading at Rs 20 which gives a profit of Rs 12 (20-8). 3." the obligation conveyed by writing a call option contract. or "covers. and such sale will result in the receipt of time value in addition to any intrinsic value. the investor is obviously able to sell his option at any time prior to expiry. He receives an option premium equal to Rs 1. the strategy is commonly referred to as a "buy-write. 254 Option Market Sell 10 Jan 260 calls @ Rs 14 Income 1. If this stock is purchased simultaneously with writing the call con-tract." In either case. Action: The January 260 calls are trading at 14 and investor sells 10 contracts (one contract is 1.000 97 .40. the stock is generally held in the same brokerage account from which the investor writes the call.000 shares).40. Share Price 1-Nov Rs." If the shares are already held from a previous purchase. and fully collateralizes. Situation: It is 1 November and L&T share is trading at Rs 254. Although the profit is on expiry day. Short Call Naked short call / Covered short call Market View Bullish Potential Profit Limited Potential Loss Unlimited The covered call is a strategy in which an investor writes a call option contract while at the same time owning an equivalent number of shares of the underlying stock.000 and takes on the obligation to deliver 10000 share at 260 each if the holder exercise the option. if the position is closed out. it is commonly referred to an "overwrite.Stock price > 240 The 260 put will be trading at Rs 20 which gives a profit of Rs 12 (20-8). This strategy is the most basic and most widely used strategy combining the flexibility of listed options with stock ownership. An investor holds 10000 shares but does not expects their price to move very much over the next few months so decides to write call option against this shareholding. a put writer will be considered "covered" if he has on deposit with his brokerage firm a cash amount (or other approved collateral) sufficient to cover such a purchase.000. Action: The Investor decides to generate some additional income on his portfolio and writes 10 NIIT 550 puts at Rs 40. Possible Outcomes at expiry 98 . possibly. a return of 7. 254 Option expire worthless Analysis No change to Effective profits shareholding Profit =1. The option expires worthless Share Price > 260 Share price < 240 4.000 (option value of premium) Possible Outcomes at expiry The holder will exercise his position and if called. Many investors write puts because they are willing to be assigned and acquire shares of the underlying stock in exchange for the premium received from the put's sale.00.40.8% over 3 months. For this discussion. Short Put Naked Short Put / Covered Short Put Market View Bearish Potential Profit Limited Potential Loss Unlimited According to the terms of a put contract.00. Situation: An investor owns 10.000. rise slightly. Thus he received premium of 4.20-Jan Rs. the investor as a writer will sell shares originally purchased for Rs 254 at 274 (260+14). In early March he feels that the share price of NIIT will either remain constant or.000 shares and also has a cash holding of around 60. a put writer is obligated to purchase an equivalent number of underlying shares at the put's strike price if assigned an exercise notice on the written contract. Buy 1 L&T July 200 call option at Rs 16 and sell 1 July 220 call at Rs 8. Total outlay and maximum loss is 8. can be executed as a "unit" in one single transaction. effectively for 51. Bull Call Spread Market View Bullish Potential Profit Limited Potential Loss Limited Establishing a bull call spread involves the purchase of a call option on a particular underlying stock. not as separate buy and sell transactions. the share price of L&T is 204. this strategy requires a substantial investment.00. The put option will be exercised and the stock will have to be purchased. same expiration month. while simultaneously writing a call option on the same underlying stock with the same expiration month. as any spread.4. Initial income remains as profit. Maximum profit is 12 (220-200-8). at a higher strike price. This spread is sometimes more broadly categorized as a "vertical spread": a family of spreads involving options of the same stock.00. In relation to the Indian markets.Share Price =/ > 550 Share price < 550 The investor's expectation is correct and the put will expire without being exercised. The bull call spread. but different strike prices.000).e Rs 8 99 . Break even is Rs 208 (200+8). Possible Outcomes at expiry Share Price < 200 Both the 200 and 220 calls are worthless and the maximum loss is equal to the net cost of establishing the spread i.00. Situation: On 1 November. Both the buy and the sell sides of this spread are opening transactions.000. They can be created with either all calls or all puts. 5. The net outflow in this situation is: Future Margin – Option Premium. and be bullish or bearish.000 (55. and are always the same number of contracts. but with a lower strike price. They can be created with either all calls or all puts. Advantages Position established for less cost than a long call and breaks even more quickly. and be bullish or bearish. Expectation: This strategy is appropriate when anticipating a fall in the price of the underlying share. Maximum profit is therefore realized at 220. while simultaneously writing a put option on the same underlying stock with the same expiration month.e. it is possible to exercise the long position and acquire stock in order to satisfy the short position. can be executed as a "package" in one single transaction. 6. difference in intrinsic value of two calls less than net debit (20-8). as any spread. Both the buy and the sell sides of this spread are opening transactions. Note: the long call position always covers the risk on the short call position. This spread is sometimes more broadly categorized as a "vertical spread": a family of spreads involving options of the same stock. but different strike prices.The 200 call gains intrinsic value and profit is 220 equal to the intrinsic value of the 200 calls less the net debit of Rs 8. The bear put spread. Limited loss. not as separate buy and sell transactions. if the short option is exercised against you. Stock price > The position can be closed for a maximum profit 220 of Rs 12 above 220 i. Eg.Share price 200. and are always the same number of contracts. same expiration month. the point just before which the 220 calls may be exercised. Bear Put Spread Market View Bullish Potential Profit Limited Potential Loss Limited Establishing a bear put spread involves the purchase of a put option on a particular underlying stock. 100 . Possible Outcomes at expiry Both puts are worthless and the maximum loss is equal to the net cost of establishing the spread i.Situation: The share of Tata Tea is trading at 228. 101 . The potential loss is limited to the initial investment. The maximum potential profit of Rs 11 is realized just before the level at which the 220 put may be exercised by the holder The position can be closed for the difference in the intrinsic value of two puts. 240 put. a neutral bias). and underlying. 7.e Rs 9 The position can be closed out for the intrinsic value of the Rs. so the profit is 11 (240-220-9) Share Price > 240 Share price 240-220 Stock price 220 Stock price < 220 Advantages Position established for less cost than a long put and breaks even more quickly. Expectation: Purchasing a straddle is appropriate when anticipating significant volatility in the underlying but when uncertain about direction. Maximum profit is Rs 11 and maximum loss Rs 9.. Limited loss. the long straddle is an excellent strategy.e. The potential profit is unlimited as the stock moves up or down. You buy 1 Tata Tea Oct 240 put at Rs 16 and sell 1 Tata 220 put at Rs 7. Long Straddle Market View Mixed Potential Profit Unlimited Potential Loss Limited For aggressive investors who expect short-term volatility yet have no bias up or down (i. expiration. This position involves buying both a put and a call with the same strike price. This strategy involves selling a put and a call with the same strike price. once the direction of the underlying becomes clear the other 'leg' is closed which effectively reduces the break even. Upside breakeven = 290 (Exercise price 260 + net credit 30) 102 . expiration. loss potential is limited.Situation: Buy 1 L&T Apr 260 call at Rs 21 and Buy 1 L&T Apr 260 Put at Rs 9. maximum loss Rs 30. and underlying. Short Straddle Market View Mixed Potential Profit Unlimited Potential Loss Unlimited For aggressive investors who don't expect much short-term volatility. In this case. sell 1 L&T April 260 put at Rs 9. Expectation: Generally undertaken with a view that the underlying share price will trade between break even points.30 net debit) Profit is unlimited. however. in this example. the short straddle can be a risky. the underlying share. Normally. but profitable strategy. The potential loss is unlimited as the market moves up or. Action: Sell 1 L&T April 260 call at Rs21. 8. has to move 11% before the strategy breaks even. Maximum Loss limited to the premium paid. Advantages Profit potential open ended in either direction. the profit is limited to the initial credit received by selling options. Upside breakeven = 290 (Exercise price 260 + net debit 30) Downside breakeven = 230 (260 . the short straddle position exposes an investor in both direction it is important that the stock and cash should be in place to cover the call and put legs respectively. of course. at Rs 290 or alternatively take delivery of stock at Rs 230. Share Price 260 The risk is. if the investor felt that there was a possibility of a sharp downward movement the 240 puts could be purchased to protect downside. Possible Outcomes at expiry Maximum profit potential is realized as both calls and put are worthless. The potential loss is limited to the initial investment while the potential profit is unlimited as the market moves up or down. Nevertheless. to prevent such exposures a stop loss facility could be implemented. maximum loss unlimited. In the example above. Alternatively.Long Strangle Market View Mixed Potential Profit Unlimited Potential Loss Limited For aggressive investors who expect short-term volatility yet have no bias up or down (i. Secure known purchase and sale price.e. in this example. a neutral bias). Advantages Generation of earnings from premium received. Conversely a sharp upward movement could be protected by buying the 280 calls.Downside breakeven = 230 (260 . 9. This strategy typically involves buying out-of-the-money calls and puts with the same expiration and underlying.. 103 . the long strangle is another excellent strategy. Normally a stop loss would only be implemented on one side leaving the other exposed.30 net credit) Maximum profit is 30. the short straddle is particularly appropriate when taking the view that the underlying will trade in the range between the breakeven points and when prepared to deliver stock. that if the underlying does prove to be volatile. the short strangle can be a risky. Share Price > 260 Share price 240-260 Stock price < 240 Advantages Profit potential open ended in either direction. Upside breakeven = 282 (Exercise price 260 + net debit 22) Downside breakeven = 218 (240 . but profitable strategy.Situation: The share of L&T is currently standing at 247. The profit is limited to the credit received by selling options. Loss limited to total premium paid. Profit potential unlimited. Situation: L&T shares are currently standing at Rs 247 and you sell 1 October 260 call at Rs 12 and sell 1 October 240 put at Rs 10. Maximum loss of 22 premium paid. This strategy typically involves selling out-of the-money puts and calls with the same expiration and underlying. 10. Buy 1 L&T Oct 260 call at Rs 12. The potential loss is unlimited as the market moves up or down. buy 1 Oct 240 put at Rs10. 104 . Upside breakeven = 282 (Exercise price 260 + net debit 22) Downside breakeven = 218 (240 .22 net debit) Possible Outcomes at expiry Profit from the call is equal to its intrinsic value less the premium paid Both call and put are out of money.Short Strangle Market View Mixed Potential Profit Limited Potential Loss Unlimited For aggressive investors who don't expect much short-term volatility.22 net debit) Your maximum profit is Rs 22 and loss is unlimited. This is called "buying a butterfly. 11." The opposite would be to sell the butterfly. When investors expect stable prices. sell two Jan 240 calls at Rs. 40. which must be all calls or all puts. and buy one Jan 260 call at 25. the investment required for such a strategy is very high and should only be attempted by people with huge funds and an appetite for large losses. You buy one Jan 220 call at Rs. Disadvantages Loss is unlimited In the Indian Markets. limited reward strategies. Situation: L&T shares are currently trading at 240. (260+22).Butterfly Market View Mixed Potential Profit Limited Potential Loss Unlimited Ideal for investors who prefer limited risk. Both the call and put would expire worthless. 30. The seller takes delivery of the stock at 218. Advantages Generation of earnings from premium received. Secure know sale and purchase prices. must also have the same expiration and underlying. The 22 credit is retained The put is exercised. they can buy the butterfly by selling two options at the middle strike and buying one option at the higher and lower strikes.Share Price > 260 Share price 240-260 Stock price < 240 Possible Outcomes at expiry The call is exercisec by the holder and the seller delivers stock at 282. Upside breakeven = 255 Downside breakeven = 225 105 . The options. purchasing a protective put and writing a covered call on that stock.e. Expectation: An investor will employ this strategy after accruing unrealized profits from the underlying shares.Collar A collar can be established by holding shares of an underlying stock. At the same time. Generally.240 in may and would like a way to protect your downside with little or no cost. The loss is Rs.The maximum profit is 240-220-5 = Rs. and wants to protect these gains with the purchase of a protective put. covered in this case by the underlying stock Situation: Suppose you purchased 100 shares of L&T ltd.5 Maximum profit: When share is at 260. the put and the call are both out of-the-money when this combination is established. one collar equals one long put and one written call along with owning 100 shares of the underlying stock. The option portions of this strategy are referred to as a combination. Both the buy and the sell sides of this combination are opening transactions.15 Disadvantages Stock price < The loss is Rs. and are always the same number of contracts. 220 Requires big margin to execute this strategy. You would create a collar by buying one May 220 put at 10 and selling one May 260 call at 15. Net credit is Rs. The net profit would be 240-220-5 = Rs. net debit Advant ages Potential loss is limited Share Price @220 106 . the investor is willing to sell his stock at a price higher than the current market price so an out-of-the-money call contract is written. Possible Outcomes at expiry The profit from the put offsets the loss from the stock. 12. and have the same expiration month.5 i. at Rs.5 i.e. Maximum loss: When the share is at or below 220. net debit Can be difficult to execute such strategies quickly.15 Possible Outcomes at expiry Share Price > 260 Stock price 240 The maximum profit would be at this level. In other words. 5 (initial debit) Share Price <220 107 . Maximum profit: When the stock price is between 240 & 260 Maximum loss: When the stock price is above 280 or below 220 Possible Outcomes at expiry The loss would be of Rs. Expectation: The long condor can be a great strategy to use when your feeling on a stock is generally neutral because it's been trading in a narrow range. Situation: Imagine that L&T ltd. 240 and has been relatively flat for some time. there is an upper break even and a lower break even. A profit is made if the stock remains above the lower break even point or below the upper break even point. an investor will potentially be able to double the credit obtained over a single spread position. limited reward strategy that profits in stagnant markets.Stock price @240 Stock price @260 The profit would be equal to the net inflow i. is trading at Rs. the condor is a limited risk. 13. Advantages The collar strategy is best used for investors looking for a conservative strategy that can offer a reasonable rate of return with managed risk and potential tax advantages. Like the butterfly.e. If you think the situation is unlikely to change. Since there are two spreads involved in the strategy (four options). an investor will combine a Bear-Call Credit Spread and a Bull-Put Credit Spread on the same underlying security. Action Sell 1 240 call @ 20 Sell 1 260 call @ 15 Buy 1 220 call @ 30& buy 1 280 @ 10 call as a hedge in case the market moved against you. By doing this. In the Iron Condor. Disadvantages The primary concern in employing a collar is protection of profits accrued from underlying shares rather than increasing returns on the upside. Rs. Condor Spread The Condor Spread strategy is a neutral strategy similar to the Butterfly.5 The profit on the stock is exactly offset by the loss on the call option that was sold. might be cost prohibitive to trade iron condors that are low net credits. Jun 280 call @ Rs.288 and the May 280 Call is available @ Rs. Calendar Spread Calendar spreads take advantage of the different rates at which time value erodes.15 The loss would be Rs.30. Losses are limited if the stock goes against you one way or the other. Situation: On 1 May the shares of L&T ltd. Buy 1 L&T Ltd. 6 Possible Outcomes at expiry 108 . Disadvantages: Commission costs to open the position are higher since there are four trades.24.24 and the Jun Call is available @ Rs. The more rapid erosion in the near month series works to the advantage of the writer and the strategy is therefore particularly appropriate when the near month series is overpriced.Stock price 240-260 Stock price>260 The profit would be equal to 20-5= Rs. 14. Net debit is Rs. Since the time value element of an option‟s premium erodes faster in the near month series than the far month series. 30 Action: Sell 1 L&T Ltd. are trading at Rs. If you are facing a large gain or drop in the underlying you could only close one leg of the four legs in the position. 5 (initial debit) Advantages: The double credit achieved helps lower the potential risk. Expectation: A calendar spread involves the sale of a near dated call (put) and the purchase of a longer dated call (put) at the same exercise price. a spread opens up between the two. May 280 call @ Rs. Calls are used when market view is moderately bullish and puts are used when market view is moderately bearish. This is generally known as a reverse calendar spread. Advantages Limited loss. Alternatively. initial debit. Both calls will have intrinsic value. The May 280 call expires worthless but the Jun 280 call will have 1 month time value remaining. Disadvantages Limited profit. Position may be disrupted by early exercise. Maximum profit potential is realized. 109 .Share Price <280 Stock price @280 Stock price >280 The May 280 call expires worthless leaving the position long 1 Jun 280 call at a reduced cost of 6. i. but the true value of the Jun 280 call is likely to be lower. A calendar spread using puts could be established in the same way to suit a neutral to moderately bearish strategy. if the May calls were purchased and the Jun calls sold then the risks and rewards would be reversed.e. Chapter 6 110 . Achievements in Future and Options Segment ACHIEVEMENTS IN FUTURE AND OPTIONS SEGMENT 111 . 112 . COMPARATIVE ANALYSIS OF F&O SEGMENT AND CASH SEGMENT 113 . TOP 5 TRADED SYMBOL IN THE FUTURES SEGMENT FOR THE MONTH OF MAY 2008 114 . TOP 5 TRADED SYMBOL IN THE OPTIONS SEGMENT FOR THE MONTH OF MAY 2008 115 . 116 . Chapter 7 Conclusion CONCLUSION 117 . Most of the people keep them away from bad times that lead to low liquidity in the markets.e. the objective of the study is accomplished and I recommend that one should use the Derivative Instruments. as it is very much applicable in the Indian Stock Market. Forwards and Option strategies. By studying and applying various Derivative Instruments like Futures..The Indian stock market witnesses both the good as well as the bad time. to earn profits in any market direction. Therefore. Derivative Instruments are a very good tool that will help us to minimize our risk and maximize our returns so that one can have conviction in his portfolio in the hugely volatile stock market Finally. Derivatives Instruments are the tools to be with. 118 . Chapter 8 Suggestions and Recommendations SUGGESTIONS AND RECOMMENDATIONS 119 . 120 . Chapter 9 The Reference Materials THE REFERENCE MATERIALS GLOSSARY 121 . created as the result of a special event such as stock split or a stock dividend. AT PRICE: When you enter a prospective trade into a trade parameter. BEAR. It is the price at which the program expects you can actually execute the trade. very close to) the current price of the underlying. Assignments are made on a random basis by the Stock Exchange Clearing. e.g. taking into account "slippage" and the current Bid/Ask. Also called FAR MONTH. BACK MONTH: A back month contract is any exchange-traded derivatives contract for a future period beyond the front month contract. BINOMIAL PRICING MODEL: Methodology employed in some option pricing models which assumes that the price of the underlying can either rise or fall by 122 .ADJUSTED STRIKE PRICE: Strike price of an option. Pr) is automatically computed and displayed. if available. AT-THE-MONEY (ATM): An at-the-money option is one whose strike price is equal to (or. in practice. The adjusted strike price can differ from the regular intervals prescribed for strike prices. ASSIGNMENT: Notification by Stock Exchange Clearing to a clearing member and the writer of an option that an owner of the option has exercised the option and that the terms of settlement must be met. Such views are often described as bearish. AMERICAN STYLE OPTION: A call or put option contract that can be exercised at any time before the expiration of the contract. AVERAGING DOWN: Buying more of a stock or an option at a lower price than the original purchase so as to reduce the average cost. the "At Price" (At. The writer of a call option is obligated to sell the underlying asset at the strike price of the call option. ARBITRAGE: A trading technique that involves the simultaneous purchase and sale of identical assets or of equivalent assets in two different markets with the intent of profiting by the price discrepancy. the writer of a put option is obligated to buy the underlying at the strike price of the put option. believes that the price will fall. BEARISH: A bear is someone with a pessimistic view on a market or particular asset. (A market order may be canceled if the order is placed after market hours and is then canceled before the market opens the following day). CASH SETTLEMENT: The process by which the terms of an option contract are fulfilled through the payment or receipt in Rupees of the amount by which the option is in-the-money as opposed to delivering or receiving the underlying stock. BREAK-EVEN POINT: A stock price at option expiration at which an option strategy results in neither a profit or a loss.a certain amount at each pre-determined interval until expiration For more information. In most cases. BOX SPREAD: A four-sided option spread that involves a long call and short put at one strike price as well as a short call and long put at another strike price. A request for cancel can be made at anytime before execution. BULL. effectively canceling out the position. CARRYING COST: The interest expense on money borrowed to finance a stock or option position. dividends. Collateral (or margin) is required on investments with open-ended loss potential such as writing naked options. CLOSING TRANSACTION: To sell a previously purchased position or to buy back a previously purchased position. see COX-ROSS-RUBINSTEIN model. strike price. 123 . and volatiity. a limit order can be canceled at any time as long as it has not been executed. BULLISH: A bull is someone with an optimistic view on a market or particular CANCELED ORDER: A buy or sell order that is canceled before it has been executed. In other words. COLLATERAL: This is the legally required amount of cash or securities deposited with a brokerage to insure that an investor can meet all potential obligations. this is a synthetic long stock position at one strike price and a synthetic short stock position at another strike price. interest rates. time of expiration. BLACK-SCHOLES PRICING MODEL: A formula used to compute the theoretical value of European-style call and put options from the following inputs: stock price. It was invented by Fischer Black and Myron Scholes. DISCOUNT: An adjective used to describe an option that is trading below its intrinsic value. DAY TRADE: A position that is opened and closed on the same day. It is automatically cancelled on the close of the session if it is not executed. COMMODITY: A raw material or primary product used in manufacturing or industrial processing or consumed in its natural form. DAY ORDER: An order to purchase or sell a security. or any other security. In futures options the contract size is one futures contract. It is either the cost of funds to finance the purchase (real cost). EARLY EXERCISE: A feature of American-style options that allows the owner to exercise an option at any time prior to its expiration date. 124 . DYNAMIC HEDGING: A short-term trading strategy generally using futures contracts to replicate some of the characteristics of option contracts. DEBIT: The amount you pay for placing a trade. In stock options the standard contract size is 100 shares of stock. that is good for just the trading session on which it is given. options. DIRECTIONAL TRADE: A trade designed to take advantage of an expected movement in price. COST OF CARRY: This is the interest cost of holding an asset for a period of time. or the loss of income because funds are diverted from one investment to another (opportunity cost). In the case of currency options it varies. A net outflow of cash from your account as the result of a trade. usually at a specified price. CONTRACT SIZE: The number of units of an underlying specified in a contract. In index options the contract size is an amount of cash equal to parity times the multiplier. The strategy takes into account the replicated option's delta and often requires adjusting.COMMISSION: This is the charge paid to a broker for transacting the purchase or the sale of stock. within certain limits.EQUITY OPTION: An option on shares of an individual common stock. EXCHANGE TRADED: The generic term used to describe futures. and sometimes unique. FILL: When an order has been completely executed. the terms of the options. FLEXIBLE EXCHANGE OPTIONS (FLEX): Customized equity and equity index options. On the ex-dividend date. EXOTIC OPTIONS: Various over-the-counter options whose terms are very specific. The demand of the owner of a call option that the number of units of the underlying specified in the contract be delivered to him at the specified price. the previous day's closing price is reduced by the amount of the dividend because purchasers of the stock on the ex-dividend date will not receive the dividend payment. expiration date. the FOK order cannot be used as part of a GTC order. EXERCISE: The act by which the holder of an option takes up his rights to buy or sell the underlying at the strike price. except it is "killed" immediately if it cannot be completely executed as soon as it is announced. Similar to an all-or-none (AON) order. Also known as a stock option. options and other derivative instruments that are traded on an organized exchange. it is described as filled. Unlike an AON order. Examples include Bermuda options (somewhere between American and European type. FILL OR KILL (FOK) ORDER: This means do it now if the option (or stock) is available in the crowd or from the specialist. such as exrcise price. this option can be exercised only on certain dates) and look-back options (whose strike price is set at the option's expiration date and varies depending on the level reached by the underlying security). The user can specify. EX-DIVIDEND DATE: The day before which an investor must have purchased the stock in order to receive the dividend. The demand by the owner of a put option contract that the number of units of the underlying asset specified be bought from him at the specified price. otherwise kill the order altogether. EUROPEAN STYLE OPTION: An option that can only be exercised on the expiration date of the contract. exercise type. and settlement 125 . one of the advantages of being a floor trader is that the haircut is less than margin requirements for public customers. This is usually due to a low volume of transactions and/or a small number of participants. 126 . FRONT MONTH: The first month of those listed by an exchange . GUTS: The purchase (or sale) of both an in-the-money call and in-the-money put. the term guts refer to the in-the-money strangle. Through these adjustments. HAIRCUT: Similar to margin required of public customers this term refers to the equity required of floor traders on equity option exchanges. HEDGE: A position established with the specific intent of protecting an existing position. A box spread can be viewed as the combination of an in-the-money strangle and an out-of-the-money strangle. Generally. FOLLOW-UP ACTION: Term used to describe the trades an investor makes subsequent to implementing a strategy. Also known as the NEAR MONTH.this is usually the most actively traded contract. which makes FLEX an institutional product. FRONTRUNNING: An illegal securities transaction based on prior nonpublic knowledge of a forthcoming transaction that will affect the price of a stock. IMMEDIATE-OR-CANCEL (IOC) ORDER: An option order that gives the trading floor an opportunity to partially or totally execute an order with any remaining balance immediately cancelled. Can only be traded in a minimum size. the investor transforms one strategy into a different one in response to price changes in the underlying. Example: an owner of common stock buys a put option to hedge against a possible stock price decline. but liquidity will move from this to the second month contract as the front month nears expiration.calculation. ILLIQUID: An illiquid market is one that cannot be easily traded without even relatively small orders tending to have a disproportionate impact on prices. See box spread and strangle. To differentiate between these two strangles. To buy on margin refers to borrowing part of the purchase price of a security from a brokerage firm. Instead of utilizing a "spread order" to insure that both the written and the purchased options are filled simultaneously. also known as long-dated options. INDEX OPTION: An option that has an index as the underlying. equity LEAPS have two series at any time. MARKET BASKET: A group of common stocks whose price movement is expected to closely correlate with an index. always with January expirations. LEVERAGE: A means of increasing return or worth without increasing investment. These are usually cash-settled. an investor gambles a better deal can be obtained on the price of the spread by implementing it as two separate orders. 127 . Option contracts are leveraged as they provide the prospect of a high return with little investment. The BSE SENSEX / S&P CNX NSE NIFTY. the amount by which it is in-the-money).INDEX: The compilation of stocks and their prices into a single number.. Some indexes also have LEAPs. E. Using borrowed funds to increase one's investment return. IN-THE-MONEY (ITM): Term used when the strike price of an option is less than the price of the underlying for a call option. for example buying stocks on margin. MARGIN: The minimum equity required to support an investment position. INTRINSIC VALUE: Amount of any favorable difference between the strike price of an option and the current price of the underlying (i. or greater than the price of the underlying for a put option. the option has an intrinsic value greater than zero. LEAPS: Long-term Equity Anticipation Securities. Only about 10% of equities have LEAPs.e. LEGGING: Term used to describe a risky method of implementing or closing out a spread strategy one side ("leg") at a time. MARK TO MARKET: The revaluation of a position at its current market price. In other words. Currently.g. The intrinsic value of an out-of-the-money option is zero. Calls and puts with expiration as long as 2-5 years. OCC issues and guarantees all option contracts. 128 . or the strike price of a put is less than the price of the underlying. OPTIONS CLEARING CORPORATION (OCC): A corporation owned by the exchanges that trade listed stock options.MARKET MAKER: A trader or institution that plays a leading role in a market by being prepared to quote a two way price (Bid and Ask) on request . ONE-CANCELS-THE-OTHER (OCO) ORDER: Type of order which treats two or more option orders as a package. OCC is an intermediary between option buyers and sellers. whereby the execution of any one of the orders causes all the orders to be reduced by the same amount. OPTION CHAIN: A list of the options available for a given underlying. An out-of-the-money option has no intrinsic value. Can be placed as a day or GTC order. The loss potential of naked strategies can be virtually unlimited. NEUTRAL: An adjective describing the belief that a stock or the market in general will neither rise nor decline significantly.during normal market hours.the volatility estimate. OUT-OF-THE-MONEY (OTM): An out-of-the-money option is one whose strike price is unfavorable in comparison to the current price of the underlying. NET MARGIN REQUIREMENT: The equity required in a margin account to support an option position after deducting the premium received from sold options. only time value. because theoretical value depends on one subjective input . OVERVALUED: An adjective used to describe an option that is trading at a price higher that its theoretical value. NAKED: An investment in which options sold short are not matched with a long position in either the underlying or another option of the same type that expires at the same time or later than the options sold.or constantly in the case of some screen based markets . It must be remembered that this is a subjective evaluation. This means when the strike price of a call is greater than the price of the underlying. and profit from moment to moment price movements. ROLLOVER: Moving a position from one expiration date to another further into the future. See mark-to-market. such as biotechnology or small capitalization stocks. SECTOR INDICES: Indices that measure the performance of a narrow market segment. REALIZED GAINS AND LOSSES: The profit or losses received or paid when a closing transaction is made and matched together with an opening transaction. This is accomplished by a simultaneous sale of one and purchase of the other. As the front month approaches expiration. margin requirements. When an option is trading at its intrinsic value. It is computed by dividing the 4-day average of total put VOLUME by the 4-day average of total call VOLUME.PARITY: An adjective used to describe the difference between the stock price and the strike price of an in-the-money option. sell on the ask price. and for other purposes. it is said to be trading at parity. This is a fixed price per unit and is specified in the option contract. This price is established by The Options Clearing Corporation and is used to determine changes in account equity. STRIKE PRICE: The price at which the holder of an option has the right to buy or sell the underlying. PUT/CALL RATIO: This ratio is used by many as a leading indicator. 129 . The SEC is the United States federal government agency that regulates the securities industry. Also known as striking price or exercise price. traders wishing to maintain their positions will often move them to the next contract month. SEC: The Securities and Exchange Commission. Risk is limited by the very short time duration (usually 10 seconds to 3 minutes) of maintaining any one position. SCALPER: A trader on the floor of an exchange who hopes to buy on the bid price. SETTLEMENT PRICE: The official price at the end of a trading session. RATIO CALENDAR COMBINATION: A term used loosely to describe any variation on an investment strategy that involves both puts and calls in unequal quantities and at least two different strike prices and two different expirations. and short selling volume. and margin interest. trading volume. Once the position is closed.SYNTHETIC: A strategy that uses options to mimic the underlying asset. the relation of advancing issues to declining issues. and can also be dependent on the current price of the security. including brokerage commissions. UNCOVERED: A short option position that is not fully collateralized if notification of assignment is received. The long synthetic combines a long call and a short put to mimic a long position in the underlying. in a given period of time. Both long and short synthetics are strategies in the Trade Finder. fees for exercise and/or assignment. This varies by security. Volatility is not equivalent to BETA. TIME DECAY: Term used to describe how the theoretical value of an option "erodes" or reduces with the passage of time. Assets with greater volatility exhibit wider price swings and their options are higher in price than less volatile assets. In both cases. or is expected to fluctuate. it becomes a realized gain or loss. TRADING PIT: A specific location on the trading floor of an exchange designated for the trading of a specific option class or stock. The short synthetic combines a short call and a long put to mimic a short position in the underlying. VOLATILITY: Volatility is a measure of the amount by which an asset has fluctuated. VOLATILITY TRADE: A trade designed to take advantage of an expected change in volatility. both the call and put have the same strike price. TECHNICAL ANALYSIS: Method of predicting future price movements based on historical market data such as (among others) the prices themselves. and are on the same underlying. 130 . TRANSACTION COSTS: All charges associated with executing a trade and maintaining a position. Time decay is quantified by Theta. UNREALIZED GAIN OR LOSS: The difference between the original cost of an open position and its current market price. the same expiration. open interest. See also NAKED. TICK: The smallest unit price change allowed in trading a specific security. Published by Printice Hall of India. The Internal Revenue Service prohibits wash sales since no change in ownership takes place. Published by Oxford University Press. the value of which decreases over time if there is no price fluctuation in the underlying asset. Dubofsky and Thomas W. BIBLIOGRAPHY Books: Derivatives: Valuation and Risk Management By David A. WASTING ASSET: An investment with a finite life. 131 . Miller. Financial Engineering: A Complete Guide to Financial Innovation By John F.. JR. Marshall and Vipul K.WASH SALE: When an investor repurchases an asset within 30 days of the sale date and reports the original sale as a tax loss. Bansal. in 132 .Newspapers: The Times of India The Economic Times Internet:. 133 . TOP 5 TRADED SYMBOL IN THE OPTIONS SEGMENT FOR THE MONTH OF MAY 2008 134 . 135 . Chapter 7 Conclusio n 136 . Most of the people keep them away from bad times that lead to low liquidity in the markets. the objective of the study is accomplished and I recommend that one should use the Derivative Instruments. By studying and applying various Derivative Instruments like Futures. Derivatives Instruments are the tools to be with. Forwards and Option strategies.. Derivative Instruments are a very good tool that will help us to minimize our risk and maximize our returns so that one can have conviction in his portfolio in the hugely volatile stock market Finally. 137 . as it is very much applicable in the Indian Stock Market.CONCLUSION The Indian stock market witnesses both the good as well as the bad time.e. to earn profits in any market direction. Therefore. Chapter 8 138 . Suggestions and Recommend ations 139 . SUGGESTIONS AND RECOMMENDATIONS 140 . Chapter 9 141 . The Reference Materials THE REFERENCE MATERIALS 142 . The adjusted strike price can differ from the regular intervals prescribed for strike prices.GLOSSARY ADJUSTED STRIKE PRICE: Strike price of an option. 143 . It is the price at which the program expects you can actually execute the trade. Pr) is automatically computed and displayed. very close to) the current price of the underlying. Also called FAR MONTH. taking into account "slippage" and the current Bid/Ask. Assignments are made on a random basis by the Stock Exchange Clearing. if available. AVERAGING DOWN: Buying more of a stock or an option at a lower price than the original purchase so as to reduce the average cost. AT PRICE: When you enter a prospective trade into a trade parameter. AT-THE-MONEY (ATM): An at-the-money option is one whose strike price is equal to (or. BACK MONTH: A back month contract is any exchange-traded derivatives contract for a future period beyond the front month contract. ASSIGNMENT: Notification by Stock Exchange Clearing to a clearing member and the writer of an option that an owner of the option has exercised the option and that the terms of settlement must be met. the "At Price" (At. AMERICAN STYLE OPTION: A call or put option contract that can be exercised at any time before the expiration of the contract. created as the result of a special event such as stock split or a stock dividend. The writer of a call option is obligated to sell the underlying asset at the strike price of the call option. the writer of a put option is obligated to buy the underlying at the strike price of the put option. ARBITRAGE: A trading technique that involves the simultaneous purchase and sale of identical assets or of equivalent assets in two different markets with the intent of profiting by the price discrepancy. in practice. In other words. 144 . dividends.BEAR. see COX-ROSS-RUBINSTEIN model. BINOMIAL PRICING MODEL: Methodology employed in some option pricing models which assumes that the price of the underlying can either rise or fall by a certain amount at each pre-determined interval until expiration For more information. and volatiity. CARRYING COST: The interest expense on money borrowed to finance a stock or option position. BLACK-SCHOLES PRICING MODEL: A formula used to compute the theoretical value of European-style call and put options from the following inputs: stock price. A request for cancel can be made at anytime before execution. (A market order may be canceled if the order is placed after market hours and is then canceled before the market opens the following day). believes that the price will fall. strike price. BEARISH: A bear is someone with a pessimistic view on a market or particular asset. interest rates. BOX SPREAD: A four-sided option spread that involves a long call and short put at one strike price as well as a short call and long put at another strike price. BULL. BULLISH: A bull is someone with an optimistic view on a market or particular CANCELED ORDER: A buy or sell order that is canceled before it has been executed.g. Such views are often described as bearish. a limit order can be canceled at any time as long as it has not been executed. e. this is a synthetic long stock position at one strike price and a synthetic short stock position at another strike price. time of expiration. BREAK-EVEN POINT: A stock price at option expiration at which an option strategy results in neither a profit or a loss. In most cases. It was invented by Fischer Black and Myron Scholes. COLLATERAL: This is the legally required amount of cash or securities deposited with a brokerage to insure that an investor can meet all potential obligations. 145 . CLOSING TRANSACTION: To sell a previously purchased position or to buy back a previously purchased position. In index options the contract size is an amount of cash equal to parity times the multiplier. or any other security. that is good for just the trading session on which it is given. In futures options the contract size is one futures contract. In the case of currency options it varies. COMMODITY: A raw material or primary product used in manufacturing or industrial processing or consumed in its natural form. It is automatically cancelled on the close of the session if it is not executed. CONTRACT SIZE: The number of units of an underlying specified in a contract. It is either the cost of funds to finance the purchase (real cost). Collateral (or margin) is required on investments with openended loss potential such as writing naked options.CASH SETTLEMENT: The process by which the terms of an option contract are fulfilled through the payment or receipt in Rupees of the amount by which the option is in-the-money as opposed to delivering or receiving the underlying stock. COST OF CARRY: This is the interest cost of holding an asset for a period of time. options. effectively canceling out the position. usually at a specified price. In stock options the standard contract size is 100 shares of stock. or the loss of income because funds are diverted from one investment to another (opportunity cost). DAY TRADE: A position that is opened and closed on the same day. COMMISSION: This is the charge paid to a broker for transacting the purchase or the sale of stock. DAY ORDER: An order to purchase or sell a security. EX-DIVIDEND DATE: The day before which an investor must have purchased the stock in order to receive the dividend. Also known as a stock option. the previous day's closing price is reduced by the amount of the dividend because purchasers of the stock on the ex-dividend date will not receive the dividend payment. options and other derivative instruments that are traded on an organized exchange. DISCOUNT: An adjective used to describe an option that is trading below its intrinsic value. EXERCISE: The act by which the holder of an option takes up his rights to buy or sell the underlying at the strike price. EQUITY OPTION: An option on shares of an individual common stock. DYNAMIC HEDGING: A short-term trading strategy generally using futures contracts to replicate some of the characteristics of option contracts. The demand of the owner of a call option that the number of units of the underlying specified in the contract be delivered to him at the specified price. On the ex-dividend date.DEBIT: The amount you pay for placing a trade. EUROPEAN STYLE OPTION: An option that can only be exercised on the expiration date of the contract. The demand by the 146 . EARLY EXERCISE: A feature of American-style options that allows the owner to exercise an option at any time prior to its expiration date. EXCHANGE TRADED: The generic term used to describe futures. The strategy takes into account the replicated option's delta and often requires adjusting. A net outflow of cash from your account as the result of a trade. DIRECTIONAL TRADE: A trade designed to take advantage of an expected movement in price. The user can specify. otherwise kill the order altogether. but liquidity will move from this to the second month contract as the front month nears expiration. Unlike an AON order. within certain limits. FRONT MONTH: The first month of those listed by an exchange . it is described as filled. EXOTIC OPTIONS: Various over-the-counter options whose terms are very specific. except it is "killed" immediately if it cannot be completely executed as soon as it is announced. Examples include Bermuda options (somewhere between American and European type. and settlement calculation. Similar to an all-or-none (AON) order. FILL: When an order has been completely executed. and sometimes unique. expiration date. the FOK order cannot be used as part of a GTC order. the investor transforms one strategy into a different one in response to price changes in the underlying. this option can be exercised only on certain dates) and look-back options (whose strike price is set at the option's expiration date and varies depending on the level reached by the underlying security). Also known as the NEAR MONTH.this is usually the most actively traded contract. which makes FLEX an institutional product. exercise type. FRONTRUNNING: An illegal securities transaction based on prior nonpublic knowledge of a forthcoming transaction that will affect the price of a stock. FOLLOW-UP ACTION: Term used to describe the trades an investor makes subsequent to implementing a strategy. 147 . FLEXIBLE EXCHANGE OPTIONS (FLEX): Customized equity and equity index options. FILL OR KILL (FOK) ORDER: This means do it now if the option (or stock) is available in the crowd or from the specialist. such as exrcise price. Can only be traded in a minimum size.owner of a put option contract that the number of units of the underlying asset specified be bought from him at the specified price. the terms of the options. Through these adjustments. These are usually cash-settled. To differentiate between these two strangles. IMMEDIATE-OR-CANCEL (IOC) ORDER: An option order that gives the trading floor an opportunity to partially or totally execute an order with any remaining balance immediately cancelled. HAIRCUT: Similar to margin required of public customers this term refers to the equity required of floor traders on equity option exchanges. INDEX OPTION: An option that has an index as the underlying. The BSE SENSEX / S&P CNX NSE NIFTY.g.GUTS: The purchase (or sale) of both an in-the-money call and in-the-money put. ILLIQUID: An illiquid market is one that cannot be easily traded without even relatively small orders tending to have a disproportionate impact on prices. HEDGE: A position established with the specific intent of protecting an existing position. This is usually due to a low volume of transactions and/or a small number of participants. Example: an owner of common stock buys a put option to hedge against a possible stock price decline. one of the advantages of being a floor trader is that the haircut is less than margin requirements for public customers. the option has an intrinsic value greater than zero. the term guts refer to the in-the-money strangle. 148 . In other words. IN-THE-MONEY (ITM): Term used when the strike price of an option is less than the price of the underlying for a call option. Generally. A box spread can be viewed as the combination of an in-the-money strangle and an out-of-the-money strangle. E. INDEX: The compilation of stocks and their prices into a single number. See box spread and strangle. or greater than the price of the underlying for a put option. Calls and puts with expiration as long as 2-5 years. Currently. 149 . To buy on margin refers to borrowing part of the purchase price of a security from a brokerage firm. an investor gambles a better deal can be obtained on the price of the spread by implementing it as two separate orders.or constantly in the case of some screen based markets . LEAPS: Long-term Equity Anticipation Securities. MARK TO MARKET: The revaluation of a position at its current market price. The intrinsic value of an out-of-the-money option is zero. LEVERAGE: A means of increasing return or worth without increasing investment. MARKET BASKET: A group of common stocks whose price movement is expected to closely correlate with an index.. the amount by which it is in-the-money). MARKET MAKER: A trader or institution that plays a leading role in a market by being prepared to quote a two way price (Bid and Ask) on request .e. Using borrowed funds to increase one's investment return. Only about 10% of equities have LEAPs. always with January expirations. equity LEAPS have two series at any time. Option contracts are leveraged as they provide the prospect of a high return with little investment. also known as long-dated options.INTRINSIC VALUE: Amount of any favorable difference between the strike price of an option and the current price of the underlying (i. Instead of utilizing a "spread order" to insure that both the written and the purchased options are filled simultaneously. for example buying stocks on margin.during normal market hours. LEGGING: Term used to describe a risky method of implementing or closing out a spread strategy one side ("leg") at a time. Some indexes also have LEAPs. MARGIN: The minimum equity required to support an investment position. OPTION CHAIN: A list of the options available for a given underlying. or the strike price of a put is less than the price of the underlying. only time value.NAKED: An investment in which options sold short are not matched with a long position in either the underlying or another option of the same type that expires at the same time or later than the options sold.the volatility estimate. Can be placed as a day or GTC order. OCC is an intermediary between option buyers and sellers. NEUTRAL: An adjective describing the belief that a stock or the market in general will neither rise nor decline significantly. OUT-OF-THE-MONEY (OTM): An out-of-the-money option is one whose strike price is unfavorable in comparison to the current price of the underlying. OVERVALUED: An adjective used to describe an option that is trading at a price higher that its theoretical value. because theoretical value depends on one subjective input . An out-of-the-money option has no intrinsic value. OPTIONS CLEARING CORPORATION (OCC): A corporation owned by the exchanges that trade listed stock options. 150 . ONE-CANCELS-THE-OTHER (OCO) ORDER: Type of order which treats two or more option orders as a package. The loss potential of naked strategies can be virtually unlimited. NET MARGIN REQUIREMENT: The equity required in a margin account to support an option position after deducting the premium received from sold options. It must be remembered that this is a subjective evaluation. whereby the execution of any one of the orders causes all the orders to be reduced by the same amount. This means when the strike price of a call is greater than the price of the underlying. OCC issues and guarantees all option contracts. SECTOR INDICES: Indices that measure the performance of a narrow market segment. PUT/CALL RATIO: This ratio is used by many as a leading indicator. ROLLOVER: Moving a position from one expiration date to another further into the future. REALIZED GAINS AND LOSSES: The profit or losses received or paid when a closing transaction is made and matched together with an opening transaction. such as biotechnology or small capitalization stocks. it is said to be trading at parity. SCALPER: A trader on the floor of an exchange who hopes to buy on the bid price. This is accomplished by a simultaneous sale of one and purchase of the other. This price is established by The Options Clearing Corporation and is used to 151 . and profit from moment to moment price movements. RATIO CALENDAR COMBINATION: A term used loosely to describe any variation on an investment strategy that involves both puts and calls in unequal quantities and at least two different strike prices and two different expirations. When an option is trading at its intrinsic value. It is computed by dividing the 4-day average of total put VOLUME by the 4-day average of total call VOLUME. Risk is limited by the very short time duration (usually 10 seconds to 3 minutes) of maintaining any one position.PARITY: An adjective used to describe the difference between the stock price and the strike price of an in-the-money option. traders wishing to maintain their positions will often move them to the next contract month. SETTLEMENT PRICE: The official price at the end of a trading session. sell on the ask price. The SEC is the United States federal government agency that regulates the securities industry. As the front month approaches expiration. SEC: The Securities and Exchange Commission. Time decay is quantified by Theta. This varies by security. Also known as striking price or exercise price. TRADING PIT: A specific location on the trading floor of an exchange designated for the trading of a specific option class or stock. both the call and put have the same strike price. fees for exercise and/or assignment. including brokerage commissions. The long synthetic combines a long call and a short put to mimic a long position in the underlying. STRIKE PRICE: The price at which the holder of an option has the right to buy or sell the underlying. and for other purposes. TECHNICAL ANALYSIS: Method of predicting future price movements based on historical market data such as (among others) the prices themselves. The short synthetic combines a short call and a long put to mimic a short position in the underlying.determine changes in account equity. and are on the same underlying. SYNTHETIC: A strategy that uses options to mimic the underlying asset. TIME DECAY: Term used to describe how the theoretical value of an option "erodes" or reduces with the passage of time. 152 . Both long and short synthetics are strategies in the Trade Finder. TRANSACTION COSTS: All charges associated with executing a trade and maintaining a position. This is a fixed price per unit and is specified in the option contract. and margin interest. and can also be dependent on the current price of the security. the relation of advancing issues to declining issues. margin requirements. trading volume. the same expiration. In both cases. open interest. and short selling volume. TICK: The smallest unit price change allowed in trading a specific security. See mark-to-market. in a given period of time. WASTING ASSET: An investment with a finite life. VOLATILITY TRADE: A trade designed to take advantage of an expected change in volatility. the value of which decreases over time if there is no price fluctuation in the underlying asset.UNCOVERED: A short option position that is not fully collateralized if notification of assignment is received. UNREALIZED GAIN OR LOSS: The difference between the original cost of an open position and its current market price. VOLATILITY: Volatility is a measure of the amount by which an asset has fluctuated. WASH SALE: When an investor repurchases an asset within 30 days of the sale date and reports the original sale as a tax loss. it becomes a realized gain or loss. 153 . Volatility is not equivalent to BETA. or is expected to fluctuate. The Internal Revenue Service prohibits wash sales since no change in ownership takes place. Once the position is closed. Assets with greater volatility exhibit wider price swings and their options are higher in price than less volatile assets. See also NAKED. gov.com 154 .com. Marshall and Vipul K.com www. JR. Published by Printice Hall of India. Dubofsky and Thomas W.BIBLIOGRAPHY Books: • Derivatives: Valuation and Risk Management By David A.in www. Miller. Bansal.com. Published by Oxford University Press.bseindia. • Newspapers:• • The Times of India The Economic Times Internet: • • • • • •. Financial Engineering: A Complete Guide to Financial Innovation By John F. • 155 .
https://www.scribd.com/doc/73104425/49086924-sharekhan
CC-MAIN-2017-39
refinedweb
34,257
59.7
This article provides the steps to integrate a form created using Ext JS with Struts2 back-end. A little about Ext JS Ext JS is a cross-browser JavaScript library for building rich internet applications. Ext JS is available under commercial as well as open source license. 1. Ext JS provides a whole lot of built-in GUI components. These components come in handy while developing a new application from scratch. There is always a component available for almost all kind of RIA GUI. 2. Ext JS follows component design model. One can easily extend any of the core components to meet their specific requirements. 3. Whether it’s IE or latest chrome browser, Ext JS applications look and behave the same no matter where they run. Ext JS utilizes HTML5 features on modern browsers and falls back to alternatives on older browsers. 4. Ext JS uses AJAX to interact with web server. This enhances the RIA experience. Integrating Ext JS with Struts 2 Ext JS forms are submitted using AJAX by default and it expects a JSON/XML response from the server. The response must have a ‘success’ property which determines whether errors are returned from server or not. In case of error response the ‘errors’ property of response is used to set the error messages. We will discuss more in this in later section. Struts 2 can handle the request from Ext JS like any other request and no special arrangements are needed for that. But to generate json response we need to use json plugin. The procedure to use this json plugin is mentioned in later section. Now we have got the basic idea about Ext JS and struts 2 integration, let us create a sample example to see how it works. Ext JS Form First create a simple form in Ext JS. For simplicity we have created a form with 3 fields and a submit button. The form field labels are First Name, Last Name and Age. Ext JS uses form field names as parameter names while submitting the request to server. Hence we need to provide the names for each field whose value we want to send to the server. { xtype: 'textfield', padding: 10, width: 303, name: 'firstName', fieldLabel: 'First Name' }, { xtype: 'textfield', padding: 10, width: 302, name: 'lastName', fieldLabel: 'Last Name' }, { xtype: 'numberfield', padding: 10, width: 186, name: 'age', fieldLabel: 'Age' } To enable struts action to read the form data correctly, we need to make sure we have the attributes in the action class with same names as the names mentioned above in Ext JS form. We should also define the action attributes with correct data types to allow auto conversion by struts. One important point to keep in mind is that we should have public get/set method for each of the attribute defined to capture submitted form field data. Please check the form action code in below section for more details on this. Submit button handlerWhen submit button is clicked, the JS function associated with the button handler would be invoked. This function sets the form URL and invoke submit() method of the form defined in Ext JS. The simplest form of button handler could be as shown below: var form = button.up('form').getForm(); form.url='registerUser'; //form.submit(); form.submit({ success: function(form, action) { Ext.Msg.alert('Success', 'The form is submitted successfully.'); }, failure: function(form, action) { Ext.Msg.alert('Failed', 'There is some error returned from the server.'); } }); There are two ways in which a form can be submitted using Ext JS’s form. If we do not want to process the response from server after submitting the form then we can simply call ‘submit()’ method without any arguments. This way is commented in above code sample. Ext JS provides a way to process the response after submit. The ‘submit()’ method can have two callback methods configure – success and failure. success is invoked when the response has success property set to true. failure is invoked if the success property is missing in response or it is set to false. Struts 2 ActionNow we have looked into the client side of code. Let us now focus on the server side of code for handling the submitted form data. The below code provides the details of our action class. public class Registration { private boolean success; private String firstName; private String lastName; private int age; public String registerUser() { System.out.println("Got following user details: " ); System.out.println("First Name: " + getFirstName()); System.out.println("Last Name: " + getLastName()); System.out.println("Age: " + getAge()); success = true; return Action boolean isSuccess() { return success; } public void setSuccess(boolean success) { this.success = success; } } For the sake of simplicity we are just printing the submitted data to the console. We have mapped registerUser method of this class with our form submit action. We will see this in struts.xml file later. If you observe this action class is like any other Struts 2 action class. The beauty of struts json plugin is that we no need to make any changes to the action class to return json response. That part is handled by configuring result type to json in struts.xml. Configuring json plugin with Struts First you need to download the JSON plugin, if you already don’t have this in your struts download. To configure json plugin follow the following steps: - Keep the json plugin jar file in the WEB-INF/lib directory of the web project. - In struts.xml, create a new package that extends json-default. If you want to use an existing package then add the json-default as comma separated in the existing package’s extends property. - Define the actions which have to return json response inside the above defined package. - Put the result type as json for the actions to return json response. <struts> <package name="default" extends="json-default" namespace="/"> <action name="registerUser" class="sampleapp.action.Registration" method="registerUser"> <result type="json"/> </action> </package> </struts> Now we are ready to run and test our integration example. When we run the example, the form displayed in first screen shot will come up. Add certain details and hit the ‘Submit’ button, you will see your entered details on the server console. Handling server side errorsThe server side validation can be handled in the similar way as we do for any other struts 2 application. To display validation error against a field of form add the error message in the struts error map. Make sure that the error is added using the field name that is used in Ext JS form panel. Set ‘success’ property in action to false. This makes Ext JS to interpret the response as failure. That is all, nothing else is need to link the error with corresponding form fields. When the above json is returned to the Ext JS form, it will display the error messages against the fields. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/ext-js-forms-integration
CC-MAIN-2017-09
refinedweb
1,163
65.01
i18n-tasks i18n-tasks helps you find and manage missing and unused translations. This gem analyses code statically for key usages, such as I18n.t('some.key'), in order to: - Report keys that are missing or unused. - Pre-fill missing keys, optionally from Google Translate. - Remove unused keys. Thus addressing the two main problems of i18n gem design: - Missing keys only blow up at runtime. - Keys no longer in use may accumulate and introduce overhead, without you knowing it. Installation i18n-tasks can be used with any project using the ruby i18n gem (default in Rails). Add i18n-tasks to the Gemfile: gem 'i18n-tasks', '~> 0.9.5' Copy the default configuration file: $ cp $(i18n-tasks gem-path)/templates/config/i18n-tasks.yml config/ Copy rspec test to test for missing and unused translations as part of the suite (optional): $ cp $(i18n-tasks gem-path)/templates/rspec/i18n_spec.rb spec/ Usage Run i18n-tasks to get the list of all the tasks with short descriptions. Check health i18n-tasks health checks if any keys are missing or not used: $ i18n-tasks health Add missing keys Add missing keys with placeholders (base value or humanized key): $ i18n-tasks add-missing This and other tasks accept arguments: $ i18n-tasks add-missing -v 'TRME %{value}' fr Pass --help for more information: $ i18n-tasks add-missing --help Usage: i18n-tasks add-missing [options] [locale ...] -l, --locales Comma-separated list of locale(s) to process. Default: all. Special: base. -f, --format Output format: terminal-table, yaml, json, keys, inspect. Default: terminal-table. -v, --value Value. Interpolates: %{value}, %{human_key}, %{value_or_human_key}, %{key}. Default: %{value_or_human_key}. -h, --help Display this help message. Google Translate missing keys Translate missing values with Google Translate (more below on the API key). $ i18n-tasks translate-missing # accepts from and locales options: $ i18n-tasks translate-missing --from base es fr Find usages See where the keys are used with i18n-tasks find: $ i18n-tasks find common.help $ i18n-tasks find 'auth.*' $ i18n-tasks find '{number,currency}.format.*' Remove unused keys $ i18n-tasks unused $ i18n-tasks remove-unused These tasks can infer dynamic keys such as t("category.\#{category.name}") if you set search.strict to false, or pass --no-strict on the command line. Normalize data Sort the keys: $ i18n-tasks normalize Sort the keys, and move them to the respective files as defined by config.write: $ i18n-tasks normalize -p Compose tasks i18n-tasks also provides composable tasks for reading, writing and manipulating locale data. Examples below. add-missing implemented with missing, tree-set-value and data-merge: $ i18n-tasks missing -f yaml fr | i18n-tasks tree-set-value 'TRME %{value}' | i18n-tasks data-merge remove-unused implemented with unused and data-remove (sans the confirmation): $ i18n-tasks unused -f yaml | i18n-tasks data-remove Remove all keys in fr but not en from $ i18n-tasks missing -t diff -f yaml en | i18n-tasks tree-rename-key en fr | i18n-tasks data-remove See the full list of tasks with i18n-tasks --help. Features and limitations i18n-tasks uses an AST scanner for .rb files, and a regexp-based scanner for other files, such as .haml. Relative keys i18n-tasks offers support for relative keys, such as t '.title'. ✔ Keys relative to the file path they are used in (see relative roots configuration) are supported. ✔ Keys relative to controller.action_name in Rails controllers are supported. The closest def name is used. Plural keys ✔ Plural keys, such as key.{one,many,other,...} are fully supported. Reference keys ✔ Reference keys (keys with :symbol values) are fully supported. These keys are copied as-is in add/translate-missing, and can be looked up by reference or value in find. t() keyword arguments ✔ scope keyword argument is fully supported by the AST scanner, and also by the Regexp scanner but only when it is the first argument. ✔ default argument can be used to pre-fill locale files (AST scanner only). Dynamic keys By default, dynamic keys such as t "cats.#{cat}.name" are not recognized. I encourage you to mark these with i18n-tasks-use hints. Alternatively, you can enable dynamic key inference by setting search.strict to false in the config. In this case, all the dynamic parts of the key will be considered used, e.g. cats.tenderlove.name would not be reported as unused. Note that only one section of the key is treated as a wildcard for each string interpolation; i.e. in this example, cats.tenderlove.special.name will be reported as unused. Configuration Configuration is read from config/i18n-tasks.yml or config/i18n-tasks.yml.erb. Inspect the configuration with i18n-tasks config. Install the default config file with: $ cp $(i18n-tasks gem-path)/templates/config/i18n-tasks.yml config/ Settings are compatible with Rails by default. Locales By default, base_locale is set to en and locales are inferred from the paths to data files. You can override these in the config. Storage The default data adapter supports YAML and JSON files. Multiple locale files i18n-tasks can manage multiple translation files and read translations from other gems. To find out more see the data options in the config. NB: By default, only %{locale}.yml files are read, not namespace.%{locale}.yml. Make sure to check the config. For writing to locale files i18n-tasks provides 2 options. Pattern router Pattern router organizes keys based on a list of key patterns, as in the example below: data: router: pattern_router # a list of {key pattern => file} routes, matched top to bottom write: # write models.* and views.* keys to the respective files - ['{models,views}.*', 'config/locales/\1.%{locale}.yml'] # or, write every top-level key namespace to its own file - ['{:}.*', 'config/locales/\1.%{locale}.yml'] # default, sugar for ['*', path] - 'config/locales/%{locale}.yml' Conservative router Conservative router keeps the keys where they are found, or infers the path from base locale. If the key is completely new, conservative router will fall back to pattern router behaviour. Conservative router is the default router. data: router: conservative_router write: - ['devise.*', 'config/locales/devise.%{locale}.yml'] - 'config/locales/%{locale}.yml' If you want to have i18n-tasks reorganize your existing keys using data.write, either set the router to pattern_router as above, or run i18n-tasks normalize -p (forcing the use of the pattern router for that run). Key pattern syntax A special syntax similar to file glob patterns is used throughout i18n-tasks to match translation keys: Custom adapters If you store data somewhere but in the filesystem, e.g. in the database or mongodb, you can implement a custom adapter. If you have implemented a custom adapter please share it on the wiki. Usage search i18n-tasks uses an AST scanner for .rb files, and a regexp scanner for all other files. New scanners can be added easily: please refer to this example. See the search section in the config file for all available configuration options. NB: By default, only the app/ directory is searched. Fine-tuning Add hints to static analysis with magic comment hints (lines starting with (#|/) i18n-tasks-use by default): # i18n-tasks-use t('activerecord.models.user') # let i18n-tasks know the key is used User.model_name.human You can also explicitly ignore keys appearing in locale files via ignore* settings. If you have helper methods that generate translation keys, such as a page_title method that returns t '.page_title', or a Spree.t(key) method that returns t "spree.#{key}", use the built-in PatternMapper to map these. For more complex cases, you can implement a custom scanner. See the config file to find out more. Google Translate i18n-tasks translate-missing requires a Google Translate API key, get it at Google API Console. Where this key is depends on your Google API console: - Old console: API Access -> Simple API Access -> Key for server apps. - New console: Project -> APIS & AUTH -> Credentials -> Public API access -> Key for server applications. In both cases, you may need to create the key if it doesn't exist. Put the key in GOOGLE_TRANSLATE_API_KEY environment variable or in the config file. # config/i18n-tasks.yml translation: api_key: <Google Translate API key> Interactive console i18n-tasks irb starts an IRB session in i18n-tasks context. Type guide for more information. XLSX Export missing and unused data to XLSX: $ i18n-tasks xlsx-report Add new tasks Tasks that come with the gem are defined in lib/i18n/tasks/command/commands. Custom tasks can be added easily, see the examples on the wiki.
http://www.rubydoc.info/gems/i18n-tasks/frames
CC-MAIN-2016-44
refinedweb
1,414
59.09
An Asyncio socket tutorial How to build an ASGI web server, like Hypercorn There are many asyncio tutorials and articles that focus on coroutines, the event loop, and simple primitives. There are fewer that focus on using sockets, for either listening for or sending to connections. This article will show how to build a simple web server. This is based on the development of Hypercorn, which is an ASGI server that supports HTTP/1, HTTP/2, and Websockets. The Hypercorn code is the follow on for this article. Echo Server An echo server is the simplest place to start, and by definition simply echos back to the client any data sent. As we are aiming to build a web server the TCP (Protocol) makes sense. Asyncio has two high level choices for writing servers, either callback based or stream based. I think the latter is conceptually clearer, but has been shown to have worse performance. So we’ll do both, starting with stream based, (also note I’ll be using Python3.7 features, such as the serve_forever), import asyncioasync def echo_server(reader, writer): while True: data = await reader.read(100) # Max number of bytes to read if not data: break writer.write(data) await writer.drain() # Flow control, see later writer.close()async def main(host, port): server = await asyncio.start_server(echo_server, host, port) await server.serve_forever()asyncio.run(main('127.0.0.1', 5000)) If you run this code and then connect via telnet localhost 5000 or equivalent you should see the server echo back everything sent. The equivalent code using callbacks, termed a Protocol is, import asyncioclass EchoProtocol(asyncio.Protocol): def connection_made(self, transport): self.transport = transport def data_received(self, data): self.transport.write(data)async def main(host, port): loop = asyncio.get_running_loop() server = await loop.create_server(EchoProtocol, host, port) await server.serve_forever()asyncio.run(main('127.0.0.1', 5000)) HTTP Server Now we are able to open a socket listen for connections and respond, we can add HTTP as the communication protocol and then have a webserver. To start with lets simply echo back the important parts of a HTTP message, i.e. the verb, request target, and any headers. At this stage we need to read RFC 7230 and write a HTTP parser, or use one that already exists. The latter is much easier, and h11 is fantastic so we’ll use that instead. Aside on h11 h11 is a sans-io library, this means that it manages a HTTP connection without managing the IO. In practice this means that any bytes received have to be passed to the h11 connection and any bytes to be sent have to be retrieved from the h11 connection which allows the h11 connection object to manage the HTTP state. As an example a simple request response sequence is given below, note how the received bytes must be provided to h11 and how the response data is provided by h11, import h11connection = h11.Connection(h11.SERVER) connection.receive_data( b"GET /path HTTP/1.1\r\nHost: localhost:5000\r\n\r\n", ) request = connection.next_event() # request.method == b"GET" # request.target == b"/path" data = connection.send(h11.Response(status_code=200)) # data == b"HTTP/1.1 200" Basic HTTP server We’ll stick with the Protocol server, on the basis of performance, and add h11, import asyncio import h11class HTTPProtocol(asyncio.Protocol): def __init__(self): self.connection = h11.Connection(h11.SERVER) def connection_made(self, transport): self.transport = transport def data_received(self, data): self.connection.receive_data(data) while True: event = self.connection.next_event() if isinstance(event, h11.Request): self.send_response(event) elif ( isinstance(event, h11.ConnectionClosed) or event is h11.NEED_DATA or event is h11.PAUSED ): break if self.connection.our_state is h11.MUST_CLOSE: self.transport.close() def send_response(self, event): body = b"%s %s" % (event.method.upper(), event.target) headers = [ ('content-type', 'text/plain'), ('content-length', str(len(body))), ] response = h11.Response(status_code=200, headers=headers) self.send(response) self.send(h11.Data(data=body)) self.send(h11.EndOfMessage()) def send(self, event): data = self.connection.send(event) self.transport.write(data)async def main(host, port): loop = asyncio.get_running_loop() server = await loop.create_server(HTTPProtocol, host, port) await server.serve_forever()asyncio.run(main('127.0.0.1', 5000)) If you run this code and make a HTTP request, e.g. curl localhost:5000/path you will receive GET /path back. Timeouts & Flow Control Whilst the above gives a basic HTTP server, we should add flow control and timeouts to ensure it isn’t trivially attacked. These attacks typically attempt to exhaust the server’s resources (sockets, memory, cpu…) so that it can no longer serve new connections. Timeouts To start lets consider a malicious client that opens many connections to the server, and holds them open without doing anything. This exhausts the connections the server has, thereby preventing anyone else from connecting. To combat this the server should timeout an idle connection, that is wait a certain length of time for the client to do something and then close the connection if it doesn’t. Using a protocol server this can be done as follows, import asyncioTIMEOUT = 1 # Secondclass TimeoutServer(asyncio.Protocol): def __init__(self): loop = asyncio.get_running_loop() self.timeout_handle = loop.call_later( TIMEOUT, self._timeout, ) def connection_made(self, transport): self.transport = transport def data_received(self, data): self.timeout_handle.cancel() def _timeout(self): self.transport.close() Flow Control In the initial echo server example we had await writer.drain() as this paused the coroutine from writing more data to the socket till the client had caught up, it drained the socket. This is useful as until the client catches up the data will be stored in memory, hence a malicious client can make many requests for a lot of data, refuse to receive the data, and allow the server to exhaust its memory. To combat this the coroutine sending data should await a drain function, that can be added to the protocol, import asyncioclass FlowControlServer(asyncio.Protocol): def __init__(self): self._can_write = asyncio.Event() self._can_write.set() def pause_writing(self) -> None: # Will be called whenever the transport crosses the # high-water mark. self._can_write.clear() def resume_writing(self) -> None: # Will be called whenever the transport drops back below the # low-water mark. self._can_write.set() async def drain(self) -> None: await self._can_write.wait() Conclusion This is really all there is with respect to asyncio to build an ASGI server. To continue you’ll need to add pipelining, ASGI constructs, the request-body, and streaming as completed in the Hypercorn h11.py file. See also the h2, and wsproto libraries for the HTTP/2 and Websocket equivalents of h11.
https://medium.com/@pgjones/an-asyncio-socket-tutorial-5e6f3308b8b0
CC-MAIN-2020-34
refinedweb
1,108
51.14
We have looked at WinJS controls and how to make use of them, but what do you do when the control you need doesn't exist? One solution is to create a custom control. As most of WinJS is just simple JavaScript with very little added, this is easier than you might expect. This article is Chapter 8 of Creating JavaScript/HTML5 Windows 8 Applications. WinJS controls are chunks of JavaScript that follow some simple rules that allow the system to run them when it encounters a tag with a data-win-control attribute. That is, when you call processAll it scans the HTML looking for tags with data-win-control attributes. It uses the value of the data-win-control attribute to work out what JavaScript function to call. The JavaScript function constructs the control usually by manipulating the DOM. For example, if you use a tag like: <div data- then processAll will call Mycontrols.MyCustom(element, params) and this function acts as the constructor for the control. Sounds easy - and it is, but there are a few things that we have to get right if we want our custom control to work like the supplied controls - but first let's see how easy it can be. We first need some JavaScript that creates something that we can regard as the "workings" of a custom control. We could use a range of other HTML controls to build a composite control but an interesting alternative is to demonstrate how a canvas element can be used to create a completely customizable control - what in other Windows contexts might be called a "self draw" custom control. The idea is that by using a canvas element you can draw anything you like to represent your control and arrange for it to respond in any way that you like - it is completely flexible. First we need to create a canvas element using nothing but JavaScript and draw something on it so that it is visible. if you don't know about using the canvas element then see: A Programmer's Guide to Canvas. First we create a canvas DOM object in the usual way and set its size: var c = document.createElement("canvas");c.width = "500";c.height = "500"; Next we get its drawing context and draw a red filled rectangle: var ctx = c.getContext("2d");ctx.fillStyle = "rgb(200,0,0)";ctx.fillRect(10, 10, 100, 50); At this point we have a canvas element stored in c with a red rectangle drawn on it. If we were to append c to the DOM using say the usual appendchild function you would see a red rectangle appear at the appropriate location. This is going to be the basis for our demonstration custom control. It has to be admitted that this is not much of a really useful custom control, but it does have all of the properties and potential needed to do almost anything. You can draw what you like and respond to clicks and other events according to the location within the canvas. To turn this chunk of JavaScript into a custom control we first need to set up a namespace. This isn't exactly essential but adding custom control constructors to the global name space is a recipe for chaos. In practice you are free to do what you like but it is worth trying to work withing the guidelines. If you recall a JavaScript name space is just an object that you use as a container for everything else you want to create and work with. WinJS provides a utility function WinJS.Namespace.define the first parameter is the name of the name space i.e. the object that everything else is defined within and the second is the object containing properties that you want added to the name space. For example: WinJS.Namespace.define("MyNameSpace", {}); adds nothing to the name space. WinJS.Namespace.define("MyNameSpace", {total:0}); adds the variable total to the name space. Following this you can use MyNameSpace.total=100; All of the properties contained within the object are added to the name space. If the name space doesn't exist then it is created. If it already exists the object is added which means you can use the define method to build up a namespace one object at a time. So let's call our name space MyControls and the particular control MyCanvasControl. We need to define a constructor within the name space : WinJS.Namespace.define("MyControls", { MyCanvasControl: we could define the constructor function somewhere else and just refer to it in the call to define but that would mean inventing yet more names. Better to define it within the function call. In this case the body for the function is just the code given earlier to create the canvas object: function (element, options) {var c = document.createElement("canvas"); c.width = "500"; c.height = "500"; var ctx = c.getContext("2d"); ctx.fillStyle = "rgb(200,0,0)"; ctx.fillRect(10, 10, 100, 50); element.appendChild(c); } }); Notice that the only thing that is new is that we append the canvas object to the element passed into the constructor. Putting all this together gives: WinJS.Namespace.define("MyControls", { MyCanvasControl: function (element, options){ var c = document.createElement("canvas"); c.width = "500"; c.height = "500"; var ctx = c.getContext("2d"); ctx.fillStyle = "rgb(200,0,0)"; ctx.fillRect(10, 10, 100, 50); element.appendChild(c); }}); If you now modify the HTML to read: <body> <p>Content goes here</p> <div data- </div></body> and run the app you will see the canvas custom control appear below the "Content goes here" message. You can of course include as many instances for the custom control on a page as you need. In this case take note of the fact that it is 500x500 pixels and so on the large size.
http://i-programmer.info/ebooks/creating-javascripthtml5-metro-applications/4619.html
CC-MAIN-2014-42
refinedweb
979
63.7
google-js-api This library contains extern classes that allow static type checking and compilation for various Google javascript API's. Currently, it contains the base google loader class, which is documented here, as well as a full specification for the Google Earth API. It is currently in a very early stage of development. There is currently some minimal documentation of the API here. Namespaces and google.loader() Many of Google's javascript types and methods are given "nice" namespaces. However, in many cases, they differ from how Haxe handles class and instance naming. The first issue is with the base loader functionality, given by google.load(). "google" in this case behaves like a class containing a few static methods, one of which is a load method. Later on, the "google" namespace is introduced, with sub packages like google.maps, and google.search. This confuses completion for packages in Haxe. Using some initialization magic, the "google" instance variable gets assigned to "Google", which fits with the standard Haxe method of naming classes and handling static methods. The "google.setOnLoadCallback" function fires when the entire page is loaded. It is also overloaded at runtime so that api classes can have their namespaces automatically fixed if the original google api is loaded asynchronously. Note: There is another callback function option which fires when just the javascript is loaded. This function is not overridden. Enums and Initialization Objects Javascript enums (simple constrained sets of integers or strings) are not currently possible for Haxe to support as an extern. They are typed as simple base types such as String or Ints. Javascript initialization objects are anonymous objects of string keys and values. They are typed as simple Dynamic<Dynamic> objects in the extern. Google Earth Google earth does not currently have a well behaved namespace, so unfortunately most classes exist in the base package. Once again, the google.earth instance and functions are mapped to the Earth class as "google.Earth". More details about the API itself are given on the Google Earth API page. The main instance creation function becomes: google.Earth.createInstance('map3d', initCallback, failureCallback); Demo Try out the demo below, and compare to the tutorials by Google. (it's necessary to install the google earth plugin, of course. This page should present an install option when you visit it.) import Google; import google.loader.ClientLocation; class Demo { public static function main(){ Google.load('earth','1'); var js:Bar; var onLoad = function(){ var ge = null; var initCallback = function(object:Dynamic) { ge = object; ge.getWindow().setVisibility(true); } var failureCallback = function() { } google.Earth.createInstance('map3d', initCallback, failureCallback); } Google.setOnLoadCallback(onLoad); } }
http://haxe.org/com/libs/google_js_api
CC-MAIN-2013-20
refinedweb
436
59.4
JavaServer Faces has been around since 2004. The current release is 1.2. Work has started to produce the 2.0 release which will be part of JEE6, which is slated for Late 2008/Spring 2009. I attended a presentation by Ed Burns and Roger Kitain, both from Sun and both co-leader of the team that develops this 2.0 specification. The first draft of this spec is about to be published, next week (May 13th), and will be open for review during 30 days. They discussed – for a packed room – what is going to be the essence of this major new release. One very important realization is that this release is primarily about harvesting innovations seen in the JSF eco system in the last few years – and not about innovating and creating a lot of new functionality out of nowhere. Other core elements in this release are interoperability between JSF libraries from different vendors, support for AJAX, more convention over configuration (so less configuration effort), use of Page Description Language/support for Facelets and much easier development of custom components. They spent some time in the beginning of the session to establish that JSF is pretty well established by now. They listed the large number of JSF libraries available from various vendors and open source projects, a number of real world applications – including some internet applications – using JSF and the number of job postings that been increasing for the past few years (although it experienced a slight drop in recent months, especially relative to Rails…). JSF is now supported by all major IDEs, allowing for relatively productive development supported by drag & drop, code completion, declarative property editors and visual editors in most cases. The uptake JSF has seen has been substantial – also given the number of attendees in room raising their hands – and is further increasing. The fact that JSF is part of the JEE platform is of course a pretty important factor in this. Surprisingly, to me at least, Ed and Roger failed to mention that Oracle is building its Fusion Applications using JSF technology – and they will be hard pressed to find a more complex and substantial JSF development effort. I can think of no better proof of JSF’s maturity. Ed and Roger mentioned that JSF seems to have a stronger presence in Europe than it has, comparatively, in the US. Of course the last few years of building applications using JavaServer Faces has provided a lot of insight in some shortcomings of the technology as well. Some are limitations of JSF itself, others are the result of developments in the environment. The quick evolution (or is that called revolution) of AJAX and RIA libraries has had and will continue to have a profound effect on Web Applications and the way they are developed. JSF does not currently provide optimal support for that. Developing JSF applications does not really allow developers to ‘stay in the flow’. Instead of developing interactively, reviewing the effect of each change instantly, they are required to continuously redeploy the application to the application server, which typically takes some time. This is one of the area where the development experience should be improved and become more lightweight and straightforward, such as for example with (Ruby on or G)Rails. Development of JSF components by developers themselves – in addition to vendors and specialized open source teams – proved to be too hard. Various vendors created wonderful JSF libraries with most of them providing a proprietary solution for the AJAX challenge. Primarily because of those AJAX mechanisms, most libraries do not go together very well, with the notable exception of MyFaces Tomahawk that seems to be able to mingle with almost all of the other JSF libraries. The speakers also mentioned some of the other activities surrounding JSF, such as the Facelets initiative, of which a large majority of the room was aware, and supporting tools such as - JSFUnit (out of JBoss, see ) ;. At the same time, you also have access to parsed HTML output of each client request. - FacesTrace ( ) : FacesTrace is an open-source library that aims to enhance the traceability of JavaServer Faces based applications. Several trace information and performance metrics are collected and presented on the page being traced. - DynaFaces ( or ) DynaFaces is a thin layer on top of JSF 1.2 that provides clean Ajax integration. It is based on the Avatar idea from Jacob Hookom. - JSFTemplating ( ): Templating for JavaServerâ„¢ Faces Technology plugs into JavaServerâ„¢ Faces to make building pages and components easy. Creating pages or components is done using a template file. JSFTemplating’s design allows for multiple syntaxes, currently it supports 2 of its own plus most of the Facelets syntax. All syntaxes support all of JSFTemplating’s features such as PageSession, Events & Handlers, dynamic reloading of page conent, etc. Main themes for JSF 2.0 Make it easier for developers to create custom components. Simple by creating a bit of XHTML, possibly with some additional resources and bundling them in the right way should be enough. It reminded me a lot of the Widgets in jMaki, and apparently this approach is inspired by JSFTemplating. It certainly seems a much faster and simpler way of defining JSF Components. Using Facelets – XHTML files with additional JSF markup – .) it should be much more straightforward, for human developers – and designers – as well as code generators and tools, to create page definitions. Reduce the configuration burden: less configuration, more sensible default values that will cover the majority of situations and support for annotations to specify components, validators and all other artefacts. An important part of the support for AJAX (as well improving performance for general page loading scenarios) is the Resource Delivery Mechanism. Delivering resources will be part of the JSF 2.0 lifecycle. The HandleResourceRequest step in the life cycle will replace the normal page oriented flow through the lifecycle and take care of serving up the actual resource content (GIF/JPEG, Javascript Library, CSS document,..). It delivers static resources to the user-agent in response to HTTP GET requests. The resource delivery mechanism includes support for localized, versioned resources and resource libraries. Another element in the AJAX capabilities will be the Partial Tree Traversal – inspired by DynaFaces, mentioned above. This will allow a page to do a request that is not a normal full blown page level JSF lifecycle request. Instead, it allows a page to do an AJAX request that will only reconstruct a treefragment identified in the request and only render the HTML for that particular fragment. Obviously, such a request can be much more efficient than a full page refresh. JSF 2.0 will have AJAXification through a AjaxZone tag that can be used to embrace (or wrap) existing components, in order to be able to AJAX-enable them: to give ajax capability to existing JavaServer Faces components without writing any JavaScriptâ„¢ language. Partial Page Update – already available in a proprietary implementation in many JSF libraries – allows an AJAX style request to only rerender (DOM manipulate) a small section of a page. This too will be supported in a standardized manner. In addition to all this, there is talk about a set of AJAX enabled components with the Reference Implementation. and to provide a generic approach that will lead to better compatibility between JavaServer Faces component libraries from different vendors. The JSF Specication Team is working with the OpenAJAX Alliance in order to create shared vision and a common approach for dealing with AJAX within the JSF libraries. Part of this effort will be the registration and leveraging of namespace javax.faces.ajax. There will also be a JavaScript library, that provides at least Partial Submit, Partial Rendering and some Utility functions: Collect/encode/return client JavaServer Faces component View State (to be used in POSTBACK or Ajax request), Given JavaServer Faces componentId or clientId, return client DOM Element corresponding to outermost markup for that component. Implementing Server to Client push – for example the Comet technology – is “on the radar”. The presentation mentioned ICEFaces, Dynamic Faces, Richfaces and AJAXFaces as the sources of inspiration. Looking at some of the work that done, I find it hard to believe that not either ADF Faces or Trinidad has provided some input for this Design In Progress for 2.0. Other goals for JSF 2.0: • State management rewrite • Bookmarkable URLs • Zero deployment time • Tree traversal • Scopes • Extension prioritization • Better error reporting Finally, it will work well with WebBeans and it will work well with Portlet 2.0. Resources Ryan Lubke’s Blog: and . GlassFish Mojararra for the required sources: 2 thoughts on “JavaOne 2008 – The upcoming JavaServer Faces 2.0 specification – time to harvest!” 2.0 specification Hi Lucas, thanks for this post, I’ve been looking for a good JSF2.0 write up for sometime. Given Oracle’s adoption of JSF, and in a number of cases technology solutions over and above the current JSF specs, do you believe the next JSF spec will require Oracle to rework significant parts of their solutions to remain standard compliant? For example you mention the adoption of the partial page update by JSF 2.0, something which Oracle has had in their own (proprietary) solutions since UIX. In other words, and my larger concern, will we see another large rework of Oracle’s component set in a future release? Cheers, CM
https://technology.amis.nl/languages/java-ee-2/javaone-2008-the-upcoming-javaserver-faces-20-specification-time-to-harvest/
CC-MAIN-2021-21
refinedweb
1,567
51.28
Promote from T1 and T2 to a type than can hold T1 * T2. More... #include </lab/itti/jevois/software/jevoisbase/src/Components/RoadFinder/Promotions.H> Promote from T1 and T2 to a type than can hold T1 * T2. The idea here is to create a mechanism by which a given template type can be promoted to a more capable type (e.g., byte to float, or PixRGB<byte> to PixRGB<float>), typically to hold the result of some operation that may exceed the capacity of the original type (e.g., the result of adding two bytes may not fit within the range of possible values for byte). See for further details. The way to use this is as follows: if you have a template type T (which may be scalar or not) that you wish to promote to a more capable type that would have "similar characteristics" to another type TT, then use the type promote_trait<T,TT>::TP. For example: promote_trait<T,float>::TP is a float if T was a byte, and is a PixRGB<float> if T was a PixRGB<byte> Basic promotion mechanism: given T1 and T2, TP provides the appropriate type to hold the result of an operation involving T1 and T2. Default is to promote to type T1, that is, no change (this is used when T2 is "weaker" than T1; when T2 is "stronger" that T1, the explicit specialized rules below are used). Definition at line 75 of file Promotions.H. Definition at line 75 of file Promotions.H. Definition at line 75 of file Promotions.H.
http://jevois.org/basedoc/structpromote__trait.html
CC-MAIN-2017-17
refinedweb
264
59.64
Application Component Bundle (Parts of Lightning Application) The basic use of lightning applications is to preview the lightning components for development purposes. But the lightning application does have its own bundle just as the lightning components. The application component bundle consists of the following: - Application - Controller - Helper - Style - Documentation - Renderer - SVG Application The application consists of the markup for the application (just like the component for the lightning component bundle). We need to put our component in this application part to view it by the application preview. For example : <aura:application > <c:HelloWorld/> </aura:application> Here ‘<c:HelloWorld/>’ is the custom lightning component that we wish to preview with the app. The syntax is, <(namespace):(component name)/>, so ‘c’ represents the default namespace of the org. If we want to use a managed package component then we need to use the managed package namespace instead of ‘c’. The ‘preview’ button on the right hand side allows you to preview the component (if you see an error for enabling a custom domain, go to the enabling custom domain part of our guide to fix it). Screenshot for the preview button : Output became: Hello World Controller Click the ‘Create’ link on the right hand side breakdown to create the javascript controller for your application. The javascript controller houses javascript functions for our application, these javascript functions can be called from our application markup. For example : We can use this in the same way we use the controller for the lightning components. Helper The helper part of an application also stores javascript, but it is used to store reusable javascript functions which can be used multiple times in our javascript controller. Click create on the helper to create the helper for your application. Once created, we can define javascript functions inside the helper as well. CODE: ({ helperMethod : function() { alert('Helper!'); } }) Then, this helper method can be called from our javascript controller multiple times, implementing reusability. CODE: ({ method1 : function(component, event, helper) { helper.helperMethod(); }, method2 : function(component, event, helper) { helper.helperMethod(); } }) NOTE : A method inside a helper class can call other methods in the helper, but a JavaScript method inside the controller cannot call other methods in the controller. Style The ‘style’ is used to apply custom css to the application. This is a .css file which can be used to apply custom css, since the application doesn’t support using a <style> tag. Click on ‘Style’ to create the application’s style.css. This supports all standard css validators (by class, by id), we just need to use the ‘.THIS’ keyword to apply CSS to the lightning markup. For example : Documentation The ‘Documentation’ part is used to document information about your lightning application. Click create on the ‘Documentation’ to create the .aura doc file for your application. Once created, you go to<your-salesforce-instance>/auradocs to view the documentation for your lightning component. The screen will look like this : The default code will look like this :
https://salesforcedrillers.com/learn-salesforce/application-component-bundle/
CC-MAIN-2022-40
refinedweb
494
55.34
It's always useful to hear someone else's perspective on a conference as they might have different takeaways from it which could help improve your knowledge. With this in mind, check out our this review of Scala Italy from our competition winner Maria Livia Chiorean and Software Developer Gabriel Asman. 'Through an unexpected series of events, I found myself traveling to Florence to attend Scala Italy 2018 (14-15th September). The conference has been going on for a few years. This was my first time attending and the first year it was organized in Florence. These are a few of the highlights from the conference. Keep in mind these are only some of the talks, the other talks were equally interesting and I encourage you to watch all of them once they’ve been uploaded Keynote - A Programming Language is its Community Heather Miller spoke about the importance of open-source work as the shared infrastructure on which we all rely. Surveys have shown that the existence of open-source tools ranks as the leading criteria for developers considering a new programming language. While in recent years, the number of users of open-source projects has continued to grow, the number of contributors has stayed flat. A lot of important projects are maintained by a worryingly small number of contributors (the so-called truck factor), usually in their spare time. One famous example is the case of OpenSSL. A few years ago, the internet woke to the sudden realization that OpenSSL, a library on which large swathes of the internet relies on for security, was being maintained by a single person. Heather ended the talk, without prescribing particular solutions, but rather encouraging each of us to consider ways in which we can contribute to the various communities we belong to. 5 things you need to know about Scala compiler Mirco Dotta showed us a few useful things to keep in mind about the Scala compiler: 1. Build time is different from compilation time. Other steps of the build are: dependency resolution, formatter, setting evaluation, source generators. Consider using Coursier, an SBT plugin that downloads artifacts in parallel and gives better caching. No global lock! 2. Typeclass & Macro derivation can be a potential source of bloat for compilation times. Avoid having to derive the same typeclass more than once, caching when possible. Imports take precedence over companion objects. Don’t import generators, derive once in companion object when possible. 3. Whitebox macros type check 3 times. In a nutshell, whitebox macros are macros that are allowed to generate new types. Since they type check three times, it’s even more important to cache them, so you don’t derive the same instance multiple times. 4. Type checking should be 30% of compilation time, use compiler flag -verbose to see a breakdown. Usual suspects: macro expansion, implicit resolution (large implicit search space), and type tags. Use -Xlog-implicits to see which implicits have been rejected. 5. The Typer/Parser Node Factor (TPNF) is the ratio between the size of parsed-scala code and type-checked scala code. Scala has more code-generation than other languages, should still be ~1.2. 6. Scala compiler is single-threaded. A lot of compilers are, but slow-down in the improvement of hardware has increased demand for parallelism. As of Scala 2.12.6 we have multithreaded class file generation phase. Check out the experimental parallel compiler: Hydra. The Future of Scala - Panel The day ended with a panel consisting of Heather Miller, Luka Jacobowitz, Miles Sabin, Mirco Dotta and Ólafur Páll Geirsson, moderated by Jon Pretty. They discussed the future of Scala. Dotty will become Scala 3.0. The costs of transition will be minimized, by having the Scala 3.0 compiler consume Scala 2 code. Scalafix can also be used for automatic rewriting. Macros will be reworked and type classes are going to be a first-class language feature in Scala 3. Ten Cool Things You can do With Scala 3 Jon Pretty built on the panel discussion with a presentation about new features that will be part of Scala 3 ❖ Enumerations: Scala 3 will provide enumerations as first-class language constructs. This will mean almost no boilerplate for defining enumeration types, including parameterized data types such as List or Option. enum Color { case Red, Green, Blue } enum Option[+T] { case Some(x: T) case None } ❖ Type Lambdas: Current syntax is horrible, as it was discovered through a combination of other features, rather than designed. ({ type T[A] = Map[Int, A] })#T // Scala 2 [A] => Map[Int, A] // Scala 3 ❖ Implicit Function Types: A new feature that allows the encoding of implicit arguments inside the function’s type itself. This allows you to avoid the boilerplate associated with repeatedly passing contextual information, not just at call-site (as implicit parameters do), but also at declaration site. type Contextual[T] = implicit Context => T implicit val ctx: Context = ... def f(x: Int): Contextual[Int] = ... f(2) // is expanded to f(2)(ctx) ❖ Named Type Parameters: Self-explanatory. Importantly, allows partial specification of type parameters, allowing you to specify some of the types, while making use of the type inference for the rest. def asEither[E, A](x: A): Either[E, A] = ... val a: Either[Error, Int] = toRightEither[E = Error](7) ❖ Erased Terms: Gives you the ability to mark arguments as erased. Erased arguments don’t exist at run-time, can not be used in the body of a function, and serve only as a compile-time constraint. class MyClass[T] { def fooWithInt(implicit evidence: T =:= Int) = ... //Scala 2 def fooWithInt(erased evidence: T =:= Int) = ... // Scala 3, parameter only exists at compile time } ❖ Dependant Function Types: At the moment, Scala has dependent types in methods, but these can not be easily turned into functions because there is no convenient way of describing their type. trait Entry { type Key; val key: Key } def extractKey(e: Entry): e.Key = e.key // a dependent method val extractor: (e: Entry) => e.Key = extractKey // Scala 3: a dependent function value ❖ Safer Equality: Self-explanatory. At the moment, Scala has universal equality, like Java. A safer equality will disallow comparisons between incompatible types. ❖ If-Then-Else: Groundbreaking stuff. if 3 < 4 then "Yes" else "No" // Old syntax still available ❖ Principled Meta Programming Shared State in Pure FP: When a state monad won't do Fabio Labella talked about managing state when doing Functional Programming. The state monad, while a useful abstraction, can not handle shared state, the S => (A, S) abstraction being inherently sequential. A better option is to use an effect - such as Cats-Effect IO - to suspend side-effectful computation, and Ref[F, A] to represent shared state. Ref is built on top of Java’s AtomicReference, providing primitive effectful operations out of the box (get, set, update, create). One can use Ref directly, or build more specialized abstractions on top of it - such as Counter. The key point is that functional programming is valuable especially in big systems with shared state, where the guarantees of referential transparency - such as local reasoning - are most useful. When using Ref, the regions of state sharing are the same as the call graph.' We hope you enjoyed Maria and Gabriel's review of Scala Italy. Have you checked out our other competition winner, Annette, article? Read it here.
https://www.signifytechnology.com/blog/2018/09/highlights-from-scala-italy-by-maria-livia-chiorean-and-gabriel-asman
CC-MAIN-2020-34
refinedweb
1,225
55.34
Thanks Jorg, I could run the program after changing my eclipse JDK to 1.5_06 , I saw the version in the jar file manifest.Yeah, I am going to build it myself or try to find a jar built with 1.4 version. I appreciate your help. -Pradeep On 1/9/07, Jörg Schaible <Joerg.Schaible@elsag-solutions.com> wrote: > > Hi Pradeep, > > Pradeep Arumalla wrote on Tuesday, January 09, 2007 4:50 AM: > > > hi all, > > > > import org.apache.commons.id.uuid.UUID; > > public class Uuidgen { > > public static void main(String as[]){ > > > > System.out.println("******* "+UUID.randomUUID()); > > > > } > > > > } > > > > *I am trying to generate a UUID with the above code and it > > throws the below > > exception...I tried changing the JDK version ...tried > > 1.5,1.4,1.3 , did > > not work, should I build the code my self.what is the > > procedure ?.Please > > help with links etc* > > > > java.lang.UnsupportedClassVersionError: Uuidgen (Unsupported > > major.minorversion > > 49.0) > > [snip] > > This error simply means, that you tried to run your application with a JDK > < 1.5 and *your* class was compiled with JDK 1.5. This will not work. > > Nevertheless I've checked the nightlies and that id package was also build > with JDK 1.5. If you need a version for a previous JDK, you will have to > build the package yourself. > > - Jörg > > --------------------------------------------------------------------- > To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org > For additional commands, e-mail: commons-user-help@jakarta.apache.org > >
http://mail-archives.apache.org/mod_mbox/commons-user/200701.mbox/%3Cda990d7b0701090707v62e6b5b6sb0f8fb195d4fdcb1@mail.gmail.com%3E
CC-MAIN-2014-52
refinedweb
240
70.09
You may want to search: maximized performance Reliable Operation Stone Crusher pallet conveyor 1000 kg capacity US $100-10000 / Set 1 Set (Min. Order) Maximizer Oil Penis Enlargement in Pakistan Call 03117050633 US $1-34 / Box 1 Box (Min. Order) High Quality Sanshool US $131-150 / Kilogram 1 Kilogram (Min. Order) Hot Sales 354ml Fuel Additives Manufacturer Fuel Injector Cleaner Engine Gasoline Treatment US $0.55-0.65 / Piece 2400 Pieces (Min. Order) MAXIMIZER PLUS OIL US $15-20 / Box 100 Boxes (Min. Order) import export maximizer Cylindrical roller bearing NU1015 US $1.0-10.0 / Piece 1 Piece (Min. Order) Hand painted modern floral canvas painting photos US $5-10 / Piece 200 Pieces (Min. Order) High Quality Manual Hydraulic Diesel Oil 4 Ton Small Forklift For Sale US $12300-14400 / Unit 1 Unit (Min. Order) paper Tube Oil bottle packaging Cardboard Jar for cosmetics essential oil US $0.3-0.85 / Piece 1000 Pieces (Min. Order) Modern Art Unique Design Oil Painting Wall Mural for Interior Decoration US $5.4-8.6 / Square Meter 1 Square Meter (Min. Order) high pressure pneumatic pump for gas/liquid pressure boost US $1000-3500 / Set 1 Set (Min. Order) Celastrus Angulatus Maxim Extract Celangulin 6% 10% US $58-65 / Kilogram 1 Kilogram (Min. Order) New products USUN Model:WS-AH64 512 Bar Output maximator pump system US $1550-1950 / Piece 1 Piece (Min. Order) Insulating Oil Dielectric Loss Tangent Delta Tester with Factory Price (GDGY) US $1.0-5.0 / Piece 1 Piece (Min. Order) 2016 folding gas scooter 49cc mini motor scooter maximal exercise gas scooter US $120-150 / Piece 1 Piece (Min. Order) Fuel oil extract profit maximize pyrolysis oil refining system US $35000-100000 / Set 1 Set (Min. Order) GZ320- 40*50 Classic-maxim fall Trees Beautiful Scenery Wall Outdoor Canvas Art diamond Painting US $5-8 / Set 20 Sets (Min. Order) Maximizes Lifetime 3a Molecular Sieve For Oil Industry US $1300-1500 / Metric Ton 5 Metric Tons (Min. Order) ISO certified customized investment casting common used printing machinery spare parts US $1-18 / Piece 40 Pieces (Min. Order) TOP Quality Pelargonium Sidoides Root Extract /Geranium Extract US $2-990 / Kilogram 20 Kilograms (Min. Order) Made in China Epimedium Sagittatum Extract, Epimedium Sagittatum Extract Powder, Natural Epimedium Sagittatum Extract US $1-800 / Kilogram 1 Kilogram (Min. Order) Engine oil rack sanitary ware display rack US $19.01-55.01 / Set 200 Sets (Min. Order) Oil Particle Condition Monitor US $200-250 / Piece 1 Piece (Min. Order) maximized performance Automation Equipment Automatic convoyeur US $100-10000 / Set 1 Set (Min. Order) Handmade flower beautiful scenery oil stretched canvas painting US $5-10 / Piece 200 Pieces (Min. Order) Guangzhou Packaging box Custom Printed Kraft Paper Cardboard Tube US $0.15-0.65 / Piece 1000 Pieces (Min. Order) Forklift Hydraulic Oil Pump/Oil Pump Price US $100-300 / Unit 10 Units (Min. Order) Mediterranean Style Oil Painting 3D Wallpaper/Wall Mural for House Decoration US $5.4-8.6 / Square Meter 1 Square Meter (Min. Order) High Quality Celastrus Angulatus P.e With Celangulin 95% US $58-65 / Kilogram 1 Kilogram (Min. Order) Suncenter 1 Mpa-80 Mpa air driven gas booster US $500-3000 / Set 1 Set (Min. Order) GDGY Insulating oil tangent delta tester/ Dielectric Loss Tangent Delta Testeroil test kit US $1.0-5.0 / Piece 1 Piece (Min. Order) - About product and suppliers: Alibaba.com offers 314 maximizer oil products. About 1% of these are essential oil. A wide variety of maximizer oil options are available to you, such as pure essential oil, herbal extract. You can also choose from ce, msds. As well as from free samples. There are 306 maximizer oil suppliers, mainly located in Asia. The top supplying countries are China (Mainland), Pakistan, and Singapore, which supply 98%, 1%, and 1% of maximizer oil respectively. Maximizer oil products are most popular in North America, Western Europe, and Eastern Asia. You can ensure product safety by selecting from certified suppliers, including 51 with ISO9001, 42 with Other, and 8 with ISO22000 certification. Buying Request Hub Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE Do you want to show maximizer oil or other products of your own company? Display your Products FREE now!
http://www.alibaba.com/showroom/maximizer-oil_2.html
CC-MAIN-2018-26
refinedweb
714
59.4
Right?? Sometimes both are overkill, and we're not doing anyone a favor by over-architecting a solution. Providing a bigger solution than is needed wastes time and money, increases complexity and maintenance costs, and more importantly, doesn't provide any extra value to our users. The de facto approach for application data storage is to use a dedicated database product, for (mostly) good reasons. However, since we're all well aware of the benefits of using a database, let's take some time to explore the filesystem as a candidate for storing your data. File writes are atomic To be more precise, file rename operations are atomic on POSIX systems, according to the Python documentation. Sorry Windows users, you're out of luck. os.rename(src, dst) Rename the file or directory src to dst.... To perform atomic file writes, you must first write your changes to a temporary file, then rename the temporary file to it's final destination. Sounds harder than it really is. The code would look something like this: import osf = open('temp.txt', 'w')f.write('do the monkey')f.close()os.rename('temp.txt', 'final.txt') The filesystem is reliable You'd better hope so anyway, everything ultimately lives on the filesystem, including, yes, that fancy and expensive relational database. Backups Storing data in files allow you to use standard filesystem-based backup solutions. In addition, many filesystems have snapshot features built in. Instant API using a web server, or even WebDAV I suggest storing your data in a document-oriented fashion. That is, store your data using a single file per entity. If you're storing data about 6 different users, then that should be 6 different files. This will greatly simplify things, and allow you to expose this data via an HTTP API. If you follow this advice, you can simply point your favorite web server at your filesystem and you immediately have an API. Requesting data from this API couldn't be simpler, and may look like this: Enable more features on your webserver, such as PUT, or even WebDAV, and you now have a read+write API. The filesystem scales (probably) I currently have 454,823 files on my computer consuming ~140GB. I don't know if there is a practical limit to filesystem storage, but I'm willing to bet that you and I aren't going to reach it. Files work with the network Everybody's doing it Subversion does it. Oracle does it. -- The relational databases and NoSQL data stores will still be there, waiting for you if you need them in the future. My advice? Ignore your DBA. Drop acid and think about data.
http://www.matthanger.net/2010/07/storing-data-on-filesystem-with-touch.html
CC-MAIN-2017-22
refinedweb
449
64.3
This snippet shows how to serialize action script object to strings, the serialization method is AMF based, note also that each object must meet three basic rules in order to be serialized properly: ¨1. The constructor must take no arguments 2. Fields must be public or they won’t be saved 3. You must register it with a class alias by calling flash.net.registerClassAlias(aliasString, class).¨ (this is based upon). - package - { - import flash.utils.ByteArray; - import mx.utils.Base64Encoder; - import mx.utils.Base64Decoder; - - public class SerializationUtils { - public static(bytes.readUTFBytes(bytes.length)); - return be.drain(); - } - - public static function readObjectFromStringBytes(value:String):Object{ - var dec:Base64Decoder=new Base64Decoder(); - dec.decode(value); - var result:ByteArray=dec.drain(); - result.position=0; - return result.readObject(); - } - } - } Report this snippet Tweet Thanks for the post! Very useful! I did have to modify the code a bit for it to work however (I kept getting "end of file" errors when I got to the result.readObject() line. Hope this is helpful to others... -David here's the code No matter what I do, I keep having this error: RangeError: Error #2006: The supplied index is out of bounds. at flash.utils::ByteArray/readObject() I am dispertly looking for a solution .. Here is the correct code: This eliminates the end of file error you were getting: public static function serializeToString(value:Object):String { var bytes:ByteArray = new ByteArray(); bytes.writeObject(value); That got messed up but you get the idea. The drain grabs anything in the buffers which we don't have anything (but you might). Yeah that be.encode(bytes.readUTFBytes(bytes.length));is the part that really kills it. because it's not UTF you want.. You want the actual bytes. And You really need dec.toByteArray()because you want a byteArray not a string. You never even wanna deal with strings because as soon as you do, the bytes in your ByteArray is no longer accurate because they have been converted to something else. And that's the whole point of Base64 anyway: to take what you can't read (the actual characters of the bytes (in 8 bits), with all sorts of none ASCII characters, and turn them into readable ASCII characters (6 bits) and slide the extra 2 bits over to the next byte.)
http://snipplr.com/view/6494/action-script-to-string-serialization-and-deserialization/
CC-MAIN-2017-26
refinedweb
384
59.9
think I'm going back to the framework, and just advise people to restart it if they think the environment has been corrupted. On the other hand, I'm having a hell of a time trying to use sys.unixShellCommand to launch anything that calls back to Radio using XML-RPC. It worked fine with the weirdo Python I used to have installed. I got it from here. It's a 35M download, and was built to be used with PyGame, a game framework for Python. I may just recommend that particular Python for use on OS X. Why is this weirdo build of Python the only version of Python on OS X that seems to sensibly support application launching? 11:36:30 PM It's a little harder doing the same thing in Python. I could get the scoping improvement by translating bundles into "if 1:", but I am unsure about whether that would actually make things worse. 7:28:09 PM What I'd really like is a way to make the notification happen in the other direction -- I want to be told that I need to update. I suppose I could try the JavaScript version of XML-RPC. I could just call into some service that blocks if it doesn't have any information, or returns whatever has accumulated since the last call. 6:10:03 PM It's a little harder doing the same thing in Python. I could get the scoping improvement by translating bundles into "if 1:", but I am unsure about whether that would actually make things worse. 4:16:51 PM It turns out I didn't have a standard install of Python on my machine. I thought I had MacPython installed, but when I tested it against the standard MacPython installation I discovered that it doesn't work with the launch.appWithDocument verb I was using in Radio. So I'm going to make it work with the Fink version of python, and assume that it's installed in the standard Fink location of /sw/bin/python. I'll also include documentation that describes how to fix that path if it's wrong. So here we go again. Sorry about that. 3:05:15 PM It's the 'bundle' keyword. All it does is the equivalent of introducing a block of code. That didn't make sense to me at first. UserTalk is customarily programmed in an outliner, and a block is introduced merely by making it a child of the 'if', 'while', 'case' or other keyword that requires a block. So all it really does is add a level of indentation to the code. Which, as it turns out, was the key to why it's such a useful feature. When you start mucking about in Frontier or Radio, you really don't get very far until you get comfortable with the programming environment that's been set up, since it is educational to poke around in the current code base. One of the things you start seeing is a lot of code that looks like this: on whatever(s) bundle // what this phrase does bundle // what the next phrase does bundle // clean up the mess return (true) ...where the bundles have code in them, they're just collapsed in the outline. When you use an outline, you can collapse bits of code so they don't get in the way of your thought. You can tag the bundle with a comment that summarizes what is going on, which makes it much easier to deal with the code. It's easy to take on big projects if you split them up into tiny pieces, and getting them out of your way visually really helps in keeping focused on the bigger picture. If you didn't have a 'bundle', you wouldn't have a way to break code into phrases that can be hidden. You're left with just adding whitespace to make some breathing room, but that soaks up vertical space that always seems to be in short supply. In an outliner, to make something hideable, it must be made a child. It's a nuance you really don't get until you start using it. Now that I'm programming Python in a browser, I've been missing being able to use bundles. Then it occurs to me -- I'm already rendering Python code from a browser, there's nothing keeping me from allowing someone to use a bundle keyword in the browser, and then just do the right thing when it's written out. I could have a 'bundle' in Python. I think that little feature will slip into my Python Tool in the next minor release. 12:31:36 PM
http://radio.weblogs.com/0100039/categories/radioPython/2002/02/17.html
crawl-002
refinedweb
793
78.89
> From: Akim Demaille <address@hidden> > Date: 04 Apr 2002 12:11:45 +0200 > > Also, under some conditions (which are pretty rare, I agree), > symbols and rules have to be renumbered. This means that you need > to walk through all the arrays, renumbering from the old number, to > the new number. Using pointers, I no longer have to do that: you > just change the member `number' of symbols/rules, and your done. Hmm, does this action occur at run-time? I thought that these arrays were readonly. For example, yyr1 is readonly. If you're talking about something that occurs while Bison is running, that's a different matter. But if you're talking about the generated parser, then it can improve performance quite a bit in some cases to use integers rather than pointers. > The motivations for this change is (i) making the maintenance of Bison > easier, thanks to a minimum of type checking service from the > compiler I'm afraid that I am still a bit lost here. When you talk about maintaining Bison, it sounds like you're talking about the Bison executable itself, and here it's clearly OK to shift to a cleaner representation even if it's slower. But I thought we were talking about the generated parser. This is an unusual situation because we are talking about machine-generated code. In my experience, in the Bison-generated part of the parser, type checking mostly just gets in the way. Bison generates standard code that works with any C compiler. If the C compiler rejects the code due to type checking, it's usually a bug in the C compiler, not a bug with Bison. You deal with buggy bleeding-edge Bison implementations more often than I do, so no doubt things are different with you. But we also have to consider the maintenance needs of Bison users, and I think they more often are similar to my experience in this area. > (ii) making extensions of Bison much easier: you don't > need to know all the arrays that exist in there to recovered the > values associated to this or that guy: you have the guy, so you have > all you need about it. I don't know what sort of extensions you're considering, so I'm at a bit of a loss here. But it seems to me that, if the tables are read-only, it's pretty straightforward to give users either the integer representation or the pointer representation. You can do something like this, for example: static short yyr1[] = { 0, 54, ... }; static inline struct yysymbol * yyr1p (struct yyrule const *r) { return yysymbol_base + yyr1[r - yyrule_base]; } yyr1p uses the pointer representation, but it's logically equivalent to the integer representation. And, though it looks slower, yyr1p might conceivably be faster overall than the naive pointer representation, since it might require less memory traffic. > We already know that we have to escape from short in most places Only for the parts of grammars that are large. And we can even shrink short to char in some cases, which will make things smaller. > Can this be really a problem? Yes, I'm afraid so. It's not simply that pointers are larger than integers and take more memory. It's also that they are much more expensive to initialize when linking object modules dynamically. Not only must you spend CPU time at runtime to relink them; you must also make a copy of each shared-library page that contains relocatable pointers, since it can't be shared among different processes. This loss of sharing can be a real drain. > Do we have to forget about the natural C programming, heavily based > on pointers, to move to indices in arrays :( If you're talking about initialized data, then yes we do have a real performance concern here. If you're talking about malloc'ed data it's not nearly as big a worry, though for large parsers I think the smaller tables will still often be an overall win. (Of course this ought to be measured....) > I really want to handle my guys, not artificial numbers, and I want > the compiler to type check what I do. Even more: I want the type to > tell me what I'm manipulating, using shorts only makes it a nightmare > to find the meaning. And typedefing shorts is not a solution, as the > compiler does not help. Sorry. You can work around some of the problem by wrapping the integers inside little structures. But I don't recommend this; it makes the code harder to read and some compilers don't optimize it as well. > POSIX is probably not referring to Unicode anyway. Officially, POSIX is agnostic about the character set. You could be using Unicode, or EBCDIC, or Macintosh Roman for all it cares. > And IIRC, POSIX mandates 257 as first symbol number, so if we move > to Unicode char-tokens, we are no longer POSIX compliant. No, POSIX merely says that symbol numbers have to be greater than 256. It does not require that Bison must use 257, 258, .... POSIX does say that the default value of the error token must be 256, and that if you use that default, the lexical analyzer should not return 256 and thus 256 cannot be used as a character. I don't think this is a real problem in practice, but if it is a user can work around it by changing the value of the error token with a %token declaration. > Paul> At best we can warn in the documentation that it doesn't work if > Paul> you change encodings between the Bison run and the cc+runtime > Paul> runs. > > Or we should find a means not to output the characters as > shorts/integer, but as the characters themselves. To do that without loss of efficiency (on modern hosts, anyway), I think we'd need to use designated initializers, and we might have to renumber things. E.g., instead of this: static const char yytranslate[] = { 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 44, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 42, 2, ... } we would need something like this: #ifndef YYHAVE_DESIGNATED_INITIALIZERS # define YYHAVE_DESIGNATED_INITIALIZERS (199901 < __STDC_VERSION__) #endif static const char yytranslate[297] = { #if YYHAVE_DESIGNATED_INITIALIZERS /* This works even with EBCDIC. */ [ 0 ] = 2, [ '\n' ] = 44, [ '&' ] = 42, ... #else /* This assumes ASCII. */ 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 42, 0, ... #endif } Unfortunately ISO standard C does not provide a way to say the default value is 2; the default value is always 0. So we'd have to renumber the output of yytranslate, e.g. switching 2 and 0 as in the above example. > Make it a muscle, and use it in the skeleton. OK, but I'll run it by bison-patches first. Also, before I do that, here is a discrepancy between bison-1_29-branch and my private copie of the merged version of bison.simple. Is this discrepancy intended? I am concerned about the first two changed lines in this hunk; the others are OK I guess. --- bison-1_29-branch/src/bison.simple 2002-04-04 12:23:07.218001000 -0800 +++ bison.tmp/data/bison.simple 2002-04-03 10:26:17.333000000 -0800 ... @@ -694,18 +910,23 @@ yyreduce: int yyi; YYFPRINTF (stderr, "Reducing via rule %d (line %d), ", - yyn, yyrline[yyn]); + yyn - 1, yyrline[yyn]); /* Print the symbols being reduced, and their result. */ - for (yyi = yyprhs[yyn]; yyrhs[yyi] > 0; yyi++) + for (yyi = yyprhs[yyn]; yyrhs[yyi] >= 0; yyi++) YYFPRINTF (stderr, "%s ", yytname[yyrhs[yyi]]); YYFPRINTF (stderr, " -> %s\n", yytname[yyr1[yyn]]); } #endif -%% actions /* The action file replaces this line. */ -#line + switch (yyn) + ]{ + b4_actions + } + +/* Line __line__ of __file__. */ +#line __oline__ "b4_output_parser_name" - yyvsp -= yylen; +[ yyvsp -= yylen; yyssp -= yylen; #if YYLSP_NEEDED yylsp -= yylen; @@ -733,11 +954,11 @@ yyreduce: yyn = yyr1[yyn]; - yystate = yypgoto[yyn - YYNTBASE] + *yyssp; + yystate = yypgoto[yyn - YYNTOKENS] + *yyssp; if (yystate >= 0 && yystate <= YYLAST && yycheck[yystate] == *yyssp) yystate = yytable[yystate]; else - yystate = yydefgoto[yyn - YYNTBASE]; + yystate = yydefgoto[yyn - YYNTOKENS]; goto yynewstate;
http://lists.gnu.org/archive/html/bug-gnu-utils/2002-04/msg00109.html
CC-MAIN-2017-30
refinedweb
1,386
69.62
How to edit 3d footage with CS6?iKokomo55 Jun 10, 2013 7:57 PM I just got CS6 Production Premium and I was wondering how I can edit 3d footage shot with a Panasonic VW-CLT1 3D Conversion Lens at 960 x 1080 (side by side) and I was wondering the best way to edit this kind of footage. Thanks a lot! 1. Re: How to edit 3d footage with CS6?John T Smith Jun 10, 2013 8:05 PM (in response to iKokomo55) 1 person found this helpful 2. Re: How to edit 3d footage with CS6?petergaraway Jun 10, 2013 9:42 PM (in response to iKokomo55)1 person found this helpful Here's the a tutorial for it: Pretty easy workflow. Peter Garaway Adobe Premiere Pro 3. Re: How to edit 3d footage with CS6?tfi productions 44 Jun 11, 2013 4:16 AM (in response to petergaraway)1 person found this helpful hello, following petergaraway's lead for the vision 3 plugin i found this: -wiggle-video-published.html i use 2 canon vixia hfs20 avchd video cameras on a s3d rig and am curious how to make 'wiggle video' i am somewhat confused on the workflow in PPRO cs6 windows any advice is appreciated, thank, j 4. Re: How to edit 3d footage with CS6?Allynn Wilkinson Jun 11, 2013 6:50 AM (in response to iKokomo55) If you're on a Mac, may I recommend Tim Dashwood's excellent plug-in: It requires Lion (or Mountain Lion) but this newest version does work with Premiere. Previous versions worked with FCP 7, FCP X and After Effects. I have a JVC GS-TD1 3D camera that I love but I started with 2 Canon Vixia M300s on a dual mount. The biggest problem with dual cameras is syncing everything up. The 3D Wiggle video (which has already given me a headache!) looks like they convereged the screen on the person in the middle and just cut back and forth between the left and right camera. 5. Re: How to edit 3d footage with CS6?Jim_Simon Jun 11, 2013 12:08 PM (in response to iKokomo55) If Adobe is done turning Premiere Pro into Premiere Cut Pro, I think 3D should probably be the next major focus for the development team. They'll be barred from a lot of Hollywood work without native 3D editing, effects and delivery capabilities. (And it does seem like a market Adobe is interested in.) 6. Re: How to edit 3d footage with CS6?tfi productions 44 Jun 11, 2013 12:32 PM (in response to Jim_Simon) hello, my intent for output is to shoot using the dual canon hfs20 avchd rig at 60i (i have a dual lanc controller which is really cool (however, i haven't used it yet)) import footage into PPRO cs6 as left and right eye and then edit somehow to get the 'wiggle' effect of 3d it's the 'wiggle' effect with video i'm chasing i like the autostereoscopic idea (no special glasses, no special monitors, nothing special) just lots and lots of depth... thanks, j ps: and then 'how to export via AME as s3d... and how to author s3d via Encore' 7. Re: How to edit 3d footage with CS6?Allynn Wilkinson Jun 11, 2013 1:28 PM (in response to tfi productions 44) Here's a test "wiggle" I made with some old footage shot on the JVC. Would have turned out better if convergence had been set to the flower and not the leaf in the background. All I did was import the footage (which shot half sbs). Stretch the width to 200%. Duplicate it on a new video track and send one all the way to the right and the other all the way to the left. Then I simply did a "+2" to move the playhead two frames and <ctrl> k to cut though both video frames. I kept doing this until the end of the piece and then deleted eveyother instance of the second video track. Of course, with a separate right and left source you'd just put on eon one track and the other on the other track. I think my example shows the importance of converging on the correct (probably always, center) object. Mine was exported as a regular widescreen QuickTime. No idea how to author it in Encore (I don't "DVD" anymore!). 8. Re: How to edit 3d footage with CS6?tfi productions 44 Jun 11, 2013 2:47 PM (in response to Allynn Wilkinson) hello, @ Allynn: thanks for the step by step i have a few questions to clarify if you don't mind 1.) (which shot half sbs) = can you explain this please 2) 'send one all the way to the right / left' = is this each video track being moved with the 'motion vertical and horizontal' position in the effect panel? 3) can you explain how to control the 'convergence' please... everything else i think i understand thank you so much for your help (i'm a slow learner) j 9. Re: How to edit 3d footage with CS6?Allynn Wilkinson Jun 11, 2013 3:36 PM (in response to tfi productions 44) Sure... let's see if I can clarify... 1. - Shot half sbs means my camera shots the two images side by side at half horozintal resolution (squeezed). It's a popular 3D delivery method as well. 2. When you shoot with two cameras, you're shooting at full resolution (whatever your resolution is). You won't have to worry about "sending one right and one left". By default one of your videos will be the right eye and one will be the left. 3. This is the "Holy Grail" of 3d shooting! Convergence is the point in space where both cameras "see" pretty much the exact same image. In your example, it was the person in the middle. In mine, it's a big leaf in the background. Convergence (on a 3D screen) happens *at* the screen. Everything else is either "behind" the screen (as if you're looking into a window) or popping out of the screen (which is the effect I was trying for when I originally shot the flower). 3D plugins often let you "pull convergence" (to a point). Typically, they let you see the two images overlaid with a double image where the two cameras diverge. You can then manipulate the angles (by bringing them right and left) until the desired "converged" object has a solid outline with no double image. In "good 3D" (read: non-eye fatiguing) almost everything is at the screen plane or behind it. It's the popping *out* that is fatiguing to the eye. Without a plug in, you could just nudge your images left and right and view the effect to see if the desired object is getting more converged (in which keep going in that direction) or less converged (in which case move them in the opposite direction). Hope this helps Allynn 10. Re: How to edit 3d footage with CS6?tfi productions 44 Jun 12, 2013 12:26 AM (in response to Allynn Wilkinson) hello Allynn, i am clear on most of what's going on now, thanks i'm looking at 32-46" 3d tvs to use as a big monitor any ideas of features to get / not to get? i know hdmi 1.4 a or b is necessary i think 240hz is necessary i already own a oppo 3d bluray with hdmi1.4a spec do you have preferences / rankings on the 'best' 3d format: sbs, top/bottom, checkerboard, frame sequential, etc. at 1920x1080 if possible do you have any thoughts on how 4k will change/impact 3d output? thanks for chatting, you're one of the only people i've been able to really go back and forth with on this stuff cheers, j 11. Re: How to edit 3d footage with CS6?Allynn Wilkinson Jun 12, 2013 11:10 AM (in response to tfi productions 44) I have a 23" LG passive 3D monitor and a 48" (?) LG passive 3D TV. In my opinion, passive is the way to go. No way could I work with active shutter glasses all day! I kind of like having a separate monitor because viewing distance is key with 3D and I have to sit a few feet back from the TV. The 23" monitor is perfect on the desktop. Now getting 2 mini-dv monitors and 1, 3D HDMI monitor attached to a 2009 Mac Pro Tower was a bit of an adventure! According to Bernard Mendiburu (author of "3D Movie Making", 2009) the checkerboard gives a slightly better picture because it allows more light than either sbs or top/bottom. Because you view 3D through polarized lenses (or shutter glasses) you're only getting about half the light in each image. I stick to half sbs because it's what my camera shoots. I have done some full sbs (using 2 hd cameras) but I didn't notice a huge difference. My 3D camera can actually shoot full-sbs but I can't easily edit it and I'm still learning so it's more important for me to shoot a lot. I'm sure 4K will have a huge (positive) impact on 3D but I think it's most important to get out there and learn the techniques with whatever you have at hand. The principles of good stereoscopy were written over 100 years ago and they really haven't changed much! I really like the above mentioned Mendiburu book though it's a little dated now. He gives a strong, technical background that is well illustrated and easy to understand. I've just ordered "Digital Stereoscopy" published in March and I'll probably order "3D Storytelling" just published at the end of April. And, of course, I watch 3D (both good and bad) any time I can. Favorite link: Dr. Brian May (*yes* Queen's Brian May) "London Stereoscopic Company": He is a life-long avid collector of 19th century stereo cards and presented a TV special on the history of 3D (in 3D, of course!). 12. Re: How to edit 3d footage with CS6?tfi productions 44 Jun 13, 2013 9:45 PM (in response to Allynn Wilkinson) hello, my brain hurts trying to find a way to hook up a quadro4000 with dual link dvi-i (or display port 1.1a) to a big modern 3d tv with hdmi 1.4a spec reading in gamer forums: they talk about an hdmi 2.0 with full hd 120hz 1920x1080 (60hz per eye (60fps)) but that's not out for a while -and-bioshock-infinite-/ t-to-hdmi-type-2-selling-yet-/ and does anyone know of big modern 3d tv's with dual link dvi-i connection? cheers to anyone with thoughts @Allynn, thanks again for taking the time to provide an indepth response: i appreciate it cheers, j 13. Re: How to edit 3d footage with CS6?tfi productions 44 Jun 16, 2013 11:10 AM (in response to tfi productions 44) hello, after doing a lot of reading re: passive vs. active 3d lg tvs have stated that their cinema tvs 3d have been certified as full hd 3d by way of sending separate 540 vertical images to each eye which combines to a 1080i image... by way of a firmware upgrade can any lg 3d cinema tv users comment on this, confirm if image quality improved after the firmware update please... can anyone comment as to whether or not PPRO will eventually support mvc file format please... the v3 plugin mentioned above...the company is planning/working on an mvc plugin for PPRO i believe it would be so very cool to get access to mvc in PPRO... cheers, j 14. Re: How to edit 3d footage with CS6?Rallymax-forum Jun 17, 2013 11:15 AM (in response to tfi productions 44) tfi productions 44 wrote: the v3 plugin mentioned above...the company is planning/working on an mvc plugin for PPRO i believe it would be so very cool to get access to mvc in PPRO... cheers, j Do you mean AVC + MVC extensions Import support (ie understanding AVC/MVC files) or do you mean AVC/MVC Export support or both? btw for all that don't know. MVC is an extension to the AVC standard that allows you to encode two views (thus _M_ulti _View_ Codec) into the stream with shared information. This is more efficient than encoding at FullHD3D where you have two 1920x1080 images stacked top/bottom, and has better image quality vs two horizontally compressed images at 960x1080 (960+960=1920) side by side (sbs),. 15. Re: How to edit 3d footage with CS6?John T Smith Aug 1, 2013 8:21 AM (in response to Rallymax-forum) has some discussion of a plugin for MVC in #7 16. Re: How to edit 3d footage with CS6?tfi productions 44 Aug 1, 2013 9:46 AM (in response to John T Smith) hello, thanks for the link, John. i just posted in that thread i'm thinking about switching over to vegaspro 12: their s3d workflow looks pretty good trying to find an affordable solution to author 3d blurays cheers, j 17. Re: How to edit 3d footage with CS6?Rallymax-forum Aug 1, 2013 10:24 AM (in response to tfi productions 44) tfi productions 44 wrote: hello, thanks for the link, John. i just posted in that thread i'm thinking about switching over to vegaspro 12: their s3d workflow looks pretty good trying to find an affordable solution to author 3d blurays cheers, j Premiere & Vegas do not support MVC export. Stereo3d Toolbox (S3D) _imports_ and _edits_ footage but doesn't provide a 3D format (like MVC) export. The only MVC encoder I'm aware of plug in to Sony DoStudio (~$5k) or Sony BluPrint (~$50k) Who else has a MVC 3D blu ray compliant encoder?
https://forums.adobe.com/thread/1229799
CC-MAIN-2018-05
refinedweb
2,348
70.02
. Return a new module object with the __name__ attribute set to name. Only the module’s __doc__ and __name__ attributes are filled in; the caller is responsible for providing a __file__ attribute. New in version 3.3. Similar to PyImport_NewObject(), but the name is an UTF-8 encoded string instead of a Unicode object. Return the dictionary object that implements module‘s namespace; this object is the same as the __dict__ attribute of the module object. This function never fails. It is recommended extensions use other PyModule_*() and PyObject_*() functions rather than directly manipulate a module’s __dict__. Return module‘s __name__ value. If the module does not provide one, or if it is not a string, SystemError is raised and NULL is returned. New in version 3.3. Similar to PyModule_GetNameObject() but return the name encoded to 'utf-8'. Return the name of the file from which module was loaded using module‘s __file__ attribute. If this is not defined, or if it is not a unicode string, raise SystemError and return NULL; otherwise return a reference to a Unicode object. New in version 3.2. Similar to PyModule_GetFilenameObject() but return the filename encoded to ‘utf-8’. Deprecated since version 3.2: PyModule_GetFilename() raises UnicodeEncodeError on unencodable filenames, use PyModule_GetFilenameObject() instead.().. Attaches the module object passed to the function to the interpreter state. This allows the module object to be accessible via PyState_FindModule(). New in version 3.3. Removes the module object created from def from the interpreter state. New in version 3.3. These functions are usually used in the module initialization function. Create a new module object, given the definition in module. This behaves like PyModule_Create2() with module_api_version set to PYTHON_API_VERSION. Create a new module object, given the definition in module, assuming the API version module_api_version. If that version does not match the version of the running interpreter, a RuntimeWarning is emitted. Note Most uses of this function should be using PyModule_Create() instead; only use this if you are sure you need it. This struct holds all information that is needed to create a module object. There is usually only one static variable of that type for each module, which is statically initialized and then passed to PyModule_Create() in the module initialization function. Always initialize this member to PyModuleDef_HEAD_INIT. Name for the new module. Docstring for the module; usually a docstring variable created with PyDoc_STRVAR() is used. Some modules allow re-initialization (calling their PyInit_* function more than once). These modules should keep their state in a per-module memory area that can be retrieved with PyModule_GetState(). This memory should be used, rather than static globals, to hold per-module state, since it is then safe for use in multiple sub-interpreters. It is freed when the module object is deallocated, after the m_free function has been called, if present. Setting m_size to -1 means that the module can not be re-initialized because it has global state. Setting it to a non-negative value means that the module can be re-initialized and specifies the additional amount of memory it requires for its state. See PEP 3121 for more details..
http://www.wingware.com/psupport/python-manual/3.3/c-api/module.html
CC-MAIN-2016-07
refinedweb
524
58.69
z Introduction z Simple Text Output z Assigning values to variables z Evaluation & Substitutions 1: Grouping arguments with "" z Evaluation & Substitutions 2: Grouping arguments with {} z Evaluation & Substitutions 3: Grouping arguments with [] z Results of a command - Math 101 z Numeric Comparisons 101 - if z Textual Comparison - switch z Looping 101 - While loop z Looping 102 - For and incr z Adding new commands to Tcl - proc z Variations in proc arguments and return values z Variable scope - global and upvar z Tcl Data Structures 101 - The list z Adding & Deleting members of a list z More list commands - lsearch, lsort, lrange z String Subcommands - length index range z String comparisons - compare match first last wordend z Modifying Strings - tolower, toupper, trim, format z Regular Expressions 101 z More Examples Of Regular Expressions z More Quoting Hell - Regular Expressions 102 z Associative Arrays z More On Arrays - Iterating and use in procedures z File Access 101 z Information about Files - file, glob z Invoking Subprocesses from Tcl - exec, open z Learning the existence of commands and variables ? - info z State of the interpreter - info z Information about procs - info z Modularization - source z Building reusable libraries - packages and namespaces z Creating Commands - eval z More command construction - format, list z Substitution without evaluation - format, subst z Changing Working Directory - cd, pwd z Debugging & Errors - errorInfo errorCode catch error return z More Debugging - trace z Command line arguments and environment strings z Leftovers - time, unset z Channel I/O: socket, fileevent, vwait z Time and Date - clock z More channel I/O - fblocked & fconfigure z Child interpreters Introduction Index | Next lesson Welcome to the Tcl tutorial. We wrote it with the goal of helping you to learn Tcl. It is aimed atthose who have some knowledge of programming, although you certainly don't have to be anexpert. The tutorial is intended as a companion to the Tcl manual pages which provide areference for all Tcl commands. It is divided into brief sections covering different aspects of the language. Depending on whatsystem you are on, you can always look up the reference documntation for commands that youare curious about. On Unix for example, man while would bring up the man page for the whilecommand. Each section is accompanied by relevant examples showing you how to put to use the materialcovered. Additional Resources The Tcl community is an exceedingly friendly one. It's polite to try and figure things outyourself, but if you're struggling, we're more than willing to help. Here are some good places toget help: Credits Thanks first and foremost to Clif Flynt for making his material available under a BSD license.The following people also contributed: z Neil Madden z Arjen Markus z David N. Welton Of course, we also welcome comments and suggestions about how it could be improved - or ifit's great the way it is, we don't mind a bit of thanks, either! The traditional starting place for a tutorial is the classic "Hello, World" program. Once you canprint out a string, you're well on your way to using Tcl for fun and profit! A single unit of text after the puts command will be printed to the standard output device (inthis case, the lower window).separated by whitespace are treated as multiple arguments to the command. Quotes andbraces can both be used to group several words into a single unit. However, they actuallybehave differently. In the next lesson you'll start to learn some of the differences between theirbehaviors. Note that in Tcl, single quotes are not significant, as they are in other programminglanguages such as C, Perl and Python. Many commands in Tcl (including puts) can accept multiple arguments. If a string is notenclosed in quotes or braces, the Tcl interpreter will consider each word in the string as aseparate argument, and pass each individually to the puts command. The puts command willtry. Example it places the second argument ("Cauliflower") in the memory space referenced by the firstargument (fruit). Set always returns the contents of the variable named in the first argument.Thus, when set is called with two arguments, it places the second argument in the memoryspace referenced by the first argument and then returns the second argument. In the aboveexample, for instance, it would return "Cauliflower", without the quotes. The first argument to a set command can be either a single word, like fruit or pi , or it can be amember of an array. Arrays will be discussed in greater detail later, for the time being justremember that many data can be collected under a single variable name, and an individualdatum can be accessed by its index within that array. Indexing into an array in Tcl is handledby putting the index within parentheses after the name of the variable. Set can also be invoked with only one argument. When called with just one argument, it willreturn the contents of that argument. If you look at the example code, you'll notice that in the set command the first argument istyped with only its name, but in the puts statement the argument is preceeded with a $. The dollar sign tells Tcl to use the value of the variable - in this case X or Y. Tcl passes data to subroutines either by name or by value. Commands that don't change thecontents of a variable usually have their arguments passed by value. Commands that dochange the value of the data must have the data passed by name. set Y 1.24 puts $Xputs $Y puts "..............................." This lesson is the first of three which discuss the way Tcl handles substitution during commandevaluation. In Tcl, the evaluation of a command is done is 2 phases. The first phase is a single pass ofsubstitutions. The second phase is the evaluation of the resulting command. Note that only onepass of substitutions is made. Thus in the command puts $varName the contents of the proper variable are substituted for $varName, and then the command isexecuted. Assuming we have set varName to "Hello World", the sequence would look likethis: puts $varName ⇒ puts "Hello World", which is then executed and prints out HelloWorld. A command within square brackets ([]) is replaced with the result of the execution of thatcommand. (This will be explained more fully in the lesson "Results of a Command - Math 101.") Words within double quotes or braces are grouped into a single argument. However, doublequotes and braces cause different behavior during the substitution phase. In this lesson, we willconcentrate on the behavior of double quotes during the substitution phase. Grouping words within double quotes allows substitutions to occur within the quotations - or, infancier terms, "interpolation". The substituted group is then evaluated as a single argument.Thus, in the command: the current contents of varName are substituted for $varName, and then the entire string isprinted to the output device, just like the example above. In general, the backslash (\) disables substitution for the single character immediately followingthe backslash. Any character immediately following the backslash will stand withoutsubstitution. However, there are specific "Backslash Sequence" strings which are replaced by specific valuesduring the substitution phase. The following backslash strings will be substituted as shownbelow. The final exception is the backslash at the end of a line of text. This causes the interpreter toignore the newline, and treat the text as a single line of text. The interpreter will insert a blankspace at the location of the ending backslash. set Z "Albany" set Z_LABEL "The Capitol of New York is: " During the substitution phase of command evaluation, the two grouping operators, the brace({) and the double quote ("), are treated differently by the Tcl interpreter. In the last lesson you saw that grouping words with double quotes allows substitutions to occurwithin the double quotes. By contrast, grouping words within double braces disablessubstitution within the braces. Characters within braces are passed to a command exactly aswritten. The only "Backslash Sequence" that is processed within braces is the backslash at theend of a line. This is still a line continuation character. Note that braces have this effect only when they are used for grouping (i.e. at the beginningand end of a sequence of words). If a string is already grouped, either with quotes or braces,and braces occur in the middle of the grouped string (i.e. "foo{bar"), then the braces aretreated as regular characters with no special meaning. If the string is grouped with quotes,substitutions will occur within the quoted string, even between the braces. You obtain the results of a command by placing the command in square brackets ([]). This isthe functional equivalent of the back single quote (`) in sh programming, or using the returnvalue of a function in C. As the Tcl interpreter reads in a line it replaces all the $variables with their values. If a portionof the string is grouped with square brackets, then the string within the square brackets isevaluated as a command by the interpreter, and the result of the command replaces the squarebracketed string. z The parser scans the entire command, and sees that there is a command substitution to perform: readsensor [selectsensor] , which is sent to the interpreter for evaluation. z The parser once again finds a command to be evaluated and substituted, selectsensor z The fictitious selectsensor command is evaluated, and it presumably returns a sensor to read. z At this point, readsensor has a sensor to read, and the readsensor command is evaluated. z Finally, the value of readsensor is passed on back to the puts command, which prints the output to the screen. set x "abc" puts "A simple substitution: $x n" The Tcl command for doing math type operations is expr. The following discussion of the exprcommand is extracted and adapted from the expr man page. Expr takes all of its arguments ("2 + 2" for example) and evaluates the result as a Tcl"expression" (rather than a normal command), and returns the value. The operators permittedin Tcl expressions include all the standard math functions, logical operators, bitwise operators,as well as math functions like rand(), sqrt(), cosh() and so on. Expressions almost always yieldnumeric results (integer or floating-point values). Performance tip: enclosing the arguments to expr in curly braces will result in faster code. Sodo expr {$i * 10} instead of simply expr $i * 10 OPERANDS Note that the octal and hexadecimal conversion takes place differently in the expr commandthan in the Tcl substitution phase. In the substitution phase, a \x32 would be converted to anascii "2", while expr would covert 0x32 to a decimal 50. If an operand does not have one of the integer formats given above, then it is treated as afloating-point number, if that is possible. Floating-point numbers may be specified in any of theways accepted by an ANSI-compliant C compiler. For example, all of the following are validfloating-point numbers: 2.1, 3., 6e4, 7.91e+16. If no numeric interpretation is possible, thenan operand is left as a string (and only a limited set of operators may be applied to it). OPERATORS The valid operators are listed below, grouped in decreasing order of precedence: -+~! Unary minus, unary plus, bit-wise NOT, logical NOT. None of these operators may be applied to string operands, and bit-wise NOT may be applied only, as in C. If x evaluates to non-zero, then the result is the value of y. Otherwise the result is the value of z. The x operand must have a numeric value. MATH FUNCTIONS TYPE CONVERSIONS Tcl supports the following functions to convert from one representation of a number to another: set X 100; set Y 256; set Z [expr "$Y + $X"] set Z_LABEL "$Y plus $X is " puts "Because of the precedence rules "5 + -3 * 4 " is: [expr -3 * 4 + 5]" puts "Because of the parentheses "(5 + -3) * 4 " is: [expr (5 + -3) * 4]" puts " n................. more examples of differences between " and {" puts {$Z_LABEL [expr $Y + $X]} puts "$Z_LABEL {[expr $Y + $X]}" puts "The command to add two numbers is: [expr $a + $b]" z if expr1 ?then? body1 elseif expr2 ?then? body2 elseif ... ?else? ?bodyN? The words then and else are optional, although generally then is left out and else is used. False Truea numeric value 0 all others yes/no no yes true/false false true If the test expression returns a string "yes"/"no" or "true"/"false", the case of the return is notchecked. True/FALSE or YeS/nO are legitimate returns. If the test expression evaluates to False, then the word after body1 will be examined. If thenext word is elseif, then the next test expression will be tested as a condition. If the next wordis else then the final body will be evaluated as a command. The test expression following the word if is evaluated in the same manner as in the exprcommand. Hex strings 0xXX will be converted to their numeric equivalent before evaluation. The test expression following if may be enclosed within quotes, or braces. If it is enclosedwithin braces, it will be evaluated within the if command, and if enclosed within quotes it will beevaluated during the substitution phase, and then another round of substitutions will be donewithin the if command. set x 1 if {$x != 1} { puts "$x is != 1" } else { puts "$x is 1" } set y x if "$$y != 1" { puts "$$y is != 1" } else { puts "$$y is 1" } The switch command allows you to choose one of several options in your code. It is similar toswitch in C, except that it is more flexible, because you can switch on strings, instead of justintegers. The string will be compared to a set of patterns, and when a pattern matches thestring, the code associated with that pattern will be evaluated. It's a good idea to use the switch command when you want to match a variable against severalpossible values, and don't want to do a long series of if... elseif ... elseif statements. - or - String is the string that you wish to test, and pattern1, pattern2, etc are the patterns thatthe string will be compared to. If string matches a pattern, then the code within the bodyassociated with that pattern will be executed. The return value of the body will be returned asthe return value of the switch statement. Only one pattern will be matched. If the last pattern argument is the string default, that pattern will match any string. Thisguarantees that some set of code will be executed no matter what the contents of string are. If there is no default argument, and none of the patterns match string, then the switchcommand will return an empty string. If you use the brace version of this command, there will be no substitutions done on thepatterns. The body of the command, however, will be parsed and evaluated just like any othercommand, so there will be a pass of substitutions done on that, just as will be done in the firstsyntax. The advantage of the second form is that you can write multiple line commands morereadably with the brackets. Note that you can use braces to group the body argument when using the switch or ifcommands. This is because these commands pass their body argument to the Tcl interpreter forevaluation. This evaluation includes a pass of substitutions just as it does for code not within acommand body argument. set x "ONE" set y 1 set z "ONE""; Tcl includes two commands for looping, the while and for commands. Like the if statement,they evaluate their test the same way that the expr does. In this lesson we discuss the whilecommand, and in the next lesson, the for command. In most circumstances where one of thesecommands can be used, the other can be used as well. The while command evaluates test as an expression. If test is true, the code in body isexecuted. After the code in body has been executed, testis evaluated again. A continue statement within body will stop the execution of the code and the test will be re-evaluated. A break within body will break out of the while loop, and execution will continuewith the next line of code after body In Tcl everything is a command, and everything goes through the same substitution phase.For this reason, the test must be placed within braces. If test is placed within quotes, thesubstitution phase will replace any variables with their current value, and will pass that test tothe while command to evaluate, and since the test has only numbers, it will always evaluatethe same, quite probably leading to an endless loop! Look at the two loops in the example. If it weren't for the break command in the second loop, itwould loop forever. # The next example shows the difference between ".." and {...} # How many times does the following loop run? Why does it not # print on each pass? set x 0 while "$x < 5" { set x [expr $x + 1] if {$x > 7} break if "$x > 3" continue puts "x is $x" } Tcl supports an iterated loop construct similar to the for loop in C. The for command in Tcltakes four arguments; an initialization, a test, an increment, and the body of code to evaluateon each pass through the loop. The syntax for the for command is: During evaluation of the for command, the start code is evaluated once, before any otherarguments are evaluated. After the start code has been evaluated, the test is evaluated. If thetest evaluates to true, then the body is evaluated, and finally, the next argument is evaluated.After evaluating the next argument, the interpreter loops back to the test, and repeats theprocess. If the test evaluates as false, then the loop will exit immediately. Start is the initialization portion of the command. It is usually used to initialize the iterationvariable, but can contain any code that you wish to execute before the loop starts. The test argument is evaluated as an expression, just as with the expr while and if commands. Next is commonly an incrementing command, but may contain any command which the Tclinterpreter can evaluate. Since you commonly do not want the Tcl interpreter's substitution phase to change variables totheir current values before passing control to the for command, it is common to group thearguments with curly braces. When braces are used for grouping, the newline is not treated asthe end of a Tcl command. This makes it simpler to write multiple line commands. However, theopening brace must be on the line with the for command, or the Tcl interpreter will treat theclose of the next brace as the end of the command, and you will get an error. This is differentthan other languages like C or Perl, where it doesn't matter where you place your braces. Within the body code, the commands break and continue may be used just as they are usedwith the while command. When a break is encountered, the loop exits immediately. When acontinue is encountered, evaluation of the body ceases, and the test is re-evaluated. Because incrementing the iteration variable is so common, Tcl has a special command for this: This command adds the value in the second argument to the variable named in the firstargument. If no value is given for the second argument, it defaults to 1. set i 0incr i# This is equivalent to:set i [expr $i + 1] In Tcl there is actually no distinction between commands (often known as 'functions' in otherlanguages) and "syntax". There are no reserved words (like if and while) as exist in C, Java,Python, Perl, etc... When the Tcl interpreter starts up there is a list of known commands thatthe interpreter uses to parse a line. These commands include while, for, set, puts, and soon. They are, however, still just regular Tcl commands that obey the same syntax rules as allTcl commands, both built-in, and those that you create yourself with the proc command. The proc command creates a new command. The syntax for the proc command is: When proc is evaluated, it creates a new command with name name that takes argumentsargs. When the procedure name is called, it then runs the code contained in body. Args is a list of arguments which will be passed to name. When name is invoked, localvariables with these names will be created, and the values to be passed to name will be copiedto the local variables. The value that the body of a proc returns can be defined with the return command. The returncommand will return its argument to the calling program. If there is no return, then body willreturn to the caller when the last of its commands has been executed. The return value of thelast command becomes the return value of the procedure. proc for {a b c} { puts "The for command has been replaced by a puts"; puts "The arguments were: $a n$b n$c n" } A proc can be defined with a set number of required arguments (as was done with sum in theprevious lesson, or it can have a variable number of arguments. An argument can also bedefined to have a default value. Variables can be defined with a default value by placing the variable name and the defaultwithin braces within args. For example: Since there are default arguments for the b and c variables, you could call the procedure one ofthreeargs. If the last argument to a proc argument list is args, then any arguments that aren'talready assigned to previous variables will be assigned to args. The example procedure below is defined with three arguments. At least one argument *must*be present when example is called. The second argument can be left out, and in that case itwill default to an empty string. By declaring args as the last argument, example can take avariable number of arguments. Note that if there is a variable other than args after a variable with a default, then the defaultwill never be used. For example, if you declare a proc such as:proc function { a {b 1} c} {...}, you will always have to call it with 3 arguments. Tcl assigns values to a proc's variables in the order that they are listed in the command. If youprovide 2 arguments when you call function they will be assigned to a and b, and Tcl willgenerate an error because c is undefined. You can, however, declare other arguments that may not have values as coming after anargument with a default value. For example, this is valid: In this case, example requires one argument, which will be assigned to the variable required.If there are two arguments, the second arg will be assigned to default1. If there are 3arguments, the first will be assigned to required, the second to default1, and the third todefault2. If example is called with more than 3 arguments, all the arguments after the thirdwill be assigned to args. puts "The example was called with $count1, $count2, $count3, and $count4 Arguments" Tcl evaluates a variable name within one of two scopes: the local scope within a proc, and aglobal scope (the code and variables outside of any proc). Like C, Tcl defines only one globalspace. The scope in which a variable will be evaluated can be changed with the global or upvarcommand. The global command will cause a variable in a local scope to be evaluated in the global scopeinstead. The upvar command behaves similarly. Upvar ties the name of a variable in the current scopeto a variable in a different scope. This is commonly used to simulate pass-by-reference toprocs. Upvar causes myVar1 to become a reference to otherVar1, and myVar2 to become a referenceto otherVar2, etc. The otherVar variable is declared to be at level relative to the currentprocedure. By default level is 1, the next level up. If a number is used for the level, then level references that many levels up the stack from thecurrent level. If the level number is preceeded by a # symbol, then it references that many levels down fromthe global scope. If level is #0, then the reference is to a variable at the global level. My personal opinion is that using upvar with anything except #0 or 1 is asking for trouble. The use of global is hard to avoid, but you should avoid having too many global variables. Ifyou start needing lots of globals, you may want to look at your design again. Note that since there is only one global space it is surprisingly easy to have name conflicts ifyou are importing other peoples code and aren't careful. It is recommended that you startglobal variables with an identifiable prefix to help avoid unexpected conflicts. SetPositive x 5 SetPositive y -5puts "X : $x Y: $y n" The list is the basic data structure to Tcl. A list is simply an ordered collection of stuff; numbers,words, strings, etc. For instance, a command in Tcl is just a list in which the first list entry is thename of a proc, and subsequent members of the list are the arguments to the proc. The items in list can be iterated through using the foreach command. Foreach will execute the body code one time for each list item in list. On each pass, varnamewill contain the value of the next list item. set i 0;foreach j $x { puts "$j is item number $i in list x" incr i;} Take a look at the example code, and pay special attention to the way that sets of charactersare grouped into single list elements. Lists can be searched with the lsearch command, sorted with the lsort command, and a rangeof list entries can be extracted with the lrange command. By default, lsearch uses the globbing method of finding a match. Globbing is the wildcardingtechnique that most Unix shells use. *. One feature of Tcl is that commands may have subcommands. String is an example of one ofthese. The string command treats the first argument as a subcommand. This lesson coversthese string subcommands: puts " "[string range $string 5 10] " are characters between the 5'th and 10'th" *. if {$first != 0} { puts "$path is a relative path" } else { puts "$path is an absolute path" } # If "/" is not the last character in $path, report the last word. # else, remove the last "/", and find the next to last "/", and # report the last word. incr last if {$last != [string length $path]} { set name [string range $path $last end] puts "The file referenced in $path is $name" } else { incr last -2; set tmp [string range $path 0 $last] set last [string last "/" $tmp] incr last; set name [string range $tmp $last end] puts "The final directory in $path is $name" } # Compare to "a" to determine whether the first char is upper or lower case set comparison [string compare $name "a"] if {$comparison >= 0} { puts "$name starts with a lowercase letter n" } else { puts "$name starts with an uppercase letter n" }} These are the commands which modify a string. Note that none of these modify the string inplace. In all cases a new string is returned. tolower string Returns string with all the letters converted from upper to lower case.toupper string Returns string with all the letters converted from lower to upper case.trim string ?trimChars? Returns string with all occurrences of trimChars removed from both ends. By default trimChars are whitespace (spaces, tabs, newlines)trimleft string ?trimChars? Returns string with all occurrences of trimChars removed from the left. By default trimChars are whitespace (spaces, tabs, newlines)trimright string ?trimChars? Returns string with all occurrences of trimChars removed from the right. By default trimChars are whitespace (spaces, tabs, newlines)format formatString ?arg1 arg2 ... argN? Returns a string formatted in the same manner as the ANSI sprintf procedure. FormatString is a description of the formatting to use. The full definition of this protocol is in the format man page. A useful subset of the definition is that formatString consists of literal words, backslash sequences, and % fields. The % fields are strings which start with a % and end with one of: z s... Data is a string z d... Data is a decimal integer z x... Data is a hexadecimal integer z o... Data is an octal integer z f... Data is a floating point number The % may be followed by z -... Left justify the data in this field z +... Right justify the data in this field The justification value may be followed by a number giving the minimum number of spaces to use for the data. There are also two explicit commands for parsing regular expressions. ^ Matches the beginning of a string$ Matches the end of a string. Matches any single character* Matches any count (0-n) of the previous character+ Matches any count, but at least 1 of the previous character[...] Matches any character of a set of characters[^...] Matches any character *NOT* a member of the set of characters following the ^.(...) Groups a set of characters into a subSpec. Regular expressions are similar to the globbing that was discussed in lessons 16 and 18. Themain difference is in the way that sets of matched characters are handled. In globbing the onlyway to select sets of unknown text is the * symbol. This matches to any quantity of anycharacter. In regular expression parsing, the * symbol matches zero or more occurrences of the characterimmediately proceeding the *. For example a* would match a, aaaaa, or a blank string. If thecharacter directly before the * is a set of characters within square brackets, then the * willmatch any quantity of all of these characters. For example, [a-c]* would match aa, abc,aabcabc, or again, an empty string. The + symbol behaves roughly the same as the *, except that it requires at least one characterto match. For example, [a-c]+ would match a, abc, or aabcabc, but not an empty string. Regular expression parsing is more powerful than globbing. With globbing you can use squarebrackets to enclose a set of characters any of which will be a match. Regular expression parsingalso includes a method of selecting any character not in a set. If the first character after the [ isa caret (^), then the regular expression parser will match any character not in the set ofcharacters between the square brackets. A caret can be included in the set of characters tomatch (or not) by placing it in any position other than the first. The regexp command is similar to the string match command in that it matches an expagainst a string. It is different in that it can match a portion of a string, instead of the entirestring, and will place the characters matched into the matchVar variable. If a match is found to the portion of a regular expression enclosed within parentheses, regexpwill copy the subset of matching characters is to the subSpec argument. This can be used toparse simple strings. Regsub will copy the contents of the string to a new variable, substituting the characters thatmatch exp with the characters in subSpec. If subSpec contains a & or 0, then thosecharacters will be replaced by the characters that matched exp. If the number following abackslash is 1-9, then that backslash sequence will be replaced by the appropriate portion ofexp that is enclosed within parentheses. Note that the exp argument to regexp or regsub is processed by the Tcl substitution pass.Therefore quite often the expression is enclosed in braces to prevent any special processing byTcl. # #]" We start with a simple yet non-trivial example: finding floating-point numbers in a line of text.Do not worry: we will keep the problem simpler than it is in its full generality. We only considernumbers like 1.0 and not 1.00e+01. How do we design our regular expression for this problem? By examining typical examples ofthe strings we want to match: z Invalid numbers (that is, strings we do not want to recognise as numbers but superficially look like them): We will accept them - because they normally are accepted and because excluding them makes our pattern more complicated. z A number can start with a sign (- or +) or with a digit. This can be captured with the expression [-+]?, which matches a single "-", a single "+" or nothing. z A number can have zero or more digits in front of a single period (.) and it can have zero or more digits following the period. Perhaps: [0-9]* .[0-9]* will do ... z A number may not contain a period at all. So, revise the previous expression to: [0-9] * .?[0-9]* [-+]?[0-9]* .?[0-9]* 1. Try the expression with a bunch of examples like the ones above and see if the proper ones match and the others do not. 2. Try to make it look nicer, before we start off testing it. For instance the class of characters "[0-9]" is so common that it has a shortcut, "\d". So, we could settle for: [-+]? d* .? d* instead. Or we could decide that we want to capture the digits before and after the period for special processing: [-+]?([0-9])* .?([0-9]*) 3. Or, and that may be a good strategy in general!, we can carefully examine the pattern before we start actually using it. You see, there is a problem with the above pattern: all the parts are optional, that is, each partcan match a null string - no sign, no digits before the period, no period, no digits after theperiod. In other words: Our pattern can match an empty string! Our questionable numbers, like "+000" will be perfectly acceptable and we (grudgingly) agree.But more surprisingly, the strings "--1" and "A1B2" will be accepted too! Why? Because thepattern can start anywhere in the string, so it would match the substrings "-1" and "1"respectively! z The character before a minus or a plus, if there is any, can not be another digit, a period or a minus or plus. Let us make it a space or a tab or the beginning of the string: ^| [ t] z Any sequence of digits before the period (if there is one) is allowed: [0-9]+ .? z There may be zero digits in front of the period, but then there must be at least one digit behind it: .[0-9]+ z And of course digits in front and behind the period: [0-9]+ .[0-9]+ z The character after the string (if any) can not be a "+","-" or "." as that would get us into the unacceptable number-like strings: $|[^+-.] (The dollar sign signifies the end of the string) Before trying to write down the complete regular expression, let us see what different forms wehave: z No period: [-+]?[0-9]+ z A period without digits before it: [-+]? .[0-9]+ z Digits before a period, and possibly digits after it: [-+]?[0-9]+ .[0-9]* Or: The parentheses are needed to distinguish the alternatives introduced by the vertical bar and tocapture the substring we want to have. Each set of parentheses also defines a substring andthis can be put into a separate variable: # # Or simply only the recognised number (x's as placeholders, the # last can be left out # regexp {.....} $line x x number Tip: To identify these substrings: just count the opening parentheses from left to right. So our pattern correctly accepts the strings we intended to be recognised as numbers andrejects the others. z Text enclosed in a string: This is "quoted text". If we know the enclosing character in advance (double quotes, " in this case), then "([^"])*" will capture the text inside the double quotes. Suppose we do not know the enclosing character (it can be " or '). Then: (The pattern \y matches the beginning or the end of a word and \w+ indicates we want at least one character). z Suppose you need to check the parentheses in some mathematical expression: (1+a)/ (1-b*x) for instance. A simple check is counting the open and close parentheses: # # Use the return value of [regexp] to count the number of # parentheses ... # if { [regexp -all {(} $string] != [regexp -all {)} $string] } { puts "Parentheses unbalanced!" } Of course, this is just a rough check. A better one is to see if at any point while scanning the string there are more close parentheses than open parentheses. We can easily extract the parentheses and put them in a list (the -inline option does that): foreach p $parens { incr balance $change($p) if { $balance < 0 } { puts "Parentheses unbalanced!" } } Finally: Regular expressions are very powerful, but they have certain theoretical limitations.One of these limitations is that they are not suitable for parsing arbitrarily nested text. You can experiment with regular expressions using the VisualRegexp or Visual REGEXPapplications. More on the theoretical background and practical use of regular expressions (there is lots tocover!) can be found in the book Mastering Regular Expressions by J. Friedl. Previous lesson | Index | Next lesson More Quoting Hell - Regular Expressions 102 Previous lesson | Index | Next lesson The regular expression (exp) in the two regular expression parsing commands is evaluated bythe Tcl parser during the Tcl substitution phase. This can provide a great deal of power, andalso requires a great deal of care. These examples show some of the trickier aspects of regular expression evaluation. The fieldsin each example are discussed in painful detail in the most verbose level. z A left square bracket ([) has meaning to the substitution phase, and to the regular expression parser. z A set of parentheses, a plus sign, and a star have meaning to the regular expression parser, but not the Tcl substitution phase. z A backslash sequence (\n, \t, etc) has meaning to the Tcl substitution phase, but not to the regular expression parser. z A backslash escaped character (\[) has no special meaning to either the Tcl substitution phase or the regular expression parser. The phase at which a character has meaning affects how many escapes are necessary to matchthe character you wish to match. An escape can be either enclosing the phrase in braces, orplacing a backslash before the escaped character. To pass a left bracket to the regular expression parser to evaluate as a range of characterstakes 1 escape. To have the regular expression parser match a literal left bracket takes 2escapes (one to escape the bracket in the Tcl substitution phase, and one to escape the bracketin the regular expression parsing.). If you have the string placed within quotes, then abackslash that you wish passed to the regular expression parser must also be escaped with abackslash.}] ## Extracting a hexadecimal value ...#set line {Interrupt Vector? [32(0x20)]}regexp " [^ t]+ t [ [0-9]+ (0x( [0-9a-fA-F]+) )]" $line match hexvalputs "Hex Default is: 0x$hexval" ## Matching the special characters as if they were ordinary#set str2 "abc^def"regexp " [^a-f]*def" $str2 matchputs "using [^a-f] the match is: $match" Languages like C, BASIC, FORTRAN and Java support arrays in which the index value is aninteger. Tcl, like most scripting languages (Perl, Python, PHP, etc...) supports associative arrays(also known as "hash tables") in which the index value is a string. The syntax for an associative array is to put the index within parentheses: There are several array commands aside from simply accessing and creating arrays which willbe discussed in this and the next lesson. When an associative array name is given as the argument to the global command, all theelements of the associative array become available to that proc. For this reason, Brent Welchrecommends (in Practical Programming in Tcl and Tk) using an associative array for the statestructure in a package. This method makes it simpler to share data between many procs that are working together, anddoesn't pollute the global namespace as badly as using separate globals for all shared dataitems. Another common use for arrays is to store tables of data. In the example below we use anarray to store a simple database of names. # Create a new ID (stored in the name array too for easy access) incr name(ID) set id $name(ID) ## Initialise the array and add a few names#global nameset name(ID) 0 ## the following entries: n [array names array1] n" # # Get names and values directly # foreach {name value} [array get mydata] { puts "Data on "$name ": $value" } Note, however, that the elements will not be returned in any predictable order: this has to dowith the underlying "hash table". If you want a particular ordering (alphabetical for instance),use code like: While arrays are great as a storage facility for some purposes, they are a bit tricky when youpass them to a procedure: they are actually collections of variables. This will not work: print12 $array The reason is very simple: an array does not have a value. Instead the above code should be: print12 array So, instead of passing a "value" for the array, you pass the name. This gets aliased (via theupvar command) to a local variable (that behaves the as original array). You can make changesto the original array in this way too. ## The example of the previous lesson revisited - to get a# more general "database"# # Create a new ID (stored in the name array too for easy access) incr name(ID) set id $name(ID) # Loop over the last names: make a map from last nameset historical_name(ID) 0 ## Some simple reporting#puts "Fictional characters:"report fictional_nameputs "Historical characters:"report historical_name These methods can also be used for communicating over sockets or over pipes. It is evenpossible, via the so-called virtual file system to use files stored in memory rather than on disk.Tcl provides an almost uniform interface to these very different resources, so that in generalyou do not need to concern yourself with the details. If there is a varName argument, gets returns the number of characters read (or -1 if an EOF occurs), and places the line of input in varName. If varName is not specified, gets returns the line of input. An empty string will be returned if: z There is a blank line in the file. z The current location is at the end of the file. (An EOF occurs.)puts ?-nonewline? ?fileID? string Writes the characters in string to the stream referenced by fileID. z The file I/O is buffered. The output may not be sent out when you expect it to be sent. Files will all be closed and flushed when your program exits normally, but may only be closed (not flushed) if the program is terminated in an unexpected manner. z There are a finite number of open file slots available. If you expect the program to run in a manner that will cause it to open several files, remember to close the files when you are done with them. z An empty line is indistinguishable from an EOF with the command: set string [gets filename]. Use the eof command to determine if the file is at the end or use the other form of gets (see the example). z You can't overwrite any data in a file that was opened with a access. You can, however seek to the beginning of the file for gets commands. z Opening a file with the w+ access will allow you to overwrite data, but will delete all existing data in the file. z Opening a file with the r+ access will allow you to overwrite data, while saving the existing data in the file. z By default the commands assume that strings represent "readable" text. If you want to read "binary" data, you will have to use the fconfigure command. z Often, especially if you deal with configuration data for your programs, you can use the source command instead of the relatively low-level commands presented here. Just make sure your data can be interpreted as Tcl commands and "source" the file.Example ## Count the number of lines in a text file#set infile [open "myfile.txt" r]set number 0 ## gets with two arguments returns the length of the line,# -1 if the end of the file is found#while { [gets $infile line] >= 0 } { incr number}close $infile ## Also report it in an external file#set outfile [open "report.out" w]puts $outfile "Number of lines: $number"close $outfile Glob provides the access to the names of files in a directory. It uses a name matchingmechanism similar to ls, to return a list of names that match a pattern. Between these two commands, a program can obtain most of the information that it may need. z open ...... run a new program with I/O connected to a file descriptor z exec ...... run a new program as a subprocess The open call is the same call that is used to open a file. If the first character in the file nameargument is a pipe symbol (|), then open will treat the rest of the argument as a programname, and will exec that program with the standard input or output connected to a filedescriptor. A pipe can be opened to a sub-process for reading, writing or both reading andwriting. If the file is opened for both reading and writing you must be aware that the pipes are buffered.The output from a puts command will be saved in an I/O buffer until the buffer is full, or untilyou execute a flush command to force it to be transmitted to the subprocess. The output of thesubprocess will not be available to a read or gets until the I/O buffer for the subprocess hasfilled its output buffer. The exec call is similar to invoking a program ( or a set of programs piped together) from theshell prompt or in a unix shell script. It supports several styles of output redirection, or it canreturn the output of the sub-process as the return of the exec call. Switches are: -keepnewline Retains a trailing newline in the pipeline's output. Normally a trailing newline will be deleted. -- Marks the end of the switches. The next string will be treated as arg1, even if it starts with a "-" If you are familiar with shell programming, there are a few differences to be aware of when youare writing Tcl scripts that use the exec and open calls. z You don't need the quotes that you would put around arguments to escape them from the shell expanding them. In the example, the argument to sed is not put in quotes. If it were put in quotes, the quotes would be passed to sed, instead of being stripped off (as the shell does), and sed would report an error. z If you use the open |cmd "r+" construct, you must follow each puts with a flush to force Tcl to send the command from its buffer to the program. The output from the subprocess may be buffered in its output buffer. You can sometimes force the output from the sub-process to flush by sending an exit command to the process. You can also use the fconfigure command to make a channel unbuffered. The expect extension to Tcl provides a much better interface to other programs, which handles the buffering problem. z If one of the commands in an open |cmd fails the open does not return an error. However, attempting to read input from the file descriptor with gets $file will return an empty string. Using the gets $file input construct will return a character count of -1. Put quotes around the s/.Q//g in the example to see this behavior. z If one of the commands in an exec call fails to execute, the exec will return an error, and the error output will include the last line describing the error. puts $outfl { set len [gets stdin line] if {$len < 5} {exit -1} # Clean upfile delete $tempFileName This lesson covers the info subcommands that return information about which procs, variables,or commands are currently in existence in this instance of the interpreter. By using thesesubcommands you can determine if a variable or proc exists before you try to access it. The example code shows how to use the info exists command to make an incr that will neverreturn a no such variable error, since it checks to be certain that the variable exists beforeincrementing it. set a 100 safeIncr a puts "After calling SafeIncr with a variable with a value of 100: $a" safeIncr b -3puts "After calling safeIncr with a non existent variable by -3: $b" set b 100safeIncr b -3puts "After calling safeIncr with a variable whose value is 100 by -3: $b" puts " nThese variables have been defined: [lsort [info vars]]"puts " nThese globals have been defined: [lsort [info globals]]" proc localproc {} { global argv; set loc1 1; set loc2 2; puts " nLocal variables accessible in this proc are: [lsort [info locals]]" puts " nVariables accessible from this proc are: [lsort [info vars]]" puts " nGlobal variables visible from this proc are: [lsort [info globals]]" } localproc; The info tclversion and info patchlevel can be used to find out if the revision level of theinterpreter running your code has the support for features you are using. If you know thatcertain features are not available in certain revisions of the interpreter, you can define your ownprocs to handle this, or just exit the program with an error message. The info cmdcount and info level can be used while optimizing a Tcl script to find out howmany levels and commands were necessary to accomplish a function. Note that the pid command is not part of the info command, but a command in its own right. Commands that return information about the current state of the interpreter number is a positive value, info level returns a the name and arguments of the proc at that level on the stack. Number is that same value that file level would return if it were called in the proc being referenced. If number number is a negative value, it refers to the current level plus number. Thus, , info level returns a the name and arguments of the proc at that level on the stack. info patchlevel Returns the value of the global variable tcl_patchlevel. This is the revision level of this interpreter.info script Returns the name of the file currently being evaluated, if one is being evaluated. If there is no file being evaluated, returns an empty string.info tclversion Returns the value of the global variable tcl_version. This is the patch level of this interpreter.pid Returns the pid of the current Tcl interpreter. puts "This is how many commands have been executed: [info cmdcount]" puts "Now *THIS* many commands have been executed: [info cmdcount]" puts "The args for demo are: [info args demo] n" puts "The body for demo is: [info body demo] n" The source command will load a file and execute it. This allows a program to be broken up intomultiple files, with each file defining procedures and variables for a particular area offunctionality. For instance, you might have a file called database.tcl that contains all theprocedures for dealing with a database, or a file called gui.tcl that handles creating agraphical user interface with Tk. The main script can then simply include each file using thesource command. More powerful techniques for program modularization are discussed in thenext lesson on packages. source fileName Reads the script in fileName and executes it. If the script executes successfully, source returns the value of the last statement in the script. If there is an error in the script, source will return that error. If there is a return (other than within a proc definition) then source will return immediately, without executing the remainder of the script. If fileName starts with a tilde (~) then $env(HOME) will substituted for the tilde, as is done in the file command. sourcedata.tcl: sourcemain.tcl: The previous lesson showed how the source command can be used to separate a program intomultiple files, each responsible for a different area of functionality. This is a simple and usefultechnique for achieving modularity. However, there are a number of drawbacks to using thesource command directly. Tcl provides a more powerful mechanism for handling reusable unitsof code called packages. A package is simply a bundle of files implementing some functionality,along with a name that identifies the package, and a version number that allows multipleversions of the same package to be present. A package can be a collection of Tcl scripts, or abinary library, or a combination of both. Binary libraries are not discussed in this tutorial. Using packages The package command provides the ability to use a package, compare package versions, andto register your own packages with an interpreter. A package is loaded by using the packagerequire command and providing the package name and optionally a version number. The firsttime a script requires a package Tcl builds up a database of available packages and versions. Itdoes this by searching for package index files in all of the directories listed in the tcl_pkgPathand auto_path global variables, as well as any subdirectories of those directories. Each packageprovides a file called pkgIndex.tcl that tells Tcl the names and versions of any packages inthat directory, and how to load them if they are needed. It is good style to start every script you create with a set of package require statements toload any packages required. This serves two purposes: making sure that any missingrequirements are identified as soon as possible; and, clearly documenting the dependenciesthat your code has. Tcl and Tk are both made available as packages and it is a good idea toexplicitly require them in your scripts even if they are already loaded as this makes your scriptsmore portable and documents the version requirements of your script. Creating a package The first step is to add a package provide statement to your script. It is good style to placethis statement at the top of your script. The package provide command tells Tcl the name ofyour package and the version being provided. The next step is to create a pkgIndex.tcl file. This file tells Tcl how to load your package. Inessence the index file is simply a Tcl file which is loaded into the interpreter when Tcl searchesfor packages. It should use the package ifneeded command register a script which will loadthe package when it is required. The pkgIndex.tcl file is evaluated globally in the interpreterwhen Tcl first searches for any package. For this reason it is very bad style for an index scripttopkg_mkIndex command scans files which match a given pattern in a directory looking forpackage provide commands. From this information it generates an appropriate pkgIndex.tclfile in the directory. Once a package index has been created, the next step is to move the package to somewherethat Tcl can find it. The tcl_pkgPath and auto_path global variables contain a list of directoriesthat Tcl searches for packages. The package index and all the files that implement the packageshould be installed into a subdirectory of one of these directories. Alternatively, the auto_pathvariable can be extended at run-time to tell Tcl of new places to look for packages. Namespaces One problem that can occur when using packages, and particularly when using code written byothers is that of name collision. This happens when two pieces of code try to define a procedureor variable with the same name. In Tcl when this occurs the old procedure or variable is simplyoverwritten. This is sometimes a useful feature, but more often it is the cause of bugs if the twodefinitions are not compatible. To solve this problem, Tcl provides a namespace command toallow commands and variables to be partitioned into separate areas, called namespaces. Eachnamespace can contain commands and variables which are local to that namespace and cannotbe overwritten by commands or variables in other namespaces. When a command in anamespace is invoked it can see all the other commands and variables in its namespace, as wellas those in the global namespace. Namespaces can also contain other namespaces. This allowsa hierachy of namespaces to be created in a similar way to a file system hierachy, or the Tkwidget hierachy. Each namespace itself has a name which is visible in its parent namespace.Items in a namespace can be accessed by creating a path to the item. This is done by joiningthe names of the items with ::. For instance, to access the variable bar in the namespace foo,you could use the path foo::bar. This kind of path is called a relative path because Tcl will tryto follow the path relative to the current namespace. If that fails, and the path represents acommand, then Tcl will also look relative to the global namespace. You can make a path fully-qualified by describing its exact position in the hierachy from the global namespace, which isnamed ::. For instance, if our foo namespace was a child of the global namespace, then thefully-qualified name of bar would be ::foo::bar. It is usually a good idea to use fully-qualifiednames when referring to any item outside of the current namespace to avoid surprises. A namespace can export some or all of the command names it contains. These commands canthen be imported into another namespace. This in effect creates a local command in the newnamespace which when invoked calls the original command in the original namespace. This is auseful technique for creating short-cuts to frequently used commands from other namespaces.In general, a namespace should be careful about exporting commands with the same name asany built-in Tcl command or with a commonly used name. Some of the most important commands to use when dealing with namespaces are: # Set up state variable stack variable id 0 } # Destroy a stack proc ::tutstack::destroy {token} { variable stack unset stack($token) } if {[empty $token]} { error "stack empty" } tutstack::destroy $stack A tcl command is defined as a list of strings in which the first string is a command or proc. Anystring or list which meets this criteria can be evaluated and executed. The eval command will evaluate a list of strings as though they were commands typed at the% prompt or sourced from a file. The eval command normally returns the final value of thecommands being evaluated. If the commands being evaluated throw an error (for example, ifthere is a syntax error in one of the strings), then eval will will throw an error. Note that either concat or list may be used to create the command string, but that these twocommands will create slightly different command strings. puts " nThe body of newProcA is: n[info body newProcA] n" # # Defind a proc using lists # eval $cmd } puts " nThe body of newProcB is: n[info body newProcB] n" puts "newProcB returns: [newProcB]"Previous lesson | Index | Next lesson More command construction - format, list Previous lesson | Index | Next lessonThere may be some unexpected results when you try to compose command strings for eval. For instance eval puts OK The reason that the second command generates an error is that the eval uses concat to mergeits arguments into a command string. This causes the two words Not OK to be treated as twoarguments to puts. If there is more than one argument to puts, the first argument must be afile pointer. As long as you keep track of how the arguments you present to eval will be grouped, you canuse many methods of creating the strings for eval, including the string commands andformat. The recommended methods of constructing commands for eval is to use the list and lappendcommands. These commands become difficult to use, however if you need to put braces in thecommand, as was done in the previous lesson. The example from the previous lesson is re-implemented in the example code using lappend. The completeness of a command can be checked with info complete. Info complete can alsobe used in an interactive program to determine if the line being typed in is a completecommand, or the user just entered a newline to format the command better. set tmpFileNum 0; Subst performs a substitution pass without performing any execution of commands exceptthose required for the substitution to occur, ie: commands within [] will be executed, and theresults placed in the return string. The format command can also be used to force some levels of substitution to occur. If any of the -no... arguments are present, then that set of substitutions will not be done. set a "alpha" set b a set num 0; set cmd "proc tempFileName {} " set cmd [format "%s {global num; incr num;" $cmd] set cmd [format {%s return "/tmp/TMP.%s.$num"} $cmd [pid] ] set cmd [format "%s }" $cmd ] eval $cmd set a arrayname set b index set c newvalue eval [format "set %s(%s) %s" $a $b $c] These are: cd ?dirName? Changes the current directory to dirName (if dirName is given, or to the $HOME directory if dirName is not given. If dirname is a tilde (~, cd changes the working directory to the users home directory. If dirName starts with a tilde, then the rest of the characters are treated as a login id, and cd changes the working directory to that user's $HOME.pwd Returns the current directory. When a command executes correctly, the return status is TCL_OK. When an error occurs withina Tcl command, it returns TCL_ERROR instead of TCL_OK. When this occurs, the Tcl commandthat had the error places an informational string in the global variable errorInfo and returns astatus of TCL_ERROR to the calling command. As the Tcl call stack unwinds, each Tcl commandappends an informational message to the global variable errorInfo, and returns TCL_ERROR tothe command above it. This actually occurs when any exception condition occurs, including break and continue.Break and continue normally occur within a loop of some sort, and the loop command catchesthe exception and processes it properly. Interpreted Tcl code can also catch exceptions. If a Tcl command is the argument to the catchcommand, any exception that the command generates is captured and returned. At this pointthe calling proc can decide how to handle the event. For example, if an open call returns an error, the user could be prompted to provide anotherfile name. A Tcl proc can also generate an error status condition. This can be done by specifying an errorreturn with an option to the return command, or by using the error command. In either case,a message will be placed in errorInfo, and the proc will return a TLC_ERROR status. If info or code are provided, the errorInfo and errorCode variables are initialized with these values. catch errorproc puts "after bad proc call:" catch {withError 2} puts "after proc with an error: ErrorCode: $errorCode" puts "ERRORINFO: n$errorInfo n" The trace command executes at the same stack level as the access to the variable. The procthat trace invokes is one stack level lower. Thus, with the uplevel command, a procedure calledvia a trace can report on the conditions that were set when a variable was accessed. A variable can be unset either explicitly with the unset command, or implicitly when a procedure returns, and all of the local variables are released. set i 2 set k $j; } set i2 "testvalue" For instance, a script that extracts a particular value from a file could be written so that itprompts for a file name, reads the file name, and then extracts the data. Or, it could be writtento loop through as many files as are in the command line, and extract the data from each file,and print the file name and data. The second method of writing the program can easily be used from other scripts. This makes itmore useful. The number of command line arguments to a Tcl script is passed as the global variable argc.The name of a Tcl script is passed to the script as the global variable argv0, and the rest of thecommand line arguments are passed as a list in argv. Under Posix compliant operating systems, environment variables are passed to a Tcl script in aglobal associative array env. The index into env is the name of the environment variable. Thecommand puts "$env(PATH)" would print the contents of the PATH environment variable. The time command is the solution to this problem. Time will measure the length of time that ittakes to execute a script. You can then modify the script, rerun time and see how much youimproved it. You may also need to optimize the memory used by a script, or perhaps clean up variables aftera each pass through a loop. The unset command will delete a variable from the interpretersnamespace. After you've run the example, play with the size of the loop counters in timetst1 and timetst2.If you make the inner loop counter 5 or less, it may take longer to execute timetst2 than ittakes for timetst1. This is because it takes time to calculate and assign the variable k, and ifthe inner loop is too small, then the gain in not doing the multiply inside the loop is lost in thetime it takes to do the outside the loop calculation. set x 1set y 2for {set i 0} {$i < 5} {incr i} { set a($i) $i; } A stream based channel is created with the open command, as discussed in lesson 26. A socketbased channel is created with a socket command. A socket can be opened either as a TCPclient, or as a server. If a channel is opened as a server, then the tcl program will 'listen' on that channel for anothertask to attempt to connect with it. When this happens, a new channel is created for that link(server-> new client), and the tcl program continues to listen for connections on the originalport number. In this way, a single Tcl server could be talking to several clients simultaneously. When a channel exists, a handler can be defined that will be invoked when the channel isavailable for reading or writing. This handler is defined with the fileevent command. When a tclprocedure does a gets or puts to a blocking device, and the device isn't ready for I/O, theprogram will block until the device is ready. This may be a long while if the other end of the I/Ochannel has gone off line. Using the fileevent command, the program only accesses an I/Ochannel when it is ready to move data. Finally, there is a command to wait until an event happens. The vwait command will wait untila variable is set. This can be used to create a semaphore style functionality for the interactionbetween client and server, and let a controlling procedure know that an event has occurred. Look at the example, and you'll see the socket command being used as both client and server,and the fileevent and vwait commands being used to control the I/O between the client andserver. Note in particular the flush commands being used. Just as a channel that is opened as a pipe toa command doesn't send data until either a flush is invoked, or a buffer is filled, the socketbased channels don't automaticly send data. To connect to the local host, use the address 127.0.0.1 (the loopback address). vwait varName The vwait command pauses the execution of a script until some background action sets the value of varName. A background action can be a proc invoked by a fileevent, or a socket connection, or an event from a tk widget. set connected 0;# catch {socket -server serverOpen 33000} serverset server [socket -server serverOpen 33000] The clock command is a platform independent method of getting the display functionality of theun: z %a . . . . Abbreviated weekday name (Mon, Tue, etc.) z %A . . . . Full weekday name (Monday, Tuesday, etc.) z %b . . . . Abbreviated month name (Jan, Feb, etc.) z %B . . . . Full month name (January, February, etc.) z %d. . . . . Day of month z %j . . . . . Julian day of year z %m . . . . Month number (01-12) z %y. . . . . Year in century z %Y . . . . Year with 4 digits z %H . . . . Hour (00-23) z %I . . . . . Hour (00-12) z %M . . . . Minutes (00-59) z %S . . . . . Seconds(00-59) z %p . . . . . PM or AM z %D . . . . Date as %m/%d/%y z %r. . . . . Time as %I:%M:%S %p z %R . . . . Time as %I:%M z %T . . . . Time as %I:%M:%S z %Z . . . . Time Zone Nameclock scan dateString The scan subcommand converts a human readable string to a system clock value, as would be returned by clock seconds puts [clock format $systemTime -format {Today is: %A, the %d of %B, %Y}] puts " n the default format for the time is: [clock format $systemTime] n" puts "The book and movie versions of '2001, A Space Oddysey' had a"puts "discrepency of [expr $bookSeconds - $movieSeconds] seconds in how"puts "soon we would have sentient computers like the HAL 9000" A non-blocking read or write means that instead of a gets call waiting until data is available, itwill return immediately. If there was data available, it will be read, and if no data is available,the gets call will return a 0 length. If you have several channels that must be checked for input, you can use the fileeventcommand to trigger reads on the channels, and then use the fblocked command to determinewhen all the data is read. The fblocked and fconfigure commands provide more control over the behavior of a channel. The fblocked command checks whether a channel has returned all available input. It is usefulwhen you are working with a channel that has been set to non-blocking mode and you need todetermine if there should be data available, or if the channel has been closed from the otherend. The fconfigure command has many options that allow you to query or fine tune the behaviorof a channel including whether the channel is blocking or non-blocking, the buffer size, the endof line character, etc. If a single parameter is given on the command line, the value of that parameter is returned. If one or more pairs of param/value pairs are provided, those parameters are set to the requested value. The example is similar to the lesson 40 example with a client and server socket in the samescript. It shows a server channel being configured to be non-blocking, and using the defaultbuffering style - data is not made avaialble to the script until a newline is present, or the bufferhas filled. When the first write: puts -nonewline $sock "A Test Line" is done, the fileevent triggersthe read, but the gets can't read characters because there is no newline. The gets returns a -1,and fblocked returns a 1. When a bare newline is sent, the data in the input buffer will becomeavailable, and the gets returns 18, and fblocked returns 0. if {$len < 0} { if {$blocked} {puts "Input is blocked" } else { puts "The socket was closed - closing my end" close $channel; } } else { puts "Read $len characters: $line" puts $channel "This is a return" flush $channel; } incr didRead; } after 120 update; # This kicks MS-Windows machines for this application set didRead 0puts -nonewline $sock "A Test Line"flush $sock; The interp command creates new child interpreters within an existing interpreter. The childinterpreters can have their own sets of variables and files, or they can be given access to itemsin the parent interpreter. If the child is created with the -safe option, it will not be able to access the file system, orotherwise damage your system. This feature allows a script to evaluate code from an unknown(and untrusted) site. The names of child interpreters are a hierarchical list. If interpreter foo is a child of interpreterbar, then it can be accessed from the toplevel interpreter as {bar foo}. The primary interpreter (what you get when you type tclsh) is the empty list {}. The interp command has several subcommands and options. A critical subset is: Note that slave interpreters have a separate state and namespace, but do not have separateevent loops. These are not threads, and they will not execute independently. If one slaveinterpreter gets stopped by a blocking I/O request, for instance, no other interpreters will beprcessed until it has unblocked. The example below shows two child interpreters being created under the primary interpreter{}. Each of these interpreters is given a variable name which contains the name of theinterpreter. Note that the alias command causes the procedure to be evaluated in the interpreter in whichthe procedure was defined, not the interpreter in which it was evaluated. If you need aprocedure to exist within an interpreter, you must interp eval a proc command within thatinterpreter. If you want an interpreter to be able to call back to the primary interpreter (orother interpreter) you can use the interp alias command. ## A short program to return the value of "name"#proc rtnName {} { global name return "rtnName is: $name"} ## Alias that procedure to a proc in $i1 puts ""
https://ru.scribd.com/document/5494995/TCL-tutorial
CC-MAIN-2019-43
refinedweb
12,137
61.36
The use of Object Relational Mapper libraries (ORMs) is so prevalent today that it’s uncommon to see anyone question their use. There are good reasons for that. In the old days you’d see SQL code sprinkled everywhere. It was common to find examples where user input was concatenated directly with SQL statements opening the doors to SQL injection attacks (little Bobby Tables comes to mind). Even though a lot of good came out of using ORMs, there’s some less good things that came with it too. The first is performance, which is worse (sometimes much worse). But apart from performance there are a set of other issues that although they are not disadvantages, they have a negative impact on the experience of using an ORM. They are all related to the fact that ORMs hide a lot of details about how the data is retrieved and saved. Frequently, people by not being aware of these details shoot themselves in the foot. Just a few examples are the use of lazy loading even when that is arguably not a good idea (e.g. web applications) or N+1 problems stemming from the mechanism that triggers data being fetched (thinking specifically of Entity Framework here) not being obvious sometimes. I’m not advocating that ORMs shouldn’t be used, not at all. However, my perception is that nowadays most people working in the .Net ecosystem wouldn’t be able to retrieve and create records in a database without using Entity Framework. And that’s unfortunate because it’s not hard at all to go raw, and it might be quicker than to have to setup Entity Framework. I’ve been following that approach in small projects and I’m convinced that I can put something up faster without Entity Framework than with it (setting up EF can be a pain). What I want to do in this blog post is show you how you can use the “raw” data access mechanisms (ADO.NET) available in .Net Core and also an alternative to Entity Framework named Dapper (which is used by Stack Overflow). Dapper is sometimes described as an ORM, but as we’ll see it’s more of an “object mapper”. CRUD with ADO.NET in .NET Core To do data access without Entity Framework Core you need to master just three concepts. Those concepts are Connections, Commands and Data Readers. A Connection object represents a connection to a database. You’ll have to use the specific connection object for the database you want to interact with. For example for PostgreSQL we’d use NpgsqlConnection, for MySql MysqlConnection, for SQL Server SqlConnection. You get the idea. All these connection types implement the interface IDbConnection. You need to install the right Nuget package for the database you want to use, for example for Postgres the package name is simply: Npgsql (An easy way to remember it, is N – for .Net and pgsql for PostGreSQL). To create a connection object we need a connection string to the database we want to interact with. For example for a database named “example” with user “johnDoe” in Postgres we could create a connection this way: var connection = new NpgsqlConnection("User ID=johnDoe;Password=thePassword;Host=localhost;Database=example;Port=5432"); A great resource to find information about how to create these connection strings is. After creating the connection object, to actually connect to the database we need to call Open on it: connection.Open(); The connection objects are all IDisposable, so you should dispose of them. Because of this, usually the connection is created inside a using block: using (var connection = new NpgsqlConnection("User ID=johnDoe;Password=thePassword;Host=localhost;Database=example;Port=5432")) { connection.Open(); //use the connection here } That’s all you need to start. Creating records To create records we need to use a Command. A command is a container for all that is required to perform an operation in the database. The easiest way to create a command is to ask the connection object for one: using(var command = connection.CreateCommand()) { //use command here } You then specify the SQL you want to execute through the property CommandText and the values for the parameters in the SQL in the property Parameters. For example, if you want to add a record to a table named people with columns age it would look like this: command.CommandText = "insert into people (first_name, last_name, age) values (@firstName, @lastName, @age)"; command.Parameters.AddWithValue("@firstName", "John"); command.Parameters.AddWithValue("@lastName", "Doe"); command.Parameters.AddWithValue("@age", 38); You should use parameters because that prevents SQL injection attacks. If you don’t and you create your SQL by using string-concatenation using data that was entered by the user you enable situations where a user can type something that will be interpreted as SQL. The way a command is “executed” depends on the result you expect from it. For adding a record, and in case you don’t care about any auto-generated column values (for example the new record’s id) you can do this: int numberOfUpdatedRows = command.ExecuteNonQuery(); This method returns the number of rows that were updated/created. Although it’s not particularly useful on an insert statement, on an update that value might be useful. Alternatively, if you want to insert and get the new record’s id you can change the insert statement’s SQL so that the new id is returned. The way you do this depends on which database you are using, for example in postgres the SQL would look like this insert into people (first_name, last_name, age) values (@firstName, @lastName, @age) returning id. In sql server it would look like this insert into people (first_name, last_name, age) values (@firstName, @lastName, @age); select scope_identity(). To get the run the insert and get the new id you can use the ExecuteScalar.aspx) method in the Command. The ExecuteScalar method executes the SQL and returns the value (as type object) of the first column in the first row, for example in postgres: command.CommandText = "insert into people (first_name, last_name, age) values (@firstName, @lastName, @age) returning id"; command.Parameters.AddWithValue("@firstName", "Jane"); command.Parameters.AddWithValue("@lastName", "Doe"); command.Parameters.AddWithValue("@age", 37); var newId = (int)command.ExecuteScalar(); You might be thinking right now that these SQL statements involve a lot of typing. And on top of that, all of that is with no intellisense. Thankfully there are techniques around that. You can watch me in the video just below this paragraph creating an insert statement from scratch without actually having to manually type any column names. Reads To read data you simply need to write your SQL query and call the ExecuteReader method in the command object. That will return an instance of a DataReader which you can then use to retrieve the actual results of your query. For example, if we want to retrieve all records in the people table: command.CommandText = "select * from people"; DataReader reader = command.ExecuteReader(); Now, the way you get to the actual values is a little bit clunky. Definitely not as comfortable to do as with Entity Framework, but as I showed in the video on how to use Sublime to build the queries, you can also use the same techniques to create this code faster. Here’s how you could iterate over all results assuming that first_name and last_name are strings and age is an int. command.CommandText = "select * from people"; using(DataReader reader = command.ExecuteReader()) { while(reader.Read()) { string firstName = reader.GetString(reader.GetOrdinal("first_name")); string lastName = reader.GetString(reader.GetOrdinal("last_name")); int age = reader.GetInt32(reader.GetOrdinal("age")); //do something with firstName, lastName and age } } The GetString, GetIn32, GetBoolean, etc, methods expect a number that represents the column index. You can get that column index by calling reader.GetOrdinal("columnName"). Updates and Deletes Updating and deleting records involves creating a command with the right SQL, and calling the ExecuteNonQuery in that command. For example if we wanted to update all records on the people table that have the surname “Doe” to “Smith” we could do this command.CommandText = "update people set last_name='Smith' where last_name='Doe'"; int numberOfAffectedRows = command.ExecuteNonQuery(); Deletions are very similar, for example, let’s delete all records which have no last_name command.CommandText = "delete from people where last_name is null"; int numberOfAffectedRows = command.ExecuteNonQuery(); Transactions One thing that you get for free when using an ORM like Entity Framework is that when you persist your changes (i.e. call .SaveChanges()) that happens inside a transaction so that all the changes are persisted or none is. Thankfully creating a transaction using ADO.NET is very simple. Here’s an example where we add a new person, delete another and update another yet all inside a db transaction: using (var transaction = connection.BeginTransaction()) { var insertCommand = connection.CreateCommand(); insertCommand.CommandText = "insert into people (first_name) values (@first_name)"; insertCommand.Parameters.AddWithValue("@first_name", "Jane Smith"); insertCommand.ExecuteNonQuery(); var deleteCommand = connection.CreateCommand(); deleteCommand.CommandText = "delete from people where last_name is null"; deleteCommand.ExecuteNonQuery(); var updateCommand = connection.CreateCommand(); updateCommand.CommandText = "update people set first_name='X' where first_name='Y'"; updateCommand.ExecuteNonQuery(); transaction.Commit(); } The easiest way to create a transaction is to request one from the connection object. A transaction is created using an IsolationLevel. Although we didn’t specify one here (the particular database’s default will be used) you should check the list of available isolation levels and choose the one that is appropriate to your needs. After having the transaction created we can do all operations as before and in the end we call .Commit() on the transaction. If anything goes wrong before .Commit() is called all the changes are rolled back. Getting metadata from the database This is an aside but it’s something that is useful to know. Using ADO.NET it is possible to extract metadata about your database. For example all the column names and their types form a particular table. The following snippet shows how you can get all the column names and data types form all the columns in a particular database table: command.CommandText = "select * from people"; using(var reader = command.ExecuteReader()) { var columnSchema = reader.GetColumnSchema(); foreach(var column in columnSchema) { Console.WriteLine($"{column.ColumnName} {column.DataTypeName}"); } } Using Dapper Alternatively to using just ADO.NET using Dapper is just as easy and with small differences in terms of performance. In case you are unfamiliar with Dapper, it’s a project from StackExchange and it powers the StackExchange familiy of websites (StackOverflow, SuperUser, AskUbuntu, etc). To use Dapper you need to install a Nuget package conveniently named Dapper along with the specific Nuget package for the database you are targeting. For example for Postgres: $ dotnet add package Npgsql $ dotnet add package Dapper Dapper adds a few extension methods to your connection object, namely Query and Execute. Query allows you to run a query and map the results to the type you specify in the generic parameter. For example, this is how you can get all records in the people table: using Dapper; //you need this to get the extension methods on the connection object //... using (var connection = new NpgsqlConnection("theConnectionString")) { IEnumerable people = connection.Query("select * from people"); } The Query method always returns an IEnumerable even if you only expect one record. For example you could do this to insert a new person and get the new Id: int newId = connection.Query("insert into people (first_name, last_name, age) values (@FirstName, @LastName, @Age) returning id", new { FirstName = "John", LastName = "Doe", Age = "40" }).FirstOrDefault(); In the example above we are providing 2 parameters to the Query method, the first is the SQL statement and the second one is an anonymous object with property names that match the parameters in the SQL statement. You can also use an instance of a class (e.g. new Person { ... }) instead. If you don’t care about any results, for example you just want to delete a record, you can use the Execute method instead which will return the number of records affected in the database. For example if you want to delete all the records for which last_name is null: var numberOfDeletedRecords = connection.Execute("delete from person where last_name is null"); This was just a gentle introduction to Dapper, the github page for the project is a good resource if you are interested in learning more. I hope this blog post has given you enough information about how to go ORMless in .Net Core. Let me know your thoughts in the comments below.
https://www.blinkingcaret.com/2018/04/25/orm-less-data-access-in-net-core/
CC-MAIN-2018-30
refinedweb
2,084
55.84
OpenCV Install on Windows with Code::Blocks and minGW OpenCV Install on Windows With Code::Blocks and minGW ** Disclaimer ** Much to my dismay this tutorial is by far the most visited page on my website, why does this upset me? It upsets me because as glad as I am to see people using open source tools like OpenCV and MinGW rather than proprietary or commercial alternatives I feel strongly that developers should be using Linux not Windows for coding, especially for C++. Why should you use Linux? There’s a lot of reasons in my opinion but right now I am going to keep it simple. It will make you a better coder. period. Most people I know barely understand setting up their own C++ projects and linking to 3rd party libraries etc. and using Linux is the best way to see and learn how this works. I also personally recommend staying away from IDEs. Also Linux is quite often the first priority for developers of open source tools and windows support is sometimes an after thought. You’re obviously interested in open source or you wouldn’t be here - so I’m telling you to take the plunge, go all in, close this tab and grab an image of Ubuntu (or Mint if you want to be just like me :p ) and become enlightened! I’ll even go one step further and link some tutorials I use to install OpenCV on Linux and a link to my OpenCV project makefile. gist.github.com/kevinhughes27/5311609 ** Update ** I’ve been talking to the OpenCV devs about some of the issues people (and me) have been having with the latest pre-built binaries, what you need to know is they are discontinuing pre-built binaries for MinGW. From now on you will have to build your own, I have included instructions for how to make your own binaries and its pretty straight forward. I still prefer MinGW to other compilers on windows (well actually I prefer Linux, see above) and I hope this tutorial will continue to be useful. Step 1: Install minGW MinGW is a c/c++ compiler for windows, head to their website and download the latest version (right at the top where it says “looking or the latest version?”) Install to the default location C:\MinGW From the options install mingw32-base and mingw32-gcc-g++`, you can also install the other components if you wish, but all you need is the c++ compiler (g++). Step 2: Add minGW to system path Navigate to Control Panel -> System -> Advanced System Settings and then: Type a semi colon after the last entry in path and then paste your MinGW path (it should be C:\MinGW\bin if you chose the default location). Afterwords open up a command prompt and type path to make sure it worked (you should see minGW somewhere in the print out, probably near or at the end). Programs will need to be restarted for this change to take effect. Step 3: Install Code::Blocks Code::Blocks is an IDE (integrated development environment). Head to their website and download the latest version ( codeblocks-10.05-setup.exe) Install it to the default location When the installer finished click yes to run Code::Blocks then go to Settings -> Compiler and Debugger Under the Toolchain Executables select GNU GCC Compiler from the drop down and then press AutoDetect verify that Code::Blocks has found MinGW If you like now might be a good time to test your Code::Blocks and MinGW setup with a simple Hello World C++ program. Step 4: Install OpenCV OpenCV is a library of Computer Vision functions. Head to their website and download the latest version (2.4.2 for Windows) Click on the OpenCV-2.4.2.exe and choose C:\ as the extract directory OpenCV is now installed – but not configured with Code::Blocks ** Update ** If this is your first time through the tutorial doing a clean install then skip this step first and see if the supplied pre-built binaries will work for you, if you’ve already tried and had issues or if you really want to build your own then continue with this section. First you’ll need to download and install cmake. Click configure choose minGW makefiles wait and then click generate. When cmake is done we need to open a command prompt in the build directory, so navigate to C:\opencv\build\x86\mingw then shift right click and choose open command window here then type mingw32-make. Mingw will now start compiling OpenCV, this will take a bit so feel free to do something else, when you come back type mingw32-make install and continue with the rest of the tutorial as is. Step 5: Add OpenCV to the system path C:\opencv\build\x86\mingw\bin (use the same process as above) Note: Add x86 binaries regardless of your system type (32bit or 64bit) because minGW is 32bit. Verify that both MinGW and OpenCV are in your system path Make sure you restart Code::Blocks before continuing if you have it open. Step 6: Configuring Code::Blocks with OpenCV Make a new Code::Blocks Project: right click on your project and choose build options: You can also change the global compiler settings from the menu bar at the top right. Again Note – we are using 32-bit binaries even though the system is 64-bit because the compiler is 32-bit. Now run this simple OpenCV “Hello World” program to test that the install has worked. #include <opencv2/core/core.hpp> #include <opencv2/highgui/highgui.hpp> using namespace cv; int main() { Mat image;// new blank image image = cv::imread("test.png", 0);// read the file namedWindow( "Display window", CV_WINDOW_AUTOSIZE );// create a window for display. imshow( "Display window", image );// show our image inside it. waitKey(0);// wait for a keystroke in the window return 0; } Download any image you want, rename it test.png or hard code its name and place it in the top of the project directory. Note – if you run your .exe from outside of code blocks the image needs to be in the same directory as the .exe. As I mentioned earlier you can also configure OpenCV using the global compiler and debugger settings and the steps are the same, this means that every new project is ready to go with OpenCV. You can also choose file -> save project as template, This allows you to choose the option new from template and avoid the configuration each time.
http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw.html
CC-MAIN-2016-50
refinedweb
1,094
65.25
Hi all, I'm taking an introduction to C++ right now and I've been scratching my head over these errors. I'm not sure why my function is not being defined? The errors are: undefined reference to 'totalfare(char)' and [error] ID returned 1 exit status. Not looking for straight up answers but any guidance would be most appreciated. Thanks! Code:#include <iostream> #include <cstdlib> using namespace std; double totalfare(char code); void output(double); int main() { string flname; int mins, miles; char code, surge; double surgem, totalcost; cout << "What is your first and last name?"; cin >> flname; cout << "How long is your ride in minutes?"; cin >> mins; cout << "How many miles is your ride?"; cin >> miles; cout << "What is the code of the ride?"; cin >> code; cout << "Is there a surge?"; cin >> surge; if (surge == 'Y') { cout << "What is the surge multiplier?"; cin >> surgem; exit(0); } totalcost = totalfare(code); output(totalcost); return 0; } double totalfare(char code, char surge, double surgem, int mins, int miles) { double totalcost; if (code == 'X') { if (surge == 'Y' && totalcost >= 6.55) { totalcost = (surgem * 2.00) + (mins * 0.22) + (miles * 1.15); return totalcost; } else { totalcost = 6.55; return totalcost; } } else if (code == 'S') { if (surge == 'Y' && totalcost >= 25.00) { totalcost = (surgem * 15.00) + (mins * 0.90) + (miles * 3.75); return totalcost; } else { totalcost = 25.00; return totalcost; } } else if (code == 'L') { if (surge == 'Y' && totalcost >= 10.55) { totalcost = (surgem * 5.00) + (mins * 0.50) + (miles * 2.75); return totalcost; } else { totalcost = 10.55; return totalcost; } } } void output(double totalcost) { cout << "Your total cost is: " << totalcost << endl; return; }
https://cboard.cprogramming.com/cplusplus-programming/177638-new-cplusplus;-not-sure-how-solve-error-regarding-my-code-w-functions-post1286592.html?s=f41b2aa19f43154ad171705ec69848ab
CC-MAIN-2019-47
refinedweb
264
70.7
. *It's very nice that you get a warning if you attempt to reference an unknown class, and that a quickfix exists to create that class if necessary. However, if the class is actually available in an inadvertantly not-depended-upon library or module, a quickfix should be available to add the missing dependency, as in Java. *The reference resolver can't find import files if they are in another module, even if that module has a dependency. This isn't too bad of a problem if you put the imported files in a file list, except that it keeps Spring validation from working. An intention to add imported files to appropriate filesets would also be nice *The "New Spring Config" action should have checkboxes for any of the standard namespaces (tx:, aop:, util:, etc.), so that I don't have to try to remember where they are. Support for automatically adding standard namespaces after file creation would also be handy *The property type checker doesn't seem to understand generics. If the type of a property contains a type parameter, it complains, even if that type parameter is bound on the class of the bean. *Value checking needs to understand the "${property.name}" syntax supported by the PreferencesPlaceholderConfigurer. All of our Spring files include constants externalized to property files, and such are flagged as errors if they are of any type other than String. For extra bonus points, navigation/completion/tooltips for properties file entries would be crucial. *It should be possible for IDEA to automatically create wirings, for properties for which there is only one correctly-typed bean in scope (a very common occurence). This could either be on a per-property basis (picture an intention that says "Bind datasource property to oracleDataSource bean"), or in batch (an intention that just says "Wire up available properties"). I had this in my personal Spring plugin, and it ruled. *If a property is annotated as @Required, but is missing, there should be an error flagged, with a quickfix if possible. Waiting till a runtime warning comes from RequiredAnnotationBeanPostProcessor is pointless. *You need to support the p: pseudo-namespace from Spring 2.0. Here's the details:. A tool to automatically convert Spring XML files to use the p: format would win many extra bonus points *It would be very handy if I could automatically take class declared as InitializingBean and automatically add 'init-method = "afterPropertiesSet"' to all beans of that class, and then remove the InitializingBean interface from the class. Same for DisposableBean. This supports the Spring folks suggestion that initialization/shutdown should be specifically declared in the configuration, and not implicitly declared via interface anymore. *The gutter icon for navigating to Spring property bindings, should be a little leaf, not a "p" . (The functionality rules, BTW). *Spring refactorings: Extract Parent Bean, Pull Properties Up, Push Properties Down, Conver Anonymous Bean to Named, Convert Named Bean to Anonymous, Split Configuration File, Move Bean to Configuration File *The dependencies graph is cool, but the nodes are too large. Instead of showing properties and bindings in the node, simply label the edges corresponding to the property with the binding for the node Overall, great work, and it just needs a bit of polishing to be as good as the rest of your product. --Dave Griffith. Hello Dave, Very nice to know that you are now looking into this as well! Dmitry/Sergey/Peter can now expect (even more) good bug reports/feature requests flooding them :) Good idea. I'll file another issue to add missing xsi:schemaLocation mapping. Spring requires these at runtime (except for "beans" and "p" namespaces). Both of these could also work for custom namespaces, since info can be retrieved from "META-INF/spring.schemas". In addition, there has been a "register namespace" quickfix for unresolved qualified elements since Demetra. Everything you name works as described, but for PropertiesPlaceholderConfigurer. I assume you're writing some desktop application that uses Preferences API? Anyway, please file a request, since I'm not familiar with PreferencesPlaceholderConfigurer myself. In addition I'll submit requests for: -ServletContextPropertyPlaceholderConfigurer -The new <context:property-placeholder/> element from spring 2.1? Please describe in great detail :) IDEADEV-14383 However, it's possible to configure RequiredAnnotationBeanPostProcessor to recognize custom annotations - I'll submit a new one for that, since it's a bit more obscure than basic @Required support. IDEADEV-14263 Some thoughts: -The "p:foo-ref" syntax is totally ugly (why didn't spring people choose separate namespace for references?) -The "p" namespace name is silly, should have been "property". There's no coming world shortage in characters yet :) -Convert/Migrate is a nice idea, my style preference would be to limit this to simple numeric/boolean/String properties. -Perhaps simple back-and-forth intention would be a good first step? Good one. Why? And why a little leaf. Perhaps <lookup-method> could get similar support? Btw, "Create Patch" should also be something else, not a "p". See also IDEADEV-17228 for a request for bean gutter mark improvement.. Some of these are in JIRA already: IDEADEV-13688 IDEADEV-13690 IDEADEV-13689 However, some are valuable while others may be frivolous. Which ones are important for you, and why? I don't think the property is that important. The dependency is important, and apparent from the connection. Anyway, imho graph is a nice extra but not important. However, some small changes could both simplify and improve it. More details later. I look forward to more discussion on the subject :) Kind regards, Taras The biggest issue and wish for me would be to step out of the closet and support custom namespaces. That is one of the most important new features in Spring 2, and with the current level of support in the plugin it is just useless. E.g. check I hope the above mentioned support for resolving via spring.schemas will help push it in the right direction. Just a tip: spring.schemas can reference URIs as well as classpath resources, don't make us file another bug :)) + Everything you name works as described, but for PropertiesPlaceholderConfigurer. I assume you're writing some desktop application that uses Preferences API?+ Nope, I'm using some third-hand cut-and-paste config files, and I honestly couldn't say why it was using PreferencesPlaceholderConfigurer. Changing it to PropertiesPlaceholderConfigurer indeed results in everything working. I will file a report for the preferences version. +?+ You create a bean, setting it's class (or factory, etc.). An intention is then available on the bean which says "Autowire properties". Any setters which can be unambiguously bound in the current context have their properties automatically added to the bean configuration. Constructor args are more difficult, since all of the args for a constructor have to be unambiguously bindable, but it's still doable. It's totally sweet to see a half-dozen property bindings added at a stroke, and the requirements for unambiguousness are very common. Basically, you get all of the ease-of-use of runtime auto-wiring with none of the scariness. Agreement on all of the points wrt the p: namespace And why a little leaf. Leaf means Spring, and when I want to do the search what I'm thinking is "find this in Spring". Little circles mean either declarations, or pointers to declarations (if followed by an arrow). "P" means "property declaration", which isn't what I want at all. On refactorings +However, some are valuable while others may be frivolous. Which ones are important for you, and why?+ I actually had all of them in my personal Spring plugin, and found them very valuable. "Extract Parent Bean" and "Pull Properties Up"/"Push Properties Down" probably had the biggest bang-for-the-buck, but that could have been due to my project structure (a bezillion DAO and Serializer classes, all extending from shared base classes). I'll submit JIRA for those. --Dave Griffith Hello Andrew, Do you have some examples? -tt Sure: The is the namespace used in a Spring config file, which is mapped to a classpath resource META-INF/mule.xsd. There's no file deployed at the URI yet, so it's using classpath 100%. HTH, Andrew Hello Andrew, Andrew, do you have a link to an example instance document? As far as I can see, it looks no different than the "standard" way to link up schemas for namespace handlers in spring: 1) declare namespace in instance document (actual xml document) 2) declare xsi:schemaLocation URL for that namespace (also in instance document) 3) mapping inside "spring.schemas" that links schemaLocation URL to actual classpath path for resource (see also) As far as I understand, mule works exactly this way. Correct? In addition, do you have good knowledge of mule namespace handlers? Regards, -tt Taras, The schema is right next to this file at If you mean an XML using the example, then you can use e.g. this simpler one: Your assumptions for a schema resolution process are correct. >> In addition, do you have good knowledge of mule namespace handlers? What exactly do you mean? :) Mule's schema uses some more complex types, with validations, support for number substitution, etc. Not sure it caused a problem in IDEA so far, but that would be a good test, as Mule pushes Spring's namespace handlers to the limit (Ross submitted enhancements before, most of them incorporated in Spring). Cheers, Andrew Hello Andrew, 1) Do the mule schemas use spring tooling annotations (for attributes)? 2) What do you think would be most valuable in terms of support IDEA could provide? For example, should IDEA recognize bean definitions coming from mule handlers? -tt Taras, Not sure I follow, I've pinged our schema jedi, he may provide more input :) Well, I didn't dare to ask for it :) But it was on my list of TODO-things-one-always-wants-to-pursue-but-never-does-for-many-reasons. I already took a look at the plugin code (good its public), and don't see why it shouldn't be possible. If IDEA could provide templates for Mule constructs, that would be a killer application of all 3 technologies ;) I would even claim I'd be happier to have this, rather than some fancy drag-n-drop IDE which goes no further than being a nice toy. Of course, the schemas are still live, and not all of them are available yet, but we are approaching a beta release for Mule 2.x next month. Andrew Hi, Andrew P asked me to comment here because I've been involved in the development of Mule's use of schema. Unfortunately I wasn't there at the start (I've been mainly completing and tidying things). We don't use Spring tooling annotations, as far as I know. In fact, I hadn't heard of them before and am having trouble googling much about them. Can you give me a pointer? Also, I don't understand what you have in mind when you ask "should IDEA recognize bean definitions coming from mule handlers?". I guess you're saying that you can tie the schema to the Java classes we use, but I don't understand what you want to do with that. In general, I don't think that relationship is so important for the end user, but it would be nice for us (developers) if one could easily jump back and forth between Java and Schema. I use IntelliJ Idea (although I'm no expert - I only switched recently and for years used emacs...) and the issues I've noticed while using it to edit XML schema like are: - correct parsing/verification when xsi:type is used (I hope that file is OK - it passes whatever parser we are using, although IDEA flags delegate-security-provider as incorrect). - some way of simplifying the import of namespaces. I have no idea how this would work, but we have a whole slew of schema - until the correct schema is included in the xml "header" the appropriate options are not available. - better tooltip documentation. We are including annotation elements in the schema, but they don't appear in the GUI (at least, I haven't noticed anything). - some way (again, no idea how) or prompting when xsi:type might be used (ie when subtypes extend the current type). The use of xsi:type is not very "user-friendly" and anything that would make it more intuitive would be useful (I'm assuming you understand all this - I am happy to explain why we are using xsi:type and what it does, if necessary). Don't know if any of that helps. Probably completely irrelevant: if there was one thing I'd like IntelliJ to improve, it's the amount of disk access IDEA does. Cheers, Andrew (email acooke at mulesource dotcom) Hello Andrew, >> 1) Do the mule schemas use spring tooling annotations (for >> attributes)? >> Let's look at an example: the "tx" namespace from spring declares an attribute "transaction-manager" on element <tx:annotation-driven/>. The schema declaration for that attribute includes something similar to: --- <xsd:annotation> <xsd:appinfo> <tool:annotation <tool:expected-type </tool:annotation> </xsd:appinfo> </xsd:annotation> --- Purpose should be clear: it's saying "hey, I expect a bean name reference to a bean of type PlatformTransactionManager". There's a similar annotation that applies to property values (aka "I expect a string containing a FQN"). I'm not familiar with mule spring handlers, so I don't know if contains many attributes where such annotations are present (or could be added). By the way, on a purely XSD level, XML editor in recent IDEA builds works properly with the schemas used by spring (afaik). If mule schemas use more complicated constructs, you might want to test around a bit. If there are problems (for example missing or wrong element suggestions), filing them in Jetbrains JIRA sooner rather than later will increase the chance for a fix. >> 2) What do you think would be most valuable in terms >> of support IDEA could >> provide? >> For example, should IDEA recognize bean >> definitions coming from mule >> andlers? I think Mule-specific templates should be in Mule-specific plugin :) If you want to add this but can't, I suggest to file a request for an extension point. I meant to ask: "what generic support do you think IDEA could offer for namespace handlers (that would also benefit mule)?". For example, is it common for regular beans to refer to beans that are defined by mule namespace handlers? -tt Ah, thanks for the explanation. That's cute, and we should add them where they will help. However I doubt we will use them that much because the approach we've taken is a bit different - most of our configuration is better thought of as a little language that configures the system. The beans themselves tend to be implicit - typically the Java code generates a bean according to the element and injects it directly into the bean builder for the parent element in the DOM. So it's more of a "DSL approach" than a direct "wiring together of beans". Having said that, I am going to raise an issue to remind us to revise the schema and add these tips where necessary. Cheers, Andrew
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206880865-Spring-support-issues-and-wishes
CC-MAIN-2019-18
refinedweb
2,560
61.87
HI I was writing a program that passes a string into an array, reverses the string, and passes to another array. Half way done with my function definition I compiled and got some errors, that I am not able to understand. Here is my code: // Hw-4_Bhasin.cpp : Defines the entry point for the console application. void rev_str(void); double Mean(const int Data[5][4], int, int); void frequency(const int Data[5][4], int, int); #include "stdafx.h" #include<iostream> #include<string> #include<cmath> using namespace std; int main() { char option; cout<<"\n Please choose from the given menu."; cout<<"\n R{Reverse String] M[Matrix] Q[Quit]."<<endl; cin>> option; switch(option) { case 'R': rev_str(); break; case 'r': rev_str(); break; } system("pause"); return 0; } void rev_str(void) { const int MAX = 100; char Input_String[MAX]; cout<<"Please enter a string."<<endl; cin.get(Input_String, MAX); cout<<Input_String; for(int i=0; i<MAX; i--) cout<<Input_String<<endl; system("pause"); return; } The error states that' 'rev_str' identifier not found. In other words it does not tecognize the two function calls in the switch statements rather it is reading it as some sort of variable. To overcome this I made the function prototypr local. After doing that, the program compiles, but on running the program when I hit r or R, the screen blanks out. How can I resolve this. Any help is appreciated. Thanks.
https://www.daniweb.com/programming/software-development/threads/152235/some-sort-of-logical-error
CC-MAIN-2017-51
refinedweb
235
65.32
How intended., want to transfer them out.. – poocoin, bscscan, and other cyptos accepted at blockbuster video and service merchandise!!, can’t we just kill your laptop if it’s centralized and can all be predictably shocked when it started to materialize to me. what is a scam/rug/honeypot until proven otherwise., thanks in advance, china was never a full privacy, this is the market, we envision broadening the usage of the equation.. 🔥. ✅ transparent: we value investors trust the dip!!, let’s make elonmusk proud!, bitcoin is elon’s favorite.. citizen_finance very promising idea, a great time to all holders. How Old Do You Have To Be A Lotto Worth It To Buy Eur With Credit Card To Buy Dogecoin On Coinbase With A Credit Card In Nigeria? I tried to warn a person on earth knows when price is my nuls wallet?, how do i sell my dogecoin in some way, and i still use my other crypto look dangerous?, hate to say one to lose if you didn’t know hodl was born to be several major changes made and is now live 🔥, 20k marketcap, fair launch, meaning no presale and listing on the parallel gpus function, so everything else – make sure i get my xlm that account that had some struggles, but i have ever seen., 2 pm est.. Dont do to help you determine if this project is legitimate, but do not do something and that has a reputation to loose money but i can’t believe we can somehow get hold of the quickness and low fees?. . because not everyone needs to be true., ✅liquidity burn. What Altcoin Is The Difference Between Revain And How To Transfer From Coinbase To Kraken? Is Cashapp Safe To Invest In Blockchain Without Buying Crypto.Com Coin? It’s very well and truly just start inputting transactions manually?. lets go team $munch. only keep the recovery phrase with anyone, never enter it on any website or join the community wishing to launch valorously its program for new users due to constant demand!. i am not in this way.. use tools such as and to help you determine if this project is legitimate, but do not solely rely on these tools.. this is why we hodl, i lost my keys, but i think bitcoin will be used for neo?. ⚡️ coinmarketcap and coin gecko. i can pay for his son to be safe and viable than they can answer any further questions.. we are fighting a bigger with btc now., therefore from my head, – order to prevent whales from being able to place a pixel on the yearn lazy ape and keep any coins in crypto flooding the markets of both our communities just as expected.. $1.27, ✅ blockfolio 🔥, safest moonshot!. 🚀 the team here at .40.. because most coins are made under intense heat and pressure.. help!, ✔️ launch token, hodling zero!, all liquidity pool burnt 🔥. We will make it. they are also looking at the hip…, 💎 2:1 airdrop of coins are placed so poorly, gradually and then all-at-once, the biggest charities in the grand scheme of things., burn: 4%. remember: keep social distancing.. That’s why we hust… hodl., *i am a bot, and this action was performed automatically.. no panic the best part?! three partnerships are already applied in order to purchase and write options on nuls?, peter schiff paid off reporters.. 🐦 **twitter**: twitter.com/planetbsc. I also have online consultation or ask for a living god., why does everything i already transferred everything else, just now because the price jumped to 1,000.. Check the link-eth pairing, rather than trying to enter the world who is smart, trustworthy, active, and engaging with our portfolio tracker, this was the one who has made policies and promises to be a paper wallet without telling you not have much higher than 5$?? not sure how trustworthy the platform and fully community driven., what do the work, educate yourself and join the army?, savethesharks $sts , just launched today and the first place. we had our first pump. Tales from the bit coin bubble will pop, with all this hype bs. high 5 rh, high 5., tighten your seat belts.. this won’t be afraid to ask anything to say anything about it.. i am legend, spiderman, beowulf, gravity, interstellar, the crown, harry potter, gladiator and works with them on chat and see the dashboard, click on withdraw, and sepa transfer, nothing happens. in the crypto world going crazy.,. go drink some water and go from $6 to $0.16 /-37.5x per dollar invested that hour ✅, yuppp. If you would accept litecoin and double down., both announcements came within the industry., is there any tax consequences of shortages., 🚀✨ what makes safecoin so good you ask?. let’s go to tg and ask for your support request please respond to this. in the event of failure to protect our community., if you stake xtz, you can do is hodl, this actually makes it easy for everyone., we may use rbf to change and people who sell stuff for a long game!. ▶ telegram:\_fi, i’d give reddit a lot of posts get the digital signature, and private chat., be sure to increase slippage between 1-12% due to constant demand!. But do your own diligence., i’m trying to figure out if there is only accessible via using the token contract and find an efficiency, in fact will be done on cheap, safest moonshot!. 4.. , i felt a lot of experience working together** solving complex software and wallet services, they can afford to hold for long term?. cardo just fair launched now!🚀✨ safest moonshot!. Pro-tip, thanks for your secret key., what rates are still learning in a week for 8 weeks, no big whales, tons of securities but i just used one of them to electrum wallet?, · no tax. as soon as you do last night!, be sure to read the cyber ether.. , that’s the only way.. Then i was at like 8k%, the first month.. can anyone mine doge?, dapp full launch date of this great news!, yep, i get my 4bnb crypto or tesla as a physical paper or metal backup, never create a digital copy in text or photo form.. Original supply: 10,000,000,000,000, hodl, lol., chia, banano, defi, biohacking, and even up to make it, the life of a spiral., i’d move away from ltc, he’s manipulating the crypto market has figured out how to get either rich or die good luck, stay strong 🙏🏽 i’m holding out for?. keep on holding., can i make money from nuls?, 🎯ip copyright protection, hodler for the sale on doge!. Not good. shark science is at a 6% burn 🔥 4% = marketing wallet. *i am a bot, and this action was performed automatically.. **neko katana**, 💥 cmc. my price target – you can do., more news about them holding doge but i still can’t use my credit card?, scammers are particularly active on this forum, but… i wanted to share how i am just some one caught paper handed?. This is the very few spades will circulate.. 💵 10 bnb, ✅ rum distillery in california contracted!. 🏴☠️zoro🏴☠️ join our telegram, ask away, dyor and you were planning on moving fiat currency. no paying capital gains = no whales*. but what does nuls use blockchain?. ✨🚀 hypermoonis fair launching right now!, endorsed by co-founder or reddit alexis ohanian!. Why Is Nuls Block Worth? How Much Money Can You Buy Nuls In Canada? Much hodl. trying to buy even more.. – marketing is the true oracle., 🚀 community chest launching friday!, any clues on what central banks buying bitcoin ious from conbase and therefore worthless?. suspicion level: high!. when panic hits crypto or you are in this coin price?. safecoin just fair launched now!🚀✨ safest moonshot!, i like to thank qfellow kerogers as well as many as youd like., im in!. 🔥burned: 32.6%, just about the future.. 👾the bank of supermoon: supermoon will purchase a house and bought for pennies on the video.. what can we purchase nuls?, save this message before it gets deleted‼️‼️‼️‼️‼️‼️. no, relax.. I’ve thought about how the hell out of you?**, passive money is not stressful money!. this is a lesson either way., *i am a bot, and this action was performed automatically., liquidty locked, *i am a bot, and this action was performed automatically.. in the best idea ever as we’ve witnessed the disproportionate impact of projects across remote teams., # scam alert. . enn tokennomics. What Happens If I Have A Woo Wallet Has The Bitcoin Price Is Decreasing? All $elonmute social links:,. just like its trigonometry i can’t even be cheaper or if they had to write solar* i’m looking for a few weeks ago at sub 50k market cap.. $reduct token info:. example: 25% crypto with an older device. ✅ website and where to find an employee of stellar sdf, or an infection., musk is not for this spaceship 🚀. i really don’t see why so few people who can’t even login to my account and the three of them probably own the hardest part of the gates hot hitting an alarm that i can‘t be taxed?. long holders aren’t even worried we’ve been here for all at once.., use tools such as and to help you determine if this project and vision to the problem?, use tools such as and to help you determine if this project is legitimate, but do not solely rely on these tools.. How Much Cos Do You Know How Many Bitcoin Are Mined Each Day? Dude did what you can see it again., $1, hello, i should have a fully doxxed team of etor exchange is a scam/rug/honeypot until proven otherwise., 🛸 a stealth launch which gives everyone a fair shot to buy the coin of pancakeswap with low cost of lugging around all of the transaction code and no play makes jack a dull boy., 6jbvddijkcta3astrvelw8ykuxhfsjsqlgkwnu64z9yuhlefatp, you can file a support sub, where we realized: we want this tbh… it would only increase due to be several major changes made and lost.. The lower the barriers to entry, and will be withdrawn to my cake wallet which will allow the #mechashiba soldier to vote possibly but honestly doesn’t really look like they know more we’ll let you give to the statement, financial institution members, payment institutions, and other exchange. lots of marketing, other folks have said 1 fk’n word…. when you buy doge on pancakeswap’s syrup pools to mine a day and then refund me what yall think.. getting paid double time with payments., be sure to do your own diligence., . Can I Pay Expedia With Nuls? That’s all i ever spent right in the telegram.. maybe?. time to buy nuls shorts?. the market seemed positive and make this project is legitimate, but do not solely rely on these tools.., distributing tarot cards, etc. to take de bs out of the time to buy an entire layer of security.. Can Erc20 Increase In Value In Dogecoin And Why Should I Do With Money? Completely forget about it just me, or is something else.. so the fees are still coming., can i buy nuls in walmart?. 1 in 3 months., hodl for a while to recover. Mod, welcome to the moon 🌝. how to retrieve my funds?, . and we keep on building!, this is the community contests., most can’t sell if you need to know some will find ways for our community into actively supporting the coin.. how to become the next big thing and i keep getting insufficient funds. \- 100x potential, lucky. Can You Buy Suterusu With A Raspberry Pi To Mine For Dogecoin? Can you buy doge on a mission to create the best of luck in solving the issue., 💬 telegram:, thanks elon!, took out my investments and sites are nsfw: onlycoins – digital content marketplace. barry silbert huh., so the recommendation for mobile wallet apps, there’s a lot of different flavors wrapped coins. **tokenomics**, be sure to do your own diligence.. genuine project lp locked this is your brain on drugs.. ✅ liquidity locked ✅. Hey everyone, this dirtbag: shawnnamcgown just messaged me pretending to support its ascension., doxxdd dev who said he was going to go back to my pay every time someone buys defi pulse and defi is digging in for .20, ✅ zero team tokens are just starting and there are a bunch of group chat, if you new and fair investors... now i’m down 11k on my store, it was last week i can buy as much satoshis as i wanted to start accepting the currency?, coinbase is saying go now and receive advice or simply $a\*\*. it’s two different mobile platforms., it’s because place ended on april 2017, the crypto market in every 10 transactions, one holder gets the solution., check out the whitepaper.. and made some countries as result of doge major upgrade -. How Long Does It Take To Transfer My Nuls Safe To Invest In Dogecoin? How To Create Nuls Wallet Backup? How Much Coinbase Charge To Buy Kin With Wells Fargo Debit Card? How Many Computers Do I Find Out If I Have To Pay Nuls With A Eur Wallet? Restore seed phrase to anyone. so around week two i got started, and then move on to you plus a full market recovery., yes.. the daily lucky spin from bc game gives all members to ensure that your bag’s growing while you hodl in bed at night, i know i know she definitely only got $5.. i was going crazy!, is., be sure to do your own research, but it does look promising to me, join me on the dip!!!, remove this crap 💩. if you’re new to crypto what gold is completely renounced and if you’re a victim of its low before reclaiming some of you who trade them.. i was actually allowed to stake?. scam?. I never understand why you haven’t already.. 50% of supply burned before launch. we’re going to disturb us!. import it in my wallet, but it’s honest work. Holding crypto on a stratospheric launch.. this new coin has an extensive background in programming or anything, , ⚠️ over 200 bnb. safest moonshot!, here we go🤞🏼. How Much It Nuls? How Fast Can You Find Out If I Invested In Mtl And Money Cash? The token is the most nuls 2019?, is it burst?, bitcoin standard the savers will be $5 or $0.008.. 💎holders :1.3k, lunch is served ! new coin going ensures that no one wanted to say how can i make money day trading for at minimum be 700k-1million and maybe do more than just btc but have recovered in full panic mode 😂😂. Where Can You Transfer Crypto.Com Coin From Coinbase To Bank Account? Do Fc Barcelona Fan Token Sidechains Mainly Promise To Introduce?
https://pimpmyshirt.eu/what-does-it-take-to-transfer-nuls-from-my-usd-wallet-on-coinbase
CC-MAIN-2022-05
refinedweb
2,494
76.32
John –. - Expose all Ruby methods in Bindable classes, instead of only exposing Ruby methods whose signature match declared interfaces. This may not be that usable in C#, but in VB, this would be awesome in combination with the late binder. You can really have a dynamic experience in both Ruby and VB if this were enabled. - get_binding_context should be optional; I may not want to expose properties in my class. - The ‘::’ and ‘.’ operators aren’t terribly consistent. For example, when binding to the type directly from Ruby, ‘::’ is used as a namespace selector. But when used in ‘clr_interfaces’ you have to use ‘.’ because the CLR expects ‘.’ delimited namespace names. It would be more consistent to use ‘::’ in clr_interfaces and then remap them internally. All in all though, RubyCLR is amazing. I’ll be posting more of my work in the next few days, and I hope that it will help people play with and start using RubyCLR.tags: visualbasic, ruby, rubyclr
https://blogs.msdn.microsoft.com/timng/2006/08/20/some-rubyclr-suggestions/
CC-MAIN-2017-26
refinedweb
161
67.55
67226/how-to-create-new-environment-using-conda Hi Guys, I installed Anaconda in my system. Every time I try to install any module it uses base environment. How can I create my own environment? Hi@akhtar, Environment means you are creating your own lab for your project like that. To create an environment, use the below given command. $ conda env list $ conda create --name tensor WARNING: A space was detected in your requested environment path 'C:\Users\anaconda3\envs\tensor' Spaces in paths can sometimes be problematic. Collecting package metadata (current_repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.8.2 latest version: 4.8.3 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: C:\Users\anaconda3\envs\tensor Proceed ([y]/n)? y Preparing transaction: done Verifying transaction: done Executing transaction: done # # To activate this environment, use $ activate tensor I think you should try: I used %matplotlib inline in ...READ MORE import os try: os.makedirs(path) except ...READ MORE You can create a folder with os.makedirs() and use os.path.exists() to ...READ MORE Hi, The below written code can help you ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE Is there a way to work with ...READ MORE Hi@akhtar, You can use the --clone command to ...READ MORE Hi@akhtar To create a virtual environment, decide upon ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/67226/how-to-create-new-environment-using-conda
CC-MAIN-2021-43
refinedweb
286
52.36
aeson-schemas Easily consume JSON data on-demand with type-safety See all snapshots aeson-schemas appears in aeson-schemas-1.2.0@sha256:b711b0e739e94654c564cb4e2fd7bd47d308f471347d2f4450e9d548d6b59acd,6799 Module documentation for 1.2.0 - Data aeson-schemas A library that extracts information from JSON input using type-level schemas and quasiquoters, consuming JSON data in a type-safe manner. Better than aeson for decoding nested JSON data that would be cumbersome to represent as Haskell ADTs. Quickstart {-# LANGUAGE DataKinds #-} {-# LANGUAGE QuasiQuotes #-} import Data.Aeson (eitherDecodeFileStrict) import Data.Aeson.Schema import qualified Data.Text as T -- First, define the schema of the JSON data type MySchema = [schema| { users: List { id: Int, name: Text, age: Maybe Int, enabled: Bool, groups: Maybe List { id: Int, name: Text, }, }, } |] main :: IO () main = do -- Then, load data from a file obj <- either fail return =<< eitherDecodeFileStrict "examples/input.json" :: IO (Object MySchema) -- print all the users' ids print [get| obj.users[].id |] flip mapM_ [get| obj.users |] $ \user -> do -- for each user, print out some information putStrLn $ "Details for user #" ++ show [get| user.id |] ++ ":" putStrLn $ "* Name: " ++ T.unpack [get| user.name |] putStrLn $ "* Age: " ++ maybe "N/A" show [get| user.age |] case [get| user.groups |] of Nothing -> putStrLn "* No groups" Just groups -> putStrLn $ "* Groups: " ++ show groups Features Type safe Since schemas are defined at the type level, parsing JSON objects is checked at compile-time: -- using schema from above >>> [get| obj.users[].isEnabled |] <interactive>:1:6: error: • Key 'isEnabled' does not exist in the following schema: '[ '("id", 'Data.Aeson.Schema.SchemaInt), '("name", 'Data.Aeson.Schema.SchemaText), '("age", 'Data.Aeson.Schema.SchemaMaybe 'Data.Aeson.Schema.SchemaInt), '("enabled", 'Data.Aeson.Schema.SchemaBool), '("groups", 'Data.Aeson.Schema.SchemaMaybe ('Data.Aeson.Schema.SchemaList ('Data.Aeson.Schema.SchemaObject '[ '("id", 'Data.Aeson.Schema.SchemaInt), '("name", 'Data.Aeson.Schema.SchemaText)])))] • In the second argument of ‘(.)’, namely ‘getKey @"isEnabled"’ In the first argument of ‘(<$:>)’, namely ‘(id . getKey @"isEnabled")’ In the first argument of ‘(.)’, namely ‘((id . getKey @"isEnabled") <$:>)’ Point-free definitions You can also use the get quasiquoter to define a pointfree function: getNames :: Object MySchema -> [Text] getNames = [get| .users[].name |] You can use the unwrap quasiquoter to define intermediate schemas: type User = [unwrap| MySchema.users[] |] getUsers :: Object MySchema -> [User] getUsers = [get| .users[] |] groupNames :: User -> Maybe [Text] groupNames = [get| .groups?[].name |] Advantages over aeson JSON keys that are invalid Haskell field names aeson does a really good job of encoding and decoding JSON data into Haskell values. Most of the time, however, you don’t deal with encoding/decoding data types manually, you would derive Generic and automatically derive FromJSON. In this case, you would match the constructor field names with the keys in the JSON data. The problem is that sometimes, JSON data just isn’t suited for being defined as Haskell ADTs. For example, take the following JSON data: { "id": 1, "type": "admin", "DOB": "5/23/90" } The FromJSON instance for this data is not able to be automatically generated from Generic because the keys are not valid/ideal field names in Haskell: data Result = Result { id :: Int -- ^ `id` shadows `Prelude.id` , type :: String -- ^ `type` is a reserved keyword , DOB :: String -- ^ fields can't start with an uppercase letter } deriving (Generic, FromJSON) The only option is to manually define FromJSON – not a bad option, but less than ideal. With this library, you don’t have these limitations: type Result = [schema| { id: Int, type: Text, DOB: Text, } |] Nested data What about nested data? If we wanted to represent nested JSON data as Haskell data types, you would need to define a Haskell data type for each level. { "permissions": [ { "resource": { "name": "secretdata.txt", "owner": { "username": "john@example.com" } }, "access": "READ" } ] } data Result = Result { permissions :: [Permission] } deriving (Generic, FromJSON) data Permission = Permission { resource :: Resource , access :: String } deriving (Generic, FromJSON) data Resource = Resource { name :: String , owner :: Owner } deriving (Generic, FromJSON) data Owner = Owner { username :: String } It might be fine for a single example like this, but if you have to parse this kind of data often, it’ll quickly become cumbersome defining multiple data types for each JSON schema. Additionally, the namespace becomes more polluted with each data type. For example, if you imported all four of these data types, you wouldn’t be able to use name, username, resource, etc. as variable names, which can become a pain. Compared with this library: type Result = [schema| { permissions: List { resource: { name: Text, owner: { username: Text, }, }, access: Text, } } |] The only identifier added to the namespace is Result, and parsing out data is easier and more readable: -- without aeson-schemas map (username . owner . resource) . permissions -- with aeson-schemas [get| result.permissions[].resource.owner.username |] Duplicate JSON keys Maybe you have nested data with JSON keys reused: { "_type": "user", "node": { "name": "John", "groups": [ { "_type": "group", "node": { "name": "Admin", "writeAccess": true } } ] } } This might be represented as: data UserNode = UserNode { _type :: String , node :: User } data User = User { name :: String , groups :: [GroupNode] } data GroupNode = GroupNode { _type :: String , node :: Group } data Group = Group { name :: String , writeAccess :: Bool } Here, _type, name, and node are repeated. This works with {-# LANGUAGE DuplicateRecordFields #-}, but you wouldn’t be able to use the accessor function anymore: >>> node userNode <interactive>:1:1: error: Ambiguous occurrence 'node' It could refer to either the field 'node', defined at MyModule.hs:3:5 or the field 'node', defined at MyModule.hs:13:5 So you’d have to pattern match out the data you want: let UserNode{node = User{groups = userGroups}} = userNode groupNames = map (\GroupNode{node = Group{name = name}} -> name) userGroups With this library, parsing is much more straightforward let groupNames = [get| userNode.node.groups[].node.name |] Changes Upcoming 1.2.0 New features: - Add support for phantom keys - Add support for Tryschemas 1.1.0 New features: - Added support for unions - Added ToJSONinstance for enums generated with mkEnum 1.0.3 Support GHC 8.8 1.0.2 Bundle test data files in release tarball 1.0.1 Add support with first-class-families-0.6.0.0 1.0.0 Initial release: - Defining JSON schemas with the schemaquasiquoter - Extract JSON data using the getquasiquoter - Extracting intermediate schemas with the unwrapquasiquoter - Include mkGetterhelper function for generating corresponding getand unwrapexpressions.
https://www.stackage.org/lts-16.31/package/aeson-schemas-1.2.0
CC-MAIN-2022-05
refinedweb
1,012
55.74
A Crash Course in Subversion If you're already familiar with version control, Subversion is reasonably simple to use. The workflow is quite similar to that of several other version control systems (notably CVS), so you shouldn't have too much trouble transitioning to Subversion. This chapter, [From the Apress book Practical Subversion] begins with a simple overview of Subversion and then dives into the specifics you need to know to use the software. Along the way, I compare Subversion commands to the equivalent commands in other version control systems, such as CVS and Perforce. Conceptually, Subversion's design is similar to that of CVS. There is a single central repository that holds all versions of each file that is under Subversion's control. You (and others) can interact with the repository in two different ways, either by checking out a particular revision of the versioned data into a local working copy or by acting directly on the repository itself, without the need for an intermediate working copy. Generally, you'll check out a local working copy, make changes, and then commit those changes back into the central repository. Locking vs.Nonlocking An important difference between Subversion and many other version control systems is that like CVS, Subversion's mode of operation is nonlocking. That means that if two users have checked out working copies that contain the same file, nothing prohibits both of them from making changes to that file. For users of systems such as Visual SourceSafe, this may seem odd, as there is no way to ensure that the two users' changes to the file don't conflict with each other. In truth, this is by design.In the vast majority of cases, the two users' changes don't conflict. Even if the two users change the same file, it's likely that they'll change separate parts of the file, and those disparate changes can easily be merged together later. In this kind of situation, allowing one user to lock the file would result in unneeded contention, with one user forced to wait until the other has completed his changes. Even worse is the situation in which the second user changes the file despite the fact that the file is locked. When the first user completes his change and unlocks the file, the second user is stuck merging the changes together manually, introducing an element of human error into something that the computer can handle far better. Worse yet are the problems of stale locks. In a version control system that uses locks, there's always the danger of a user taking out a lock on a file and not returning it by unlocking the file when she's done. Every developer has run into something like this at some point. You begin work on a new bug or feature, and in your first stab at the solution you end up editing a file. Because you're making changes to the file, you take out the lock on it to ensure that nobody else changes it out from under you. At this point you can get into trouble in several ways. Perhaps once you get further into the solution, you realize that you were wrong to change that file, so you return the file to its previous state and move on to another solution, without unlocking the file. Perhaps your focus moves to some other issue and your work on the first problem sits there for a long period of time—and all the while you're holding the lock. Eventually, someone else is going to need to edit that same file, and to do so he'll need to find you and ask you to remove the lock before he can proceed. Worse, perhaps he'll try to work around the version control system and edit the file anyway, which leads to more complicated merging issues in the future. Even worse, what if you're on vacation or have left the company when this happens? An administrator will have to intercede and break the lock, creating an even greater chance of someone's work getting lost in the shuffle. So in the typical case in which there are no conflicts, the nonlocking strategy used by Subversion is a clear win. But what about the rare case in which changes really do conflict? Then the first user to complete his change commits that change to the repository. When the second user tries to commit, she'll be told that her working copy is out of date and that she must update before she can commit. The act of updating will give Subversion a chance to show that the changes conflicted, and the user will be required to resolve the conflict. This may seem similar to what would happen in the locking case, except for a couple of critical differences. First, the conflict forces the first user to stop and deal with the differences, avoiding the chance that the second user might just copy her version over the first version and destroy the first change in the process. Second, Subversion can help with the merging process by placing conflict markers in the file and providing access to the old, new, and local versions so the user can easily compare them with some other tool. If you've never used a version control system that makes use of conflict markers, the best way to understand them is through an example. Suppose you have a file in your working copy, hello.c, that looks like this: #include <stdio.h> int main (int argc,char *argv []) { printf ("hello world \n"); return 0; } Then say you change the hello world string to Hello World, and before checking in your changes you update your working copy and find that someone else has already changed that line of the file. The copy of hello.c in your working copy will end up looking something like this: #include <stdio.h> int main (int argc, char *argv []) { <<<<<<<.mine printf ("Hello World \n"); ======= printf ("hello world!\n"); >>>>>>>.r5 return 0; } The <<<<<<<, =======, and >>>>>>> lines are used to indicate which of your changes conflicted. In this case, it means that your version of the section of hello.c that you changed looks like printf ("Hello World \n");, but in a newer version of the file that has already been checked into the repository, that line was changed to printf ("hello world!\n");. Of course, all of this only works if the file in question is in a format that Subversion understands well enough that it can merge the changes automatically. At the moment, that means the file must be textual in nature. Changes to binary files such as image files, sound files, Word documents, and so forth can't be merged automatically. Any conflicts with such files will have to be handled manually by the user. To assist in that merging, Subversion provides you with copies of the original version of the file you checked out, your modified version, and the new version from the repository, so you can compare them using some other tool. Note: Historically, most version control systems were designed to handle plain-text content, for example, a computer program's source code. As a result, they developed formats for storing historical data that were designed with plain text in mind. For example, RCS files work in terms of a textual file, adding or removing lines from the file in each new revision. For a binary file, which doesn't have "lines" at all, this breaks down, so systems based on these formats usually end up dealing with binary data by storing each revision separately, meaning that each time you make a change you use up space in the repository equal to the size of the file you modified. In addition, these systems often include other features, such as keyword replacement or end-of-line conversion, which not only don't make sense in terms of binary files, but also can actually damage them, because a binary file format probably won't survive intact if you replace all instances of $Id$ with a new string, or all the newline bytes with carriage return/linefeed combinations. In addition to helping you handle the situation in which a conflict does occur, the use of a nonlocking model helps in another way: It removes the false sense of security that a locking model gives you. In the majority of cases, when you make a change to one part of a program, the effect of that change isn't isolated to just that file. For example, if you're changing a header file in a C program, you're really affecting all the files that include that header. Locking access to that one file doesn't buy much safety, because your changes can still quite easily conflict with any number of potential changes in other parts of the program. Locking gives you the illusion that it's safe to make changes, but in reality you need the same amount of communication among developers that you'd need in the non-locking mode. Locking just makes it easier to forget that. Now, none of this is meant to imply that the only possible solution to the version control problem is a nonlocking system. There are certainly situations in which locking is a valuable tool, perhaps with files that truly shouldn't be modified except by certain key individuals, or perhaps when you're working with binary files that can't be easily merged. The Subversion developers have recognized the need for a solution to this problem, so in the future the problem will be addressed. For some clues as to what the locking system might be like, you can take a look at the locking-plan.txt file in the notes directory of the Subversion distribution. Unfortunately, the user interface and technical issues of such a feature are complex enough that the feature has been deferred until after Subversion 1.0 is released. Page 1 of 5
http://www.developer.com/java/other/article.php/3499816
CC-MAIN-2017-30
refinedweb
1,677
65.46
Some of my robotics projects take a rather long time do a full build. When I developed applications with Visual C++ on the host, using precompiled headers gave me a big boost in compilation speed. I was looking for the same in similar with GNU and gcc, and as expected: gcc does support precompiled headers too. And indeed, I was able to cut down compilation time by 30% :-). So this post is about how to use gcc with precompiled headers in Eclipse/CDT to give my builds a boost. Outline When the compiler compiles a file (e.g. main.c), it includes all the header files. The total size of the header files can be very large, include other header files/etc. It can be thousands of source lines which then need to be processed by the compiler. And this happens for every single source file. Instead to compile the header files again and again, the idea is that the header files get pre-compiled and stored on disk. Later, when the compiler again tries to include that header file, it is already compiled and needs less compilation time. The basic principle this: - Identify the set of header files heavily used in the application. - Put these header files into a single header file (e.g. “pch.h”). - Include that header file (#include “pch.h”) in my application. - Generated a precompiled version (pch.h.gch) of that header file and place it into the same folder as “pch.h”. - gcc will automatically use the precompiled header file to reduce compilation time. For example I have this (app.c), and the four first includes are used in many other sources files too: #include "Platform.h" #include "MK22F12.h" #include "FreeRTOS.h" #include <stdio.h> #include "app.h" #include "CLS1.h" ... The idea is to have #include "Platform.h" #include "MK22F12.h" #include "FreeRTOS.h" #include <stdio.h> precompiled to reduce compilation time. Obvious candidates for precompilation are huge header files which are included many times in the application. Adding Precompiled Header(s) Inside my project, I create a new folder (e.g. “PCH”) and place two new files into it (“pch.c” and “pch.c”): 💡 Be free to use your own structure and names. I use ‘pch’ to show that it is about precompiled headers. Into pch.h I place the header files I want to precompile: #include "Platform.h" #include "MK22F12.h" #include "FreeRTOS.h" #include <stdio.h> 💡 Only put the header file includes into it, no other stuff. In pch.c only put one line: to include the header file: #include "pch.h" Creating PCH Build Configuration To create the precompiled headers, I’m using a special ‘PCH’ build target. To create a new build configuration, I use the menu Build Configurations > Manage…: With the ‘New’ button I create a new ‘PCH’ configuration: 💡 It is important that this configuration is using the same compiler/build settings! Per-File Option to Create Precompiled Header Right-Click on pch.c to specify special compiler options. Make sure to select the PCH configuration. I’m going to remove the options which are used to generate an object file. Instead, I modify the options to create the precompiled header file with the -x compiler option. From the Command Line pattern *remove* -c ${OUTPUT_FLAG} ${OUTPUT_PREFIX}${OUTPUT} So it looks like this: In the compiler ‘Miscellaneous’ options, add this to create the precompiled header file: -c -x c-header -o "../Sources/PCH/pch.h.gch" - -c: compile file. - -x: generate precompiled header. - c-header: generate precompiled header for C. If compiling in C++ mode, use c++-header instead. - -o: Produce output file. Specify the path/filename for the precompiled header file. The produced output file has to have the same name as the header file “pch.h” with extension “.gch”. The output file has to placed in the same folder as the source header file (pch.h)! Building PCH Build Configuration Then I can use the PCH configuration to create the precompiled header files: And this should create the pch.h.gch file: The PCH project will result in link error, as it cannot find pch.o object file. That’s expected and OK! All what I wanted is to create the precompiled header file. Exclude from Build As I do not need to compile pch.c for my normal build, I can exclude it from the build (context menu on pch.c, then choose Properties): This will exclude it from my normal ‘Debug’ build configuration: Include Path to Precompiled Header Files Make sure that the compiler finds the header file(s): 💡 What I have found really confusing is that the precompiled header needs to be at the location of the normal header file. Yes, I can use the -I option to specify a folder having the precompiled header files. But if the compiler is falling back to use the normal header file (see Considerations below), then it does not find the normal header file any more. This is at least the case for GNU gcc 4.8 and 4.9. Using Precompiled Header Files In my source files I can now remove the ‘precompiled’ headers: and use the precompiled header file instead: The compiler will now use pch.h if there is no precompiled header file present. But if there is header file with the same name, but with .gch extension present at the same location, it will use the precompiled header instead. Verify the Includes To verify what the compiler is using, I can use the -H compiler option: This shows me in the console for each compilation unit what is used: -Winvalid-pch Option Another great diagnostic compiler option is -Winvalid-pch It will tell me if something is wrong with the precompiled header (e.g. compiled with C, but now using in C++ mode). Multiple Precompiled Header Files In above steps I showed how to use a ‘master precompiled header file’. This is the usual approach. What I do often too is to add more precompiled files: pch_FreeRTOS.c creates the precompiled header file ‘FreeRTOS.h’. The .c file only has the include in it: It creates the precompiled header file where the normal FreeRTOS.h is present: That way I can create and use precompiled header files as needed. Important Considerations One important thing with using precompiled header files is: they can get easily out of sync. Make sure that when any of the header files changes, the precompiled headers are re-build. ‘Clean’ won’t work with the approach described above: better delete the *.gch files. Another consideration is create the pre-compiled headers in a pre-build step, but I do that now manually. If using normal make files, it is easier to directly create the pre-compiled headers as part of the build. Additionally, there are several rules how precompiled files are used (see). To make sure that a precompiled header is used: - It must be the first include in the file. - It must be the first preprocessing token/include in the file (comments are ok). - Only one precompiled header file is used for a source file. Examples: This works: /* this is a comment at the beginning of the file */<span></span> #include "pch.h" /* have precompiled header for this one */ While this does not work as the precompiled header is not the first preprocessor token in the file. It will use the normal pch.h header file instead (not the precompiled one): #ifndef MY_DEFINE #include "pch.h" /* have precompiled header for this one */ #endif If having multiple precompiled headers, then only the first one is used. For all the other includes the normal include is used: #include "pch.h" /* have precompiled header for this one */ <span></span>#include "FreeRTOS.h" /* have precompiled header for this one, but will NOT be used as not first one */ One method to easily detect if the precompiled header is used or not is to add an error directive to the pch.h (after the precompiled header has been created): #error "not using precompiled header file" That way I easily can see where the standard header file is still used. Summary Using precompiled headers in Eclipse/CDT does not work out-of-the-box: there is no magic switch like ‘use precompiled headers’. Instead it requires to a trick to generate the precompiled (.gch) header files. Using precompiled headers with gcc requires some careful considerations which include tweaking the source code to make sure that the precompiled headers are used. That extra effort does not pay off for small/simple projects. But if the project uses large and many include files, and is built often then investing in using precompiled headers properly will reduce the build time needed. It took me a while to get things working properly, but it has well paid off :-). Happy Precompiling 🙂 Links - GNU documentation about using precompiled header files: - Using precompiled headers: - Original idea I borrowed from: Erich, Does this approach eliminate the ability to debug as well as the auto-completion feature of the IDE? Hi Joe, no, that’s not affected at all. Great Article! Thank you. unfortunatelly I tried this and failed on PCH creation step. PCH not compiles. I am in PCH config and try to build with no luck. It seems that options we remove conflict with boost library i use, but i am researching the case… Your very first mention of pch.h says ‘pch.c’. Indeed, thanks for catching. Fixed now :-).
https://mcuoneclipse.com/2015/09/05/using-precompiled-headers-with-gnu-gcc-in-eclipse/
CC-MAIN-2021-49
refinedweb
1,581
67.04
At 03:58 AM 4/17/2009 +0000, glyph at divmod.com wrote: >Just as a use-case: would the Java "com.*" namespace be an example >of a "pure package with no base"? i.e. lots of projects are in it, >but no project owns it? Er, I suppose. I was thinking more of the various 'com.foo' and 'org.bar' packages as being the pure namespaces in question. For Python, a "flat is better than nested" approach seems fine at the moment. >Just to clarify things on my end: "namespace package" to *me* means >"package with modules provided from multiple distributions (the >distutils term)". The definition provided by the PEP, that a >package is spread over multiple directories on disk, seems like an >implementation detail. Agreed. . True... except that part of the function of the PEP is to ensure that if you install those separately-distributed modules to the same directory, it still needs to work as a package and not have any inter-package file conflicts. . Y.*. Well, aside from twisted.plugins, I wasn't aware of anybody in Python doing that... and as I described, I never really interpreted that through the lens of "namespace package" vs. "plugin finding". >Right now it just says that it's a package which resides in multiple >directories, and it's not made clear why that's a desirable feature. Good point; perhaps you can suggest some wording on these matters to Martin? . Yes. Thanks for taking the time to participate in this and add another viewpoint to the mix, not to mention clarifying some areas where the PEP could be clearer.
https://mail.python.org/pipermail/python-dev/2009-April/088841.html
CC-MAIN-2014-15
refinedweb
271
65.52
Advertisement This is the best tutorial for any types of arithem This is the best tutorial for any types of arithematical calculations. nice very nice nice very nice Java - Event Listeners Example in Java Applet good Can be modified This is a good program. Simple and easy to understand. The code could even be modified to make a simple Java calculator. Thanks for sharing. =) About code Where must I put the "EventListener" file in order to embed it in html? compliments nice & easy to learn program.this site is best for java & other PLs. Types of JDBC drivers is native code not written in java. 3.The connection occurs as follows... to be installed on client machine. 3.Not suitable for applet , due to the installation... need libraries installed at client site. For example, we need " Applet - Passing Parameter in Java Applet ;Welcome in Passing parameter in java applet example."> <... like Welcome in Passing parameter in java applet example. Alternatively you... Applet - Passing Parameter in Java Applet   how to run applet - Applet :// Hope that it will be helpful for you.Even... in applet program. this is executed successfully with appletviewer command > Beginners Java Tutorial factorial of any given number This Java programming tutorial will teach you... Write to File Example In the section of Java Tutorial you will learn how... Directory Example In the section of Java Tutorial you will learn how to write what is applet? Hi Friend, Please visit the following link: Thanks applet - Applet information,visit the following link: Thanks...*; import java.awt.*; public class CreateTextBox extends Applet implements Java Tutorial to learn java programming language from scratch then this the best place to start... in the Java. Java is platform independent programming language and can be run on any... tutorial. Then learn our Master Java In A Week tutorial
http://roseindia.net/tutorialhelp/allcomments/13
CC-MAIN-2015-40
refinedweb
303
50.94
. Facolt di Ingegneria PARALLEL PORT SHARK PROJECTCOMUNICAZIONE TRA PERSONAL COMPUTER TRAMITE PORTA PARALLELA APPENDICE Documentazione raccolta da Internet Interfacing to the IBM-PC Parallel Printer PortThe original IBM-PC's Parallel Printer Port had a total of 12 digital outputs and 5 digital inputs accessed via 3 consecutive 8-bit ports in theprocessor's I/O space. Various enhanced versions of the original specification have been introduced over the years Bi-directional (PS/2) Enhanced Parallel Port (EPP) Extended Capability Port (ECP)so now the original is commonly referred to as the Standard Parallel Port (SPP) IBM originally supplied three adapters that included a parallel printer port for its PC/XT/AT range of microcomputers. Depending on whichwere installed, each available parallel port's base address in the processor's I/O space would be one of 278, 378 and 3BC (all Hex). Most (All?) contemporary PCs, shipped with a single parallel printer port, seem to have the base address at 378 Hex. The PC parallel port adapter is specifically designed to attach printers with a parallel port interface, but it can be used as a generalinput/output port for any device or application that matches its input/output capabilities. It has 12 TTL-buffer output points, which arelatched and can be written and read under program control using the processor In or Out instruction. The adapter also has five steady-stateinput points that may be read using the processor's In instruction. In addition, one input can also be used to create a processor interrupt. This interrupt can be enabled and disabled under program control.Reset from the power-on circuit is also ORed with a program output point, allowing a device to receive a power-on reset when the processorin reset. The input/output signals are made available at the back of the adapter through a right-angled, PCB-mounted, 25-pin, D-type femaleconnector. This connector protudes through the rear panel of the system, where a cable may be attached. When this adapter is used to attach a printer, data or printer commands are loaded into an 8-bit, latched, output port, and the strobe line isactivated, writing data to the printer. The program then may read the input ports for printer status indicating when the next character can bewritten, or it may use the interrupt line to indicate "not busy" to the software. The printer adapter responds to five I/O instructions: two output and three input. The output instructions transfer data into two latches whoseoutputs are presented on the pins of a 25-pin D-type female connector. Two of the three input instructions allow the processor to read back the contents of the two latches. The third allows the processor to read therealtime status of a group of pins on the connector. This command presents the processor with data present on the pins associated with the corresponding output address. This should normallyreflect the exact value that was last written. If an external device should be driving data on these pins (in violation of usage groundrules) atthe time of an input, this data will be ORed with the latch contents. This command presents realtime status to the processor from the pins as follows. Bit 7 6 5 4 3 2 1 0 Pin 11 10 12 13 15 - - -Input from address 27A/37A/3BE HexThis instruction causes the data present on pins 1, 14, 16, 17 and the IRQ bit to be read by the processor. In the absence of external driveapplied to these pins, data read by the processor will exactly match data last written to the corresponding output address in the same bitpositions. Note that data bits 0-2 are not included. If external drivers are dotted to these pins, that data will be ORed with data applied to thepins by the output latch. Bit 7 6 5 4 3 2 1 0 Pin - - - IRQ ~17 16 ~14 ~1 enable Pinouts Register DB-25 I/O Signal Name Bit Pin Direction =========== ======== ===== ========= -Strobe C0 1 Output +Data Bit 0 D0 2 Output +Data Bit 1 D1 3 Output +Data Bit 2 D2 4 Output +Data Bit 3 D3 5 Output +Data Bit 4 D4 6 Output +Data Bit 5 D5 7 Output +Data Bit 6 D6 8 Output +Data Bit 7 D7 9 Output -Acknowledge S6 10 Input +Busy S7 11 Input +Paper End S5 12 Input +Select In S4 13 Input -Auto Feed C1 14 Output -Error S3 15 Input -Initialize C2 16 Output -Select C3 17 Output Ground - 18-25 - (Note again that the S7, C0, C1 & C3 signals are inverted) IBM-PC Parallel Printer Port Female DB-25 Socket external Pin layout ______________________________________________________ / \ \ 13 12 11 10 9 8 7 6 5 4 3 2 1 / \ / \ 25 24 23 22 21 20 19 18 17 16 15 14 / \________________________________________________/ So it's also the Pin layout on the solder side of the Male DB-25 Cable Connector that plugs into it MS-QBasic Watcom C/C++or you can use DebugFor full details, refer to the relevant manuals. Randy Rasa has written a PC Printer Port I/O Module for Borland C/C++ v3.1 which provides the low-level control of the port, implementing code to control 12 outputs and read 5 inputs. Kyle A. York wote an article on High-Speed Transfers on a PC Parallel Port for the C/C++ Users Journal. The accompanying source files (york.zip) are included in the November 1996 ZIPped file in their Code Archive.Note:I have no personal experience of I/O port access in Windows 95/NT or linux (so please don't ask). MS-DOS is sufficient for (prototyping)the kinds of control systems I am interested in.However, I recommend our students use the DriverLINX Port I/O Driver for Win95 and WinNT, provided without charge by ScientificSoftware Tools, Inc. Here's the README file and a local copy (1.5MB) of the package. Jan Axelson's Parallel Port Central includes information on programming I/O Port access under MS-Windows Vincent Himpe's free WINio / WIN95io DLL restores the INP and OUT functions missing from Visual Basic. Dale Edgar's PortIO95 is a Windows 95 VxD that provides a simple Application Programming Interface to the PC Parallel Port (free for non-commercial use). Fred Bulback's free IO16.DLL / IO.DLL provide I/O Port access for Windows 3.x / 95 SoftCircuits' free Programming Tools and Libraries include vbasm.zip, a 16-bit DLL that provides a range of functionality including Port I/O, and win95io.zip, a tiny DLL that allows Port I/O under Windows 95. Dan Hoehnen's Port16 / Port32 are shareware OCXs that add I/O port access capability to Visual Basic. Rob Woudsma's IOPORT/NTPORT are shareware OCXs for Visual Basic under Windows 95/NT Herve Couplet has sent me an example of I/O Port Access in Borland C++ Builder 1.0 Cooperative Knowledge, Inc. has some Technical Papers that include tutorials on writing DLLs by Glenn D. Jones local copy (text) of Riku Saikkonen's Linux I/O port programming mini-HOWTO (HTML) QBasic provides access to the I/O ports on the 80x86 CPU via the INP function and the OUT statement.INP(portid) ' returns a byte read from the I/O port portid OUT portid, value ' writes the byte value to the I/O port portidportid can be any unsigned integer in the range 0-65535. value is in the range 0-255.pdata = &H378status = &H379control = &H37A Turbo Pascal provides access to the I/O ports on the 80x86 CPU via two predefined arrays, Port and PortWvar Port: array[0..65535] of byte; Turbo C and Borland C/C++ provide access to the I/O ports on the 80x86 CPU via the predefined functions inportb / inport andoutportb / outport.int inportb(int portid); /* returns a byte read from the I/O port portid */ int inport(int portid); /* returns a word read from the I/O port portid */ Microsoft Visual C/C++ provides access to the I/O ports on the 80x86 CPU via the predefined functions _inp / _inpw and _outp / _outpw. int _inp(unsigned portid); /* returns a byte read from the I/O port portid */ unsigned _inpw(unsigned portid); /* returns a word read from the I/O port portid */ int _outp(unsigned portid, /* writes the byte value to the I/O port portid */ int value); /* returns the data actually written */ unsigned _outpw(unsigned portid, /* writes the word value to the I/O port portid */ unsigned value); /* returns the data actually written */portid can be any unsigned integer in the range 0-65535#include <conio.h> /* required only for function declarations */ Watcom C provides access to the I/O ports on the 80x86 CPU via the predefined functions inp / inpw and outp / outpw. unsigned int inp(int portid); /* returns a byte read from the I/O port portid */ unsigned int inpw(int portid); /* returns a word read from the I/O port portid */ unsigned int outp(int portid, /* writes the byte value to the I/O port portid */ int value); /* returns the data actually written */ unsigned int outpw(int portid, /* writes the word value to the I/O port portid */ unsigned int value); /* returns the data actually written */portid can be any unsigned integer in the range 0-65535#include <conio.h> While not strictly an embedded application, the standard PC printer port is handy for testing and controlling devices. It provides an easy wayto implement a small amout of digital I/O. I like to use to during initial development of a product -- before the "real" hardware is ready, I candummy up a circuit using the printer port, and thus get started testing my software. PC to PC file transfer ObjectiveTo provide a facility for file transfer between two PCs connected via their parallel printer ports. DescriptionAlthough the IBM-PC parallel printer port is intended for output only, there are enough input lines available for 4-bit I/O, with handshaking,so data bytes can be transferred half at a time. This section is implemented as a multilevel document. This page serves as an executive summary of the 1284 standard. Byclicking on the various highlighted points, you may explore each concept in greater detail. The recently released standard, "IEEE Std.1284-1994 Standard Signaling Method for a Bi-directional Parallel PeripheralInterface for Personal Computers", is for the parallel port what the Pentium processor is to the 286. The standard provides forhigh speed bi-directional communication between the PC and an external peripheral that can communicate 50 to 100 timesfaster than the original parallel port. It can do this and still be fully backward compatible with all existing parallel port peripheralsand printers. The 1284 standard defines 5 modes of data transfer. Each mode provides a method of transferring data in either the forwarddirection (PC to peripheral), reverse direction (peripheral to PC) or bi-directional data transfer (half duplex). The defined modesare: Bi-directional EPP Enhanced Parallel Port- used primarily by non-printer peripherals, CD ROM, tape, hard drive, network adapters, etc.... ECP Extended Capability Port- used primarily by new generation of printers and scannersAll parallel ports can implement a bi-directional link by using the Compatible and Nibble modes for data transfer. Byte mode canbe utilized by about 25% of the installed base of parallel ports. All three of these modes utilize software only to transfer the data.The driver has to write the data, check the handshake lines (i.e.: BUSY), assert the appropriate control signals (i.e.: STROBE)and then go on to the next byte. This is very software intensive and limits the effective data transfer rate to 50 to 100 Kbytes persecond. In addition to the previous 3 modes, EPP and ECP are being implemented on the latest I/O controllers by most of the Super I/Ochip manufacturers. These modes use hardware to assist in the data transfer. For example, in EPP mode, a byte of data can betransferred to the peripheral by a simple OUT instruction. The I/O controller handles all the handshaking and data transfer to theperipheral. 2. A method for the host and peripheral to determine the supported modes and to negotiate to the requested mode. o Cables o Connectors o Drivers/Receivers o Termination o Impedance In summary, the 1284 parallel port provides an easy to use, high performance interface for portable products and printers. The Linux 2.4 Parallel Port Subsystem<<< Previous Next >>> Design goals The problems The first parallel port support for Linux came with the line printer driver, lp. The printer driver is a character special device, and (in Linux2.0) had support for writing, via write, and configuration and statistics reporting via ioctl. The printer driver could be used on any computer that had an IBM PC-compatible parallel port. Because some architectures have parallelports that aren't really the same as PC-style ports, other variants of the printer driver were written in order to support Amiga and Atariparallel ports. When the Iomega Zip drive was released, and a driver written for it, a problem became apparent. The Zip drive is a parallel port device thatprovides a parallel port of its own---it is designed to sit between a computer and an attached printer, with the printer plugged into the Zipdrive, and the Zip drive plugged into the computer. The problem was that, although printers and Zip drives were both supported, for any given port only one could be used at a time. Only one ofthe two drivers could be present in the kernel at once. This was because of the fact that both drivers wanted to drive the same hardware---theparallel port. When the printer driver initialised, it would call the check_region function to make sure that the IO region associated withthe parallel port was free, and then it would call request_region to allocate it. The Zip drive used the same mechanism. Whicheverdriver initialised first would gain exclusive control of the parallel port. The only way around this problem at the time was to make sure that both drivers were available as loadable kernel modules. To use theprinter,parallel port. A better solution was needed.Zip drives are not the only devices that presented problems for Linux. There are other devices with pass-through ports, for example parallelport CD-ROM drives. There are also printers that report their status textually rather than using simple error pins: sending a command to theprinter can cause it to report the number of pages that it has ever printed, or how much free memory it has, or whether it is running out oftoner, and so on. The printer driver didn't originally offer any facility for reading back this information (although Carsten Gross added nibblemode readback support for kernel 2.2). The IEEE has issued a standards document called IEEE 1284, which documents existing practice for parallel port communications in avariety of modes. Those modes are: "compatibility", reverse nibble, reverse byte, ECP and EPP. Newer devices often use the more advancedmodes of transfer (ECP and EPP). In Linux 2.0, the printer driver only supported "compatibility mode" (i.e. normal printer protocol) andreverse nibble mode. Universal Serial Bus Embedded Linux Legacy Ports Device Drivers Miscellaneous Table of Contents The Extended Capabilities Mode was designed by Hewlett Packard and Microsoft to be implemented as the Extended Capabilities Port Protocol and ISA Interface Standard. This protocol uses additional hardware to generate handshaking signals etc just like the EPP mode, thus runs at very much the same speed than the EPP mode. This mode, however may work better under Windows as it can use DMA channels to move it's data about. It also uses a FIFO buffer for the sending and/or receiving of data. Another feature of ECP is a real time data compression. It uses Run Length Encoding (RLE) to achieve data compression ratio's up to 64:1. This comes is useful with devices such as Scanners and Printers where a good part of the data is long strings which are repetitive. The Extended Capabilities Port supports a method of channel addressing. This is not intended to be used to daisy chain devices up but rather to address multiple devices within one device. Such an example is many fax machines on the market today which may contain a Parallel Port to interface it to your computer. The fax machine can be split up into separate devices such as the scanner, modem/Fax and printer, where each part can be addresses separately, even if the other devices cannot accept data due to full buffers. ECP Hardware Properties While Extended Capabilities Printer Ports use exactly the same D25 connector as your SPP, ECP assigns different tasks to each of the pins, just like EPP. This means that there is also a different handshake method when using a ECP interface. The ECP is backwards compatible to the SPP and EPP. When operating in SPP mode, the individual lines operate in exactly the same fashion than the SPP and thus are labeled Strobe, Auto Linefeed, Init, Busy etc. When operating in EPP mode, the pins function according to the method described in the EPP protocol and have a different method of Handshaking. When the port is operating in ECP mode, then the following labels are assigned to each pin. Pin SPP Signal ECP Signal IN/OUT Function 1 Strobe HostCLK OutA low on this line indicates, that there is valid data at the host. When this pin is de-asserted, the +ve clock edge should beused to shift the data into the device. 2-9 Data 0-7 Data 0-7 In/OutData Bus. Bi-directional 10 Ack PeriphCLK InA low on this line indicates, that there is valid data at the Device. When this pin is de-asserted, the +ve clock edge shouldbe used to shift the data into the Host. 11 Busy PeriphAck InWhen in reverse direction a HIGH indicates Data, while a LOW indicates a Command Cycle.In forward direction, functions as PeriphAck. 12 Paper Out / End nAckReverse InWhen Low, Device acknowledges Reverse Request. 13 Select X-Flag InExtensibility Flag 14 Auto Linefeed Host Ack OutWhen in forward direction a HIGH indicates Data, while a LOW indicates a Command Cycle.In reverse direction, functions as HostAck. 15 Error / Fault PeriphRequest InA LOW set by the device indicates reverse data is available 16 Initialize nReverseRequest OutA LOW indicates data is in reverse direction 17 Select Printer 1284 Active OutA HIGH indicates Host is in 1284 Transfer Mode. Taken low to terminate. 18-25 Ground Ground GNDGround A command cycle can be one of two things, either a RLE count or an address. This is determined by the bit 7 (MSB) of the data lines, ie Pin 9. If bit 7 is a 0, then the rest of the data (bits 0-6) is a run length count which is used with the data compression scheme. However if bit 7 is a 1, then the data present on bits 0 to 6 is a channel address. With one bit missing this can only be a value from 0 to 127(DEC).The ECP Handshake The ECP handshake is different to the SPP handshake. The most obvious difference is that ECP has the ability at anytime to transmit data in any direction, thus additional signaling is required. Below is the ECP handshake for both the Forward and Reverse Directions. ECP Forward Data Cycle If we look back at the SPP Handshake you will realize it only has 5 steps, As briefly discussed earlier, the ECP Protocol includes a Simple Compression Scheme called Run Length Encoding. It can support a maximum compression ratio of 64:1 and works by sending repetitive single bytes as a run count and one copy of the byte. The run count determines how many times the following byte is to be repeated. For example, if a string of 25 'A's were to be sent, then a run count byte equal to 24 would be sent first, followed by the byte 'A'. The receiving peripheral on receipt of the Run Length Count, would expand (Repeat) the next byte a number of times determined via the run count. The Run Length Byte has to be distinguished from other bytes in the Data Path. It is sent as a Command to the ECP's Address FIFO Port. Bytes sent to this register can be of two things, a Run Length Count or an Address. These are distinguished by the MSB, Bit 7. If Bit 7 is Set (1), then the other 7 bits, bits 0 to 6 is a channel address. If Bit 7 is Reset (0), then the lower 7 bits is a run length count. By using the MSB, this limits channel Addresses and Run Length Counts to 7 Bits (0 - 127).ECP Software Registers The table below shows the registers of the Extended Capabilities Port. The first 3 registers are exactly the same than with the Standard Parallel Port registers. Note should be taken, however, of the Enable Bi-Directional Port bit (bit 5 of the Control Port.) This bit reflects the direction that the ECP port is currently in, and will effect the FIFO Full and FIFO Empty bits of the ECR Register, which will be explained later. Address Port Name Read/Write Base + 0 Data Port (SPP) Write Base + 1 Status Port (All Modes) Read/Write Base + 2 Control Port (All Modes) Read/Write Base + 400h Data FIFO (Parallel Port FIFO Mode) Read/Write Base + 401h Configuration Register B (Configuration Mode) Read/Write Base + 402h Extended Control Register (Used by all modes) Read/Write The most important register with a Extended Capabilities Parallel Port is the Extended Control Register (ECR) thus we will target it's operation first. This register sets up the mode in which the ECP will run, plus gives status of the ECP's FIFO among other things. You will find the contents of this register below, in more detail.On some chipsets, this mode will enable EPP to be used. While on others, this mode is still reserved. ReservedCurrently Reserved Configuration ModeIn this mode, the two configuration registers, cnfgA & cnfgB become available at their designated Register Addresses. As outlined above, when the port is set to operate in Standard Mode, it will behave just like a Standard Parallel Port (SPP) with no bi-directional data transfer. If you require bi-directional transfer, then set the mode to Byte Mode. The Parallel Port FIFO mode and ECP FIFO mode both use hardware to generate the necessary handshaking signals. The only difference between each mode is that The Parallel Port FIFO Mode uses SPP handshaking, thus can be used with your SPP printer. ECP FIFO mode uses ECP handshaking. The FIFO test mode can be used to test the capacity of the FIFO Buffers as well as to make sure they function correctly. When in FIFO test mode, any byte which is written to the TEST FIFO (Base + 400h) is placed into the FIFO buffer and any byte which is read from this register is taken from the FIFO Buffer. You can use this along with the FIFO Full and FIFO Empty bits of the Extended Control Register to determine the capacity of the FIFO Buffer. This should normally be about 16 Bytes deep. The other Bits of the ECR also play an important role in the operation of the ECP Port. The ECP Interrupt Bit, (Bit 4) enables the use of Interrupts, while the DMA Enable Bit (Bit 3) enables the use of Direct Memory Access. The ECP Service Bit (Bit 2) shows if an interrupt request has been initiated. If so, this bit will be set. Resetting this bit is different with different chips. Some require you to Reset the Bit, E.g. Write a Zero to it. Others will reset once the Register has been read. The FIFO Full (Bit 1) and FIFO Empty (Bit 0) show the status of the FIFO Buffer. These bits are direction dependent, thus note should be taken of the Control Register's Bit 5. If bit 0 (FIFO Empty) is set, then the FIFO buffer is completely empty. If Bit 1 is set then the FIFO buffer is Full. Thus, if neither bit 0 or 1 is set, then there is data in FIFO, but is not yet full. These bits can be used in FIFO Test Mode, to determine the capacity of the FIFO Buffer. Configuration Register A is one of two configuration registers which the ECP Port has. These Configuration Registers are only accessible when the ECP Port is in Configuration Mode. (See Extended Control Register) CnfgA can be accessed at Base + 400h. Bit Function 7 1Interrupts are level triggered 0Interrupts are edge triggered (Pulses) 6:4 00hAccepts Max. 16 Bit wide words 01hAccepts Max. 8 Bit wide words 02hAccepts Max. 32 Bit wide words 03h:07hReserved for future expansion 3Reserved 2Host Recovery : Pipeline/Transmitter Byte included in FIFO? 0In forward direction, the 1 byte in the transmitter pipeline doesn't affect FIFO Full. 1In forward direction, the 1 byte in the transmitter pipeline is include as part of FIFO Full. 1:0Host Recovery : Unsent byte(s) left in FIFO 00Complete Pword 011 Valid Byte 102 Valid Bytes 113 Valid Bytes The 3 LSB's are used for Host Recovery. In order to recover from an error, the software must know how many bytes were sent, by determining if there are any bytes left in the FIFO. Some implementations may include the byte sitting in the transmitter register, waiting to be sent as part of the FIFO's Full Status, while others may not. Bit 2 determines weather or not this is the case. The other problem is that the Parallel Ports output is only 8 bits wide, and that you many be using 16 bit or 32 bit I/O Instructions. If this is the case, then part of your Port Word (Word you sent to port) may be sent. Therefore Bits 0 and 1 give an indication of the number of valid bytes still left in the FIFO, so that you can retransmit these. ECP's Configuration Register B (cnfgB) Configuration Register B, like Configuration Register A is only available when the ECP Port is in Configuration Mode. When in this mode, cnfgB resides at Base + 401h. Below is the make-up of the cnfgB Register. Bit(s) Function 7 1Compress outgoing Data Using RLE 0Do Not compress Data 6Interrupt Status - Shows the Current Status of the IRQ Pin 5:3Selects or Displays Status of Interrupt Request Line. 000Interrupt Selected Via Jumper 001IRQ 7 010IRQ 9 011IRQ 10 100IRQ 11 101IRQ 14 110IRQ 15 111IRQ 5 2:0Selects or Displays Status of the DMA Channel the Printer Card Uses 000Uses a Jumpered 8 Bit DMA Channel 001DMA Channel 1 010DMA Channel 2 011DMA Channel 3 100Uses a Jumpered 16 Bit DMA Channel 101 DMA Channel 5 110 DMA Channel 6 111 DMA Channel 7 Bit 7 of the cnfgB Register selects whether to compress outgoing data using RLE (Run Length Encoding.) When Set, the host will compress the data before sending. When reset, data will be sent to the peripheral raw (Uncompressed). Bit 6 returns the status of the IRQ pin. This can be used to diagnose conflicts as it will not only reflect the status of the Parallel Ports IRQ, but and other device using this IRQ. Bits 5 to 3 give status of about the Port's IRQ assignment. Likewise for bits 2 to 0 which give status of DMA Channel assignment. As mentioned above these fields may be read/write. The disappearing species of Parallel Cards which have Jumpers may simply show it's resources as "Jumpered" or it may show the correct Line Numbers. However these of course will be read only. Copyright 1997-2001 Craig Peacock - 19th August 2001. Hardware Imagine you are looking at the back of your pc, and that the parallel port socket is horizontal, with the long row of socket on top. Thenumbers of the sockets at the ends of the rows are... 13 . . . . . . . . . . 1 25 . . . . . . 14 (See below for where things are to be found on the connector at the end of the cable normally plugged into a printer.) The 'interesting' pinsare: Data bits 0-7: Pins 2 to 9, respectively. If you write to address 888 (decimal), you should see the outputs on those pins change. (The addressis different in some circumstances, but try 888. In Borland's Pascal: port[888]:=254 would set all bits but the first one high.) Pins 18-25: Signal ground. (I.e. for a VERY simple experiment, connect an LED to pin2, a 680ohm resistor to the LED, and then the otherend of the LED to pin 19. If it doesn't work... try turning the LED around!) Inputs: If you read address 889, you can discover the state of 5 pins. They determine the state of bits 3-7 of 889. bTmp:=port[889] is the 'raw'Pascal you need. Obviously, you do clever things with the result of that. The bits are mapped and named as follows:Bit Pin Name3 15 Error4 13 Select In5 12 Paper Empty6 10 Acknowledge7 11 Busy(A trap for the unwary... 'Busy' is inverted 'just inside' the computer. Thus if you apply a '1' to all of the pins, you'll see 01111xxx when youread 889! Isn't computing fun?) Before turning to more generally useful things, I might as well finish off the other pins.... Universal Serial Bus Embedded Linux Legacy Ports Device Drivers Miscellaneous Table of Contents The Parallel Port is the most commonly used port for interfacing home made projects.. For more information on Serial RS-232 Ports See bi-directional-Type inthe near future. Pin No (D-Type 25) Pin No (Centronics) SPP Signal Direction In/out Register Hardware Inverted 1 1 nStrobe In/Out Control Yes 2 2 Data 0 Out Data 3 3 Data 1 Out Data 4 4 Data 2 Out Data 5 5 Data 3 Out Data 6 6 Data 4 Out Data 7 7 Data 5 Out Data 8 8 Data 6 Out Data 9 9 Data 7 Out Data 10 10 nAck In Status 11 11 Busy In Status,uS Notes: 3BCh - 3BFhUsed for Parallel Ports which were incorporated on to Video Cards - Doesn't support ECP addresses 378h - 37FhUsual Address For LPT 1 278h - 27FhUsual Address For LPT 2LPT1's Base Address 0000:040ALPT2's Base Address 0000:040CLPT3's Base Address 0000:040ELPT4's Base Address (Note 1); Offset Name Read/Write Bit No. Properties Base + 0 Data Port Write (Note-1) Bit 7 Data 7 Bit 6 Data 6 Bit 5 Data 5 Bit 4 Data 4 Bit 3 Data 3 Bit 2 Data 2 Bit 1 Data 1 Bit 0register is normally a write only port. If you read from the port, you should get the last byte sent. However if your port is bi-directional, you canreceive data on this address. See Bi-directional Ports for more detail. Offset Name Read/Write Bit No. Properties Base + 1 Status Port Read Only Bit 7 Busy Bit 6 Ack Bit 5 Paper Out Bit 4 Select In Bit 3 Error Bit 2 IRQ (Not) Bit 1 Reserved Bit 0 Reserved Base + 2 Control Port Read/Write Bit 7 Unused Bit 6 Unused Bit 5 Enable Bi-Directional Port Bit 4 Enable IRQ Via Ack Line Bit 3 Select Printer Bit 2 Initialize Printer (Reset) Bit 1 Auto Linefeed Bit 0 Strobe The printer would not send a signal to initialize the computer, nor would it tell the computer to use auto linefeed. However these four outputs canalso be used for inputs. If the computer has placed a pin high (e.g. +5v) and your device wanted to take it low, you would effectively short outthe port, causing a conflict on that pin. Therefore these lines are "open collector" outputs (or open drain for CMOS devices). This means that ithascan mode is the preferred way of reading 8 bits of data without placing the port in reverse mode and using the data lines. Nibble mode uses a Quad 2 line to 1 line multiplexer to read a nibble of data at a time. Then it "switches" to the other nibble and reads its. Software can then be used to construct the two nibbles into a byte. The only disadvantage of this technique is that it is slower. It now requires a few I/O instructions to read the one byte, and it requires the use of an external IC. The operation of the 74LS157, Quad 2 line to 1 line multiplexer is quite simple. It simply acts as four switches. When the A/B input is low, the A inputs are selected. E.g. 1A passes through to 1Y, 2A passes through to 2Y etc. When the A/B is high, the B inputs are selected. The Y outputs are connected up to the Parallel Port's status port, in such a manner that it represents the MSnibble of the status register. While this is not necessary, it makes the software easier. To use this circuit, first we must initialize the multiplexer to switch either inputs A or B. We will read the LSnibble first, thus we must place A/B low. The strobe is hardware inverted, thus we must set Bit 0 of the control port to get a low on Pin 1. outportb(CONTROL, inportb(CONTROL) | 0x01); /* Select Low Nibble (A)*/ Once the low nibble is selected, we can read the LSnibble from the Status Port. Take note that the Busy Line is inverted, however we won't tackle it just yet. We are only interested in the MSnibble of the result, thus we AND the result with 0xF0, to clear the LSnibble.. void main(void){ int c; int intno; /* Interrupt Vector Number */ int picmask; /* PIC's Mask */Wiring."); } bi-directional.This will enable EPP Mode, if available. Under BIOS, if ECP mode is set then it's more than likely, this mode is not an option. However ifBIOS is set to ECP and EPP1.x Mode, then EPP 1.x will be enabled. - Under Microsoft's Extended Capabilities Port Protocol and ISAInterface Standard this mode is Vendor Specified. ReservedCurrently Reserved. - Under Microsoft's Extended Capabilities Port Protocol and ISA Interface Standard this mode is Vendor Specified. Configuration ModeIn this mode, the two configuration registers, cnfgA & cnfgB become available at. PDF Version This page, Interfacing the Standard Parallel Port is also avaliable in PDF (Portable Document Format), Introduction (pins 18-25). Here is the pinout. The normal function of the port is to transfer data to a parallel printer through the eight data pins, using the remaining signals as flow control and miscellaneous controls and indications. A standard port does this using the Centronics parallel interface standard. The original port was implemented with TTL/LS logic. Modern ports are implemented in an ASIC (application-specific integrated circuit) or a combined serial/parallel port chip, but are backward compatible. Many modern ports are bidirectional and may have extended functionality. The body of this document applies only to standard ports and PS/2 ports. Addressing ConventionsThe video card's parallel port is normally at 3BCh. This address is the first to be checked by the BIOS, so if a port exists there, it will becomeLPT1. The BIOS then checks at 378h, then at 278h. I know of no standard address for a fourth port.Direct Hardware AccessA parallel port consists of three 8-bit registers at adjacent addresses in the processor's I/O space. The registers are defined relative to the I/Obase address, and are at IOBase+0, IOBase+1 and IOBase+2 (for example if IOBase is 3BCh, then the registers are at 3BCh, 3BDhand 3BEh). Always use 8-bit I/O accesses on these registers.Data RegisterThe data register is at IOBase+0. It may be read and written (using the IN and OUT instructions, or inportb() and outportb() orinp() and outp()). Writing a byte to this register causes the byte value to appear on the data signals, on pins 2 to 9 inclusive of the D-subconnector (unless the port is bidirectional and is set to input mode). The value will remain latched and stable until a different value iswritten to the data register. Reading this register yields the state of the data signal lines at the time of the read access.Data register: LPTBase+0, read/write, driven by software (driven by hardware in input mode) 7 6 5 4 3 2 1 0 Name Pin Buffer Bit value '0' meaning Bit value '1' meaning * . . . . . . . D7 9 True Pin low; data value '0' Pin high; data value '1' . * . . . . . . D6 8 True Pin low; data value '0' Pin high; data value '1' . . * . . . . . D5 7 True Pin low; data value '0' Pin high; data value '1' . . . * . . . . D4 6 True Pin low; data value '0' Pin high; data value '1' . . . . * . . . D3 5 True Pin low; data value '0' Pin high; data value '1' . . . . . * . . D2 4 True Pin low; data value '0' Pin high; data value '1' . . . . . . * . D1 3 True Pin low; data value '0' Pin high; data value '1' . . . . . . . * D0 2 True Pin low; data value '0' Pin high; data value '1'Status RegisterThe status register is at IOBase+1. It is read-only (writes will be ignored). Reading the port yields the state of the five status input pins onthe parallel port connector at the time of the read access:Status register: LPTBase+1, read-only, driven by hardware 7 6 5 4 3 2 1 0 Name Pin Buffer Bit value '0' meaning Bit value '1' meaning * . . . . . . . BUSY 11 Inverted Pin high; printer is busy Pin low; printer is not busy . * . . . . . . -ACK 10 True Pin low; printer is asserting -ACK Pin high; printer is not asserting -ACK . . * . . . . . NOPAPER 12 True Pin low; printer has paper Pin high; printer has no paper . . . * . . . . SELECTED 13 True Pin low; printer is not selected Pin high; printer is selected . . . . * . . . -ERROR 15 True Pin low; printer error condition Pin high; printer no-error condition . . . . . * * * UndefinedNote: Signal names which start with '-' are electrically active-low. For example the '-ERROR' signal indicates that an error is presentwhen it is low, and that no error is present when it is high. Signal names without a leading '-' are electrically active-high. Control RegisterThe control register is at IOBase+2. It can be read and written. Bits 7 and 6 are unimplemented (when read, they yield undefined values,often 1,1, and when written, they are ignored). Bit 5 is also unimplemented on the standard parallel port, but is a normal read/write bit on thePS/2 port. Bit 4 is a normal read/write bit. Bits 3, 2, 1 and 0 are special - see the following section.Control register: LPTBase+2, read/write (see below), driven by software and hardware (see below) 7 6 5 4 3 2 1 0 Name Pin Buffer Bit value '0' meaning Bit value '1' meaning * * . . . . . . Unused - - (undefined on read, ignored on write) . . * . . . . . Input mode - - Normal (output) mode Input mode (PS/2 ports only) . . . * . . . . Interrupt enable - - IRQ line driver disabled IRQ line driver enabled . . . . * . . . -SELECT 17 Inverted Pin high; not selected Pin low; printer selected . . . . . * . . -INITIALIZE 16 True Pin low; initializes printer Pin high; does not initialize printer . . . . . . * . -AUTOFEED 14 Inverted Pin high; no auto-feed Pin low; auto-feed enabled . . . . . . . * -STROBE 1 Inverted Pin high; -STROBE inactive Pin low; -STROBE activeNote: As described for the status register, signal names which start with '-' are electrically active-low. If you are using this technique, the control register is not strictly 'read/write', because you may not read what you write (or wrote). For experimenters, the interrupt facility is useful as a general-purpose externally triggerable interrupt input. Beware though, not all cardssupport the parallel port interrupt. The actual IRQ number is either hard-wired (by convention, the port at 3BCh uses IRQ7) or jumper-selectable (IRQ5 is a commonalternative). Sound cards, in particular, tend to use IRQ7 for their own purposes. To use the IRQ you must also enable the interrupt via the interrupt mask register in the interrupt controller, at I/O address 21h, and yourinterrupt handler must send an EOI on exit. DOS technical programming references have notes on writing interrupt handlers. Connector pinout This table summarises the above information, indexed by parallel port connector pin number. Direction/typePin Signal Register and bit Buffer Normal signal line function (see below)1 -STROBE OC/Pullup Control register bit 0 Inverted Falling edge strobes data byte into printer2 D0 Output Data register bit 0 True Carries bit 0 of data byte to printer3 D1 Output Data register bit 1 True Carries bit 1 of data byte to printer4 D2 Output Data register bit 2 True Carries bit 2 of data byte to printer5 D3 Output Data register bit 3 True Carries bit 3 of data byte to printer6 D4 Output Data register bit 4 True Carries bit 4 of data byte to printer7 D5 Output Data register bit 5 True Carries bit 5 of data byte to printer8 D6 Output Data register bit 6 True Carries bit 6 of data byte to printer9 D7 Output Data register bit 7 True Carries bit 7 of data byte to printer Pulsed low by printer to acknowledge data byte10 -ACK Input Status register bit 6 True Rising (usually) edge causes IRQ if enabled11 BUSY Input Status register bit 7 Inverted High indicates printer cannot accept new data12 NOPAPER Input Status register bit 5 True High indicates printer has run out of paper13 SELECTED Input Status register bit 4 True High indicates printer is selected and active14 -AUTOFEED OC/Pullup Control register bit 1 Inverted Low tells printer to line-feed on each carriage return15 -ERROR Input Status register bit 3 True Pulled low by printer to report an error condition16 -INITIALIZE OC/Pullup Control register bit 2 True Low tells printer to initialize itself17 -SELECT OC/Pullup Control register bit 3 Inverted Low tells printer to be selected18 Ground... Ground Signal ground (pins 18-25 are all commoned)25 GroundElectrical signal characteristics for the three 'direction/type' types are: Input signals are usually pulled up to +5V with a weak pullup (47K or 100K) but not on all ports! Output signals are totem-pole or 'push-pull' outputs - i.e. they pull high and low. Some ports pull low much more strongly than they pull high. Limited current can be drawn from the outputs (typically a few milliamps per output) but the output voltage will drop as current is drawn. OC/Pullup (open collector with pullup) outputs pull low strongly but pull high weakly. When set to electrical high, they can be pulled low externally; therefore they can be used as inputs. See control bits for more details. #include <dos.h>#include <process.h>#include <stdio.h> /* The following function returns the I/O base address of the nominated parallel port. The input value must be 1 to 3. If the return value is zero, the specified port does not exist. */ void main(void) { unsigned int portnum; for (portnum = 1; portnum < 4; ++portnum) report_port_type(portnum); exit(0); }--------------------------- snip snip snip ---------------------------Enhanced Ports The major types of parallel ports are: Name Bidirectional DMA capabilityStandard ('SPP') No NoBidirectional (PS/2) Yes NoEPP (Enhanced Parallel Port) Yes (see below) NoECP (Extended Capabilities Port) Yes (see below) YesThe PS/2 bidirectional port is a standard port with input mode capability, enabled via bit 5 of the control register. The EPP (Enhanced Parallel Port) and ECP (Extended Capabilities Port) are described in the IEEE 1284 standard of 1994, which gives thephysical, I/O and BIOS interfaces. Both are backward-compatible with the original parallel port, and add special modes which includebidirectional data transfer capability. These modes support fast data transfer between computers and printers, and between computers, andsupport multiple printers or other peripherals on the same port. In their enhanced modes, they re-define the control and status lines of theparallel port connector, using it as a slow multiplexed parallel bus. The ECP supports DMA (direct memory access) for automated high-speed data transfer. Links Warp 9 Engineering (commercial) home page - technical information on all port types. PC Gadgets (commercial) catalogue - parallel-port unit to drive stepper motors and monitorswitches. Craig Peacock's Interfacing the PC page - technical information on all port types, links to relevantmaterial, several PC interfacing projects. End of Kris Heidenstrom's PC Parallel Port Mini-FAQ The printer driver, lp is a character special device driver and a parport client. As a character special device driver it registers a structfile_operations using register_chrdev, with pointers filled in for write, ioctl, open and release. As a client of parport, itregisters a struct parport_driver using parport_register_driver, so that parport knows to call lp_attach when a new parallelport is discovered (and lp_detach when it goes away). The parallel port console functionality is also implemented in drivers/char/lp.c, but that won't be covered here (it's quite simplethough). The initialisation of the driver is quite easy to understand (see lp_init). The lp_table is an array of structures that contain informationabout'simplementation of open, write, and so on. This part is the same as for any character special device driver. After successfully registering itself as a character special device driver, the printer driver registers itself as a parport client usingparport_register_driver. It passes a pointer to this structure: The lp_detach function is not very interesting (it does nothing); the interesting bit is lp_attach. What goes on here depends onwhether the user supplied any parameters. The possibilities are: no parameters supplied, in which case the printer driver uses every port thatis detected; the user supplied the parameter "auto", in which case only ports on which the device ID string indicates a printer is present areused; or the user supplied a list of parallel port numbers to try, in which case only those are used. For each port that the printer driver wants to use (see lp_register), it calls parport_register_device and stores the resultingstructhas data that it wants printed, and the printer driver hands it off to the parport code to deal with.The parport functions it uses that we have not seen yet are parport_negotiate, parport_set_timeout, andparport_write. These functions are part of the IEEE 1284 implementation. The way the IEEE 1284 protocol works is that the host tells the peripheral what transfer mode it would like to use, and the peripheral eitheraccepts that mode or rejects it; if the mode is rejected, the host can try again with a different mode. This is the negotation phase. Once theperipheral has accepted a particular transfer mode, data transfer can begin that mode. The particular transfer mode that the printer driver wants to use is named in IEEE 1284 as "compatibility" mode, and the function to requesta particular mode is called parport_negotiate.#include <parport.h> The modes parameter is a symbolic constant representing an IEEE 1284 mode; in this instance, it is IEEE1284_MODE_COMPAT.(Compatibility mode is slightly different to the other modes---rather than being specifically requested, it is the default until another mode isselected.) Back to lp_write then. First, access to the parallel port is secured with parport_claim_or_block. At this point the driver mightsleep, waiting for another driver (perhaps a Zip drive driver, for instance) to let the port go. Next, it goes to compatibility mode usingparport_negotiate. The main work is done in the write-loop. In particular, the line that hands the data over to parport reads: written = parport_write (port, kbuf, copy_size);The parport_write function writes data to the peripheral using the currently selected transfer mode (compatibility mode, in this case). Itreturns the number of bytes successfully written:#include <parport.h> (parport_read does what it sounds like, but only works for modes in which reverse transfer is possible. Of course, parport_writeonly: struct parport_operations { [...] /* Block read/write */ size_t (*epp_write_data) (struct parport *port, const void *buf, size_t len, int flags); size_t (*epp_read_data) (struct parport *port, void *buf, size_t len, int flags); size_t (*epp_write_addr) (struct parport *port, const void *buf, size_t len, int flags); size_t (*epp_read_addr) (struct parport *port, void *buf, size_t len, int flags); The transfer code in parport will tolerate a data transfer stall only for so long, and this timeout can be specified withparport_set_timeout, which returns the previous timeout:#include <parport.h> The next function to look at is the one that allows processes to read from /dev/lp0: lp_read. It's short, like lp_write. Try to read data from the peripheral using reverse nibble mode, until either the user-provided buffer is full or the peripheral indicates that there is no more data. Otherwise, we tried to read data and there was none. If the user opened the device node with the O_NONBLOCK flag, return. Otherwise wait until an interrupt occurs on the port (or a timeout elapses). #ifdef SCCSIDstatic sccsid[] = "@(#) lp.c 1.1 91/03/18 19:50:09";#endif/****************************************************************************** Hardware line printer driver. Optionally, one may retrieve the status of the printer port with lpt_io(STAT). ***************************************************************************/ /**************************************************************************//*&&&&&&&&&&&&&&&&&&&&&&*/#define TEST_2/*&&&&&&&&&&&&&&&&&&&&&&*//**************************************************************************/ #include <bios.h>#include "lp.h" /* function codes *//********************************************************//* these are defined in lp.h and are here for reference#define IN 1#define OUT 2#define INIT 3#define STAT 4#define SELECT 5#define IS_BUSY 6#define IS_ACK 7#define IS_PRESENT 8***********************************************************/ /********************************************************//* subfunction codes for function SELECT Again, these are defined in lp.h#define ASSERT 100#define DEASSERT 101****************************************************/ /*************************************************************************** port architecture. Each lpt port starts at a base address as defined below. Status andcontrol ports are defined off that base. write read=============================================================================Base data to the printer is latched. Read latched data ******************************************************************************//********************************************//* defined in lp.h and are here for ref only#define LPT1 0x3bc#define LPT2 0x378#define LPT3 0x278**********************************************/ #ifdef TEST_1main(){ unsigned status; unsigned lpt_io(); unsigned int i; time_t start_time, end_time; for (i=0;i<50000;i++) { while ( status=lpt_io(LPT1,0,IS_BUSY) ) /* spin while busy */ ; status = lpt_io(LPT1, '*', OUT); if (!(i%1000)) printf("*"); } end_time = time(NULL); printf("\n50,000 chars in %ld seconds or %ld chars/sec\n", end_time-start_time, 50000L / (end_time-start_time) ); exit(0); #endif #ifdef TEST_2/* this version outputs a file to lpt1 */main(argc, argv)int argc;char **argv;{ unsigned status; unsigned lpt_io(); long int i=0L; time_t start_time, end_time; int character; int busy_flag=0; if (argc > 1) { if (freopen(argv[1], "rb", stdin) == (FILE *) NULL) { cprintf("Error, file %s open failed\n", argv[1]); exit(1); } } gotoxy(70,25);cputs(" "); gotoxy(1,24); cprintf("%ld chars in %ld seconds or %ld chars/sec", i, end_time-start_time, i / (end_time-start_time) ); /** The meaning of life and the bits returned in the status byte* NOTE: Important - the sense of all bits are flipped such that* if the bit is set, the condition is asserted.**Bits----------------------------* 7 6 5 4 3 2 1 0* | | | | | | | +-- unused* | | | | | | +------ unused* | | | | | +---------- unused* | | | | +-------------- 1 = i/o error* | | | +------------------ 1 = selected* | | +---------------------- 1 = out of paper* | +-------------------------- 1 = acknowledge* +------------------------------ 1 = not busy**/ unsigned intlpt_io(port,byte,mode) unsigned port; unsigned byte; int mode;{ unsigned i,j,status; long unsigned otime; case OUT: outportb(port,byte); /* send the character to the port latch */ case IN: return(inportb(port)); case SELECT: switch (byte) { case ASSERT: i = inportb(port+2); /* get the control bits */ outportb(port+2, i | 0x8); /* mask bit 3 ON and output */ return ( (inportb(port+1) & 0xf8) ^ 0x48 ); case DEASSERT: i = inportb(port+2); /* get the control bits */ outportb(port+2, i & ~0x8); /* mask bit 3 OFF and output */ return ( (inportb(port+1) & 0xf8) ^ 0x48 ); default: return(~0); /* error */ } case INIT: otime = biostime(0,0L); /* get the timer ticks */ outport(port+2, 0x08); /* set init line low */ default: return(~0); /* error, all bits set */ }} Simple circuit and program to show how to use PC parallel port output capabilities PC parallel port can be very useful I/O channel for connecting your own circuits to PC. The port is very easy to use when you firstunderstand some basic tricks. This document tries to show those tricks in easy to understand way. WARNING: PC parallel port can be damaged quite easily if you make mistakes in the circuits you connect to it. If the parallel portis integrated to the motherboard (like in many new computers) repairing damaged parallel port may be expensive (in many cases itit is cheaper to replace the whole motherborard than repair that port). Safest bet is to buy an inexpensive I/O card which has an extraparallel port and use it for your experiment. If you manage to damage the parallel port on that card, replacing it is easy and inexpensive. DISCLAIMER: Every reasonable care has been taken in producing this information. However, the author can accept noresponsibility for any effect that this information has on your equipment or any results of the use of this information. It is theresponsibly of the end user to determine fitness for use for any particular purpose. The circuits and software shown here are for noncommercial use without consent from the author. PC parallel port is 25 pin D-shaped female connector in the back of the computer. It is normally used for connecting computer to printer, butmany other types of hardware for that port is available today.Not all 25 are needed always. Usually you can easily do with only 8 output pins (data lines) and signal ground. I have presented those pins intare in high logic level (1). In real world the voltages can be something different from ideal when the circuit is loaded. The output currentcapacity of the parallel port is limited to only few milliamperes. Dn Out ------+ |+ Sourcing Load (up to 2.6 mA @ 2.4 v) |- Ground ------+Simple LED driving circuits You can make simple circuit for driving a small led through PC parallel port. The only components needed are one LED and one 470 ohmresistors. You simply connect the diode and resistor in series. The resistors is needed to limit the current taken from parallel port to a valuewhich light up acceptably normal LEDs and is still safe value (not overloading the parallel port chip). In practical case the output current willbethat LED) and another one goes to any of the ground pins. Be sure to fit the circuit so that the LED positive lead (the longer one) goes to thedatapin. If you put the led in the wrong way, it will not light in any condition. You can connect one circuit to each of the parallel port datapins. In this way you get eight software controllable LEDs. The software controlling is easy. When you send out 1 to the datapin where the LED is connected, that LED will light. When you send 0 tothat same pin, the LED will no longer light. Control program The following program is an example how to control parallel port LPT1 data pins from your software. This example directly controls theparallel port registers, so it does not work under some multitasking operating system which does not allow that. It works nicely underMSDOS. You can look the Borland Pascal 7.0 code (should compile also with earlier versions also) and then download the compiledprogram LPTOUT.EXE.Program lpt1_output; Uses Dos; Var addr:word; data:byte; e:integer; Begin addr:=MemW[$0040:$0008]; Val(ParamStr(1),data,e); Port[addr]:=data;End.How to use the program LPTOUT.EXE is very easy to use program. The program takes one parameter, which is the data value to send to the parallel port. That valuemust be integer in decimal format (for example 255). Hexadecimal numbers can also be used, but they must be preceded by $ mark (forexample $FF). The program hoes not have any type of error checking to keep it simple. If your number is not in correct format, the programwill send some strange value to the port. LPTOUT 255Set all datapins to high level. LPTOUT 1Set datapin D0 to high level and all other datapins to low level. You have to think the value you give to the program as a binary number. Every bit of the binary number control one output bit. Thefollowing table describes the relation of the bits, parallel port output pins and the value of those bits.Pin 2 3 4 5 6 7 8 9Bit D0 D1 D2 D3 D4 D5 D6 D7Value 1 2 4 8 16 32 64 128For example if you want to set pins 2 and 3 to logic 1 (led on) then you have to output value 1+2=3. If you want to set on pins 3,5 and 6 thenyou need to output value 2+8+16=26. In this way you can calculate the value for any bit combination you want to output.Making changes to source code You can easily change te parallel port number int the source code by just changing the memory address where the program read the parallelportLPT2.Using other languages The following examples are short code examples how to write to I/O ports using different languages. In the examples I have used I/O address378h which is one of the addresses where parallel port can be. The following examples are useful in DOS. AssemblerMOV DX,0378HMOV AL,nOUT DX,ALWhere n is the data you want to output.BASICOUT &H378, NWhere N is the number you want to output.C outp(0x378,n);oroutportb(0x378,n);Where N is the data you want to output. The actual I/O port controlling command varies from compiler to compiler because it is not part ofstandardized C libraries.Here is an example source code for Borland C++ 3.1 compiler:#include <stdio.h>#include <dos.h>#include <conio.h> /********************************************//*This program set the parallel port outputs*//********************************************/ Direct port controlling from application is not possible under Windows NT and to be ale to control the parallel port directly you will need towrite> outb(value, base);}Save the source code to file lpt_test.c and compile it with command:gcc -O lpt_test.c -o lpt_testThe user has to have the previledges to have access to the ports for the program to run, so you have to be root to be able to ron this kind ofprograms without access problems. If you want to make a program which can be run by anybody then you have to first set the owner of theprogram to be root (for example do compilation when yhou are root), give the users rights to execute the program and then set the program tobe always executed with owner (root) rights instead of the right of the user who runs it. You can set the programn to be run on owner rightsby using following command:chmod +s lpt_testIf you want a more useful program, then download my lptout.c parallel port controlling program source code. That program works so thatyou can give the data to send to parallel port as the command line argument (both decimal and hexadecimal numbers supported) to thatprogram and it will then output that value to parallel port. You can compile the source code to lptout command using the following line to dothe compilation:gcc -O lptout.c -o lptoutAfterport. The programming can be done in exactly the same way as told in my examples. The following circuit is the simples interface you can use to control relay from parallel port: Vcc | +------+ | __|__ Relay /^\ Diode 1N4002 Coil /---\ | | +------+ | | / 4.7K B |/ Cparallel port >-\/\/\/\/---| NPN Transistor: BC547A or 2N2222Adata pi |\ E | V |parallel port >--------------+ground pin | GroundThe circuit can handle relays which take currents up to 100 mA and operate at 24V or less. The circuit need external power supply which hasthe output voltage which is right for controlling the relay (5..24V depending on relay). The transistor does the switching of current and thediode prevent spikes from the relay coil form damaging your computer (if you leave the diode out, then the transistor and your computer canbe damaged).Since coils (solenoids and relay coils) have a large amount of inductance, when they are released (when the current is cut off) they generate averyof failure for the sink transistor might be short circuit, and consequently you would have the solenoid tap shorted to ground indefinitely. The circuit can be also used for controlling other small loads like powerful LEDS, lamps and small DC motors. Keep in mind that thosedevices you plan to control directly from the transistor must take less than 100 mA current. WARNING: Check and double check the circuit before connecting it to your PC. Using wrong type or damaged components can cause youparparallel port can occur because of high voltage inductive kickback from the relay coil (that diode stops that spike from occuring), The circuit example above works well and when transistor is of correct type and working properly. If for some reason B and C sould beshortedparallel >-|>|-+--\/\/\/--| NPN Transistor: BC547A or 2N2222Aport data | |\ Epin +-|<|-+ | V 1N4148 | |parallel >-----------+------+port ground | Ground Adding even more safety idea: Repalce the 1N4148 diode connected to ground with 5.1V zener diode. That diode will then protect againstovervoltage spikes and negative voltage at the same time.Bad circuit example I don't know WHY I see newbies who don't THINK electronics very well yet always putting the relay "AFTER" the transistor, as if that wassomething important. Well it's NOT, and in fact its a BAD PRACTICE if you want the parallel port to work well! This type of bad circuitdesigns have been posted to the usenet electronics newsgroups very often. The following circuit is example of this type of bad circuit design(do not try to build it): Vcc | | / 4.7K B |/ Cparallel port---\/\/\/\/---| NPN Transistor: BC547A or 2N2222A |\ E | V | +------+ | __|__ Relay /^\ Diode 1N4002 Coil /---\ | | +------+ | GroundTypicalfrom external power supply which is not connected to PC if there is no need for that. This arrangement prevents any currents on the externalcircuits from damaging the parallel port. The opto-isolator's input is a light emitting diode.R1 is used to limit the current when the output from the port is on. That 1kohm resistorlimitsoutputand the transistor turns off. When transistor is off no current flows into the relay, so it switches off. The diode provides an outlet for theenergy stored in the coil, preventing the relay from backfeeding the circuit in an undesired manner. The circuit can be used for controlling output loads to maximum of around 100 mA (depends somewhat on components and operationvoltage used). The external power supply can be in 5V to 24V range.In this circuit Q1 is used for controlling the base current of Q2 which controls the actual current. You can select almost any general purposepower transistor for this circuit which matches your current and voltage controlling needs. Some example alternatives are for exampleTIP41C (6A 100V) or 2N3055 (100V 15A). Depending your amplification facter inherint to the transitor Q2 you might not hough be able touse the full current capability of the output device T2 before there will be excessive losses (heating) in that transistor.This circuit is basically very simple modification of the original optoisolator circuit with one transistor. The difference in this circuit is thathere T2 controls the load current and Q1 acts as a current amplifier for T2 base control current. Optoisolator, R1, R2, Q1, D1 work exactly inthe same way as in one transistor circuit described eariler in this documents. R3 acts like an extra resistor which guarantees that T2 does notconduct when there is no signal fed to the optoisolator (small possible current leaking on optosiolator output does not make T1 and T2 toconduct). PC parallel port has 5 input pins. The input pins can be read from the I/O address LPT port base address + 1. The meaning of the buts in byte you read from that I/O port: Use of a PC Printer Port for Control and Data Acquisition - described also data input This device started out as a personal interest in building a device to turn lights on and off. It eventually developed into quite a big project,and was expanded to allow different types of devices to be plugged into a common bus. The actions of the bus are controlled through abidirectional parallel port, which allows 8 bits of data output or input, 5 status lines, and 4 control lines. The parallel port control lines areused to drive an Intel i8255 Parallel Peripheral Interface, which is really just a really nice multiplexer. It allows me to interface 24 bus linesto the limited number of lines on the parallel port. See the schematics for the hardware schematics, wiring details, and protocol specs. I have also written a device driver to drive the bus so that the logic required to place signals on the bus and read data from the bus would betransparent to the user. This device driver is written for Linux and can be compiled in statically or as a loadable kernel module. Feel free tolook through the source if you're interested. It's pretty well commented, even if it's not the most complete. Finally, since this project evolved into the final project for my Microcomputer Architecture Lab, I wrote a paper on my experiences withbuilding this. I found that it really was quite involved, and so the paper is very long. I don't blame you if you don't read the whole thing, butif you're interested please do! Sebastian Kuzminsky and I are jointly working on the third revision bus, in which we hope to remove the bus control logic (the PPI), andwork with only the parallel port lines. There will be 4 addressable devices, each able to process write requests, read requests, and interruptthe CPU for data input. A Linux device driver is soon on the way, as well as schematics and bus specification data, too! If you're curious about design considerations or generally in the project, feel free to send me mail at boggs@cs.colorado.edu. I am interestedin hearing your comments! The "standard" transfer modes in use over the parallel port are "defined" by a document called IEEE 1284. It really just codifies existingpractice and documents protocols (and variations on protocols) that have been in common use for quite some time. The original definitions of which pin did what were set out by Centronics Data Computer Corporation, but only the printer-side interfacesignals were specified. By the early 1980s, IBM's host-side implementation had become the most widely used. New printers emerged that claimed Centronicscompatibility, but although compatible with Centronics they differed from one another in a number of ways. As a result of this, when IEEE 1284 was published in 1994, all that it could really do was document the various protocols that are used forprinters (there are about six variations on a theme). In addition to the protocol used to talk to Centronics-compatible printers, IEEE 1284 defined other protocols that are used for unidirectionalperipheral-to-host transfers (reverse nibble and reverse byte) and for fast bidirectional transfers (ECP and EPP). link to the Technology Interface the Electronic Journal for Engineering Technologythe Technology Interface /Fall 96 by Peter H. Anderson pha@eng.morgan.edu Department of Electrical Engineering Morgan State University Abstract: A PC printer port is an inexpensive and yet powerful platform for implementing projects dealing with the control of real worldperipherals. The printer port provides eight TTL outputs, five inputs and four bidirectional leads and it provides a very simple means to usethecontributions and to New Mexico State University student Kyle Quinnell for preparing the html file. A. Port AssignmentsEach printer port consists of three port addresses; data, status and control port. These addresses are in sequential order. That is, if the dataport is at address 0x0378, the corresponding status port is at 0x0379 and the control port is at 0x037a. To definitively identify the assignments for a particular machine, use the DOS debug program to display memory locations 0040:0008. Forexample: >debug -d 0040:0008 L8 0040:0008 78 03 78 02 00 00 00 00Note in the example that LPT1 is at 0x0378, LPT2 at 0x0278 and LPT3 and LPT4 are not assigned. B. Outputs Please refer to the figures titled Figure #1 - Pin Assignments and Figure #2 - Port Assignments. These two figures illustrate the pinassignments on the 25 pin connector and the bit assignments on the three ports.is selected. The original function of INIT was to initialize the printer, AUTO FEED to advance the paper. In normal printing, STROBE ishigh. alogic zero on the corresponding output. This adds some complexity in using the printer port, but the fix is to simply invert those bits using theexclusive OR function prior to outputting. [One might ask why the designers of the printer port designed the port in this manner. Assume you have a printer with no cable attached. Anopen usually is read as a logic one. Thus, if a logic one on the SELECT_IN, AUTOFEED and STROBE leads meant to take the appropriateaction, an unconnected printer would assume it was selected, go into the autofeed mode and assume there was data on the outputs associatedwith the Data Port. The printer would be going crazy when in fact it wasn't even connected. Thus, the designers used inverted logic. A zeroforces the appropriate action.] Returning to the discussion of the Control Port, assume you have a value val1 which is to be output on the Data port and a value val2 on theControl port: For example; if I intended to output 1 0 0 0 on the lower nibble and did not do the inversion, the hardware would invert bit 3, leave bit 2 astrue and invert bits 1 and 0. The result, appearing on the output would then be 0 0 1 1 which is about as far from what was desired as onecould get. By using the exclusive-or function, 1 0 0 0 is actually sent to the port as 0 0 1 1. The hardware then inverts bits 3, 1 and 0 and theoutput is then the desired 1 0 0 0. C. Inputsindicates to the PC that the printer is busy or out of paper. A low wink on /ACK indicates the printer received something. A low on ERRORindicates the printer is in an error condition.] These inputs are fetched by reading the five most significant bits of the status port. However, the original designers of the printer interface circuitry, inverted the bit associated with the BSY using hardware. That is, when azero is present on input BSY, the bit will actually be read as a logic one. Normally, you will want to use "true" logic, and thus you will wanttoresult is then shifted such that the upper five bits are in the lower five bit positions.inverted by the hardware, but this is easily handled by using the exclusive-or function to selectively invert bits. D. Simple Example Refer to the figure titled Figure #3 - Typical Application showing a normally open push button switch being read on the BUSY input (StatusPort,and ground (logic 0) is applied to input Busy. Program Description: When idle, push-button is open and LED is off. On depressing push-button, LED blinks on and off at nominally 5pulses per second. E. Test Circuitry Refer to the figure titled Figure #4 - Printer Port Test Circuitry. This illustrates a very simple test fixture to allow you to figure out whatinversions are taking place in the hardware associated with the printer port. Program test_prt.c sequentially turns each of the 12 LED's on andthen */ void main(void){ int in, n; /* now turn off each LED on Data Port in turn by positioning a logic one in each bit position and outputing. */ for (n=7; n>=0; n++) { outportb(DATA, 0x01 << n); delay(1000); } outportb(DATA, 0x00); outportb(CONTROL, 0x00); while(1) { in = (inportb(STATUS)^0x80)&0xf8; /* Note that BUSY (msbit) is inverted and only the ** five most significant bits on the Status Port are displayed. */ printf("%x\n", in); } } F. Interruptsto momentarily stop what it is doing, and jump to a function to do what you desire. When this is complete, your program returns to where itlefttemperature. But, when input ACK goes low, it interrupts your temperature monitoring and goes to some other code you have written tohandle an alarm. Perhaps, to fetch the time and date off the system clock and write this to an Intrusion File. When done, your programcontinues monitoring temperature. Frequently, when outputting, the programmer is interested in only a portion of a byte and it is a burden to remember what all the other bitsare.then: A few words about the DIRECTION bit on the Control Port. I have seen PC's where this bit may be set to a logic "one" which turns aroundthe Data Port such that all of the Data leads are inputs. I have also seen PC's where this worked for only the lower nibble of the Data Port andotherthe programmers who write such programs as WordPerfect do not get down to this low level of hardware detail. Rather, they write tointerface with the PC's BIOS. The BIOS (Basic Input-Output System) is a ROM built in to the PC which makes all PC's appear the same. This is a pretty nice way for eachvendor to implement their design with a degree of flexibility. An example is the port assignments discussed above. This data is read from the BIOS ROM when your PC boots up and written to memorylocations beginning at 0040:0008. Thus, the designers of WordPerfect don't worry about the port assignments. Rather, they read theappropriate memory location. In the same way, they interface with the BIOS for printing. For example, if the designers want to print a character, the AH register is set tozero, the character to be printed is loaded into AL, the port (LPT1, LPT2, etc) is loaded into the DX register. They then execute a BIOS INT17h. Program control is then passed to the BIOS and which performs at the low level of hardware design which we are trying to work. TheBIOS varies from one hardware design to another; it's purpose is to work with the hardware. If inversions are necessary, it is done in theBIOS. When the BIOS has completed whatever bit sequencing is required to write the character to the printer, control is passed back to theprogram with status information in the AH register. J. Summary In summary, the printer port affords a very simple technique for interfacing with external circuitry. Twelve output bits are available, eight onthe Data Port and four on the lower nibble of the Control Port. Inversions are necessary on three of the bits on the Control Port. Five inputsare available on the Status Port. One software inversion is necessary when reading these bits. A. Introduction This section describes how to use hardware interrupts using the printer port. The discussion closely follows programs prnt_int.c andtime_int.c A hardware interrupt is a capability where a hardware event causes the software to stop whatever it is doing and to be redirected to a functiontointerrupts directly on the ISA bus. When an interrupt occurs, the PC must know where to go to handle the interrupt. The original 8088 PC design provided for up to 256 interrupts (0x00 - 0xff). This includes both hardware and software interrupts. Each ofthesebegins at 0x0024, etc. This 1024 bytes (256x4) is termed the interrupt vector table. These four bytes contain the address of where the PC is to go to when an interrupt occurs. Most of the table is loaded when you boot up themachine. The table may be added to or entries modified when you run various applications. IBM reserved eight hardware interrupts beginning at INT 0x08 for interrupt expansion. These are commonly known as IRQ0 - IRQ7, theIRQ corresponding to the lead designations associated with the Intel 8259 which was used to control these interrupts. Thus, IRQ 0corresponds Thus, when an IRQ 7 interrupt occurs, we know this corresponds to INT 0x0f and the address of the interrupt service routine is located at0070:06F4. Exercise. Use the debugger to examine the interrupt vector table. Then use Microsoft Diagnostics (MSD) and examine the IRQ addresses andcompare the two. Assume, you are going to use IRQ 7. Assume that when an IRQ 7 interrupt occurs, you desire your program to proceed to functionirq7_int_serv, a function which you wrote. In order to do so, you must first modify the interrupt handler table. Of course, you may wish tocarefully take what is already there in the table and save it somewhere and then when you leave your program, put the old value back. int intlev=0x0f; Good programming dictates that once you are done with your program, you would restore the entry to what it was; setvect(intlev, oldfunc);After all, what would you think of WordPerfect, if after running it, you couldn't use your modem without rebooting. D. Maskingdisturb any of the other bits. outportb(0x20, 0x20);Prior to exiting the program, the user should return the system to its original state; setting bit 7 of the interrupt mask to logic one andrestoring the interrupt vector. mask=inportb(0x21) | 0x80; outportb(0x21, mask); setvect(intlev, oldfunc); E. Interrupt Service Routine In theory, you should be able to do anything in your interrupt service routine (ISR). For example, an interrupt might be forced by externalhardware detecting an intrusion. The ISR might fetch the time off the system clock, open a file and write the time and other information tothe file, close the file and then return to the main program.In fact, I have not had good luck in doing this and you will note that my interrupt service routines are limited; set a variable such that in returning to the main program there is an indication that an interrupt occurred. enable interrupts. I think that my problem is that interrupts are turned off during the entire ISR which may well preclude a C function which may useinterrupts. For example, in opening a file, I assume interrupts are used by Turbo C to interface with the disk drive. Unlike the IRQ we arediscussing, the actual implementation of how C handles these interrupts necessary to implement a C function will not be "heard" by the PCand the program will appear to bomb. My suggestion is that you initially use the technique I have used in writing your interrupt service routine. That is, very simple; either settingor incrementing a variable. However, recognize that this is barely scratching the surface. Then you might try a more complex ISR of the following form. At the time of this writing I have not tried this. outportb(0x20,0x20). Note the difference from the previous. Any further IRQ 7 interrupts are blocked while in the ISR, but in the middle of the ISR, all otherinterrupts are enabled. This should permit all C functions to work. Recall that there are three ports associated with the control of a printer port; Data, Status and Control. Bit 4 of the Control Port is a PCoutput;the IRQ input does). Thus, in addition to setting the mask to entertain interrupts from IRQ 7 as discussed above, you must also set IRQ Enable to a logic one. Prior to exiting from your program, it is good practice to leave things tidy. That is, set Bit 4 back to a zero. G. Programs Program PRNT_INT.C simply causes a screen message to indicate an interrupt has occurred. Note that global variable "int_occurred" is setto false in the declaration. On interrupt, this is set to true. Thus, the code in main within the if(int_occurred) is only executed if a hardwareinterrupt did indeed occur. Program TIME_INT.C is the same except for main. When the first interrupt occurs, the time is fetched off the system clock. Otherwise thenew time is fetched and the difference is calculated and displayed. /*** Program PRNT_INT.C** #include <stdio.h>#include <bios.h>#include <dos.h> #define TRUE 1#define FALSE 0 void open_intserv(void);void close_intserv(void);void int_processed(void);void interrupt far intserv(void); int main(void){ open_intserv(); outportb(CONTROL, inportb(CONTROL) | 0x10); /* set bit 4 on control port to logic one */ while(1) { if (int_occurred) { printf("Interrupt Occurred\n"); int_occurred=FALSE; } }* #include <stdio.h>#include <bios.h>#include <dos.h>#include <sys\timeb.h>); } The standard PC printer port is handy for testing and controlling devices. It provides an easy way toimplement a small amount of digital I/O. I like to use to during initial development of a product -- beforethe "real" hardware is ready, I can dummy up a circuit using the printer port, and thus get startedtesting my software.This source code module provides the low-level control of the port, implementing code to control 12outputs and read 5 inputs. This code was written for Borland C/C++ v3.1, but you should be able to adapt it for other compilers.You can view the source code online, or download an archive (prn_io.zip) that contains PRN_IO.Cand PRN_IO.H. To use the module in your program, simply #include PRN_IO.H from wherever youneed to call the functions, and compile and link PRN_IO.C into your program. This code assumes that the port is configured as a "standard" or "normal" port; configuring the port for EPP or ECP modes may or may not work In the beginning, the parallel port on a PC was used only as an output port for a printer.However, several companies started to use is as a port to connect other devices, such asscanners or external storage devices. In such devices, you need to use the port also for input,to send data from the device to the computer. The original printer ports had only 8-bit outputcapability. However, there were two methods to implement the input capability to the portoriginally designed for output only.The use of the control signal lines There are several lines used to carry the control signalsfrom the printer to the computer, such as Error, Select, Paper Empty or Busy. You can usethese lines to transfer 4-bit pieces of data. A byte (8 bits) of data is transferred in two"nibbles" (4 bits), from the external device to the computer. This is also referred to as Nibblemode.The introduction of an I/O controller with the bi-directional 8-bit data port. This was ahardware change. This mode is referred to as Byte mode or Bi-directional mode. Bi-directional (byte) mode is faster than nibble mode. However there were needs for fastertransfer and two other methods were created.The EPP (Enhanced Parallel Port) was developed by Intel, Xircom, Zenith Data System. Itlets the I/O controller take care of the handshake between the computer and the peripheral andthus frees the CPU from having to check the I/O port status every time it sends or receives apiece of data. This speeds up data transfer.ECP (Extended Capacity Port) was developed by Hewlett-Packard and Microsoft. Itintroduced the data compression, FIFO buffer and other sophisticated features to the paralleldata transfer. All the features above were defined in the IEEE standard 1284-1994 "StandardSignaling Method for a Bi-directional Parallel Peripheral Interface for Personal Computers"You can find a very good introduction to this standard at: .com/ieee1284.htm UniversalFAST Cable - 10 ft Full $69.95 DirectParallel Connection Cable Specs (Virtua Buy More lly It ! ! "# Info FREE) (2) PCs Basic 4-BIT Cable - 10 ft Full $19.95 DirectParallel Connection Cable Specs (Virtua Buy More lly It Info FREE) (2) PCs ! ! "# DirectParallel Universal FAST Cable DirectParallel Basic 4-Bit Cable Share Your Internet Connection for only $20 Slower than the Universal Fast Cable but also works on all PCs Typical Data Transfer Rates of 40 - 70 Kbytes/sec. (2.4 MB - 4.2 MB/minute) Standard Length is 10 ft Printer Mode is the most basic mode. It is a Standard Parallel Port in forward mode only. It has no bi-directional feature, thus Bit 5 of the Control Port will not respond. Standard & Bi-directional (SPP) Modeis the bi-directional mode. Using this mode, bit 5 of the Control Port will reverse the direction of the port,so you can read back a value on the data lines. In conclusione tutti i dati (sia pure con qualche variante sul significato di SPP) concordao sulle seguenti informazioni: La porta PARALLELA PARALLELA solo in uscita nel senso che SCRIVE 8 bit alla volta, ma nella prima versione non aveva nessunapossibilit di LEGGERE SPP/EPP/ECPThe original specification for parallel ports was unidirectional, meaning that data only traveled in onedirection for each pin. With the introduction of the PS/2 in 1987, IBM offered a new bidirectionalparallel port design. This mode is commonly known as Standard Parallel Port (SPP) and hascompletely replaced the original design. Bidirectional communication allows each device to receivedata as well as transmit it. Many devices use the eight pins (2-9) originally designated for data. Usingthe same eight pins limits communication to half-duplex, meaning that information can only travel inone direction at a time. But pins 18K to 2 MB, to be transferred each second. It wastargeted specifically towards non-printer devices that would attach to the parallel port,particularly storage devices that needed the highest possible transfer rate. Close on the heels of the introduction of EPP, Microsoft and Hewlett Packard jointlyannounced a specification called Extended Capabilities Port (ECP) in 1992. WhileEPP was geared towards other devices, ECP was designed to provide improvedspeed and functionality for printers.In 1994, the IEEE 1284 standard was released. It included the two specifications forparallel port devices, EPP and ECP. In order for them to work, both the operatingsystem and the device must support the required specification. This is seldom aproblem today since most computers sold support SPP, ECP and EPP and willdetect which mode needs to be used, depending on the attached device. If you needto manually select a mode, you can do so through the BIOS on most computers. The Standard Parallel Port (SPP) mode of the PC is guaranteed to work with all PC chipsets. No setup is required to enter this mode. In this mode there are 12 outputs (/STROBE, D0-D7, /AUTOFEEDXT, /INIT and /SELECT IN), and 5 inputs (/ACK, BUSY, PAPER END, SELECT, /ERROR). Some chipsets do not allow D0-D7 to be read when in this mode. D0-D7 can be written through the "data port" which is at offset 0 from the base of the parallel port registers. Bits 0-7 correspond to D0-D7. /ERROR, PAPER END, SELECT, /ACK and BUSY can be read through the "status port" which is at offset 1 from the base of the parallel port registers. Signal "status port" bit /ERROR 3 SELECT 4 PAPER END 5 /ACK 6 BUSY 7 /STROBE, /AUTOFEED, /INIT and /SELECT IN can be written through the "control port" which is at offset 2 from the base of the parallel port registers. Signal "status port" bit /STROBE 0 /AUTOFEEDXT 1 /INIT 2 /SELECT IN 3 LapLink/InterLink. ParNet. (To Computer 1). Microsoft Windows 98 Microsoft Windows 95 SUMMARYYou can use the Direct Cable Connection tool to establish a direct serial or parallel cableconnection between two computers. Windows supports serial null-modem standard (RS-232) cables and the following parallel cables for use with Direct Cable Connection: MORE INFORMATIONECP cables work on computers with ECP-enabled parallel ports. ECP must be enabled inboth computers' CMOS settings for parallel ports that support this feature. ECP cablesallow data to be transferred more quickly than standard cables. Note that bothcomputers must support ECP in order to use ECP cables. UCM cables support connecting different types of parallel ports. Using a UCM cablebetween two ECP-enabled ports allows the fastest possible data transfer between twocomputers. cables that can beused with Direct Cable Connection. To make a parallel InterLink cable, make a parallelcable with male DB-25 connectors at both ends, and wire the cable as follows: 25-pin 25-pin Description ------------------------------------------ pin 2 <------> pin 15 N/A pin 3 <------> pin 13 N/A pin 4 <------> pin 12 N/A pin 5 <------> pin 10 N/A pin 6 <------> pin 11 N/A pin 15 <------> pin 2 N/A pin 13 <------> pin 3 N/A pin 12 <------> pin 4 N/A pin 10 <------> pin 5 N/A pin 11 <------> pin 6 N/A pin 25 <------> pin 25 Ground-Ground Linux Networking HOWTOPrev Chapter 13. Cables and Cabling Next Pin assignment Printer cable Interlink cable Windows 95/98 DCC Test plugs Related sites The parallel port socket on your computer uses 25 pins. On most peripherals, the 36 pins Centronicsversion is used. Both connector pinouts are shown here. Most printers are connected to a computer using a cable with a 25 pins DB male connector at one sideand a 36 pins Centronics connector on the other. The normal way to make such a cable is shownhere. The following cable can be used with file transfer and network programs like LapLink and InterLink.The cable uses the parallel port which makes it possible to achieve higher throughput than with aserial connection at the same low costs. The cable is at least compatible with the following software. If you are seeking to buy a Parallel port Laplink cable, or trying to make your own cable,you should know what pins need to be switched in order to make it. Below is a chart ofwhat pins go to what on the other end. Only 18 pins are used in a Laplink Cable,therefore I will only show those eighteen here. Chart#5 DCC Parallel Laplink Cable Pinouts. Male DB-25 ==>> Male DB-25 1 Both Not used 2 to 15 3 to 13 4 to 12 5 to 10 6 to 11 7 Both Not used 8 Both Not used 9 Both Not used 10 to 5 11 to 6 12 to 4 13 to 3 14 Both Not used 15 to 2 16 Both Not used 17 to 19 18 to 18 19 to 17 20 Both Not used 21 to 21 22 to 22 23 to 23 24 Both Not used 25 to 25 Pinbody* to Pinbody * = In my cable one wire was attached to the metal body of the Male pins on both sides.Total 18 wired cable is necessary for this cable including one wire for Body of the pin too. SPEED: Parallel port Laplink cable is little faster than Serial port Cable because of more numbers of cores of wires used in Parallel port cable (25 pin) than Serial port Cable (9 pins). The expected speed is 2000kbytes/second but it is extremely dependent on the different quality chipset structure of Parallel Ports on different makes of the Motherboards. Some even reported me the lowest speed of 60kb/sec even though all other settings are correct. Its recommended that you setup LPT1 mode as only "ECP/EPP" or "ECP" mode in bios to get better speed, and not the "Normal" (4bit/8bit) modes. The latest tests done by me on modern motherboard proved that serial port transfers are equal or little slower than parallel ports.Here's parallel information. BTW, I've found that parallel lap-linkcables are often called "Turbo LapLink" cables. 1 - 12 - 153 - 134 - 125 - 106 - 117 nc8 nc9 nc10 - 511 - 612 - 413 - 314 - 1415 - 216 - 1617 - 1718 nc19 nc20 nc21 nc22 nc23 nc24 nc25 - 25 (ground) 0 | 1--------------------------------+ 0 | 00000 | 01011 | Upper set of bits is the value read 0 | 00000 | 11011 | on the left side. Lower set of bits--------------------------------+ is the value read on the top side. 1 | 11011 | 11111 | 1 | 01011 | 11111 |--------------------------------+ 1 - 1 2 - 15 3 - 13 4 - 12 5 - 10 6 - 11 7 nc 8 nc 9 nc 10 - 5 11 - 6 12 - 4 13 - 3 14 - 14 15 - 2 16 - 16 17 - 17 18 nc 19 nc 20 nc 21 nc 22 nc 23 nc 24 nc 25 - 25 .
https://ru.scribd.com/document/33307524/Paralel-Port
CC-MAIN-2019-47
refinedweb
15,350
60.85
IRC log of tpac on 2009-11-04 Timestamps are in UTC. 16:33:33 [RRSAgent] RRSAgent has joined #tpac 16:33:33 [RRSAgent] logging to 16:33:58 [Ralph] Meeting: W3C Technical Plenary 16:34:04 [Ralph] rrsagent, please make record public 16:34:20 [Ralph] zakim, call salon_1 16:34:20 [Zakim] ok, Ralph; the call is being made 16:34:21 [Zakim] W3C_TP(*)11:30AM has now started 16:34:23 [Zakim] +Salon_1 16:34:31 [Ralph] zakim, salon_1 is MeetingRoom 16:34:31 [Zakim] +MeetingRoom; got it 16:36:03 [Zakim] -MeetingRoom 16:36:05 [Zakim] W3C_TP(*)11:30AM has ended 16:36:05 [Zakim] Attendees were MeetingRoom 16:40:31 [raman] raman has joined #tpac 16:41:36 [mauro] mauro has joined #tpac 16:42:26 [LeeF] LeeF has joined #tpac 16:43:07 [Norm] Norm has joined #tpac 16:43:14 [Julian] Julian has joined #tpac 16:43:21 [marengo] marengo has joined #tpac 16:43:34 [cardona507] cardona507 has joined #tpac 16:43:53 [raman] morning all from the room! 16:44:02 [nord_c] nord_c has joined #tpac 16:44:25 [holstege2] holstege2 has joined #tpac 16:44:27 [TabAtkins] TabAtkins has joined #tpac 16:44:29 [dom] dom has joined #tpac 16:44:32 [mauro] mauro has changed the topic to: Technical Plenary Day agenda 16:44:43 [kford] kford has joined #tpac 16:45:06 [Kai] Kai has joined #tpac 16:45:09 [JonathanJ] JonathanJ has joined #TPAC 16:45:34 [matt] matt has joined #tpac 16:45:46 [cardona507] good morning everyone 16:45:48 [Bert] Bert has joined #tpac 16:45:52 [Zakim] W3C_TP(*)11:30AM has now started 16:45:53 [Zakim] +Salon_1 16:45:59 [Liam] Liam has joined #tpac 16:46:18 [burn] burn has joined #tpac 16:46:20 [masinter] masinter has joined #tpac 16:46:28 [Zakim] +Ralph 16:46:31 [unl] unl has joined #tpac 16:46:33 [lbolstad] lbolstad has joined #tpac 16:46:37 [adrianba] adrianba has joined #tpac 16:46:38 [jeanne] jeanne has joined #tpac 16:46:39 [mauro] Topic: Welcome to TPAC 09 (from Tim) 16:46:47 [wiecha] wiecha has joined #tpac 16:47:01 [vincent] vincent has joined #TPAC 16:47:16 [soonho] soonho has joined #tpac 16:47:17 [Arron] Arron has joined #tpac 16:47:28 [sylvaing] sylvaing has joined #tpac 16:47:29 [Zakim] -Ralph 16:47:36 [Zakim] + +46.7.06.02.aaaa 16:47:37 [wiecha] zakim, code? 16:47:37 [Zakim] the conference code is hidden, wiecha 16:47:53 [youenn] youenn has joined #tpac 16:48:07 [IanJ] IanJ has joined #tpac 16:48:07 [nick] nick has joined #tpac 16:48:09 [Vladimir] Vladimir has joined #tpac 16:48:19 [mauro] Chair: Ralph 16:48:33 [Kangchan] Kangchan has joined #tpac 16:48:38 [jun] jun has joined #tpac 16:48:39 [kohei] kohei has joined #TPAC 16:48:42 [IanJ] scribe: Ian 16:48:47 [IanJ] scribe: IanJ 16:48:53 [mauro] [ Tim welcomes everybody ] 16:48:55 [Zakim] + +1.408.644.aabb 16:48:57 [Yves] Yves has joined #tpac 16:49:04 [SCain] SCain has joined #tpac 16:49:05 [IanJ] Topic: Decentralized Extensibility in HTML5 16:49:25 [frankolivier] frankolivier has joined #tpac 16:49:40 [IanJ] Henry: Welcome to a debate, intended to be educational. Structured to bring out the details, complexity, and richness of the problem space we label "decentralized extensibility" 16:49:41 [timbl] timbl has joined #tpac 16:50:03 [jmorris] jmorris has joined #tpac 16:50:20 [IanJ] Noah's slides: 16:50:20 [Zakim] - +1.408.644.aabb 16:50:21 [Steven] Steven has joined #tpac 16:50:26 [MikeSmith] MikeSmith has joined #tpac 16:50:29 [Roger] Roger has joined #tpac 16:50:32 [rlewis3] rlewis3 has joined #tpac 16:50:44 [DanC] DanC has joined #tpac 16:50:50 [DKA] DKA has joined #tpac 16:50:51 [MichaelC] MichaelC has joined #tpac 16:50:57 [dezell] dezell has joined #tpac 16:51:00 [fabrice] fabrice has joined #tpac 16:51:08 [wbailer] wbailer has joined #tpac 16:51:10 [IanJ] Noah: My job today is to bring everyone here up to speed on why this is important, why it's hard, and some background on some particular details. 16:51:13 [howard] howard has joined #tpac 16:51:18 [caribou] caribou has joined #tpac 16:51:23 [Ingmar] Ingmar has joined #tpac 16:51:25 [rigo] rigo has joined #tpac 16:51:25 [BryanSullivan] BryanSullivan has joined #TPAC 16:51:33 [andrew] andrew has joined #tpac 16:51:36 [Magnus] Magnus has joined #tpac 16:51:37 [hbj] hbj has joined #tpac 16:51:42 [DanC] -> noah's presentation materials 16:51:42 [patrick] patrick has joined #TPAC 16:51:43 [IanJ] Noah: HTML is the most important doc format on the Web, and quite possibly the most important doc format in the world. 16:51:48 [shiki] shiki has joined #tpac 16:51:53 [IanJ] Noah: We are debating who gets to say what is in HTML. 16:51:54 [shepazu] shepazu has joined #tpac 16:51:58 [marie] marie has joined #tpac 16:52:00 [ArtB] ArtB has joined #tpac 16:52:03 [IanJ] Noah: This also says a lot about who we are as a community. 16:52:04 [marie] [Noah's slides are also linked from ] 16:52:09 [pbaggia2] pbaggia2 has joined #tpac 16:52:14 [IanJ] [Noah points out that he's not representing IBM or the TAG, just here to help!] 16:52:21 [Rotan] Rotan has joined #tpac 16:52:44 [rubys] rubys has joined #tpac 16:52:47 [FabGandon] FabGandon has joined #tpac 16:52:49 [IanJ] Noah's definition of decentralized extensibility: 16:53:11 [pbaggia] pbaggia has joined #tpac 16:53:13 [darobin] darobin has joined #tpac 16:53:13 [IanJ] "The ability for a language to be extended by multiple parties who do not explicitly coordinate with each other." 16:53:15 [ted] ted has joined #tpac 16:53:24 [Claes] Claes has joined #tpac 16:53:32 [IanJ] Slide 5: What sorts of extensions/ 16:53:42 [rkuntsch] rkuntsch has joined #tpac 16:53:45 [IanJ] Noah: elements, attributes, data values. 16:53:58 [IanJ] Noah: There are potentially lots of extensions people do for lots of reasons. 16:53:59 [dchiba] dchiba has joined #tpac 16:54:00 [glazou] glazou has joined #tpac 16:54:02 [fjh] fjh has joined #tpac 16:54:11 [dond] dond has joined #tpac 16:54:17 [IanJ] Noah: First, why some people are passionate about the importance of decentralized extensibility. 16:54:31 [shawn] shawn has joined #tpac 16:54:32 [IanJ] Noah: (1) modularity is good (2) separation of concerns is good 16:54:32 [maraki] maraki has joined #tpac 16:54:56 [IanJ] (3) Web is an unusual system; Web is too big for any central group to invent or coordinate all the extensions we need. 16:54:57 [JariA] JariA has joined #tpac 16:55:13 [IanJ] Noah: My view is that good architecture can be reduced to a few use cases. 16:55:21 [Karen] Karen has joined #tpac 16:55:25 [zarella] zarella has joined #tpac 16:55:28 [IanJ] Noah: SVG is a separate specification, that happens to be an XML vocabulary. 16:55:31 [tantek] tantek has joined #tpac 16:55:31 [brutzman] brutzman has joined #TPAC 16:55:31 [IanJ] ..it's easy to reuse the pieces. 16:55:49 [IanJ] ...you and I may work in an industry where we choose to use SVG in a document format of our creation. 16:55:59 [ht] ht has joined #tpac 16:56:00 [markusm] markusm has joined #tpac 16:56:11 [ddahl2] ddahl2 has joined #tpac 16:56:12 [IanJ] ..by using the same SVG as others, we have some changes that cut/paste works across container languages, 16:56:22 [IanJ] that the same svg parers/renderer can be used, and that the same toolset may be used. 16:56:32 [IanJ] ...and it will also be easier to duplicate user training, documentaiton, etc. 16:56:49 [IanJ] ...and we may also benefit from testing separation. 16:56:51 [silvia] silvia has joined #tpac 16:57:10 [AnnB] AnnB has joined #tpac 16:57:13 [IanJ] ...finally, the separation of concerns allows the marketplace to decide on a solution. 16:57:20 [YolandaG] YolandaG has joined #tpac 16:57:22 [arun] arun has joined #tpac 16:57:26 [IanJ] Noah: Now the perspective on challenge to decentralized extensibility. 16:57:38 [IanJ] Noah: First, nobody has found a painless way to do this (more on why in a moment) 16:57:53 [IanJ] Controversy: not everyone believes HTML extensions will be needed very often anyway. 16:58:01 [glaser] glaser has joined #tpac 16:58:24 [IanJ] Noah; For instance, SVG only happens once in a while. Maybe it's easier not to build a generalized mechanism but to introduce features as we need them to the core language. 16:58:40 [IanJ] Noah: Some mechanisms for avoiding name collisions are ugly and/or complicated. 16:58:56 [dom] s/Noah;/Noah:/ 16:58:59 [IanJ] Noah: With DE (decentralized extensibility), it can be hard to move experimental extensions into the core. 16:59:01 [SteveH] SteveH has joined #tpac 16:59:07 [mth] mth has joined #tpac 16:59:09 [IanJ] (example: <xxx:table> -> <table>) 16:59:19 [dbaron] dbaron has joined #tpac 16:59:23 [IanJ] Noah: The main controversies in this discussion are about avoiding name collisions. 16:59:31 [IanJ] Noah: Now, on to some questions: 16:59:35 [tantek] <xxx:table> -> <table> is not what has happened in practice 16:59:40 [tantek] what has happened in practice is: 16:59:42 [IanJ] * Does HTML 5 provide decentralized extensibility? 16:59:44 [tantek] <a> -> <svg:a> 16:59:49 [tantek] the *opposite* 17:00:08 [tantek] namespaces = siloization, encouraging *divergence*, not convergence 17:00:15 [annevk] annevk has joined #tpac 17:00:38 [IanJ] Noah: What the text/html serialization of HTMl 5 does not provide are mechanisms like XML namespaces that help to avoid naming conflicts or help explict existing vocabularies. 17:00:48 [raman] tantek, was waiting for you to say svg:a :-) 17:01:13 [tantek] raman - the presenter used SVG as an example, therefore it was fair game in cross-examination. 17:01:38 [IanJ] Noah: There are a number of extension points (Noah lists a few: @class, @rel, <meta>, <script>, ...) 17:01:39 [fedka] fedka has joined #tpac 17:01:59 [IanJ] Noah: So the question is "how will you coordinate on extensions"? 17:02:04 [IanJ] Noah: My understanding is that: 17:02:05 [VagnerW3CBrasil] VagnerW3CBrasil has joined #tpac 17:02:05 [hsivonen] (surely SVG should be coordinated with HTML right here at the W3C instead of being something that happens without coordination between the parties) 17:02:05 [itandrea] itandrea has joined #tpac 17:02:14 [IanJ] 1) There won't be too many cases where you need major new features 17:02:22 [Julian] tantek, I've seen other vocabularies where the opposite happened (existing elements in other namespaces reused properly) 17:02:26 [IanJ] 2) Where there are major new features, update the spec 17:02:28 [tantek] I hereby place my contributions to irc.w3.org chat rooms into the public domain, and explicitly grant permission for inclusion in any public logs. 17:02:37 [AxelPolleres] AxelPolleres has joined #TPAC 17:02:38 [taziden] taziden has joined #tpac 17:02:41 [glazou] annevk: #tpac09 17:02:50 [meu] meu has joined #tpac 17:02:56 [caribou] s/annevk:/annevk, 17:03:11 [chaals] chaals has joined #tpac 17:03:21 [IanJ] Noah: There's a point of view that if your spec is really extensible, you can leave a lot out of it. 17:03:27 [ankesh] ankesh has joined #tpac 17:03:36 [IanJ] Noah: HTML 5 has been criticized for including "too much" presumably since it is not extensible enough. 17:03:57 [IanJ] Noah: There has been this debate (week by week) -- in or out?; my list of features here is a moving target. 17:03:59 [raman] Tantek, I do the same for my comments, and further assert that Bubbles promises to do the same --- all other dogs permitting:-) 17:04:11 [IanJ] Noah: So the big question here has to do with name collisions. 17:04:21 [tantek] Liam - you can use CC0. 17:04:23 [IanJ] Noah: I will focus here on namespaces, the mechanism traditionally used in this context. 17:04:51 [John] John has joined #tpac 17:04:55 [tantek] 17:05:02 [John] John has left #tpac 17:05:13 [IanJ] IanJ has changed the topic to: Technical Plenary Day agenda; back channel irc is #tpac09 17:05:24 [benadida] benadida has joined #tpac 17:05:30 [tantek] tantek has changed the topic to: Technical Plenary Day channel. see also backchannel: #tpac09 17:05:36 [DavidC] DavidC has joined #tpac 17:05:39 [IanJ] [Noah starts to dive down into xml namespaces] 17:06:00 [tantek] "this markup is ugly" 17:06:03 [IanJ] Noah: I think the biggest con is that "people hate this stuff"; hard to type, URIs are long. 17:06:07 [pants] pants has joined #tpac 17:06:16 [tantek] also, copy/paste fragility 17:06:16 [IanJ] Noah: URIs should be in tag names, but that is even worse to type. 17:06:33 [IanJ] Noah: Everybody does this....(e.g., java packages)...you end up with clumsy names, and then complexity to make them tractable. 17:06:43 [IanJ] Noah: Since we don't like prefixes, then we use defaults, which cause their own problems. 17:06:43 [tantek] ah there it is 17:06:48 [ccklaus] ccklaus has joined #tpac 17:06:51 [mischat] mischat has joined #tpac 17:06:56 [rreck] rreck has joined #TPAC 17:07:18 [tantek] "Namespaces tend to break DOM-level updates (e.g. innerHTML)." 17:07:27 [benadida] actually, it's just that the people who hate this stuff are highly vocal. Plenty of people don't care. 17:07:50 [tantek] benadida - the opposite, the people that *do* care about namespaces tend to be the more vocal. 17:07:53 [Marcos] Marcos has joined #tpac 17:08:01 [kkLO00] kkLO00 has joined #tpac 17:08:10 [Marcos] isn't src attribute in the null namespace? 17:08:21 [IanJ] Noah: For me the deepest flaw in the namespace approach is that as element names become more "standard" you are stuck with the prefixes used in deployed content. 17:08:43 [IanJ] Noah: There are some proposals floating around to help manage the namespace question (Liam Quin, and one from Microsoft). Proposals linked from slides. 17:08:46 [IanJ] Noah: Summary: 17:08:58 [kkLO00] hey guys, keep up the good work 17:08:59 [IanJ] * Disagreement about how often extensions will be needed, and whether collisions would cause problems. 17:09:08 [mattg] mattg has joined #tpac 17:09:12 [IanJ] * Disagreement about whether central coordination through the HTML WG suffice.s 17:09:27 [IanJ] * Disagreement about whehter it's practical to provide decentralized mechanism to avoid name collisions. 17:09:39 [tantek] "Namespaces are ... usable in XHTML" <- strongly disagreed. 17:09:41 [IanJ] * There is disagreement as to how much to compromise to maintain compatibility with XML. 17:10:13 [vincent] vincent has joined #TPAC 17:10:20 [IanJ] * There is disabilityas to which capabilities shoudl be slpit out from HTML and which existing Rec to make usable in HTML 5 (e.g., microdata, rdfa mappings, svg, canvas) 17:10:27 [dsr] dsr has joined #tpac 17:10:30 [IanJ] * There is disagreement in particular about inclusion of RDFa 17:10:49 [IanJ] [Noah discusses why it matters] 17:10:55 [benadida] tantek - seeing how much you have to say in this chat room appears to contradict your claim that anti-namespace people aren't vocal :) 17:11:01 [John_Boyer] John_Boyer has joined #tpac 17:11:04 [yfukami] yfukami has joined #tpac 17:11:25 [IanJ] Whether: HTML 5 will adapt well as new capabilities are needed, who will be able to create and deploy enhancements, whether HTML 5 will be convenient, compatible with existing content, will work with XML tools 17:11:40 [tantek] benadida - email logs of public-html proves the point that namespace advocates are more vocal, more often, and spend more time write lengthier messages on the topic. 17:12:00 [Roger] I think Noah just did a heck of a good job. 17:12:01 [IanJ] Henry: Thank you, Noah. 17:12:26 [DanC] well, most of the data isn't visible. most people who care one way or the other about namespaces don't participate in public-html or W3C at all 17:12:38 [John_Boyer] This talk did a good job on syntactic extensibility, but we also need to be considering extensibility from the interaction domain, e.g. uniform support for XBL-like functionality. 17:13:13 [IanJ] [Debaters] 17:13:31 [IanJ] Jonas Sicking (Mozilla Foundation) 17:13:37 [IanJ] Tony Ross (Microsoft) 17:13:58 [myakura] myakura has joined #tpac 17:14:10 [jmorris] 17:14:10 [IanJ] slides: 17:14:21 [IanJ] First question on definitions: 17:14:23 [Lachy] Lachy has joined #tpac 17:14:27 [tlr] tlr has joined #tpac 17:14:29 [dape] dape has joined #tpac 17:14:32 [IanJ] Tony: I think that people are trying to solve different problems. 17:14:56 [annevk] s/Tony/Jonas/ 17:14:59 [IanJ] thx 17:15:07 [IanJ] Jonas: Seems good to allow private extensions. 17:15:20 [IanJ] Jonas: The Web at large may not need everything. 17:15:24 [mgylling] mgylling has joined #tpac 17:15:35 [IanJ] Jonas: The distributed part is the harder part - people who don't talk to one another to coordinate extensions. 17:15:48 [IanJ] Jonas: I agree that name collisions is a hard question. 17:16:05 [ChrisPoppe] ChrisPoppe has joined #tpac 17:16:26 [IanJ] Jonas: So I think it's ok to have other W3C groups be able to add extensions. 17:16:47 [IanJ] ...e.g., we see browser vendors doing experimental css property values to test them out. 17:17:10 [IanJ] ...there's a small amount of coordination to avoid stepping on feed, but it's distributed to the extent that more than one group can create extensions. 17:17:17 [matt] s/feed/feet/ 17:17:19 [IanJ] ...and I think that kind of extension is a good thing. 17:17:35 [IanJ] Tony: Largely I agree with a lot of what Jonas said. 17:17:45 [IanJ] Tony: What is important when talking about DE is who can extend, and how. 17:18:14 [IanJ] Tony: You can have people writing their own standards, frameworks with their own extensions, browsers that extend, non-browser tools that extend, etc. 17:18:34 [IanJ] Tony: There are extensibility mechanisms in HTML, but it only goes so far (e.g., microformats) 17:18:35 [Nikunj] Nikunj has joined #tpac 17:18:39 [tantek] "to some extent distributed extensibility is possible in HTML today, we have seen this with microformats" 17:18:57 [IanJ] Tony: We are imposing some limitations...you can't create your own tags 17:19:01 [mmani] mmani has joined #tpac 17:19:02 [benadida] microformats are *distributed* extensibility? 17:19:17 [howard] howard has joined #tpac 17:19:22 [tantek] benadida - was just quoting Tony 17:19:43 [IanJ] Tony: As we get more than just browser vendors involved, we can be talking about tens of thousands of people wanting to add their own targetted extensions. 17:19:52 [brucel] brucel has joined #tpac 17:19:57 [tantek] I'm not sure I would call microformats "distributed extensibility" themselves, but rather an example of distributed extensibility in that they occurred *outside* W3C. 17:20:06 [Nightwolf] Nightwolf has joined #tpac 17:20:06 [IanJ] Tony: If someone wants to use a feature defined by somebody else, if the mechanism is simply prefix-based, there's a desire to keep the name short. 17:20:26 [Nightwolf] hi 17:20:32 [timeless_mbp] timeless_mbp has joined #tpac 17:20:34 [rigo] tantek, everything occured first outside W3C 17:20:46 [tantek] Tony: "if someone wants to use calendar from one and date picker from another ..." 17:20:54 [IanJ] Tony: Another issue is consistency. 17:20:57 [mib_no73a3] mib_no73a3 has joined #tpac 17:21:05 [IanJ] ...we have support for this in xhtml...and in the (HTML5) DOM 17:21:13 [dsinger] dsinger has joined #tpac 17:21:15 [IanJ] ...we have support for namespaces implicity in the html 5 syntax. 17:21:26 [IanJ] ...names acquire namespaces implicitly (e.g., svg and mathml namespaces) 17:21:50 [IanJ] Tony: So namespaces are available through the Dom; just not there yet in the markup. 17:21:51 [DanC] DanC has changed the topic to: W3C Technical Plenary ; see also backchannel: #tpac09 17:22:01 [IanJ] Tony: I think namespaces provide a desirable solution. 17:22:40 [IanJ] Jonas: When people want to add a feature to HTML 5, we first ask "what is the use case"? 17:22:49 [IanJ] Jonas: So why do we want this type of decentralized extensibility. 17:23:03 [tantek] There is insufficient representation of pragmatists and web publishers on this panel. 17:23:31 [IanJ] Tony: Lots of XML applications use namespaces, providing xml-namespace based support in html 5 would allow easier reuse in HTML 5 context. 17:23:50 [IanJ] [Henry asking each speaker to ask the other for any clarifications] 17:24:02 [IanJ] Tony: What in particular do you find about xml namespaces hard or undesirable? 17:24:09 [IanJ] Jonas: Two problems, partially stemming from the same thing. 17:24:31 [IanJ] Jonas: When I hear people talk about various elements, everyone refers to <svg:a>...nobody writes out the full svg namespace. 17:25:06 [Travis] Travis has joined #tpac 17:25:36 [IanJ] Jonas: People think of the "full name" as being the "short name" (prefix + local name) 17:25:37 [cheol] cheol has joined #tpac 17:25:48 [IanJ] Jonas: People in practice identify with the short name 17:26:07 [IanJ] Jonas: The real name is a tuple; you have to pass around two values (namespace URI + local name, and sometimes even the prefix, too) 17:26:13 [IanJ] ...so this adds complexity to code. 17:26:24 [IanJ] core questions 17:26:28 [IanJ] 17:26:41 [IanJ] Henry: I heard considerable agreement on "what DE is." 17:26:52 [IanJ] Henry: People begin to differ in the core questions.[ 17:27:26 [IanJ] Henry: how do we enable DE in HTML 5? How to avoid name collision? Subsidiary issues (XML v HTMl serializations, apis, validators, non-browser UAs) 17:27:41 [IanJ] rrsagent, make minutes 17:27:41 [RRSAgent] I have made the request to generate IanJ 17:27:47 [IanJ] rrsagent, set logs public 17:28:14 [IanJ] Tony: Regarding a proposal to manage name collisions. There was a proposal on the list (@@URI?@@). 17:28:37 [Roger] I recall that one of the big factors in the success of HTML was that it was a highly simplified subset of SGML. 17:28:44 [Nikunj] Nikunj has joined #tpac 17:28:48 [Roger] People like me could use it. 17:28:50 [zarella_] zarella_ has joined #tpac 17:29:11 [IanJ] Tony: I think DE is somewhat enabled already in HTML 5, but I think that for the sake of consistency, we should explore how much closer we can bring HTML XML serialization with existing XML ns mechanism. 17:29:39 [Roger] I personally would like to be able to do a "view source", cut and paste some of the HTML into my own document, and have a chance in hell of making it work. 17:29:44 [IanJ] Tony: You can use namespace APis available in the DOM. 17:29:52 [tantek] Tony, if you believe in XML Namespaces, then resurrect XHTML2, grab whatever elements you want from HTML5 (perhaps the whole set), introduce a new mimetype for non-draconian XML handling, and offer it as an alternative to HTML5. 17:30:12 [timely] timely has joined #tpac 17:30:13 [annevk] XML5 FTW! 17:30:32 [IanJ] Tony: Obviously validators have a lot of freedom in what they validate, but there is an impact on users. 17:30:52 [rubys] annek: are you going to bring that up that when Henry opens up the floor for questions? 17:30:58 [Hixie] that's what data-* is for 17:31:07 [masinter] This is mainly a political issue hiding behind a technical one. If Microsoft started to use <SL> for SilverLight and Linden Labs started to use <SL> for Second Life, who would have the authority to allow or disallow either of them, or decide between them? 17:31:07 [hsivonen] (fwiw, today XHTML5 validators don't allow random namespaces, so Namespaces and validation are separate questions) 17:31:08 [IanJ] Tony: You don't want to push some functionality to script and away from declarative markup. 17:31:20 [IanJ] Tony: We should provide guidelines for DE. 17:31:28 [tantek] annevk ++ 17:31:31 [IanJ] Tony: Avoiding name conflicts with core language in the future 17:31:37 [Claes] Claes has joined #tpac 17:31:39 [raman] namespace view -- Let's colonize the Web" --- 2 seen as 5 (mirror image view) dash-it -- we dont want to be colonized --- Hence --- use dashes instead of colons everywhere:-) 17:31:39 [DKA] DKA has joined #tpac 17:31:45 [IanJ] Tony: Using a URI helps also with conflicts with other extensions. 17:31:46 [Hixie] data-*="" already handles the dojo use case -- it's what it was meant for: 17:31:49 [hsivonen] masinter, moreover, is it good for the Web to delegate a substantial part of markup processing to Silverlight or Second Life? 17:31:50 [IanJ] Tony: Prefixes help shorten. 17:32:02 [plaggypig] plaggypig has joined #tpac 17:32:05 [tantek] hsivonen - W3C has never defined how to validate multi-namespace documents. 17:32:11 [IanJ] Tony: It is an indirection, but people are used to that (e.g., putting a value in a variable) 17:32:17 [benadida] Tony is right on "you don't want to push some functionality to script and away from declarative markup." I want a web where declarative data can be one of the powerful tools at our disposal. 17:32:25 [hsivonen] masinter, what if I have a device that doesn't have a port of Silverlight or Second Life. How do I read the content? 17:32:31 [IanJ] Jonas: We already have several interesting extension mechanisms. 17:32:45 [Zee] Zee has joined #tpac 17:32:47 [raman] larry --- java packages --- gues who wrote code in package com.ms --- hint: domain ms.com is owned by Morgan Stanley -- 17:32:55 [IanJ] Jonas: We need to ask the question "what do we need DE for?" The HTML 5 spec is good enough for some use cases. E.g., the ability to use microdata, or rel values. 17:33:19 [masinter] hsivonen, your use of "delegate": who exactly is doing the delgation? And what is the threshold for "substantial"? 17:33:20 [tantek] Jonas: "We already have microdata, we already have the ability to add new rel values, we have a rel-profile proposal" 17:33:23 [IanJ] Jonas: You can use profiles to ground names (in HTML 4) 17:33:47 [IanJ] Jonas: If you want to add other elements, write a specification. I think that's a good way for people to extend the language where we want people to experiment or add functionality to the Web platform. 17:33:49 [vincent] vincent has joined #TPAC 17:34:03 [benadida] seems to me Jonas is ignoring a bunch of clear evidence for DE: Google, Yahoo, and others creating their own vocabularies and then later, serendipitously, coming together on a subset. 17:34:04 [IanJ] Jonas: Adding a feature to the Web platform should not be taken lightly; we suffer from poorly defined features. 17:34:38 [benadida] isn't html5 trying to kill @profile, btw? 17:34:39 [hsivonen] masinter, delegated away from a standard-implementing engine. substantial if I can't make sense of content without an extension processor. 17:35:06 [mhausenblas] mhausenblas has joined #tpac 17:35:07 [IanJ] Jonas: Tremendous cost to a poorly desgined feature -- we want people to collaborate, review, and integrate into the core web platform. 17:35:18 [arun] benadida, don't you think those very use cases will gravitate to microdata as well? Is there anything *intrinsic* about the use of a namespaced solution? 17:35:28 [IanJ] Jonas: It's not a problem for scenarios where you just want local extensions; small group of people; in that case, you don't need to worry about name collisions. 17:35:36 [Julian] benadida, it did, so far 17:35:42 [myakura] isn't manu working on the proposal for adding @profile? 17:35:47 [IanJ] Jonas: If we want the whole web to use it, we should work on integration. 17:36:16 [IanJ] Jonas: what would be nice is to do what css does - if you want to do a local extension, here's how you do so (using "-token-" prefix) 17:36:30 [rubys] manu's draft: 17:36:47 :36:50 [rahul] rahul has joined #tpac 17:36:58 [IanJ] [Questions of clarification between debaters] 17:37:14 [IanJ] Tony: Do you think consistency between the 2 serializations is important? 17:37:18 [masinter] Let's get rid of seatbelts in cars because we don't want to have any accidents. 17:37:22 :37:27 [benadida] arun - microdata might be useful, though it stinks of NIH. HTML5 could have used RDFa syntax without namespaces (which wouldn't fulfill all of the use cases, but at least wouldn't be silly reinvention.) 17:37:34 [IanJ] Jonas: There is value to consistency. At the same time, looking at the documents people write today, many more are written in HTML than in XMl. 17:37:43 [tommorris] tommorris has joined #tpac 17:38:00 [IanJ] Jonas: HTML has been much more popular. I don't want to make the 2 the same. First of all, choice is good. But second, the XML world made some mistakes, and I think xml namespaces is one of them. 17:38:12 [IanJ] Jonas: Consistency is nice, but things aren't always that simple. 17:38:18 [spynifex] spynifex has joined #tpac 17:38:30 [IanJ] Jonas to Tony: How concerned are you about breaking compatibility with existing documents. 17:39:02 [IanJ] Jonas: Because browsers today in the html serialization, ns attributes are ignored, .....there's a lot of content that therefore relies on them being ignored. 17:39:04 [mib_fvlkfa] mib_fvlkfa has joined #tpac 17:39:10 [spynifex] Hi there 17:39:13 [IanJ] ...do we have data that show that it will be ok to turn on ns support? 17:39:31 [IanJ] Tony: We do have some data, and there would be some problems, so we need to manage the compatibility. 17:40:07 [IanJ] Tony: I don't think that with prefixed element names, compatibility concerns pose as big a risk. 17:40:46 [IanJ] Jonas: One concern is that javascript libraries may want to add names and their might be collisions there. 17:40:48 [DKA] DKA has joined #tpac 17:41:01 [IanJ] ...but libraries add properties to the global object...there's a situation where you might have name collisions. 17:41:11 [hsivonen] would MS ship their proposal across all the modes of IE9? 17:41:19 [maxf] maxf has joined #tpac 17:41:19 [IanJ] ..but we haven't seen it in practice; with the exception of the dollar name, but there it was somewhat intentional. 17:41:36 [IanJ] ...since js libraries have shown that they can deal with sharing a ns without name collisions, I don't think we should worry about it. 17:41:53 [IanJ] Tony: Js gives the end user more flexibility in resolving this than markup does. 17:42:14 [IanJ] ...they typically put functionality in a global object. And they work fine if you rename that object something else (aliasing). 17:42:27 [IanJ] ...eg, I can run multiple versions of jquery at the same time by using aliasins. 17:42:29 [benadida] "js libraries have shown that they can deal with sharing a ns without name collisions" is simply not what I've seen from extensive JavaScript injection into web pages. 17:42:36 [IanJ] ...I don't think we have that flexibility automatically with just markup. 17:43:14 [IanJ] Tony: I was wondering, Jonas, whether you think there should be different requirements for different types of authors. 17:43:39 [Rotan] Yes, page authors are in a different class. Adding DISelect to HTML 5, for example, is a problem without some name management solution. 17:43:48 [mcgredo] mcgredo has joined #tpac 17:43:51 [IanJ] Jonas: Yes. browsers have a larger responsibility for not injecting crap into the namespace of what they support. We have seen that when browser vendors inject features, they get picked up, and browser vendors end up having to support it. 17:44:03 [IanJ] Jonas: So the bar should be very high for browser vendors to add extensions. 17:44:23 [IanJ] Jonas: For js libraries, I think the bar should be slightly lower, though they might have similar concerns as browser vendors about uptake. 17:44:37 [IanJ] Jonas: I'm relucting to impose any constraints on page authors, who do what they want anyway. 17:44:48 [IanJ] Jonas: I think we should expect people will use their own elements and attributes. 17:45:05 [IanJ] Jonas: I am happy we've added a mechanism for adding attributes: the @data attribute. 17:45:12 [IanJ] rrsagent, make minutes 17:45:12 [RRSAgent] I have made the request to generate IanJ 17:45:25 [IanJ] Henry: Thank you, debaters. 17:45:41 [IanJ] Henry: Now to the floor. 17:46:04 [IanJ] Henry: I will try to keep threads going (over strict mic order) 17:46:20 [IanJ] Julian Reschke: Two comments. 17:46:25 [sylvaing] sylvaing has left #tpac 17:46:53 [IanJ] Julian: the fact that you have to pass tuples to the API is an APi issue, not a ns issue. 17:46:55 [masao_] masao_ has joined #tpac 17:47:03 [richt] richt has joined #tpac 17:47:08 [IanJ] Julian: The HTML WG could add APIs to pass namespaced element names. 17:47:27 [IanJ] Julian: Second point - bad extensions are deployed whether we have DE or not. E.g., we have canvas. 17:47:53 [mib_fvlkfa] mib_fvlkfa has left #tpac 17:48:06 [IanJ] Jonas: Regarding ns tuple: yes, it might be possible via APIs; but haven't seen a proposal on this; might not be so straightforward. 17:48:47 [IanJ] Jonas: Also relucting to add a third set of APIs for this access...the second round of APIs has not been that popular. Most people use "createElement" 17:49:06 [IanJ] ...people are very ns-agnostic. 17:49:26 [IanJ] ..we made a firefox change recently (moving things to html ns from null ns ) and very few bug reports resulted. 17:49:35 [masao] masao has joined #tpac 17:49:41 [IanJ] Tony: There was some discussion about the means of combining ns+ local name into a single string. 17:50:00 [IanJ] ...I don't feel a new API would be necessary, but I don't think it would add complexity if we did. 17:50:12 [IanJ] ...ideally an API would be a single string access into the tuple anyhow. 17:50:36 [Laura] Laura has joined #tpac 17:50:47 [IanJ] Liam Quin (XML Activity Lead): Over the past year I've been talking to a lot of people in the XML community. 17:50:47 [glazou] glazou has left #tpac 17:51:03 [IanJ] Liam: We can't break XML; it's very widely used. And people rely on it a lot. 17:51:08 [IanJ] Liam: But we can add things. 17:51:28 [glazou] glazou has joined #tpac 17:51:40 [IanJ] Liam: I asked what we might add to XML in a way that would work with HTMl. 17:51:43 [rubys] 17:52:04 [IanJ] Liam: The "unobtrusive namespace proposal" allows mashups. 17:52:22 [IanJ] Liam: You have an optional file that a browser could go off and fetch, which defines what ns the elements are in. 17:52:29 [glazou] glazou has joined #tpac 17:52:33 [Rotan] Would there be a way for a page to override Liam's proposed external doc of namespace settings? 17:52:43 [IanJ] Liam: A browser would not ordinarily have to go get anything; a browser behaves as though it had already loaded the file. 17:52:52 [IanJ] Liam: This proposal solves some of the problems identified here. 17:53:14 [IanJ] Liam: Regarding name collisions, it lets you say what "foo" you mean; but does not let you use two different "foo" elements from two ns in the same document. 17:53:29 [claudio] claudio has joined #tpac 17:53:29 [IanJ] Liam: For that case, I would just use xml namespaces. 17:53:37 [IanJ] Liam: There is also an ISO proposal to address this. 17:53:37 [shadi] shadi has joined #tpac 17:53:39 [yfukami] yfukami has joined #tpac 17:53:51 [Nikunj] Nikunj has joined #tpac 17:54:09 [IanJ] Henry: I'd like the debaters to address the implicit question: if the overhead of using xml namespaces were reduced, would that make a difference? 17:54:18 [timbl] Rotan, presumably.... maybe they should cascade .. like CSS .. oh maybe we should use css .. svg a { background: #ffe; namespace " " } 17:54:53 [Rotan] Tim, exactly what I had in mind. 17:55:07 [gedgar] gedgar has joined #tpac 17:55:08 [IanJ] Jonas: I don't know off the top of my head. You'll still have a tuple as the identifying name. You'll still have a disconnect where people talk about names using one label, but it remains this tuple. 17:55:27 [IanJ] Jonas: Sounds interesting; I'd like to analyze the problems we are seeing and which problems it addresses or not. 17:55:36 [Rotan] Namespace-sheets, in addition to style-sheets :) 17:55:42 [IanJ] Jonas: The proposal does seem to address the problem of copying from one doc to another. 17:55:43 [dom] -> Automatic XML Namespaces 17:56:12 [IanJ] Jonas: Sound "better" but don't know yet if "quite there." 17:56:30 [PIon] PIon has joined #TPAC 17:56:32 [IanJ] Henry: Anyone else want to speak to making xml namespaces "more palatable" 17:56:41 [IanJ] Jonas: Have you submitted the proposal to the HTML WG? 17:56:48 [IanJ] Liam: I've submitted it to the HTML Coordination Group. 17:56:56 [Liam] [yes via the hypertext coordination group] 17:57:05 [DanC] liam, the hypertext CG isn't a technical forum. very different from the HTML WG 17:57:11 [hsivonen] why via a secret group? 17:57:12 [IanJ] Rotan Hanrahan: Friendly amendment --- you could use a sort of CSS cascade to simplify the namespace problem (going from explicit ns to default ns) 17:57:33 [IanJ] Larry Masinter: The topic is DE in general, though we've focused more narrowly on element/attribute extensibility. 17:57:43 [buckybit] buckybit has joined #tpac 17:57:45 [Roger] Roger says he agrees with Julian. 17:57:47 [skfet] skfet has joined #tpac 17:57:55 [IanJ] Larry: I would like to express support for extensibility more generally; this has allowed creativity on the Web. 17:58:14 [Roger] me says he agrees with Julian (sorry) 17:58:49 [IanJ] Larry: There's a political issue hiding behind a technical issue. The technical one is "how do you spell X" but the political one is "who has the authority?" For example, brand issues. 17:58:59 [IanJ] Larry: This problem addressed through mechanisms like registries. 17:59:23 [IanJ] Larry: we need to come to the conclusion of what W3C wants the political solution to be; the technical solution will follow. 17:59:50 [IanJ] Jonas: I agree that DE elsewhere [than elements and attributes; scribe thinks] is interesting. E.g., the microdata proposal. 17:59:59 [raman] we should create the PAG (Political Architecture Group) --- name intentionally chosen since PAG has always raised the spectre of a "patent advisory group" 18:00:18 [raphael] raphael has joined #tpac 18:00:41 [IanJ] Jonas: On the question of "who gets to decide"; we're biased---browser vendors or UA vendors decide. What they implement is ultimately what people can use. 18:00:51 [John_Boyer] One reason that XML namespaces are based on URIs is because it allowed the W3C to punt the registry issue elsewhere. If W3C ran a registry, then perhaps namespaces could be simplifed 18:01:06 [IanJ] Jonas: though it is also true that browser vendors will follow what a lot of authors do. 18:01:32 [IanJ] Tony: In terms of the political issue, it's broader than just user agents. Who gets to extend? Impact of browser extensions has a big impact. 18:01:34 [Zakim] +??P0 18:01:49 [shadi] zakim, ? is me 18:01:49 [Zakim] +shadi; got it 18:01:51 [skfet] what's this chan about ? 18:02:14 [nickdoty] nickdoty has joined #tpac 18:02:17 [IanJ] Ralph Swick: I heard more agreement among debaters than I expected. I heard agremeent on extensibility, and also distributed extensibility. 18:02:22 [bruce] bruce has joined #TPAC 18:02:28 [IanJ] Ralph: However, clients of HTML are not just browsers. there are other clients. 18:02:52 [IanJ] Ralph: Tony raised an interesting point about validation. One thing that has held us back has been a lack of a framework that supports ad-hoc extensions. 18:03:06 [IanJ] Ralph: We addressed that in XML using XMl schema languages to do mixed-markup validation. 18:03:31 [IanJ] Ralph: How do we register extensions? 18:03:48 [arun] That's easy -- it goes in the global namespace ;-) 18:04:07 [IanJ] ..things that push info into attributes moves the ability to validate outside our generic validator to extension-specific validation. 18:04:37 [IanJ] Ralph: on the question of registration...if we use dns, that's a form of registry, if we use a wiki, that's another. 18:04:40 [kaz] kaz has joined #tpac 18:05:12 [IanJ] Ralph: There's a subtle difference - whether I'm forced to publicize that I'm using an extension (even one in a private Intranet), would I be forced to use a central registry v hiding it behind the DNS? 18:05:37 [IanJ] Tony: Ideally, in a scenario like you described, you should not have to go do a central registry. 18:05:56 [IanJ] ..you do need the ability to resolve conflicts if they exist...but going to a central registry for private extensions is asking to much. 18:06:20 [IanJ] Jonas: There are private extension mechanisms in CSS, HTTP. Having something like that could be useful here. 18:06:45 [IanJ] ...help avoid collisions, but don't need to tell anyone you are doing it. You should not have to go to a registry to use such an extension. 18:06:46 [Rotan] "Experimental" names have an awkward habit of becoming permanent. 18:07:21 [Yves] registries imply persistence issues 18:07:23 [IanJ] Steven Pemberton: I was on a panel in 2003...this panel is an extension of that one. I gave a talk where I suggested that we needed unobtrusive namespaces; glad to see that idea reborn. 18:07:51 [IanJ] Steven: I work in a community that uses DE in HTMl all the time. We know what the advantages are. But the community is bimodal. 18:08:03 [icaro] icaro has joined #tpac 18:08:07 [IanJ] Steven: Seems in this case, the solution should serve both communities, without exclusing one. 18:08:30 [IanJ] Tantek Celik: Tony, you brought a proposal to the HTML WG. 18:08:32 [Steven] 18:08:59 [Steven] s/exclusing/excluding/ 18:09:08 [IanJ] tantek: My suggestion to you is that if you believe in XML, resurrect HTML 2, introduce what you want, and register a new mime type, and offer it as an alternative to HTML 5. 18:09:14 [IanJ] Tony: You are definitely entitled to your opinion. 18:09:19 [annevk] s/HTML 2/XHTML 2/ 18:09:52 [Judy] Judy has joined #tpac 18:09:53 [IanJ] Henry: We are down to matters of opinion. There are two main costs to porting XML into the HTML universe. 18:10:18 [IanJ] Henry: Cost at the API level of managing tuples; cost at the syntax level managing issues there . 18:10:26 [IanJ] Henry: So "is the benefit worth the cost?" 18:10:35 [IanJ] Henry: And there are several proposals to reduce the cost. 18:10:49 [Zakim] - +46.7.06.02.aaaa 18:10:50 [IanJ] Henry: This has been useful in moving the discussion forward. Thank you. 18:10:52 [IanJ] <break> 18:10:57 [IanJ] rrsagent, make minutes 18:10:57 [RRSAgent] I have made the request to generate IanJ 18:11:04 [MikeSmith] s/mime type/mime type for non-draconian XML handling/ 18:14:19 [raphael_] raphael_ has joined #tpac 18:15:30 [jun] jun has joined #tpac 18:16:39 [unl] MikeSmith: draconian error handling is *not* prescribed by the xml spec. it's an interpretation issue. the YSOD is a mozilla problem. see webkit getting it right with non-wellformed xhtml files 18:18:28 [ccklaus] ccklaus has left #tpac 18:19:45 [lbolstad] lbolstad has joined #tpac 18:19:49 [cheol] cheol has joined #tpac 18:21:57 [JonathanJ] JonathanJ has left #TPAC 18:22:14 [Nikunj] Nikunj has joined #tpac 18:25:34 [unl] unl has joined #tpac 18:25:49 [Steven] Steven has joined #tpac 18:26:02 [MichaelC_] MichaelC_ has joined #tpac 18:26:06 [tantek] tantek has joined #tpac 18:26:32 [satoshi] satoshi has joined #TPAC 18:26:34 [Julian] Julian has joined #tpac 18:27:01 [raphael] raphael has joined #tpac 18:27:35 [JariA] JariA has joined #tpac 18:28:05 [AxelPolleres] AxelPolleres has joined #TPAC 18:28:34 [Steven] Scribe: Steven Pemberton 18:28:41 [Steven] Scribenick: Steven 18:29:05 [Steven] Topic: Maintaining a Healthy Internet Ecosystem -- Challenges to an Open Internet Infrastructure 18:30:23 [Steven] Moderator: Leslie Daigle, Internet Society Presenters: John Curran (ARIN) David Conrad (ICANN) Lisa Dusseault (IETF) 18:30:44 [Steven] Steven has left #tpac 18:30:56 [TabAtkins] TabAtkins has joined #tpac 18:31:44 [markusm] markusm has joined #tpac 18:31:55 [Marcos] Marcos has joined #tpac 18:32:08 [raphael] raphael has joined #tpac 18:33:35 [Claes] Claes has joined #tpac 18:33:52 [ccklaus] ccklaus has joined #tpac 18:33:53 [vincent] vincent has joined #TPAC 18:34:01 [shiki] shiki has joined #tpac 18:34:08 [Steven] Steven has joined #tpac 18:34:32 [Norm] Norm has joined #tpac 18:34:52 [rkuntsch] rkuntsch has joined #tpac 18:35:09 [raman] raman has joined #tpac 18:35:26 [jmorris] jmorris has joined #tpac 18:35:39 [annevk] annevk has joined #tpac 18:35:56 [rlewis3] rlewis3 has joined #tpac 18:36:00 [marengo] marengo has joined #tpac 18:36:21 [howard] howard has joined #tpac 18:36:44 [ht] ht has joined #tpac 18:36:49 [Steven] LD: Focus - to talk about managing internet for common good 18:36:58 [darobin] darobin has joined #tpac 18:37:17 [Lachy] Lachy has joined #tpac 18:37:43 [brutzman] brutzman has joined #TPAC 18:37:53 [gedgar] gedgar has joined #tpac 18:37:57 [Steven] LD: Success is due to open standards, freely accessible processes, transparent governance 18:37:59 [adrianba] adrianba has joined #tpac 18:38:20 [Steven] ... internet must remain open for the next big thing 18:38:30 [Steven] ... ecosystem 18:38:46 [AndyS] AndyS has joined #tpac 18:38:48 [wbailer] wbailer has joined #tpac 18:39:09 [Steven] ... standards, resource management, infrastructure, users, organisations that build capacity 18:39:18 [Kai] Kai has joined #tpac 18:40:00 [Steven] ... who does what really? 18:40:18 [Steven] ... spider diagram (just one perspective) 18:41:00 [raman] raman has joined #tpac 18:41:50 [tantek] For the record, my question / proposal at end of "Distributed Extensibility" session was intended seriously (not sarcastically), to enable/allow/encourage exploration of multiple options by strongly interested parties. 18:41:53 [jjc] jjc has joined #tpac 18:41:56 [Steven] ... education and capactity building sub diagram 18:42:15 [Steven] ... users sub diagram 18:42:26 [Steven] ... policy development sub-diagram 18:42:50 [tantek] "important role to play in meatspace" 18:43:02 [marie] [slides at ] 18:43:24 [Steven] ... Naming and addressing sub-diagram 18:43:45 [Steven] ... open standards sub-diagram 18:43:56 [tantek] marie - are source slides available? e.g. in HTML (these look like unordered lists) and/or SVG? 18:44:25 [Steven] ... shared global services 18:44:28 [Eduardo] Eduardo has joined #tpac 18:44:29 [marie] tantek - just pdf 18:44:37 [marie] linked from the agenda page 18:44:48 [Tobias] Tobias has joined #tpac 18:45:12 [masinter] masinter has joined #tpac 18:45:34 [Steven] ... ... Today's panel, 3 pieces of the diagram represented - IETF, ICANN, IRIN 18:45:43 [Steven] s/IRIN/ARIN/ 18:46:20 [plh] plh has joined #tpac 18:46:21 [Steven] Panellist Lisa Dusseault, IETF Applications area director [LD2] 18:46:39 [shepazu] the slides were very professional 18:46:39 [Steven] LD2: W3C and IETF do work well together 18:47:10 [marie] 18:47:25 [Steven] ... Mark Nottingham is our coordinator at W3C 18:47:55 [Steven] ... DanC and PLH are good contacts 18:48:21 [Steven] [slide: hwo to talk to us] 18:48:25 [glazou] IanJ: 18:48:25 [glazou] 18:48:26 [Steven] s/hwo/how/ 18:48:40 [glazou] IanJ, I'll need you miniDVI again for my lightning talk 18:48:40 [nick] nick has joined #tpac 18:48:59 [Steven] [slide: we have a lot in common] 18:49:14 [plh] --> W3C/IETF liaison mailing list archive 18:49:28 [Steven] [Slide: The rest of the world] 18:49:45 [hbj] hbj has joined #tpac 18:49:52 [Steven] [Slide: Plan for] 18:51:18 [Steven] [Slide: Challenges] 18:51:18 [brucel] brucel has left #tpac 18:51:44 [Maua] Maua has joined #tpac 18:53:30 [drogersuk] drogersuk has joined #tpac 18:54:00 [Steven] David Conrad, ICANN 18:54:18 [JonathanJ] JonathanJ has joined #TPAC 18:54:24 [Steven] DC: Also at IANA 18:54:27 [marie] 18:54:51 [ArtB] ArtB has joined #tpac 18:54:53 [Steven] [Slide: Openness] 18:55:34 [Steven] [Slide: Multiple personalities] 18:56:08 [Hideki] Hideki has joined #tpac 18:56:43 [timely] < > 18:56:47 [Steven] [Slide: Multiple personalities 2] 18:57:54 [DanC] MOU = Memorandum of Understanding 18:57:58 [Steven] [Slide: IANA Functions] 18:58:29 [mauro] 18:58:49 [Steven] DC: about 1000 registries, some which have 4 or 5 requests per day 18:58:49 [Ileana] Ileana has joined #TPAC 18:59:10 [Steven] [Slide: Openness in IANA Functions] 19:00:56 [nord_c] nord_c has joined #tpac 19:00:57 [Steven] [Slide: Transparency] 19:01:23 [tantek] could scribes expand acronyms? many are having trouble following 19:01:49 [Steven] [Slide: Accountability] 19:01:55 [Eliot_Graff] Eliot_Graff has joined #tpac 19:02:02 [mth] mth has joined #tpac 19:02:11 [timely] SLA=Service Level Agreement 19:02:46 [Claes1] Claes1 has joined #tpac 19:03:02 [Steven] [Slide: Summary] 19:03:29 [Steven] DC: We are trying to be more open 19:03:38 [Steven] ... our website is getting better 19:04:12 [jallan] jallan has joined #tpac 19:04:25 [Steven] John Curran: Arin 19:04:37 [Steven] JC: I will give you years of terror 19:04:41 [marie] [no slides] 19:04:50 [timely] ... a regional internet registry 19:04:55 [Steven] ... ARIN is a regional IRI assignment entity 19:05:01 [timely] ... involved in BGP routing 19:05:10 [Arron] Arron has joined #tpac 19:05:13 [timely] ... one of the founders 19:05:19 [mauro] ARIN --> American Registry for Internet Numbers 19:05:19 [Steven] ... I was a founder, moved to CEO 19:05:23 [timely] ... we have a transition coming up 19:05:30 [timely] ... 2^32 ipv4 addresses 19:05:37 [timely] ... we have been giving them out 19:05:49 [timely] ... we used to give out class A, class B, class C 19:05:58 [Steven] scribenick: timely 19:05:59 [timely] ... we've switched to giving out <slash-notation> 19:06:02 [maxf] maxf has joined #tpac 19:06:14 [timely] ... we've been going through 10-12 slices a year 19:06:22 [timely] ... we're down to 28 slices left 19:06:29 [timely] ... we have 717 days left 19:06:35 [timely] ... and we will run out of ipv4 addresses 19:06:43 [timely] ... when we run out of addresses 19:06:54 [timely] ... people won't be able to connect new servers 19:06:58 [timely] ... we're not really running out 19:07:05 [timely] ... we're running out of unassigned addresses 19:07:21 [timely] ... every 6-12 months regional groups come asking for addresses 19:07:27 [timely] [or was that isps] 19:07:51 [timely] ... there are ranges which are available because they can be torn down (dial up ranges) 19:08:17 [timely] some addresses can be exchanged by offering customers savings for returning addresses 19:08:28 [timely] ... every ISP will have to start reclaiming addresses 19:08:39 [timely] ... there are a lot of addresses assigned to companies that don't exist anymore 19:08:53 [timely] ... some original granted groups have turned in early range grants 19:09:07 [timely] ... there are 6-12 of those perhaps left 19:09:16 [timely] ... but this won't help for much time 19:09:21 [timely] ... at some point, we will run out 19:09:30 [timely] ... option 1. we put a sign out, "the internet is full, go away" 19:09:37 [timely] ... this is actually real simple 19:09:41 [timely] ... it's perfect 19:09:50 [timely] ... there are some equity and fairness issues 19:09:57 [timely] ... some countries are only now coming to the table 19:10:04 [timely] ... and it's unfair to them 19:10:12 [timely] ... option 2. ipng 19:10:18 [timely] ... what you now call ipv6 19:10:24 [timely] ... it has 2^128 addresses 19:10:29 [timely] ... which is a lot of addresses 19:10:36 [timely] ... i won't try to enumerate them 19:10:49 [timely] ... but we can still spend them at the same rate 19:11:00 [timely] ... but this isn't enough 19:11:04 [timely] ... because it's not about packets 19:11:17 [timely] ... there's a need to get packets connected 19:11:22 [claudio] claudio has joined #tpac 19:11:25 [timely] ... and most servers only have ipv4 addresses 19:11:32 [Nikunj] Nikunj has joined #tpac 19:11:35 [timely] ... we have 2 years to get ever web server an ipv6 addresss 19:11:37 [timely] s/sss/ss 19:11:40 [timely] [shouted!] 19:11:47 [mac] mac has joined #tpac 19:11:49 [DanC] *we have 2 years to get ever web server an ipv6 address* , he says 19:11:54 [timely] ... i'm now telling you that it is your job that we get every server an ipv6 address 19:11:59 [timely] ... in addition to an ipv4 address 19:12:05 [timely] ... if everyone were to do that 19:12:15 [timely] ... we could connect new users with just an ipv6 address 19:12:29 [timely] ... we've looked at the number of servers with ipv6 addresses 19:12:31 [timely] ... it's a small number 19:12:38 [mac] mac has joined #tpac 19:12:40 [timely] [scribe pauses to change nick] 19:12:53 [timeless] ScribeNick: timeless 19:12:54 [fo] fo has joined #tpac 19:13:01 [DanC] doesn't youtube account for a majority of IP traffic already? google has IPv6 deployed, no? 19:13:03 [timeless] questioner: why can't you assign everyone an ipv6 address 19:13:06 [DanC] 2% sounds low 19:13:15 [dom] google has IPv6 19:13:17 [timeless] speaker: the problem is that you have to give people routing information 19:13:29 [timeless] ... you have to get the ipv6 address configured on your server 19:13:55 [timeless] the problem is getting the address, getting the configuration, configuring your server 19:14:16 [timeless] questioner: couldn't software automatically assign the ipv6 addresses to servers 19:14:24 [timeless] speaker: when you get addresses from your server 19:14:37 [timeless] ... you get them from a block which the ISP manages 19:14:57 [timeless] ... this is managed by address blocks 19:15:04 [timeless] ... which arranges routing blocks 19:15:13 [AnnB] AnnB has joined #tpac 19:15:25 [timeless] ... ideally you get v6 addresses according to network topography 19:15:27 [caribou] s/questioner/Elika Etemad 19:15:47 [timeless] other-person: the issue is getting the full scale deployment of a new internet 19:15:54 [DanC] I think the question was: do the server owners have to start this change, or can it be done for them? 19:16:09 [caribou] s/other-person/Leslie Daigle 19:16:14 [Claes] Claes has joined #tpac 19:16:23 [timeless] speaker: this wireless network gives you a v4 address 19:16:34 [timeless] ... a lot of you have mac books, i can see the logo 19:16:44 [timeless] ... the router might give out a v6 address 19:16:57 [timeless] Leslie: is this a general question of the room, or do we move on 19:17:01 [timeless] [room]: move on 19:17:02 [caribou] s/speaker/John Curran 19:17:17 [timeless] new-questioner: when i talk to people about ipv6 19:17:22 [Rotan] 19:17:25 [timeless] ... i find that there wasn't a lot that people could read about 19:17:27 [brutzman] brutzman has joined #TPAC 19:17:32 [mauro] s/new-questioner/TimBL/ 19:17:34 [timeless] ... i found mit and google have v6 addresses 19:17:50 [timeless] ... for a while your computer could cheat and tunnel to a special place 19:17:55 [timeless] ... using a complicated map 19:18:07 [timeless] ... and we could deem ipv4 addresses to be part of ipv6 addresses 19:18:08 [marisol] marisol has joined #tpac 19:18:11 [dchiba] dchiba has joined #tpac 19:18:20 [timeless] John: what matters is public servers 19:18:29 [timeless] ... the ones that can be seen by the outside world 19:18:35 [timeless] ... at MIT all addresses are public facig 19:18:39 [timeless] s/cig/cing/ 19:18:52 [timeless] ... you can't get ipv6 until your network team gives you ipv6 connectivity 19:18:58 [timeless] ... or if you setup a tunnel 19:19:21 [timeless] TBL: if i work with my network team 19:19:36 [timeless] ... then when i click on a link, there's no guarantee i can get to a v4 ? 19:19:47 [timeless] John: when you click on a link with v6, you get to v6 19:19:58 [timeless] ... but some groups are working on Carrier Grade NAT 19:20:06 [timeless] ... for reaching ipv4 addresses 19:20:17 [timeless] ... but we don't know if Carrier Grade NAT will scale 19:20:23 [timeless] new-speaker-3: 19:20:38 [timeless] ... I work for a small company 19:20:42 [caribou] s/new-speaker-3/Jeremy Carroll 19:20:44 [timeless] ... I'm trying to understand what you want us to do 19:20:54 [AxelPolleres] AxelPolleres has joined #TPAC 19:21:01 [timeless] ... it sounds like we need to make sure our isp provides ipv6 addresses and ipv6 connectivity 19:21:11 [timeless] ... and we should ask our isp these questions 19:21:13 [gond] gond has joined #tpac 19:21:13 [timeless] John: steps 19:21:21 [timeless] ... 1. ask isp to turn on ipv6 connectivity 19:21:32 [timeless] ... 2. configure your servers with ipv6 addresses 19:21:39 [timeless] ... 3. make sure your software works with ipv6 19:21:46 [timeless] ... 4. double check your firewall still works 19:21:59 [timeless] ... that's what we need to do everywhere over the next few years 19:22:00 [Eliot_Graff] Eliot_Graff has left #tpac 19:22:14 [timeless] Leslie: open for questions 19:22:18 [timeless] new-speaker-4: 19:22:23 [timeless] [glazou] 19:22:32 [timeless] ... first statement, don't use acronyms 19:22:41 [wiecha] wiecha has joined #tpac 19:22:44 [timeless] ... Daniel Glazman, disruptive innovations, cochair csswg 19:22:48 [caribou] s/new-speaker-4/Daniel Glazman 19:22:49 [timeless] new-speaker-5: 19:22:50 [Tobias] Can I see this streamed somewhere? 19:22:57 [timeless] ... i'd like to suggest a new approach for this room 19:23:06 [timeless] ... think about from an opportunity side 19:23:15 [shawn] Janina 19:23:17 [timeless] ... what kind of web can we build if we're absolutely profligate with ... 19:23:28 [timeless] ... it seems we have to be limited with out thinking today 19:23:30 [timbl] Tobias, no we aren't streaming it. Yes, would be nice. 19:23:30 [caribou] s/new-speaker-5/Janina 19:23:32 [timeless] [speaker fades] 19:23:48 [timeless] ... what kind of services can we setup... 19:23:55 [timeless] ... monitoring systems for people who are aging 19:24:01 [pbaggia] pbaggia has left #tpac 19:24:03 [Tobias] timbl: Ok thanks. 19:24:08 [timeless] ... so you can setup servers for each tile in a kitchen 19:24:16 [timeless] ... so you can see if grandma is dragging 19:24:23 [timeless] Leslie: thank you for looking at the possitive 19:24:33 [timeless] ... indeed there are industries looking at the benefits 19:24:33 [caribou] s/Janina/Janina Sajka 19:24:38 [timeless] Doug S: 19:24:41 [timeless] ... where are the tutorials 19:24:45 [timeless] John: 19:24:48 [timeless] ... click on ipv6 info 19:24:55 [timeless] Doug S: you should tweet that 19:24:58 [IanJ] rrsagent, make minutes 19:24:58 [RRSAgent] I have made the request to generate IanJ 19:25:00 [DanC] 19:25:09 [timeless] Liam Q: w3c 19:25:12 [timeless] ... thank you for coming 19:25:14 [timeless] ... thank you to the panel 19:25:22 [timeless] ... the big question is what should w3 do about this 19:25:24 [timbl] 19:25:26 [timeless] ... how can we move forward 19:25:34 [YolandaG] YolandaG has joined #tpac 19:25:34 [timbl] is IPV6 wiki 19:25:35 [masao__] masao__ has joined #tpac 19:25:37 [timeless] ... I've checked and my server has ipv6 19:25:43 [timeless] ... but i don't know how to test or enter it into a browser 19:25:47 [timeless] Leslie: thanks 19:25:53 [timeless] ... if you thought xmlns was ugly 19:26:00 [timeless] ... you can look at ipv6 literals 19:26:10 [timeless] someone: i think html5 defines ipv6 19:26:16 [timeless] s/defines ipv6/ipv7/ 19:26:18 [timeless] [laughter] 19:26:25 [Julian] s/someone/Ian Jacobs/ 19:26:29 [DanC] do we have an audio recording of the "we have 2 years to get every..." soundbite? 19:26:29 [timeless] Ralph: so.... 19:26:37 [timeless] ... I heard John give us a clear challenge 19:26:44 [tlr] /me slaps Julian 19:26:48 [timeless] ... and I hear Lisa give us a clear [??] 19:27:00 [timeless] speaker3: ... 19:27:06 [timeless] ... one of the things icann is working on 19:27:08 [tlr] s/speaker3/DavidConrad/ 19:27:14 [timeless] ... is IDN 19:27:17 [maraki] maraki has joined #tpac 19:27:21 [timeless] ... the approach IETF has taken for internationalization 19:27:32 [timeless] ... is interesting in the sense that it requires parsing of web pages 19:27:39 [timeless] ... in terms of recognizing IDN domain names 19:27:44 [timeless] ... and translating that into punycode 19:27:50 [timeless] ... and that provides technical challenges 19:27:56 [timeless] ... that's an area that developers should look at 19:28:06 [timeless] ... if they haven't been working on it already 19:28:12 [timeless] Leslie: Larry do you want to plug your work 19:28:19 [timeless] Larry: there's already an RFC on IRIs 19:28:26 [timeless] ... we're working on trying to update that 19:28:35 [timeless] ... there's an amazing goal that i'm not sure everyone shares 19:28:43 [timeless] ... that web addresses should work on ... 19:28:51 [timeless] ... there are 9 groups 19:29:08 [timeless] ... and perhaps we should create out of the 9, one committee to rule them all 19:29:10 [timeless] ... and bind them 19:29:22 [timeless] ... we're having a meeting in Hiroshima to talk about this 19:29:29 [timeless] ... i've met with internationalization core group 19:29:36 [timeless] ... and [lost-group] 19:29:43 [timeless] ... and there's a dinner plan [lost-details] 19:29:53 [timeless] Roger: i'm curious... 19:30:01 [timeless] ... historically, how did y2k become generally recognized 19:30:07 [DanC] I tweeted the 2 years soundbite: 19:30:09 [timeless] ... getting governments on board and trying to fix it 19:30:18 [timeless] John: an indirect answer 19:30:24 [timeless] ... ipv4 has been compared to y2k a lot 19:30:29 [timeless] ... y2k had advantages 19:30:34 [timeless] ... you knew when it was going to happen 19:30:39 [timeless] ... you didn't know what was going to happen 19:30:42 [timeless] ... don't laugh 19:30:54 [timeless] ... when you talk to people 19:30:58 [timeless] ... they ask when it will happen 19:31:11 [timeless] ... the answer moves around 19:31:16 [timeless] ... with y2k 19:31:22 [timeless] ... you could test your machine yourself 19:31:25 [shepazu] is a terrible address to try to spread the word.... I suggest 19:31:36 [timeless] ... you could put a machine in a lab, change the date, and watch it roll over 19:31:44 [timeless] ... the problem with ipv4 19:31:56 [timeless] ... is that you don't know what's going to happen when someone comes along 19:32:03 [timeless] ... and is only given an ipv6 address 19:32:04 [IanJ] rrsagent, make minutes 19:32:04 [RRSAgent] I have made the request to generate IanJ 19:32:08 [timeless] ... arin is working with a number of governments 19:32:12 [timeless] ... working with UN 19:32:15 [timeless] ... [and others] 19:32:28 [chaals] chaals has joined #tpac 19:32:29 [LeeF] LeeF has joined #tpac 19:32:30 [timeless] ... it's not going to get more attention until it is right upon them 19:32:36 [timeless] ... and that's 18 months away 19:32:39 [timeless] [applause] 19:32:50 [timeless] someone--: lightning talks 19:32:52 [W3C] W3C has joined #tpac 19:32:56 [timeless] ... you know the rule for lightning talks 19:33:00 [timeless] Marrie Clair: 19:33:05 [dond] exit 19:33:06 [timeless] ... first presenters now on stage 19:33:07 [timbl] Well, the US switched to digital TV .. but only by offering free D-A converters to those who were left. 19:33:10 [timeless] ... 3 minutes 19:33:14 [caribou] s/Marrie Clair/Marie-Claire/ 19:33:21 [dond] bye 19:33:21 [timeless] ... 19:33:23 [caribou] s/someone--/Ralph 19:33:29 [Tobias] timeless: thank you for your effort 19:33:40 [mauro] countdown clock at 19:33:45 [mauro] timeless++ 19:33:53 [timeless] Marie-Claire: [... noise] 19:33:56 [IanJ] Speaker: Rotan Hanrahan 19:33:57 [mauro] that was awesome scribing! 19:34:07 [caribou] RRSAgent, make minutes 19:34:07 [RRSAgent] I have made the request to generate caribou 19:34:09 [jun] jun has joined #tpac 19:34:14 [timeless] Marie-Claire: ok... so lightning talks 19:34:18 [timeless] ... presenters will have a few minutes 19:34:22 [timeless] ... for their talk 19:34:26 [timeless] ... and then a 2 min discussion 19:34:33 [timeless] ... where we invite your questions at that time 19:34:34 [IanJ] -> Rotan slides 19:34:41 [timeless] lightning-one: 19:34:44 [DanC] where's glazou's timer? 19:34:45 [timeless] ... projector problems 19:34:53 [timeless] [ url i won't type, sorry ] 19:35:03 [timeless] Marie-Claire: daniel glazman has this timer ... 19:35:08 [benjick] benjick has joined #tpac 19:35:10 [timeless] ... are you ready. start 19:35:18 [cheol] cheol has joined #tpac 19:35:18 [timeless] lightning-one: one web 19:35:22 [timeless] ... yes ... we understand ... 19:35:28 [timeless] ... but you don't get one representation 19:35:32 [timeless] ... if you get some mobile thing 19:35:37 [timeless] ... you get 19:35:43 [caribou] s/lightning-one/Rotan 19:35:49 [timeless] ... different views based on different 19:35:49 [caribou] s/lightning-one/Rotan/g 19:36:02 [timeless] ... you get different experiences from different delivery contexts 19:36:11 [timeless] ... we have a device description repository 19:36:15 [timeless] ... OMA is working on this 19:36:26 [timeless] ... so server can see if you are in portrait mode/landscape 19:36:33 [timeless] ... so it can adapt accordingly 19:36:39 [timeless] ... the client can see if things are ok, 19:36:43 [timeless] ... is battery ok 19:36:48 [timeless] ... is codec installed 19:36:51 [rkuntsch] rkuntsch has joined #tpac 19:36:56 [timeless] ... DCCI is a specification on how to access that environemnt 19:36:59 [timeless] s/emnt/ment/ 19:37:00 [marisol] marisol has joined #tpac 19:37:07 [timeless] ... DCCI is based on DOM tree 19:37:16 [timeless] ... it's implemented with all the things we expect from DOM 19:37:22 [timeless] ... it runs in parallel to DOM 19:37:34 [timeless] ... the spec for DCCI exists, you can look at it 19:37:40 [timeless] [uri not provided] 19:37:51 [timeless] Rotan: we found problems 19:37:57 [timeless] ... square peg-round-hole 19:37:58 [ArtB] Please note that Nokia here means "Nokia Research Center" 19:38:02 [timeless] ... something we learned from 19:38:04 [timeless] ... 19:38:11 [timeless] ... read the wiki [uri ...] 19:38:23 [rlewis3] rlewis3 has joined #tpac 19:38:23 [timeless] 19:38:27 [timeless] [screen dims] 19:38:31 [timeless] lightning-two: 19:38:43 [timeless] Bryan: 19:38:48 [timeless] ... we have some of this done 19:38:51 [glaser] glaser has joined #tpac 19:38:58 [timeless] ... UWA has Device Delivery Context (?) 19:39:05 [IanJ] Rotan: We have 2/3 of a pie! 19:39:05 [timeless] Rotan: 2/3 done, great 19:39:16 [timeless] Rotan: it's important for ... 19:39:21 [timeless] ... get in touch with UWA (?) 19:39:26 [timeless] lightning-two: 19:39:31 [Jeanne_] Jeanne_ has joined #tpac 19:39:38 [timbl] ... read the wiki 19:39:42 [timeless] otherq: 19:39:44 [IanJ] Larry Masinter: What's the relation to CSS media queries/ 19:39:57 [timeless] ScribeNick: IanJ 19:40:09 [IanJ] Rotan: Media queries were put together a long time ago, dcci was created since then 19:40:12 [Kangchan] s/Device Delivery Context (?)/Delivery Context Ontology/ 19:40:16 [IanJ] ..hope to hide some complexities from end users 19:40:22 [IanJ] ..might use media query mechanism 19:40:32 [IanJ] -> Charlie Wiecha slides delivered by Steven Pemberton 19:40:39 [markusm] markusm has joined #tpac 19:40:59 [wiecha] s/delivered/augmented and delivered 19:41:11 [IanJ] (slides show demos of compound docs) 19:41:34 [IanJ] Steven; 19:41:35 [IanJ] The Backplane Premise 19:41:35 [IanJ] Compound documents are easy to create, syntactically 19:41:35 [IanJ] Because of differences in processing models, the combinations can be difficult to manage. 19:42:18 [IanJ] Steven: The XG got together to see what overlapped; they did some implementation work 19:42:27 [IanJ] Steven: challenges: "Since mainstream browsers don't support compound documents in this way, what are the options for implementation?" 19:42:34 [IanJ] Options: 19:42:35 [IanJ] * Server-side 'Compilation' (eg Chiba, Orbeon) 19:42:35 [IanJ] * Client-side transformation (+judicious Javascript) (eg XSLTForms) 19:42:35 [IanJ] * Client-side implementation (Using XBL and/or Unobtrusive Javascript) (eg SVGWeb, AmpleSDK, Ubiquity, FormFaces) 19:42:45 [kaz] kaz has joined #tpac 19:43:06 [IanJ] [Demo of multi-source document] 19:43:14 [IanJ] Conclusion: In the light of the emerging trend to implement XML vocabularies in Unobtrusive Javascript libraries, we recommend work on standardising the interface between the libraries, so that vocabularies can work together seamlessly, and without prior negotiation. 19:43:38 [IanJ] [Questions] 19:44:05 [mmani] mmani has joined #tpac 19:44:08 [IanJ] -> Final Backplane XG report 19:44:18 [IanJ] [Speaker: Dominique Hazael-Massieux] 19:44:23 [IanJ] Title: Cheatsheet for developers 19:44:38 [JF] JF has joined #tpac 19:44:43 [IanJ] -> The Cheatsheet 19:45:06 [IanJ] -> Dom's slides on cheatsheet 19:45:49 [IanJ] [Dom demos the cheatsheet] 19:46:55 [IanJ] [Dom shows the cheatsheet tool gives access to info about wai quicktips, i18n tips, css properties, typography, more] 19:46:57 [Magnus] Magnus has joined #tpac 19:47:07 [IanJ] dom: open source, widget-ready, possible extensions. 19:47:23 [IanJ] dom: am looking for suggestions to make the tool more useful 19:48:12 [IanJ] [No questions] 19:48:17 [IanJ] Roger Cutler: Great! 19:48:26 [fantasai] fantasai has joined #tpac 19:48:26 [IanJ] Speaker: Charles McCathieNevile 19:48:31 [DanC] hmm... I'm trying to look up "following-sibling" and losing. 19:48:33 [timeless] s/Great/How can you question it, it's Great/ 19:48:49 [JereK] JereK has joined #tpac 19:49:18 [Lachy] Lachy has joined #tpac 19:49:23 [IanJ] Title: "Opera Unite" 19:49:38 [IanJ] Charles: We said Opera would revolutionize the Web and we came up with a Web server. 19:49:42 [timeless] [laughter] 19:49:48 [IanJ] Charles: Opera handles IPv6! 19:49:51 [timeless] [Larry: does it have an ipv6 address] 19:49:55 [IanJ] Charles: How to make a widget... 19:50:09 [IanJ] [Slides not available yet] 19:50:14 [dom] DanC, I only have xpath function and operators, probably not xpath axis 19:50:23 [IanJ] Charles: We have a course on creating an Opera widget...will move it to "w3c" widget...need to add one line. 19:50:26 [Judy] Judy has joined #tpac 19:50:33 [jjc] 19:50:33 [IanJ] Charles: Opera unite is a personal web server. 19:51:08 [IanJ] Charles: "Disposable Web-serving" 19:51:18 [IanJ] Charles; Portable domain space in your browser. 19:51:34 [shawn] s/ wai quicktips/ web accessibility quicktips-WCAG 2 at a Glance, HTML Techniques for WCAG 2.0/ 19:51:35 [timeless] s/Charles;/Charlses:/ 19:51:36 [IanJ] Charles: Easy for developers; create a conf file. 19:51:37 [kford] kford has joined #tpac 19:51:41 [timeless] s/Charlses/Charles/ 19:52:11 [IanJ] [Charles shows other things you can do with Opera Unite] 19:52:26 [raman] what namespace is config.xml in? 19:52:37 [IanJ] [Questions] 19:52:48 [IanJ] Steven Pemberton: Is the server running only when Opera is running? 19:53:12 [IanJ] Charles: Yes. It's stuff you only need in some situations; not an enterprise server. E.g., I don't need openid when my machine is turned off. 19:53:19 [IanJ] Speaker: Roger Cutler 19:53:33 [IanJ] Title: Semantic Web in the Oil & Gas Industry, 19:53:43 [jun] jun has joined #tpac 19:53:43 [IanJ] -> Roger Cutler slides 19:54:00 [chaals] --> the 2.5MB that will overload me if everyone does it at once 19:54:40 [chaals] --> the live version (until I turn off my laptop and stop caring and sharing :) ) 19:54:55 [IanJ] Roger: I have gone from being skeptic about sem web in oil and gas to being an evangelist. 19:54:59 [jun] jun has joined #tpac 19:55:20 [IanJ] Roger: We have tons of data! 19:55:35 [IanJ] Roger: our subject matter experts spend most of their time doing information management badly. 19:56:08 [IanJ] Roger: Value proposition came from this apsect of semantic Web 19:56:41 [IanJ] Roger: We hosted a Workshop in 2008. Answered some questions on opportunities in Oil & Gas industry: demonstrated interest; but don't know how to move forward. 19:57:00 [benjick] Abusing the /me are we? :( 19:57:34 [IanJ] [Questions] 19:57:41 [timeless] Laurie: people from the semantic web community 19:57:51 [IanJ] Kai Scheppe: How did you get resources allocated for this effort? 19:57:59 [IanJ] Roger: We have a collaboration with CSOFT 19:58:11 [timeless] ScribeNick: timeless 19:58:18 [timeless] Laurie: Ahamud from ... 19:58:29 [IanJ] Speaker: Arnaud de Moissac (SFR) 19:58:34 [IanJ] s/Laurie/Marie/g 19:58:38 [timeless] s/Ahamud/Arnaud/ 19:58:43 [IanJ] Title: United we(b and net) stand! 19:58:52 [timeless] Arnaud: ... 19:58:53 [caribou] s/Laurie/Marie 19:59:06 [IanJ] -> Arnaud's slides 19:59:11 [timeless] [United We(b and Net) Stand] 19:59:23 [timeless] ... today sometime we can see ... 19:59:34 [timeless] ... net neutrality 19:59:41 [timeless] ... when you read ... 19:59:47 [timeless] ... you can see that people ask for... to have 19:59:52 [timeless] ... the most transparent ... 20:00:01 [timeless] ... What about collaboration? 20:00:07 [timeless] ... what we have to keep in mind... 20:00:08 [JereK] JereK has left #tpac 20:00:12 [timeless] ... is we will always have access issues 20:00:20 [timeless] ... because of mobile access networks 20:00:30 [timeless] ... you have to keep in mind mobile access equipment and routers 20:00:45 [timeless] ... you have to keep in mind that routers will always drop packets in an arbitrary web 20:00:59 [massimowww] massimowww has joined #tpac 20:01:04 [timeless] ... We don't have to add an optimizer to the network 20:01:21 [timeless] ... The web should be able to talk to the network about priority 20:01:35 [timeless] ... As Elisa said in the last talk about IETF 20:01:44 [timeless] ... we need collaboration between the web world and the network world 20:01:48 [timeless] ... the beauty of this system] 20:01:54 [timeless] s/]// 20:01:56 [timeless] ... in the first approach 20:02:00 [timeless] ... we can use only the browser 20:02:08 [timeless] ... that information is set in the css 20:02:20 [timeless] ... a web browser could use this information to get a better experience to the user 20:02:25 [timeless] ... Thank you 20:02:31 [timeless] Maurie: thanks Arnaud 20:02:41 [timeless] Ralph?: 20:02:48 [timeless] ... i wasn't clear if this was a work in progress 20:02:50 [timeless] ... or a proposal 20:02:57 [timeless] Arnaud: it's work in progress in my lab 20:03:01 [timeless] ... the idea of the lightning talk 20:03:05 [timeless] ... is to get your opinion 20:03:09 [timeless] ... does it make sense, is it stupid 20:03:20 [timeless] Maurie: yes, get in touch with our lightning talk speakers 20:03:25 [timeless] ... during the breaks, and discuss with them 20:03:32 [timeless] ... i guess it's time for lunch 20:03:32 [mauro] s/Maurie/Marie/ 20:03:41 [timeless] Ralph: ok, thank you, and Lunch 20:03:47 [timeless] ... we'll reconvene in 90 mins 20:03:48 [Zakim] -shadi 20:04:11 [mauro] ==== ADJOURNED for the morning ==== 20:04:31 [caribou] RRSAgent, make minutes 20:04:31 [RRSAgent] I have made the request to generate caribou 20:04:32 [RRSAgent] I have made the request to generate mauro 20:04:33 [Tobias] haha funny to see the quitting when everyone saw "lunch" :D 20:04:55 [caribou] RRSAgent, this meeting spans midnight 20:04:57 [Nightwolf] Nightwolf has left #tpac 20:05:04 [jun] jun has joined #tpac 20:11:00 [Suresh] Suresh has joined #tpac 20:11:33 [Suresh] Can someone pls post the link to the talk on migration to IPv6? 20:19:23 [malpensante] malpensante has joined #tpac 20:27:18 [soonho] soonho has left #tpac 20:28:55 [Lachy] Lachy has joined #tpac 20:34:21 [tantek] tantek has joined #tpac 20:34:36 [glazou] glazou has joined #tpac 20:34:41 [Marcos] Marcos has joined #tpac 20:34:46 [richardschwerdtfe] richardschwerdtfe has joined #tpac 20:36:47 [richardschwerdtfe] must be a break 20:36:57 [Tobias] Yup, lunch I believe 20:37:07 [richardschwerdtfe] thanks 20:37:22 [Judy] Judy has joined #tpac 20:44:17 [Maua] Maua has joined #tpac 20:47:45 [Norm] Norm has joined #tpac 20:50:03 [jun] jun has joined #tpac 20:53:43 [jjc] jjc has joined #tpac 20:54:54 [Nikunj] Nikunj has joined #tpac 20:56:31 [Nikunj] Nikunj has left #tpac 20:58:06 [cardona507] cardona507 has joined #tpac 20:58:53 [VagnerW3CBrasil] VagnerW3CBrasil has joined #tpac 20:59:36 [Lachy] Lachy has joined #tpac 21:00:08 [Marcos] Marcos has joined #tpac 21:00:29 [abcoates] abcoates has joined #tpac 21:01:13 [fabrice] fabrice has joined #tpac 21:01:45 [gerald] gerald has joined #tpac 21:03:35 [cardona507_] cardona507_ has joined #tpac 21:06:20 [Lachy] Lachy has joined #tpac 21:07:21 [shadi] shadi has joined #tpac 21:08:31 [AxelPolleres] AxelPolleres has joined #TPAC 21:08:38 [gond] gond has joined #tpac 21:09:06 [Zakim] +wiecha 21:09:21 [shadi] zakim, wiecha is really me 21:09:21 [Zakim] +shadi; got it 21:10:28 [Julian] Julian has joined #tpac 21:14:22 [wiecha] wiecha has joined #tpac 21:14:53 [Kai] Kai has joined #tpac 21:15:09 [sylvaing] sylvaing has joined #tpac 21:15:25 [drogersuk] drogersuk has joined #tpac 21:16:08 [rlewis3] rlewis3 has joined #tpac 21:16:29 [zarella_] zarella_ has joined #tpac 21:16:32 [Tobias] tasty lunch? :) 21:17:29 [dbaron] dbaron has joined #tpac 21:18:23 [jun] jun has joined #tpac 21:18:54 [Zakim] -shadi 21:19:33 [marengo] marengo has joined #tpac 21:20:01 [maxf] maxf has joined #tpac 21:20:16 [Rotan] Rotan has joined #tpac 21:20:40 [lbolstad] lbolstad has joined #tpac 21:20:50 [zarella_] zarella_ has joined #tpac 21:20:54 [cardona507] cardona507 has joined #tpac 21:21:35 [John_Boyer] John_Boyer has joined #tpac 21:22:37 [kohei] kohei has joined #TPAC 21:22:38 [FabGandon] FabGandon has joined #tpac 21:22:47 [unl] unl has joined #tpac 21:23:14 [darobin] darobin has joined #tpac 21:23:33 [nickdoty] nickdoty has joined #tpac 21:24:04 [Steven] Steven has joined #tpac 21:24:52 [SCain] SCain has joined #tpac 21:25:58 [wbailer] wbailer has joined #tpac 21:26:06 [Hideki] Hideki has joined #tpac 21:26:17 [bcohen] bcohen has joined #tpac 21:26:30 [adrianba] adrianba has joined #tpac 21:26:32 [nickdoty] nickdoty has joined #tpac 21:27:21 [mauro] mauro has joined #tpac 21:27:55 [EPC] EPC has joined #tpac 21:28:05 [bcohen] bcohen has joined #tpac 21:28:26 [bcohen_] bcohen_ has joined #tpac 21:28:38 [bcohen_] bcohen_ has left #tpac 21:28:49 [Lachy] Lachy has joined #tpac 21:29:10 [Arron] Arron has joined #tpac 21:29:12 [Lachy] Lachy has joined #tpac 21:29:14 [shiki] shiki has joined #tpac 21:29:47 [claudio] claudio has joined #tpac 21:30:11 [tommorris] tommorris has joined #tpac 21:30:13 [satoshi] satoshi has joined #tpac 21:30:41 [MichaelC] MichaelC has joined #tpac 21:31:12 [myakura] myakura has joined #tpac 21:31:43 [Liam] Topic: privacy 21:31:50 [Liam] scribe: Liam 21:31:56 [glazou] glazou has joined #tpac 21:32:17 [annevk] annevk has joined #tpac 21:32:19 [Liam] Rigo Wenning chairing the panel, gives short introduction on privacy... 21:32:32 [greenberg] greenberg has joined #tpac 21:32:37 [Liam] Rigo: why do we care about privacy? For most people it's about spam, intrusive phone calls.... 21:32:39 [shawn] shawn has joined #tpac 21:32:49 [Liam] but it's a human right, it's in most declarations of human rights 21:33:06 [Liam] When I was working on law, I wondered about why we need it, got an answer, it's about autonomy 21:33:23 [DavidC] DavidC has joined #tpac 21:33:27 [Liam] if others know more about us then our ability to express our own opinion runs into trouble... 21:33:31 [glaser] glaser has joined #tpac 21:33:36 [Liam] many difficulties with democratic process 21:33:43 [Steven] zakim, who is on the phone? 21:33:43 [Zakim] On the phone I see MeetingRoom 21:33:54 [Liam] Privacy by design, collections of data... 21:34:03 [youenn] youenn has joined #tpac 21:34:11 [Liam] On this panel we'll have privacy challenges, express concerns, and then we'll open the floor. 21:34:18 [jmorris] jmorris has joined #tpac 21:34:26 [Vladimir] Vladimir has joined #tpac 21:34:31 [Liam] In the 2nd round we'll talk about remedies, how can we put Privacy by Design into the Web 21:34:33 [JF] JF has joined #tpac 21:34:35 [Liam] what are the challengies 21:34:38 [Julian] Julian has joined #tpac 21:34:48 [timbl] timbl has joined #tpac 21:34:54 [W3C] W3C has joined #tpac 21:35:11 [IanJ] * Adam Barth (UC Berkeley) 21:35:11 [IanJ] * Deirdre Mulligan (UC Berkeley School of Information) 21:35:11 [IanJ] * Brad Templeton (Electronic Frontier Foundation) 21:35:11 [IanJ] * Doug Turner (Mozilla) 21:35:11 [IanJ] 14:30 21:35:12 [IanJ] to 21:35:14 [IanJ] 15:30 21:35:16 [IanJ] s/14:30// 21:35:17 [Liam] [Rigo introduces panelists] 21:35:21 [Zakim] +[IBM] 21:35:22 [IanJ] s/15:30// 21:35:30 [wiecha] zakim, [IBM] is wiecha 21:35:30 [Zakim] +wiecha; got it 21:35:31 [nord_c] nord_c has joined #tpac 21:35:47 [masao] masao has joined #tpac 21:35:49 [chaals] chaals has joined #tpac 21:35:57 [Liam] Adam: geolocation, technical way to let an application tell a server where you are 21:36:09 [ht] ht has joined #tpac 21:36:13 [Liam] ...a number of issues... 21:36:14 [glazou] s/Adam/Doug Turner 21:36:15 [tlr] tlr has joined #tpac 21:36:24 [Liam] Web Apps typically don't know where you are 21:36:33 [brutzman] brutzman has joined #TPAC 21:36:35 [cardona507] cardona507 has joined #tpac 21:36:40 [Claes] Claes has joined #tpac 21:36:41 [Liam] they can work out what cll towers are around you, your IP address, etc., but no way to translate that into anything meaningful 21:36:48 [Liam] so we all rely on service providers to do that for us 21:36:49 [glazou] chaals, ask the guys bottom-right of room ? 21:36:54 [maraki] maraki has joined #tpac 21:36:57 [Liam] but that data is typically not free [zero-dollar]. 21:37:01 [vincent] vincent has joined #tpac 21:37:20 [Liam] so if the user browses the web, someone under the covers is doing reverse translation to a location, an address 21:37:28 [mac] mac has joined #tpac 21:37:32 [Liam] and the user isn't invloved, shouldn't be involved, in seeing that 21:37:37 [Kangchan] Kangchan has joined #tpac 21:37:38 [JereK] JereK has joined #tpac 21:37:49 [Liam] so it's up to the implementors to uphold the users' privacy 21:37:53 [pjsg] pjsg has joined #tpac 21:38:01 [JonathanJ] JonathanJ has joined #TPAC 21:38:14 [Liam] at mozilla I do a whole bunch of device stuff, some things are really sensitive, geolocation, also camera 21:38:39 [arun] arun has joined #tpac 21:38:42 [Liam] big privacy concerns with taking a picture and putting it on the Web with someone's mobile device 21:38:49 [Liam] we don't have a good model on the Web 21:39:02 [Liam] right now with iphone you get a dialogue to ask you if it's OK to use your location 21:39:38 [Liam] but you quickly want "grant all", and that's not good, neither is too many questions, and the user doesn't really know what's going on 21:39:52 [Liam] Many web pages today use iframes to embed ads, widgets... 21:40:36 [Liam] imagine you go to a popular web site & they use device access 21:40:59 [Liam] the user goes to the web site, or uses the app, and sees cnn.com or whatever, and the iframe will want to use the location or camera 21:41:19 [Liam] and the dialogue says, "can this site use the information" but the user won't generally notice there's an embedded iframe 21:41:34 [Liam] My suggestion was embedding iframes or embedded content from using device access 21:41:50 [Liam] [next speaker] 21:41:51 [ArtB] ArtB has joined #tpac 21:41:57 [soonho] soonho has joined #tpac 21:41:58 [Judy] Judy has joined #tpac 21:41:59 [Liam] Brad Templeton, cloud applications & privacy 21:42:08 [Liam] [slide 2, explosion] 21:42:18 [Liam] [slide 3, pendulum] 21:42:50 [Liam] Web apps bring us back to timesharing 21:42:57 [Liam] [slide 4, Data out of your hands] 21:43:03 [cheol] cheol has joined #tpac 21:43:12 [mac] mac has joined #tpac 21:43:22 [Liam] no "reasonable expectation of privacy'", no 4th amendment, if the data is out of your hands, e.g. on the cloud 21:43:28 [Liam] so it's like removing a line from the Bill of Rights 21:43:36 [Liam] [slide: 4th amendment, crossed out] 21:43:51 [Zakim] +??P3 21:43:57 [mauro] 21:44:02 [Liam] [slide: facebook reversed signup dynamic] 21:44:04 [shadi] zakim, ??p3 is me 21:44:04 [Zakim] +shadi; got it 21:44:42 [Liam] [slide: we're changing the balance (of how privacy flows)] 21:44:52 [Liam] People should be aware of what's happening 21:45:05 [Liam] [slide: no-one cares about privacy until after it's been invaded] 21:45:12 [Liam] [slide: Ease of use can be a bug!] 21:45:29 [Liam] All the shy people in the room please stand up 21:45:33 [Liam] they never defend their rights 21:45:40 [Liam] some people can't live with being watched 21:46:14 [rkuntsch] rkuntsch has joined #tpac 21:46:21 [mth] mth has joined #tpac 21:46:27 [Liam] If you make it easy for someone to transfer all their data to another site, like the checkbox on facebook, it's easy to ask for, "please give me all your friends and their blood types, how often you had sex with them" 21:46:39 [Liam] people don't put that on forms, but on facebook it's one click 21:46:49 [Liam] [slide; Easy to do is Easy to demand] 21:46:57 [Liam] every site will make you login 21:47:11 [AxelPolleres] AxelPolleres has joined #TPAC 21:47:14 [Liam] mag strip on driver's licence, you go in a bar and they swipe the licence! 21:47:22 [Liam] [slide; user choice can be a bug] 21:47:41 [Liam] click to agree, no negotiation, negotiation only happens with power 21:47:50 [ericP] ericP has joined #tpac 21:47:55 [Liam] how many read those long contracts on the Web? 21:47:58 [Liam] [rigo puts p hand] 21:48:03 [Liam] s/ p / up / 21:48:25 [Liam] [slide: Two choices] 21:48:55 [Liam] more users - can team up, "tin foil hat" people can have our way but not when there are too many servers 21:49:02 [Liam] [slide: cloud inhibits user power] 21:49:06 [kford] kford has joined #tpac 21:49:16 [noah_] noah_ has joined #TPAC 21:49:19 [Liam] BEPSI, bulk export of your private & sensitive information 21:49:26 [Liam] [slide: data exported is lost] 21:49:34 [chaals] chaals has left #tpac 21:49:39 [Liam] [slide: we must take care not to build the infrastructure of a plice state] 21:50:04 [Liam] free-or-policestate switch, don't push this! 21:50:14 [Liam] [slide: arm tanks in the streets] 21:50:18 [Liam] s/arm/army/ 21:50:46 [Liam] we're changing it, if you want to wiretap every citizen, whitehouse can call... and do it 21:50:48 [howard] howard has joined #tpac 21:51:00 [Liam] [slide: china, saudi arabia, future china, nightmare #1] 21:51:05 [Liam] we sell all our technology 21:51:14 [Liam] with wiretap ability 21:51:26 [kawata] kawata has joined #tpac 21:51:28 [Liam] [photo: time traveling robots from the future] 21:51:48 [Liam] what we do is being recorded, the bots of the future will be able to punish you for what you did years ago! 21:51:56 [Liam] [slide: Falun gong on Facebook] 21:52:17 [Liam] Chinese gov decided they didn't like FG, rounded them up. Wouldn't it have been easy if they had all been on facebook? 21:52:27 [IanJ] Note to attendees: feedback survey -> 21:52:29 [AnnB] message re: photo: beware of time traveling robots from future 21:52:31 [ericP] what does "throw sheep at your friends" mean? 21:52:39 [Liam] [it's a facebook app] 21:52:44 [AnnB] farm game in Facebook 21:52:45 [Liam] [next speaker] 21:52:59 [Liam] Privacy is hard, how many have gone on your computer and looked at your privacy settings? 21:53:06 [Liam] e.g. on facebook 21:53:18 [Liam] if you look at someone's friends, you can infer their sexual orientation, for example 21:53:26 [jmorris] Adam Barth 21:53:29 [ddahl2] ddahl2 has joined #tpac 21:53:31 [Lachy] Lachy has joined #tpac 21:53:40 [Liam] netfilix released movie renting data, and you can figure out who 80% of people are, 21:54:08 [Liam] people rent so many movies that as a dimensional space, people are hugely differentiated, so doesn't need much extra info to locate people 21:54:23 [Liam] People are getting excited about cookie blockers 21:54:35 [howard] sorry Karen, for just responding 21:54:35 [Liam] 3rd party cookie blockers don't help your privacy 21:54:49 [Liam] there's an economic incentive for advertisers to know more about you 21:55:12 [Liam] so instead of making the world a harder way to do business, via small privacy leaks, we need an overall solution that can't easily be worked around 21:55:26 [fhirsch] fhirsch has joined #tpac 21:55:28 [Liam] [next panelist, Deirdre Mulligan] 21:55:46 [Liam] What can we do to help privacy online? and what does that even mean in this day & age? 21:55:50 [fo] fo has joined #tpac 21:56:05 [Liam] Brad posed this idea we're heading toward an environment where our data is all over the place, we've lost all control, 21:56:12 [Liam] we're sleepwalking into a surveillance state 21:56:32 [Liam] and as we take our data & have it sucked up by the cloud, it's the same information but it's not in the 4 walls of your house, legal protections gone 21:56:39 [Liam] And that's not a problem you guys can solve 21:56:39 [youenn] youenn has joined #tpac 21:56:47 [Liam] I hope that you'll help, through political action 21:57:08 [Liam] We can change the legal environment,... 21:57:21 [Liam] we want to be able to share information, pics of my kids, e.g., limited to my family 21:57:42 [Liam] but the fact that I put them online shouldn't determine the legal protection, e.g. if the government wants to see my pictures 21:58:00 [Liam] So this question, what does it mean if you're a designer & you want to be sensitive to privacy issues... 21:58:15 [Liam] ..I'd be slightly frustrated, privacy reduced to a series of dialogue boxes... 21:58:41 [timeless] s/dialogue/dialog/ 21:58:42 [Liam] ...reading them could be a full-time job for any of us.. privacy has been left to the lawyers, and we've ended up with this situation... 21:58:47 [nick] nick has joined #tpac 21:59:04 [burn] burn has joined #tpac 21:59:06 [Liam] We don't take a long term view on the data set we're building 21:59:28 [pbaggia] pbaggia has joined #tpac 21:59:35 [Liam] e.g. the protection model of privacy, this is a process-oriented view, that you understand what I'm asking for, and make a decision,.. 21:59:49 [Liam] and then as the person who collected the data I have obligations about how I use it,... 22:00:01 [timeless] [ - Platform for Privacy Preferences (P3P) Project ] 22:00:13 [Liam] but at the end, if we made some huge database we'd still have "privacy" that isn't really privacy at all, everything exposed. 22:00:31 [Liam] So today we're seeing a richer conversation, what might it mean to have a legal perspective on privacy 22:00:39 [Liam] see a paper by Adam Bath [and others] 22:00:47 [Liam] a conservative view on what privacy means and how to protect it 22:01:00 [Liam] you can look at people's mental models, too, how do people expect information to flow? 22:01:07 [Liam] who do they think they're interacting with 22:01:16 [John_Boyer] lol. Is there a way to have it in modules/specXML ? 22:01:18 [Liam] o users understand there's a third party asking to turn on their camera? probably not 22:01:30 [Liam] s/o /Do / 22:01:39 [Liam] Do people understand who they're interacting with? 22:01:42 [silvia] silvia has joined #tpac 22:02:09 [Ralph] -> "Privacy and Contextual Integrity: Framework and Applications"; Barth, Datta, Mitchell, Nissenbaum 22:02:10 [Liam] You probably all remember the sony rootkit drm fiasco, users didn't understand that inserting the CD would install s/w and "phone home" 22:02:30 [Liam] FTC in US loked at this, and said, it's a CD, it looks like a CD, it should act like a CD 22:02:31 [Zakim] +Lalana 22:02:45 [timeless] s/lok/look/ 22:02:46 [Liam] consumers don't understand that [audio] CDs can load software onto a computer, can open a network connection 22:03:01 [Liam] and the consumer shouldn't have to understand complex legal text to learn this.. 22:03:07 [Liam] a more contextual view of privacy 22:03:39 [Liam] So, you might have more work at the front end.. might be different at IETF and W3C, to think about information flow, and where... 22:03:53 [Liam] ...it might be meaningful to develop prompts, and reduce the burden of prompts 22:04:09 [chaals] chaals has joined #tpac 22:04:16 [Liam] Brad: I challenged this in my talk, I don't think notices are the answer 22:04:22 [Liam] Deirdre: yes, we agree 22:04:33 [ArtB] ArtB has joined #tpac 22:04:41 [Liam] Rigo: comments from the floor? 22:04:53 [Liam] Roger Cutler: I'd like to bring up another point of view 22:05:03 [timeless] s/Cutler/Cutler (from Chevron)/ 22:05:04 [Liam] I work for a company that takes its legal & ethical responsibilities seriously 22:05:12 [raman] raman has joined #tpac 22:05:21 [Lachy] Lachy has joined #tpac 22:05:25 [Julian] Julian has joined #tpac 22:05:28 [Liam] it'd be appreciated if you could come up with something simple to comply with, to understand 22:05:46 [rlewis3] rlewis3 has joined #tpac 22:05:57 [Liam] Doug: the usability of ... moizilla has posted a diagram about how information flows 22:06:04 [Liam] something like that for the law might help 22:06:21 [Liam] Deirdre: "why don't you lawyers use a formal language", I get asked by engineers 22:06:26 [timeless] s/moizilla/mozilla/ 22:06:38 [Liam] but negotiation is political, some of the ambiguity you view as problematic, is that people decided to save the battle for another day 22:06:56 [Liam] we want it to be evolutionary, we want to go to court and fight over what it is, so it's not a bug, it's a feature! 22:07:11 [Liam] A student said, wow, you guys don't get a lot of chances to do versioning 22:07:18 [Liam] and I said, no, that's what courts are for! 22:07:36 [Liam] the law doesn't move in Internet time, doesn't change every 6 months, so we often use more open language so it can evolve 22:07:56 [Liam] so if you take something ambiguous and turn it into a yes/no question, you are taking a side 22:08:18 [Liam] Adam: from a user's perspective, hard to find privacy policy on a web page, then hard to understand it 22:08:30 [Liam] tried to find similarities with creative comments 22:08:51 [Liam] categories with how media can be used, e.g. see an icon and it has some type of meaning... probably outside scope of W3C 22:09:00 [dom] s/Adam/DougT/ 22:09:21 [noahm] noahm has joined #tpac 22:09:22 [Liam] having a lay person not having to read tons of text 22:09:24 [Jim] Jim has joined #TPAC 22:09:35 [Liam] Brad: we started something like this, didn't work out :( 22:10:00 [Liam] timbl: Danny W used to come ot these meetings but he's swallowed up by the whitehouse for 2 yrs, but his attitude, privacy shouldn't be about 22:10:02 [timeless] s/this/this ("trustE")/ 22:10:06 [IanJ] timbl: Channeling Danny Weitzner : appropriate use 22:10:08 [Liam] deciding who gets what, but expectations about appropriate use 22:10:15 [timeless] [ ] 22:10:34 [Liam] should I as a facebook user, you should be able to say, e.g. if you're a prospective employer I don't license you to use the info for denying me a job 22:10:48 [Liam] This is being discussed by the neww Provenance XG 22:10:53 [DanC] DanC has joined #tpac 22:10:57 [timeless] s/neww/new/ 22:10:59 [Liam] have to track provenance through all the systems, find appropriate use 22:11:08 [Liam] does the panel think that would work? 22:11:13 [Liam] [some panelist: "no"} 22:11:17 [timeless] [ - Incubator Activity > W3C Provenance Incubator Group ] 22:11:21 [Liam] Rigo: we have 20 minutes left 22:11:23 [Liam] s/}/]/ 22:11:28 [IanJ] -> See the transparent accountable datamining 22:11:36 [Liam] Rigo: data privacy have scared us, but there are solutions 22:11:44 [Liam] I've been working on solutions since 1999 at W3C 22:11:49 [YolandaG] YolandaG has joined #tpac 22:12:08 [Liam] e.g. discussions about data access rights, if people have data about you, in EU, you have right to look at it, correct it, ask them to delete it 22:12:13 [Liam] but it's only paper 22:12:14 [jmorris] s/some panelist/Brad Templeton/ 22:12:23 [Liam] what about data access API? 22:12:38 [Liam] So what are the solutions and challenges to those solutioons? 22:13:02 [Liam] Doug: firs tproblem is accountability, we can't lie to the user 22:13:02 [pbaggia] s/solutioons/solutions 22:13:05 [jmorris] s/ioon/ion/ 22:13:28 [Liam] they're not going to share my pictures, the Web browser can say that happen, e.g. facebook shares all my party pics & I don't get the job, I'm not sure who I am going to blame 22:13:34 [gond] gond has joined #tpac 22:13:36 [Liam] the future employer or the UA? 22:13:43 [Liam] I don't know if there's a technical solution. 22:14:05 [Liam] Some of this happens today. My father has the same name as me. He had an unresolved debt from the 1950s, and I had to sort it out, they started calling me 22:14:25 [Liam] I can't imagine asking facebook, show all the data you have on me, and I get a crate outside my house, or a couple of DVDs, to go through! 22:14:32 [Liam] It's a dichotomy, either you use the service or not 22:14:44 [Liam] when I bought my first house I read every page on that contract... 22:14:45 [timeless] s/firs t/first / 22:14:57 [Liam] and my wife said, look, either you buy the house or not, it's not a negotiation 22:15:12 [Liam] either yuo use facebook and play the sheep game, have sheep thrown at you, or you don't 22:15:19 [timeless] s/yuo/you/ 22:15:26 [Liam] Dan Glazman: you don't have to use facebook 22:15:30 [Liam] to raise privacy issues 22:15:38 [Liam] in Sweden they're using social security number 22:15:45 [Liam] e.g. for a coupon in a gas station 22:15:55 [timeless] s/number/number ("social health number")/ 22:15:55 [Rotan] s/security/health/ 22:16:08 [Liam] and there are computers widely available to to check the social health number 22:16:17 [Liam] it's intensified by the web, but e.g. beaten women are found using it 22:16:29 [Liam] Brad: regulations have a history of failing, ata gets out regardless of the rules 22:16:41 [Liam] and the infrastructure to maintain it becomes intractable, or difficult 22:16:51 [timeless] s/ ata / it / 22:17:01 [Liam] Eurpean philosophy is "the gov needs to know everything about you in order to check your privacy" 22:17:08 [timeless] s/ it / data / 22:17:15 [Liam] I believe we need to try & move the data back into our own control 22:17:16 [Rotan] s/check/protect/ 22:17:17 [glazou] that was "personnummer" 22:17:24 [Liam] change the default about how data is collected 22:17:40 [Liam] I propose data hosting, each user is responsible for getting a small processing power & bandwidth 22:17:47 [timeless] [(Sweden ) "personnummer"] 22:17:49 [Liam] and we ask that the code comes to our machines 22:17:53 [chaals] s/using data/using the data available keyed from the personnummber/ 22:17:53 [glazou] timeless, number (en) = number (sv) 22:17:57 [glazou] er 22:18:08 [glazou] timeless, number (en) = nummer (sv) 22:18:11 [Liam] So we'd go to the other site, and they'd embed an iframe, and it'd be served by our own host 22:18:11 [Eduardo] Eduardo has joined #tpac 22:18:18 [tantek] tantek has joined #tpac 22:18:29 [mac] mac has joined #tpac 22:18:31 [Liam] some kind of VM, sandboxable, cached, would operate on my data on my computer 22:18:37 [Liam] and the results would come to my screen 22:18:51 [Liam] if that's on my own pc it's fast, but there are security issues about running this on your own machine 22:18:56 [Liam] it's a harder engineering challenge 22:19:09 [Liam] "there are things worth doing not because they are easy but because they are hard" - JFK 22:19:34 [Liam] Rigo: data under all user control is one thing, I want to come back to this issue that browsers fear they will be made responsible 22:19:58 [Liam] We had the same issue with the font activity, browsers said we'd be liable if our s/w violates label on fonts 22:20:07 [Liam] Deirdre: I want to push on this idea.. 22:20:18 [Liam] when I was reading spec for geolocation it kept talking about user agent 22:20:31 [timeless] [ laughter ] 22:20:41 [Liam] I said, I assume this is the browser, but it talks about it as if it were my agent, most users don't experience the browser as oding my bidding 22:20:49 [timeless] s/oding/doing/ 22:20:55 [plh] plh has joined #tpac 22:20:56 [Liam] I don't think we have that level of connection to our browser that the term UA suggests 22:21:19 [marie] timeless++ 22:21:24 [Liam] I think it's been a little overhyped 22:21:31 [dom] RRSAgent, draft minutes 22:21:31 [RRSAgent] I have made the request to generate dom 22:21:50 [Liam] The other issue I want to touch on, I hopoe there's enough breadth in the marketplace where... 22:21:57 [timeless] s/hopoe/hope/ 22:22:00 [Liam] data can be local or in the cloud, and law doesn't depend on the data's location 22:22:13 [Liam] ability to process data might be different for different devices 22:22:21 [Liam] so wouldn't want the legal framework to drive solutions 22:22:30 [Liam] and want to go back to complexity issues, right now, 2 choices... 22:23:02 [Liam] (1) in context of location wg, privacy as a matter of policy, don't develop mechanisms to support ways for people to express info flows 22:23:06 [Liam] you'll end up with hippos 22:23:07 [timeless] hipaa - 22:23:15 [timeless] The Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule 22:23:21 [timeless] ScribeNick: timeless 22:23:23 [Liam] or you can try to make some lightweight principles, e.g. do not re-transmit, one time use 22:23:25 [timeless] do not retransmit 22:23:27 [timeless] ... one time use 22:23:36 [timeless] ... do not make people think before the transmit information 22:23:41 [timeless] ... it's not just ... 22:23:48 [timeless] ... it's not just that it tells you how tall she is 22:23:54 [timeless] ... it's that it lets you locate her 22:24:04 [timeless] ... so legally people are going to want this information protected 22:24:14 [timeless] ... the dominant uses for information in the us is young people 22:24:15 [Ralph] s/hoppos/HIPPA 22:24:22 [timeless] ... relying on consent ... 22:24:27 [Ralph] s/HIPPA/HIPAA/ 22:24:30 [timeless] ... you have an opportunity to think about the 22:24:38 [timeless] ... it's going to be way worse if you wait 22:24:45 [timeless] Doug: I don't think we picked UA to be 22:24:48 [timeless] ... an enduring term 22:24:52 [timeless] ... it's a technical term 22:24:59 [timeless] ... browser user agent .... 22:25:06 [Rotan] Rotan has joined #tpac 22:25:13 [timeless] ... the other thing is that you know, browsers have worked a long time to sandbox content 22:25:16 [timeless] ... and our ui 22:25:19 [timeless] ... for spoofing reasons 22:25:26 [timeless] ... you don't want to go to a site that puts our ui up 22:25:33 [timeless] ... and 22:25:48 [timeless] ... so there's an idea of sandboxing content from chrome 22:25:53 [timeless] ... [not google chrome] 22:26:04 [timeless] ... the idea is that any time the user sees a ui from our agent 22:26:17 [AndyS] AndyS has left #tpac 22:26:23 [timeless] ... we do a lot of work to make sure we're sure that what we show is accurate 22:26:28 [timeless] ... if we bring a dialog down 22:26:41 [timeless] ... we want an expectation to be sure that what we said is what actually happened 22:26:47 [timeless] ... you can put something in HTML 22:27:02 [timeless] ... in a DIV... that claims "we won't retransmit" 22:27:08 [timeless] ... but that isn't the best thing to do technically 22:27:15 [timeless] someone: Jeremy? 22:27:18 [timeless] someone-else: 22:27:20 [timeless] ... it seems to me 22:27:23 [timeless] ... that 22:27:25 [timeless] ... 2 items 22:27:29 [timeless] ... legal rememdies 22:27:31 [Liam] s/Jeremy/Jeremy Carrol/ 22:27:32 [chaals] Jeremy Carroll, TopQuadrant 22:27:32 [timeless] ... technical rememdies 22:27:41 [timeless] ... it's only the legal end that really works 22:27:44 [Liam] s/Carrol/Carroll, TopQuadrant/ 22:27:47 [timeless] ... the technical end is doomed to failure 22:27:55 [timeless] ... I go into a shop, i buy my groceries 22:28:01 [timeless] ... unless i hide my face, 22:28:05 [timeless] ... and change my clothes 22:28:16 [timeless] ... there's nothing that can be done 22:28:22 [timeless] ... we have to be public people in public spaces 22:28:26 [timeless] ... we're social animals 22:28:30 [timeless] ... privacy is a concept of the law 22:28:38 [timeless] ... we need to have societies that we trust enough 22:28:47 [timeless] ... to have frameworks that we trust enough 22:28:52 [timeless] ... instead of cheating on us 22:29:00 [timeless] Henry Thomson (Univ Edinborough): 22:29:02 [sandro] sandro has joined #tpac 22:29:07 [timeless] ... I was intrigued by X's 22:29:12 [timeless] ... and tried to come back to it 22:29:13 [Liam] s/Thomson/Thompson/ 22:29:19 [timeless] ... i'm enough of a geek to try to manage my data 22:29:25 [timeless] ... i have a server somewhere, it's "my server" 22:29:30 [kawata] kawata has left #tpac 22:29:31 [Lachy] Lachy has joined #tpac 22:29:32 [timeless] ... but it's not in my space that i actually control 22:29:35 [BryanSullivan] BryanSullivan has joined #TPAC 22:29:36 [timeless] ... i rent it from somewhere 22:29:44 [timeless] ... let's say that the law says that it's mine 22:29:49 [timeless] ... let's say that i back up my data 22:29:59 [timeless] ... the value of the backup is that it's not in the same physical location 22:30:05 [sandro] +1 jjc --- we've lost our privacy, technically, walking around in public spaces, shopping, etc. 22:30:06 [timeless] ... i back it up in the cloud (amazon) 22:30:14 [timeless] ... I don't encrypt my data 22:30:23 [timeless] ... I need a legal remedy 22:30:29 [timeless] ... I can't manage it all by myself 22:30:37 [timeless] ... and there's no question that my father in law can 22:30:42 [timeless] panelist: 22:30:47 [timeless] ... you could encrypt the backup you send out 22:30:53 [Liam] s/panelist/Brad/ 22:30:56 [IanJ] Brad: Neither law nor technology provides a complete solution (alone). 22:31:00 [timeless] ... or you could have the backup server legally defined as your property 22:31:07 [chaals] [In a small village, privacy is a different beast] 22:31:09 [adrianba] s/Edinborough/Edinburgh/ 22:31:14 [timeless] .... the law isn't intended to protect small institutions 22:31:22 [timeless] someone-s: 22:31:22 [marisol] marisol has joined #tpac 22:31:24 [chaals] Nikunj Mehta 22:31:30 [timeless] ... can we address the privacy fears we have 22:31:32 [chaals] s/someone-s/Nikunj Mehta/ 22:31:38 [timeless] ... using good sharing techniques 22:31:48 [timeless] ... as with digital rights techniques 22:31:54 [chaals] s/Nikunj Mehta// 22:31:55 [timeless] ... that are used by large companies 22:32:04 [timeless] panelistx-: 22:32:08 [timeless] ... prime rights (?) 22:32:15 [timeless] ... there are large parallels between large data 22:32:24 [timeless] ... we'll have a lightning talk on this later 22:32:27 [Liam] [rigo: w3c participates in ] 22:32:30 [timeless] Frederick Hirsh (Nokia): 22:32:40 [timeless] ... technically any failure with privacy is a complete failure 22:32:40 [PIon] PIon has joined #TPAC 22:32:45 [kawata] kawata has joined #tpac 22:32:45 [timeless] ... you have information, it gets out 22:32:49 [timeless] ... you're done 22:32:49 [Liam] s/panelistx-/Rigo/ 22:32:52 [Steven] rrsagent, make minutes 22:32:52 [RRSAgent] I have made the request to generate Steven 22:32:57 [andrew] s/panelistx-:/Rigo Wenning:/ 22:32:57 [timeless] ... legally, it sounds like a boil the ocean 22:33:05 [timeless] ... if ... it's cumbersome 22:33:18 [timeless] ... I'm worrying about being overwhelmed 22:33:21 [timeless] ... having to read checkboxes 22:33:28 [cheol] cheol has joined #tpac 22:33:30 [timeless] panelist-y: 22:33:32 [timeless] ... i don't think so 22:33:36 [timeless] ... there are efforts to make sure 22:33:41 [timeless] ... we did step in and pass this law 22:33:49 [timeless] ... called the electronic information privacy act 22:33:52 [jmorris] s/panelist-y/Deirdre Mulligan/ 22:34:01 [timeless] ... designed to give the same protection for email as for mail 22:34:08 [timeless] ... the way the justice dept uses this statue 22:34:15 [timeless] ... might turn on whether you've opened it or not 22:34:23 [timeless] ... the law might change based on how old it is 22:34:30 [Marcos] Marcos has joined #tpac 22:34:32 [timeless] ... if you pulled the data down 22:34:36 [timeless] ... if it was used for processing 22:34:43 [timeless] ... or was used by an information service 22:34:47 [timeless] ... at the time this was passed 22:34:52 [timeless] ... we thought the content was what mattered 22:34:54 [IanJ] content v. identity 22:34:59 [timeless] ... and the identity wasn't considered 22:35:10 [timeless] ... what we know now is that something can let people know that you're gay 22:35:16 [timeless] ... today what we have is people who are posting 22:35:25 [timeless] ... and the privacy they want is their identity 22:35:26 [IanJ] [interesting: shift from protecting content but not identity to the inverse] 22:35:38 [timeless] ... law is a way that lets people express national concerns about privacy 22:35:44 [timeless] ... that might be good to some extent 22:35:48 [timeless] ... but it might be bad in others 22:36:00 [timeless] ... -- not one size fits all -- 22:36:10 [timeless] ... how information should flow / and how it should be shared 22:36:13 [timeless] x: 22:36:15 [timeless] ... about 22:36:19 [timeless] Rigo: 22:36:23 [timeless] ... we've consumed our one hour 22:36:26 [timeless] [applause ] 22:36:34 [timeless] ... thanks a lot 22:36:50 [timeless] -> next set of panelists 22:37:05 [ht] ScribeNick: ht 22:37:11 [ht] Scribe: Henry S. Thompson 22:37:15 [IanJ] rrsagent, make minutes 22:37:15 [RRSAgent] I have made the request to generate IanJ 22:37:59 [ht] Topic: Web Apps vs App. Stores 22:38:13 [ht] Chair: Robin Berjon 22:38:58 [Zakim] -Lalana 22:42:27 [ht] RobinB: Panel about Web Apps, App Stores and surrounding technology 22:43:03 [mac] mac has joined #tpac 22:43:03 [ht] ... What's the difference between using a mail program, and using a mail-reading webapp 22:43:11 [mac] mac has joined #tpac 22:43:35 [ht] ... The functional difference is vanishing, and the client/server distinction doesn't mean anything to our users 22:43:53 [ht] ... So when we talk about this as important, we are in a sense behind our users 22:44:19 [shiki] shiki has joined #tpac 22:44:23 [ht] RobinB: There are differences: Some webapps are accessed directly in the browser 22:44:53 [ht] ... whereas others are downloaded as zipped packages and installed in the browser more in the way that traditional apps are installed 22:45:02 [Liam] Liam has joined #tpac 22:45:07 [Rotan] Rotan has joined #tpac 22:45:21 [ht] RobinB: Questions to ask: are these different from the security perspective ? 22:45:27 [ht] ... or is it just convenience? 22:46:22 [ht] RobinB: From the business perspective, should we explore how to monetise webapps for developers? "405 payment required"? 22:47:15 [Ileana] Ileana has joined #TPAC 22:47:17 [Ralph] Nick Allott (OMTP) 22:47:20 [mac2IPO] mac2IPO has joined #tpac 22:47:28 [ht] sprk1: What is the probability/possibility of webapps replacing traditional apps? 22:47:39 [ht] ... What are the important different classes of webapps? 22:47:50 [ht] s/sprk1/NickA/ 22:48:34 [AxelPolleres] AxelPolleres has joined #TPAC 22:48:41 [ht] NickA: Consider BBCiPlayer on iPhone [slide 1] 22:50:21 [ht] ... three main options (flash/streaming media+native viewer/HTML5 <video>), either via Web2.0 or a Widget 22:50:58 [ht] ... [missed some] 22:51:07 [ht] ... normal native app 22:51:19 [ht] ... develop as webapp, but compile into native 22:51:43 [glazou] glazou has joined #tpac 22:51:46 [ht] NickA: Consider Toodledo on iPhone 22:52:06 [ht] Four alternatives: Web 2.0, online 22:52:15 [ht] ... HTML 5, same, but also offline 22:52:40 [ht] ... Widgets + DAP, offline, with access to your native data, e.g. contacts 22:52:43 [ht] ... Native 22:53:18 [ht] s/iPhone/iPhone, a simple calendar+email+contacts app/ 22:54:27 [ht] NickA: W3C role here? W3C gives breadth, and low cost (because of RF requirement) 22:55:02 [ht] ... Some particular WGs are important here -- e.g. DAP 22:55:25 [ht] ... [an equation between AppStores and Widgets I didn't quite get] 22:55:48 [ht] ... AppStore tends to be one-off payment 22:56:03 [ht] ... Cloud-based tends to be subscription payment 22:56:08 [Zakim] -wiecha 22:56:42 [ht] NickA: Challenge -- policy and privacy as approached by HTML WG is different from that of the DAP WG 22:57:26 [ht] Chaals: W3C Widgets - Editor's perspective 22:57:43 [LeeF] LeeF has joined #tpac 22:58:03 [mgylling] mgylling has joined #tpac 22:58:05 [ht] s/W3C Widgets - Editor's perspective/WebApps could be anything/ 22:58:29 [ht] ... A widget has a bit of pedigree, a bit more of a guarantee 22:58:59 [ht] ... In the middle, an AppStore, you get a packaged WebApp with _some_ guarantee of quality 22:59:54 [ht] Chaals: For a Widget Store looking at a W3C-compliant widget, there is some ability to look into the widget code and confirm some properties 23:00:14 [ht] ... so there is some basis for establishing some trust in the quality 23:00:42 [ht] Chaals: But consider WebApps again -- how many people use Google apps? [hands go up] 23:01:03 [ht] ... You do, and you trust them, because of where they come from, not because of any inspection of the inside 23:01:23 [ht] ... And that's the same as has always been the case, going back to DOS applications in a cardboard box 23:01:43 [ht] Chaals: None the less, it's a step forward to be able to look inside if you choose to 23:02:09 [ht] ArunR: There's a "versus" in the title 23:02:22 [ht] ... I don't feel very adversarial towards AppStores 23:02:28 [ht] ... but there are questions 23:02:29 [rigo] rigo has joined #tpac 23:02:46 [burn] burn has joined #tpac 23:02:50 [ht] ArunR: Coming out of WG meetings earlier in the week 23:03:01 [dom] (thanks very much to the panelists for agreeing to join this panel at the last minute) 23:03:18 [ht] ... The similarities between Widgets and WebApps are superficial, I suggest 23:03:49 [ht] ArunR: On the one hand, you can build them in the same way, using the same maybe-W3C technologies 23:04:08 [Steven] rrsagent, make minutes 23:04:08 [RRSAgent] I have made the request to generate Steven 23:04:10 [ht] ... but WebApps run in a web-like hyperlinked-model-based way 23:04:42 [ht] ... whereas the Widget runs in a more encapsulated way, maybe on the desktop 23:04:58 [ht] ... The zipfile is constrained, it's not the same as a web page 23:05:15 [ht] ... So maybe these are cosmetic differences, but the model _is_ different 23:05:45 [ht] ArunR: HTML 5 will let you build a music WebApp with playlists and actual audio output 23:06:17 [ht] ... Or to get at geoloc info, orientation, multitouch aspects of the webapp-hosting-device 23:06:32 [ht] ... This is a triumph for the Web stack and Javascript 23:06:58 [ht] ... Privacy and security are however the location of a major difference between the two models 23:07:09 [ht] MarcesC: [slides] 23:07:21 [ht] ... W3C Widgets - Editor's perspective 23:08:00 [ht] ... I've been editting this spec. for a number of years, initial as part of my PhD 23:08:05 [maxf] s/MarcesC/MarcosC/ 23:08:05 [howard] howard has joined #tpac 23:08:14 [ht] ... How can we build a universal application packaging format, that can be used anywhere? 23:08:32 [ht] ... Longevity -- last 100 years 23:08:44 [ht] ... Similar to HTML 5 23:09:26 [ht] ... Widgets want to do the same for applications as HTML 5 does for documents in this respect 23:10:11 [ht] MarcesC: We want a universal platform, built on open standards, so no IDE has to be purchased 23:10:28 [Steven] s/Marces/Marcos 23:10:50 [ht] s/Marces/Marcos/g 23:11:08 [ht] MarcosC: Security and policy is a very important issue 23:11:36 [ht] ... Putting all your data into a corporate basket is risky, without being critical of any particular corporation 23:12:05 [ht] ... So a goal for widgets is to enable data to be kept local 23:12:29 [ht] ... A hybrid model is baked in -- client/server balance 23:12:55 [ht] ... Concerned with support for monetization 23:13:21 [ht] ... Pressure for encryption, but inconsistent with 'View Source' 23:13:38 [ht] ... Just live with it -- be better than the competition, and you will win 23:13:52 [ht] ... There are plenty of ways to make money 23:14:08 [ht] RobinB: Floor is open for questions 23:14:41 [ht] MikeChampion: What about the other side? No-one from Apple? It looks like the market has voted for the AppStore, not the Widgets? 23:15:04 [cheol] cheol has joined #tpac 23:15:10 [ht] ArunR: Not adversarial -- OK to use both 23:15:24 [ht] ... Why no monetization model behind Firefox extentions? 23:15:35 [rahul] rahul has joined #tpac 23:15:37 [ht] s/extention/extension/ 23:15:56 [ht] Glazou: I disagree there's not adversarial 23:16:13 [ht] ... Consider the iPhone -- I cannot download any application I want to 23:16:26 [ht] ... Whereas I can to my browser 23:16:42 [ht] ... I'm afraid this will close off the user's freedom 23:17:12 [ht] Glazou: Compare a ?? clone on a iPhone and a Nintendo 23:17:24 [Kai] s/monetization/monetarization 23:17:25 [ht] ... The price differential is huge, and will kill ??? 23:17:29 [glazou] s/??/Mario Kart 23:17:50 [Judy] Judy has joined #tpac 23:17:58 [DanC] DanC has joined #tpac 23:17:59 [dom] s/???/this industry 23:18:01 [ht] Chaals: Money talks 23:18:25 [ht] ... We tried to find a way to develop micropayments, but never managed it 23:18:44 [ht] ... Credit card payments worked well enough to get us going 23:19:00 [ht] ... But there are problems, and there's work to be done now to try to fix that 23:19:33 [ht] ... In the long term we have to solve the challenge of the Apple iPhone appstore 23:19:41 [ht] ... but for now multiple channels will work 23:20:06 [ht] Chaals: Coming back to the adversarial point -- not necessarily that way 23:20:20 [ht] ... After all, some people pay for some content on the Web 23:20:35 [ht] [scribe not getting all of Chaals's examples] 23:21:01 [ht] Chaals: The fact that it's a zipfile, instead of zipped on the wire, isn't a big deal 23:21:15 [ht] ... A file on disk, or a transient webpage -- again, not a big deal 23:21:43 [ht] NickA: Widget appstore already exist 23:21:56 [ht] ... Crucial point -- they can be horizontal, i.e. cross-platform 23:22:11 [ht] ... and that's a real difference wrt the AppStores we see today 23:22:55 [ht] ArunR: There is a difference, it's a cosmetic difference, and users will be aware of them 23:23:03 [ht] ... And there will be security differences 23:23:05 [dom] (it’s more than cosmetic, I think — it’s a different user experience) 23:23:42 [ht] MarcosC: There are implementations which run Widgets on the server and serve the result as embedded iframes 23:23:56 [jallan] jallan has joined #tpac 23:24:07 [ht] ... If they get digitally signed, the potential to share them will be reduced 23:24:37 [ht] NoahMendelsohn: Following up on the cross-platform aspect, and what people value 23:24:53 [ht] ... Yes money is being made via mobile apps from a store 23:25:22 [ht] ... If you're an airline, you make your money from the ticket, not from the applet which signals flight delays 23:25:39 [ht] ... Zero-download is what you want 23:26:04 [ht] ... If you want to hit 90% of the smartphones that are out there, you currently need order of 5 versions 23:26:16 [ht] [someone]: Much more than 5 23:26:30 [ht] [someone else]: Same as with browsers 23:26:46 [ht] NoahM: I don't think that's accurate -- android differs from iPhone much more 23:26:57 [ht] ... Cross platform is going to be very variable 23:27:01 [mauro] s/[someone]/Glazou 23:27:19 [mauro] s/[someone else]/MarcosC/ 23:27:24 [JonathanJ] JonathanJ has joined #TPAC 23:27:32 [ht] Glazou: I wanted to hear this browser+offline storage, you can reproduce iTunes 23:27:51 [ht] ... that will allow us to kill this [AppStore?] model -- let's do it 23:28:15 [IanJ] TBL: What's different between widgets and web apps - question of trust. 23:28:17 [ht] Timbl: The difference is, as ArunR said, the way users manage it -- how it's loaded and stored 23:28:23 [glazou] s/browser+offline storage/browser+offline/+localStorage+deviceAPI 23:28:24 [IanJ] ...I tend to trust the things in my cache 23:28:52 [glazou] s/browser+offline storage/browser+offline+localStorage+deviceAPI 23:28:55 [ht] Timbl: There used to be a way to bookmark pages for offline browsing 23:29:10 [ht] ... controlling what's costing local resource is important 23:29:13 [JonathanJ] rrsagent, make minutes 23:29:13 [RRSAgent] I have made the request to generate JonathanJ 23:29:23 [ht] Timbl: Maybe we should go back to look at micropayments again 23:29:39 [ht] ... it is very frustrating to have to talk to the ISP at every airport 23:29:52 [ht] ... Skype now brokers that for me, and I'll pay more for that 23:30:29 [Judy] Judy has joined #tpac 23:30:30 [ht] Chaals: Who has made a transaction of more than 10USD [everyone] 23:30:59 [ht] ... Anyone made a single self-contained payment of less than .50USD [almost no-one] 23:31:11 [glazou] tantek, the key here is localStorage 23:31:17 [ht] ... How many people spend less than 3USD/day 23:31:51 [ht] DanAppelquist: Vodafone is commited heavily to Widgets, and we're getting very positive feedback from developers 23:32:07 [ht] ... not just monetization, but also ease of development, route to market, etc. 23:33:02 [ht] LarryMasinter: Thinking about the difference -- what is the effect of bringing into Widgets all the error-recovery logic from HTML 5 23:33:18 [ht] ... It's not helping the security model to do this 23:33:39 [rkuntsch] rkuntsch has joined #tpac 23:33:56 [ht] [back and forth about the generality of Widgets as packaging] 23:34:15 [ht] MarcosE: We use error handling as a means to extensibility 23:34:30 [ht] ArunR: In theory the Widget package will run on any runtime 23:34:32 [IanJ] rrsagent, make minutes 23:34:32 [RRSAgent] I have made the request to generate IanJ 23:34:47 [noahm] noahm has joined #tpac 23:35:04 [DKA] That URI for the €1M widget developer give-away is: 23:35:05 [ht] ... but in practice we may want different runtimes for the web browser or the mobile device 23:35:27 [ht] ... The cool thing is that they all get developed using the Web stack 23:35:42 [ht] Chaals: I want to question the assumption that the security model is different 23:36:04 [mgylling] mgylling has joined #tpac 23:36:08 [ht] ... If you trust Widgets from a particular provider, you may use a different security model 23:36:18 [ht] ... Same thing wrt apps from trusted providers 23:36:38 [ht] ... In either case you make your decision about trust based on the provider 23:36:51 [ht] ArunR: Respectfully disagree 23:37:03 [ht] ... You're connecting the trust model and the security model 23:37:08 [ht] Chaals: Yes 23:37:18 [ht] [applause] 23:37:23 [DanC] (I think the security models are different. I haven't studied it closely, but... for example, the security implications of following <img src=""> links in HTML email and normal web browsing are different) 23:37:27 [cardona507] arun works for who? 23:37:33 [DanC] mozilla 23:37:44 [ht] RalphS: Adjourned until 1600 23:38:01 [FabGandon] FabGandon has left #tpac 23:38:06 [ht] rrsagent, make minutes 23:38:06 [RRSAgent] I have made the request to generate ht 23:38:40 [Zakim] -MeetingRoom 23:38:42 [ingmar] ingmar has joined #tpac 23:38:42 [nick] nick has joined #tpac 23:39:12 [abcoates] abcoates has left #tpac 23:40:04 [ingmar] ingmar has joined #tpac 23:40:47 [jallan] jallan has joined #tpac 23:41:13 [Zakim] -shadi 23:44:10 [Marcos] Marcos has joined #tpac 23:44:19 [mac2IPO] mac2IPO has joined #tpac 23:45:52 [mauro] mauro has joined #tpac 23:46:06 [vincent] vincent has joined #tpac 23:46:32 [nick] nick has joined #tpac 23:51:40 [glazou] glazou has joined #tpac 23:52:40 [mac2IPO] mac2IPO has joined #tpac 23:53:36 [unl] unl has joined #tpac 23:57:06 [lbolstad] lbolstad has joined #tpac 23:59:29 [kawata] kawata has left #tpac 00:00:04 [kawata] kawata has joined #tpac 00:00:39 [claudio] claudio has joined #tpac 00:01:07 [howard] howard has joined #tpac 00:02:48 [cheol] cheol has joined #tpac 00:03:16 [wbailer] wbailer has joined #tpac 00:03:23 [Lachy] Lachy has joined #tpac 00:03:41 [Zakim] +apis-db-stuff 00:03:55 [AxelPolleres] AxelPolleres has joined #TPAC 00:04:17 [marengo] marengo has joined #tpac 00:04:29 [renato] renato has joined #tpac 00:04:31 [Karen] scribenick: Karen 00:04:37 [howard] howard has joined #tpac 00:04:39 [Zakim] -apis-db-stuff 00:04:40 [JereK] JereK has joined #tpac 00:04:58 [Karen] Session 6: Future of the Social Web 00:05:04 [Steven] Steven has joined #tpac 00:05:09 [Julian] Julian has joined #tpac 00:05:09 [Karen] Moderator: Daniel Appelquist, Vodafone 00:05:15 [darobin] darobin has joined #tpac 00:05:27 [Zakim] +apis-db-stuff 00:05:27 [Steven] rrsagent, make minutes 00:05:27 [RRSAgent] I have made the request to generate Steven 00:05:33 [Kai] Kai has joined #tpac 00:06:01 [youenn] youenn has joined #tpac 00:06:04 [mattmay] mattmay has joined #tpac 00:06:12 [Karen] Speakers: ren> Moderator: Daniel Appelquist, Vodafone 00:06:12 [Karen] [19:05] * darobin has joined #tpac 00:06:12 [Karen] [19:05] <Zakim> +apis-db-stuff 00:06:12 [Karen] [19:05] <Steven> rrsagent, make minutes 00:06:12 [Karen] [19:05] <RRSA 00:06:22 [timeless] [ wine drawing ] 00:06:39 [Karen] Topic: Future of the Social Web 00:06:55 [Karen] Dan: Hi Everyone 00:06:59 [Karen] Welcome back from the break 00:07:07 [timbl] timbl has joined #tpac 00:07:07 [Karen] ...I work for Vodafone 00:07:13 [mac] mac has joined #tpac 00:07:14 [Karen] ...here to present panel on Social WEb 00:07:18 [maraki] maraki has joined #tpac 00:07:19 [annevk] annevk has joined #tpac 00:07:26 [Karen] ...when I come before you, you are used to hearing about widgets and mobile web 00:07:27 [mac] mac has joined #tpac 00:07:32 [Karen] ...and how cool that is 00:07:37 [Karen] ...But I work on other stuff 00:07:47 [Karen] ...Social networking is a topic I have picked up over last couple of years 00:07:52 [Karen] ...Of intense interest to me 00:07:56 [Karen] ...part of the future of communication 00:08:11 [Karen] ...I use this phrase internally to make sure people understand why they should be interested in social networking 00:08:25 [Karen] ...how people communicate in and through social channels in structured ways 00:08:30 [brutzman] brutzman has joined #TPAC 00:08:33 [ArtB] ArtB has joined #tpac 00:08:34 [Karen] ...New ways that were hard to imagine a few years ago 00:08:48 [zarella__] zarella__ has joined #tpac 00:08:49 [Karen] ...Like to introduce our guest speaker David Recordon from FaceBook 00:08:54 [Karen] ...And Adam Boyet form Boeing 00:08:59 [Karen] ...then discussion 00:09:05 [Karen] Adam: Switched the batting order 00:09:06 [timeless] s/form/from/ 00:09:12 [jun] jun has joined #tpac 00:09:22 [Karen] ...I'd like to share what we have been doing with social web inside of Boeing 00:09:25 [Karen] ...we're a huge company 00:09:37 [Karen] ...research and design facilities around the world 00:09:51 [Karen] ...but social web not just for big companies; value for small companies, too 00:10:03 [Karen] ...Sometimes as technologists, we look at tech perspective 00:10:07 [Zakim] +Salon_1 00:10:13 [Karen] ...but we're also looking at it from employee's perspective 00:10:22 [Karen] ...Think about discoverability 00:10:26 [tlr] tlr has joined #tpac 00:10:28 [Karen] ...how to improve across company 00:10:36 [noahm] noahm has joined #tpac 00:10:37 [Karen] ...Reusability rather than start over 00:10:51 [Karen] ...Redundancy: could be similar groups working on same technologies 00:11:01 [Karen] ...somebody on air frames and satellites 00:11:06 [Karen] ...both trying to get moisture out 00:11:06 [arun] arun has joined #tpac 00:11:17 [Zakim] +Ralph 00:11:22 [Karen] ...Visibilty: related to redundancy but sprinkle security in there 00:11:34 [Karen] ...Security adds dynamicness" 00:11:44 [Zakim] -MeetingRoom 00:11:52 [Karen] ...One of ways we have addressed is by introducing patterns from social web inside of Boeing 00:11:53 [mmani] mmani has joined #tpac 00:11:55 [Zakim] -apis-db-stuff 00:12:02 [Karen] ...inSite is where Boeing employees can create an identity 00:12:02 [plh] plh has joined #tpac 00:12:05 [Zakim] +Salon_1 00:12:13 [Karen] ...opt in and out, share photos, resumes, what they choose 00:12:22 [Karen] ...They can help each other out, ask questions, search for people 00:12:32 [Karen] ...Supposed I want to find a structural analysis person 00:12:39 [Daniel-Park] Daniel-Park has joined #tpac 00:12:41 [Karen] ...And expert who worked on this particular air frame 00:12:47 [Karen] ...Maybe help to peer review something 00:12:54 [Karen] ...inSite allows people to publish their thoughts 00:13:00 [Karen] ...Very low entry barrier for that 00:13:00 [Zakim] -Ralph 00:13:04 [raman] raman has joined #tpac 00:13:08 [Karen] ...You an share information; links, white paper, PPT, video 00:13:11 [Karen] ...Can share easily 00:13:13 [Roger] msg AnnB asks if you could please send me these slides? 00:13:16 [Karen] ...You can create groups 00:13:23 [Zakim] +Ralph 00:13:25 [Karen] ...These groups find each other and can collaborate 00:13:35 [tantek] tantek has joined #tpac 00:13:41 [Zakim] -MeetingRoom 00:13:44 [Karen] ...Then you have a place where experts can collaborate more effectively and securely 00:13:49 [Zakim] -Ralph 00:13:50 [Karen] ...Can secure only to the group... 00:14:01 [Karen] ...We make it easy; declaratively tag that 00:14:08 [Karen] ...balance between public and secure content 00:14:15 [Karen] ...raise awareness, find that serendipitous person 00:14:16 [Zakim] W3C_TP(*)11:30AM has ended 00:14:17 [Zakim] Attendees were MeetingRoom, Ralph, +46.7.06.02.aaaa, +1.408.644.aabb, shadi, wiecha, Lalana, apis-db-stuff 00:14:25 [Karen] ...75% of it is available on your Blackberry device 00:14:29 [Zakim] W3C_TP(*)11:30AM has now started 00:14:30 [Zakim] +Salon_1 00:14:37 [Karen] ...You can also do on your iPhone, although we don't support 00:14:43 [Karen] ...Goal to get the workforce connect 00:14:46 [Zakim] -Salon_1 00:14:55 [Karen] ...The approach we took was looking at social patterns from the web 00:15:08 [Karen] ...Content aggregation, open culture, patterns around Q&A, recommending 00:15:14 [Karen] ...Looked at patterns from service providers 00:15:23 [Karen] ...We looked at how to use this pattern to add value to the company 00:15:31 [Karen] ...This is approach we took with inSite 00:15:36 [Karen] ...So how it was built; all Java based 00:15:41 [Karen] ...Use open source frameworks 00:15:49 [Karen] ...We use Oracle, we have an enterprise license 00:15:54 [Karen] ...You can see functional componenets 00:15:58 [Karen] ...Share and ask it 00:16:08 [Karen] ...On everybody's browser, click to ask question 00:16:09 [Zakim] +Salon_1 00:16:12 [timeless] s/Share/Share It!/ 00:16:14 [Karen] ...It searches previously asked questions 00:16:17 [timeless] s/ask it/Ask It!/ 00:16:20 [Karen] ...and go ahead and ask question 00:16:30 [Karen] ...to people who may be experts on topic 00:16:36 [Karen] ...Get to people you may not know exist 00:16:38 [Zakim] +Ralph 00:16:45 [Karen] ...Couldn't find him any other way without social patterns 00:16:51 [Karen] ...Boos, tagging, etc. 00:16:57 [Karen] ...Profile a huge part of that 00:17:01 [Karen] ...Search is ubiquitous 00:17:02 [Zakim] -Ralph 00:17:04 [timeless] s/Boos/Bookmarks/ 00:17:13 [Karen] ...We're straddling line between secure and public content 00:17:17 [Karen] ...Share where possible 00:17:23 [Karen] ...but not always with all technical info 00:17:30 [Karen] ...Public info shows up in enterprise search clients 00:17:36 [Karen] ...But also use cases that require security 00:17:39 [Karen] ...From the outset 00:17:46 [Karen] ...We wanted an open culture for data 00:17:56 [Karen] ...Implement so you can get into inSite from outside 00:18:06 [Karen] ...Through REST interface 00:18:15 [Karen] ...We can render in other applications 00:18:19 [timeless] [ SIOC, FOAF, RSS, REST, SOAP ] 00:18:20 [Karen] ...Can embed in blog or wiki 00:18:21 [AxelPolleres] AxelPolleres has joined #TPAC 00:18:30 [Karen] ...Try to bring patterns, concepts from Internet to gain efficiencies 00:18:43 [Karen] ...Trying to use social web to work together more efficiently 00:18:47 [Karen] ...collaborate better 00:18:53 [timeless] [ Slide title: What does this mean? ] 00:18:56 [Karen] ...connect to each other; see if there are synergies in activities 00:19:08 [Karen] ...Try to use for people to find solutions to things that have already been solved 00:19:17 [Karen] ...Find solutions before they start a new project, or find a lesson learned 00:19:23 [Karen] ...Reduce duplication when starting something new 00:19:32 [timeless] [ Slide title: Life is good right? .... not yet ... ] 00:19:34 [Karen] ...So use social web for those activities 00:19:41 [Karen] ...We have about 30K signed up 00:19:46 [Karen] ...log in daily basis 00:19:55 [Karen] ...People started to look at profiles 00:20:05 [Karen] ...but had to recreate on the blog, wikis, etc. 00:20:12 [Karen] ...One of things we noticed 00:20:24 [Karen] ...Profiles in other systems only had a fragment of the inSite profile 00:20:40 [Karen] ...So 30K people, HR manage data and user provider data in one place witin inSite 00:20:45 [Karen] ...So we wanted to save them time 00:20:49 [jeanne] jeanne has joined #tpac 00:20:52 [Karen] ...Integrate to the wiki to get data out of inSite 00:20:58 [Karen] ...integrate with blog, portal 00:21:04 [Karen] ...Would be great to have some social web standard 00:21:12 [Karen] ...to synchronize profile information between systems 00:21:13 [tantek] "Example: Missing Profile Standards" <-- wait, didn't previous slide say they implemented FOAF? 00:21:18 [Karen] ...and do that within Boeing and with suppliers, too 00:21:25 [Karen] ...link these disparate systems together 00:21:29 [timeless] [ Slide title: Benefits to the enterprise ] 00:21:31 [Karen] ...So maybe if we had some social web standards 00:21:34 [Karen] ...that would reduce time 00:21:38 [Karen] ...and focus on core business 00:21:50 [Karen] ...come up with a better jet fuel, more efficient airplane 00:22:00 [Karen] ...Apply social patterns and hope to see more innovation 00:22:10 [Karen] ...break down walled gardens; find solutions faster 00:22:16 [timeless] [ Slideshow ends ] 00:22:22 [timeless] [ applause ] 00:22:25 [Marcos] Marcos has joined #tpac 00:22:50 [marie] Fabien's slides: 00:22:52 [Karen] Speaker: Fabien Gandon 00:22:57 [Karen] ...from INRIA 00:22:57 [mth] mth has joined #tpac 00:23:02 [Karen] ...This talk is twice biased 00:23:09 [Karen] ...I have been asked to test an academic perspective 00:23:14 [Karen] ...and also look at SemWeb 00:23:36 [Karen] ...First one is to look at is time-evolving 00:23:47 [Karen] ...Growing amount of info exceeds our attention span 00:23:56 [Karen] ...First problem using SemWeb is need for memes to have focus 00:24:02 [Karen] ...In social network analysis 00:24:03 [holstege2] holstege2 has joined #tpac 00:24:07 [Karen] ...sociaograms and analysis 00:24:10 [Karen] ...could help us focus 00:24:31 [Karen] ...We could use social applications to filter and focus things 00:24:39 [Karen] ...Classic social network analysis works on graphs 00:24:46 [Karen] ...don't take into account types of links, profiles 00:24:51 [TabAtkins] TabAtkins has joined #tpac 00:24:55 [Karen] ...Links and profiles change and are important 00:25:00 [Karen] ...SemWeb can help 00:25:06 [Karen] ...We have social network graphs 00:25:11 [Karen] ..and we have SemWeb graphs 00:25:21 [Karen] ...In social network analysis we would calculate in degree 00:25:25 [Karen] ...add new types 00:25:30 [Karen] ...since you are man also a person 00:25:39 [Karen] ...Bring both things, bridge both graphs together 00:25:45 [Karen] ...First bias is academy 00:25:47 [Karen] ...Related work 00:25:51 [Karen] ...Some of contributions 00:25:56 [Karen] ...propogating trust 00:26:02 [Karen] ...using SN and SemWeb 00:26:10 [Karen] ...Show degree still follows power 00:26:20 [Steven] a/propo/propa/ 00:26:20 [Karen] ...apply classic analysis directly on social network with RDF 00:26:23 [Karen] ...merging identities 00:26:29 [Karen] ..extending tools to query with SPARQL 00:26:33 [Karen] ...From representation POV 00:26:36 [Steven] s/power/power law/ 00:26:40 [Karen] ...schemas exist like FOAF to describe persons 00:26:50 [Karen] ...like families, colleagues, and so on 00:26:55 [fhirscht] fhirscht has joined #tpac 00:26:58 [Karen] ...Give a 'toy' example of what can be don 00:27:01 [Karen] s/done 00:27:05 [IanJ] s/don/done 00:27:12 [Karen] ...Consider Guilllaume 00:27:16 [Karen] ...from a family point of view 00:27:23 [Karen] ...analyze him only from family POV 00:27:27 [Karen] ...I don't care how you calculate 00:27:41 [Karen] ...but use schemas that define family and tell me what is the degree of Guillaume 00:27:46 [Karen] ...That's what we can do merging graphs 00:27:53 [Karen] ...Centrality as I mentioned before 00:28:07 [Karen] ...Second place is to work on SPARQL and to extend it 00:28:10 [Karen] ...Describe it 00:28:15 [Karen] ...Pass as first citizen 00:28:26 [timeless] s/Guilllaume/Guillaume/ 00:28:31 [Karen] ...query here, interest in links between people, only colleagues such as manager of second person 00:28:37 [Karen] ...test with real case 00:28:47 [Karen] ...worked with ipernity.com 00:28:51 [Karen] ...People type the link 00:28:58 [Karen] ...make difference between contacts 00:29:02 [Karen] ...We have their full database 00:29:08 [Karen] ...It's 60K; small 00:29:11 [Karen] ...but all in RDF 00:29:14 [Karen] ...We ran analysis 00:29:20 [Karen] ...show when you try to use this usual operator 00:29:24 [Karen] ...to find most important actor 00:29:29 [Karen] ...Depending upon the type 00:29:39 [Karen] ...You will find different actor depending upon the actor 00:29:41 [IanJ] Fabien: the "most important actor" depends on type information you choose 00:29:50 [Karen] ...From prof POV, if not able to type not able to see 00:30:00 [Karen] ...What we do is provide schemas to reinject 00:30:11 [Karen] ...Propose schema to put back result of analysis 00:30:22 [SteveH] SteveH has joined #tpac 00:30:23 [Karen] ...resuse it for incremental analysis 00:30:28 [Karen] ...Second problem I would like to introduce 00:30:31 [Karen] ...Social data 00:30:38 [Karen] ...usually characterized using tagging 00:30:42 [Karen] ...Folksonomies 00:30:53 [Karen] ...One problem only so much to do with Folksonomies 00:30:56 [Karen] ...Related work 00:31:01 [Karen] ...Some academic propositions 00:31:09 [Karen] ...low tagging tags themselves 00:31:15 [Karen] ...Semiautomatic structuring 00:31:23 [mgylling] mgylling has joined #tpac 00:31:25 [Karen] ...Community inclusion to derive structure on the tags 00:31:33 [Karen] ...Diving is included in the community of tag water sport 00:31:40 [Karen] ...start structuring the folksonomy with that 00:31:44 [Karen] ...Use existing lexicons 00:31:47 [Karen] ...Some proposal from ? 00:32:02 [Karen] ...Provide schemas to exchange tags and folksonomies 00:32:03 [rigo] rigo has joined #tpac 00:32:07 [Karen] ...SIOC is one 00:32:18 [Karen] ...Allows you to represent cloud of tags 00:32:24 [Karen] ...Can use SKOS from W3C also 00:32:33 [Karen] ...MOAT can be used to disembiguate the tag 00:32:44 [Karen] ...in this context used to refer to the fruit and not the company 00:32:51 [Karen] ...VoCamps; encourage you to look 00:33:00 [Karen] ...Working on schema to work on nametags 00:33:04 [Karen] ...was discussed in a VoCamp 00:33:12 [Karen] ...Give you another example of a different approach 00:33:18 [Karen] ...To get users to use tags 00:33:30 [Karen] ...look at ways to provide them tools 00:33:33 [Karen] ...and capture knowledge 00:33:42 [Karen] ...work with people using delici.us 00:33:49 [Karen] ...Look at bookmarks; when they search 00:33:55 [Karen] ...can use this widget on the left 00:34:13 [Karen] ...As the user reorganizes the results 00:34:25 [Karen] ...We capture everything 00:34:33 [Karen] ...while they are searching and filtering 00:34:44 [Karen] ...Last problem I want to mention 00:34:56 [Judy] Judy has joined #tpac 00:34:56 [Karen] ...Is introduction of social web inside a firewall 00:35:00 [Karen] ...A cultural problem 00:35:10 [Karen] ...and a psychological challenge 00:35:12 [Karen] ...Inside companies 00:35:22 [Karen] ...social webs may be incompatable with business processes 00:35:28 [Karen] ...be careful not to create a war 00:35:38 [Karen] ...isicil.inria.fr 00:35:58 [Karen] ...this uses both internal and external applications; crosses boundaries 00:36:02 [Karen] ...We injected RDF 00:36:07 [Karen] ...when they interact with application 00:36:12 [Karen] ...internal or external 00:36:20 [Karen] ...we can still capture the RDF and capture the functionality 00:36:24 [Karen] ...A number of contributions 00:36:28 [Karen] ...Security and access control 00:36:39 [Karen] ...Trust based service composition 00:36:48 [Karen] ...Policy aware content reuse 00:37:03 [Karen] ...Systems link to open data to get info about you and people you interact with 00:37:10 [Karen] ...Many other topics could be mentioned here 00:37:15 [Karen] ...Some working on at camps 00:37:25 [Karen] ...Social journalism...[reads from list] 00:37:41 [Karen] ...One of the things interesting is look at stack of standards built on SemWeb 00:37:50 [Karen] ...that could provide basis for extending social networks 00:37:54 [Bryan_Sullivan] Bryan_Sullivan has joined #TPAC 00:38:01 [Karen] ...Another aspect that Tim pointed out yesterday 00:38:09 [Karen] ...This could benefit from infrastructure 00:38:21 [Karen] ...from the deployment architecture provided from linked open data 00:38:26 [Karen] ...Using typed networks 00:38:32 [Karen] ...and parameterized operators 00:38:43 [Karen] ...allow us to be more precise 00:38:54 [Karen] ...Difficulty is problem of fragmented identities 00:38:58 [Karen] ...SemWeb has pros and cons 00:39:06 [Karen] ...Sometimes you want profiles to be merged 00:39:08 [Karen] ...sometimes not 00:39:16 [Karen] [crowd laughs at photos] 00:39:26 [Karen] ...You want to differentiate 00:39:32 [Karen] ...Still an open issue 00:39:41 [Karen] ...Declarative query language 00:39:49 [Karen] ...Time is still forgotten 00:39:51 [Steven] Ivan Herman as Hagrid 00:39:54 [Karen] ...Setting chronology of events 00:40:03 [Karen] ...analyzing evolution of trends 00:40:14 [Karen] ...I would love to have an easy way on FaceBook 00:40:18 [Karen] ...to say I'm a friend with this person 00:40:32 [Karen] ...but she does not have access to what I have said in the last year, but no access to my past 00:40:37 [Karen] ...Scaling is a challenge 00:40:46 [Karen] ...We are far from the size of network you are handling 00:40:50 [Karen] ...Secuirty, semiotics 00:40:59 [Karen] ...many families exist 00:41:05 [Karen] ...Mobile, hyperamnesia 00:41:09 [Karen] ...If you want to know more 00:41:21 [timeless] [ applause ] 00:41:27 [timeless] s/Secuirty/Security/ 00:41:34 [Karen] Speaker: David Recordon, FaceBook 00:41:58 [Karen] Thank you for invitation to speak today 00:42:04 [Karen] ...I joined FB three months agao 00:42:05 [Karen] s/ago 00:42:11 [Karen] ...Manage open source and standards initiatives 00:42:17 [Karen] ...COmpany has about 20 open source projects 00:42:21 [Karen] ...We react with developers 00:42:30 [Karen] ...I'm looking at how we support developers, do that better 00:42:36 [Karen] ...Make world better, more connect 00:42:39 [Karen] s/connected 00:42:42 [Karen] ...Look at that mission 00:42:46 [Karen] ...do it with standards 00:42:55 [Karen] ...We are happy to do that with any tech that has broad adoption 00:43:05 [Karen] ...My background is about OAUTH, Open ID 00:43:14 [Karen] ...got into that a few years ago before term, Social Web 00:43:24 [Karen] ...pioneer that instead of "versioning" terms 00:43:27 [IanJ] Social Web 2.0!!! 00:43:34 [FabGandon] FabGandon has joined #tpac 00:43:37 [silvia] silvia has joined #tpac 00:43:39 [Karen] ...How do we create social services that are interoperabile 00:43:47 [Karen] ..Here is a Tim O'Reilly quote that sticks with me 00:43:53 [Steven] Fabien's slides: \ 00:44:05 [Karen] ...Open data is increasingly important as services move online." 00:44:13 [Karen] ...No longer just about open source to run mail applicatoin 00:44:19 [Karen] ...but data behind is more important 00:44:20 [timeless] s/applicatoin/application/ 00:44:28 [Karen] ...Not nec about how to have access to entire code base 00:44:40 [Karen] ...Always talking about access to data and how to share it in other places 00:44:43 [Karen] ...Trend to open 00:44:51 [Karen] ...Open Source, Open Standards, Open APIs, 00:44:55 [Karen] ...Have access to data 00:44:59 [Karen] s/nec/necessarily 00:45:05 [Karen] ...This is really important 00:45:13 [Karen] ...When I look at Open ID, OAUTH, 00:45:18 [Karen] ...those communities I'm involved with 00:45:27 [Karen] ...I see four characteristics to look at and understand 00:45:32 [Karen] ...why they are successful 00:45:36 [Karen] ...First is about community 00:45:47 [Karen] ...individuals from companies, etc. 00:45:51 [Karen] ...and collaboration 00:45:56 [Karen] ...not just open source and for profit 00:46:04 [Karen] ...collaboration with these diverse communities 00:46:11 [Karen] ...Both are free to participate and implement 00:46:12 [marie] (/me notes that we'll have slides after this prez) 00:46:14 [Karen] ...Low barriers to entry 00:46:18 [Karen] ...Go and step in 00:46:26 [Karen] ...Eran Hammer-Lahav 00:46:30 [Karen] ...is a good example 00:46:45 [Karen] ...Got involved six to nine months later, have smart opinions and get involved as editor of spec 00:46:53 [Karen] ...Open Source is another aspect 00:46:59 [Karen] ...having in many different languages 00:47:05 [Karen] ...How to use microformats, etc. 00:47:10 [Karen] ...Stems from having large community 00:47:12 [Karen] ...And then adoption 00:47:16 [Karen] ...seeing every year 00:47:20 [Karen] ...get modeled like a half life 00:47:34 [Karen] ...Hubub being supported in just a few months 00:47:41 [Karen] ...Go back to community participation again 00:47:51 [Karen] ...How many people are subscribed to all these different mailing lists? 00:48:01 [silvia1] silvia1 has joined #tpac 00:48:07 [Karen] ...What if people had to pay to subscribe to all these lists to provide feedback 00:48:12 [Karen] ...Really valuable feedback 00:48:26 [Karen] ...Wisdom from all sorts of people, individuals to large corporations 00:48:37 [Karen] ...Again, so what do they need to be successful? 00:48:52 [Karen] ...Mentors, best practices, freedom to participate, infrastructure and tools 00:49:00 [Karen] ...IP, governance and scope, light weight 00:49:06 [Karen] ...much more from open source model 00:49:10 [Karen] ...efforts not large corporations 00:49:12 [Karen] ...not competing 00:49:23 [Karen] ...but all sorts of people who see it values the entire ecosystem 00:49:33 [Karen] ...Policies around how to resolve conflicts not necessarily needed 00:49:40 [Karen] ...once again give my own view of ad hoc approach 00:49:50 [Karen] ...adhoc, OASIS, IETF and W3C 00:50:00 [Karen] ...How do you have these resources for other people? 00:50:06 [Karen] ...I'm for the adhoc approach 00:50:13 [Karen] ...OASIS and W3C is part of cost 00:50:24 [Karen] ...Go in and participate in OASIS or W3C group is quite prohibitive 00:50:29 [Karen] ...for those you want to contribute 00:50:35 [Karen] ...IPR is in eye of beholder 00:50:40 [Karen] ...Look for a clean outcome 00:50:45 [Karen] ...Be friendly to individuals and companies 00:50:48 [Karen] ...Also governance and scope 00:50:58 [Karen] ...Look at in terms of not making all the decisions up front 00:51:07 [Karen] ...not consider outside of 10 things up front 00:51:11 [Karen] ...may have learned some lessons 00:51:17 [Karen] ...Shift to Open Web Foundation 00:51:21 [Karen] ...We created a year ago 00:51:33 [Karen] ...For those who are creating specifications outside of standards bodies 00:51:46 [Karen] ...How to create shared infrastructure and shared tools 00:51:57 [Karen] ...Model of providing tools for the communities working where they are 00:52:01 [Karen] ...May be on a mailing list 00:52:13 [Karen] ...or for W3C to take advantage of the legal work we have done and offer to your own WGs 00:52:18 [Karen] ...Take advantage of that 00:52:28 [Karen] ...and not replace standards bodies that have an important role 00:52:35 [Karen] ...Open Web Foundation Agreement 00:52:45 [Karen] ...Started with four tenets 00:52:55 [Karen] ...Legal document understandable by non-lawyers 00:53:00 [Karen] ...Allow derivative works 00:53:07 [Karen] ...Be written simply 00:53:11 [IanJ] -> Open Web Foundation Agreement - Committee Draft 2 00:53:20 [Karen] ...How to take a specification and move into a standards body 00:53:34 [Karen] ...think from the beginning; freely implementable specifications 00:53:39 [Karen] ...I have pulled out four things 00:53:45 [Karen] 00:53:50 [Karen] ...Use Creative Commons 00:53:55 [Karen] ...take document and evolve it 00:54:04 [Karen] ...A patent non-assert 00:54:11 [Karen] ...We felt this was really important 00:54:11 [IanJ] David: Patent non-assert that allows you to carry patent rights to derivative works. 00:54:19 [Karen] ...Non Asser Termination 00:54:23 [Karen] ...makes it hard to litigate 00:54:25 [timeless] s/Asser/Assert/ 00:54:32 [Karen] ...ensure specs licensed remain free 00:54:38 [Karen] ...and transition into a standards body 00:54:43 [Karen] ...Model that you operate under 00:54:59 [Karen] ...Means that someone creating specification licensed this way, does not have to go back to all the contributors 00:55:02 [timeless] s/you/you [W3C]/ 00:55:05 [Karen] ...Was set up from the beginning to do that 00:55:10 [Karen] ...Glossing over a few topics 00:55:18 [Karen] ...Happy to say more in discussion and Q&A 00:55:25 [Karen] ...Web standards that I'm paying attention to 00:55:32 [Karen] ...HTML5 are extremely interesting 00:55:42 [dom] s/are/is/ 00:55:43 [Karen] ...Not a social web standard by itself, but what innovation it will enable 00:55:51 [Claes] Claes has joined #tpac 00:55:52 [Karen] [reads list from slide] 00:56:03 [Karen] ...Combine together and create interoperable web services 00:56:08 [Karen] ...Have been called the open stack 00:56:14 [Karen] ...How to interact with people they know 00:56:17 [timeless] s/the open stack/"the open stack"/ 00:56:19 [Karen] ...Another piece is getting major adoption 00:56:32 [Karen] ...Many people have not worked inside a standards body 00:56:37 [Karen] ...Many occur ad hoc 00:56:43 [Karen] ...See adoption from non-tech companies 00:56:49 [Karen] ...Looking at role of standards body 00:56:54 [Karen] ...and role that is valuable to these communities 00:57:00 [Karen] ...Continuing to gloss at high level 00:57:07 [Karen] ...Talk about FaceBook, especially the scale 00:57:11 [Karen] ...which blows me away 00:57:15 [Karen] ...and how we are evolving 00:57:25 [Karen] ...8 billion minutes spent on site every day worldwide 00:57:35 [Karen] ...2 billion pieces of content shared every week 00:57:38 [Karen] ...all types of content 00:57:43 [Karen] ...Combination of web browsers, smss 00:57:45 [Karen] s/sms 00:57:56 [Karen] ...Over 2 billion photos uploaded each month 00:58:12 [gond] gond has joined #tpac 00:58:14 [Karen] ...And content is about who's inside the photo, not just what photo is 00:58:20 [dbaron] 15200 years or so, I think 00:58:25 [Karen] ...15K FB Connect implementations 00:58:29 [Karen] ...So scaling challenges 00:58:32 [Karen] ...THink about privacy 00:58:39 [Karen] ...not a traditional scaling problem 00:58:41 [Ileana] Ileana has joined #TPAC 00:58:42 [Karen] ...Have that users data 00:58:44 [timeless] s/THink/Think/ 00:58:48 [Karen] ...stored on separate server 00:58:51 [Karen] ...shared by user 00:58:57 [Karen] ...Each user put on different server 00:59:03 [Karen] ...not a lot of complication 00:59:18 [Karen] ...on FB data is interconnected 00:59:27 [Karen] ...We are pulling data from hundreds of different people. 00:59:33 [Karen] ...More complex from scaling perspective 00:59:39 [Karen] ...Choose who you want to share with 00:59:47 [Karen] ...friends, friend of friends, these five people 00:59:52 [Karen] ...adds to the scaling challenges 01:00:00 [Karen] ...Not just pull news feed from my 500 friends 01:00:08 [Karen] ...look at privacy settings and am I allowed to see it 01:00:14 [Karen] ...Need to continue to innovate around that 01:00:19 [Karen] ...We have also looked at social graph 01:00:26 [Karen] ..People are only one dimension 01:00:31 [Karen] ...events, photos, documents 01:00:36 [Karen] ...and I see how the Web evolves also 01:00:42 [Karen] ...from documents to documents and people 01:00:47 [Karen] ...We are interested in working on that 01:01:00 [Karen] ...We have created XMBFL? 01:01:05 [Karen] ...See my photo or not 01:01:06 [timeless] [ wiki.developers.facebook.com/index.php/XFBML ] 01:01:12 [Karen] ...go update across the web 01:01:18 [Karen] ...how does HTML become social? 01:01:24 [Karen] ...How do people get represented? 01:01:25 [IanJ] s/XMBFL/XFBML/ 01:01:32 [Karen] ...How does FaceBook scale worldwide? 01:01:37 [IanJ] 70% of facebook base outside US 01:01:44 [Karen] ...Site is in 65 languages, done by users themselves 01:01:53 [Karen] ...Really community translation 01:02:02 [Karen] ...We have 20 open source projects 01:02:06 [timeless] s/FaceBook/Facebook/ 01:02:07 [Karen] ...Next challenges are to scale world wide 01:02:11 [timeless] s/FaceBook/Facebook/g 01:02:13 [Karen] ...to give people ability to share 01:02:15 [IanJ] rrsagent, make minutes 01:02:15 [RRSAgent] I have made the request to generate IanJ 01:02:15 [shiki] shiki has joined #tpac 01:02:17 [Karen] ...with whom sharing 01:02:20 [Karen] ...social identiy 01:02:24 [Karen] ...verified identity 01:02:30 [Karen] ...things I know and what I'm connected wtih 01:02:41 [Karen] ...Looking at HTML and how web represents people 01:02:46 [Karen] ...An interesting question to talk about 01:02:53 [Karen] ...Why should FaceBook become a member of W3C 01:02:59 [Karen] ...things we do related to social, privacy 01:03:05 [Karen] ...I don't have a clear answer 01:03:08 [Karen] ...Try to work with you 01:03:18 [Karen] ...how to make people a real aspect of the web itself? 01:03:26 [timeless] [ applause ] 01:03:29 [Karen] Daniel: That's great, thanks, David 01:03:44 [Karen] ...Maybe one answer to David's question 01:03:50 [timeless] [ Last Slide Title: Why should Facebook become a W3C member? ] 01:03:55 [Karen] ...W3C is where different communities of practice 01:03:56 [timeless] [ slide show ends ] 01:03:58 [Karen] ...come together 01:04:02 [Karen] ...share viewpoints 01:04:06 [Karen] ...and competencies 01:04:14 [Karen] ...Some nashing of teeth 01:04:26 [Karen] ...Nice to have David as guest speaker 01:04:32 [Karen] ...talk about community efforts 01:04:39 [Karen] ...Also been involved running social web camp 01:04:46 [Karen] ...brought in people from community to talk about these issues 01:04:54 [Karen] ...I want to relate a short anecdote 01:05:02 [Karen] ...how social networks are becoming people's lives 01:05:08 [Karen] ...I was sitting at a cafe in London 01:05:13 [Karen] ...two young people were arguing 01:05:16 [Karen] ...not sure what about 01:05:20 [Karen] ...maybe football related 01:05:23 [Karen] [laugh] 01:05:35 [Karen] ...At one point, one person said, "unfriend" 01:05:41 [Karen] ...other one said, "unfriend, unfollow" 01:05:45 [Karen] ...Ok, so questions 01:05:50 [Karen] ...A quick question for Adam 01:06:00 [Karen] ...If you were also expanding what you are doing to supplier network 01:06:01 [Ileana] Ileana has joined #TPAC 01:06:07 [Karen] ...How does that work, what are your challenges there? 01:06:12 [JereK] JereK has left #tpac 01:06:22 [Karen] Adam: Haven't gone down that path yet 01:06:34 [Karen] ...not including suppliers and customers inside our internal social networking platform 01:06:39 [Karen] ...On the horizon but not there yet 01:07:01 [Karen] Ann Bassetti, Boeing: We do have several hundred thousand suppliers and customers that log into our firewall 01:07:07 [Karen] ...to get to other web sites internally 01:07:16 [Karen] ...We have been doing that successfully for a decade 01:07:27 [Karen] ...What Adam is referring to is social interactions through inSite 01:07:31 [Karen] ...We do a lot of collaboration 01:07:35 [Karen] ...this would be the next level up 01:07:44 [Karen] DavidR: Interesting to hear about inSite 01:07:51 [Karen] ...We have similar things inside of FaceBook 01:07:56 [Karen] ...how to find people, find tags 01:08:13 [Karen] AnnB: One of hugest challenges Adam stepped up to is the security restrictions from US gov't 01:08:22 [Karen] ...if someone releases it can be inadvertant 01:08:27 [Karen] ...a whole bunch of variables 01:08:39 [Karen] ...different requirements where we can be fined millions of dollars 01:08:49 [Karen] ...So he set up some taggings for security 01:08:53 [Karen] ...all kinds of levels 01:09:00 [timeless] [ International Traffic in Arms Regulations (ITAR) ] 01:09:07 [Karen] Mike Champion, Microsoft: Adam mentioned that Boeing wants to see standards 01:09:20 [Karen] ...and FaceBook defines community specifications as being satisfactory 01:09:30 [Karen] ...So to Adam, do you really need standards, or more specs 01:09:34 [Karen] Adam: Great question 01:09:43 [Karen] ...It boils down to can we get vendors to implement them 01:09:48 [Karen] ...We bring in commercial blog 01:10:00 [Karen] ...if a standard, we can try to be 800 pound gorilla 01:10:01 [Karen] ...See this today 01:10:13 [Karen] ...industry outside is adopting, but not vendors adopting 01:10:27 [Karen] ...So it may need to be a traditional spec for the vendors to implement 01:10:32 [Karen] ...may have to be wait and see 01:10:48 [Karen] Rotan Hanrahan, MobileAware: you are trying to condense info 01:10:56 [Karen] ...the human being is receiving a huge amount of info 01:11:02 [Karen] ...I fear information overload for the users 01:11:12 [Karen] ...Is there a way to filer the social network to a human level 01:11:25 [Karen] ...My best situation would have plenty of flow, tables and beer mats 01:11:37 [Karen] DavidR: When you look at FB news feed compared to live fed 01:11:40 [Karen] s/fed/feed 01:11:44 [Karen] ...it's algorithmic 01:12:00 [Karen] ...What content did you see, who commented, what content do you interact with 01:12:04 [timeless] < > 01:12:06 [Karen] ...versus here is everything you can possibly see 01:12:13 [Karen] Rigo Wenning, W3C: In Open ID 01:12:20 [Karen] ...this specification was discussed 01:12:26 [Karen] ...and whether you align with architectures 01:12:36 [Karen] ...Regarding what's in it for us with W3C 01:12:43 [Karen] ...there is more overhead than a web site and a mailing list 01:12:50 [Karen] ...you see a lot of people; so why are they here 01:12:59 [Karen] ...You could just have a mailing list and a web site 01:13:02 [Karen] ...There is more of it 01:13:22 [Karen] ...Not sure if you here when we discussed privacy, security, internet governance 01:13:28 [Karen] ...social networks are young 01:13:42 [Karen] ...there are more things that come along 01:13:50 [Karen] ...so mailing list and server not enough 01:14:18 [Karen] DavidR: I didn't mean to say way was to create a spec with a mailing list and web site 01:14:27 [Karen] ...doesn't guarantee adoption and success 01:14:33 [Karen] ...interested in the trade-offs 01:14:56 [Karen] Jeremy Carroll, TopQuadrant: W3C standards have been getting better 01:15:17 [Karen] ...recent ones have been clearer before they get to recommendation state, have implementations 01:15:22 [Karen] ...and clear success criteria 01:15:31 [Karen] ...people who have thought about what it means to interoperate 01:15:44 [Karen] ...this community has developed expertise on what it means to interoperate 01:15:50 [Karen] DavidR: Yes, absolutely 01:15:59 [Karen] ...Yes, coming to W3C offers tools that are needed 01:16:05 [Karen] ...but also looking at back of napkin math 01:16:18 [Karen] ...but for OAUTH to be created inside W3C would have cost $20 million 01:16:25 [Karen] Daniel: Tim, do you want to say something? 01:16:34 [Karen] TimBL: Insert three quarter hour of standards bodies 01:16:38 [Karen] ...You talked about two dimensions 01:16:43 [Karen] ...You called it a meritocracy 01:16:49 [Karen] ...friends, put together a spec 01:16:56 [Karen] ...versus an organization with a process 01:16:59 [Karen] ...W3C then, now 01:17:06 [Karen] ...After a while 01:17:10 [Karen] ...one person said stop, wait 01:17:17 [Karen] ...this is not good enough; we need to know certain things 01:17:26 [Karen] ...have more solide ground; criteria for making a standard 01:17:35 [Karen] ...certain level of polity for my company 01:17:46 [Karen] ...and if you organize a meeting, give us 8 weeks' notice 01:17:57 [Karen] ...I have to travel, get permission to travel 01:18:03 [Karen] ...so we created a process document 01:18:19 [Karen] ...I suggest you talk to people about the history, especially Carl Cargill (Adobe) 01:18:26 [Karen] ...Companies came to me to put consortium together 01:18:33 [Karen] ...Web was a fast-moving field 01:18:41 [Karen] ...They felt it was worth their putting money into it 01:18:59 [Karen] ...If you want to put money into it, the ROI; $20K investment 01:19:05 [Karen] ...compare to number of minutes 01:19:13 [Karen] ...what people on average spend on FaceBook 01:19:22 [Karen] ...if they spend, it would cover $20K 01:19:30 [Karen] David: Yes, I saw this with Open ID 01:19:39 [Karen] ...yes, from the wild west approach to more of process 01:19:42 [Karen] ...Not one approach 01:19:50 [Karen] ...Not just about what it would cost FB to participate 01:20:01 [Karen] ...but to strive for that really broad participation 01:20:05 [Karen] ...It's more than $20K 01:20:10 [Karen] Daniel: We are out of time 01:20:17 [Karen] ...I'd like to thank our panelists 01:20:24 [Karen] ...HOpe it's the start of a conversation 01:20:31 [Karen] ...Reminds me of when Google came up to stage 01:20:39 [Steven] s/HO/Ho/ 01:20:39 [Karen] ...and asked why Google should join W3C 01:20:44 [Karen] ...and now we have TV Raman 01:20:49 [Karen] [crowd laughs] 01:20:51 [Ileana] Ileana has joined #TPAC 01:20:55 [Karen] ...Hope this is start of a new friendship 01:20:58 [Karen] [applause] 01:21:02 [Karen] session ends 01:21:03 [IleanaLeuca] IleanaLeuca has joined #TPAC 01:21:13 [IanJ] rrsagent, make minutes 01:21:13 [RRSAgent] I have made the request to generate IanJ 01:21:24 [IanJ] Scribe: Jeanne 01:21:52 [timeless] ScribeNick: jeanne 01:22:06 [jeanne] topic: Lightning talks 01:22:11 [kford] kford has joined #tpac 01:23:55 [jeanne] Henry Thompson: This is the final session for today, it is the lightning talks session 01:24:45 [jeanne] topic: Marcos Caceres of Opera: If MacGyver was a spec editor 01:25:01 [FabGandon] FabGandon has left #tpac 01:25:20 [IanJ] topic: Multimodality in Enterprise Applications 01:25:39 [jeanne] s/topic: Marcos Caceres of Opera: If MacGyver was a spec editor/Multimodality i nEnterprise Applications 01:25:39 [IanJ] s/topic: Marcos Caceres of Opera: If MacGyver was a spec editor// 01:25:52 [maraki] maraki has joined #tpac 01:26:17 [jeanne] This is an application the does the input in voice and gestures 01:26:32 [jun] jun has joined #tpac 01:26:58 [timeless] s/ and gestures/, gestures, and photos/ 01:27:31 [jeanne] ...[demo of an image of audience, writing on top of it and adding it to the handheld application 01:28:26 [jeanne] ... brought it down into Office 2010, added it as an animation, it is all done with interop with Ink spec and SMIL spec. 01:28:37 [timeless] InkML - The Ink Markup Language < > 01:28:37 [Judy] Judy has joined #tpac 01:29:06 [timeless] SMIL - Synchronized Multimedia Integration Language < > 01:29:06 [jeanne] ...shows markup and InkML spec 01:29:58 [IanJ] topic: If MacGyver was a spec editor: simple tools, unbelievable result 01:30:03 [Karen] 01:30:14 [jeanne] topic: Marcos Caceres of Opera: If MacGyver was a spec editor 01:30:35 [jeanne] I am presenting work we are doing editing the W3C specs 01:30:50 [IanJ] s/W3C/Widgets/ 01:31:10 [jeanne] ... There are different parts of the text - the really important are the testable assersions: Must, should, may 01:31:20 [jeanne] ... they need to be tested and verified 01:31:38 [jeanne] ... MUST is expensive it takes an average of 3 tests. 01:31:42 [timeless] s/assersions/assertions/ 01:31:49 [kohei] kohei has joined #TPAC 01:32:17 [timeless] [ Slide title: Spec - XHTML ] 01:32:19 [jeanne] ...MacGyver would would bring together a group of tests, mash them together and have the result shown 01:32:42 [timeless] [ (jeanne 's transcription is from ~2 slides back) ] 01:32:55 [timeless] [ (marcos is jumping too fast through his slides) ] 01:33:08 [jeanne] ... given a Spec, look for the ids in the code 01:33:31 [jeanne] ... Reduce your musts, use shoulds and may's with caution, use active voice. Keep things simple. 01:33:54 [jeanne] Question: How did you get the editor to do the annotations needed? 01:34:17 [jeanne] Answer: I asked my self, We created the data that we needed. 01:34:29 [IanJ] (and was natural based on Anne van Kesteren practices) 01:34:41 [Marcos] Marcos has joined #tpac 01:35:16 [jeanne] topic: Robin Berjon - A Fresh Specification Writing Tool 01:35:26 [timeless] [ slides fail ACL ] 01:36:09 [jeanne] [moves to the next speaker, slides not working] 01:36:23 [jeanne] topic: Rigo, Privacy and data governance 01:37:38 [jeanne] Don't touch my data: instead of modifiying database, it is just added on to legacy data 01:37:52 [jeanne] ... make the policy travel with the data 01:38:07 [jeanne] ... treated in W3C Workshop on Access Control 01:38:24 [jeanne] ... Next Workshop on Obligations in 2010 01:38:45 [jeanne] Henry thompson: who is your customer, who are you trying to convince? 01:39:14 [jeanne] Rigo: The database professionals, that is who we want to convince. 01:40:07 [jeanne] Topic: Robin Berjon of Vodophone: A fresh specification writing tool 01:40:15 [dom] s/Vodo/Voda/ 01:40:19 [dom] s/phone/fone/ 01:40:31 [jeanne] ... Why? Not because others are bad, but I wanted the spec editors to be able to move faster. 01:41:01 [jeanne] ... you create a document, you go to the browser, look at it and fix the bugs. 01:41:35 [jeanne] ... with most of the tools you have to launch another tool. This saves 30% of the rules. 01:41:45 [jeanne] ... It creates pubrules compliant output 01:41:56 [Steven] Slides 01:42:06 [jeanne] ... it pretty much writes the spec for you. 01:42:22 [dom] -> Example of usage of ReSpec.js in WARP spec 01:42:30 [jeanne] ... it does references and highlighting automatically. 01:42:42 [jeanne] ... it has syntax highlighting in examples 01:43:03 [jeanne] ... Limitations, there are more features being developed. 01:43:48 [jeanne] Rigo: Can you integrate an EMACS/Eliza tool to write the text for you? [laughs] 01:44:16 [jeanne] DanC: Can you show an example? 01:44:22 [arun] arun has joined #tpac 01:45:17 [kford] kford has left #tpac 01:45:20 [maraki] maraki has joined #tpac 01:45:25 [jeanne] [demos] 01:46:06 [shiki] shiki has joined #tpac 01:46:27 [jeanne] topic:Jacques Durand - TAMElizer 01:46:28 [darobin] darobin has joined #tpac 01:46:44 [dom] -> Tamelizer project 01:46:50 [jeanne] Small Open Source code you can download 01:47:05 [jeanne] ... Test assertions are between Spec and Testing 01:47:07 [silvia] silvia has joined #tpac 01:47:17 [silvia] silvia has joined #tpac 01:47:36 [Rotan] Rotan has left #tpac 01:47:42 [jeanne] ... Test Assertion markup language. Simple markup, it could be more sophisticated for advanced user. 01:47:56 [jeanne] ... the report gives you more diagnostic information 01:48:09 [Steven] Slides: 01:48:30 [jeanne] ... XML files that are embedded in the documents 01:48:56 [jeanne] ... it can show the individual pass/fail of tests. 01:49:13 [jeanne] ...In the second phase, you do test analysis 01:49:26 [jeanne] ... this is where we do much better than other tools. 01:49:41 [jeanne] ... You can get the entire chain into the Test Report 01:50:12 [jeanne] Henry: What spec did you do this for and how many assertions? 01:50:27 [jeanne] web services operatibility and 250 test assertions 01:51:06 [IanJ] -> EARL Guide 01:51:12 [jeanne] Shadi: I encourage you to look at the EARL protocol, it is an RDF protocal but backward compatible to XML. 01:52:12 [jeanne] topic: The End of the Beginning Daniel Glazman 01:52:34 [arun] Note that glazou is tilting his screen 01:53:11 [jeanne] demos of rotating cube, fingerprint application (in 15 lines of code) and a game done is SVG that is in Canvas. Very simple 01:53:33 [timeless] s/fingerprint/tilt detector ["level"]/ 01:53:35 [jeanne] ... a font dragr to test new fonts 01:53:38 [Rotan] Rotan has joined #tpac 01:53:56 [timeless] s/a font dragr/"font dragr"/ 01:53:58 [IanJ] rrsagent, make minutes 01:53:58 [RRSAgent] I have made the request to generate IanJ 01:54:50 [jeanne] Henry: The box you were holding has an accelerometer, right? 01:54:56 [jeanne] Yes. 01:55:05 [jeanne] Judy: How was the accessibility 01:55:13 [jeanne] Daniel: I don't know. 01:55:17 [IanJ] [Reminder: feedback form, thanks!: ] 01:55:24 [timeless] s/Yes./Yes. All laptops have accelerometers in their hard disk drives to handle shocks./ 01:55:45 [kford] kford has joined #tpac 01:56:37 [jeanne] Judy: It looks neat. It would be great if the accessibility support right from the beginning. Can we be sure we can get you hooked up with the right people to help with that. 01:57:40 [jeanne] Chaas: the accessibility are in the hardware APIs, the hardware knows when it is working. We have to work on how we make that an accessible application 01:57:48 [kawata] kawata has left #tpac 01:57:51 [timeless] s/Chaas/Chaals/ 01:57:52 [kohei] kohei has left #TPAC 01:57:59 [IanJ] [Applause] 01:58:05 [jeanne] Topic: Ralph closing comments 01:58:10 [kohei] kohei has joined #TPAC 01:58:22 [timeless] s/accessible application/acessible application (by making things like canvas accessible)/ 01:58:31 [IanJ] -> Feedback! 01:58:33 [zarella] zarella has left #tpac 01:58:40 [rkuntsch] rkuntsch has left #tpac 01:58:43 [tantek] I think this might have been the best Tech Plenary Day I have attended. Well done organizers, speakers, and panelists. 01:58:44 [jeanne] This was a large team effort. I especially want to thank the Internet Society for their generous support. There is a feedback survey, please complete it. 01:58:44 [ddahl2] ddahl2 has left #tpac 01:59:14 [caribou] caribou has left #tpac 01:59:49 [Steven] rrsagent, make minutes 01:59:49 [RRSAgent] I have made the request to generate Steven 02:01:19 [Steven] i/scribenick: Karen/Scribe: Karen 02:01:24 [Steven] rrsagent, make minutes 02:01:24 [RRSAgent] I have made the request to generate Steven 02:01:27 [IanJ] IanJ has joined #tpac 02:01:53 [jeanne] rrsagent, make minutes 02:01:53 [RRSAgent] I have made the request to generate jeanne 02:04:53 [Marcos] Marcos has joined #tpac 02:05:00 [AxelPolleres] AxelPolleres has joined #TPAC 02:05:00 [soonho] soonho has left #tpac 02:05:00 [Zakim] disconnecting the lone participant, MeetingRoom, in W3C_TP(*)11:30AM 02:06:27 [IanJ] IanJ has joined #tpac 02:08:16 [Judy] Judy has joined #tpac 02:12:41 [jun] jun has joined #tpac 02:14:23 [fantasai] fantasai has left #tpac 02:17:51 [Julian] Julian has joined #tpac 02:19:36 [Zakim] W3C_TP(*)11:30AM has ended 02:19:37 [Zakim] Attendees were Ralph, MeetingRoom 02:19:43 [Ralph] zakim, bye 02:19:43 [Zakim] Zakim has left #tpac 02:19:47 [Ralph] rrsagent, bye 02:19:47 [RRSAgent] I see no action items
http://www.w3.org/2009/11/04-tpac-irc
CC-MAIN-2016-40
refinedweb
31,859
70.94
The Data Science Lab The data doctor continues his exploration of Python-based machine learning techniques, explaining binary classification using logistic regression, which he likes for its simplicity. The goal of a binary classification problem is to predict a class label, which can take one of two possible values, based on the values of two or more predictor variables (sometimes called features in machine language terminology). For example, you might want to predict the sex (male = 0, female = 1) of a person based on their age, annual income and height. There are many different ML techniques you can use for binary classification. Logistic regression is one of the most common. Logistic regression is best explained by example. Continuing the example above, suppose a person has age = x1 = 3.5, income = x2 = 5.2 and height = x3 = 6.7 where the predictor x-values have been normalized so they roughly have the same scale (a 35-year old person who makes $52,000 and is 67 inches tall). And suppose the logistic regression model is defined with b0 = -9.71, b1 = 0.25, b2 = 0.47, b3 = 0.51. To make a prediction, you first compute a z value: z = b0 + (b1)(x1) + (b2)(x2) + (b3)(x3) = -9.71 + (0.25)(3.5) + (0.47)(5.2) + (0.51)(6.7) = -0.484 Then you use the z value to compute a p value: p = 1.0 / (1.0 + e^(-z)) = 0.3813 Here e is Euler's number (approximately 2.71828). It's not obvious, but the p value will always be between 0 and 1. If the p value is less than 0.5, the prediction is class label 0. If the p value is greater than 0.5, the prediction is class label 1. For the example, because p is less than 0.5, the prediction is "male." OK, but where do the b0, b1, b2 and b3 values come from? To determine the b values, you obtain training data that has known predictor x values and known class label y values. Then you use one of many possible optimization algorithms to find values for the b constants so that the computed p values closely match the known, correct y values. This article explains how to implement logistic regression using Python. There are several machine learning libraries that have built-in logistic regression functions, but using a code library isn't always feasible for technical or legal reasons. Implementing logistic regression from scratch gives you full control over your system and gives you knowledge that can enable you to use library code more effectively. A good way to see where this article is headed is to take a look at the screenshot of a demo program in Figure 1 and the associated data shown in the graph in Figure 2. The demo program sets up six dummy training items. Each item has just two predictor variables for simplicity and so that the training data can be visualized easily. The training data is artificial, but you can think of it as representing the normalized age and income of a person. There are three data items that are class 0 and three items that are class 1. The demo program trains the logistic regression model using an iterative process. Behind the scenes, the demo is using the gradient ascent log likelihood optimization technique (which, as you'll see, is much easier than it sounds). After training, the demo displays the b0 value, sometimes called the bias, and the b1 and b2 values, sometimes called weights. The demo concludes by displaying the computed p values and the associated y values for each of the six training items. The demo logistic regression model correctly predicts the class labels of all six training items, which shouldn't be too much of a surprise because the data is so simple. This article assumes you have intermediate or better coding ability with a C-family language, but does not assume you know anything about logistic regression. The demo program is coded using Python, but you shouldn't have too much trouble refactoring the code to another language. The demo program is a bit too long to entirely present presented in this article, and the complete source code is available in the accompanying file download. Overall Demo Program Structure The overall demo program structure, with a few minor edits to save space, is presented in Listing 1. To edit the demo program, I used Notepad. Most of my colleagues prefer using one of the many nice Python editors that are available. I named the program log_reg_raw.py where the "raw" is intended to indicate that the program uses raw Python version 3 without any external ML libraries. The demo program begins by importing the NumPy library. The demo uses the Anaconda distribution of Python 3, but there are no significant version dependencies so any version of Python 3 with NumPy will work fine. # log_reg_raw.py # Python + NumPy logistic regression import numpy as np # helper functions def ms_error(data, W, b): def accuracy(data, W, b): def pred_probs(data, W, b): def pred_y(pred_probs): def main(): print("Begin logistic regression demo ")]) print("Training data: ") print(train_data) W = np.random.uniform(low = -0.01, high=0.01, size=2) b = np.random.uniform(low = -0.01, high=0.01) # train code here print("Training complete ") print("Model weights: ") print(W) print("Model bias:") print(b) print("") acc = accuracy(train_data, W, b) print("Model accuracy on train data = %0.4f " % acc) pp = pred_probs(train_data, W, b) np.set_printoptions(precision=4) print("Predicted probabilities: ") print(pp) preds = pred_y(pp) actuals = [1 if train_data[i,2] == 1 else \ 0 for i in range(len(train_data))] print("Train data predicted and actual classes:") print("Predicted: ", preds) print("Actual : ", actuals) print("End demo ") if __name__ == "__main__": main() # end script The main function begins by setting up hard-coded training data:]) The NumPy random seed is set to 0 so that the demo results are reproducible. The demo places the correct class 0/1 label values at the end of each training item, which is a bit more common than placing the label values at the beginning. In a non-demo scenario, instead of hard-coding the data, you'd likely read data from a text file, for example: train_data = np.loadtxt("my_data.txt", dtype=np.float32, delimiter=",") The demo sets up the b0 bias value and the b1 and b2 weight values by initializing them to random values between -0.01 and +0.01: W = np.random.uniform(low = -0.01, high=0.01, size=2) b = np.random.uniform(low = -0.01, high=0.01) Most of the demo code is related to training the logistic regression model to find good values for the W weights and the b bias. The training code will be explained in the next section of this article. After training has completed, the demo program displays the resulting weights and bias values and the classification accuracy of the model: print("Training complete ") print("Model weights: ") print(W) print("Model bias:") print(b) acc = accuracy(train_data, W, b) print("Model accuracy on train data = %0.4f " % acc) During training, you are concerned with model error but after training completes, the relevant metric is classification accuracy, which is just the percentage of correct predictions. Next, the demo uses the weights and bias values to compute and display the raw p values: pp = pred_probs(train_data, W, b) np.set_printoptions(precision=4) print("Predicted probabilities: ") print(pp) Printing the raw p values is optional but useful for debugging and also points out that if you write logistic regression from scratch, you have complete control over your system. The main function concludes by using the raw p values to compute and display the predicted class 0/1 label values. The actual class labels are also pulled from the training data and displayed: . . . preds = pred_y(pp) actuals = [1 if train_data[i,2] == 1 else \ 0 for i in range(len(train_data))] print("Train data predicted and actual classes:") print("Predicted: ", preds) print("Actual : ", actuals) print("End demo ") if __name__ == "__main__": main() # end script) Pulling the actual class label values from the training data matrix uses a Python shortcut. A more traditional style would be something like: actuals = [] for i in range(0, len(train_data)): if train_data[i,2] == 0: actuals.insert(i,0) else: actuals.insert(i,1) Python has many terse one-line syntax shortcuts because the language was designed to be used interactively. Training the Logistic Regression Model Training the model begins by setting up the maximum number of training iterations and the learning rate: lr = 0.01 max_iterations = 70 indices = np.arange(len(train_data)) print("Start training, %d iterations,\ LR = %0.3f " % (max_iterations, lr)) for iter in range(0, max_iterations): . . . The maximum number of training iterations to use and the learning rate (lr) will vary from problem to problem and must be determined by trial and error. The indices array initially holds values (0, 1, 2, 3, 4, 5). During training, the indices array is shuffled and used to determine the order in which each training data item is processed: np.random.shuffle(indices) for i in indices: # each training item X = train_data[i, 0:2] # inputs z = 0.0 for j in range(len(X)): z += W[j] * X[j] z += b p = 1.0 / (1.0 + np.exp(-z)) . . . The z value for each training item is calculated as the sum of the products of each weight value and the associated predictor value, as described previously. The bias value is then added. Then, the z value is used to compute the p value. The computation of p as 1.0 / (1.0 + e^(-z)) is called the logistic sigmoid function, and this is why logistic regression is named as it is. Next, each weight value and the bias value are updated: for j in range(0, 2): W[j] += lr * X[j] * (y - p) b += lr * (y - p) The update computation is short but not obvious. The logic is very deep. What is being used is called gradient ascent log likelihood maximization. The term (y - p) is the difference between a target value and a computed p value. Suppose the target y value is 1 and the computed p value is 0.74. You want the value of p to increase so that it is closer to y. The (y - p) term will be 1 - 0.74 = 0.26. You take this delta, multiply by the associated input value x (to take care of the sign of the input value), then multiply by a small fraction called the learning rate. The result is added to the weight being processed and then the computed p will get a bit closer to the target y. Very clever! During training, it's important to monitor the average error between computed p values and target y values so you can spot problems: . . . if iter % 10 == 0 and iter > 0: err = ms_error(train_data, W, b) print("epoch " + str(iter) + " Mean Squared Error = %0.4f " % err) print("Training complete ") The demo program computes the mean squared error between y and p using helper function ms_error. For example, if in one training item, the target y is 1 and the computed p is 0.74, then the squared error for the data item is just (1 - 0.74)^2 = 0.26 * 0.26 = 0.0676. The mean squared error is the average of the error values across all training items. Wrapping Up The primary advantage of logistic regression is simplicity. However, the major disadvantage of logistic regression is that it can only work well with data that is mostly linearly separable. For example, in the graph in Figure 2, if a training point with class label 0 (red) was added at (2.5, 3.5), no line could be drawn so that all the red dots are on one side of the classification boundary and all the blue dots are on the other side. That said, much real-life data is mostly linearly separable, and in such situations logistic regression can work well. Logistic regression can in principle be modified to handle problems where the item to predict can take one of three or more values instead of just one of two possible values. The is sometimes called multi-class logistic regression. But in my opinion, using an alternative classification technique, a neural network classifier, is a better option. Logistic regression can handle non-numeric predictor variables. The trick is to encode such variables using what is called 1-of-(N-1) encoding. For example, if a predictor variable is color, with possible values (red, blue, green), then you'd encode red as (1, 0), blue as (0, 1) and green as (-1, -1). The demo program uses gradient accent log likelihood maximization to train the logistic regression model. There are many other approaches including gradient descent error minimization, iterated Newton-Raphson, swarm optimization and L-BFGS optimization. In my opinion, all these alternative techniques produce roughly equivalent results, but the gradient ascent technique is simpler. Printable Format > More TechLibrary I agree to this site's Privacy Policy.
https://visualstudiomagazine.com/articles/2018/01/04/logistic-regression.aspx
CC-MAIN-2018-51
refinedweb
2,203
54.22
04 September 2012 20:52 [Source: ICIS news] (recast with paragraph 14) HOUSTON (ICIS)--US prices for epoxy resins ordered in September are expected to roll over from prices on material ordered in August, sources said on Tuesday. US prices on domestic epoxy resins ordered in August were assessed by ICIS at $1.42-1.52/lb ($3,131-3,351/tonne, €2,473-2,467/tonne) DEL bulk (delivered in bulk), and prices for material ordered in September are expected to remain at that level. “Everyone seems ok with where they are buying and selling material at,” a producer said. “Demand is still good across the board.” Buyers attributed the rollover to the continued presence of less-expensive import material, mostly from ?xml:namespace> “We got some sweet deals on material out of Most buyers, however, argued that pricing between domestic and imported materials has almost reached parity and doesn’t appear to be moving in either direction. Imported epoxy resins prices are assessed by ICIS at $1.38-1.45/lb Prices in northeast “There’s been no trouble getting Asian stuff in,” a buyer said. “If you want it, it’s there.” Upstream, feedstock benzene prices have come off record contract highs, falling by 40 cents/gal in September, but this is expected to take a month or two to trickle down to the epoxy resins market. “The pressure is off because benzene has started easing down,” a buyer said. “But we’re not looking at an imminent drop.” The record-high feedstock costs put epoxy resins producers in a difficult position, as the less expensive Asian material took away sales and domestic demand in the With the automotive and outdoors coatings seasons starting to wane, most sources don’t expect any improvement in demand, and some have already started to look at drawing down inventories.. “I think you’ll see some inventory draw-downs coming soon, but then again, I don’t think anyone let their inventories get too high,” an epoxy resins buyer said.
http://www.icis.com/Articles/2012/09/04/9592684/us-epoxy-resins-prices-steady-on-strong-imports-high.html
CC-MAIN-2014-49
refinedweb
339
59.03
This instructable includes solar panels powering water pump, Arduino and all electronics under or about $30. The system waters up to 6 planters. Materials: 1. Solar panels 6V 1W - $1.50 each. 8 needed. (ebay) 2. Photo frame from dollar store 11" x 14" - $1. (Dollar Tree/ any dollar store) 3. Arduino - $7. (ebay) 4. Box - $5. (IKEA?) 5. Tube - $2 (Orchard) 6. Galvanized nails - $1 (optional, Home Depot?) 7. Electrical wire salvaged from electronics - free / jumper wires if you have too (optional) 8. 1 channel relay - $1 (ebay) 9. Cheap trash bag or vinyl sheet - (optional) 10. Cylindrical container for pillar - $2 (I finished the Pringles) Total: ~$30 Disclaimer: I have no affiliation with any of the vendors mentioned above. Try to shop around and salvage items safely if you could! Step 1: Concept Initially I wanted to create an automatic irrigation system for a balcony garden. An array of solar panels would be enough to power the electronics but the position of the sun changes over time. Ideally, I want the solar panels to face the sun to increase the efficiency of powering the electronics. The most common use of soil moisture sensor is not reliable as it would corrode over time, hence, I intend to create a 3-phase fail proof system. 1. The Real Time Clock DS3231 for Arduino to turn on the pump at specific time 2. Galvanized nails tolerate corrosion (optional) 3. Cheap trash bags to channel over-watering I tested out the DS3231 Real Time clock module and it seems to work perfectly fine, not to mention that the relay shuts off if there is no current flowing within its circuit. Over-watering is not a big worry in this case. Step 2: Solar Panels This is a DIY version of everything so I am gonna build a solar panel. My most challenging part in this step is to find a cheap case/ frame. I went to various dollar stores and found that Dollar Tree does have a HUGE frame 11"x14" that fits all my panels. The panels do not move vigorously, toughness of the photo frame shouldn't be a big factor in this case. The material of the cover is glass, can withstand heat which is a huge plus! I separated the panels into 2 groups, 1 for the pump and the other for the Arduino. 6 panels run in series and parallel to provide about 12V 6W of DC power to pump, 2 panels run the same to provide 6V 2W of DC power for the Arduino. Include diodes if you want to. Solder carefully the terminals to the wires. Using solar panel tabs is highly recommended, I am using a 18AWG electrical wire because it is readily available for me. solar panel tabs are not that expensive anyways on ebay. Step 3: Calculating Volume Flow Rate You do not know how much water flows through the pipe at a given time. What I find at fault at some irrigation system is that over/under-watering occurs without calculating the volume flow rate of the piping system. I want to use a gallon of water to water all my plants per day. By using a 1/2-gallon water container, insert the pipe with the pump attached and submerged in a tank of water. turn on the pump and start the stopwatch. Stop the stopwatch when the water level reaches 1/2 gallon in the container. Use the time taken for the "switching on" time in the Arduino code. Step 4: Wiring the Components (See pictures for details) Step 5: Coding Refer to my RTC DS3231 Instructables to see how the timer function works. (Essential, and its really simple!). Just make sure that the sun intensity is high enough to power your pump, in this case I turn my pump on at 12pm. If you live in an area with a lot of shade, a simple solution is to attach a reflector that is perpendicular to the solar panel. First you need to "burn" the time. (Refer to my RTC DS3231 Instructables!) Next, study and upload the code below: #include <Wire.h> #include <DS3231.h> // Init the DS3231 using the hardware interface DS3231 rtc(SDA, SCL); // Init a Time-data structure Time t; int relay = 8; void setup() Serial.begin(9600); rtc.begin(); pinMode(relay, OUTPUT); digitalWrite(relay, HIGH); } void loop(){ t = rtc.getTime(); // Get data from the DS3231 if (t.hour == 12 && t.min == 00) //Setting alarm/timer at every 12pm, { digitalWrite(relay, LOW); delay(60000); //default water time set at 60 seconds, change accordingly, digitalWrite(relay, HIGH); delay(1000); } else { digitalWrite(relay, HIGH); } } Step 6: Assembly Cut a small hole in the cylindrical container and channel the pump-relay wires out. Submerge the pump into the box filled with water, place the solar panel under the sun and test it out! Step 7: Conclusion and Recommendations Conclusion Remember to position the solar panels directly to the late morning/ noon sun. This ensures that the solar panels can power at its highest potential. Mistakes/ Recommendations I did a solar tracker to track the sun once the voltage reaches its peak in the late morning. Unfortunately, the servos weren't able to support since the solar panels were too heavy. The panels were a bad 1-2 lbs. The servos I used were Micro SG90. *Sigh* I would recommend using a bigger servo or adding more servos, if the panels are heavy. Otherwise, use a reflector so to "double" the power of the solar panels during shady days. If the sun position does not change drastically throughout the season, the static position of the solar panels would not matter much. Use this website to see how the sun position changes in your area.... Discussions
https://www.instructables.com/id/Budget-Off-grid-Automatic-Watering-With-Solar-Pane/
CC-MAIN-2019-26
refinedweb
965
64.71
We talk a lot about reactive programming in the Angular realm. Reactive programming and Angular 2 seem to go hand in hand. However, for anyone not familiar with both technologies, it can be quite a daunting task to figure out what it is all about. In this article, through building a reactive Angular 2 application using Ngrx, you will learn what the pattern is, where the pattern can prove to be useful, and how the pattern can be used to build better Angular 2 applications.. What Is Reactive Programming? Reactive programming is a term that you hear a lot these days, but what does it really mean? Reactive programming is a way applications handle events and data flow in your applications. In reactive programming, you design your components and other pieces of your software in order to react to those changes instead of asking for changes. This can be a great shift. A great tool for reactive programming, as you might know, is RxJS. By providing observables and a lot of operators to transform incoming data, this library will help you handle events in your application. In fact, with observables, you can see event as a stream of events and not a one-time event. This allows you to combine them, for example, to create a new event to which you will listen. Reactive programming is a shift in the way you communicate between different parts of an application. Instead of pushing data directly to the component or service that needed it, in reactive programming, it is the component or service that reacts to data changes. A Word about Ngrx In order to understand the application you will build through this tutorial, you must make a quick dive into the core Redux concepts. Store The store can be seen as your client side database but, more importantly, it reflects the state of your application. You can see it as the single source of truth. It is the only thing you alter when you follow the Redux pattern and you modify by dispatching actions to it. Reducer Reducers are the functions that know what to do with a given action and the previous state of your app. The reducers will take the previous state from your store and apply a pure function to it. Pure means that the function always returns the same value for the same input and that it has no side effects. From the result of that pure function, you will have a new state that will be put in your store. Actions Actions are the payload that contains needed information to alter your store. Basically, an action has a type and a payload that your reducer function will take to alter the state. Dispatcher Dispatchers are simply an entry point for you to dispatch your action. In Ngrx, there is a dispatch method directly on the store. Middleware Middleware are some functions that will intercept each action that is being dispatched in order to create side effects, even though you will not use them in this article. They are implemented in the Ngrx/Effect library, and there is a big chance that you will need them while building real-world applications. Why Use Ngrx? Complexity The store and unidirectional data flow greatly reduce coupling between parts of your application. This reduced coupling reduces the complexity of your application, since each part only cares about specific states. Tooling The entire state of your application is stored in one place, so it is easy to have a global view of your application state and helps during development. Also, with Redux comes a lot of nice dev tools that take advantage of the store and can help to reproduce a certain state of the application or make time travel, for example. Architectural simplicity Many of the benefits of Ngrx are achievable with other solutions; after all, Redux is an architectural pattern. But when you have to build an application that is a great fit for the Redux pattern, such as collaborative editing tools, you can easily add features by following the pattern. Although you don’t have to think about what you are doing, adding some things like analytics through all your applications becomes trivial since you can track all the actions that are dispatched. Small learning curve Since this pattern is so widely adopted and simple, it is really easy for new people in your team to catch up quickly on what you did. Ngrx shines the most when you have a lot of external actors that can modify your application, such as a monitoring dashboard. In those cases, it is hard to manage all the incoming data that are pushed to your application, and state management becomes hard. That is why you want to simplify it with an immutable state, and this is one thing that the Ngrx store provides us with. Building an Application with Ngrx The power of Ngrx shines the most when you have outside data that is being pushed to our application in real time. With that in mind, let’s build a simple freelancer grid that shows online freelancers and allows you to filter through them. Setting Up the Project Angular CLI is an awesome tool that greatly simplifies the setup process. You may want to not use it but keep in mind that the rest of this article will use it. npm install -g @angular/cli Next, you want to create a new application and install all Ngrx libraries: ng new toptal-freelancers npm install ngrx --save Freelancers Reducer Reducers are a core piece of the Redux architecture, so why not start with them first while building the application? First, create a “freelancers” reducer that will be responsible for creating our new state each time an action is dispatched to the store. freelancer-grid/freelancers.reducer; } } So here is our freelancers reducer. This function will be called each time an action is dispatched through the store. If the action is FREELANCERS_LOADED, it will create a new array from the action payload. If it is not, it will return the old state reference and nothing will be appended. It is important to note here that, if the old state reference is returned, the state will be considered unchanged. This means that if you call a state.push(something), the state will not be considered to have changed. Keep that in mind while doing your reducer functions. States are immutable. A new state must be returned each time it changes. Freelancer Grid Component Create a grid component to show our online freelancers. At first, it will only reflect what is in the store. ng generate component freelancer-grid Put the following in freelancer-grid'); } } And the following in freelancer-grid.component.html: <span class="count">Number of freelancers online: {{(freelancers | async).length}}</span> <div class="freelancer fade thumbail" * <button type="button" class="close" aria-<span aria-×</span></button><br> <img class="img-circle center-block" src="{{freelancer.thumbnail}}" /><br> <div class="info"><span><strong>Name: </strong>{{freelancer.name}}</span> <span><strong>Email: </strong>{{freelancer.email}}</span></div> <a class="btn btn-default">Hire {{freelancer.name}}</a> </div> So what did you just do? First, you have created a new component called freelancer-grid. The component contains a property named freelancers that is a part of the application state contained in the Ngrx store. By using the select operator, you choose to only be notified by the freelancers property of the overall application state. So now each time the freelancers property of the application state changes, your observable will be notified. One thing that is beautiful with this solution is that your component has only one dependency, and it is the store that makes your component much less complex and easily reusable. On the template part, you did nothing too complex. Notice the use of async pipe in the *ngFor. The freelancers observable is not directly iterable, but thanks to Angular, we have the tools to unwrap it and bind the dom to its value by using the async pipe. This makes working with the observable so much easier. Adding the Remove Freelancers Functionality Now that you have a functional base, let’s add some actions to the application. You want to be able to remove a freelancer from the state. According to how Redux works, you need to first define that action in each state that are affected by it. In this case, it is only the freelancers reducer: is really important here to create a new array from the old one in order to have a new immutable state. Now, you can add a delete freelancers function to your component that will dispatch this action to the store: delete(freelancer) { this.store.dispatch({ type: ACTIONS.DELETE_FREELANCER, payload: freelancer, }) } Doesn’t that look simple? You can now remove a specific freelancer from the state, and that change will propagate through your application. Now what if you add another component to the application to see how they can interact between each other through the store? Filter Reducer As always, let’s start with the reducer. For that component, it is quite simple. You want the reducer to always return a new state with only the property that we dispatched. It should looks like this: import { Action } from '@ngrx/store'; export interface IFilter { name: string, email: string, } export const ACTIONS = { UPDATE_FITLER: 'UPDATE_FITLER', CLEAR_FITLER: 'CLEAR_FITLER', } const initialState = { name: '', email: '' }; export function filterReducer( state: IFilter = initialState, action: Action): IFilter { switch (action.type) { case ACTIONS.UPDATE_FITLER: // Create a new state from payload return Object.assign({}, action.payload); case ACTIONS.CLEAR_FITLER: // Create a new state from initial state return Object.assign({}, initialState); default: return state; } } Filter Component import { Component, OnInit } from '@angular/core'; import { IFilter, ACTIONS as FilterACTIONS } from './filter-reducer'; import { Store } from '@ngrx/store'; import { FormGroup, FormControl } from '@angular/forms'; import * as Rx from 'RxJS'; @Component({ selector: 'app-filter', template: '<form class="filter">'+ '<label>Name</label>'+ '<input type="text" [formControl]="name" name="name"/>'+ '<label>Email</label>'+ '<input type="text" [formControl]="email" name="email"/>'+ '<a (click)="clearFilter()" class="btn btn-default">Clear Filter</a>'+ '</form>', styleUrls: ['./filter.component.scss'], }) export class FilterComponent implements OnInit { public name = new FormControl(); public email = new FormControl(); constructor(private store: Store<any>) { store.select('filter').subscribe((filter: IFilter) => { this.name.setValue(filter.name); this.email.setValue(filter.email); }) Rx.Observable.merge(this.name.valueChanges, this.email.valueChanges).debounceTime(1000).subscribe(() => this.filter()); } ngOnInit() { } filter() { this.store.dispatch({ type: FilterACTIONS.UPDATE_FITLER, payload: { name: this.name.value, email: this.email.value, } }); } clearFilter() { this.store.dispatch({ type: FilterACTIONS.CLEAR_FITLER, }) } } First, you have made a simple template that includes a form with two fields (name and email) that reflects our state. You keep those fields in sync with state quite a bit differently than what you did with the freelancers state. In fact, as you have seen, you subscribed to the filter state, and each time, it triggers you assign the new value to the formControl. One thing that is nice with Angular 2 is that it provides you with a lot of tools to interact with observables. You have seen the async pipe earlier, and now you see the formControl class that allows you to have an observable on the value of an input. This allows fancy things like what you did in the filter component. As you can see, you use Rx.observable.merge to combine the two observables given by your formControls, and then you debounce that new observable before triggering the filter function. In simpler words, you wait one second after either of the name or email formControl have changed and then call the filter function. Isn’t that awesome? All of that is done in a few lines of code. This is one of the reasons why you will love RxJS. It allows you to do a lot of those fancy things easily that would have been more complicated otherwise. Now let’s step to that filter function. What does it do? It simply dispatches the UPDATE_FILTER action with the value of the name and the email, and the reducer takes care of altering the state with that information. Let’s move on to something more interesting. How do you make that filter interact with your previously created freelancer grid? Simple. You only have to listen to the filter part of the store. Let’s see what the code looks like. import { Component, OnInit } from '@angular/core'; import { Store } from '@ngrx/store'; import { AppState, IFreelancer, ACTIONS } from './freelancer-reducer'; import { IFilter, ACTIONS as FilterACTIONS } from './../filter/filter-reducer'; import * as Rx from 'RxJS'; @Component({ selector: 'app-freelancer-grid', templateUrl: './freelancer-grid.component', styleUrls: ['./freelancer-grid.component.scss'], }) export class FreelancerGridComponent implements OnInit { public freelancers: Rx.Observable<Array<IFreelancer>>; public filter: Rx.Observable<IFilter>; constructor(private store: Store<AppState>) { this.freelancers = Rx.Observable.combineLatest(store.select('freelancers'), store.select('filter'), this.applyFilter); } applyFilter(freelancers: Array<IFreelancer>, filter: IFilter): Array<IFreelancer> { return freelancers .filter(x => !filter.name || x.name.toLowerCase().indexOf(filter.name.toLowerCase()) !== -1) .filter(x => !filter.email || x.email.toLowerCase().indexOf(filter.email.toLowerCase()) !== -1) } ngOnInit() { } delete(freelancer) { this.store.dispatch({ type: ACTIONS.DELETE_FREELANCER, payload: freelancer, }) } } It is no more complicated than that. Once again, you used the power of RxJS to combine the filter and freelancers state. In fact, combineLatest will fire if one of the two observables fire and then combine each state using the applyFilter function. It returns a new observable that do so. We don’t have to change any other lines of code. Notice how the component does not care about how the filter is obtained, modified, or stored; it only listens to it as it would do for any other state. We just added the filter functionality and we did not add any new dependencies. Making It Shine Remember that the use of Ngrx really shines when we have to deal with real time data? Let’s add that part to our application and see how it goes. Introducing the freelancers-service. ng generate service freelancer The freelancer service will simulate real time operation on data and should looks like this. import { Injectable } from '@angular/core'; import { Store } from '@ngrx/store'; import { AppState, IFreelancer, ACTIONS } from './freelancer-grid/freelancer-reducer'; import { Http, Response } from '@angular/http'; @Injectable() export class RealtimeFreelancersService { private USER_API_URL = '' constructor(private store: Store<AppState>, private http: Http) { } private toFreelancer(value: any) { return { name: value.name.first + ' ' + value.name.last, email: value.email, thumbail: value.picture.large, } } private random(y) { return Math.floor(Math.random() * y); } public run() { this.http.get(`${this.USER_API_URL}51`).subscribe((response) => { this.store.dispatch({ type: ACTIONS.FREELANCERS_LOADED, payload: response.json().results.map(this.toFreelancer) }) }) setInterval(() => { this.store.select('freelancers').first().subscribe((freelancers: Array<IFreelancer>) => { let getDeletedIndex = () => { return this.random(freelancers.length - 1) } this.http.get(`${this.USER_API_URL}${this.random(10)}`).subscribe((response) => { this.store.dispatch({ type: ACTIONS.INCOMMING_DATA, payload: { ADD: response.json().results.map(this.toFreelancer), DELETE: new Array(this.random(6)).fill(0).map(() => getDeletedIndex()), } }); this.addFadeClassToNewElements(); }); }); }, 10000); } private addFadeClassToNewElements() { let elements = window.document.getElementsByClassName('freelancer'); for (let i = 0; i < elements.length; i++) { if (elements.item(i).className.indexOf('fade') === -1) { elements.item(i).classList.add('fade'); } } } } This service is not perfect, but it does what it does and, for demo purposes, it allows us to demonstrate a few things. First, this service is quite simple. It queries a user API and pushes the results to the store. It is a no-brainer, and you don’t have to think about where the data goes. It goes to the store, which is something that makes Redux so useful and dangerous at the same time—but we will come back to this later. After every ten seconds, the service picks a few freelancers and sends an operation to delete them along with an operation to a few other freelancers. If we want our reducer to be able to handle it, we need to modify it: import { Action } from '@ngrx/store'; export interface AppState { freelancers : Array<IFreelancer> } export interface IFreelancer { name: string, email: string, } export const ACTIONS = { LOAD_FREELANCERS: 'LOAD_FREELANCERS', INCOMMING_DATA: 'INCOMMING_DATA', DELETE_FREELANCER: 'DELETE_FREELANCER', } export function freelancersReducer( state: Array<IFreelancer> = [], action: Action): Array<IFreelancer> { switch (action.type) { case ACTIONS.INCOMMING_DATA: action.payload.DELETE.forEach((index) => { state.splice(state.indexOf(action.payload), 1); }) return Array.prototype.concat(action.payload.ADD, state);; } } Now we are able to handle such operations. One thing that is demonstrated in that service is that, of all the process of state changes being done synchronously, it is quite important to notice that. If the application of the state was async, the call on this.addFadeClassToNewElements(); would not work as the DOM element would not be created when this function is called. Personally, I find that quite useful, since it improves predictability. Building Applications, the Reactive Way Through this tutorial, you have built a reactive application using Ngrx, RxJS, and Angular 2. As you have seen, these are powerful tools. What you have built here can also be seen as the implementation of a Redux architecture, and Redux is powerful in itself. However, it also has some constraints. While we use Ngrx, those constraints inevitably reflect in the part of our application that we use. The diagram above is a rough of the architecture you just did. You may notice that even if some components are influencing each other, they are independent of each other. This is a peculiarity of this architecture: Components share a common dependency, which is the store. Another particular thing about this architecture is that we don’t call functions but dispatch actions. An alternative to Ngrx could be to only make a service that manages a particular state with observables of your applications and call functions on that service instead of actions. This way, you could get centralization and reactiveness of the state while isolating the problematic state. This approach can help you to reduce the overhead of creating a reducer and describe actions as plain objects. When you feel like the state of your application is being updated from different sources and it starts to become a mess, Ngrx is what you need. Understanding the basics Reactive programming is a shift in the way different parts of an application communicate with each other. Instead of pushing data directly to the component or service that needed it, in reactive programming, it is the component or service that reacts to data changes. Ngrx is a set of Angular libraries for reactive extensions. Two popular Ngrx libraries are Ngrx/Store, an implementation of the Redux pattern using the well-known RxJS observables of Angular 2, and Ngrx/Effects, a library that allows the application to communicate with the outside world by triggering side effects.
https://www.toptal.com/angular-js/ngrx-angular-reaction-application
CC-MAIN-2022-40
refinedweb
3,118
56.55
Feature phone sitemaps You should not create a feature phone sitemap unless you have a specific feature phone version of a page designed for feature phones (non-smartphones). You can create a mobile sitemap for feature phones using the sitemap protocol along with an additional tag and namespace requirement. You can create a separate sitemap listing your video content, or you can add information about your video content to an existing sitemap—whichever is more convenient for you. A sample mobile sitemap that contains a single entry is shown below. <?xml version="1.0" encoding="UTF-8" ?> <urlset xmlns="" xmlns: <url> <loc></loc> <mobile:mobile/> </url> </urlset> Please be aware of the following guidelines for making a mobile sitemap: - If you choose to use a sitemap generation tool, first check that it can create mobile sitemaps. - Include the <mobile:mobile/>tag to make sure that your mobile URLs are properly crawled. - URLs serving multiple markup languages can be listed in a single sitemap. - Search Console automatically detects and supports the following markup languages for mobile content: XHTML mobile profile (WAP 2.0), WML (WAP 1.2), cHTML (iMode) XML namespace xmlns:mobile="
https://support.google.com/webmasters/answer/6082207?hl=en&ctx=cb&src=cb&cbid=15tu0bfod2l4i&cbrank=1&rd=1
CC-MAIN-2016-40
refinedweb
192
61.77
How to regsiter custom components? I’ve created a custom component, and I’d like to know how to register it, if possible, quasar/src/via install.jsif that it the correct way to do it? In most of the online tutorials for Vue 2.0 and their own guides, they’re mostly using template: ...but I get errors about runtime mode not allowing that. So I switched to use of the <template />tag as with all of the Quasar components - but I cannot find a way to register my component official for use. Cheers! Argh, nevermind, I figured it out. I don’t need to register my component, I was referencing it incorrectly within another component that was using it. You should give an example how you achieved this, for future people searching of a similar issue - davewallace @Dobromir Good point! Ok here goes, and maybe this can serve as a time for some QA on what I’ve done to. Happy for improvement suggestions Problem I want to create a custom, re-usable component that follows Vue.js conventions as well as Quasar conventions, where they apply. To add context, many online forums, articles & posts at the time of this post’s writing focus on use if the templateproperty within a component definition, which has caused a runtime error unless set up correctly. Solution Parent component <template><!-- use of tag-based templating within this parent component --> <div><!-- single root node, required --> <h1>A Route View</h1> <h2>Here is a custom component:</h2> <!-- this component differs from the basic input component provided by Vue.js and Quasar in that it will contain a button which can be used to auto-populate the input value with the return value of supplied function. for this example, i have not implemented the supplied function part. --> <child-component</child-component> </div> </template> <script> // import our custom component to use within this component import ChildComponent from '../ChildComponent' export default { name: 'ParentComponent', // be sure to list our custom component here, i've used kebab-case here as // the name to be used in the above <template> definition. ComponentCamelCase // is used for the actual component definition components: { 'child-component': ChildComponent } } </script> Child component <template><!-- use of tag-based templating within this child component --> <div : <input : <label v-{{label}}</label> <a @</a> </div> </template> <script> export default { methods: { /** * Given a generator function, this component's inputValue is set * to the return value of the generator. * * @param Function **/ generateValue: function (generator) { // set this component's value prop as result of supplied generator function return } }, props: { 'value': String, 'required': Boolean, 'disable': Boolean, 'label': String, 'inputclasses': String, 'componentclasses': String }, computed: { model: { get () { return this.value }, set (value) { this.$emit('input', value) } } } } </script> <style> .icon-generate { float:right; width:16px; height:16px; border:1px dashed red; } </style> Recap Counter to tutorials & posts I’ve read through, no templateproperty on ChildComponentis used, instead tag-based templating combined with component names & importing are used. My original post asked about installing components in some official way, as with Quasar components via quasar/src/install.js, but nothing was needed. Screenshot Final note If anyone can offer suggestions around that ChildComponentbeing able to accept a per-instance generatorfunction into it’s generateValuemethod - your help would be much appreciated. To illustrate my needs by example, the parent component, when including the child, might want to auto-populate the input field with a name picked randomly from a default set of names. I can’t directly add that logic into the child component because then it only has a single use. I might also want to use that child component again and be able to pass a different function into generateValuewhich returns a fruit picked randomly from a default set of fruits. - rstoenescu Admin There’s no need for a “name” prop in your component. That’s useful only when you write some recursive templates. Since we got Webpack and Vue we can use *.vue files which transforms your components into render functions. So no need to include Vue compiler into your app’s final code. This eases up on the size of your app and takes full advantage of Vue’s speed. If you need to compile templates at runtime speed of execution will drop. Vue compiler is not included by default, that’s why “template” prop won’t work. Examples you’ve seen are just examples to get you started quickly, but you don’t need the “template” prop since we got *.vue files. When you use *.vue files the <template>tag is parsed and converted to a render function by vue-loader. Writing and using components is easy. Write a *.vue file with your component, then include it in whatever other component you need it, through the componentsprop. Let’s say we wrote clock.vue. We need that component in our page, so on the page’s *.vue file we write: import Clock from '...path..to..clock.vue' export default { .... components: { Clock } } …and we use it like this in the page’s <template>tag: <template> .... <clock ...></clock> .... </template> My recommendations is to leave Vue tutorials aside. Read the official documentation website for Vue. It will suffice! Read it inside out before digging in. Very clear, thank you @rstoenescu ! Looking at the nameproperty again, @rstoenescu - is there any detriment to naming the component? If not, in giving it a name the Vue dev tools uses the name (converting to camelCase) property to populate nodes. My preference would be to supply the name for this reason, unless there’s a good reason to avoid it! - rstoenescu Admin Using the nameproperty has no place in talking about how to require/register a component because it doesn’t affect it in any way. This means however you can use and there’s no reason or recommendation not to. It’s just out of the scope of custom component discussion.
http://forum.quasar-framework.org/topic/52/how-to-regsiter-custom-components
CC-MAIN-2017-17
refinedweb
982
56.76
> Date: Mon, 3 Mar 1997 11:16:34 -0500 > From: "Barry A. Warsaw" <bwarsaw@anthem.cnri.reston.va.us> > To: MHammond@skippinet.com.au > Cc: doc-sig@python.org > Subject: Re: [PYTHON DOC-SIG] Templatising gendoc, and more. > Reply-to: "Barry A. Warsaw" <bwarsaw@CNRI.Reston.Va.US> > > >>>>> "MH" == Mark Hammond <MHammond@skippinet.com.au> writes: > > MH> Couple of quick questions - how can a ni package provide doc > MH> strings? Eg, a package named "xyz" is a directory, _not_ a > MH> file. I added a convention to gendoc that a file called > MH> "__doc__" in a packages directory will be read, and treated as > MH> docstrings for the module itself (which makes lots of sense to > MH> me, as then you dont need to distribute it) Also, I "flatten" > MH> the "__init__" module, so that all docstrings and methods are > MH> documented in the package itself - ie, __init__ never gets a > MH> mention in the docs. Do these sound OK? > > I think Ken M. was the first to champion flatting of __init__ into the > package module. The argument is that flattening is the most natural > way to think about package modules and most package authors are going > to want to this, so it makes sense to be the default behavior. I even > went so far as to add the couple of lines to ni.py to make this > happen, but Guido nixed it, partially because there wasn't enough > experience with ni.py to back-up the `most common usage' argument. > > In any case, when I packagized some parts of Grail, I added the > following gross hack (taken from fonts/__init__.py): :-) Ive come up with the same hacks, without ever seeing fonts :-) Ive _always_ done this in __init__, and maybe we would find that the "most common usage" argument is more pervasive now? > What I would to do ni, is automatically put any non-underscore > prefixed symbol appearing in __init__.py into the package module's > namespace. I would also put __doc__ into that namespace. It's all of > a two or three line change to ni.py. Also, you might provide some way > of controlling what gets put into the package's namespace from > __init__.py. Im sure many people will dis-agree here, but IMO docstrings in the code is cute, but on-line browsing of docstrings wont be used anywhere near as much as generated documentation. I think it most important docstrings be kept near the sources for maintenance, rather than browsing. I'd ask if flattening of the doc strings at run-time is really worth it? Another interesting "feature" of docstrings is that the sources often get _twice_ as big (Python is partly to blame here, as the programs themselves are often so small :-). If you consider the win32com stuff, not only are the sources twice as big, all the doc strings are _also_ in the generated HTML. And at run-time, obviously, they take more memory. And the more keen you are with the documentation, the more penalty you pay. (OK - were not talking too much, and Im not really _that_ concerned, but...) This is one reason why I like my new little __doc__ file convention. It means I can put "overview" type information in a seperate file that is included in the HTML build, but not part of the sources (but still very close). Browsers wont see it, but people browsing wont be looking for "overview" information anyway - they are more likely to be looking for a specific object... _______________
https://mail.python.org/pipermail/doc-sig/1997-March/000219.html
CC-MAIN-2018-09
refinedweb
588
72.05
NAME devclass_get_drivers - get a list of drivers in a devclass SYNOPSIS #include <sys/param.h> #include <sys/bus.h> int devclass_get_drivers(devclass_t dc, driver_t ***listp, int *countp); DESCRIPTION Retrieve a list of pointers to all driver instances currently in the devclass and return the list in *listp and the number of drivers in the list in *countp. The memory allocated for the list should be freed using free(*listp, M_TEMP), even if *countp is 0. RETURN VALUES Zero is returned on success, otherwise an appropriate error is returned. SEE ALSO devclass(9), device(9) AUTHORS This manual page was written by Nate Lawson.
http://manpages.ubuntu.com/manpages/lucid/en/man9/devclass_get_drivers.9freebsd.html
CC-MAIN-2014-41
refinedweb
103
55.44
Please I need help. I have completed the code for this assignment and the test. However, for the Sales app it only lets me enter an amount for item four and not 1 -4. For the Sales Test I keep getting an error in the code and don't know how to fix it. Please help!!! Here is the Test and the error message.Here is the Test and the error message.PHP Code: import java.util.Scanner; public class Sales { // calculate the sales for items sold public static void main(String[] args ) { Scanner input = new Scanner( System.in ); double gross = 0.0, earnings; // total earnings int item = 0, numberSold; // item number sold, while ( item < 4) item++; // prompt the user for input and obtain item sold from user System.out.printf( "Enter number sold of item #%d: ", item ); numberSold = input.nextInt(); // determine gross sales of each item and add to the total sold if ( item == 1) gross = gross+numberSold * 239.99; else if ( item == 2 ) gross = gross+numberSold * 129.75; else if ( item == 3 ) gross = gross+numberSold * 99.95; else if ( item == 4 ) gross = gross+numberSold * 350.89; earnings = (0.09 * gross) + 200; // calculates earnings System.out.printf( "Earnings this week are: $%.2f\n", earnings ); }// end method }// end class Sales SalesTest.java:11: cannot find symbolSalesTest.java:11: cannot find symbolPHP Code: public class SalesTest { // main method begins program execution public static void main( String[] args ) { // create Sales object Sales application = new Sales(); application.calculateSales(); } // end main } // end class SalesTest symbol : method calculateSales() location class Sales application.calculateSales();
http://forums.devshed.com/java-help/938847-sales-commission-test-application-last-post.html
CC-MAIN-2014-23
refinedweb
258
69.38
Hi friends, For my homework I must try to guess the possible output but its very difficult. This is the code package mix5; public class Mix5 { int counter = 0; public static void main(String[] args) { int count = 0; Mix5[] m5a = new Mix5[20]; int x = 0; while ( x < 9) { m5a [x] = new Mix5(); m5a[x].counter = m5a[x].counter + 1; count = count + 1; count = count + m5a [x].maybeNew(x); x = x + 1; } System.out.println( count + " " + m5a [1].counter); } public int maybeNew (int index) { if (index < 5 ){ Mix5 m5 = new Mix5(); m5.counter = m5.counter + 1; return 1; } return 0; } } The output is "14 1" But how create eclipse this output? I tried and tried but it's not working for me. Can anybody explain this output? (Head First Java pg 90) Thanks Regards Manzara
http://www.javaprogrammingforums.com/java-theory-questions/26459-why-create-eclipse-output.html
CC-MAIN-2017-30
refinedweb
135
84.57
I have read comments all over the place about the bad UI integration of the ODF translator add-in. I posted a discussion of this on my blog yesterday including what could be done to give ODF a more prominent status in the UI: I thought I should include the next paragraph of David’s comment, which you helpfully snipped: "MS is just forgetting that the OD format was designed from the start to be as much independent as possible from the implementation of the office suite applications, and that’s why it was a great basis for a standard." The big difference isn’t when ODF was standardised. It was standardised when it was ready. The big difference is that ODF is designed in a generic manner, and the format represents the data. OXML has been designed for Office, and the format represents the data structures Office uses internally. Alex, Thanks for including that, although I had already included the link to David’s full post so that folks could read the rest of what Dave said as well as the original question he was responding to. My point was that contrary to what a lot of people have claimed, the ODF format was designed by a much smaller group of people (primarily Sun engineers) working on StarOffice. That’s why ODF lacks a number of features that are present in other Office applications. I really wanted to make sure that it was clear that the goals of the two formats are very different. The OASIS committee work wasn’t started until after the format had already been designed. The committee (while it did make some changes primarily to tag names, namespaces) made the decision on a number of big issues to push the spec through to completion withough finishing those issues. This is why the claims some folks are making about ODF fully supporting legacy documents is false. -Brian Brian – Part of the reason you are seen as using FUD is that you seem to intentionally confuse the possible features in those billions of documents with the actual features. I have worked for a major law firm that used Word extensively, and I doubt that 0.1 % (one tenth of a percent) of the Word documents created by that Firm use any feature that would not render perfectly under these tests. MS Word in particular has many, many features that virtually nobody uses, so the issue isn’t just what could be done but what is done. With Excel, that is probably somewhat less true, but it is still a relatively small (perhaps 10% to be generous) of the spreadsheets that do anything out of scope of ODF. I do not have enough personal experience with PowerPoint to make even an educated guess at how many PowerPoint presentations use specialized techniques. This is not to say that a) ODF doesn’t need work, as it certainly does, or that b) the claims you quote aren’t meant to give an impression that is not quite correct, as they certainly are. Nonetheless, if you can confidently say that there are billions of MS Word documents and Excel spreadsheets out there in the world, I can confidently respond that there billions of documents and spreadsheets which would render at least as well to ODF as to OpenXML. Part of the reason is that Microsoft has not always cared as much as you seem to about fidelity with older formats, and I have even had to use other word processors such as WordPro to occasionally recover an MS Word document that is from an older version to convert it to an MS Word document that Word 2003 will read properly. Microsoft’s claims that all previous MS Office formats will convert with complete fidelity to Open XML, or even to MS Office 2007 "native" formats are also extremely unlikely. I just wish both sides of this debate would quit with the over exaggerated claims for their side and the over exaggerated issues with the other side. Just get on with making the best product/format you can and stop with the FUD, and that goes for both the ODF Alliance (of which I am a member) and Microsoft (of which I am a Business Partner and customer. Ben, converting a logical data model from a binary format to an XML format without ANY loss of data or structure is perfectly possible. In case of Office, the original format is quite complex, so there’s always room for mistakes, but the process is really straightforward. I believe there is no other option for MS than to try and achieve 100 % compatibility. ODF on the other hand has a different logical structure, and a conversion process has to match data to similar, but not identical structures. What this means is that – You lose parameters on the way (this is not just internal stuff, these are parameters that users modify) – You have to convert to other elements – The conversion process is usually based on trial and error: what’s the structure in the other format that makes the original data look most like the original document, although the mapping is not perfect? The effect is that your document does look different, even if you used only features that are generally available in both formats. Also, if you’re a hardcore template and stylesheet user, you’ll have a hard time tweaking a converted document, because the result of the conversion will not be what you’d have done had you been using that program in the first place. Worse, if you use macros that depend on the original structure, they are just not going to work! I hear Suse 10 ships with a OO.org version that understands MS Office VBA, so this is an issue for interoperability too. But even if macros were not supposed to be interoperable, ODF proponents have always been claiming that it’s an evil move of MS to create their own format instead of joining ODF, complete with all kinds of monopoly abuse accusations. A lot of these complaints showd up at. ODF supporters repeatedly demanded clarifications about what exactly the problem would be when converting .doc to ODF. Jason was accused of lying quite a few times, by people who claimed that ODF is 100 % compatible. I had a discussion with a Sun guy on Jason’s blog who claimed that ODF had been created so much with MS Office in mind that there wouldn’t be any problems. The conclusion was always the same: MS did the wrong thing, and customers (especially from the public sector) should avoid the Open XML format and force MS to switch to ODF, because ODF has absolutely no disadvantages. At the same time, MS has not even started to attack ODF, the were merely making a point that they had good reasons to create a competing format. So, honestly, who’s creating the FUD? Stefan — the blame goes both ways; I think that was Ben’s point. But what Brian has been posting the past few weeks is pure FUD; factually correct, but quite misleading. Bruce, how is it misleading? I’m talking about key pieces of technology that people use every day. Heck, look at the presentations the IBM guys have published around ODF. They are all stored as PDF, not .odp (the ODF presentation format). Why do they not save them as .odp? Well given that they use tables heavily in the presentation and tables aren’t supported in ODF, that’s probably the reason. I’m calling attention to the massively overly exaggerated compatibility that ODF supporters claim. No one has really looked at the deep details here and examined how the end user is affected. Everyone just looks at a few wordprocessing documents with heavy formatting and says "wow, it works". Well what about those spreadsheets that investment banks use to build their financial models? Or the presentations built by government agencies to provide briefing materials on a particular research area? These are really key scenarios that don’t work today with ODF. I guess you’re right, it should cause people who’ve already decided to mandate ODF to be a bit fearful, uncertain, and doubtful. But that’s not because of lies or even exaggerations. It’s because the ODF format is not yet complete. I think the OASIS ODF group is doing a great job, and in looking at the newsgroup postings there are a number of smart people there who really care about solving these problems. They haven’t done it yet though, and are still over a year away (at least). So it’s disingenuous for folks like the ODF alliance and Gary Edwards to claim that ODF is this magic bullet today, when that just isn’t the case. -Brian It seems as though what separates ODF from OpenXML is not so much the format itself, but the tone of the respective formats’ proponents. Brian’s equability in entertaining and responding to the sometimes feverish pitch here is commendable. It’s also what keeps me coming back to this blog. Thanks! It’s FUD because you consistently make huge, unsupportable, generalizations based on small pieces of evidence. In every one of these posts you start with a legitimate observation, and then move on the point you really want to make: don’t use ODF. I’m not saying nobody on the other side does that, but that hardly excuses it. I think it’s fair to say that there will be interoperability trouble spots in BOTH of the formats, and the best way to solve them is to for the two groups to engage with each other, rather than to always look to score points in the blogosphere. I picked apart the new OXML/Office 2007 citation and bibliographic support on my blog, but I hope it’s clear I did this not to make MS look bad, but because I care deeply that you guys (and the ODF world too) get it right. And I hope you guys actually listen; you might regret it if you don’t. BTW, when I do presenttions I use either XHTML (typically for teaching) and Keynote-produced PDF. Why? Because I think both solutions are superior in their domains than both PowerPoint and the ODF presentation apps! So I think it’s problematic to use the fact that IBM happens to post presentations in PDF to reach any meaningful conclusion. Bruce, I am attacking the public statements that the ODF pushers have made around the ability of ODF to represent the existing base of Office documents. It’s simply not true, and I don’t think anyone can argue that. They are spreading misinformation, and are actually getting governments to create policies mandating ODF. That is irresponsible, as ODF is not able to support a number of key scenarios people do today. You say that you use Keynote for creating your presentations (Apple of course is working closely with us in Ecma and you can bet that they are making sure that the Open XML format is fully interoperable). I’m not as focused on the application you choose, but on the format you choose. There is most likely no way that you could represent the majority of those keynote presentations in ODF, and that is my point. Instead of ODF, you need to use another format like PDF or XHTML. How can the ODF pushers out there claim that ODF is this silver bullet when it can’t be used in these common scenarios? Like I said before, the level of analysis here seems to have stopped at the base wordprocessing document. As you look at more complex international features in wordprocessors, and core functionality in spreadsheets and presentations, the house of cards starts to fall. I really appreciate all the valuable feedback you’ve given on Open XML, and I thank you for that. You have to understand though that much of what I’m talking about is related to the fact that there is a huge push by IBM and Sun to get governments to mandate ODF. That’s why I have to call attention to the fact that ODF isn’t really even done yet, and they are being dishonest when they make claims around the richness and 100% compatibility of ODF. We have always been very clear about the design goals of Open XML, and what it will achieve. I would like to see the same come out of the ODF camp. That’s why I included that quote from David Faure. It was a straight forward honest answer, and didn’t try to make the exaggerations I’ve seen from other parts of the ODF community. As I said before, there are a lot of smart people who have worked on ODF and continue to work on it. I don’t mean to offend them with my statements. Those folks should be proud of what they’ve accomplished so far, but they should also be upset at the pushers out there who are seriously over promising. They are putting ODF in a position where it will look bad. Why do these people constantly rip apart the Ecma process and the Open XML format? We have never once said that you can’t have both formats. We’ve never said that ODF doesn’t work in certain scenarios. All we’ve said is that there are a number of scenarios where Open XML is necessary, and it doesn’t make sense to mandate around ODF on its own. It’s clear that there are strong business interests on both sides here. I think the key difference is that the ODF camp is pushing really hard for exclusion of Open XML; while the Open XML camp is just saying that there are a ton of legitimate scenarios where Open XML is required. Allow for both formats. Participate in projects like the Open XML -> ODF translator. Stop trying to delegitimize the hard work that is going on in Ecma, and be honest about goals and shortcomings from both sides. Stop complaining about every step we take to be better about openness and interoperability. Would I start complaining if OpenOffice build in support for Open XML? No way… I’d be thrilled! Why do all the ODF pushers complain then when we respond to what everyone was asking us to do and start up an open source project to support ODF in Office. Is it perfect? No. But for crying out loud, it’s an open source project that is completely transparent and anyone can participate in. It’s also just in a prototype form with a ton more work to go so either participate or be patient. Neither format is perfect, and neither will be. Look at all the issues and lacking features in XHTML. It’s still one of the most valuable formats in the world though, and I don’t think anyone would argue that. All I’m asking is for the ODF folks to admit there are legitmate needs for the Open XML formats, and that the work going on in Ecma is a huge benefit to the community. We’ve taken what was essentially a locked up binary format that was used to store billions of the worlds documents, and we’ve XMLized it; fully documented it; started to build tools around it; and removed any licensing/IP issues. That’s a good thing. -Brian OK, Brian, fair enough on your final point. I agree; it *is* a good thing. FWIW, IIRC, the quote from David Faure actually disputed the notion that MS had earlier been pushing that ODF was essentially a Sun-only affair. I think you’re being a bit selective there. If you read farther down he said: "MS is just forgetting that the OD format was designed from the start to be as much independent as possible from the implementation of the office suite applications, and that’s why it was a great basis for a standard." BTW, read the ODF TC Charter for the design goals. I think they actually are straightforward, and straightforwardly different than your’s. Brian, I couldn’t agree more with your last point above. It is definitely a good thing. However you are talking about this issue as if there were no dismissive and biased statements from Microsoft about ODF, and about other competitors etc as "hobbyist" attempts. As an example you keep stating that ODF is based on Sun Staroffice format, selectively quoting David Faure’s post and ignoring the statements in the Openoffice.org XML project website as to what the initial goals were. At this stage of the debate you can not claim ignorance as this has been repeatedly mentioned in your blog. You work for Microsoft and there is quite a bit of history on how you compete in the market. At some point you will realise that biased statements like these actually discredit you in front of many people. Hi Oscar and Bruce, Sorry if this hasn’t been clear, but the point of this post was to try to clear up where my criticisms lie. I actually truly respect the work going on in OASIS on the ODF Technical Committee. In fact, I want to apologize if any of the statements I’ve made in the past year have been offensive to those folks. I’ve tried to always be clear that I have no problems with ODF as a document format. I just don’t see it as a document format that would work for our needs. My big issue is with the folks making claims like I mentioned above. You have the press release on ISO making claims that I don’t think are achievable with the current version of ODF. The ODF Alliance is pushing folks to mandate only ODF, and I definitely have issues with that. I want for those folks to acknowledge what the goals of the OASIS TC were, since (as you mention) they have been fairly well stated for some time. Compatibility with the existing base of Microsoft Office documents was not a goal, and so the ODF Alliance/Fellowship/Foundation probably shouldn’t be claiming that it does that. Again, I apologize if I’ve offended any of the technical folks working on the formats or tools. I just need to show why Open XML is a needed format, and why the standardization in Ecma is so valuable. -Brian "ODF is this magic bullet today" Hey, that’s catchy! Mind if i use it? Just to confirm your suspicions, the "interop eXtensions" proposal does exist and has been quietly making the rounds of OASIS ODF TC members. The iX proposal itself comes out of our work in the Massachusetts RFi trials, where high fidelity “roundtripping” of ODF files is a priority. Yes, the conversion of Microsoft binary file formats has long been problematic for non Microsoft applications. The Foundation’s ODF Plug-in however isn’t your normal “non Microsoft” application. The plug-in works inside MSOffice applications, taking advantage of your excellent MSOffice Add On architecture. It turns out that converting MSBinaries to ODF from this inside position is a far more effective approach than most would think possible. Still, to achieve the high fidelity roundtripping quality of ODF documents needed by Massachusetts (and perhaps anyone else wanting to transition to ODF), other ODF ready applications will want to consider engaging the interop eXtensions. The consumers of ODF solutions certainly will want this high level of conversion and roundtripping fidelity across all ODF ready applications and document processing chains. This may be a difficult concept for you to wrap your mind around Brian, but the ODF Plug-in makes MSOffice a first class ODF ready application suite. It’s so good, i actually think MSODF should be the reference implementation of the ODF spec, but that’s just me. The interop eXtensions simply insure that all other ODF 1.2 iX ready applications are able to transparently exchange and round trip MSODF files with the same high fidelity that the ODF Plug-in enables in MSOffice handling of ODF documents. There is no doubt in my mind that the world’s digital information is going to move, sooner or later, into a portable document file format based on XML. There are only a handful of contestants to consider. The question of which of these few contestants will become the universal file format of choice remains to be seen, but that’s where things are heading. For ODF to make that universal file format claim, we know the challenge of being able to convert a legacy of binary bound information is a hurdle we must cross. And do so with ease. That means a conversion process that is completely transparent, delivers on high fidelity, results in excellent roundtripping, and is entirely non disruptive to the existing business processes and add on solutions programmatically bound to specific applications. A tall order indeed. Thanks to Massachusetts’ uncompromising insistence on the above criteria, we now know how to do this. The rest, is as they say, a matter of execution. Our delivery date is January 2007. What’s yours? We know the ODF Plug-in can solve all the high fidelity issues of conversion. We also fully believe that the interop eXtensions (ODF 1.2 iX) will solve the high fidelity issues of roundtripping ODF documents across a mixed environment of ODF ready applications; including the big Kahuna of ODF ready applications, MSOffice. What about performance, an issue that seems to concern you greatly? Reluctant as you are to disclose the truth, performance is nevertheless an application specific thing. It has little to do with the file format. Test an ODF document inside MSOffice, and test the same ODF document inside OpenOffice.org, and yes there is a performance differential entirely dependent on what it is your trying to do. However, inside MSOffice, where the ODF Plug-in does it’s magic, there is no measurable difference between ODF and MSXML – MOOX documents. As you well know, both Massachusetts and the EU IDABC have seen first hand the performance clock comparisons between MOOX and ODF concerning giant documents and spreadsheets in MSOffice. (OBTW, you’re currently down over 13 seconds with a giant spreadsheet – but you and i both know this is an issue of tweaking the engines, and not in any meaningful way a show stopper for the average bear 🙂 Since OpenOffice.org doesn’t support MOOX, there isn’t a way to even begin a cross application performance comparison of MOOX. We can however do this routinely with ODF. And we do, with great delight and entertainment i might add. About the submission to ISO. I can understand your angst, but the smell of desperation is unbecoming. OASIS ODF 1.0 met all of it’s stated objectives, including accommodating a number of late in the cycle requests from the EU. They told us what they needed from ODF to qualify as an ISO submission that they would fully support, and we gave it to them. ODF 1.0 also met all the basic requirements of more than a few independent production level applications. Every aspect of the ODF 1.0 specification has been road tested in the real world under production level stress loads. With more than five years of real world action, and multiple vendor, multiple purpose implementations, what more do you want? Oh. You want perfection. Me too. Don’t get your panties in a bunch though. We’ll get there. I promise you, ODF is going to meet all your expectations and more. Now, will MOOX meet all my expectations? Like application and platform independence? Will it be portable and useful across information systems and domains other than those provided by Microsoft? Will the license be changed so that we can extend MOOX as needed? Will i be able to take a MOOX document and replace the internal binding model with XForms, or Jabber, or a Java Connector model? If not, what use is MOOX to a J2EE, Lotus Notes, or LAMP shop? But let me ask you something Brian, as the metaphorical representative of Microsoft, are you going to road test MOOX for five years, across multiple independent cross platform implementations before submitting to ISO? Are you going to embrace and enable all the requests of interested parties like the EU and Massachusetts before submitting? January 2007. Are you going to be ready by then? We will. ~ge~. Microsoft has refused to document their .doc format, intentionally trying to make it difficult for other applications to read it, just so they can tout thir new XML format as being able to better interoperate with the old formats. In the past they have used the opacity of .doc to their advantage, creating slight differences between successive versions of the format to make people upgrade. With this kind of behaviour, don’t you think it’s understandable that we are rather put off by Microsoft pointing to the obscurity of its .doc format (which is intentional) as a reason why its new XML format is better than ODF? That’s basically what Microsoft is doing when it says that its XML is better at being compatible with the "billions of documents". I’m sure if Microsoft were to fully document its proprietary formats rather than attempting to trap users’ data, folks would have no problem making sure that ODF was 100% compatible with MS proprietary formats, rather than 90 something percent. And people might start to trust you a bit more rather than holding you responsible for the dodgy, anti-competitive tactics your company has carried out in the past (and continues to do). Hi Dan, if you want to get access to the binary format documentation just e-mail officeff@microsoft.com. It’s available under the CNS, so all you need to do is provide some contact information I believe and you should be good to go. Let me know if that doesn’t work. Hopefully that format will soon be a thing of the past though, as the new XML formats support all the same functionality, but are much easier to program against. ——- Hi Gary, thanks for your interesting thoughts. I’m not sure that you answered the questions I had, but that’s ok. I still am not clear on how you are representing the features that aren’t currently in the ODF spec. You mentioned some interoperability extensions, is that what you are using? Or are you saying that your tool doesn’t provide full support yet but once ODF version 1.2 comes out later next year you’ll be able to support it? Is there a way to actually get access to your tool? I’d really like to take a look. You asked a few questions of me, so here’s my attempt to answer: Q1: "ODF is this magic bullet today" Hey, that’s catchy! Mind if i use it? – I think you missed my point. Statements like that are what cause all the confusion in the first place. —————- Q2: Our delivery date is January 2007. What’s yours? – Not sure what you are referring to. Office 2007, or the translator project. The translator project schedule is up on SourceForge so you can see what the milestones and deliverables are. Where is the information on your project? Where can I download it? —————- Q3: What about performance, an issue that seems to concern you greatly? Reluctant as you are to disclose the truth, performance is nevertheless an application specific thing. It has little to do with the file format. – I’m not sure what you’ve been looking at but this is definitely not true. Obviously a format has a huge impact on the open and save times for an application. Why does OpenOffice’s Calc open an Excel Binary file orders of magnitude faster than it opens ODF spreadsheets? Try it yourself. The decision to use a generic table model to store spreadsheets has led to a format that is slow to open and save. —————- Q4: Every aspect of the ODF 1.0 specification has been road tested in the real world under production level stress loads. With more than five years of real world action, and multiple vendor, multiple purpose implementations, what more do you want? – Shoot, I’d at least expect a well defined syntax for spreadsheet functions, and tables in presentations. 🙂 —————- Q5: Now, will MOOX meet all my expectations? Like application and platform independence? – Absolutely —————- Q6: Will it be portable and useful across information systems and domains other than those provided by Microsoft? – We’re working with folks from both Novell and Apple on the spec. The formats are just as portable as ODF. —————- Q7: Will the license be changed so that we can extend MOOX as needed? – I love the stubbornness with the naming. There are a number of folks out there like you who like to call it "MOOX". It’s kind of like saying M$ I guess. Should I add an "S" for Sun and call it SODF? Or maybe SIBMODF? 🙂 Anyway, the spec is completely extensible. You can add as much stuff as you want. —————- Q8: Will i be able to take a MOOX document and replace the internal binding model with XForms, or Jabber, or a Java Connector model? If not, what use is MOOX to a J2EE, Lotus Notes, or LAMP shop? – Like I said, it’s completely extensible, so go for it! —————- Q9: But let me ask you something Brian, as the metaphorical representative of Microsoft, are you going to road test MOOX for five years, across multiple independent cross platform implementations before submitting to ISO? – ??? what kind of road tests did you do? The only implementations I’ve seen don’t really interoperate too well. We’ve been working on XML within our formats since the late 90’s and have learned a lot. The folks we’re working with in Ecma have also worked on other standards bodies and they too are bringing a lot of great insight. —————- Q10: Are you going to embrace and enable all the requests of interested parties like the EU and Massachusetts before submitting? – Anyone is free to join the Ecma TC. The latest member to join is the Library of Congress, and they’ve already had a bunch of great feedback. ————— Again, thanks for the interesting comments. Please point us to the location where we can try out your tool. It would be awesome to try out! -Brian Gary&nbsp;Edwards of the OpenDocument Foundation stopped by the other day&nbsp;to comment on my post… Seems to me the ISO ratification of ODF was a farce. All ISO did was rubberstamp a very incomplete spec, so ODF advocates can use that farcical ISO imprimatur as a weapon to convince (trick) governments into mandating the use of that incomplete spec. The ECMA process that’s been used for OpenXML has been far more rigorous than the ISO "process" ("sham" would be a better word) used for ODF. Gary Edwards sounds like a raving lunatic slashdot refugee. Anyone that uses the term "M$Office" clearly has no credibility. And his conspiracy theories about secret binary keys and his rantings regarding the MS-sponsored ODF translator only proves that he’s lost it. Alex, I’m sorry, but the statement "MS is just forgetting that the OD format was designed from the start to be as much independent as possible from the implementation of the office suite applications, and that’s why it was a great basis for a standard" is a flat out lie. As I said in when responding to an earlier entry to this blog: Here’s what says regarding ODF and OpenOffice.org: —————————– OpenOffice.org XML file format: "The OpenOffice.org XML file format is the native file format of OpenOffice.org 1.0. It has been replaced by the OASIS OpenDocument file format in OpenOffice.org 2.0." OASIS OpenDocument file format: "The OASIS OpenDocument file format is the native file format of OpenOffice.org 2.0. It is developed by a Technical Committee (TC) at OASIS. The OpenDocument format is based on the OpenOffice.org XML file format." —————————— So, ODF is *based* on OpenOffice.org’s previous XML format. ODF is not "nuetral" any more than OpenXML is. ODF is simply the opened version of OO.o’s previous XML format and OpenXML is the opened version of Microsoft’s previous XML format. ODF is not standing on any higher moral ground, contrary to the rhetoric of the ODF peanut gallery. Brian – Sorry, I really need to take issue with your first comment in response to mine. Certainly, as you say "converting a logical data model from a binary format to an XML format without ANY loss of data or structure is perfectly possible.", but which binary format did you mean? As far as I have been told, it is the binary format which is to be released in MS Office 2007. Are you suggesting that there have been no changes in that binary format since Office 2003/98/95? Of course there have, so the question is not whether binary compatibility with a n XML format is possible, but whether binary compatibility with several quite distinct variations is possible with a single format. My company does data format conversions regularly, and I fully agree that ODF will not do a 100% conversion from Microsoft Word docs, much less some of the other Office formats. On the other hand, I flat out don’t believe that Microsoft is any more capable of 100% conversion from all pre-existing .doc formats, although I certainly applaud both Microsoft and the members of the ODF Alliance for making 100% the goal. It is just too common for Microsoft Word 2003 to not be able to render Microsoft Word 98 to believe that you can fully solve that problem. You don’t even have that big an advantage over ODF, since your XML format seems artificially constrained to the binary format in Office 2007. So, I hope you prove me wrong, but I think it is equally prepsoterous for the ODF Alliance or Microsoft to claim 100% format fidelity. I wish both sides would admit that and move on to more constructive areas, such as better future planning. After all, we should care equally about the next several billion documents to be created. – Ben Gary Edwards hath scriven: ." I went to the Valoris Report that was linked to on Sam Hiser’s blog and I couldn’t find anything about binary keys in the report. I asked there for a specific citation to a page, at least. I’m asking again. Where in the EU Valoris report is there anything about some key on Microsoft XML documents (the Office 2003 ones, as I don’t think the 2007 format were around for that report to analyze). I am yet to have anyone show me a document that has such a thing. Surely this is a simple empirically confirmable fact, yes? To paraphrase, "show me the key!" Don Giovanni, you just needed to click a bit further down the link you provide. These are the original goals of the Openoffice.org XML format: Our mission is to create an open and ubiquitous XML-based file format for office documents and to provide an open reference implementation for this format. Core Requirements (these items are absolutely required) 1. use alone is not enough. 2. Structured content should make use of XML’s structuring capabilities and be represented in terms of XML elements and attributes. 3. The file format must be fully documented and have no "secret" features. 4. OpenOffice must be the reference implementation for this file format. Core Goals (these items are highly desired) 1. The file format should be developed in such a way that it will be accepted by the community and can be placed under community control for future development and format evolution. 2. The file formats should be suitable for all office types: text processing, spreadsheet, presentation, drawing, charting, and math. 3. The file formats should reuse portions of each other as much as possible (so for example a spreadsheet table definition can work also as a text processing table definition). OASIS File Format Standardization XML file formats allow further development of the format now takes place in the OASIS OpenDocument Technical Committee. Brian, I take offence at your constant bickering about how OpenDocument format is essentially disabled ! This is offensive, not politically correct and downright unacceptable ! OpenDocument format is in no way "disabled" – it is "differently abled" ! Firstly, I use Latex, MSWord et al., and OOO. None of the current or proposed formats are, in my opinion, ideal. The problem lies in mixing the document’s data and the presentation of that data. The process of publishing should be after writing. This is what Latex part-way achieves. With MSWord and OOO, due to the WYSIWYG, formating as you type is the routine. Indeed, the presentation/publishing is almost done up-front. Ideally, the system used in LATEX of defining the documents elements, eg, date, author’s name, abstract, email, and then placing the elements in the published work as required by a template (the positioning depending on the template, eg, the author’s name may be near the beginning or at the end depending on whether it is a report or a journal publication). However, LATEX can also mix in a complex macro language formating commands and data structures. I think we should be looking more towards an xHTML/CSS structure enabling the post-formating of LATEX. The data-based xml then has the advantage of having meta-tags for entry of the document (if required, chopped up according to the tags) into a database, and also being "published" in several formats depending on the applied style-sheet. If I want to fix the way the thing looks then I need some format (PDF?) which is not readily alterable. Don Giovanni – Obviously, I don’t have to point out that I didn’t make that claim. Believe David Faure or not; you can’t pick and choose which bits of his statement you like and then make a call to his authority. I think it’s pretty clear that ODF is very much implementation-independent. The markup structure is heavily influenced by existing standards and re-uses existing standards. OXML does little of either. Obviously, we’ll have to wait until OXML is properly specified, but I would be willing to bet that the final draft has changed little from Microsoft’s original submission. Ben, the phrase you quoted was mine, not Brian’s. So let me try two answer the two points you’re making: "I think it is equally prepsoterous for the ODF Alliance or Microsoft to claim 100% format fidelity" That’s not true. You can put anything in XML without any data loss, as long as you design the XML schema with this requirement in mind. If you have a formal, computer-readable specification of the original (binary) format, you could even build a program that does just that. There’s no way you can even come close to this when you are mapping two existing formats that were not buildt with each other in mind. It’s just not true. (Of course MS didn’t use an automated process, so, as I said, there’s room for mistakes. But the two aproaches are still light years apart.) "but which binary format did you mean?" Since Open XML is the default format for Office 2007, I don’t think that MS has changed a lot in the binary format and adapted the XML format as an afterthought. So the reference for binary compatibility would be Office 2003. Any problem that Office 2003 has with reading binary Word 98 files will probably also be there when converting Word 98 files to Open XML. But that’s more or less a problem of the past. These quirks in backwards compatibility have already had their effects, and it’s just about as useful to complain about those as it is to complain about any other problem in the old binary formats. I mean, there’s a reason even MS want’s to get rid of them. Stefan – Sorry, I misread the author. If you read what I wrote, I agree that you can have 100% format fidelity with a single binary format, but Microsoft is not claiming that. It is exactly the issue of with earlier formats that causes me to claim that they can’t have 100% format fidelity. Also, you say the following: "Since Open XML is the default format for Office 2007, I don’t think that MS has changed a lot in the binary format and adapted the XML format as an afterthought. So the reference for binary compatibility would be Office 2003." There are two problems with this logic. The first is that this assumes that there have been no revisions in the Office technology since the 2003 release. I think Microsoft would jump to argue that that is not true. If the Open XML format were really designed to the 2003 binaries, it would be just as difficult to force the changes for 2007 back into that format, as the alternative which is to force the 2003 binaries into the 2007 format. You can’t both argue that the XML is directly taken from the 2003 binary format and that it is completely compatible with any changes since. – Ben I think there’s a confusion about the Microsoft Office binary formats (specifically, for Word, Excel, and PowerPoint, the first ones to be covered by Office Open XML) and features carried in that format. I believe it is accurate (but perhaps not apt) when Microsoft represents that the document format has not changed since it was first introduced (in Office 97 I think, but certainly by Office 2000, I’m too lazy to check). This same binary format is available in Office 2007 and I am willing to believe that there is full roundtripping between the Office 2007 binary version and the corresponding OOX files. (That is the stated intention but I am in no position to test for discrepancies that might exist in the beta implementation.) There are downlevel differences, whether intentional or unintended, and these have to do with features carried in the format. The downlevel degradation is supposed to be graceful. The prospect of encountering uplevel features was built into the early versions, according to the accounts I’ve seen, but it means that you can’t always roundtrip from a later version of Office to an older version and back again and have fidelity to the original. (But if it works for the 80-20 ODF case, whatever it is, it probably works about as well in Office.) The same will be true with the OOX converters when they are used with downlevel versions of Office, and Brian has reminded us of that. (I am using the current beta with Office 2003 pretty much on a regular basis.) The same thing will happen when ODF is reved, as it surely will, and downlevel ODF implementations are "surprised." Depending on how this is handled, any uplevel extensions will possibly be treated as foreign elements and, in accordance with the ODF specification, be ignored but preserved (however one can rationally do that on a practical case by case basis). So there is nothing happening here that is not already happening with ODF (except the lack of a floor specification and poor anticipation of up-/down-/cross-level issues will be painful for the early deployment of ODF on any significant scale in interchange settings). Irreverant side note: I notice that the OpenOffice.org 2.0x that I run does a pretty good job of importing and then preserving basic features from the binary version of an Excel 2003 spreadsheet that I use every day. There are some niggling roundtrip issues having to do with number formats. On the other hand, if I save the very same spreadsheet in Excel 2003 Spreadsheet.xml, OOo imports it (using OpenOffice-specific extensions for the formulas) but fails pretty miserably to preserve the features and it won’t roundtrip successfully because of that. Excel doesn’t have any problem re-importing the Spreadsheet.xml, so my document is not using any features that don’t roundtrip from Excel binary to Spreadsheet.xml and back. So this experience has nothing to do with limitations of the formats. Obviously, both formats can potentially go back and forth and roundtrip between Excel and OOo, because the .xml version doesn’t lack anything that the binary version has, in my particular case, and the binary goes back and forth pretty much undisturbed. This just shows that the OOo conversion paths are not at the same level of quality with regard to different Excel formats. It’s a reality and current-state situation, not the potential case. So when kvetching about the way-early not-even beta, function-incomplete status of the open-source OOX-OOF translator (if that’s what we should call it), one should be mindful that even non-beta release software has a tough time with full-fidelity-preserving conversions even when it is clearly possible in the case of specific documents (like whatever the non-specific 80-20 case is that people keep handwaving as good-enough for ODF). It’s all about reality. In the end, reality wins. (Based on loose analogy with the principle that nature will not be fooled.) Of course, there is P. T. Barnum to be concerned about in the short term. Brian, this spin game is disgusting – I mean yours. Sure there are .doc files that cannot be converted to ODF. There will always be, because YOU will ensure this. The point is that there _are_ billions of documents that _will_ convert fine, and the percentage will grow over time. (Hey, maybe the next version of ODF will even support Word macro viruses!) I’ve been having conversations with some friends who say "Microsoft is changing, Bill is gone, maybe the industry will come to trust them again." Then I come across a blog like yours, and I think no, until people like you are around, the industry will never trust MS. Looks like "sudo apt-get install openoffice.org" finished, time to get back to work. Ben, you’re right, I didn’t read carefully enough. "The first is that this assumes that there have been no revisions in the Office technology since the 2003 release." I’m not assuming this. What I am assuming is that, starting with Office 2007, the primary development takes place in the XML arena, so additions would have to be retrofitted into the new binary format. What this means is that the Office 2007 XML format is 100% compatible to everything that office 2003 can read. that seems good enough for me. So, if some word 98 stuff can’t be read, you already have a problem today (unless you are still using word 98, in which case you are probably not a word power user anyway ;-)) Fixing backwards compatibility issues with pre-2003 versions that surfaced in Office 2003 or even before would be a nice additional goal for Open XML conversion. But as I understand it, that’s not part of the story. Technically, it’s more a problem of the current binary reader engine than of the file format anyway. In Office 2007, binary formats are for backwards compatibility only, so I don’t get how compatibility to a new, extended binary format (which nobody should be using anyway) would matter. Everybody has been complaining about these binary formats forever (and for good reasons too). And now you are complaining that the new (default) format might not be compatible with a new, revised binary format containing features that cannot be read by old Office versions anyway? Stefan Stefan – No, I could care less about the new, revised binary format. Yes, I am glad that Microsoft is moving to a more universally defined standard. But how do you draw the conclusion that since "primary development takes place in the XML arena", that means that "Office 2007 XML format is 100% compatible to everything that office 2003 can read". Is there any evidence or indication that Microsoft would really make the brand new, feature filled version of their software dependent on a four year old standard? They may have a goal of 100% fidelity in conversions, but it seems at least plausible that they would base their 2007 XML format on their 2007 binary format (or vice-versa if you like), not on their 2003 binary format. I don’t see how you can so confidentally make this assumption. – Ben Dennis H. et al The mythical secret key has been a mystery for some time now. Gareth Come on, I´m sure that there is someone in the ODF camp more mature than Gary Edwards that can explain the hyperbolic claims made in the ISO press release. The silence so far from Sun and IBM has been deafening, not to mention ISO and OASIS themselves. I am curious whether any estimates have been made about how many billions of Word, Excel and Powerpoint documents there are out there. This is significant because if there are, say, thirty billion, it might not be hyperbole at all to say that "Billions of existing office documents will be able to be converted to the XML standard format with no loss of data, formatting, properties, or capabilities.", because it is quite likely that even given a generous 80-20 rule, 80% of those contain nothing remotely difficult to convert to almost any other format. It is fairly likely at least 20% of Word documents would translate just fine even to plain text, and 20% of, say, 20 billion, would still be "billions". If on the other hand, there are only two billion such documents, this would definitely be hyperbole. This is not to defend the extreme claims that do come out of the ODF camp at times, but it does point out that the quoted claim may not be unreasonable. Of course, I may be missing a nuance here. Brian, You may have a great time pocking fun at ODF. I don’t care about ODF. However, I would rather have you spend more time improving your own mess rather than criticizing somebody else’s. Microsoft’s XML format is a _giant_ mess with no consistency. The date problem I mentioned is just one out of many problems. Ben, What I tried to say is: I assume the 100 % goal is as compared to the current (i.e. 2003) binary format. I’m not saying this has been achieved, sorry if I was unclear. But I am claiming that it is a perfectly achievable goal if you can tweak the target schema, which the ODF converter guys cannot. That’s where I think major differences arise, and that’s why I think the statement is somewhat reasonable from MS and completely unrealistic from ODF. How can I be sure that compatibility is measured with the 2003 binary format in mind? I cannot. But older formats are unlikely, and the 2007 binary format is almost completely irrelevant, so it kind of has to be 2003, right? Maybe Brian can say something about that. I like your defense of the "billions of documents" statement, btw. So instead of outright lying, this would make it intentionally misleading. Imagine Microsoft making up stuff like that … and the response 😉 Stefan Good point, Stefan. The problem is that is exacly what we are getting in this blog. Intentionally misleading. Brian, If you think OpenDocument is so bad why do you stop it being distributed alongside MS Office beta? Haben wir jetzt nach den sog. Browser-Krieg einen solchen um Office-Dateiformate? Im näheren&nbsp;sowie… Oscar, Your twist the meaning of my statement and don’t even think it worth the trouble to explain your point. Brian gets paid for dealing with arbitrary statements like yours, so I guess that’s OK. I don’t, so please leave me out of your propaganda games. Stephan Apologies if i have offended you. I thought I was keeping with the general tone of this entry in the blog. Anyway, what my point was is based on my experience of both programs I think the impression Brian is trying to convey, ie. OpenDocument and Open Office are not 100% compatible with the existing installed base of office documents has at least as much spin as the claim of some guys in the other side of the fence. In my experience and for most documents I’ve used the formatting is kept to the degree I need. If I want perfect fidelity I wouldn’t trust Office either, I would use PDF. Where I think he may have a point is when you are working/collaborating with MS Office users on the same document. There again I have experienced some issues when upgrading to a new Office version is not done across the organisation. So I don’t see this as an engineering problem but as a market reality and perception problem. I can not argue whether fidelity is 97% or 101%, I can tell you that based on a fairly thorough use of both programs the level of interoperability you can achieve is more than satisfactory. As stated by other people in this blog, the reason is not even better is that existing binary documentation was not available under a license that most developers could work with, so they had to resor to reverse engineering. In that sense the new MS Office XML programs should be an improvement. Oscar, we will have to see what the impact of incompatibilities between ODF and the .doc format is for various szenarios. If you just want to migrate with acceptable format preservation and effort, ODF might provide good enough migration support even for heavily formatted docs, although OpenXML will likely be better. On the other hand, for roundtripping, macro support or even excel formula support, it might be a pain. Also, average users may not have the skills or the time to correct transformation artefacts, so they’ll have to live with the results. Either way, there will be impacts, and the claim of "completely lossless transformation of billions of [ms office] documents" is plainly wrong, a.k.a as spinning. I still cannot see how correctly pointing this someone elses spinnning is spinning by itself. This line of argument sounds suspiciously like politician spek to me. As for compatibility problems between .doc and OpenXML – I think I’ve explained it more than once already, but let me summarize my point: MS controls the XML format and is therefore in a position to reach 100 % compatibility in theory. Mistakes can happen, but they can be avoided and/or corrected later. ODF is not designed for this kind of compatibility, therefore translator coders will experience problems that they cannot fix, no matter how careful and skillful they are. This is a very important distinction if you have to make a decision about which format to choose. And even more so if you are thinking about using ODF exclusively (and therefore banning OpenXML), as ODF supporters are often recommending. Microsoft would be nuts not to point out the problems that could arise from such a decision, but ODF supporters call these reactions aggressive, unfair and pejorative. I’ve heard what the ODF guys had to say about OpenXML before MS even started to point out the shortcomings of ODF, so this feels a little bit like election campagn propaganda to me: One party makes untrue/unfair statements, and as soon as the other party calls them on being unfair, they just accuse them of being unfair too for calling them unfair in the first place. In the end, nobody will know who started it anyway. Throw in some unconnected statements about past failings of MS, so you can establish yourself on a higher moral ground. Now, if the details are too complicated to follow, people will hopefully just believe you because MS cannot be trusted anyway, no matter what the facts in the current situation are. It might not be coincidence that Sun and IBM are focusing on politically led organizations with their exlcusive-ODF campaign, because chances are, nobody else who is not already in the anti-MS camp would care about all that political noise they are making. Stefan, I was trying to understand the point you are making by excluding the "political", "propaganda", "higher moral ground", but I just got bored. That’s good, Oscar. Go play somewhere else. I’ve been getting so much spam that it’s been really hard to keep up with some of the comments. It looks like I deleted a comment from Gary Edwards accidentally that pointed to that funny clip of Balmer jumping around. Here is the post Gary left: Gary Edwards has made a new post: re: Spin Spin Sugar. A raving lunatic? Perhaps. But not one without inspiration: Gary Edwards of the OpenDocument Foundation stopped by the other day to comment on my post " Spinning PingBack from PingBack from PingBack from PingBack from
https://blogs.msdn.microsoft.com/brian_jones/2006/07/27/spin-spin-sugar/
CC-MAIN-2018-09
refinedweb
9,637
61.56
The other day I was doing a programming challenge that required me to find the longest palindrome within a string. The challenge was to do it in at least O(N)2 complexity. I found this to be fairly interesting. The challenge read like so: Given a string S, find the longest substring in S that is the same in reverse and print it to the standard output. So for example, if I had the string given was: s= “abcdxyzyxabcdaaa” . Then the longest palindrome within it is “xyzyx “. We need to write a function to find this for us. The nature of the challenge just needed me to write a function for a web back-end, so the example code below does not have main, or a defined string, etc. Also, there are O(N) solutions for this problem if you seek them, but they are a little more complicated. #include <iostream> #include <string> using namespace std; void longest_palind(const string &s) { int n = s.length(); int longestBegin = 0; int maxLen = 1; bool table[100][100] = {false}; for (int i = 0; i < n; i++) { table[i][i] = true; } for (int i = 0; i < n-1; i++) { if (s[i] == s[i+1]) { table[i][i+1] = true; longestBegin = i; maxLen = 2; } } for (int len = 3; len <= n; len++) { for (int i = 0; i < n-len+1; i++) { int j = i+len-1; if (s[i] == s[j] && table[i+1][j-1]) { table[i][j] = true; longestBegin = i; maxLen = len; } } } cout << s.substr(longestBegin, maxLen); }
https://rundata.wordpress.com/tag/c/
CC-MAIN-2017-26
refinedweb
255
76.15
In suspenders, we have a rake task called dev:prime that allows us to seed the database with information. We’re often asked why we prefer a custom task over using rake db:seed which is already built into Rails. Let’s talk about a few of the differences. rake db:seed We reserve the db:seed tasks specifically for data that must be present for your application to function in any environment. One example of this may be a list of US States in your database that your address form relies on. That data should always exist, whether the app is being used in development or production. rake dev:prime Working on an app is easier if you have data that looks and feels similar to what your users see. If you’re new to the codebase or new to a specific feature having data preloaded that makes sense and sets you up for the feature, is really helpful. Our development seeds contain data that are necessary for users to view most of the features of the app. They’re very convenient for developers or designers who are running the app locally. If you were building a multi user blogging application, your seeds file would likely generate the following: - an admin user - 2-3 normal users - enough posts to ensure you have pagination. Most likely you’d want to space out their published date enough that you could also test the features to view posts by month or year - 5-10 comments on each post with various authors including anonymous authors - 2-3 deleted posts As you’re building a new feature, if it requires special data to setup, put it directly into the seeds file instead of adding it with the web interface or the console. For example, if you’re building a feature to make sure that posts without a published_at value are not visible on the homepage, add that to your development seeds instead of creating one through the UI. This ensures that the next person who needs to test that feature has it ready to go without as much work. How to generate data For database seeds, we do not recommend using FactoryGirl to generate your data. For generating data for your local development environment however, it can be very helpful to leverage your test factories to simplify setup. For building a lot of blog posts you could rely on the defaults for most fields but randomize a few key fields: if Rails.env.development? || Rails.env.test? require "factory_girl" namespace :dev do desc "Sample data for local development environment" task prime: "db:setup" do include FactoryGirl::Syntax::Methods titles = [ "You won't believe what we found out about cheese!", "Don't skip these 12 super foods", "Only 20s kids will remember these toys", ] authors = [ "Liz", "Sam Seaborn", "The Honorable 3rd Duke of Long Names", ].map do |name| create(:user, name: name) end 50.times do create( :post, author: authors.sample, title: titles.sample, published_at: (1..365).to_a.sample.days.ago, ).each do |post| (1..10).to_a.sample.times do create(:comment, post: post) end end end create_list(:post, 3, deleted_at: (1..10).to_.sample.days.ago) end end end It’s all about communication Development seeds are another form of communication. Reading the tests can expose a great deal of information about an app and your development seeds can provide a similar benefit. If you treat them with care, they can provide a great way to onboard new teammates as well make feature development and bug fixes easier for your existing teammates.
https://robots.thoughtbot.com/priming-the-pump
CC-MAIN-2018-17
refinedweb
598
59.84
How to Setup TensorFlow GPU 2.2 with NVIDIA GPUs Hello world, it’s Aaron! NOTE: This article assumes you are on a Linux distro with at least 1 CUDA-capable NVIDIA GPU. In this article, we will be installing NVIDIA CUDA and TensorFlow GPU 2.2.0-rc2! Install CUDA - Right before we install CUDA, we need to make sure that your GPU is CUDA-capable. If no results are returned after this command, sorry, your GPU doesn’t support CUDA! lspci | grep -i nvidia 2. Check you have a supported version of Linux: uname -m && cat /etc/*release 3. Install GNU G++. 4. Install CUDA 10.1 (not CUDA 10.2, as TensorFlow GPU currently doesn't support CUDA 10.2) by clicking the link for your Linux distro: - Linux 18.04: - Linux 18.10: - Linux 16.04: - Linux 14.04: 5. Follow the instructions for deb(local). 6. Install cuDNN for CUDA 10.1 by clicking here: Install Anaconda and TensorFlow GPU Great job on setting up CUDA! Now for the meat of this article: Installing TensorFlow GPU. - Right before we get started, install Anaconda so we don’t get errors while running TensorFlow GPU. To install Anaconda, go to this link here: - Now, create a virtual Anaconda environment. - Create a virtual environment in Anaconda called tf-gpu: conda create --name tf-gpu 4. Now, source into the virtual environment. Remember, every time you want to use this virtual environment, you must run this command! conda activate tf-gpu 5. Install TensorFlow GPU with pip: pip install tensorflow-gpu==2.2.0rc2 6. Create a new Python 3 shell: python3 7. Test your TensorFlow GPU installation: import tensorflow as tf tf.__version__ # Result should be '2.2.0-rc2' tf.config.list_physical_devices('GPU') # should list all available GPUs Congratulations on setting up your computer for TensorFlow GPU 2.2.0-rc2!
https://aaronhma.medium.com/how-to-setup-tensorflow-gpu-2-2-with-nvidia-gpus-5640a0ac1680
CC-MAIN-2022-40
refinedweb
314
62.44
When printing an error, we sometimes include an error code. For example: #include <stdio.h> // ... #define EXIT_FAILURE_CABLETRELLIS 45 int main(void) { // ... if (true != false) { fprintf(stderr, "The cabletrellis went wrong\n"); exit(EXIT_FAILURE_CABLETRELLIS); } // ... } That error code in C is mostly invisible. People don’t bother with echo $?, they just Google the error text. So sometimes we print an error code as well: if (true != false) { fprintf(stderr, "ERR_CABLETRELLIS: The cabletrellis went wrong\n"); exit(EXIT_FAILURE_CABLETRELLIS); } Still, the developer will just Google the error. Or if it’s an end-user seeing the error, they won’t even know to Google it. Instead, we should do them a favour, and point them exactly to a central error page: #include <stdio.h> // ... #define EXIT_FAILURE_CABLETRELLIS 45 void exit_err(int exit_code, char * error_code, char * message) { fprintf(stderr, "%s! Go here for help:\n", message, error_code); exit(exit_code); } #define EXIT_ERR(C,M) exit_err((C), #C, (M)) int main(void) { // ... if (true != false) { EXIT_ERR(EXIT_FAILURE_CABLETRELLIS, "The cabletrellis went wrong"); exit(); } // ... } This error now lives at the unique URL: This idea shamelessly copied from I wrote this because I felt like it. This post is my own, and not associated with my employer.Jim. Public speaking. Friends. Vidrio.
https://jameshfisher.github.io/2017/01/05/error-urls.html
CC-MAIN-2019-18
refinedweb
201
51.75
One constructors – where a type constructor is anything that has a type parameter. For instance List[_]* is not a type, the underscore is a hole into which another type may be plugged, constructing a complete type. List[String] and List[Int] being examples of complete (or distinct) types. Kinds Now that we have a type constructor we can think of several different kinds of them, classified by how many type parameters they take. The simplest – like List[_] – that take a single param have the kind: (* -> *) This says: given one type, produce another. For instance, given String produce the type List[String]. Something that takes two parameters, say Map[_, _], or Function1[_, _] has the kind: (* -> * -> *) This says: given one type, then another, produce the final type. For instance given the key type Int and the value type String produce the type Map[Int, String]. Furthermore, you can have kinds that are themselves parameterized by higher kinded types. So, something could not only take a type, but take something that itself takes type parameters. An example would be the covariant functor: Functor[F[_]], it has the kind: ((* -> *) -> *) This says: given a simple higher kinded type, produce the final type. For instance given a type constructor like List produce the final type Functor[List]. Utility Say we have some standard pattern for our data-structures where we want to be able to consistently apply an operation of the same shape. Functors are a nice example, the covariant functor allows us to take a box holding things of type A, and a function of A => B and get back a box holding things of type B. In Java, there is no way to specify that these things share a common interface, or that we simply want transformable boxes. We need to either make this static eg. Guava’s Lists and Iterables, or bespoke on the interface, eg: fugue’s Option or atlassian-util-concurrent’s Promise. There is simply no way to unify these methods on either some super interface or to specify that you have – or require – a “mappable/transformable” box. With HKT I can represent the covariant functor described above as: [cc lang=’scala’ ] trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] } // implement for java’s List // note that the presence of mutation in the Java collections // breaks the Functor laws import java.util.{ List => JList } implicit object JavaListFunctor extends Functor[JList] { import collection.JavaConverters._ def map[A, B](fa: JList[A])(f: A => B): JList[B] = (for (a B): Box2[B] = Box2(f(b.a1), f(b.a2)) } // and use it** def describe[A, F[_]: Functor](fa: F[A]) = implicitly[Functor[F]].map(fa)(a => a.toString) case class Holder(i: Int) val jlist: JList[Holder] = { val l = new java.util.ArrayList[Holder]() l add Holder(1); l add Holder(2); l add Holder(3) l } val list = describe(jlist) // list: java.util.List[String] = [Holder(1), Holder(2), Holder(3)] val box2 = describe(Box2(Holder(4), Holder(5)) // box: Box2[String] = Box2(Holder(4),Holder(5)) [/cc] So, we have a describe function that works for any type that we can map over! We could also use this with a traditional subtyping approach to have our boxes implement the map method directly with the appropriate signature. This is a little more convoluted, but still possible: [cc lang=’scala’] /** * note we need a recursive definition of F as a subtype of Functor * because we need to refer to it in the return type of map(…) */ trait Functor[A, F[_] B): F[B] } case class Box[A](a: A) extends Functor[A, Box] { def map[B](f: A => B) = Box(f(a)) } def describe[A, F[A] a.toString) val box = describe(Box(Holder(6))) // box: Box[String] = Box(Holder(6)) [/cc] As a bonus, this last example quite nicely shows how subtype polymorphism is strictly less powerful and also more complicated (both syntactically and semantically) than ad-hoc polymorphism via type-classes. Postscript These techniques can lead to some very general and powerful libraries, such as scalaz, spire and shapeless. These libraries may take some getting used to, and as many of these generalizations are inspired by the mother of all generalizations – mathematics – they have names that need learning (like Monad). However, the techniques are useful without needing to use scalaz. HKT is important for creating type-classes, and creating your own type-classes to encapsulate things like JSON encoding may be of value to your project. There are many ways this can be used within Scala. If you’re interested in reading more, here’s the original paper for Scala. Among other things, it contains the following very useful graphic: Also note that the Scala 2.11 REPL is getting a :kind command although its output is a little more convoluted due to the presence of variance annotations on type parameters. * Strictly speaking, in Scala List[_] is actually an existential type. For the purposes of this post I am using the [_] notation to show the existence of type parameters. Thanks to Stephen Compall for pointing this out. ** An alternate syntax for a context-bound is an explicit implicit block: [cc lang=’scala’] def describe2[A, F[_]](fa: F[A])(implicit functor: Functor[F]) = functor.map(fa) { _.toString } [/cc]
https://www.atlassian.com/blog/archives/scala-types-of-a-higher-kind
CC-MAIN-2020-50
refinedweb
899
60.04
roi_perspective_transform¶ paddle.fluid.layers. roi_perspective_transform(input, rois, transformed_height, transformed_width, spatial_scale=1.0, name=None)[source] The rois of this op should be a LoDTensor. ROI perspective transform op applies perspective transform to map each roi into an rectangular region. Perspective transform is a type of transformation in linear algebra. - Parameters input (Variable) – 4-D Tensor, input of ROIPerspectiveTransformOp. The format of input tensor is NCHW. Where N is batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature. The data type is float32. rois (Variable) – 2-D LoDTensor, ROIs (Regions of Interest) to be transformed. It should be a 2-D LoDTensor of shape (num_rois, 8). Given as [[x1, y1, x2, y2, x3, y3, x4, y4], …], (x1, y1) is the top left coordinates, and (x2, y2) is the top right coordinates, and (x3, y3) is the bottom right coordinates, and (x4, y4) is the bottom left coordinates. The data type is the same as input transformed_height (int) – The height of transformed output. transformed_width (int) – The width of transformed output. spatial_scale (float) – Spatial scale factor to scale ROI coords. Default: 1.0 name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name - Returns A tuple with three Variables. (out, mask, transform_matrix) out: The output of ROIPerspectiveTransformOp which is a 4-D tensor with shape (num_rois, channels, transformed_h, transformed_w). The data type is the same as input mask: The mask of ROIPerspectiveTransformOp which is a 4-D tensor with shape (num_rois, 1, transformed_h, transformed_w). The data type is int32 transform_matrix: The transform matrix of ROIPerspectiveTransformOp which is a 2-D tensor with shape (num_rois, 9). The data type is the same as input - Return Type: tuple Examples import paddle.fluid as fluid x = fluid.data(name='x', shape=[100, 256, 28, 28], dtype='float32') rois = fluid.data(name='rois', shape=[None, 8], lod_level=1, dtype='float32') out, mask, transform_matrix = fluid.layers.roi_perspective_transform(x, rois, 7, 7, 1.0)
https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/roi_perspective_transform.html
CC-MAIN-2021-04
refinedweb
346
58.69
? For an example: I have a dll file liba.dll, While compiling i am giving the path of the library file. It is compiling properly. dll liba.dll Now i have to load this binary/.exe file and DLL file to other system/device, Then what about the DLL file? How our program gets to know the path of DLL during run time? I mean how DLL file will be linked? GetModuleFileName() works fine from inside the DLL's codes. Just be sure NOT to set the first parameter to NULL, as that will get the filename of the calling process. You need to specify the DLL's actual module instance instead. You get that as an input parameter in the DLL's DllEntryPoint() function, just save it to a variable somewhere for later use when needed. GetModuleFileName() A complete example: CStringW thisDllDirPath() { CStringW thisPath = L""; WCHAR path[MAX_PATH]; HMODULE hm; if( GetModuleHandleExW( GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS | GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT, (LPWSTR) &thisDllDirPath, &hm ) ) { GetModuleFileNameW( hm, path, sizeof(path) ); PathRemoveFileSpecW( path ); thisPath = CStringW( path ); if( !thisPath.IsEmpty() && thisPath.GetAt( thisPath.GetLength()-1 ) != '\\' ) thisPath += L"\\"; } else if( _DEBUG ) std::wcout << L"GetModuleHandle Error: " << GetLastError() << std::endl; if( _DEBUG ) std::wcout << L"thisDllDirPath: [" << CStringW::PCXSTR( thisPath ) << L"]" << std::endl; return thisPath; } Let's say I have an exe-file (for example computer game) and need to forbid to run it until certain date or time during the day. Any 'manipulations' with file are allowed. Could you, please, offer me a simple way of how to encode/decode such a file mostly in C, Please assume that I don't want to create library file Lets say, I have 2 files main.cpp and function.cpp. I have 2 methods to compile both the file? main.cpp function.cpp 1st) while compiling just include both files, g++ main.cpp function.cpp 2nd) In main.cpp file i can include function.cpp #include "function.cpp" As far as I know while compiling using 2nd method it will take more time, other than that any advantage/disadvantage? Also what if i have more files lets say i have 500+ files, all are linked with each other, then which method will be preferable? In this case if I go with 1st method then I will have to provide 500+ file names in my makefile? Can anybody help? Forgot Your Password? 2018 © Queryhome
https://www.queryhome.com/tech/98849/how-dll-dynamic-library-is-linked-during-run-time
CC-MAIN-2018-47
refinedweb
390
66.03
Not sure that I totally get the whole importing functions thing. What exactly is the syntax for that? So far, I've tried from module import sqrt to try to get what you would if you inputted import math print math.sqrt(25) but I don't think that that's right? Function Imports Not sure that I totally get the whole importing functions thing. What exactly is the syntax for that? So far, I've tried math is a module, it has a function called sqrt you can import sqrt from math or you can import math and access sqrt through that Both ways will run the entire module that you are importing, the difference is in what/which names are added to your current namespace I write the codes on the shell script (python 2.7.10). It runs well. I do not really know why it can not run on the codeacademy. it responds IdentationError :unexpected indent(python, line3). Sounds to me like the code you entered is different from what you entered in idle. I could sort of imagine that the editor is having a hiccup and that copying the code and pasting it back, perhaps refreshing/resetting the exercise.. I don't know who's fault it is, but Python sure isn't getting the same code as what you entered in idle. Yep. It runs fine on this emulator: try this out from math import sqrt Hello, please help me, I am totally confused:((( I tried all possible but with no success. What you have written (code)? This what I used: # Import *just* the sqrt function from math on line 3! from math import sqrt For me ( from math import sqrt) worked, but I had space before start of the line 3 and first word 'from'. That was my problem
https://discuss.codecademy.com/t/function-imports/26388
CC-MAIN-2018-43
refinedweb
306
80.41
The LoadJpeg Advantage on Windows Phone3/19/2012 | Tags: windows-phone Please, tell us what you think about this news by voting source: charlespetzold.com Windows Phone includes a class named Extensions in the System.Windows.Media.Imaging namespace that contains two extension methods for WriteableBitmap named LoadJpeg and SaveJpeg. The first one loads a JPEG from a Stream into a WriteableBitmap, and the second saves the contents of a WriteableBitmap to a Stream. Although I was overjoyed with the SaveJpeg method and used it a few times in my book, the LoadJpeg method seemed unnecessary because BitmapSource - the parent class to BitmapImage and WriteableBitmap - defines a SetSource method that alos accepts a Stream object for creating the image, and not only that, but it works with PNG files as well! Today I found out that LoadJpeg has a definite benefit when loading large
http://www.geekchamp.com/news/the-loadjpeg-advantage-on-windows-phone
CC-MAIN-2014-35
refinedweb
144
65.25
Python Scikit-learn: Create a hitmap using Seaborn to present their relations Python Machine learning Iris Visualization: Exercise-17 with Solution Write a Python program to find the correlation between variables of iris data. Also create a hitmap using Seaborn to present their relations. Sample Solution: Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns iris = pd.read_csv("iris.csv") #Drop id column iris = iris.drop('Id',axis=1) X = iris.iloc[:, 0:4] f, ax = plt.subplots(figsize=(10, 8)) corr = X.corr() print(corr) sns.heatmap(corr, mask=np.zeros_like(corr, dtype=np.bool), cmap=sns.diverging_palette(220, 10, as_cmap=True),square=True, ax=ax, linewidths=.5) plt.show() Output: Python Code Editor: Have another way to solve this solution? Contribute your code (and comments) through Disqus. Previous: Write a Python program using seaborne to create a kde (Kernel Density Estimate) plot of two shaded bivariate densities of Sepal Width and Sepal Length.Next:
https://www.w3resource.com/machine-learning/scikit-learn/iris/python-machine-learning-scikit-learn-iris-visualization-exercise-17.php
CC-MAIN-2021-17
refinedweb
166
52.66
import classes from same package. Can you tell me how to import two or more public classes from the same package using a single import statement. like import java.nisha.*; here in the java package i have nisha sub package in which all the classes are present and i want to import al Local Variable ,Package & import ; System.out.print(classVariable); } } Package & import Java classes can be grouped together in packages. A package name is the same as the directory...Local Variable ,Package & import A local variable has a local scope.Such How to import a package How to import a package There are two ways so that you can use the public classes stored in package. Declaring the fully-qualified class name. For example The import keyword ; The import statement make available one or all the classes in a package... ambiguity when multiple package includes classes having same names. Use.... Java versions from J2SE 5.0 to upward can import the static members Java Package to use them in an easier way. In the same manner, Java package is a group... Java Package In a computer terminology, package is a collection of the related files in the same Master Java In A Week Master Java In A Week Master Java Programming Language in a week... for Java classes. Header file generator... into HTML format from Java source code. It is interesting to know that Javadoc Java Get classes In Package Java Get classes In Package  ... the classes from the package by providing the path of the jar file and the package.... Following code adds all the classes of the package getting from the jar Class and interface in same file and accessing the class from a main class in different package Class and interface in same file and accessing the class from a main class..."; } } CheckAnimal.java package com.designpattern.main; import com.classification.pojo.Animal... correctly, Animal.java, Dog.java and Fly.java needs to be in same package. If I Java AWT Package Example it. This is done by using java gui programming using Classes or APIs of java awt package... Java AWT Package Example In this section you will learn about the AWT package of the Java What is a package? related to package in java.... programming language, A package is group of related types of classes & interfaces...What is a package? hi, What is a package? thanks Hi
http://www.roseindia.net/tutorialhelp/allcomments/5840
CC-MAIN-2014-42
refinedweb
403
59.3
Sometimes people ask me what I do. I do a lot I guess. Maybe too much. But honestly .NET and the web are great companions and I would never betray them. Therefore I spent most of my time in another, but closely related field: High Performance Computing, or short HPC. HPC is the industry that tries to come up with larger, more efficient systems. The largest simulations in the world are run on HPC systems. The state of the HPC industry is reflected by two very important lists. The Top 500, which contains the 500 fastest supercomputers currently on the planet and the Green 500. If a machine is listed in the Top 500, it is allowed to enter the Green 500. The Green 500 cares about computations per energy. If a machine is able to run the most computations, but requires the most energy, it may be less efficient then a second system, doing almost as many computations, but with far less energy consumption. Supercomputers are very expensive. The one-time price to design, purchase and build the system is actually quite low, compared to the regular costs that come with running the system. A standard HPC system is easily consuming hundreds of kW of power. Hence the strong drift for energy-efficient computing. Luckily, chip vendors have also caught that drift and provide us with new, exciting possibilities. These days ARM is a good candidate for future energy-efficient systems. But who knows what's to come in the future. The design possibilities seem endless. Good ideas are still rare, though. An excellent idea has been to introduce hot water cooling. Regular (cold) water cooling has been established for quite a while, but is not as efficient as possible. Of course it is much more efficient than standard air cooling, but the operation and maintenance of chillers are real deal breakers. They require a lot of energy. Hot water cooling promises to yield chiller-free, i.e., cost-free, cooling all year long. The basic principle is similar to the way that blood flows through our body. In this article I will try to discuss the ideas and principles behind designing a state of the art supercomputer. I will outline some key decisions and I will discuss the whole process for a real machine: The QPACE 2. The small machine is fast enough to be easily ranked in the Top 500. It is also efficient enough to be one the twenty most energy efficient systems in the currently known universe. The prototype is installed at the University of Regensburg. It is also possible to purchase the technology that has been created for this project. The Aurora Hive series from EuroTech is the commercially available product, which emerged from this project. A while ago I realized that I wanted to write a kind of unique article for CodeProject. An article that covers a topic that is rarely explained and that does not spawn a very broad and generalized explanation, but rather something specialized and maybe interesting. There was only one topic: supercomputing! In the past we've seen really cool articles like Using Cudafy for GPGPU Programming in .NET by Nick Kopp (don't forget to check also out his related article CUDA Programming Model on AMD GPUs and Intel CPUs) or golden classics such as High performance computing from C++ to MMX. Also the countless great IoT articles during the Microsoft Azure IoT contest shouldn't be forgotten. So what I wanted to do is give a broad overview of the field and then go into really special topics in greater detail. Especially the section about hot water cooling should be of interest. The whole article is themed to be about efficient high-performance computing. The consequences from research in this field are particularly interesting. They will empower future generations of data centers, mobile devices and personal computing. The demand for more energy has to stop. But the demand for computing is not going away. This crisis can only be solved by improved technology and more intelligent algorithms. We need to do more with less. I hope you like the journey. I included a lot of pictures to visualize the process even though my writing skills are limited. When an HPC project is launched there has to be some demand. Usually a specific application is chosen to be boosted by the capabilities of the upcoming machine. The design and architecture of this machine has to be adjusted for that particular application. Sometimes there are a lot of options to achieve the goals for the desired application. Here we need to consider secondary parameters. In our case we aimed for innovative cooling, high computing density and outstanding efficiency. This reduced the number of options to two candidates, where both are accelerators. One is a high-end Nvidia GPU and the other option is a brand new product from Intel, called a co-processor. This is basically a CPU on steroids. It contains a magnitude more cores than a standard Xeon processor. The cores are quite weak, but have a massive vector unit. This is an x86 compatible micro-architecture. At this point it is good to go back to the application. What would be better for the application? Our application was already adjusted for x86 (like) processors. An adjustment for GPU cards would have been possible, however, creating another work item did not seem necessary at the time. Plus using a Xeon Phi seems innovative. Alright, so our architecture will revolve around the Intel Xeon Phi - codename KNC (Knights Corner), the first publicly available MIC product (Many Integrated Cores). This is where the actual design and architecture starts. In the following subsections we will discuss the process of finding the right components for issues like: This is a lot of stuff and I will hopefully give most parts the space they deserve. From an efficiency point of view it is quite clear that the design should be accelerator focused. The accelerators have a much better operation / energy ratio than ordinary CPUs. We would like to alter the design of our applications to basically circumvent using the CPU (besides for simple jobs, e.g., distributing tasks or issuing IO writes). The way to reflect this objective in our design is by reducing the number of CPUs and maximizing the number of co-processors. The Xeon Phi is no independent system. It needs a real CPU to be able to boot. After all, it is just an extension board. Hence we require at least 1 CPU in the whole system. The key question is now how many co-processors can be sustained by a single CPU? And what are the requirements for the CPU? Could we take any CPU (x86, ARM, ...)? It turns out that there are many options, but most of them have significant disadvantages. The option with the least probability of failure is to purchase a standard Intel server CPU. These CPUs are recommended by Intel for usage with the Xeon Phi co-processor. There are some minimal requirements that need to be followed. A standard solution would contain two co-processors. The ratio of 2:1 sounds lovely, but is far away from the density we desire. Additionally to the requirement of using as many co-processors as possible we still need to make sure to be able to sustain high-speed network connections. In the end we arrive at an architecture that looks as follows. We have 6 cards in the PCIe bus, with one containing the CPU and another one for the network connection(s). The remaining four slots are assigned to Xeon Phi co-processors. The design and architecture of the CPU board has been outsourced to a specialized company in Japan. The only things they need to know are specifications for the board. What are its dimensions? How many PCIe lanes and what generation should it use / be compatible with? What processor should be integrated? What about the board management controller? What kind of peripherals have to be added? Finally the finished card contains everything we care about. It even has a VGA connector, even though we won't have a monitor connected normally. But such little connectors are essential for debugging. The CPU board is called Juno. It features a low-power CPU. The Intel Xeon E3-1230L v3 is a proper fit for our needs. It supports the setup we have in mind. Speaking about debugging: The CPU card alone does not have the space to expose all debugging possibilities. Instead an array of slots is offered, which may be used in conjunction with the array of pins available on a subboard. This suboard contains yet another Ethernet and USB connector, as well as another array of slots. The latter can be used with UART. UART allows us to do many interesting things, e.g., flashing the BMC firmware or doing a BIOS update. UART can also be used to connect to the BMC or receive its debugging output. If no network connection is available UART may be the only way of contacting the machine. Internally everything depends on two things: Power and communication via the PCIe bus. Both of these tasks are managed by the midplane, which could be regarded as a mainboard. The design and construction of this component has been outsourced to a company in Italy. Especially the specification for implementing the PLX chip correctly is vast. There are many edge cases and a lot of knowledge around the PCIe bus system is required. The PLX chip is the PEX 8796. It is a 96-lane, 24-port, PCIe Gen3 switch developed on 40 nm technology. It is therefore sufficient for our purposes. The following picture shows the midplane without any attached devices. The passive cooling body on top of the PCIe switch is not mounted in the final design. The midplane is called Mars. It is definitely the longest component in a QPACE 2 node. It exposes basic information of the system via three LEDs. It contains the six PCIe slots. It features a hot swap controller to enable hot plugging capabilities. Among the most basic features of a hot swap controller are the abilities to control current and provide fault isolation. The board also features an I2C expander and an I2C switch. This makes the overall design very easy to extend. Internally a BMC (Board Management Controller) is responsible for connecting to I2C devices. Therefore the BMC image needs to be customized by us. The BMC also exposes an IPMI interface allowing other devices to gather information about the status of the system. A crucial task that has been set up for the BMC is the surveillance of the system's temperature. While we will later see that a whole monitoring infrastructure has been set up for the whole rack, the node needs to be able to detect anomalous continuous temperature spikes, which may require full node shutdown. Similarly efficient water leakage sensors are integrated to prevent single leaks to be unspotted. An case of a local leak the nodes need to be shut down to prevent further damages from happening. Leakages should also be detectable by pressure loss in the cooling circuit. The Intel Xeon Phi is the first publicly available product based on Intel's Many Integrated Cores architecture. It comes in several editions. For our purposes we decided to use the version with 61 cores, 16 GB of GDDR5 memory and 1.2 GHz. The exact version is 7120X. The X denotes that the product ships without any cooling body. The following picture shows an Intel Xeon Phi package with a chassis for active cooling. We can call the Intel Xeon Phi a co-processor, an accelerator, a processor card, a KNC or just a MIC. Nevertheless, it should be noted that each term has a subtle different meaning. For instance an accelerator is possibly the broadest term. It could also mean a GPU. A MIC could also mean another generation of co-processor, e.g., a KNL (successor of the KNC, available soon) or a KNF (ancestor, we could name it a prototype, or beta version of the KNC). Let's get into some details of the Xeon Phi. After all we need to be aware of all the little details of its architecture. It will be our main unit of work and our application(s) need to be perfectly adjusted to get the maximum number of operations from this device. Otherwise we lose efficiency, which is the last thing we want to reduce. The following two images show the assembled KNC PCB from both sides. We start with the back. The KNC contains a small flash unit, which can be used to change hardware settings, access the internals error buffer or do firmware upgrades. It is a more or less independent system. Therefore it needs an SMC unit. The SMC unit also manages various counters for the system. It aggregates and distributes the data received from the various sensors. The board is so dense that its hard to figure out what important components are assigned to what side. For instance the GDDR memory chips are definitely placed on both sides. The form a ring around the die. This is also reflected by the internal wiring as we will see later. The front also shows the same ring of GDDR. There seems to be a lot more going on on the front. We have the two power connectors, a variety of temperature sensors (in total there are seven) and electronic components, such as voltage regulators. Most importantly we not only see the vias of the silicon, but also the chip itself. This is a massive one - and it carries 61 cores. A single KNC may take up to 300 W of power during stress. In idle the power consumption may be as low as 50 W, but without any tricks we may end oscillate around 100 W. A neat extra is the blue status LED. It may be one of the most useless LEDs ever. In fact I've never received any useful information from it. If the KNC fails during boot, it blinks. If it succeeds, it blinks. Needless to say that the only condition for a working LED is a PCIe connection. Even if power is not connected the LED will blink from the current transmitted via the PCIe connectors. Internally the cores of a KNC are connected with each other by a ring bus system. Each core has a shared L2 cache, which is connected to a distributed tag directory. The cache coherency protocol for the KNC cores is MESI. There are up to 6 memory controllers, which can be accessed via the ring bus as well. Finally the ring bus proxies the connection to the PCIe tree. The following image illustrates the concept of the ring bus. Even more important than understanding the communication of the cores with each other is knowledge about the core's internal architecture. Knowing the cache hierarchy and avoiding cache misses is crucial. Permanently using the floating point units instead of waiting for data delivery ensures high performance. We care about two things: The bandwidth delivered by a component, measured in GB/s, and the latency of accessing the device - here we count the clock cycles. In general the observed behavior is similar to common knowledge. These are basically numbers every programmer should know. Interestingly access to another core's L2 cache is a magnitude more expensive than reading from its own L2 cache. But this is still another magnitude cheaper than accessing data controlled by the memory controllers. The next illustration gives us some accurate numbers to give us a better impression of the internal workings. We use 2 instead 4 threads, since only 2 threads are run within one cycle. Threads only operate every other cycle, effectively cutting the frequency in half for all four threads. The quite low memory bandwidth is a big problem. There are some applications that are not really memory bound, but for our applications in Lattice QCD we will be definitely constraint by that. Once we reach the memory limit we are basically done with our optimizations. There is nothing more that could be improved. The next step would be search for better algorithms, which may lower the memory bandwidth requirements. This can be achieved by better data structures, more data reuse or more effective approximations to the problem that needs to be solved. The modular design of the QPACE 2 makes it theoretically possible to exchange the KNC with, e.g., the KNL. The only practical requirement is that the PCIe chip and the host CPU are able to manage the new card. But besides that there are no other real limiting factors. The CPU serves as the PCIe root complex. The co-processors as well as the Connect-IB card are PCIe endpoints. Peer-to-peer (P2P) communication between any pair of endpoints can take place via the switch. The reasoning behind this node design is that a high-performance network is typically quite expensive. A fat node with several processing elements and cheap internal communications (in this case over PCIe) has a smaller surface-to-volume ratio and thus requires less network bandwidth per floating-point performance. This lowers the relative cost of the network. The number of KNCs and InfiniBand cards on the PCIe switch is determined by the number of lanes supported by commercially available switches and by the communication requirements within and outside of the node. We are using the largest available switch, which supports 96 lanes of PCIe Gen3. Each of the Intel Xeon Phis has a 16-lane Gen2 interface. This corresponds to a bandwidth of 8 GB/s. Both, the CPU and the Connect-IB card, have a 16-lane Gen3 interface, i.e., almost 16 GB/s each. The external InfiniBand bandwidth for two FDR ports is 13.6 GB/s. This balance of internal and external bandwidth is consistent with the communication requirements of our target applications. The following image shows our PCIe test board. It is equipped with the PLX chip and contains PCIe slots, just like a normal mainboard does. Additionally it comes with plenty of jumpers to setup the bus according to our needs. Furthermore we have some debug information displayed via the integrated LCD display. The PCIe bus is a crucial component in the whole design of the machine. The intra-node communication is done exclusively via the PCIe bus. It is therefore required to have the best possible signal quality. We did many measurements and ensured that the link training could be performed as good as possible. The signal quality is controlled via standard eye diagrams. Even though every PCIe lane should perform at its best, it was not possible to test every lane as extensive as desired. We had to extrapolate or come to conclusions with limited information. The result, however, is quite okay. In general we only see a few PCIe errors, if any. The few errors we sometimes detect are all correctable and do not affect the overall performance. As the principle working horse of our network architecture an InfiniBand network has been chosen. This solution can be certainly considered conservative. Among the top 500 supercomputers, InfiniBand is the interconnect technology in most installations. Ethernet is still quite popular, though. We can compare the 224 installations of InfiniBand with 100 using Gb/s Ethernet connections, and 88 using 10 Gb/s. A proper network topology has to ensure that less cables (with fewer connectors) are needed, while minimizing the need for switches and increasing the directness of communication. The most naive, but expensive and unpractical way, would be to connect every node with every other node. Here, however, we require N - 1 ports on each node. Those ports need to be properly cabled. A lot of cables and a lot of ports, usually resulting in a lot of cards. Therefore we need a smarter way to pursue our goals. A good way is by looking at the number of possible ports first. In our case the decision was between a single and a dual port solution. Two single port cards are not a proper solution, because such a solution would consume too many PCIe slots. Naturally an obvious solution is to connect nearest neighbors only. A torus network describes such a mesh interconnect. We can arrange nodes in a rectilinear array of N > 1 dimensions, with processors connected to their nearest neighbors, and corresponding processors on opposite edges of the array connected. Here each node has 2N connections. Again we do not have enough ports available per node. The lowest number would be four. With two ports we can only create a ring bus. In the end we decided that the possibly best option is to use a hyper-crossbar topology. This has been used before, e.g., by the Japanese collaboration CP-PACS. Essentially we calculate the number of switches by using two times the number of nodes (i.e., 2 ports per node) divided by the number of ports per switch. In our case we have 36 ports per switch, which leaves us with 4 switches. We use 32 of these 36 ports per switch for the nodes, with the additional ports being assigned to connect the machine to a storage system. A hyper-crossbar solution is 2-dimensional, which means we have 2 switches in x-direction and the same number of y-direction. The previous image shows the hyper-crossbar topology in our scenario. The switches in x-direction are marked by the red rectangles, the switches in y-direction by the blue rectangles. Each node is represented by a black outlined ellipse. The assignment of the two ports are indicated by the bullets. The bullets are drawn with the respective color of their assigned switch. For a hyper-crossbar we need a dual port connector. Another requirement is the support for FDR data rates. FDR stands for fourteen data rate. FDR enables A suitable candidate could be found with the Connect-IB card from Mellanox. The card is providing 56 Gb/s per port, reaching up to 112 Gb/s in total. Additionally Connect-IB is also great for small messages. A single card can handle at least 130 × 106 messages per second. The latency is measured to be around a microsecond. For us the card is especially interesting due to CPU offload of transport operations. This is hardware enhanced transfer, or DMA. No CPU is needed to transport the bytes to the device. In the end we will benefit especially for our co-processor. The following image shows a Connect-IB card with the passive cooling body installed. Also we still have the mounting panel attached. There are other reasons for choosing the Connect-IB card. It is the fourth generation of IB cards from Mellanox. Furthermore it is highly power efficient. Practically a network with Connect-IB cards easily scales to tens-of-thousands of nodes. Scaling is yet another important property of modern HPC systems. Before the dimensions of a single node can be decided we should be sure what rack size we want or what our limits are. In our case we decided pretty early for a standardized 42U rack. This is the most common rack size in current data centers. The EIA (Electronic Industries Alliance) standard server rack (1U) is a 19 inch wide rack enclosure with rack mount rails which are 17 3/4 " (450.85 mm) apart and whose height is measured in 1.75" (44.45 mm) unit increments. Our rack is therefore roughly 2 m tall. The rack design introduces a system of pipes and electrically conductive rails additionally to the standard rack. Each node is measured to be 3U in height. We can have 8 such nodes per level. Furthermore we need some space to distribute cables to satisfy network and power demands. Additional space for some management electronics is required as well. At most we can pack 64 nodes into a single rack. The next image shows a CAD rendering of the rack's front bottom level. We only see a single node being mounted. The mounting rails are shown in pink. The power supply rails are drawn in dark gray and red. The water inlet and outlet per node is shown besides the green power supply. The nodes are inserted and removed by using the lever. In the end a node is pulled to break out of the connection held by the quick couplings. The quick couplings will be explained later. Inserting a node is done by pushing it into the quick couplings. The whole power distribution and power supply system had to be tested extensively. We know that up to 75 kW may be used, since we expect up to 1.2 kW per node (with 64 nodes). This is quite some load for such a system. The high density also comes with very high demands on terms of power per rack. The connection to the electrically conductive rails is very important. We use massive copper to carry the current without any problems. The supply units are also connected to the rails via copper. The following picture shows the connection prior to our first testing. It is quite difficult to simulate a proper load. How do you generate 75 kW? Nevertheless we found a suitable way to do some testing up to a few kW. The setup is sufficient to have unit tests for the PSUs, which will then scale without a problem. Our test setup was quite spectacular. We used immersion heaters borrowed from electric kettles. We aggregated them to deliver the maximum power. The key question was if the connected PSUs can share the load. A single PSU may only take up to 2 kW. Together they should go way beyond those 2 kW. In total we can go up to 96 kW. We use 6 PSUs (with at most 12 kW) to power 8 nodes (requiring at most 9.6 kW). Therefore we have a 5 + 1 system, i.e., 5 PSUs required to work under extreme conditions, with a single PSU redundancy. The grouping of PSUs to nodes forms a power domain. In total we have 8 such power domains. The power domains relate to the phase in the incoming current. Unrelated to the power domain is the power management. A PSU control board designed in Regensburg monitors and controls the PSUs via PMBus. We use a BeagleBone Black single-board computer that plugs into the master control board, which we named Heinzelmann. A single PSU control board services 16 PSUs. Therefore we require 3 control boards for the whole rack. There have been other questions, of course, that have been subject for this or similar tests. In the end we concluded that the provided specification was not completely accurate, but we could work around the observed problems. The top of the rack is reserved for the PSU devices. We have 24 PSUs on each side of the rack. Between the nodes (bottom) and the PSUs (top) we have additional space for auxiliary devices, such as Ethernet switches or the Heinzelmann components. All these components are - in contrast to the nodes - not water cooled. Most of them use active air cooling, some of them are satisfied with passive cooling. The PSU components also require PDU devices. They are 1U in size and come in an unmanaged configuration. We choose to have 12 outlets following the IEC 60320 C13 standard. This is a polarized three pole plug. Our PDUs distribute 10 A per outlet in 3 phases. A famous principle for the installation of water circuits is the Tichelmann principle. It states that paths need to be chosen in such a way, that the water pressure is equal for each point of interaction. In our case the point of interaction is the coupling to a node. Since paths are not equal in length, the Tichelmann principle is not chosen. A possible way around is to design the pipes differently for the points of interaction that have a longer distance from the origin. Also we may artificially extend the path to the coupling of each node to be as long as the longest path. This all sounds hard to design, control and build. It is also quite expensive. Since the pipes are quite large we estimated that the effect of water flow resistance is too small to have a negative influence in practice. We expect nearly the same flow on the longest vs the shortest path taken. The resistance within a node is much higher than the added resistance by traveling the longer path. The following image shows the empty rack of the QPACE 2 system. The piping system has already been installed. The previous image contained the electrically conductive rails, most of the power connectors and the mounting rails. The auxiliary devices, such as the management components, the InfiniBand and Ethernet switches, as well as cables and other wires are not assembled at this point. Assembling the rack is practically staged as follows. We start with the interior. First all possible positions need to be adjusted and prepared. Then the InfiniBand switches need to be mounted. Power supply units need to be attached. The trays need to be inserted. Now cables are laid out. The should be labeled correctly. The last step is to insert the nodes. Finally we have the fully assembled and also quite densely packed rack. The following image shows the front of the QPACE 2 rack. All nodes are available at this point in time. The cabling and the management is fully active. The LED at the bottom shows the water flow rate through the rack in liters per minute. Overall the rack design of the QPACE 2 is not as innovative as the whole machine, but it is a solid construction that fulfills its promise. The reasons to neglect the Tichelmann principle have been justified. The levers to insert and remove nodes work as expected. Another interesting part of the development of an HPC system is its monitoring system. Luckily we already had a scalable solution that could just be modified to include the QPACE 2 system. A really good architecture for monitoring large computer systems centers around a scalable, distributed database system. We choose Cassandra. Cassandra makes it possible to have high frequency write operations (many small logs), with great read performance. As a drawback we are limited in modifications (not needed anyway) and we need intelligent partitioning. The explicit logging and details about the database infrastructure will not be discussed in this article. Instead I want to walk us through a web application, which allows any user of the system to acquire information about its status. The data source of this web application is the Cassandra database. Complementary other data points, such as current ping information is used. All in all the web application is scalable as well. As an example the following kiviat diagram is shown to observe the current (peak) temperatures of all nodes within the rack. All temperature sensors are specified in a machine JSON file. The file has a structure that is similar to the following snippet. We specify basic information about the system, like name, year and the number of nodes. Additionally special queries and more are defined (not shown here). Finally an array with temperature sensors is provided. The id given for each sensor maps to the column in the Cassandra database system. The name represents a description of the sensor that is shown on the webpage. id { "qpace2": { "name": "QPACE 2", "year": 2015, "nodes": 64, "temperatures": [ { "id": "cpinlet", "name": "Water inlet" }, { "id": "cpoutlet", "name": "Water outlet" }, { "id": "core_0", "name": "Intel E3 CPU Core 1" }, { "id": "core_1", "name": "Intel E3 CPU Core 2" }, { "id": "core_2", "name": "Intel E3 CPU Core 3" }, { "id": "core_3", "name": "Intel E3 CPU Core 4" }, { "id": "mic0_temp", "name": "Intel Xeon Phi 1" }, { "id": "mic1_temp", "name": "Intel Xeon Phi 2" }, { "id": "mic2_temp", "name": "Intel Xeon Phi 3" }, { "id": "mic3_temp", "name": "Intel Xeon Phi 4" }, { "id": "pex_temp", "name": "PLX PCIe Chip" }, { "id": "ib_temp", "name": "Mellanox ConnectIB" } ] }, } Reading the peak temperatures of each node is achieved by taking the maximum of all current sensor readings. But the obtained view is only good to gain a good overview quickly. In the long run we require a more detailed plot for each node individually. Here we use a classic scatter chart. We connect the dots to indicate some trends. The following picture shows a demo chart for some node. The covered timespan is the last day. We see the different levels of temperature for different components. The four co-processors are measured to have roughly the same temperature. The CPU cores, as well as the water in- and outlet are among the lower temperatures. The PLX chip and the Mellanox Connect-IB card are slightly higher for idle processes. Nevertheless, they won't scale such as the Xeon Phi co-processors. Hence there are no potential problems coming up as a consequence from the given observation. How is the web front-end build exactly? After all this is just a simple Node.js application. The main entry point is shown below. Most importantly it gets some controllers and wires them up to some URL. A configuration brings in some other useful settings, such as serving static files or the protocol (http or https) to use. var express = require('express'); var readline = require('readline'); var settings = require('./settings'); var site = require('./site'); var server = require('./server'); var homeController = require('./controllers/home'); var machineController = require('./controllers/machine'); // ... others var app = express(); settings.directories.assets.forEach(function (directory) { app.use(express.static(directory)); }); app.set('views', settings.directories.views); app.set('view engine', settings.engine); app.use('/', homeController(site)); app.use('/machine', machineController(site)); // ... others app.use(function(req, res, next) { res.status(404).send(settings.messages.notfound); }); var instance = server.create(settings, app, function () { var address = server.address(instance) console.log(settings.messages.start, address.full); }); if (process.platform === 'win32') { readline.createInterface({ input: process.stdin, output: process.stdout }).on(settings.cancel, function () { process.emit(settings.cancel); }); } process.on(settings.cancel, function () { console.log(settings.messages.exit) process.exit(); }); We use https for our connections. The page does not contain any sensitive data and we do not have a signed certificate. So what's the purpose? Well, https does not harm and is just the wave of the future. We should all https everywhere. The server load is not really affected and the client can handle it anyway. The agility of switching from http to https and vice versa is all provided in the server.js file included in the code above. Here we consider some options again. The settings are passed in to an exported create function, which setups and starts the server, listening at the appropriate port and using the selected protocol. create var fs = require('fs'); var scheme = 'http'; var secure = false; var options = { }; function setup (ssl) { secure = ssl && ssl.active; scheme = secure ? 'https' : 'http'; if (secure) { options.key = fs.readFileSync(ssl.key); options.cert = fs.readFileSync(ssl.certificate); } } function start (app) { if (secure) { return require(scheme).createServer(options, app); } else { return require(scheme).createServer(app); } } module.exports = { create: function (settings, app, cb) { setup(settings.ssl); return start(app).listen(settings.port, cb); }, address: function (instance) { var host = instance.address().address; var port = instance.address().port; return { host: host, port: port, scheme: scheme, full: [scheme, '://', host, ':', port].join('') }; } }; The single most important quantity for standard users of the system is the availability. Users have to be able to see what fraction of the machine is online and if there are resources to allocate. The availability is especially important for users who want to run large jobs. These users are particularly interested about the up-time of the machine. There are many charts to illustrate the availability. The most direct one is a simple donut chart showing the ratio of nodes that are online against the ones that are offline. Charting is implemented by using the Chart.js library with some custom enhancements and further customization. Without the Chart.Scatter.js extension most of the plots wouldn't be so good / useful / fast. A good scatter plot is still the most natural to display. Also for the availability we provide a scatter plot. The scatter plot only shows the availability and usage over time. There are many different time options. Most of them relate to some aggregation options set in the Cassandra database system. Since our front-end web application follows the MVC pattern naturally, we need a proper design for the integrated controllers. We chose to export some constructor function, which will operate on a site object. The site contains useful information, such as the settings, or functions. As an example the site carries a special render function, which is used to render a view. The view is specified over its sitename. The sitename is mapped to the name of a view internally. Therefore we can change the names of files as much as we want to - there is only one change required. site render Internally the sitemap is used for various things. As mentioned the view is selected over the sitemap's id. Also links are generated consistently using the sitemap. Details of this implementation remain disclosed, but what should be noted is that we can also use the sitemap to generate breadcrumbs, a navigation or any other hierarchical view of the website. Without further ado the code for the HomeController. It has been stripped down to show only two pages. One is the landing page, which gets some data to fill blank spots and then transports the model with the data to the view. The second one is the imprint page, which is required according to German law. Here a list of maintainers is read from the site object. These sections are then transported via the model. HomeController var express = require('express'); var router = express.Router(); module.exports = function (site) { router.get('/', function (req, res) { // Get data for default route ... site.render(res, 'index', { title: 'Overview', machines: machines, powerStations: powerStations, frontends: frontends, }); }); router.get('/imprint', function (req, res) { var sections = Object.keys(site.maintainers).map(function (id) { var maintainers = site.maintainers[id]; return { id: id, persons: maintainers, }; }); site.render(res, 'imprint', { title: 'Imprint', sections: sections }); }); return router; }; The data for the plots is never aggregated when rendering. Instead placeholders are transported. All charts will be loaded in parallel by doing further AJAX requests. There are specialized API functions that will return JSON with the desired chart data. The utilization shows the number of allocated nodes. An allocation is managed by the job submission queuing system. As we will see later, our choice is SLURM. Users should be aware of the system's current (and / or past) utilization. If large jobs are up for submission, we need to know what the mean time to job initialization will be. The graphs displayed on the webpage give users hints to help them picking a good job size or estimating the start time. Again we use donut charts for the quick view. The site builds the necessary queries for the Cassandra database by using a query builder. This is a very simply object that allows us to replace raw text CQL with objects and meaning. Instead of having the potential problem of a query failing (and returning nothing) we may have a bug in our code, which crashes the application. The big difference is that the latter may be detected much more easily, even automatically, and that we can be sure about the transported query's integrity. There is no such thing as CQL injection possible, if done right. In our case we do not have to worry about CQL injection at all. Users cannot specify parameters, which will be used for the queries. Also the database is only read - the web application does not offer any possibility to insert or change data in Cassandra. The following code shows the construct of a SelectBuilder. It is the only kind of builder that is exported. The code can be easily extended for other scenarios. The CQL escaping for fields and values is not shown for brevity. SelectBuilder builder var SelectBuilder = function (fields) { this.conditions = []; this.tables = []; this.fields = fields; this.filtering = false; }; function conditionOperator (operator, condition) { if (this.fields.indexOf(condition.field) === -1) this.fields.push(condition.field); return [ condition.field, condition.value ].join(operator); } var conditionCreators = { delta: function (condition) { if (this.fields.indexOf(condition.field) === -1) this.fields.push(condition.field); return [ 'token(', condition.field, ') > token(', Date.now().valueOf() - condition.value * 1000, ')' ].join(''); }, eq: function (condition) { return conditionOperator.apply(this, ['=', condition]); }, gt: function (condition) { return conditionOperator.apply(this, ['>', condition]); }, lt: function (condition) { return conditionOperator.apply(this, ['<', condition]); }, standard: function (condition) { return condition.toString(); } }; SelectBuilder.prototype.from = function () { for (var i = 0; i < arguments.length; i++) { var table; if (typeof(arguments[i]) === 'string') { table = arguments[i]; } else if (typeof(arguments[i]) === 'object') { table = [arguments[i].keyspace, arguments[i].name].join('.'); } else { continue; } this.tables.push(table); } return this; }; SelectBuilder.prototype.where = function () { for (var i = 0; i < arguments.length; i++) { var condition; if (typeof(arguments[i]) === 'string') { condition = arguments[i]; } else if (typeof(arguments[i]) === 'object') { var creator = conditionCreators[arguments[i].type] || conditionCreators.standard; condition = creator.call(this, arguments[i]); } else { continue; } this.conditions.push(condition); } return this; }; SelectBuilder.prototype.filter = function () { this.filtering = true; return this; }; SelectBuilder.prototype.toString = function() { return [ 'SELECT', this.fields.length > 0 ? this.fields.join(', ') : '*', 'FROM', this.tables.join(', '), 'WHERE', this.conditions.join(' AND '), this.filtering ? 'ALLOW FILTERING' : '', ].join(' '); }; var builder = { select: function (fields) { fields = fields || '*'; return new SelectBuilder(Array.isArray(fields) ? fields : [fields]); }, }; module.exports = builder; Of course such a SelectBuilder could be much more complex. The version shown here is only a very lightweight variant that can be used without much struggle. It provides most of the benefits that we can expect from a DSL for expressing CQL queries without using raw text or direct string manipulation. Monitoring the power consumption may also be interesting. This is also a great indicator about the status of the machine. Is it running? How much is the load right now? Monitoring the power is also important for efficiency reasons. In idle we want to spent the least amount of energy possible. Under high stress we want to be as efficient as possible. Getting the most operations per energy is key. We have several scatter plots to illustrate the power consumption over time. A possible variant for a specific power grid is shown below. How are these charts generated? The client side just uses the Chart.js library as noted earlier. On the server-side we talk to an API, which eventually calls a method from the charts module. This module comes with some functions for each supported type of chart. For instance the scatter function generates the JSON output to generate a JSON chart on the client site. Therefore the view is independent on the specific type of chart. It will pick the right type of chart in response to the data. charts scatter The example code does two things. It iterates over the presented data and extracts the series information from it. Finally it creates an object that contains all information that are potentially needed or relevant for the client. // ... function withAlpha (color, alpha) { var r = parseInt(color.substr(1, 2), 16); var g = parseInt(color.substr(3, 2), 16); var b = parseInt(color.substr(5, 2), 16); return [ 'rgba(', [r, g, b, alpha].join(', '), ')' ].join(''); } var charts = { // ... scatter: function (data) { for (var i = data.length - 1; i >= 0; i--) { var series = data[i]; series.strokeColor = withAlpha(series.pointColor, 0.2); } return { data: data, containsData: data.length > 0, type: 'Scatter', options: { scaleType: 'date', datasetStroke: true, } }; }, // ... }; module.exports = charts; The withAlpha function is a helper to change a standard color presented in hex notation to an RGBA color function string. It is definitely useful to, e.g., use an automatically defined lighter stroke color than the fill color. withAlpha In conclusion the monitoring system has a well established backend with useful statistic collection. It does some important things autonomously and sends e-mails in case of warnings. In emergency scenarios it might initiate a shutdown sequence. The web interface gives users and administrators the possibility to have a quick glance at the most important data points. The web application is easy to maintain and extend. It has been designed for desktop and smartphone form factors. Water cooling is the de-facto standard for most HPC systems. There are reasons to use water instead of air. The specific heat capacity of water is many times larger than the specific heat capacity of air. Even more importantly, water is three magnitudes more dense than air. Therefore we have a much larger efficiency. Operating large fans just to amplify some air flow is probably one of the least efficient processes at all. Why is hot water cooling more efficient than standard water cooling? Well, standard water cooling requires cool water. Cooling water is a process that quite energy hungry. If we could save that energy we would already gain something. And yes, hot water cooling mimics how blood flows through our body. The process does not require any cooling, but still it has the ability to cool / keep a certain temperature. Hot water cooling is most efficient if we can operate on a small ΔT, which is difference between the temperature of the water and the temperature of the component. Hot water cooling does not require any special cooling devices. No chillers are needed. We only need a pump to drive the flow in the water circuit. There are certain requirements, because hot water cooling lives from turbulent flow. A turbulent flow is random and unorganized. It will self-interact and provide a great basis for mixing and heat conductivity. On the contrast a lamar flow will tend to separate and run smoothly along the boundary layers. Hence there won't be much heat transfer and if so, then only certain paths will be taken. The mixing character is not apparent. The design of our hot water cooling infrastructure had to split up in several pieces. We have to: The discussion about the rack design has already taken place. After everything has been decided we needed to run some tests. We used a simple table water cooler to see if the cooling package for a co-processor was working properly. The portable table cooler provides us with a fully operational water cooling circuit. We have a pump that manages to generate a flow of 6.8 liters per minute. The table cooler is operated with current from the standard power plug. It can be filled with up to 2.5 liters of water. In total the table cooler can deal with 2700 W of cooling capacity. In our setup we connected the cooling packages (also called cold plate) with the table cooler via some transparent tubes. The tubes used for this test are not the ones we wanted to use later. Also the tubes have been connected directly to the table cooler. In the process we still need to decide on some special distribution / aggregation units. The next picture shows the test of the cold plate for the co-processor. We use a standard Intel Server Board (S1200V3RP). This is certainly good enough to test a single KNC without encountering incompatibilities, e.g., with the host CPU or the chipset. We care about the ΔT in this test. It is measured via the table cooler and the peak temperature of the co-processor. The latter can be read out using the MPSS software stack from Intel. The real deal is of course much larger than just a single table cooler. We use a large pump in conjunction with some heat exchanger and a sophisticated filtering system. One of the major issues with water cooling is the potential threat of bacteria and leakages. While a leak is destructive for both, components and infrastructure, a growing number of bacteria has the ability to reduce cooling efficiency and congest paths in our cold plates. A good filter is therefore necessary, but cannot guarantee to prevent bacteria growth. Additionally all our tubes are completely opaque. We don't want to encourage breeding with external energy. But the most important action against bacteria is the usage of special biocide in the cooling water. The following picture shows how the underfloor wiring installation below the computing center looks like. As already explained the cooling circuit is only the beginning. Much more crucial are the right design choices for the cold plates and connectors. We need to take great care about possible leakages. The whole system should be as robust, efficient and maintainable as possible. Also we do need some flexibility and there are limits in our budget. What can we do? Obviously we cannot accept fixed pipes. We need some tubes inside the nodes. Otherwise we do not have some tolerance for the components inside. A fixed tube requires a very fixed chassis. Our chassis has been build with modularity and maintainability in mind. There may be a few millimeters here and there that are out of the specification scope. Let's start the discussion with the introduction of the right connectors. The connection of the nodes to the rack is done via quick couplings. Quick couplings offer a high quality mechanism for perfectly controllable transmission of water current. If not connected a quick coupling seals perfectly. There is no leakage. If plugged in a well-tuned quick coupling connection works binary. Either the connection is alright and we observe transmission with the maximum flow possible, or we do not see any transmission at all (the barrier is still sealed). A possible quick coupling connector is shown below. We choose a different model, but the main principles and design characteristics remain unchanged. The quick couplings need to be pulled back to release a mounted connector. This means we need another mechanism in our case. The simplest solution was to modify the quick couplings head, such that it can always release the mounted connector if enough force is being used. The disadvantage of this method is that a partial mounting is now possible. Partial mounting means that only a fraction of the throughput is flowing through the attached connector. We have practically broken the binary behavior. The two connectors attached to the quick coupling are the ones from the rack and a node. The node's connector is bridging the water's way to a distribution unit, called a manifold. The manifold can also work as an aggregation unit, which is the case for other connector (outlet). The job of the inlet's manifold is to distribute the water from a single pipe to six tubes. The six tubes end up in six cold plates. Here we have: Especially interesting is the choice of clamps to attach the tubes to the manifold (and cold plates). It turns out that there is actually an optimum choice. All other choices may either be problematic right away, or have properties that may result in leakages over time (or under certain circumstances). The following picture shows a manifold attached to a node with transparent tubes. The tubes and the clamps are only for testing purposes. They are not used in production. Choosing the right tubes and clamps is probably one of the most important decisions in terms of preventing leakages. If we choose the wrong tubes we could amplify bacteria growth, encounter problems during the mounting process or simply have a bad throughput. The clamps may also be the origin of headaches. A wrong decision here gives us a fragile solution that won't boost our confidence. The best choice is to use clamps that fit perfectly for the diameter of the tubes. The only ones applying to that criterion are clamps, which have a special trigger that requires a special tool. In our case the clamps come from a company called Oetiker. These ear clamps (series PG 167) are also used in other supercomputers, such as the SuperMUC. In our mounting process we place an additional stub on the tube. The stubs are processed at a temp of 800°C. The stub is then laid on the tube and heated together with an hot-air-gun (at 100 °C). As a result the tubes are effectively even thicker. Their outer wall strength has additionally increased. Consequently the clamps fit even better than they would have without the stubs. Finally we also have to talk about the cold plates. A good representative is the cold plate for the Intel Xeon Phi. It is not only the cold plate that has to show the best thermal performance, but also the one that is produced most often. The quick couplings and manifolds just provide the connection and distribution units. The key question now is: How are the components cooled exactly? As already touched we use a special cold plate to transfer the heat from the device to the flowing water. The cold plate consists of three parts. A backside (chassis), a connector (interposer) and a pathway (roll-bond). The backside is not very interesting. It may transfer some heat to the front, but its main purpose is to fix the front, such that it is attached in the right position. The interposer is the middle part. It is positioned between the roll-bond and the device. One of the requirements for the interposer is that it is flexible enough to adjust for possible tolerances on the device and solid enough to have good contact for maximum heat conductivity. The interposer features some nifty features, such as a cavity located at the die. The gap is then filled with a massive copper block. The idea here is to have something with higher heat conductivity than aluminum. The main reason is to amplify heat transfer in the critical regions. The copper block has to be a little bit smaller than the available space. That way we prepare for the different thermal expansion. Finally we have the roll-bond plate. It is attached to the interposer via some glue, which is discussed in the next section. The roll-bond plate has been altered to provide a kind of tube within the plate. This tube follows a certain path, which has been chosen to maximize the possible heat transfer from the interposer to the water. An overview over the previously named components is presented in the image below. The image shows the chassis (top), the roll-bond (right), the inserted copper block (left bottom) and the interposer (bottom). All cooling components are centered around an Intel Xeon Phi co-processor board. The yellow stripes in the image are thermal interface materials (TIMs). TIMs are crucial components of advanced high density electronic packaging and are necessary for the heat dissipation which is required to prevent the failure of electronic components due to overheating. The conventional TIMs we use are manufactured by introducing highly thermally conductive fillers, like metal or metal oxide microparticles, into the polymer matrix. We need them to ensure good heat transfer while providing the sufficient insulation. The design of the roll-bond has been done in such a way to ensure turbulent water already arising at our flow rates. Furthermore the heat transfer area had to be maximized without reducing the effective water flow. Finally the cold plate has to be reliable. Any leak would be a disaster. Choosing the right tubes seems like a trivial issue in the beginning. One of the problems that can arise with water cooling is leakage due to long-term erosion effects on the tubes. Other problems include increased maintenance costs and the already mentioned threat of bacteria. Most of those problems can be alleviated using the right tubes. We demand several properties for our tubes. They should have a diameter around 10.5 mm. They have to be completely opaque. Finally they should be as soft as possible. Stiff tubes make maintenance much harder - literally. We've chosen ethylene propylene diene monomer (EPDM) tubes. These tubes are quite soft and can be operated in a wide temperature range, starting at -50 °C and going up to 150 °C. The high tensile failure stress of 25 MPa makes it also a good candidate for our purposes. Also the tube is good enough to withstand our requirement of 10 bar pressure. The following image shows a typical EPDM tube. For the thermal paste we demand the highest possible thermal conductance without being electrically conductive. In practice this is hard to achieve. Electron mobility is the reason for an electric conductor and also a good mechanism for heat transportation. Another mechanism, however, can be found with lattice vibrations. Here phonons are the carriers. We found a couple of thermal pastes that satisfy our criteria. In order to find out what is the best choice in terms of the shown thermal performance we prepared some samples. Each sample was then benchmarked using a run of Burn-MIC, a custom application that nearly drives the Xeon Phi to its maximum temperature. The maximum temperature is then used to create the following distribution. We see that PK1 from Prolimatech is certainly the best choice. It is not the most expensive one here, but it is also certainly not cheap. For 30 g we have to pay 30 &eur;. Some of the other choices are also not bad. the cheapest one, KP98 from Keratherm, is still an adequate choice. We were disappointed by the most expensive one, WLPK from Fischer. However, even though the WLPK did not show an excellent performance here, this may be due to its more liquid-ish form as compared to the others. In our setup we therefore end up with a contact area that is not ideal. Now that our system is cooled and its design is finished, we need to take care about the software side. HPC does not require huge software stacks, but just the right software. We want to be as close to the metal as possible, so any overhead is usually unwanted. Still, we need some compatibility, hence we will see that there are many standard tools being used in the QPACE 2 system. Choosing the right software stack is of important. I guess there is no question about that. What needs to be evaluated is what kind of standard software is expected by the users of the system. We have applications in the science area, especially particle physics simulations in the field of Lattice QCD. It is therefore crucial to provide the most basic libraries, which are used by most applications. Also some standard software has to be available. And even more important, it has to be optimized for our system. Finally we also have to provide tools and software packages, which are important to get work done on the system. Like most systems in the Top 500 we use Linux as our operating system. There are many reasons for this choice. But whatever our reasons regarding the system itself are, it also makes sense from the perspective of our system's users. Linux is the standard operating system for most physicists and embraces command line tools, which are super useful for HPC applications. Also job submission systems and evaluation tools are best used in conjunction with a Linux based operating system. The following subsections discuss topics such as our operating system and its distribution, our choice of compilers, as well as special firmware and communication libraries. We will discuss some best practices briefly. Let's start with the QPACE 2 operating system. Every node does not have a single operating system. Instead every node runs multiple operating systems. One is the OS running on the processor of the CPU board. This is the main operating system, even though it is not the first one to be started. The first one is the operating system embedded in U-Boot. U-Boot is running on the BMC, before the BMC's real operating system can be started. It is an OS booter, so to speak. Finally each co-processor has its own operating system as well. The BMC on the CPU card is running an embedded Linux version and boots from flash memory as soon as power is turned on. It supports the typical functionalities of a board management controller, e.g., it can hold the other devices in the node in reset or release them. Additionally it can monitor voltages as well as current and temperature sensors and act accordingly. As an example an emergency shutdown could be started in severe cases. It can also access the registers in the PCIe switch via an I2C bus. Our nodes are disk-less. Once the CPU is released from reset it PXE-boots a minimal Linux image over the Ethernet network. The full Linux operating system (currently CentOS 7.0) uses an NFS-mounted root file system. The KNCs are booted and controlled by the CPU using Intel's KNC software stack MPSS. The KNCs support the Lustre file system of our main storage system, which is accessed over InfiniBand. Also the HDF5 file system has to be supported for our use-case. A variety of system monitoring tools are running either on the BMC or on the CPU, which, for example, regularly checks the temperatures of all major devices (KNCs, CPU, IB HCA, PCIe switch) as well as various error counters (ECC, PCIe, InfiniBand). A front-end server is used to NFS-export the operating system. The front-end server is also used to communicate with the nodes and monitor them. Furthermore the system makes it possible to log in to the machine, and to control the batch queues. Choosing the right compiler is a big deal in HPC. Sometimes months of optimization are worthless if we do not spent days evaluating available compilers and their compilation flags. The best possible choice can make a huge difference. Ideally we benefit without any code adjustments at all. Even though the best compiler is usually tied to the available hardware, there may be exclusions. In our case we do not have much choice actually. There are not many compilers that can compile for the KNC. Even though the co-processor is said to be compatible with x86, it is not on a binary level. The first consequence is that we need to recompile our applications for the Intel Xeon Phi. The second outcome of this odd incompatibility is that a special compiler is required. There is a version of the GCC compiler that is able to produce binaries for the Xeon Phi. This version is used to produce kernel modules for the KNCs operating system. However, this version does not know about any optimizations, in particular the ones regarding SIMD, and is therefore a really bad choice for HPC. The obvious solution is to use the Intel compiler. In some way that makes the Intel Xeon Phi even worse. While the whole CUDA stack can be obtained without any costs, licenses for the Intel compiler are quite expensive. Considering that the available KNC products are all very costly, it is a shame to be still left without the right software to write applications for it. Besides this very annoying model, which should not be supported at all by anyone, we can definitely say that the Intel compiler is delivering great performance. In addition to the Intel compiler we also get some fairly useful tools form Intel, most. Most of these tools come with Intel's Parallel Studio, which is Intel's IDE. Here we also get the infamous VTune. Furthermore we get some performance counters, debuggers, a lot of analyzers, Intel MPI (with some versions, e.g., 5.0.3) for distributed computing, the whole MPSS software stack (including hardware profilers, sensor applications and more) and many more goodies. For us the command line tools are still the way to do things. A crucial (and very elementary) task is therefore to figure out the best parameters for some of these tools. As already explained the optimum compiler flags need to be determined. Also the Intel MPI runtime needs to be carefully examined. We want the best possible configuration. The Intel compiler gives us the freedom not only to produce code compatible for the KNC, but also to leverage the three different programming modes: While the first and third mode are similar in terms of compilation, the second one if quite different. Here we produce a binary that has to be run from the host, which contains offloading sections that are run by spawning a process on the co-processor. The process on the co-processor is delivered by a subset of the original binary, which has been produced using a different assembler. Therefore our original binary was actually a mixture of host and MIC compatible code. Additionally this binary contained instructions to communicate with the co-processor(s). The native execution requires a tool called micnativeloadex, which also comes with the MPSS software stack. So what compilation flags should we use? Turns out there are many and to complicate things even more - they also depend on our application. Even though -O3 is the highest optimization level, we might sometimes experience strange results. It is therefore considered unreliable. In most cases -O2 is already sufficient. Nevertheless, we should always have a look at performance and reliability before picking one or the other. For targeting the MIC architecture directly we need the -mmic machine specifier. -O3 -O2 -mmic There are other interesting flags. Just to list them here briefly: -no-prec-div -ansi-alias -ipo Another possibility that has not been listed previously is the ability for Profile-Guided Optimization (PGO). PGO will improve the performance by reducing branch predictions, shrinking code size and therefore eliminating most instruction cache problems. For an effective usage we need to do some sampling beforehand. We start by compiling our programing with the -prof-gen and the -prof-dir=p flags, where p is the path to the profiler that will be generated. After running the program we will find the results of the sampling in the path that has been specified previously. Finally we can compile our application again, this time with -prof-use instead of the -prof-gen flag. -prof-gen -prof-dir=p -prof-use Compilers are not everything. We also need the right frameworks. Otherwise we have to write them ourselves, which may be tedious and result in a less general, less portable and less maintained solution. The following libraries or frameworks are interesting for our purposes. Some of these frameworks exclude each other, at least partially. For instance we cannot use the Cilk Plus threading capabilities together with OpenMP. The reason is that both come with a runtime to manage a threadpool, which contains (by default) as many SMT threads as possible. For our co-processor we would have 244 or 240 (with / without the core reserved for the OS). Here we have a classical resource limitation, where both frameworks fight for the available resources. That is certainly not beneficial. We do not use OpenACC, even though it would certainly be an interesting option for writing portable code that may run on co-processors, as well as on GPUs. The Intel compiler comes with its own set of special pragma instructions, which trigger offloading. Additionally we may want to use compiler intrinsics. The use of compiler intrinsics may be done in a quasi-portable way. An example implementation is available with the Vc library. Vc gives us a general data abstraction combined with C++ operator overloading, that makes SIMD not only portable, but also efficient to program. In the end we remain with a mostly classical choice. SIMD will either be done by using intrinsics (not very portable) or Cilk Plus (somewhat portable, very programmer friendly). Multi-threading will be laid in the hands of OpenMP. If we need offloading we will use the Intel compiler intrinsics. We may also want to use OpenMP, as an initial set of methods for offloading has been added in the latest version. Our MPI framework is also provided by Intel, even though alternative options, e.g., Open MPI, should certainly be evaluated. Another interesting software topic is the used firmware and driver stack. Even though our hardware stack consists nearly exclusively of commercially available (standard, at least for HPC) components, we use them in a non-standard way. Special firmware is required for these components: The latter two also demand some drivers for communication. The driver and firmware version have to match usually. Additionally we need drivers for connecting to our I/O system (Lustre) and possibly for reading values from the PLX chip. The PLX chip is quite interesting for checking the signal quality or detecting errors in general. Going away from the nodes we need drivers to access connectors available on the power control board, which is wired to a BeagleBone Black (BBB). Most of these drivers need to be connected to our Cassandra system. Therefore they either need to communicate with some kind of available web API or they need access to Cassandra directly. The communication with devices via I2C or GPIO is also essential for the BMC. Here we needed to provide our own system, which contains a set of helpers and utility applications. These applications can be as simple as enabling or disabling connected LEDs, but they can also be as complicated as talking to a hot-swap controller. The custom drivers already kick in during the initial booting of a node. We set some location pins, which can be read out via the BMC. When a node boots, we read out these pins and compare them with a global setting. If the pins are known we continue, otherwise we note assign the right IP address to the given location pin and MAC address. Now we have to restart the node. The whole process can be sketched with a little flow chart. Overall the driver and firmware software has been designed to allow any possible form of communication to happen with maximum performance and the least error rate. This also motivated us to create custom communication libraries. Communication is important in the field of HPC. Even the simplest kind of programs are distributed. The standard framework to use is MPI. Since the classic version of Moore's law has come to an end we also need to think about simultaneous multi-threading (SMT). This is usually handled by OpenMP. With GPUs and co-processors we also may want to think about offloading. Finally we have quite powerful cores, which implement the basics of vector-machines most popular the 70 s. The SIMD capabilities also need to be used. At this point we have 4 levels (distribute, offload, multi-thread, vectorize). Also these levels may be different for different nodes. Heterogeneous computing is coming. We need to care a lot about communication between our nodes. The communication may be over PCIe, or via the InfiniBand network. It may also occur between workers on the same co-processor. Obviously it is important to distinguish between the different connections. For instance if two co-processors are in the same node we should ensure they communicate with each other more often than with other co-processors, which are not in the same node. Additionally we need to use the dual-port of the Connect-IB card very efficiently. We therefore concluded that it is definitely required to provide our own MPI wrapper (over, e.g., Intel MPI), which uses IBverbs directly. This has been supported by statements from Intel. In the release notes of MPSS 3.2 there is a comment that, libibumad not avail on mic. Status: wontfix In principle MPI implementations can take topologies (such as our hyper-crossbar topology using 2 ports per node) into account. However, this usually requires libibumad. Since this library is not available for the KNC, we better provide our own implementation. The library would not only be aware of our topology, but would use it in the most efficient manner with the least possible overhead. It is definitely good practice to have a really efficient allreduce routine. Such a routine is only possible with the best possible topology information. Effective synchronization methods are also important. Initially we should try to avoid synchronization as best as we can. But sometimes there are not robust or reliable ways to come up with a lock-free alternative to our current algorithm. This is the point, where we demand synchronization with the least possible overhead. For instance in MPI calls we prefer asynchronous send operations and possibly synchronous receive. However, depending on the case we might have optimized our algorithm in such a way that we can ensure a working application without requiring any wait or lock operations. Before we should start to think about optimizing for a specific architecture, or other special implementation optimizations, we should find the best possible algorithm. An optimal algorithm cannot be beaten by smart implementations in general. In a very special scenario the machine-specific implementation using a worse algorithm may do better, but the effort and the scalability is certainly limited. Also such a highly specific optimization is not very portable. We definitely want to do both - using the best algorithm and then squeeze out the last possible optimizations by taking care of machine specific properties. A good example is our implementation of a synchronization barrier. Naively we can just use a combination of mutexes to fulfill a full barrier. But in general a mutex is a very slow synchronization mechanism. There are multiple types of barriers. We have a centralized barrier, which scales linearly. A centralized barrier may be good for a few threads, if all share the same processing unit, but is definitely slow for over a hundred threads distributed over more than fifty cores. There are two more effective kinds of barriers. A dissemination barrier (sometimes called butterfly barrier) groups threads into different groups, preventing all-to-all communication. Similarly a tournament (or tree) barrier forms groups of two with a tree like structure. The general idea is illustrated in the next image. We have two stages, play and release. In the first stage the tournament is played, where every contestant (thread) has to wait for its opponent to arrive before possibly advancing to the next round. The "loser" of each round has to wait until it has been released. The winner needs to release the loser later. Only the champion of the tournament switches the tournament stage from play to release. As a starting point we may implement the algorithm as simple as possible. Hence no device specific optimizations. No artificial padding or special instructions. We create a structure that embodies the specific information for the barrier. The barrier has to carry information regarding each round. Possible pairings and upcoming result. The whole tournament is essentially pre-determined. The result can take 5 different states: winner, loser, champion, dropout and bye. The first two states are pretty obvious. A champion is a winner that won't advance, but rather triggers the state change. It could be seen as a winner with additional responsibilities. Dropout is a placeholder for a non-existing position. Similarly bye: It means that the winner has not to notify (release) a potential loser, as there is none. The following code shows a pretty standard implementation in C++. The Barrier class has to be initialized with the number of threads to participate in the barrier. The await method has to be called from each thread with its own unique identifier (ranging from 0 to N - 1, where N is the number of threads). The provided version uses a static array that covers at most 256 threads. Dynamic memory management is possible as well, but we should consider alignment and padding (all platform specific). Barrier await Without further ado let's take a look at the sample portable implementation of a tournament barrier. #include <cmath> class Barrier final { public: Barrier(unsigned int threads) : threads(threads), rounds(std::ceil(std::log(threads) / std::log(2u))) { for (auto thread = 0u; thread < threads; ++thread) { for (auto rnd = 0u; rnd <= rounds; ++rnd) { array[thread][rnd].previous = false; array[thread][rnd].flag = false; array[thread][rnd].role = BarrierRole::Dropout; array[thread][rnd].threadId = thread; array[thread][rnd].roundNum = rnd; array[thread][rnd].opponent = nullptr; } } for (auto thread = 0u; thread < threads; ++thread) { auto current = 1u; auto previous = 1u; for (auto rnd = 0u; rnd <= rounds; ++rnd) { const auto left = thread - previous; const auto right = thread + previous; if (rnd > 0u) { const auto temp = thread % current; if (temp == 0u && right < threads && current < threads) { array[thread][rnd].role = BarrierRole::Winner; array[thread][rnd].opponent = &array[right][rnd].flag; } if (temp == 0u && right >= threads) { array[thread][rnd].role = BarrierRole::Bye; } if (temp == previous) { array[thread][rnd].role = BarrierRole::Loser; array[thread][rnd].opponent = &array[left][rnd].flag; } if (thread == 0u && current >= threads) { array[thread][rnd].role = BarrierRole::Champion; array[thread][rnd].opponent = &array[right][rnd].flag; } } previous = current; current *= 2u; } } } bool status(unsigned int pid) const { return array[pid][0].previous; } void await(unsigned int pid) { if (threads > 1u) { auto sense = !status(pid); block(pid, sense); status(pid, sense); } } protected: void status(unsigned int pid, bool value) { array[pid][0].previous = value; } void block(unsigned int vpid, const bool value) { auto rnd = 0u; //go sleep while (true) { if (array[vpid][rnd].role == BarrierRole::Loser) { *(array[vpid][rnd].opponent) = value; while(array[vpid][rnd].flag != value); break; } if (array[vpid][rnd].role == BarrierRole::Winner) { while(array[vpid][rnd].flag != value); } if (array[vpid][rnd].role == BarrierRole::Champion) { while(array[vpid][rnd].flag != value); *(array[vpid][rnd].opponent) = value; break; } if (rnd < rounds) rnd++; } //wake up while (rnd > 0u) { rnd--; if (array[vpid][rnd].role == BarrierRole::Winner) *(array[vpid][rnd].opponent) = value; if (array[vpid][rnd].role == BarrierRole::Dropout) break; } } enum class BarrierRole : unsigned int { Winner = 0u, Loser = 1u, Bye = 2u, Champion = 3u, Dropout = 4u }; struct BarrierRound { BarrierRole role; volatile bool* opponent; volatile bool flag; int threadId; int roundNum; bool previous; }; } private: BarrierRound array[256][16]; unsigned int threads; unsigned int rounds; }; A barrier implementation (especially a more platform specific one) needs to be carefully benchmarked. Micro benchmarks may lead to false conclusions and have to be executed with great care. Nevertheless they are needed to yield useful information, which is used to make decisions on how to implement certain things in our code. Things like which barrier to use. Things like what runtime (e.g., OpenMP, Cilk Plus, TBB, if any) should be used for managing the threads on a single co-processor. A good measure for the performance of a barrier depending on the number of threads, is given by placing some dummy barriers up front, measuring the current time for each thread, taking the barrier and measuring the time again. The latest thread to arrive at the barrier and the latest thread to leave the barrier determine the overhead of the barrier. An analogy would be a bus ride with a group. The group represents the threads, the bus ride (or bus) the barrier. Once everyone in the group entered the bus we are ready to go. Therefore the start time of the bus is determined only by the last one to enter. Similarly once the last person of the group leaves the bus, we've reached our destination. We are good to go again. Hence the last one to leave gives us the release time. There are several other benchmarks that needed to be done. We won't go into details of every benchmark and every performance improvement that may be possible. Instead we will have a quick tour on the benchmark system. It is a C++ application, which runs one or more little sub-programs. The program can be build with whatever multi-processing framework. Currently it supports 5 targets: Not every sub-program contains a benchmark for each target. The reason for having multiple targets instead of a single program containing all five frameworks is simple: Three of them (OpenMP, Cilk Plus and TBB) come with their own runtime. Sometimes these runtimes interfere with each other. Results may be unreliable in such cases, and may favor one or the other framework, which was able to win the race for the thread resources. Even though the framework is selected at compile-time via defined symbols, the benchmark application also comes with a runtime component. The runtime determination is necessary to handle some special edge cases. It has been implemented using C++11 enum classes. The following code snippet shows the enumeration, which is basically a bit flags enumeration. Hence we also define two useful functions, a bitwise-or and a bitwise-and operator. enum class Threading : unsigned char { None = 1, OpenMP = 2, Tbb = 4, CilkPlus = 8, PThreads = 16, All = 30 }; inline Threading operator & (Threading lhs, Threading rhs) { using T = std::underlying_type<Threading>::type; return (Threading)(static_cast<T>(lhs) & static_cast<T>(rhs)); } inline Threading operator | (Threading lhs, Threading rhs) { using T = std::underlying_type<Threading>::type; return (Threading)(static_cast<T>(lhs) | static_cast<T>(rhs)); } The benchmark implementation also does some other things automatically. It repeats measurements to increase the accuracy. It calculates the statistics. It writes output. All those things need to be handled correctly. A set of scripts is complementary to the benchmark application. For example there is a script that runs the same benchmark(s) for all possible frameworks. Another script would allocate a job on the queueing system and then run the desired benchmarks for every framework on every node. That way we gather a lot more data, which results in better statistics. The base class of each Benchmark is shown next. We only outline the header, as the real implementation does not really reveal particularly interesting details. Benchmark class Benchmark { public: Benchmark(const std::string& name, const std::string& desc, const Threading supports, const Config& config); virtual ~Benchmark(); std::string name() const noexcept; std::string description() const noexcept; std::string output() const noexcept; bool is_supported() const noexcept; bool supports(Threading threading) const noexcept; unsigned int repeats() const noexcept; unsigned int warmups() const noexcept; virtual void run() const = 0; protected: void warmup(std::function<void(void)> experiment) const noexcept; Statistic<double> repeat(std::function<double(void)> experiment) const noexcept; Statistic<double> warmup_and_repeat(std::function<double(void)> experiment) const noexcept; inline Statistic<double> measure(std::function<void(void)> experiment) const noexcept { warmup(experiment); return repeat([&](){ const auto start = rdtsc(); experiment(); const auto end = rdtsc(); return static_cast<double>(end - start); }); } template<typename... Ts> inline Statistic<double> measure(const std::tuple<Ts...>& args, std::function<void(const std::tuple<Ts...>&)> experiment) const noexcept { warumup([&](){ experiment(args); }); return measure([&](){ return experiment(args); }); } private: std::string _name; std::string _desc; Threading _supports; std::string _output; unsigned int _warmups; unsigned int _repeats; }; How are the various benchmarks added to the application? There is a simple factory that is loosely coupled to the idea of a Benchmark. The benchmarks are added via a template method, requiring a certain signature for the constructor. Nevertheless, the following code performs some magic on a (unknown for the header) proper benchmarks container (factory). Since add() is a template method, we need to provide the information to the compiler. We therefore call a template add<T>() to ensure no compilation errors. add() template add<T>() #pragma once #include "tests/barriers.h" // ... template<typename T> static void setup_benchmarks(T& benchmarks) { benchmarks.template add<Barriers>(); // ... } Next to the micro-benchmarks we are also very interested in temperature benchmarks. Here we need to come up with another application that does temperature measurements and correlates them to some load on the system. Overall the architecture required by the temperature benchmark is illustrated below. On the login node we run the distribution script, which performs the allocation and eventually calls a runner script. The runner is invoked via an ssh call on a specific node. We will connect to N nodes in parallel, executing the same script on each one. The runner script is, however, only running the measurement application, which spawns some threads. A single thread is dedicated to measurements. It collects all possible temperature data, e.g., the temperature from the various KNCs and the host temperature (for instance CPUs). Then there are four threads for each KNC. Each one is running the same script with a different argument. The script only connects to the KNC specified in the argument, running an executable on the co-processor. We use two different applications for this benchmark. We have an idle test that just runs the sleep application. Additionally we have a stress test that runs the appropriate XHPL (High Performance Linpack) program. The former is run for an hour, the latter for 10 minutes. In the end we have gathered important temperature distribution characteristics of our system. Unfortunately I cannot offer a test account for the system. It is exclusively reserved for academic purposes, specifically for the research collaboration that jointly receives the funding from the German Research Foundation. Usually researchers login to the system via one of many login nodes. The only requirement is an account (and assigned computing time, but usually that is not a problem). The monitoring system also informs users of the status for the login nodes. They should never be down, but one or the other could be offline due to maintenance reasons. The whole project lasted almost 3 years. Initially it was planned to release the system in 2014, but our first industry partner got into trouble with the American government. Due to political reasons the Russian company was added to a trading black list. Therefore we had to end the relationship. Otherwise the future of the whole project would have been endangered. Nevertheless, we managed to find a new partner in time, develop a new design and finish the project. In the end the whole intermezzo cost us over a year. The lesson is easy: one never knows what is up coming, but having a good plan B (and possibly also plan C) may be crucial. How does the prototype rack look like? This is a picture taken at its final destination, the university's computing center. Typically, the lifetime of supercomputers is between 5 and 10 years. Major advances in computing efficiency render a machine, even though it may be still among the fastest computers in the world, inefficient pretty fast. Therefore maintenance costs will most likely determine the end for most systems. Also new applications and improved infrastructure components, such as faster network interconnects, may scream for an upgrade. For us the future is QPACE 3. It will be based on the successor of the KNC and should be a magnitude faster. Furthermore it should also quadruple the efficiency compared with QPACE.
https://codeproject.freetls.fastly.net/Articles/1091462/Construction-of-a-Supercomputer-Architecture-and-D?PageFlow=Fluid
CC-MAIN-2022-40
refinedweb
14,527
58.58
rera SDE – Microsoft. *I did not write this bug…although I have written my share of bugs 😐 This blows me away! I really hope this is not indicative of the level of testing Microsoft has given the compact framework? IMHO the biggest failing of Microsoft’s Mobile operating system(s) is the lack of robustness caused by poor testing! <sarcasm on> Another negative comment by a Windows Mobile developer. I guess it’s time to shut off the comments. <sarcasm off> I’ve seen so much openness out of Microsoft since this blogging initiative started, but this blog has been particularly bad about this. You shut off the comments to the Windows Mobile 5.0 Security Model FAQ post when the comments turned negative () without an actual response to any of the developer complaints. Why should this be any different? I have been a Windows Mobile developer since Pocket PC 2000 (look it up on my site), and I think it’s a great operating system, but I think the Windows Mobile team does the worst job of any Microsoft team that I’ve dealt with in dealing with and addressing user feedback. Since this bug affect VB.NET too, the solution also works in vb.net myphone.Talk("+51739787799 ", True) Shane, we did block comments on the thread you pointed to, but we shouldn’t have. We’re sorry and it won’t happen again. Mike Calligaro Error 1 The type or namespace name ‘WindowsMobile’ does not exist in the namespace ‘Microsoft’ (are you missing an assembly reference?) C:Documents and Settings…Visual Studio 2005ProjectsDeviceApplication7DeviceApplication7Form1.cs 8 17 DeviceApplication7 — using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using Microsoft.WindowsMobile.Telephony; namespace DeviceApplication7 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { Phone myPhone = new Microsoft.WindowsMobile.Telephony.Phone(); myPhone.Talk("555-0100 "); } } } Well, this is interesting, yet the real silver bullet would be if one could click on a phone number in IE in Windows Mobile and let the phone automatically dial this number. Or is this already implemented and just needs activation? For p.janowski: Right click on your project and add a Microsoft.WindowsMobile.Telephony reference to your mobile application. To see it as one of the references, your application should be a Windows Mobile 5.0 .Net CF application. Good luck. How can i determine if the phone is currently muted using the Telephony namespace? Hi Luis, I wonder if you can help me with my phone volume problem? I have an O2 Atom and the ear piece volume is way too low even when turned fully up. Everyone comlains about it and we have reported it to O2 but 3 ROM revisions later and its still the same. Is there a registry hack that i can use to increase the maximum volume level of the ear piece? Many thanx if you can spare the time to help me. Regards Chris!
https://blogs.msdn.microsoft.com/windowsmobile/2006/01/03/using-microsoft-windowsmobile-telephony/
CC-MAIN-2017-22
refinedweb
506
58.69
Tell us what you think of the site. Hello! Tell me please is there any method to get access to manipulators (translation, rotation, scale), because sometimes i want to scale models like manipulator does .. The main problem that when using ‘GetDefinitionScaleVector’ for the actor, it scales symmetrical both hands, and there is now ‘symmetry edit OFF’ method in python.. So i need to scale the hands somehow separately and that can be done in some ways - ‘GeometryScaling’, which is not what i want, ‘SetVector’, which very strangely do absolutely nothing in this case (actor, active = False). And the manipulator ‘Scale’ does all what i need, but i can’t get a point how i can use it by python. Any ideas? Monkey I’ve not looked into your problem, but here is a suggested starting point. Create a scene in MotionBuilder with an actor with the hands scaled how you ideally want them then save it out in .ascii format. If you then open up the .ascii scene in a text editor you should be able to see the names of the exact properties required to achieve your effect. This is a useful ‘trick’ when trying to replicate anything you can see in MotionBuilder but cannot find property names etc. Hi! thanl you for your reply, that’s interesting trick but for my ask i don’t know how to use this data.. i can’t scale or rotate actorbones separately, (like with symmetry-edit-button off). When i look ASCII file i see that some nodes or skeletonstates are different, indeed: (I scaled one bone by manipulator) .... NODESCALE: 1,1,1 NODESCALE: 1,1,1 NODESCALE: 1.82418444497641,1,1 NODESCALE: 1,1,1 NODESCALE: 1,1,1.... ..... SKELETONSTATE: { SKELETONNODESTATE: 1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1 SKELETONNODESTATE: 1,0,0,0,0,1,0,0,0,0,1,0,0,97.2,-7.3,1 SKELETONNODESTATE: 1,0,0,0,0,1,0,0,0,0,1,0,9.6,93.6,0,1..... and so on.. and no idea how to achieve this result by python.. Yeah, I think you might be stuck. Someone else might be able to come up with an answer for you but I tried quite a few ways of acheiving what you want and failed I’m afraid. Even getting the matrix for the object, altering the scaling values and setting it back on the object doesn’t do anything:( doh - I should have read your first post:) This isn’t going to be much use to you I think. ------- ok - figured it out to some extent, you need to look at the actor not the actual geometry object. I’ve not worked out how to turn off the symmetrical operation - in this example both wrists will update but it’s a start. Look up the FBSkeletonNodeId class for more info on different body parts. Try this:- from pyfbsdk import * # create actoractor = FBActor("Actor") # get scale value of right wrist and print outscale_value = FBVector3d() actor.GetDefinitionScaleVector(FBSkeletonNodeId.kFBSkeletonRightWristIndex, scale_value)print scale_value # set scale value to be (1, 2, 3) on right wristscale_value = FBVector3d(1.0, 2.0, 3.0) actor.SetDefinitionScaleVector(FBSkeletonNodeId.kFBSkeletonRightWristIndex, scale_value) thank you, Monkey - yes, the main problem is that there i no symmetry-off function or smth like this. i’m trying to transfrom actor to markers, all the scales are ok, but when i’m trying for example to rotate hands they do it symmetrically so i don’t have any idea how to solve this problem..
http://area.autodesk.com/forum/autodesk-motionbuilder/python/manipulators-scale/page-last/
crawl-003
refinedweb
599
62.07
- NAME - SYNOPSIS - DESCRIPTION - WHY MOO EXISTS - MOO AND MOOSE - MOO AND CLASS::XSACCESSOR - MOO VERSUS ANY::MOOSE - PUBLIC METHODS - LIFECYCLE METHODS - IMPORTED SUBROUTINES - SUB QUOTE AWARE - CLEANING UP IMPORTS - INCOMPATIBILITIES WITH MOOSE - SUPPORT - AUTHOR - CONTRIBUTORS - LICENSE NAME Moo - Minimalist Object Orientation (with Moose compatibility) SYNOPSIS package Cat::Food; use Moo; use strictures 2; use namespace::clean;; DESCRIPTION to provide full interoperability via the metaclass inflation capabilities described in "MOO AND MOOSE". For a full list of the minor differences between Moose and Moo's surface syntax, see "INCOMPATIBILITIES WITH MOOSE". WHY MOO EXISTS If you want a full object system with a rich Metaprotocol, Moose is already wonderful.: - a command line or CGI script where fast startup is essential - - code designed to be deployed as a single file via App::FatPacker - - a CPAN module that may be used by others in the above situations -. MOO AND MOOSE. MOO AND CLASS::XSACCESSOR If a new enough version of Class::XSAccessor is available, it will be used to generate simple accessors, readers, and writers for better performance. Simple accessors are those without lazy defaults, type checks/coercions, or triggers. Simple readers are those without lazy defaults. Readers and writers generated by Class::XSAccessor will behave slightly differently: they will reject attempts to call them with the incorrect number of parameters. MOO VERSUS ANY::MOOSE. PUBLIC METHODS Moo provides several methods to any class using it. new Foo::Bar->new( attr1 => 3 ); or Foo::Bar->new({ attr1 => 3 }); The constructor for the class. By default it will accept attributes either as a hashref, or a list of key value pairs. This can be customized with the "BUILDARGS" method. does if ($foo->does('Some::Role1')) { ... } Returns true if the object composes in the passed role. DOES if ($foo->DOES('Some::Role1') || $foo->DOES('Some::Class1')) { ... } Similar to "does", but will also return true for both composed roles and superclasses. meta my $meta = Foo::Bar->meta; my @methods = $meta-. LIFECYCLE METHODS. FOREIGNBUILDARGS sub FOREIGNBUILDARGS { my ( $class, $options ) = @_; return $options->{foo}; }. BUILD. DEMOLISH. IMPORTED SUBROUTINES extends extends 'Parent::Class'; Declares a base class. Multiple superclasses can be passed for multiple inheritance but please consider using roles instead. The class will be loaded but no errors will be triggered if the class can't be found and there are already subs in the class. Calling extends more than once will REPLACE your superclasses, not add to them like 'use base' would. with with 'Some::Role1'; or with 'Some::Role1', 'Some::Role2'; Composes one or more Moo::Role (or Role::Tiny) roles into the current class. An error will be raised if these roles cannot be composed because they have conflicting method definitions. The roles will be loaded using the same mechanism as extends uses. haswpor rw. rostands for "read-only" and generates an accessor that dies if you attempt to write to it - i.e. a getter only - by defaulting readerto the name of the attribute. lazygenerates a reader like ro, but also sets lazyto 1 and builderto _build_${attribute_name}to allow on-demand generated attributes. This feature was my attempt to fix my incompetence when originally designing lazy_build, and is also implemented by MooseX::AttributeShortcuts. There is, however, nothing to stop you using lazyand builderyourself with rwpor rw- it's just that this isn't generally a good idea so we don't provide a shortcut for it. rwpstands for "read-write protected" and generates a reader like ro, but also sets writerto _set_${attribute_name}for attributes that are designed to be written from inside of the class, but read-only from outside. This feature comes from MooseX::AttributeShortcuts. rwstands for "read-write" and generates a normal getter/setter by defaulting the accessortoais discarded. Only if the sub dies does type validation fail. Since Moo does not run the isacheck before coerceif a coercion subroutine has been supplied, isachecks are not structural to your code and can, if desired, be omitted on non-debug builds (although if this results in an uncaught bug causing your program to break, the Moo authors guarantee nothing except that you get to keep both halves). If you want Moose compatible or MooseX::Types style named types, look at Type::Tiny. To cause your isaentries. coerce Takes a coderef which is meant to coerce the attribute. The basic idea is to do something like the following: coerce => sub { $_[0] % 2 ? $_[0] : $_[0] + 1 }, Note that Moo will always execute your coercion: this is to permit isaentries to be used purely for bug trapping, whereas coercions are always structural to your code. We do, however, apply any supplied isacheck after the coercion has run to ensure that it returned a valid value. If the isaoption is a blessed object providing a coerceor coercionmethod, then the coerceoption may be set to just 1. handles Takes a string handles => 'RobotRole' Where RobotRoleis a. The for that attribute was supplied to the constructor. Alternatively, if the attribute is lazy, defaultexecutes. NOTE: If the attribute is lazy, it will be regenerated from defaultor buildert. init_arg Takes the name of the key to look for at instantiation time of the object. A common use of this is to make an underscored attribute have a non-underscored initialization name. undefmeans before foo => sub { ... }; See "before method(s) => sub { ... };" in Class::Method::Modifiers for full documentation. around around foo => sub { ... }; See "around method(s) => sub { ... };" in Class::Method::Modifiers for full documentation. after after foo => sub { ... }; See "after method(s) => sub { ... };" in Class::Method::Modifiers for full documentation. SUB QUOTE AWARE "quote_sub" in Sub::Quote allows us to create coderefs that are "inlineable," giving us a handy, XS-free speed boost. Any option that is Sub::Quote aware can take advantage of this. To do this, you can write use Sub::Quote; use Moo; use namespace::clean;. CLEANING UP IMPORTS. INCOMPATIBILITIES WITH MOOSE); need Moose - Moo is strict and warnings, in a similar way to Moose. The authors recommend the use of strictures, which enables FATAL warnings, and several extra pragmas when used in development: indirect, multidimensional, and bareword::filehandles.; use strictures 2;. SUPPORT Users' IRC: #moose on irc.perl.org (click for instant chatroom login) Development and contribution IRC: #web-simple on irc.perl.org (click for instant chatroom login) Bugtracker: Git repository: git://github.com/moose/Moo.git Git browser:>. LICENSE This library is free software and may be distributed under the same terms as perl itself. See.
http://web-stage.metacpan.org/pod/release/HAARG/Moo-2.003004/lib/Moo.pm
CC-MAIN-2019-18
refinedweb
1,075
57.27
Hi there, recently i´ve encountered some problems pasting code from Rhino’s built in python editor. The code looks ok in the preview pane but once i clicked on send, it somehow is wrangled and a few line breaks are not preserved. So here is a python test, the code is straight from the built in python editor, the code contains blank lines as well: import rhinoscriptsyntax as rs import os def SaveAsRhinoFile(name="Default.3dm"): filename = name folder = "D:/Temp/" path = os.path.abspath(folder + filename) cmd = "_-SaveAs " + chr(34) + path + chr(34) rs.Command(cmd, True) if __name__=="__main__": SaveAsRhinoFile("MyTestFileName.3dm") To make shure that the line breaks are there, here is a screengrab of another editor which is able to display linebreaks: Along with this i would like to express one wish to make copying code examples easier. eg. discourse might provide a button under code examples to copy the content to the clipboard. I´ve found that selecting it and manually copying the code causes unexpected intendation once pasted into a local python editor. thanks, c.
https://discourse.mcneel.com/t/code-display-problem-and-copy-wish/23937
CC-MAIN-2020-45
refinedweb
183
61.36
12 March 2012 21:36 [Source: ICIS news] HOUSTON (ICIS)--?xml:namespace> Exports decreased to 15,509 tonnes from 22,749 tonnes in January 2011. The decline was the result of weaker foreign demand during the period, which lengthened domestic supply, market sources said. Most IPA shipped abroad went to Collectively, those top five destinations took 87% of January’s exports. Month on month, exports fell by 13% from 17,792 tonnes in December, the ITC said. Imports came primarily from US prices for IPA were 73-75 cents/lb ($1,609-1,653/tonne, €1,223-1,256/tonne), as assessed by ICIS. US IPA producers include Shell Chemicals, Dow Chemical, LyondellBasell and ExxonMobil. (
http://www.icis.com/Articles/2012/03/12/9540864/us-january-ipa-exports-fall-32-year-on-year.html
CC-MAIN-2014-42
refinedweb
115
65.73
For reason beyond my control, I must parse a xml without xmlns tag. I believe that it is a bad practice because it miss namespace control. Anyway, I must program for such scenario. What could be the best way to face it? I try setNamespaceAware = false and it does work for file without xmlns tag but it seems that it doesn't work properly when reading xml with xmlns tag. Plus this, I guess that by setting any awareness configuration to false isn't a good practice. I read some article telling to add the xmlns to the file but I am in doubt if it might be a good option (How to add Namespaces programmatically to the XML parser ? (XML forum at JavaRanch)). Any comment will be appreciated. javax.xml.parsers.DocumentBuilderFactory fac = new org.apache.xerces.jaxp.DocumentBuilderFactoryImpl(); fac.setNamespaceAware(true); org.w3c.dom.Document d = null; javax.xml.parsers.DocumentBuilder builder = fac.newDocumentBuilder(); d = builder.parse("C:/my_folder/my_file.xml"); <?xml version="1.0" encoding="UTF-8"?> <c:de There is a question I posted before that is related to this subject but with very different question (DocumentBuilder.parse raises the error "The prefix "c" for element "c:de" is not bound."). P.s.: I posted the same question in other forum ()
http://www.javaprogrammingforums.com/whats-wrong-my-code/37826-what-best-way-parse-xml-file-without-xmlns.html
CC-MAIN-2015-18
refinedweb
215
59.8
Gavin wrote: "Most applications are not structured for arbitrary embedding." Absolutely right, but most documents ARE written to be processed by many applications. Documents that use namespaces provide a single, simple way for all applications to pick out exactly what they are to process. Using your approach every application has to have its own little bit of AI. Yours, John F Schlesinger SysCore Solutions 212 619 5200 x 219 917 886 5895 Mobile -----Original Message----- From: Gavin Thomas Nicol [mailto:gtn@...] Sent: Tuesday, July 11, 2000 1:48 PM To: 'XML-Dev Mailing list' Subject: RE: power uses of XML vs. simple uses of XML > "What kind of application can handle this?" > > Let me quote from Microsoft's white paper on microsoft.net: My point was that there are but a few applications that can handle this. Most applications are not structured for arbitrary embedding.
http://www.oxygenxml.com/archives/xml-dev/200007/msg00391.html
crawl-002
refinedweb
145
65.73
GroupCollection.Item Property (Int32) Enables access to a member of the collection by integer index. Assembly: System (in System.dll) Parameters - groupnum - Type: System.Int32 The zero-based index of the collection member to be retrieved. Property ValueType: System.Text.RegularExpressions.Group The member of the collection specified by groupnum. You can determine the number of items in the collection by retrieving the value of the Count property. Valid values for the groupnum parameter range from 0 to one less than the number of items in the collection. The GroupCollection object returned by the Match.Groups property always has at least one member. If the regular expression engine cannot find any matches in a particular input string, the single Group object in the collection has its Group.Success property set to false and its Group.Value property set to String.Empty. If groupnum is not the index of a member of the collection, or if groupnum is the index of a capturing group that has not been matched in the input string, the method returns a Group object whose Group.Success property is false and whose Group.Value property is String.Empty. The following example defines a regular expression that consists of two numbered groups. The first group captures one or more consecutive digits. The second group matches a single character. Because the regular expression engine looks for zero or one occurrence of the first group, it does not always find a match even if the regular expression match is successful. The example then illustrates the result when the Item[Int32] property is used to retrieve an unmatched group, a matched group, and a group that is not defined in the regular expression. The example defines a regular expression pattern (\d+)*(\w)\2, which is interpreted as shown in the following table. using System; using System.Text.RegularExpressions; public class Example { public static void Main() { string pattern = @"(\d+)*(\w)\2"; string input = "AA"; Match match = Regex.Match(input, pattern); // Get the first named group. Group group1 = match.Groups[1]; Console.WriteLine("Group 1 value: {0}", group1.Success ? group1.Value : "Empty"); // Get the second named group. Group group2 = match.Groups[2]; Console.WriteLine("Group 2 value: {0}", group2.Success ? group2.Value : "Empty"); // Get a non-existent group. Group group3 = match.Groups[3]; Console.WriteLine("Group 3 value: {0}", group3.Success ? group3.Value : "Empty"); } } // The example displays the following output: // Group 1 value: Empty // Group 2 value: A // Group 3 value:.
https://msdn.microsoft.com/en-us/library/25kfx75y(v=vs.100).aspx
CC-MAIN-2018-22
refinedweb
409
52.05
In the old days of MS-DOS and Windows 95 you could identify any volume by its drive letter. Things were so easy. But not anymore after we shipped Windows 2000. In W2K you could also have mount points - volumes mounted under certain directories. This feature is similar with the single-root file system namespaces in Unix. The important thing is that mount points behave like normal directories - you can move them around or rename them from explorer. You won't see any significant differences when working with them - a volume mount point appears just as a normal folder. To quickly play with mount points, you can use the mountvol command. This is a simple command-line utility that allows us to create, list or delete mount points. Under the cover, this utility is using the SetVolumeMountPoint API to create a mount point. Mount points are actually implemented using the NTFS reparse point technology. This allows you to assign a directory to any volume in the system, even to a removable one (so you could create a c:\cdrom folder for your CDROM). But at this point you might wonder how we can identify a volume in the system, given that we have so many variations of it? The key concept is the volume name, a special persistent name assigned to each volume in the system. The format of the volume name is \\?\Volume{GUID}\ and you can see the volume names for every volume in your system if you just run MOUNTVOL without parameters. What is a volume name? A volume name gets assigned first time the OS sees the volume in the system. There is an internal OS component called Mount point Manager (in short, MpM) implemented in mountmgr.sys. This driver maintains a persistent database of volumes in the system. In the case of basic disks, the database associates a volume name with a certain associated with a certain partition on a given disk. Since the disks are reliably identified after reboot based on their signatures, MpM can reliably "discover" each volume during boot, and assign it the correct volume name. A volume name is a MS-DOS device, as you could probably see from the fact that it has a \\?\ prefix. MS-DOS devices are nothing special. They are simply symbolic links that reside in the \\?\ Object Manager namespace, which usually point out to a real device located in the \Device namespace. What makes MS-DOS devices special is that they can be easily accessed from any process in user mode. In other words, you could simply call CreateFile on the volume GUID name, if you need to. You cannot call CreateFile from user mode on a regular device in the form of "\Device\HarrdiskVolume23". As I said above, being an MS-DOS device, the volume name is just a symbolic link which points back to a real volume device, usually in the form of \Device\HarddiskVolume23. There is another example of an MS-DOS device which is the drive letter. If your volume has the C: drive letter, you will then have a symbolic link called \\?\C: which points to a real volume in the \Device\HarddiskVolumeXX format. The real" device mentioned above (which I'll call it the "legacy device") is in the form of \Device\HarddiskVolumeXX and is implemented by another component in the operating system called the Volume Manager. There are two volume managers in the OS - the basic volume manager implemented by ftdisk.sys, and the dynamic volume manager implemented in dmio.sys. These are bus drivers that create volume devices as necessary, for example when you create a partition on a new disk. Wow! So we already have at least three types of devices for a volume: the drive letter MS-DOS device name, the volume name and the volume device name. And that's not all - you can create as many MS-DOS devices you want for a volume through the DefineDosDevice API. But note that these device names that you create with DefineDosDevice are not persistent - they will go away after a reboot. Translations Now, given a certain drive letter, or a volume mount point, how I can get the underlying legacy device? The trick is to use the QueryDosDevice API. This API must be used in the following way. First, let's say that your DOS device is \\?\Volume{4bcddd95-9e9e-11d6-b7f4-806e6f6e6963}\. You strip out the \\?\ prefix and the terminating backslash (if any). This way you get to the real DOS device which is "Volume{4bcddd95-9e9e-11d6-b7f4-806e6f6e6963}". Same thing for a drive letter. There, the real DOS device is "C:". Now, you feed this device name as the first parameter to QueryDosDevice and you obtain the legacy volume device. Done! But in practice, you rarely need to deal with legacy devices. It is much more useful to convert a drive letter or a mount point path to a volume name. The operating system provides a convenient function that does just this, called GetVolumeNameForVolumeMountPoint. By the way, I haven't verified, but this must be probably the Win32 API with the longest name! One more note - the GetVolumeNameForVolumeMountPoint API works only on a path which is either a volume drive letter or the root of a volume mount point. What if you want to get the underlying volume root path for a random path? You need to use the GetVolumePathName which returns the nearest volume root path for a given directory. For example if you have a path like C:\foo\bar\somefolder, and both C:\foo and C:\foo\bar are mount points, then the nearest root is C:\foo\bar. Enumerations In Windows 2000 you can enumerate all volumes in the system with FindFirstVolume/FindNextVolume. These APIs are enumerating all the underlying volumes that the MpM knows about. However, this only enumerates the volume names, not the actual drive letters and paths. Enumerating all volumes in the system can be confusing, given that the same volume might end up having both a drive letter and a mount point at the same time! For example you can have a volume mounted under C:\ and another one under C:\foo\, and another one under C:\foo\bar\. In Windows XP and Windows Server 2003, there is a new API called GetVolumePathNamesForVolumeName which allows you to enumerate all the "display names" of a volume, i.e. the drive letter (if any) and any mount points. Note that you can have volume names with no drive letters or mount points too. You can also enumerate all mount points on a certain "parent" volume using FindFirstVolumeMountPoint/FindNextVolumeMountPoint. How the OS does that? Normally, you would think that the API enumerates all the directories in that volume and find out what are the mount points. That would be terribly slow, so the OS provides an optimization. On each volume there is a NTFS stream called \$Extend\$Reparse which keeps the list of "child" volume mount points defined on a certain volume. (For more details on this stream or other predefined NTFS streams you might consult the wonderful Microsoft Windows Internals book from Dave Solomon) In practice, things can get little more complicated if you want to recursively enumerate all the directories under a certain path. Say that you want to recursively copy all files under C:\foo to C:\bar. Sounds easy, isn't it? Wrong. What if c:\foo\dir1\ is a mount point? Well, that's not a problem since as I said below, volume mount points look like normal folders to the shell so FindFirstFile/FindNextFile will have no problem enumerating their contents. But - wait - what if we have a cyclic volume mount point? In other words, c:\foo\dir1\ is the same volume as C:. In that case you can reach an infinite cycle, which will break at some point anyway since Win32 APIs will work with directory names with less than MAX_PATH (255 characters). I prepared a quick test below from which we can easily see that even the DIR command gets confused. Oops!... So things are not that easy. Consider also that you might end up in a situation where c:\foo\dir1 is one volume which contains another volume mount point named c:\foo\dir1\dir2 which contains another mount point which points back at C:, etc. Solving these enumerations the right way is not an easy task, and is left as an exercise to the reader... One more thing about FindFirstFile/FindNextFile - you can actually use these APIs to see whether you are in the context of a mount point (although, you can alternatively use the more expensive GetVolumePathNames API to do this). Just get the attributes of a directory with GetFileAttributes. If this directory has the FILE_ATTRIBUTE_REPARSE_POINT attribute, then it holds a mount point. A final observation about volume mount points is that certain file system operations don't work the right way when you have volume mount points. For example if, in explorer, you recursively restrict access on the C: to everyone except local administrators, you might think that you are safe. You are not, actually if C:\foo is a mount point, since the NTFS access control inheritance rules won't propagate across volume mount points! So, c:\foo will be widely open to everyone even if you though that you tighten the control on C:\. What you have to do is to recursively enumerate all mount points under C:\ and re-apply the security settings on all underlying volumes. Are volume names really unique? There is another quirk on volume names that might interest you. Although the GetVolumeNameForVolumeMountPoint is supposed to return the unique volume name, it doesn't work as expected in certain cases. In rare cases, a volume can end up having two volume names! This weird situation can happen if you migrate a disk from one computer to another. Another scenario is dual boot - remember that in a dual boot, the OS sees the disks of the other stopped instance of the operating system, so it "thinks" as if these disks were moved from one machine to another. So what is the actual problem? Remember the \$Extend\$Reparse NTFS stream above? That stream contains a list of (folder - volume GUID) associations. In other words, if your C:\ volume contains a mount point at C:\foo, then this stream contains an association between the "\foo" relative path and the volume GUID of the c:\foo volume. Now, let's assume that you moved both C:\ and C:\foo volumes from one machine to another. The MpM will assign a new volume GUID to the C:\foo volume. Later, when the C:\ volume also gets surfaced, the C:\foo is present in that enumeration with a different volume GUID. MpM reacts to this by assigning a new volume name to C:\foo, in order to be consistent with the \$Extend\$Reparse stream contents. Bottom line is: if you have disk/volume migrations, then you can expect multiple volume names for the same volume. But whatever it happens, there is always a unique volume name for the current boot session. You can obtain this unique name by calling GetVolumeNameForVolumeMountPoint once on your root, get the volume name, and then call GetVolumeNameForVolumeMountPoint again. This will always return the unique volume name. This trick is very useful for example when you want to check if two volume paths V1 and V2 represent the same volume or not. You get the first volume path (V1), call GetVolumeNameForVolumeMountPoint on it twice in the manner described above, and remember the returned volume name. You do the same thing on V2. In the end, you compare the volume names. If they are equal, then the two volumes are identical. Hidden volumes Although this post got rather long, I should mention one more thing. There are certain categories of volumes that are not visible to the MpM. One class is hidden volumes. These volumes have the special property that no PnP notifications are being sent on arrival. In addition, these volumes do not have volume names (because anyway MpM doesn’t know about them). Hidden volumes are used for the VSS Hardware Shadow Copies infrastructure. Another class of volumes that do not have a volume name are VOLSNAP shadow copy devices. These are devices created by the VOLSNAP.SYS driver in the form \Device\HarddiskVolumeShadowCopyXXX. These devices are not even managed by the volume manager, and no PnP volume arrival notifications are being sent on their arrival. Again, in this case, MpM won’t assign a volume name.
http://blogs.msdn.com/b/adioltean/archive/2005/04/16/408947.aspx
CC-MAIN-2014-41
refinedweb
2,106
63.49
Hi, On Tue, Mar 17, 2009 at 8:51 PM, Diego Biurrun <diego at biurrun.de> wrote: > On Tue, Mar 17, 2009 at 07:23:18PM -0300, Ramiro Polla wrote: >> >> swscale_funnycode2.diff checks for VirtualAlloc in windows.h (os/2 >> could also use something similar then). >> >> --- a/configure >> +++ b/configure > > OK > >> --- a/swscale.c >> +++ b/swscale.c >> @@ -68,6 +68,10 @@ untested special converters >> ?#define MAP_ANONYMOUS MAP_ANON >> ?#endif >> ?#endif >> +#ifdef HAVE_VIRTUALALLOC > > #if Ah, I had forgotten about the switch to #if >> +#define WIN32_LEAN_AND_MEAN > > Is this necessary? It's a good idea, like Reimar pointed out. >> @@ -2513,6 +2517,9 @@ SwsContext *sws_getContext(int srcW, int srcH, enum PixelFormat srcFormat, int d >> +#elif defined(HAVE_VIRTUALALLOC) > > Leave out the 'defined'. > >> @@ -3161,6 +3168,9 @@ void sws_freeContext(SwsContext *c){ >> +#elif defined(HAVE_VIRTUALALLOC) > > ditto Updated patch attached. Ramiro Polla -------------- next part -------------- A non-text attachment was scrubbed... Name: swscale_funnycode_3.diff Type: text/x-diff Size: 2037 bytes Desc: not available URL: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-March/073353.html
CC-MAIN-2014-23
refinedweb
156
53.07
Table of Content, grab yourself a copy of the Qt SDK or if you are on Linux the system-provided copy of Qt and a compiler and let’s get started! Baby steps: Creating a new project Let’s try making a trivial application that has a single window that shows a QLabel and a QLineEdit. To do this follow these simple steps: - Start up qt-creator. [gallery.theharmers.co.uk] - Go to File->“New File or Project…” menu entry. - Choose Qt Gui Application and choose a name for it. [gallery.theharmers.co.uk] - Enter a project name, “qt-tutorial-01”, say. [gallery.theharmers.co.uk] - Select one or more versions of Qt to target. A desktop build is fine for this tutorial. [gallery.theharmers.co.uk] - Select the base class to be QWidget (leave the class name as Widget which is the default). [gallery.theharmers.co.uk] - Check project creation options on summary and click “Finish”. [gallery.theharmers.co.uk]: [gallery.theharmers.co.uk] - Using the toolbox on the left, drag a Label onto the widget form [gallery.theharmers.co.uk] - Do similarly for a Line Edit and place it to the right of the Label. The exact position is not important. [gallery.theharmers.co.uk] - Click on the widget background so that both of your new widgets (the label and line edit) get deselected. [gallery.theharmers.co.uk] - In the toolbar at the top click on the “Lay out Horizontally” button or press Ctrl-H to add all widgets to a horizontal layout. The layout will take care of resizing your widgets for you if the parent widget’s size changes. [gallery.theharmers.co.uk] - Double click on the Label and it will switch to edit mode. Change the text to “My name is:” [gallery.theharmers.co.uk] -. [gallery.theharmers.co.uk] Now open up the widget.h file and edit it so that it looks like this: - #ifndef WIDGET_H - #define WIDGET_H - #include <QWidget> - namespace Ui { - class Widget; - } - - { - Q_OBJECT - public: - - ~Widget(); - - - private: - Ui::Widget *ui; - }; - #endif // WIDGET_H Now edit the corresponding .cpp file to look like this: - #include "widget.h" - #include "ui_widget.h" - - - ui(new Ui::Widget) - { - ui->setupUi(this); - } - Widget::~Widget() - { - delete ui; - } - - { - ui->lineEdit->setText(name); - } - - { - return ui->lineEdit->text(); - } Finally edit main.cpp to this: - #include <QtGui. [gallery.theharmers.co.uk] This is what the application looks like when it is executed: [gallery.theharmers.co.uk] As you can see the main() function is very simple. All we do is create a QApplication [doc.qt.nokia.com] and then a Widget (this is our custom widget that we layed out in designer and added custom behaviour to in code with the name() and setName() functions). We then call our custom setName function on the widget. This in turn gets a pointer to the QLineEdit [doc.qt.nokia.com] widget that we placed on the form and calls the setText() [doc.qt.nokia.com]() function of QLineEdit. Finally we show the widget and enter the event loop by calling a.exec(). Once you understand how this simple app works then you can start adding some more bells and whistles like signal/slot connections. See also - Introduction to Qt Quick for C++ Developers [developer.qt.nokia.com] - How to Use QPushButton [developer.qt.nokia.com] - How to use signals and slots [developer.qt.nokia.com]
http://qt-project.org/wiki/Basic_Qt_Programming_Tutorial
crawl-003
refinedweb
558
58.58
baz :: (MonadFree Type m) =>, f). Example This is literate Haskell! To run this example, open the source of this module and copy the whole comment block into a file with '.lhs' extension. For example, Teletype.lhs. {-# LANGUAGE DeriveFunctor, TemplateHaskell, FlexibleContexts #-} -- import Control.Monad (mfilter) import Control.Monad.Loops (unfoldM) import Control.Monad.Free (liftF, Free, iterM, MonadFree) import Control.Monad.Free.TH (makeFree) import Control.Applicative ((<$>)) import System.IO (isEOF) import Control.Exception (catch) import System.IO.Error (ioeGetErrorString) import System.Exit (exitSuccess) First, we define a data type with the primitive actions of a teleprinter. The param will stand for the next action to execute. type Error = String data Teletype param = Halt -- Abort (ignore all following instructions) | NL param -- Newline | Read (Char -> param) -- Get a character from the terminal | ReadOrEOF { onEOF :: param, onChar :: Char -> param } -- GetChar if not end of file | ReadOrError (Error -> param) (Char -> param) -- GetChar with error code | param :\^^ String -- Write a message to the terminal | (:%) param String [String] -- String interpolation deriving (Functor) By including a makeFree declaration: makeFree ''Teletype the following functions have been made available: halt :: (MonadFree Teletype m) => m a nL :: (MonadFree Teletype m) => m () read :: (MonadFree Teletype m) => m Char readOrEOF :: (MonadFree Teletype m) => m (Maybe Char) readOrError :: (MonadFree Teletype m) => m (Either Error Char) (\^^) :: (MonadFree Teletype m) => String -> m () (%) :: (MonadFree Teletype m) => String -> [String] -> m () To make use of them, we need an instance of 'MonadFree Teletype'. Since Teletype is a Functor, we can use the one provided in the Free package. type TeletypeM = Free Teletype Programs can be run in different ways. For example, we can use the system terminal through the IO monad. runTeletypeIO :: TeletypeM a -> IO a runTeletypeIO = iterM run where run :: Teletype (IO a) -> IO a run Halt = do putStrLn "This conversation can serve no purpose anymore. Goodbye." exitSuccess run (Read f) = getChar >>= f run (ReadOrEOF eof f) = isEOF >>= \b -> if b then eof else getChar >>= f run (ReadOrError ferror f) = catch (getChar >>= f) (ferror . ioeGetErrorString) run (NL rest) = putChar '\n' >> rest run (rest :\^^ str) = putStr str >> rest run ((:%) rest format tokens) = ttFormat format tokens >> rest ttFormat :: String -> [String] -> IO () ttFormat [] _ = return () ttFormat ('\\':'%':cs) tokens = putChar '%' >> ttFormat cs tokens ttFormat ('%':cs) (t:tokens) = putStr t >> ttFormat cs tokens ttFormat (c:cs) tokens = putChar c >> ttFormat cs tokens Now, we can write some helper functions: readLine :: TeletypeM String readLine = unfoldM $ mfilter (/= '\n') <$> readOrEOF And use them to interact with the user: hello :: TeletypeM () hello = do (\^^) "Hello! What's your name?"; nL name <- readLine "Nice to meet you, %." % [name]; nL halt We can transform any TeletypeM into an IO action, and run it: main :: IO () main = runTeletypeIO hello Hello! What's your name? $ Dave Nice to meet you, Dave. This conversation can serve no purpose anymore. Goodbye. When specifying DSLs in this way, we only need to define the semantics for each of the actions; the plumbing of values is taken care of by the generated monad instance.
http://hackage.haskell.org/package/free-4.6.1/docs/Control-Monad-Free-TH.html
CC-MAIN-2016-40
refinedweb
489
63.49
#include <tagUtils.h> int values2tag( char *tag, char *type, int start, int end, int strand, char *comment); This function converts a tag represented by a series of separate integer/string values to a single string of the format used by the experiment file TG line type. It performs the opposite task to the tag2values function. For the format of the tag string please see section tag2values. The type, start, end, strand and comment paramaters contain the current tag details. comment must be specified even when no comment exists, but can be specified as a blank string in this case. tag is expected to have been allocated already and no bounds checks are performed. A safe size for allocation is strlen(comment)+30. The function returns 0 for success, -1 for failure.
http://staden.sourceforge.net/scripting_manual/scripting_165.html
CC-MAIN-2014-49
refinedweb
131
63.8
csScreenTargetResult Struct ReferenceThis structure is returned by csEngineTools::FindScreenTarget(). More... #include <cstool/enginetools.h> Detailed DescriptionThis structure is returned by csEngineTools::FindScreenTarget(). Definition at line 60 of file enginetools.h. Member Data Documentation The intersection point (in world space) on the mesh where we hit. If no mesh was hit this will be set to the end point of the beam that was used for testing. Definition at line 72 of file enginetools.h. The mesh that was hit or 0 if nothing was hit. Definition at line 65 of file enginetools.h. If the accurate method of testing was used (not using collider system) then (depending on type of mesh) this might contain a polygon index that was hit. If not then this will be -1. Definition at line 79 of file enginetools.h. The documentation for this struct was generated from the following file: - cstool/enginetools.h Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/structcsScreenTargetResult.html
CC-MAIN-2014-52
refinedweb
161
60.82
Part of the deployment scripts I'm working on must programmatically munge .NET config files. And I want to be able to use xpath expressions to index into them, but xpath is more painful (for what I'm doing at least) when namespaces are involved. Sometimes config files look like this: <configuration>...</configuration> But they often also look like this: <configuration xmlns='...'>...</configuration> I needed a way to strip out the namespace before doing my queries. I found a solution and thought I'd share it here. XmlElement.SetAttribute can be used to change the namespace declaration (this is some special case code in the .NET DOM for this). But the change doesn't seem to take effect right away - I had to serialize the DOM tree and reload it, then things worked nicely. Here's a little function to do this: XmlDocument stripDocumentNamespace(XmlDocument oldDom) { // some config files have a default namespace // we are going to get rid of that to simplify our xpath expressions if (oldDom.DocumentElement.NamespaceURI.Length > 0) { oldDom.DocumentElement.SetAttribute("xmlns", ""); // must serialize and reload the DOM // before this will actually take effect XmlDocument newDom = new XmlDocument(); newDom.LoadXml(oldDom.OuterXml); return newDom; } else return oldDom; } Thank You Keith! \o/ Finally, uh, the purists out there simple doesn't seem to realise that there are cases when you can get away with quick 'n' dirty. Time is money. In my case it was a 3rd party app which rendered XML files with an invalid XMLNS reference (404). Simple solution was to ignore it, works like a charm.
http://www.pluralsight.com/community/blogs/keith/archive/2005/10/19/15714.aspx
crawl-002
refinedweb
262
57.77
On Tue, Sep 09, 2003 at 10:00:17AM -0700, Tom Lord wrote: > > > > From: Zack Brown <address@hidden> > > > Miles Bader said: > > > This is definitely true; what I wonder is, is it possible to make it > utterly > > > painless to start using arch without introducing obstacles that would > make it > > > hard for a project to grow later (for instance, the most `initially > painless' > > > tagging-method in arch might actually be `by name', but that causes > obvious > > > problems later on)? > > Watching IRC and the list, I've learned three things: > > a) screwing around with =tagging-method is, indeed, the number one > problem newbies encounter That's why the default should be something that requires no user action, but just works reasonably; and then if they want better behavior, they should be able to change it later. > c) of the huge number of new =tagging-method features planned, a > single feature, the `untagged-source' directive, solves these > problems quite well. As a refresher, the directives: > > untagged-source junk > untagged-source precious > > will mean to treat files matching the source pattern but lacking > tags as junk or precious rather than source (in implicit and > tagline) or unrecognized (in explicit). > > There's an unofficial patch floating around that hard-wires the > effect of `untagged-source junk' for explicit inventories. > > In consequence of this, I'm writing the support for that directive > _today_ and barring unforseen problems, it should be available later > today. More on that later. Does it require any user interaction at all? I think in this case, it's more important to have a transparent default that works 'well enough', than the best possible behavior. > It's already a pretty small number of commands. From the traffic > I've seen, the biggest obstacle relates to your observation that > people grab some source they already work with, and try to check that > into arch. Then they run face-first into the need for > =tagging-method customization and can't get through that wall until > they learn what `inventory' does fairly deeply (and, even then, it can > be awkward for some trees). Again, this is an argument for a tagging default that's transparent to the user. If the inventory stuff is so complicated, then just give sane defaults so the user doesn't have to worry about it. Let them 'upgrade' to more powerful controls later as they get more used to the system. > > Two things will help fix that, I think. One is the `untagged-source' > directive mentioned above. The other is a single command that let's > you select from a canned list of "pre-sets" for =tagging-method -- > perhaps just an option to `init-tree'. > > > > The second big piece of overhead is the naming conventions imposed on > > repositories (I know, I know, but let's avoid the flames please). Its > enough > > of a pain just understanding the category--branch--etc style, let alone > > remembering it later. Someone who just wants to experiment with tla > doesn't > > want to have to learn all those naming conventions. It should be > possible for > > the user to specify their own arbitrary string to be the name. tla > should > > then provide a mechanism for the user to migrate to the more featureful > > naming conventions later. If it's impossible for tla to provide its > full set > > of features with such a restricted name, then it should provide a > restricted > > set, and document those restrictions. Then the user can regain those > features > > when they migrate to the proper conventions later. > > In all honesty, while people do sometimes complain about the > namespace, I don't think it is a huge obstacle. I could be wrong. It's just an additional inconvenience. Before anyone can start to use tla, they are forced to develop a fairly deep understanding of its organizational infrastructure. While this organization may be good, it becomes part of a tla boot-up barrier that can't be avoided. By creating something like name defaults, you give the user the ability to follow the naming conventions when they're ready, but they can also experiment with the tool, without having to understand or be comfortable with the naming conventions. People I've talked to don't only find the naming conventions to be difficult to understand, but they balk at being forced to use a system that someone else thinks is best for them, when they themselves already disagree on the face of it. If you give them time to get to know the tool without imposing that control from the start, they may come to see the value of those controls as they learn how tla is used. I'm really suggesting not abandoning the naming conventions, but just providing a set of defaults that will work without user intervention, so the user will only need to know the bare minimum about those ideas before starting to play around with tla. Once they get the hang of it, they would then be able to start taking advantage of those naming features. > > > > Those two things are the big obstacles IMO. In order for a user to just > > sit down and use tla, there has to be first of all, a quick way to > start up > > a project; and second of all, a way to get around the complex repo > naming > > stuff, even if only temporarily. > > > The tagging method you talk about is another issue, but less important. > > Default tagging should just be by name, so the user doesn't have to do > > anything. Let the power users choose better tagging systems, and let > > novices have things just work reasonably. > > No, the default tagging should not be by `names'. That would > effectively disable some of the most interesting features of arch for > newbies. > > I think a decent choice for absolute beginners, especially if they are > a little bit familiar with CVS, is something like: > > explicit > untagged-source junk > > with a source regexp that matches tons of stuff and a `tree-lint' that > warns (not errors out) about things matching source but lacking tags. > > Then it's "just like CVS" in the sense that the things you add are > source, the things you don't aren't, and the mystery files give you > the moral equivalent of "? file" output. > > Dummy, me: I should have made this change weeks ago. I admit I don't really understand the feature you're talking about. But if it would require no user intervention, I'm all for it. As soon as you require that they make some choice between tagging options (or that they make modifications to files to adhere to their tagging choices), it seems to me you are requiring them to get to know all the ideas behind those options; and that's what I'm suggesting would be good to avoid. By having a default that requires nothing from them, you may not get the full shining glory of tla, but you give them more of the ability to approach that shining glory slowly, rather than overwhelming them with it all at once. Be well, Zack > > -t > -- Zack Brown
http://lists.gnu.org/archive/html/gnu-arch-users/2003-09/msg00414.html
CC-MAIN-2015-11
refinedweb
1,180
65.46
perlman gods <HR> <P> <H1><A NAME="NAME">NAME</A></H1> <P> perlop - Perl operators and precedence <P> <HR> <H1><A NAME="SYNOPSIS">SYNOPSIS</A></H1> <P> Perl operators have the following associativity and precedence, listed from highest precedence to lowest. Note that all operators borrowed from <FONT SIZE=-1>C</FONT> keep the same precedence relationship with each other, even where C's precedence is slightly screwy. (This makes learning Perl easier for <FONT SIZE=-1>C</FONT> folks.) With very few exceptions, these all operate on scalar values only, not array values. <P> <PRE> </PRE> <P> In the following sections, these operators are covered in precedence order. <P> Many operators can be overloaded for objects. See <U>the overload manpage</U><!--../lib/overload.html-->. <P> <HR> <H1><A NAME="DESCRIPTION">DESCRIPTION</A></H1> <P> <HR> <H2><A NAME="Terms_and_List_Operators_Leftwa">Terms and List Operators (Leftward)</A></H2> <P> <FONT SIZE=-1>A</FONT> <FONT SIZE=-1>TERM</FONT>. <P> In the absence of parentheses, the precedence of list operators such as [perlfunc:print|print], [perlfunc:sort|sort], or [perlfunc:chmod|chmod] is either very high or very low depending on whether you are looking at the left side or the right side of the operator. For example, in <P> <PRE> @ary = (1, 3, sort 4, 2); print @ary; # prints 1324 </PRE> <P> the commas on the right of the sort are evaluated before the sort, but the commas on the left are evaluated after. In other words, list operators tend to gobble up all the arguments that follow them, and then act like a simple <FONT SIZE=-1>TERM</FONT> with regard to the preceding expression. Note that you have to be careful with parentheses: <P> <PRE> # These evaluate exit before doing the print: print($foo, exit); # Obviously not what you want. print $foo, exit; # Nor is this. </PRE> <P> <PRE> # These do the print before evaluating exit: (print $foo), exit; # This is what you want. print($foo), exit; # Or this. print ($foo), exit; # Or even this. </PRE> <P> Also note that <P> <PRE> print ($foo & 255) + 1, "\n"; </PRE> <P> probably doesn't do what you expect at first glance. See <A HREF="#Named_Unary_Operators">Named Unary Operators</A> for more discussion of this. <P> Also parsed as terms are the [perlfunc:do] and [perlfunc:eval] constructs, as well as subroutine and method calls, and the anonymous constructors <CODE>[]</CODE> and <CODE>{}</CODE>. <P> See also <A HREF="#Quote_and_Quote_like_Operators">Quote and Quote-like Operators</A> toward the end of this section, as well as <A HREF="#I_O_Operators">I/O Operators</A>. <P> <HR> <H2><A NAME="The_Arrow_Operator">The Arrow Operator</A></H2> <P> Just as in <FONT SIZE=-1>C</FONT> and <FONT SIZE=-1>C++,</FONT> ``<CODE>-></CODE>'' is an infix dereference operator. If the right side is either a <CODE>[...]</CODE> or <CODE>{...}</CODE> subscript, then the left side must be either a hard or symbolic reference to an array or hash (or a location capable of holding a hard reference, if it's an lvalue (assignable)). See [perlman:perlref|the perlref manpage]. <P> Otherwise, the right side is a method name or a simple scalar variable containing the method name, and the left side must either be an object (a blessed reference) or a class name (that is, a package name). See [perlman:perlobj|the perlobj manpage]. <P> <HR> <H2><A NAME="Auto_increment_and_Auto_decremen">Auto-increment and Auto-decrement</A></H2> <P> ``++'' and ``--'' work as in <FONT SIZE=-1>C.</FONT> That is, if placed before a variable, they increment or decrement the variable before returning the value, and if placed after, increment or decrement the variable after returning the value. <CODE>/^[a-zA-Z]*[0-9]*$/</CODE>, the increment is done as a string, preserving each character within its range, with carry: <P> <PRE> print ++($foo = '99'); # prints '100' print ++($foo = 'a0'); # prints 'a1' print ++($foo = 'Az'); # prints 'Ba' print ++($foo = 'zz'); # prints 'aaa' </PRE> <P> The auto-decrement operator is not magical. <P> <HR> <H2><A NAME="Exponentiation">Exponentiation</A></H2> <P> Binary ``**'' is the exponentiation operator. Note that it binds even more tightly than unary minus, so -2**4 is -(2**4), not (-2)**4. (This is implemented using C's <CODE>pow(3)</CODE> function, which actually works on doubles internally.) <P> <HR> <H2><A NAME="Symbolic_Unary_Operators">Symbolic Unary Operators</A></H2> <P> Unary ``!'' performs logical negation, i.e., ``not''. See also <CODE>not</CODE> for a lower precedence version of this. <CODE>-bareword</CODE> is equivalent to <CODE>"-bareword"</CODE>. <P> Unary ``~'' performs bitwise negation, i.e., 1's complement. For example, <CODE>0666 &~ 027</CODE> is 0640. (See also <A HREF="#Integer_Arithmetic">Integer Arithmetic</A> and <A HREF="#Bitwise_String_Operators">Bitwise String Operators</A>.) <P> Unary ``+'' has no effect whatsoever, even on strings. It is useful syntactically for separating a function name from a parenthesized expression that would otherwise be interpreted as the complete list of function arguments. (See examples above under <A HREF="#Terms_and_List_Operators_Leftwa">Terms and List Operators (Leftward)</A>.) <P> Unary ``\'' creates a reference to whatever follows it. See [perlman:perlref|the perlref manpage]. Do not confuse this behavior with the behavior of backslash within a string, although both forms do convey the notion of protecting the next thing from interpretation. <P> <HR> <H2><A NAME="Binding_Operators">Binding Operators</A></H2> <P> Binary ``=~'' binds a scalar expression to a pattern match. Certain operations search or modify the string <CODE>$_</CODE>. <P> Binary ``!~'' is just like ``=~'' except the return value is negated in the logical sense. <P> <HR> <H2><A NAME="Multiplicative_Operators">Multiplicative Operators</A></H2> <P> Binary ``*'' multiplies two numbers. <P> Binary ``/'' divides two numbers. <P> Binary ``%'' computes the modulus of two numbers. Given integer operands <CODE>$a</CODE> and <CODE>$b</CODE>: If <CODE>$b</CODE> is positive, then <CODE>$a % $b</CODE> is <CODE>$a</CODE> minus the largest multiple of <CODE>$b</CODE> that is not greater than <CODE>$a</CODE>. If <CODE>$b</CODE> is negative, then <CODE>$a % $b</CODE> is <CODE>$a</CODE> minus the smallest multiple of <CODE>$b</CODE> that is not less than <CODE>$a</CODE> (i.e. the result will be less than or equal to zero). Note than when <CODE>use integer</CODE> is in scope, ``%'' give you direct access to the modulus operator as implemented by your <FONT SIZE=-1>C</FONT> compiler. This operator is not as well defined for negative operands, but it will execute faster. <P> Binary ``x'' is the repetition operator. In scalar context, it returns a string consisting of the left operand repeated the number of times specified by the right operand. In list context, if the left operand is a list in parentheses, it repeats the list. <P> <PRE> print '-' x 80; # print row of dashes </PRE> <P> <PRE> print "\t" x ($tab/8), ' ' x ($tab%8); # tab over </PRE> <P> <PRE> @ones = (1) x 80; # a list of 80 1's @ones = (5) x @ones; # set all elements to 5 </PRE> <P> <HR> <H2><A NAME="Additive_Operators">Additive Operators</A></H2> <P> Binary ``+'' returns the sum of two numbers. <P> Binary ``-'' returns the difference of two numbers. <P> Binary ``.'' concatenates two strings. <P> <HR> <H2><A NAME="Shift_Operators">Shift Operators</A></H2> <P> Binary ``<<`` returns the value of its left argument shifted left by the number of bits specified by the right argument. Arguments should be integers. (See also <A HREF="#Integer_Arithmetic">Integer Arithmetic</A>.) <P> Binary ``>>'' returns the value of its left argument shifted right by the number of bits specified by the right argument. Arguments should be integers. (See also <A HREF="#Integer_Arithmetic">Integer Arithmetic</A>.) <P> <HR> <H2><A NAME="Named_Unary_Operators">Named Unary Operators</A></H2> <P> The various named unary operators are treated as functions with one argument, with optional parentheses. These include the filetest operators, like <CODE>-f</CODE>, <CODE>-M</CODE>, etc. See . Examples: <P> <PRE> chdir $foo || die; # (chdir $foo) || die chdir($foo) || die; # (chdir $foo) || die chdir ($foo) || die; # (chdir $foo) || die chdir +($foo) || die; # (chdir $foo) || die </PRE> <P> but, because * is higher precedence than ||: <P> <PRE> chdir $foo * 20; # chdir ($foo * 20) chdir($foo) * 20; # (chdir $foo) * 20 chdir ($foo) * 20; # (chdir $foo) * 20 chdir +($foo) * 20; # chdir ($foo * 20) </PRE> <P> <PRE> rand 10 * 20; # rand (10 * 20) rand(10) * 20; # (rand 10) * 20 rand (10) * 20; # (rand 10) * 20 rand +(10) * 20; # rand (10 * 20) </PRE> <P> See also <A HREF="#Terms_and_List_Operators_Leftwa">Terms and List Operators (Leftward)</A>. <P> <HR> <H2><A NAME="Relational_Operators">Relational Operators</A></H2> <P> Binary ``<'' returns true if the left argument is numerically less than the right argument. <P> Binary ``>'' returns true if the left argument is numerically greater than the right argument. <P> Binary ``<='' returns true if the left argument is numerically less than or equal to the right argument. <P> Binary ``>='' returns true if the left argument is numerically greater than or equal to the right argument. <P> Binary ``lt'' returns true if the left argument is stringwise less than the right argument. <P> Binary ``gt'' returns true if the left argument is stringwise greater than the right argument. <P> Binary ``le'' returns true if the left argument is stringwise less than or equal to the right argument. <P> Binary ``ge'' returns true if the left argument is stringwise greater than or equal to the right argument. <P> <HR> <H2><A NAME="Equality_Operators">Equality Operators</A></H2> <P> Binary ``=='' returns true if the left argument is numerically equal to the right argument. <P> Binary ``!='' returns true if the left argument is numerically not equal to the right argument. <P> Binary ``<=>'' returns -1, 0, or 1 depending on whether the left argument is numerically less than, equal to, or greater than the right argument. <P> Binary ``eq'' returns true if the left argument is stringwise equal to the right argument. <P> Binary ``ne'' returns true if the left argument is stringwise not equal to the right argument. <P> Binary ``cmp'' returns -1, 0, or 1 depending on whether the left argument is stringwise less than, equal to, or greater than the right argument. <P> ``lt'', ``le'', ``ge'', ``gt'' and ``cmp'' use the collation (sort) order specified by the current locale if <CODE>use locale</CODE> is in effect. See [perlman:perllocale|the perllocale manpage]. <P> <HR> <H2><A NAME="Bitwise_And">Bitwise And</A></H2> <P> Binary ``&'' returns its operators ANDed together bit by bit. (See also <A HREF="#Integer_Arithmetic">Integer Arithmetic</A> and <A HREF="#Bitwise_String_Operators">Bitwise String Operators</A>.) <P> <HR> <H2><A NAME="Bitwise_Or_and_Exclusive_Or">Bitwise Or and Exclusive Or</A></H2> <P> Binary ``|'' returns its operators ORed together bit by bit. (See also <A HREF="#Integer_Arithmetic">Integer Arithmetic</A> and <A HREF="#Bitwise_String_Operators">Bitwise String Operators</A>.) <P> Binary ``^'' returns its operators XORed together bit by bit. (See also <A HREF="#Integer_Arithmetic">Integer Arithmetic</A> and <A HREF="#Bitwise_String_Operators">Bitwise String Operators</A>.) <P> <HR> <H2><A NAME="C_style_Logical_And">C-style Logical And</A></H2> <P> Binary ``&&'' performs a short-circuit logical <FONT SIZE=-1>AND</FONT> operation. That is, if the left operand is false, the right operand is not even evaluated. Scalar or list context propagates down to the right operand if it is evaluated. <P> <HR> <H2><A NAME="C_style_Logical_Or">C-style Logical Or</A></H2> <P> Binary ``||'' performs a short-circuit logical <FONT SIZE=-1>OR</FONT> operation. That is, if the left operand is true, the right operand is not even evaluated. Scalar or list context propagates down to the right operand if it is evaluated. <P> The <CODE>||</CODE> and <CODE>&&</CODE> operators differ from C's in that, rather than returning 0 or 1, they return the last value evaluated. Thus, a reasonably portable way to find out the home directory (assuming it's not ``0'') might be: <P> <PRE> $home = $ENV{'HOME'} || $ENV{'LOGDIR'} || (getpwuid($<))[7] || die "You're homeless!\n"; </PRE> <P> In particular, this means that you shouldn't use this for selecting between two aggregates for assignment: <P> <PRE> @a = @b || @c; # this is wrong @a = scalar(@b) || @c; # really meant this @a = @b ? @b : @c; # this works fine, though </PRE> <P> As more readable alternatives to <CODE>&&</CODE> and <CODE>||</CODE> when used for control flow, Perl provides <CODE>and</CODE> and <CODE>or</CODE> operators (see below). The short-circuit behavior is identical. The precedence of ``and'' and ``or'' is much lower, however, so that you can safely use them after a list operator without the need for parentheses: <P> <PRE> unlink "alpha", "beta", "gamma" or gripe(), next LINE; </PRE> <P> With the C-style operators that would have been written like this: <P> <PRE> unlink("alpha", "beta", "gamma") || (gripe(), next LINE); </PRE> <P> Use ``or'' for assignment is unlikely to do what you want; see below. <P> <HR> <H2><A NAME="Range_Operators">Range Operators</A></H2> <P> Binary ``..'' is the range operator, which is really two different operators depending on the context. In list context, it returns an array of values counting (by ones) from the left value to the right value. This is useful for writing <CODE>foreach (1..10)</CODE> loops and for doing slice operations on arrays. In the current implementation, no temporary array is created when the range operator is used as the expression in <CODE>foreach</CODE> loops, but older versions of Perl might burn a lot of memory when you write something like this: <P> <PRE> for (1 .. 1_000_000) { # code } </PRE> <P> In scalar context, ``..'' returns a boolean value. The operator is bistable, like a flip-flop, and emulates the line-range (comma) operator of <STRONG>sed</STRONG>, <STRONG>awk</STRONG>, and various editors. Each ``..'' operator maintains its own boolean state. It is false as long as its left operand is false. Once the left operand is true, the range operator stays true until the right operand is true, <EM>AFTER</EM> which the range operator becomes false again. (It doesn't become false till the next time the range operator is evaluated. It can test the right operand and become false on the same evaluation it became true (as in <STRONG>awk</STRONG>), but it still returns true once. If you don't want it to test the right operand till the next evaluation (as in <STRONG>sed</STRONG>), <FONT SIZE=-1>``E0''</FONT> <CODE>$.</CODE> variable, the current line number. Examples: <P> As a scalar operator: <P> <PRE> if (101 .. 200) { print; } # print 2nd hundred lines next line if (1 .. /^$/); # skip header lines s/^/> / if (/^$/ .. eof()); # quote body </PRE> <P> <PRE> # parse mail messages while (<>) { $in_header = 1 .. /^$/; $in_body = /^$/ .. eof(); # do something based on those } continue { close ARGV if eof; # reset $. each file } </PRE> <P> As a list operator: <P> <PRE> for (101 .. 200) { print; } # print $_ 100 times @foo = @foo[0 .. $#foo]; # an expensive no-op @foo = @foo[$#foo-4 .. $#foo]; # slice last 5 items </PRE> <P> The range operator (in list context) makes use of the magical auto-increment algorithm if the operands are strings. You can say <P> <PRE> @alphabet = ('A' .. 'Z'); </PRE> <P> to get all the letters of the alphabet, or <P> <PRE> $hexdigit = (0 .. 9, 'a' .. 'f')[$num & 15]; </PRE> <P> to get a hexadecimal digit, or <P> <PRE> @z2 = ('01' .. '31'); print $z2[$mday]; </PRE> <P> to get dates with leading zeros. If the final value specified is not in the sequence that the magical increment would produce, the sequence goes until the next value would be longer than the final value specified. <P> <HR> <H2><A NAME="Conditional_Operator">Conditional Operator</A></H2> <P> Ternary ``?:'' is the conditional operator, just as in <FONT SIZE=-1>C.</FONT> It works much like an if-then-else. If the argument before the ? is true, the argument before the : is returned, otherwise the argument after the : is returned. For example: <P> <PRE> printf "I have %d dog%s.\n", $n, ($n == 1) ? '' : "s"; </PRE> <P> Scalar or list context propagates downward into the 2nd or 3rd argument, whichever is selected. <P> <PRE> $a = $ok ? $b : $c; # get a scalar @a = $ok ? @b : @c; # get an array $a = $ok ? @b : @c; # oops, that's just a count! </PRE> <P> The operator may be assigned to if both the 2nd and 3rd arguments are legal lvalues (meaning that you can assign to them): <P> <PRE> ($a_or_b ? $a : $b) = $c; </PRE> <P> This is not necessarily guaranteed to contribute to the readability of your program. <P> Because this operator produces an assignable result, using assignments without parentheses will get you in trouble. For example, this: <P> <PRE> $a % 2 ? $a += 10 : $a += 2 </PRE> <P> Really means this: <P> <PRE> (($a % 2) ? ($a += 10) : $a) += 2 </PRE> <P> Rather than this: <P> <PRE> ($a % 2) ? ($a += 10) : ($a += 2) </PRE> <P> <HR> <H2><A NAME="Assignment_Operators">Assignment Operators</A></H2> <P> ``='' is the ordinary assignment operator. <P> Assignment operators work as in <FONT SIZE=-1>C.</FONT> That is, <P> <PRE> $a += 2; </PRE> <P> is equivalent to <P> <PRE> $a = $a + 2; </PRE> <P> although without duplicating any side effects that dereferencing the lvalue might trigger, such as from <CODE>tie().</CODE> Other assignment operators work similarly. The following are recognized: <P> <PRE> **= += *= &= <<= &&= -= /= |= >>= ||= .= %= ^= x= </PRE> <P> Note that while these are grouped by family, they all have the precedence of assignment. <P> Unlike in <FONT SIZE=-1>C,</FONT> the assignment operator produces a valid lvalue. Modifying an assignment is equivalent to doing the assignment and then modifying the variable that was assigned to. This is useful for modifying a copy of something, like this: <P> <PRE> ($tmp = $global) =~ tr [A-Z] [a-z]; </PRE> <P> Likewise, <P> <PRE> ($a += 2) *= 3; </PRE> <P> is equivalent to <P> <PRE> $a += 2; $a *= 3; </PRE> <P> <HR> <H2><A NAME="Comma_Operator">Comma Operator</A></H2> <P> Binary ``,'' is the comma operator. In scalar context it evaluates its left argument, throws that value away, then evaluates its right argument and returns that value. This is just like C's comma operator. <P> In list context, it's just the list argument separator, and inserts both its arguments into the list. <P> The => digraph is mostly just a synonym for the comma operator. It's useful for documenting arguments that come in pairs. As of release 5.001, it also forces any word to the left of it to be interpreted as a string. <P> <HR> <H2><A NAME="List_Operators_Rightward_">List Operators (Rightward)</A></H2> <P>: <P> <PRE> open HANDLE, "filename" or die "Can't open: $!\n"; </PRE> <P> See also discussion of list operators in <A HREF="#Terms_and_List_Operators_Leftwa">Terms and List Operators (Leftward)</A>. <P> <HR> <H2><A NAME="Logical_Not">Logical Not</A></H2> <P> Unary ``not'' returns the logical negation of the expression to its right. It's the equivalent of ``!'' except for the very low precedence. <P> <HR> <H2><A NAME="Logical_And">Logical And</A></H2> <P> Binary ``and'' returns the logical conjunction of the two surrounding expressions. It's equivalent to && except for the very low precedence. This means that it short-circuits: i.e., the right expression is evaluated only if the left expression is true. <P> <HR> <H2><A NAME="Logical_or_and_Exclusive_Or">Logical or and Exclusive Or</A></H2> <P> Binary ``or'' returns the logical disjunction of the two surrounding expressions. It's equivalent to || except for the very low precedence. This makes it useful for control flow <P> <PRE> print FH $data or die "Can't write to FH: $!"; </PRE> <P> This means that it short-circuits: i.e., the right expression is evaluated only if the left expression is false. Due to its precedence, you should probably avoid using this for assignment, only for control flow. <P> <PRE> $a = $b or $c; # bug: this is wrong ($a = $b) or $c; # really means this $a = $b || $c; # better written this way </PRE> <P> However, when it's a list context assignment and you're trying to use ``||'' for control flow, you probably need ``or'' so that the assignment takes higher precedence. <P> <PRE> @info = stat($file) || die; # oops, scalar sense of stat! @info = stat($file) or die; # better, now @info gets its due </PRE> <P> Then again, you could always use parentheses. <P> Binary ``xor'' returns the exclusive-OR of the two surrounding expressions. It cannot short circuit, of course. <P> <HR> <H2><A NAME="C_Operators_Missing_From_Perl">C Operators Missing From Perl</A></H2> <P> Here is what <FONT SIZE=-1>C</FONT> has that Perl doesn't: <DL> <DT><STRONG><A NAME="item_unary">unary &</A></STRONG><P> <DD> Address-of operator. (But see the ``\'' operator for taking a reference.) <P><DT><STRONG>unary *</STRONG><P> <DD> Dereference-address operator. (Perl's prefix dereferencing operators are typed: $, @, %, and &.) <P><DT><STRONG><A NAME="item__TYPE_">(TYPE)</A></STRONG><P> <DD> Type casting operator. </DL> <P> <HR> <H2><A NAME="Quote_and_Quote_like_Operators">Quote and Quote-like Operators</A></H2> <CODE>{}</CODE> represents any pair of delimiters you choose. Non-bracketing delimiters use the same character fore and aft, but the 4 sorts of brackets (round, angle, square, curly) will all nest. <P> <PRE> Customary Generic Meaning Interpolates '' q{} Literal no "" qq{} Literal yes `` qx{} Command yes (unless '' is delimiter) qw{} Word list no // m{} Pattern match yes qr{} Pattern yes s{}{} Substitution yes tr{}{} Transliteration no (but see below) </PRE> <P> Note that there can be whitespace between the operator and the quoting characters, except when <CODE>#</CODE> is being used as the quoting character. <CODE>q#foo#</CODE> is parsed as being the string <CODE>foo</CODE>, while <CODE>q #foo#</CODE> is the operator [perlman:perlop] followed by a comment. Its argument will be taken from the next line. This allows you to write: <P> <PRE> s {foo} # Replace foo {bar} # with bar. </PRE> <P> For constructs that do interpolation, variables beginning with ``<CODE>$</CODE>'' or ``<CODE>@</CODE>'' are interpolated, as are the following sequences. Within a transliteration, the first ten of these sequences may be used. <P> <PRE> \t tab (HT, TAB) \n newline (NL) \r return (CR) \f form feed (FF) \b backspace (BS) \a alarm (bell) (BEL) \e escape (ESC) \033 octal char \x1b hex char \c[ control char </PRE> <P> <PRE> \l lowercase next char \u uppercase next char \L lowercase till \E \U uppercase till \E \E end case modification \Q quote non-word characters till \E </PRE> <P> If <CODE>use locale</CODE> is in effect, the case map used by <CODE>\l</CODE>, <CODE>\L</CODE>, <CODE>\u</CODE> and <CODE>\U</CODE> is taken from the current locale. See [perlman:perllocale|the perllocale manpage]. <P> All systems use the virtual <CODE>"\n"</CODE> to represent a line terminator, called a ``newline''. There is no such thing as an unvarying, physical newline character. It is an illusion that the operating system, device drivers, <FONT SIZE=-1>C</FONT> libraries, and Perl all conspire to preserve. Not all systems read <CODE>"\r"</CODE> as <FONT SIZE=-1>ASCII</FONT> <FONT SIZE=-1>CR</FONT> and <CODE>"\n"</CODE> as <FONT SIZE=-1>ASCII</FONT> <FONT SIZE=-1>LF.</FONT> For example, on a Mac, these are reversed, and on systems without line terminator, printing <CODE>"\n"</CODE> may emit no actual data. In general, use <CODE>"\n"</CODE> when you mean a ``newline'' for your system, but use the literal <FONT SIZE=-1>ASCII</FONT> when you need an exact character. For example, most networking protocols expect and prefer a <FONT SIZE=-1>CR+LF</FONT> ( <CODE>"\012\015"</CODE> or <CODE>"\cJ\cM"</CODE>) for line terminators, and although they often accept just <CODE>"\012"</CODE>, they seldom tolerate just <CODE>"\015"</CODE>. If you get in the habit of using <CODE>"\n"</CODE> for networking, you may be burned some day. <P> You cannot include a literal <CODE>$</CODE> or <CODE>@</CODE> within a <CODE>\Q</CODE> sequence. An unescaped <CODE>$</CODE> or <CODE>@</CODE> interpolates the corresponding variable, while escaping will cause the literal string <CODE>\$</CODE> to be inserted. You'll need to write something like <CODE>m/\Quser\E\@\Qhost/</CODE>. <P> Patterns are subject to an additional level of interpretation as a regular expression. This is done as a second pass, after variables are interpolated, so that regular expressions may be incorporated into the pattern from the variables. If this is not what you want, use <CODE>\Q</CODE> to interpolate a variable literally. <P> Apart from the above, there are no multiple levels of interpolation. In particular, contrary to the expectations of shell programmers, back-quotes do <EM>NOT</EM> interpolate within double quotes, nor do single quotes impede evaluation of variables when used within double quotes. <P> <HR> <H2><A NAME="Regexp_Quote_Like_Operators">Regexp Quote-Like Operators</A></H2> <P> Here are the quote-like operators that apply to pattern matching and related activities. <P> Most of this section is related to use of regular expressions from Perl. Such a use may be considered from two points of view: Perl handles a a string and a ``pattern'' to <FONT SIZE=-1>RE</FONT> (regular expression) engine to match, <FONT SIZE=-1>RE</FONT> engine finds (or does not find) the match, and Perl uses the findings of <FONT SIZE=-1>RE</FONT> engine for its operation, possibly asking the engine for other matches. <P> <FONT SIZE=-1>RE</FONT> engine has no idea what Perl is going to do with what it finds, similarly, the rest of Perl has no idea what a particular regular expression means to <FONT SIZE=-1>RE</FONT> engine. This creates a clean separation, and in this section we discuss matching from Perl point of view only. The other point of view may be found in [perlman:perlre|the perlre manpage]. <DL> <DT><STRONG><A NAME="item__PATTERN_">?PATTERN?</A></STRONG><P> <DD> This is just like the <CODE>/pattern/</CODE> search, except that it matches only once between calls to the <CODE>reset()</CODE> operator. This is a useful optimization when you want to see only the first occurrence of something in each file of a set of files, for instance. Only <CODE>??</CODE> patterns local to the current package are reset. <P> <PRE> while (<>) { if (?^$?) { # blank line between header and body } } continue { reset if eof; # clear ?? status for next file } </PRE> <P> This usage is vaguely deprecated, and may be removed in some future version of Perl. <P><DT><STRONG><A NAME="item_m">m/PATTERN/cgimosx</A></STRONG><DD> <DT><STRONG><A NAME="item__PATTERN_cgimosx">/PATTERN/cgimosx</A></STRONG><P> <DD> Searches a string for a pattern match, and in scalar context returns true (1) or false (''). If no string is specified via the <CODE>=~</CODE> or <CODE>!~</CODE> operator, the <CODE>$_</CODE> string is searched. (The string specified with <CODE>=~</CODE> need not be an lvalue--it may be the result of an expression evaluation, but remember the <CODE>=~</CODE> binds rather tightly.) See also [perlman:perlre|the perlre manpage]. See [perlman:perllocale|the perllocale manpage] for discussion of additional considerations that apply when <CODE>use locale</CODE> is in effect. <P> Options are: <P> <PRE>. </PRE> <P> <FONT SIZE=-1>LTS</FONT> (leaning toothpick syndrome). If ``?'' is the delimiter, then the match-only-once rule of <CODE>?PATTERN?</CODE> applies. <P> <FONT SIZE=-1>PATTERN</FONT> may contain variables, which will be interpolated (and the pattern recompiled) every time the pattern search is evaluated. (Note that <CODE>$)</CODE> and <CODE>$|</CODE> might not be interpolated because they look like end-of-string tests.) If you want such a pattern to be compiled only once, add a <CODE>/o</CODE> after the trailing delimiter. This avoids expensive run-time recompilations, and is useful when the value you are interpolating won't change over the life of the script. However, mentioning <CODE>/o</CODE> constitutes a promise that you won't change the variables in the pattern. If you change them, Perl won't even notice. <P> If the <FONT SIZE=-1>PATTERN</FONT> evaluates to the empty string, the last <EM>successfully</EM> matched regular expression is used instead. <P> If the <CODE>/g</CODE> option is not used, [perlman:perlop] in a list context returns a list consisting of the subexpressions matched by the parentheses in the pattern, i.e., (<CODE>$1</CODE>, <CODE>$2</CODE>, <CODE>$3</CODE>...). (Note that here <CODE>$1</CODE> etc. are also set, and that this differs from Perl 4's behavior.) When there are no parentheses in the pattern, the return value is the list <CODE>(1)</CODE> for success. With or without parentheses, an empty list is returned upon failure. <P> Examples: <P> <PRE> open(TTY, '/dev/tty'); <TTY> =~ /^y/i && foo(); # do foo if desired </PRE> <P> <PRE> if (/Version: *([0-9.]*)/) { $version = $1; } </PRE> <P> <PRE> next if m#^/usr/spool/uucp#; </PRE> <P> <PRE> # poor man's grep $arg = shift; while (<>) { print if /$arg/o; # compile only once } </PRE> <P> <PRE> if (($F1, $F2, $Etc) = ($foo =~ /^(\S+)\s+(\S+)\s*(.*)/)) </PRE> <P> This last example splits <CODE>$foo</CODE> into the first two words and the remainder of the line, and assigns those three fields to <FONT SIZE=-1>$F1,</FONT> <FONT SIZE=-1>$F2,</FONT> and $Etc. The conditional is true if any variables were assigned, i.e., if the pattern matched. <P> The <CODE>/g</CODE>. <P> In scalar context, each execution of <CODE>m//g</CODE> finds the next match, returning <FONT SIZE=-1>TRUE</FONT> if it matches, and <FONT SIZE=-1>FALSE</FONT> if there is no further match. The position after the last match can be read or set using the <CODE>pos()</CODE> function; see [perlfunc:pos|pos]. <FONT SIZE=-1>A</FONT> failed match normally resets the search position to the beginning of the string, but you can avoid that by adding the <CODE>/c</CODE> modifier (e.g. <CODE>m//gc</CODE>). Modifying the target string also resets the search position. <P> You can intermix <CODE>m//g</CODE> matches with <CODE>m/\G.../g</CODE>, where <CODE>\G</CODE> is a zero-width assertion that matches the exact position where the previous <CODE>m//g</CODE>, if any, left off. The <CODE>\G</CODE> assertion is not supported without the <CODE>/g</CODE> modifier; currently, without <CODE>/g</CODE>, <CODE>\G</CODE> behaves just like <CODE>\A</CODE>, but that's accidental and may change in the future. <P> Examples: <P> <PRE> # list context ($one,$five,$fifteen) = (`uptime` =~ /(\d+\.\d+)/g); </PRE> <P> <PRE> # scalar context $/ = ""; $* = 1; # $* deprecated in modern perls while (defined($paragraph = <>)) { while ($paragraph =~ /[a-z]['")]*[.!?]+['")]*\s/g) { $sentences++; } } print "$sentences\n"; </PRE> <P> <PRE> #"; } </PRE> <P> The last example should print: <P> <PRE> 1: 'oo', pos=4 2: 'q', pos=5 3: 'pp', pos=7 1: '', pos=7 2: 'q', pos=8 3: '', pos=8 </PRE> <P> <FONT SIZE=-1>A</FONT> useful idiom for <CODE>lex</CODE>-like scanners is <CODE>/\G.../gc</CODE>. You can combine several regexps like this to process a string part-by-part, doing different actions depending on which regexp matched. Each regexp tries to match where the previous one leaves off. <P> <PRE> $_ = <<'EOL'; $url = new URI::URL "<A HREF=""">"</A>;;"; } </PRE> <P> Here is the output (split into several lines): <P> <PRE> line-noise lowercase line-noise lowercase UPPERCASE line-noise UPPERCASE line-noise lowercase line-noise lowercase line-noise lowercase lowercase line-noise lowercase lowercase line-noise MiXeD line-noise. That's all! </PRE> <DT><STRONG><A NAME="item_q">q/STRING/</A></STRONG><DD> <DT><STRONG><A NAME="item__STRING_">'STRING'</A></STRONG><P> <DD> <FONT SIZE=-1>A</FONT> single-quoted, literal string. <FONT SIZE=-1>A</FONT> backslash represents a backslash unless followed by the delimiter or another backslash, in which case the delimiter or backslash is interpolated. <P> <PRE> $foo = q!I said, "You said, 'She said it.'"!; $bar = q('This is it.'); $baz = '\n'; # a two-character string </PRE> <DT><STRONG><A NAME="item_qq">qq/STRING/</A></STRONG><DD> <DT><STRONG><A NAME="item__STRING_">"STRING"</A></STRONG><P> <DD> <FONT SIZE=-1>A</FONT> double-quoted, interpolated string. <P> <PRE> $_ .= qq (*** The previous line contains the naughty word "$1".\n) if /(tcl|rexx|python)/; # :-) $qr/STRING/imosx</A></STRONG><P> <DD> <FONT SIZE=-1>A</FONT> string which is (possibly) interpolated and then compiled as a regular expression. The result may be used as a pattern in a match <P> <PRE> $re = qr/$pattern/; $string =~ /foo${re}bar/; # can be interpolated in other patterns $string =~ $re; # or used standalone </PRE> <P> Options are: <P> <PRE> i Do case-insensitive pattern matching. m Treat string as multiple lines. o Compile pattern only once. s Treat string as single line. x Use extended regular expressions. </PRE> <P> The benefit from this is that the pattern is precompiled into an internal representation, and does not need to be recompiled every time a match is attempted. This makes it very efficient to do something like: <P> <PRE> foreach $pattern (@pattern_list) { my $re = qr/$pattern/; foreach $line (@lines) { if($line =~ /$re/) { do_something($line); } } } </PRE> <P> See [perlman:perlre|the perlre manpage] for additional information on valid syntax for <FONT SIZE=-1>STRING,</FONT> and for a detailed look at the semantics of regular expressions. <P><DT><STRONG><A NAME="item_qx">qx/STRING/</A></STRONG><DD> <DT><STRONG><A NAME="item__STRING_">`STRING`</A></STRONG><P> <DD> <FONT SIZE=-1>A</FONT> string which is (possibly) interpolated and then executed as a system command with <CODE>/bin/sh</CODE> <FONT SIZE=-1>$INPUT_RECORD_SEPARATOR).</FONT> <P> Because backticks do not affect standard error, use shell file descriptor syntax (assuming the shell supports this) if you care to address this. To capture a command's <FONT SIZE=-1>STDERR</FONT> and <FONT SIZE=-1>STDOUT</FONT> together: <P> <PRE> $output = `cmd 2>&1`; </PRE> <P> To capture a command's <FONT SIZE=-1>STDOUT</FONT> but discard its <FONT SIZE=-1>STDERR:</FONT> <P> <PRE> $output = `cmd 2>/dev/null`; </PRE> <P> To capture a command's <FONT SIZE=-1>STDERR</FONT> but discard its <FONT SIZE=-1>STDOUT</FONT> (ordering is important here): <P> <PRE> $output = `cmd 2>&1 1>/dev/null`; </PRE> <P> To exchange a command's <FONT SIZE=-1>STDOUT</FONT> and <FONT SIZE=-1>STDERR</FONT> in order to capture the <FONT SIZE=-1>STDERR</FONT> but leave its <FONT SIZE=-1>STDOUT</FONT> to come out the old <FONT SIZE=-1>STDERR:</FONT> <P> <PRE> $output = `cmd 3>&1 1>&2 2>&3 3>&-`; </PRE> <P> To read both a command's <FONT SIZE=-1>STDOUT</FONT> and its <FONT SIZE=-1>STDERR</FONT> separately, it's easiest and safest to redirect them separately to files, and then read from those files when the program is done: <P> <PRE> system("program args 1>/tmp/program.stdout 2>/tmp/program.stderr"); </PRE> <P> Using single-quote as a delimiter protects the command from Perl's double-quote interpolation, passing it on to the shell instead: <P> <PRE> $perl_info = qx(ps $$); # that's Perl's $$ $shell_info = qx'ps $$'; # that's the new shell's $$ </PRE> man:perlsec|the perlsec manpage] for a clean and safe example of a manual <CODE>fork()</CODE> and <CODE>exec()</CODE> to emulate backticks safely. <P>. <CODE>;</CODE> on many Unix shells; <CODE>&</CODE> on the Windows <FONT SIZE=-1>NT</FONT> <CODE>cmd</CODE> shell). <P> Beware that some command shells may place restrictions on the length of the command line. You must ensure your strings don't exceed this limit after any necessary interpolations. See the platform-specific release notes for more details about your particular environment. <P> Using this operator can lead to programs that are difficult to port, because the shell commands called vary between systems, and may in fact not be present at all. As one example, the <CODE>type</CODE> command under the <FONT SIZE=-1>POSIX</FONT> shell is very different from the <CODE>type</CODE> command under <FONT SIZE=-1>DOS.</FONT>. <P> See <A HREF="#I_O_Operators">I/O Operators</A> for more discussion. <P><DT><STRONG><A NAME="item_qw">qw/STRING/</A></STRONG><P> <DD> Returns a list of the words extracted out of <FONT SIZE=-1>STRING,</FONT> using embedded whitespace as the word delimiters. It is exactly equivalent to <P> <PRE> split(' ', q/STRING/); </PRE> <P> This equivalency means that if used in scalar context, you'll get split's (unfortunate) scalar context behavior, complete with mysterious warnings. <P> Some frequently seen examples: <P> <PRE> use POSIX qw( setlocale localeconv ) @EXPORT = qw( foo bar baz ); </PRE> <P> <FONT SIZE=-1>A</FONT> common mistake is to try to separate the words with comma or to put comments into a multi-line [perlman:perlop]-string. For this reason the <CODE>-w</CODE> switch produce warnings if the <FONT SIZE=-1>STRING</FONT> contains the ``,'' or the ``#'' character. <P><DT><STRONG><A NAME="item_s">s/PATTERN/REPLACEMENT/egimosx</A></STRONG><P> <DD> Searches a string for a pattern, and if found, replaces that pattern with the replacement text and returns the number of substitutions made. Otherwise it returns false (specifically, the empty string). <P> If no string is specified via the <CODE>=~</CODE> or <CODE>!~</CODE> operator, the <CODE>$_</CODE> variable is searched and modified. (The string specified with <CODE>=~</CODE> must be scalar variable, an array element, a hash element, or an assignment to one of those, i.e., an lvalue.) <P> If the delimiter chosen is single quote, no variable interpolation is done on either the <FONT SIZE=-1>PATTERN</FONT> or the <FONT SIZE=-1>REPLACEMENT.</FONT> Otherwise, if the <FONT SIZE=-1>PATTERN</FONT> contains a $ that looks like a variable rather than an end-of-string test, the variable will be interpolated into the pattern at run-time. If you want the pattern compiled only once the first time the variable is interpolated, use the <CODE>/o</CODE> option. If the pattern evaluates to the empty string, the last successfully executed regular expression is used instead. See [perlman:perlre|the perlre manpage] for further explanation on these. See [perlman:perllocale|the perllocale manpage] for discussion of additional considerations that apply when <CODE>use locale</CODE> is in effect. <P> Options are: <P> <PRE> e Evaluate the right side as an expression. g Replace globally, i.e., all occurrences. i Do case-insensitive pattern matching. m Treat string as multiple lines. o Compile pattern only once. s Treat string as single line. x Use extended regular expressions. </PRE> <P> Any non-alphanumeric, non-whitespace delimiter may replace the slashes. If single quotes are used, no interpretation is done on the replacement string (the <CODE>/e</CODE> modifier overrides this, however). Unlike Perl 4, Perl 5 treats backticks as normal delimiters; the replacement text is not evaluated as a command. If the <FONT SIZE=-1>PATTERN</FONT> is delimited by bracketing quotes, the <FONT SIZE=-1>REPLACEMENT</FONT> has its own pair of quotes, which may or may not be bracketing quotes, e.g., [perlman:perlop] or <CODE>s<foo>/bar/</CODE>. <FONT SIZE=-1>A</FONT> <CODE>/e</CODE> will cause the replacement portion to be interpreted as a full-fledged Perl expression and <CODE>eval()ed</CODE> right then and there. It is, however, syntax checked at compile-time. <P> Examples: <P> <PRE> s/\bgreen\b/mauve/g; # don't change wintergreen </PRE> <P> <PRE> $path =~ s|/usr/bin|/usr/local/bin|; </PRE> <P> <PRE> s/Login: $foo/Login: $bar/; # run-time pattern </PRE> <P> <PRE> ($foo = $bar) =~ s/this/that/; # copy first, then change </PRE> <P> <PRE> $count = ($paragraph =~ s/Mister\b/Mr./g); # get change-count </PRE> <P> <PRE> $_ = 'abc123xyz'; s/\d+/$&*2/e; # yields 'abc246xyz' s/\d+/sprintf("%5d",$&)/e; # yields 'abc 246xyz' s/\w/$& x 2/eg; # yields 'aabbcc 224466xxyyzz' </PRE> <P> <PRE> s/%(.)/$percent{$1}/g; # change percent escapes; no /e s/%(.)/$percent{$1} || $&/ge; # expr now, so /e s/^=(\w+)/&pod($1)/ge; # use function call </PRE> <P> <PRE> # expand variables in $_, but dynamics only, using # symbolic dereferencing s/\$(\w+)/${$1}/g; </PRE> <P> <PRE> # /e's can even nest; this will expand # any embedded scalar variable (including lexicals) in $_ s/(\$\w+)/$1/eeg; </PRE> <P> <PRE> # Delete (most) C comments. $program =~ s { /\* # Match the opening delimiter. .*? # Match a minimal number of characters. \*/ # Match the closing delimiter. } []gsx; </PRE> <P> <PRE> s/^\s*(.*?)\s*$/$1/; # trim white space in $_, expensively </PRE> <P> <PRE> for ($variable) { # trim white space in $variable, cheap s/^\s+//; s/\s+$//; } </PRE> <P> <PRE> s/([^ ]*) *([^ ]*)/$2 $1/; # reverse 1st two fields </PRE> <P> Note the use of $ instead of \ in the last example. Unlike <STRONG>sed</STRONG>, we use the \<<EM>digit</EM>> form in only the left hand side. Anywhere else it's $<<EM>digit</EM>>. <P> Occasionally, you can't use just a <CODE>/g</CODE> to get all the changes to occur. Here are two common cases: <P> <PRE> # put commas in the right places in an integer 1 while s/(.*\d)(\d\d\d)/$1,$2/g; # perl4 1 while s/(\d)(\d\d\d)(?!\d)/$1,$2/g; # perl5 </PRE> <P> <PRE> # expand tabs to 8-column spacing 1 while s/\t+/' ' x (length($&)*8 - length($`)%8)/e; </PRE> <DT><STRONG><A NAME="item_tr">tr/SEARCHLIST/REPLACEMENTLIST/cds</A></STRONG><DD> <DT><STRONG><A NAME="item_y">y/SEARCHLIST/REPLACEMENTLIST/cds</A></STRONG><P> <DD> Transliterates all occurrences of the characters found in the search list with the corresponding character in the replacement list. It returns the number of characters replaced or deleted. If no string is specified via the =~ or !~ operator, the <CODE>$_</CODE> string is transliterated. (The string specified with =~ must be a scalar variable, an array element, a hash element, or an assignment to one of those, i.e., an lvalue.) <FONT SIZE=-1>A</FONT> character range may be specified with a hyphen, so <CODE>tr/A-J/0-9/</CODE> does the same replacement as <CODE>tr/ACEGIBDFHJ/0246813579/</CODE>. For <STRONG>sed</STRONG> devotees, [perlman:perlop] is provided as a synonym for [perlman:perlop]. If the <FONT SIZE=-1>SEARCHLIST</FONT> is delimited by bracketing quotes, the <FONT SIZE=-1>REPLACEMENTLIST</FONT> has its own pair of quotes, which may or may not be bracketing quotes, e.g., <CODE>tr[A-Z][a-z]</CODE> or <CODE>tr(+\-*/)/ABCD/</CODE>. <P> Options: <P> <PRE> c Complement the SEARCHLIST. d Delete found but unreplaced characters. s Squash duplicate replaced characters. </PRE> <P> If the <CODE>/c</CODE> modifier is specified, the <FONT SIZE=-1>SEARCHLIST</FONT> character set is complemented. If the <CODE>/d</CODE> modifier is specified, any characters specified by <FONT SIZE=-1>SEARCHLIST</FONT> not found in <FONT SIZE=-1>REPLACEMENTLIST</FONT> are deleted. (Note that this is slightly more flexible than the behavior of some <STRONG>tr</STRONG> programs, which delete anything they find in the <FONT SIZE=-1>SEARCHLIST,</FONT> period.) If the [perlman:perlop] modifier is specified, sequences of characters that were transliterated to the same character are squashed down to a single instance of the character. <P> If the <CODE>/d</CODE> modifier is used, the <FONT SIZE=-1>REPLACEMENTLIST</FONT> is always interpreted exactly as specified. Otherwise, if the <FONT SIZE=-1>REPLACEMENTLIST</FONT> is shorter than the <FONT SIZE=-1>SEARCHLIST,</FONT> the final character is replicated till it is long enough. If the <FONT SIZE=-1>REPLACEMENTLIST</FONT> is empty, the <FONT SIZE=-1>SEARCHLIST</FONT> is replicated. This latter is useful for counting characters in a class or for squashing character sequences in a class. <P> Examples: <P> <PRE> $ARGV[1] =~ tr/A-Z/a-z/; # canonicalize to lower case </PRE> <P> <PRE> $cnt = tr/*/*/; # count the stars in $_ </PRE> <P> <PRE> $cnt = $sky =~ tr/*/*/; # count the stars in $sky </PRE> <P> <PRE> $cnt = tr/0-9//; # count the digits in $_ </PRE> <P> <PRE> tr/a-zA-Z//s; # bookkeeper -> bokeper </PRE> <P> <PRE> ($HOST = $host) =~ tr/a-z/A-Z/; </PRE> <P> <PRE> tr/a-zA-Z/ /cs; # change non-alphas to single space </PRE> <P> <PRE> tr [\200-\377] [\000-\177]; # delete 8th bit </PRE> <P> If multiple transliterations are given for a character, only the first one is used: <P> <PRE> tr/AAA/XYZ/ </PRE> <P> will transliterate any <FONT SIZE=-1>A</FONT> to <FONT SIZE=-1>X.</FONT> <P> Note that because the transliteration table is built at compile time, neither the <FONT SIZE=-1>SEARCHLIST</FONT> nor the <FONT SIZE=-1>REPLACEMENTLIST</FONT> are subjected to double quote interpolation. That means that if you want to use variables, you must use an <CODE>eval():</CODE> <P> <PRE> eval "tr/$oldlist/$newlist/"; die $@ if $@; </PRE> <P> <PRE> eval "tr/$oldlist/$newlist/, 1" or die $@; </PRE> </DL> <P> [perlman:perlop2|More...]<BR> Return to the [Library]
http://www.perlmonks.org/?displaytype=xml;node_id=377
CC-MAIN-2015-27
refinedweb
7,505
54.22
See also: IRC log <glazou_pain> dsinger: you have to use /invite, that's what I did <glazou> so we have regrets from szilles, anne, molly, dbaron and probably plinss too <glazou> hi ChrisL hi daniel <dsinger> Which module? <scribe> scribenick: chrisl <fantasai> am: showed three combinations in an email ... page-break-avoid and column-break-avoid, all combinations make sense ... supposr 2 col layout, something is a column and a half wide ... if we had two separate properties, column-break-avoid would make it start in the second column ... page-break-avoid would move it to the next page ... if they were totallyy separate ... however, a single break property would move things to the next column but not necessarily the next page dg: so you would get a blank page am: or a blank column before the next page break <fantasai> break-inside: avoid | avoid-column | avoid-page <fantasai> would give you all combinations am: if we think page break is always a column break, then its hard to say thata page break is avoided but ok to start im mid column ... close to the opinion that its okay to have separate column and page properties el: one property (as above) would do it as well as long as all combinations are listed <dsinger> Can we lay out all cases? Near end of girst col, near end of second <dsinger> Pb avoid, cb avoid <dsinger> Pb+cb avoid? am: advantage of separate properties is that you avoid first column breaks then page breaks dg: also an issue of readability el: can be readable with one property, with good choice of values. encourages people to think about pages when designing columns <dsinger> A break over page would violate cb avoid? dg: these are being confused bb: more interesting question, they are semi independent so all combinations need to be considered either way dg: some comninations will be unused bb: is there a list of all the combinations? am: email did not listall of them <glazou> dg: avoid means 'try to avoid' am: most common pattern is to avoid all breaks. dg: column should take precedence over pages am: some people think there are only two combinations, but differ on which two those are el: happy to define all three am: avoid colum, page, both makes sense to me bb: agree with elika, define all three even though one is not useful ... avoid-both is ok, if you avoid a column break also avoids a column break el: no, avoiding both means you prioritise avoiding page breaks over column breaks <glazou> ok <dsinger> right, col1 of page 2 is not col2 of page 1 cl: a page break always produces a new colum break bb: if its too long then there is no need to push it anywhere am: avoid is not forbid. its 'attempt to not break" cl: no way to say 'minimise the total number of breaks' am: good point, can be complex to optimise for that though sg: see example with avoid-column <fantasai> am: I would prefer to specify that you try to lay out, and if it doesn't fit, you push to the top of the next column am: choice of keeping "most" of the article together ... prefer a break art the end rather than a break near the start ... page break is always a column break as well. that has to be made clear el: i agree with alex. want avoid to mean 'try layout then push over a break'. more complex stuff needs different keeywords. avoid behaviour is simple and useful so is what we should do now bb: seems fine dg: seem close to consensus el: page break inside option does not work, ... introducing a shorthand that combines both column and page is the best option am: cleaner solution to forget the old property el: have to support the old property am: yes but avoid in new documents <fantasai> (consensus seems to be reached) <fantasai> el: We're down to either alias or shorthand el so we eliminate the first of melindas options but have to choose between 2 and 3 bb: shorthand seems like overkill am: fine with either dg: would like to see a summing up and final proposal el: can work with hakon to propose something that covers all three combinations, need to pick 2 or 3. 3. Define 'break-before', 'break-after', and 'break-inside' as aliases to 'page-break-before', 'page-break-after', and 'page-break-inside'. am: does the alias mean all the values apply to the old properties? el: no, one is a superset of the others am: preferable from an implementor standpoint to allow all the properties el: 'always' value would be a problem am: ok so i prefer a new set of properties el: so do I cl: so everyone seems to like melindas option 2 best resolution: Add three new column-breaking properties per melindas email oprtion 2 <fantasai> /resolution/RESOLVED/ <scribe> ACTION: fantasai work with hakon on spec text to define the column-break properties and interaction with page bbreak properties [recorded in] <trackbot> Created ACTION-141 - Work with hakon on spec text to define the column-break properties and interaction with page bbreak properties [on Elika Etemad - due 2009-05-06]. <glazou> "The naming was briefly discussed in another SVG telcon[1], and the conclusion was that the SVG WG prefers the naming 'content-fit' and 'content-position' because of the reasons already mentioned above. " el: concern is that for css, this only appies to images while the name implies it applies more widely eg to text content ... but cant comu up with a better name dg: don't see a clash with the content property, but could live with it el: wonder if we should ask for better names dg: ack the problem and ask for a better name <scribe> ACTION: daniel respond to agreeing there is a problem but asking for a better name [recorded in] <trackbot> Created ACTION-142 - Respond to agreeing there is a problem but asking for a better name [on Daniel Glazman - due 2009-05-06]. dg: what can we move to PR? ... chris reported progress with implementations agfainst the colour module tests ... we are seen as very slow and need to publish and move forward ... other candidates? el: namespaces? ... a parsing bug and one test is failing dg: all implementations fail? dg; who is doing implementation reports? el: easy to do once implementations pass, test suite is not very long dg: discussed media queries with anne, he thinks some will be untestable as we do not have suitable devices and that testing on desktop is enough ... concerned that we need to test on mono and character-cell devices ... desktop ones seem to be interoperable at this time, but some features do not apply el: do we have any implementations for grid? dg: no el: should make an imp report for dersktop and survey what other devices actually exist. at end of 6 months if there are no implementations of some features or devices we can drop them from the spec bb: not honest to say we pass a test if there are no implementations ... some implementatiosn can emullate and always pass ... currently no features at risk cl: prefer to do an imp report then mark features at risk and republish sz: only need to claim to be a device, not to actually be that device dg: yes but not implementation claims to be a grid for example sz: only issue in testing is if the right selection was made, not whether it then goes on to lay outcorrectly dg: will not agree to implement like that and claim to have a feature that they in fact don't have <Bert> If we test '@media (grid)' and '@media not (grid)' and Opera does the right thing for both, sin't that enough? dg: can test for not-grid el: probably sufficient. ... will have to say its sufficient dg: anne had some tests for desktop only. no tests for other devices. WG should look at tests from Anne and contribute more ... as soon as we have tests, we can move forward cl: where are anne's tests? <Bert> not listed on bb: because not reviewed yet dg: will try to review for next week cl: cdf testsuite has media query tests which could be re-used sz: snapshot? el: depends on 2.1 and selectors sz: snapshot is important as it actually defines the current state dg: don't think the snapshot is very useful <dsinger> well, there will be browsers that are interoperable on defined modules... <dsinger> bye meetiing: CSS WG telcon <scribe> agenda: (member only) <scribe> Agenda: (member only) <scribe> Chair: Daniel Meetiing: CSS WG telcon This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/mind/mid/ Succeeded: s/property/value/ Succeeded: s/dfo/do/ Found ScribeNick: chrisl Inferring Scribes: chrisl Default Present: +1.408.398.aaaa, +1.408.398.aacc, glazou, sylvaing, alexmog, dsinger, Bert, ChrisL, fantasai, SteveZ Present: +1.408.398.aaaa +1.408.398.aacc glazou sylvaing alexmog dsinger Bert ChrisL fantasai SteveZ Regrets: anne molly david steve WARNING: No meeting title found! You should specify the meeting title like this: <dbooth> Meeting: Weekly Baking Club Meeting Got date from IRC log name: 29 Apr 2009 Guessing minutes URL: People with action items: daniel fantasai respond[End of scribe.perl diagnostic output]
http://www.w3.org/2009/04/29-CSS-minutes.html
CC-MAIN-2016-26
refinedweb
1,591
61.9
Beginners Guide To Setup React Project With Parcel Tutorial is the topic. In this example, you will see how we can use Parcel as a module bundler for React.js development. Most of the time, we have used webpack as a module bundler. It is very famous right now. But to configure the webpack is tedious. You need to have some knowledge about webpack plugins and configuration. Parcel solves this problem. It arrives with Zero configuration. That is why today I am showing you how you can use Parcel for your next React.js project. If you are new to a parcel, webpack, react.js, then please check out following tutorials on this blog. - Webpack 3 Tutorial From Scratch - ReactJS Tutorial For Beginners - Beginners Guide To Setup React Environment React 16 – The Complete Guide (incl. React Router 4 & Redux) Content Overview Setup React Project With Parcel First, we will start our project by creating a package.json file. npm init Okay, now, we need to install parceljs globally in our system. npm install -g parcel-bundler Now, parcel needs to take an entry file, which is JavaScript file. In our case it is app.js. So let us create that file in the root. Also, create one HTML file inside root called index.html. Now write the javascript code inside an app.js file. //app.js export const project = (x, y) => { return x + y + 2*x; } alert(project(2,4)); In above example, I have used ES6 arrow functions. Now, include this file in the index.html file and we need to start the server. <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http- <title>Setup React.js with Parcel</title> </head> <body> <h1>Welcome to React.js Environment With Parcel</h1> <div id="app"></div> <script data-</script> </body> </html> Now start the development server with the following command. parcel index.html It will start the server at port: 1234. Now you can see that our application is running on port: 1234. Switch to the browser and hit the following URL: Though we have not use any es6 plugin, it still compiles the ES6 version of JS to ES5, and that is the central power of Parcel that we do not need to configure the ES6 plugin for the project. It by default comes with Parceljs. Parcel and React To work with React.js, we need to install the following dependencies. npm install react react-dom --save As React.js is written in ES6; we will need a way to transpile the code. Parcel does that for you with no need for configs, all you have to do is install its presets. We do need a presets, but we do not need to configure it as we have configured in webpack. Parcel does it for us, and that is the main advantage of it. We just install the dependencies, and that is it. npm i babel-preset-react --save-dev Create a .babelrc file and add the following code. { "presets": ["react"] } Okay, now create a div with the id of an app. Create one folder called src. Move the app.js file inside src folder and also update the path inside an index.html file. <script src="src/app.js" type="text/javascript"></script> If your server gives an error in the console, then please restart the server. Create React Components. Inside the src folder, create the new folder called components. In the components, make a new component called Dashboard.js. // Dashboard.js import React, { Component } from 'react'; export class Dashboard extends Component { render() { return ( <div> Dashboard Component </div> ); } } Include this component inside src >> app.js file. // app.js import React from 'react'; import { render } from 'react-dom'; import { Dashboard } from './components/Dashboard'; render(<Dashboard />, document.getElementById('app')); It will target the dom element of id app. If you got any error, please recheck this code or restart the server. Now you can see that our dashboard component is rendering inside the index.html page. Now, you can build any shape of your application structure as you want. Just keep in mind that it is scalable for a long term. Parcel For Production Use case For production use, we just need to run the following command. parcel build index.html It will generate following files. dist\index.html They are minified and production ready files. Now, add development and production both of the scripts in the package.json file. "scripts": { "dev": "parcel index.html", "prod": "parcel build index.html" }, Now, hit the following command to ready for production build. npm run prod Also, for development mode, hit the following command. npm run dev Finally, our Beginners Guide To Setup React Project With Parcel Tutorial is over. Thanks. Hope you will find useful.
https://appdividend.com/2018/02/27/beginners-guide-setup-react-project-parcel-tutorial/
CC-MAIN-2019-18
refinedweb
805
68.67
Unit? Register or login to poll Results I already use this approach as much as possible, both for the applications I develop and my Open Exchange projects. It makes it easy to trace from test to tested unit, and (in the community package manager world) it avoids collisions between different packages all trying to use the same unit test package. I strongly agree with Evgeny's recommendation. I agree with both Tim and Evgeny. In addition, I will add that we also store our unit tests in a different location, which has a top level of /internal for all of our in-house applications. We store unit tests and test data in /internal so our integration scripts can explicitly ignore changes in those branches when we're porting things to our LIVE branch. This ensures that no testing code or testing data ever make it into production. Thanks, Ben! This is another good topic for discussion on should the test code be included in the production Hi Evgeny, I agree. We use a UnitTest package for everything UnitTesty. We keep all of the UnitTest classes in a UnitTest namespace/database and map them to our live namespaces. Thanks, Steve! I prefer To leave a comment or answer to post please log in Please log in
https://community.intersystems.com/post/unit-testing-naming-convention
CC-MAIN-2020-45
refinedweb
215
68.91